becomes

$$x^{(k+1)} = x^{(k)} - \frac{g^{(k)}}{H^{(k)}} \qquad (5.6)$$

where H(k) is the value of the second derivative at the current point x(k).
For this particular function, we have

$$H(x) = \left(4x^{3} - 6x\right)\exp\!\left(-x^{2}\right)$$
and I can now repeat the argument given above, but this time searching for stationary points
rather than roots of the equation.
Figure 5.8 shows the gradient (the full curve) and the first two iterations of the Newton-Raphson algorithm for the stationary point, starting at x(1) = 0.2. The gradient is zero at infinity and at x = 0.7071.
Figure 5.8 Location of stationary point
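A minimal sketch of the one-dimensional iteration of Equation (5.6) follows. The text does not restate the function here; I assume it is f(x) = x exp(-x^2), since the second derivative of that f matches the H(x) quoted above and its gradient vanishes as x tends to infinity and at x = ±1/√2 ≈ 0.7071, consistent with Figure 5.8.

```python
import math

# Sketch of the Newton-Raphson search for a stationary point, Equation (5.6),
# assuming f(x) = x * exp(-x**2) (an assumption consistent with the H(x) above).

def gradient(x):
    """g(x) = f'(x) = (1 - 2x^2) exp(-x^2)."""
    return (1.0 - 2.0 * x * x) * math.exp(-x * x)

def hessian(x):
    """H(x) = f''(x) = (4x^3 - 6x) exp(-x^2), as quoted in the text."""
    return (4.0 * x ** 3 - 6.0 * x) * math.exp(-x * x)

x = 0.2                                   # starting point x(1), as in Figure 5.8
for k in range(1, 21):
    x_new = x - gradient(x) / hessian(x)  # Equation (5.6)
    print(f"x({k + 1}) = {x_new:.6f}")
    if abs(x_new - x) < 1e-10:
        break
    x = x_new
# The iterates settle at about 0.7071, the stationary point quoted in the text.
```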
In the case of a function of many variables x_1, x_2, ..., x_n, the Newton-Raphson optimization algorithm can be written

$$X^{(k+1)} = X^{(k)} - \left(H^{(k)}\right)^{-1} g^{(k)} \qquad (5.7)$$
where I have collected the x's into a column vector X, with the gradient vector g and the Hessian matrix H defined correspondingly. The formula can easily be derived by truncating the Taylor expansion of the function about X(k) at second order and setting the gradient of the resulting quadratic model to zero.
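Here is a minimal sketch of a single step of Equation (5.7) using NumPy. The quadratic test function f(X) = ½ XᵀAX − bᵀX and the particular A and b are illustrative choices of mine, not taken from the text; they are convenient because the Hessian is constant and the Newton step lands on the minimum exactly, anticipating the one-step property noted below.

```python
import numpy as np

# One Newton-Raphson step, Equation (5.7): X(k+1) = X(k) - H(k)^{-1} g(k).
# Illustrative quadratic f(X) = 0.5 X^T A X - b^T X, so g(X) = A X - b and H(X) = A.

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])           # symmetric positive definite
b = np.array([1.0, -1.0])

def gradient(X):
    return A @ X - b                 # g evaluated at X

def hessian(X):
    return A                         # H evaluated at X (constant for a quadratic)

X = np.array([5.0, -7.0])            # arbitrary starting point
# Solve H * step = g rather than forming the inverse explicitly.
step = np.linalg.solve(hessian(X), gradient(X))
X = X - step                         # Equation (5.7)

print(X)                             # coincides with np.linalg.solve(A, b), the exact minimum
```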
We therefore have to calculate the gradient and the Hessian at each successive point in
the iterations. Normally, the algorithm needs an infinite number of steps, but it will find the
minimum of any quadratic function in just one step, starting at an arbitrary point on the