The minimizer of this function, which we'll denote $\theta^{t+1}$, is given by setting the gradient of Equation (A.22) to zero:

$$\theta^{t+1} = \theta^{t} - \left[ \nabla_{\theta}^{2} F(\theta^{t}) \right]^{-1} \nabla_{\theta} F(\theta^{t}) \qquad \text{(A.25)}$$
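To spell out this step (assuming, as the surrounding discussion indicates, that Equation (A.22) is the second-order Taylor expansion of $F$ about the current estimate $\theta^{t}$), differentiating the quadratic model with respect to $\theta$ and setting the result to zero gives

$$\nabla_{\theta} F(\theta^{t}) + \nabla_{\theta}^{2} F(\theta^{t}) \, (\theta^{t+1} - \theta^{t}) = 0,$$

which rearranges to Equation (A.25).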
When we add a step size parameter to Equation (A.25), and substitute the expansion in Equation (A.23), we obtain:
$$\theta^{t+1} = \theta^{t} + \rho \left[ J(\theta^{t})^{T} J(\theta^{t}) - \sum_{k=1}^{N} \bigl( x_{k} - f(x_{k}; \theta^{t}) \bigr) \, \nabla_{\theta}^{2} f(x_{k}; \theta^{t}) \right]^{-1} J(\theta^{t})^{T} \bigl( x - f(\theta^{t}) \bigr) \qquad \text{(A.26)}$$
Under certain conditions, if $\theta^{t}$ is a good estimate of the minimizer, the parameters $\theta^{t+1}$ produced by Equation (A.26) are an incrementally better estimate of it. The new estimate can then replace the old (i.e., incrementing $t$ by 1) and Equation (A.26) reapplied. The iterations terminate when the term $J(\theta^{t})^{T} (x - f(\theta^{t}))$ in Equation (A.26) becomes vanishingly small (which is the condition in Equation (A.20)). The iteration suggested by Equation (A.26) is called the Newton-Raphson method.
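As a concrete illustration, here is a minimal numerical sketch of the Newton-Raphson iteration of Equation (A.25) on a small curve-fitting problem. The exponential model, the synthetic data, and the finite-difference derivatives are assumptions made for the example (the samples are written as input/target pairs, a slight departure from the text's notation); in practice the analytic expansion of Equation (A.26) would be used instead.

```python
import numpy as np

# Illustrative model (an assumption for this sketch, not from the text):
#   f(x; theta) = theta_0 * exp(-theta_1 * x)
def f(x, theta):
    return theta[0] * np.exp(-theta[1] * x)

def F(theta, x, y):
    """Sum-of-squares cost F(theta) = sum_k (y_k - f(x_k; theta))^2."""
    r = y - f(x, theta)
    return r @ r

def grad_and_hess(theta, x, y, h=1e-4):
    """Central finite-difference gradient and Hessian of F (stand-ins for the
    analytic expansion used in Equation (A.26))."""
    n = len(theta)
    g = np.zeros(n)
    H = np.zeros((n, n))
    for i in range(n):
        ei = np.eye(n)[i] * h
        g[i] = (F(theta + ei, x, y) - F(theta - ei, x, y)) / (2 * h)
        for j in range(n):
            ej = np.eye(n)[j] * h
            H[i, j] = (F(theta + ei + ej, x, y) - F(theta + ei - ej, x, y)
                       - F(theta - ei + ej, x, y) + F(theta - ei - ej, x, y)) / (4 * h * h)
    return g, H

x = np.linspace(0.0, 4.0, 30)
y = 2.0 * np.exp(-0.7 * x)              # synthetic, noise-free data
theta = np.array([1.8, 0.6])            # starting point near the minimizer (Newton-Raphson is only locally convergent)
for t in range(20):
    g, H = grad_and_hess(theta, x, y)
    if np.linalg.norm(g) < 1e-6:        # gradient vanishes: stop iterating
        break
    theta = theta - np.linalg.solve(H, g)   # Equation (A.25) with rho = 1
print(theta)                            # approaches (2.0, 0.7)
```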
The convergence of the algorithm is governed by the choice of the step size parameter $\rho$. One possibility is to choose $\rho = 1$, corresponding to explicitly minimizing the quadratic approximation at each iteration, but in practice we can choose $\rho < 1$ to simply reduce the cost function in the direction of the vector on the right-hand side of Equation (A.26).
A simplification of Equation (A.26) is obtained by dropping the second term in the inverted matrix, producing the update equation
$$\theta^{t+1} = \theta^{t} + \rho \left[ J(\theta^{t})^{T} J(\theta^{t}) \right]^{-1} J(\theta^{t})^{T} \bigl( x - f(\theta^{t}) \bigr) \qquad \text{(A.27)}$$
This simplified iteration is known as Gauss's method or the Gauss-Newton method.
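A minimal sketch of the Gauss-Newton update of Equation (A.27), using the same assumed exponential model as above; here the Jacobian $J$ is taken as the matrix whose $k$-th row holds the partial derivatives of $f(x_k; \theta)$ with respect to the parameters.

```python
import numpy as np

def f(x, theta):
    return theta[0] * np.exp(-theta[1] * x)   # same illustrative model as above

def jacobian(x, theta):
    """J[k, i] = d f(x_k; theta) / d theta_i."""
    e = np.exp(-theta[1] * x)
    return np.stack([e, -theta[0] * x * e], axis=1)

x = np.linspace(0.0, 4.0, 30)
y = 2.0 * np.exp(-0.7 * x)
theta = np.array([1.5, 0.5])
rho = 1.0
for t in range(50):
    r = y - f(x, theta)                        # the residual vector (x - f(theta)) of the text
    J = jacobian(x, theta)
    if np.linalg.norm(J.T @ r) < 1e-10:        # same stopping condition as before
        break
    step = np.linalg.solve(J.T @ J, J.T @ r)   # [J^T J]^(-1) J^T (x - f(theta))
    theta = theta + rho * step                 # Equation (A.27)
print(theta)                                   # converges rapidly to (2.0, 0.7)
```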
A third algorithm is obtained by assuming that the inverted matrix in Equation (A.26) is equal to the identity; that is,
$$\theta^{t+1} = \theta^{t} + \rho \, J(\theta^{t})^{T} \bigl( x - f(\theta^{t}) \bigr) \qquad \text{(A.28)}$$
The iteration suggested by Equation (A.28) is the well-known steepest descent method. The descent direction $J(\theta^{t})^{T} (x - f(\theta^{t}))$ is simply the gradient of the function $F$.
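For contrast, a sketch of the steepest-descent step of Equation (A.28) on the same assumed problem. The backtracking choice of $\rho$ (halving it until the cost decreases) is one simple way to realize the earlier suggestion of picking $\rho < 1$ so that each step reduces the cost; it is an illustrative choice, not the text's prescription.

```python
import numpy as np

def f(x, theta):
    return theta[0] * np.exp(-theta[1] * x)   # same illustrative model as above

def jacobian(x, theta):
    e = np.exp(-theta[1] * x)
    return np.stack([e, -theta[0] * x * e], axis=1)

def cost(theta, x, y):
    r = y - f(x, theta)
    return r @ r

x = np.linspace(0.0, 4.0, 30)
y = 2.0 * np.exp(-0.7 * x)
theta = np.array([1.5, 0.5])
for t in range(500):
    direction = jacobian(x, theta).T @ (y - f(x, theta))   # J(theta)^T (x - f(theta))
    rho = 1.0
    while cost(theta + rho * direction, x, y) > cost(theta, x, y) and rho > 1e-12:
        rho *= 0.5                             # backtrack until the cost decreases
    theta = theta + rho * direction            # Equation (A.28)
print(theta)   # creeps toward (2.0, 0.7), far more slowly than Gauss-Newton
```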
Finally, the Levenberg-Marquardt method is obtained as a cross between the
Gauss-Newton and steepest descent methods. The Levenberg-Marquardt iteration is
based on the recursion:
$$\theta^{t+1} = \theta^{t} + \rho \left[ J(\theta^{t})^{T} J(\theta^{t}) + \lambda^{t} I_{N \times N} \right]^{-1} J(\theta^{t})^{T} \bigl( x - f(\theta^{t}) \bigr) \qquad \text{(A.29)}$$
When $\lambda^{t} = 0$, the Levenberg-Marquardt iteration is the same as a Gauss-Newton iteration. As $\lambda^{t} \to \infty$, the Levenberg-Marquardt iteration tends to a step along the gradient. Careful tuning of the parameter $\lambda^{t}$ as the algorithm progresses generally leads to a quicker convergence rate than either of the two sub-methods. The idea is to encourage the Gauss-Newton method to be globally convergent by ensuring that $\lambda^{t}$
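Finally, a hedged sketch of the Levenberg-Marquardt recursion of Equation (A.29) on the same assumed problem. The $\lambda^{t}$ update rule used here (multiply $\lambda$ by 10 when a step fails to reduce the cost, divide by 10 when it succeeds) is one common heuristic offered only as an illustration, not necessarily the tuning scheme the text has in mind.

```python
import numpy as np

def f(x, theta):
    return theta[0] * np.exp(-theta[1] * x)   # same illustrative model as above

def jacobian(x, theta):
    e = np.exp(-theta[1] * x)
    return np.stack([e, -theta[0] * x * e], axis=1)

def cost(theta, x, y):
    r = y - f(x, theta)
    return r @ r

x = np.linspace(0.0, 4.0, 30)
y = 2.0 * np.exp(-0.7 * x)
theta = np.array([1.0, 1.0])
lam = 1e-2                                     # initial value of lambda^t
for t in range(100):
    r = y - f(x, theta)
    J = jacobian(x, theta)
    if np.linalg.norm(J.T @ r) < 1e-10:
        break
    A = J.T @ J + lam * np.eye(len(theta))     # J^T J + lambda^t I
    step = np.linalg.solve(A, J.T @ r)
    if cost(theta + step, x, y) < cost(theta, x, y):
        theta = theta + step                   # accept the step (Equation (A.29), rho = 1)
        lam = max(lam / 10.0, 1e-12)           # trust the quadratic model more (toward Gauss-Newton)
    else:
        lam *= 10.0                            # fall back toward a small gradient-like step
print(theta)                                   # converges to (2.0, 0.7)
```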