    if numIter == k + p
        omega = 2/(1 + sqrt(1 - (dx/dx1)^(1/p)));
    end
end
error('Too many iterations')
Conjugate Gradient Method
Consider the problem of finding the vector x that minimizes the scalar function

    f(x) = (1/2) x^T A x - b^T x        (2.37)
where the matrix A is symmetric and positive definite. Because f(x) is minimized when its gradient ∇f = Ax - b is zero, we see that minimization is equivalent to solving

    Ax = b        (2.38)
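As a quick check that the gradient of Eq. (2.37) is indeed Ax - b, differentiate term by term; symmetry of A is what collapses the quadratic term:

```latex
\nabla f = \frac{1}{2}\left(A + A^{T}\right)x - b = Ax - b
\qquad \text{(since } A^{T} = A \text{)}
```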
Gradient methods accomplish the minimization by iteration, starting with an initial vector x_0. Each iterative cycle k computes a refined solution

    x_{k+1} = x_k + α_k s_k        (2.39)
The step length α_k is chosen so that x_{k+1} minimizes f(x_{k+1}) in the search direction s_k. That is, x_{k+1} must satisfy Eq. (2.38):

    A(x_k + α_k s_k) = b        (a)
Introducing the residual

    r_k = b - A x_k        (2.40)

Eq. (a) becomes α_k A s_k = r_k. Premultiplying both sides by s_k^T and solving for α_k, we obtain

    α_k = (s_k^T r_k) / (s_k^T A s_k)        (2.41)
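One pass through Eqs. (2.40)-(2.41) can be sketched in a few lines. The code below is in Python rather than the chapter's MATLAB, and the matrix, vectors, and helper names (matvec, dot, A, b) are illustrative choices, not from the text:

```python
# Sketch of one step-length computation, Eq. (2.41), for a small
# symmetric positive-definite system. All names here are illustrative.

def matvec(A, x):
    # A*x for a matrix stored as a list of rows
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]

x = [0.0, 0.0]                                       # initial guess x0
r = [bi - Axi for bi, Axi in zip(b, matvec(A, x))]   # r0 = b - A*x0, Eq. (2.40)
s = r[:]                                             # search along the residual

As = matvec(A, s)
alpha = dot(s, r) / dot(s, As)                       # Eq. (2.41)
x = [xi + alpha * si for xi, si in zip(x, s)]        # Eq. (2.39)
```

Because α_k performs an exact line minimization, the new residual b - A x_{k+1} is orthogonal to the search direction s_k, which is a convenient way to verify the step.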
We are still left with the problem of determining the search direction s_k. Intuition tells us to choose s_k = -∇f = r_k, since this is the direction of the largest negative change in f(x). The resulting procedure is known as the method of steepest descent. It is not a popular algorithm due to its slow convergence. The more efficient conjugate gradient method uses the search direction

    s_{k+1} = r_{k+1} + β_k s_k        (2.42)
The constant β_k is chosen so that the two successive search directions are conjugate (noninterfering) to each other, meaning s_{k+1}^T A s_k = 0. Substituting for s_{k+1} from
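The derivation continues past this excerpt, but substituting Eq. (2.42) into the conjugacy condition s_{k+1}^T A s_k = 0 gives β_k = -(r_{k+1}^T A s_k) / (s_k^T A s_k). A minimal sketch of the full iteration built from Eqs. (2.39)-(2.42), written in Python with illustrative names rather than the chapter's MATLAB:

```python
# Minimal conjugate gradient sketch following Eqs. (2.39)-(2.42).
# The beta formula below is what the conjugacy condition
# s_{k+1}^T A s_k = 0 yields after substituting Eq. (2.42).
# Function and variable names are illustrative, not from the text.

def matvec(A, x):
    # A*x for a matrix stored as a list of rows
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conj_grad(A, b, x, tol=1e-9, max_iter=50):
    r = [bi - Axi for bi, Axi in zip(b, matvec(A, x))]   # Eq. (2.40)
    s = r[:]                                 # first direction: the residual
    for _ in range(max_iter):
        As = matvec(A, s)
        alpha = dot(s, r) / dot(s, As)       # Eq. (2.41)
        x = [xi + alpha * si for xi, si in zip(x, s)]      # Eq. (2.39)
        r = [ri - alpha * Asi for ri, Asi in zip(r, As)]   # r_{k+1} = r_k - alpha*A*s_k
        if dot(r, r) ** 0.5 < tol:
            break
        beta = -dot(r, As) / dot(s, As)      # from s_{k+1}^T A s_k = 0
        s = [ri + beta * si for ri, si in zip(r, s)]       # Eq. (2.42)
    return x

x = conj_grad([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```

For an n-by-n symmetric positive-definite system, exact arithmetic delivers the solution in at most n iterations; in practice the loop is stopped once the residual norm falls below a tolerance, as above.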