Eq. (2.42), we get $(\mathbf{r}_{k+1} + \beta_k \mathbf{s}_k) \cdot \mathbf{A}\mathbf{s}_k = 0$, which yields

$$\beta_k = -\,\frac{\mathbf{r}_{k+1} \cdot \mathbf{A}\mathbf{s}_k}{\mathbf{s}_k \cdot \mathbf{A}\mathbf{s}_k} \qquad (2.43)$$
Here is the outline of the conjugate gradient algorithm:
Choose $\mathbf{x}_0$ (any vector will do, but one close to the solution results in fewer iterations)
$\mathbf{r}_0 \leftarrow \mathbf{b} - \mathbf{A}\mathbf{x}_0$
$\mathbf{s}_0 \leftarrow \mathbf{r}_0$ (lacking a previous search direction, choose the direction of steepest descent)
do with $k = 0, 1, 2, \ldots$
    $\alpha_k \leftarrow \dfrac{\mathbf{s}_k \cdot \mathbf{r}_k}{\mathbf{s}_k \cdot \mathbf{A}\mathbf{s}_k}$
    $\mathbf{x}_{k+1} \leftarrow \mathbf{x}_k + \alpha_k \mathbf{s}_k$
    $\mathbf{r}_{k+1} \leftarrow \mathbf{b} - \mathbf{A}\mathbf{x}_{k+1}$
    if $|\mathbf{r}_{k+1}| \leq \varepsilon$ exit loop (convergence criterion; $\varepsilon$ is the error tolerance)
    $\beta_k \leftarrow -\dfrac{\mathbf{r}_{k+1} \cdot \mathbf{A}\mathbf{s}_k}{\mathbf{s}_k \cdot \mathbf{A}\mathbf{s}_k}$
    $\mathbf{s}_{k+1} \leftarrow \mathbf{r}_{k+1} + \beta_k \mathbf{s}_k$
end do
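In NumPy the outline translates almost line for line into code. Here is a minimal sketch; the test matrix, right-hand side, starting guess, and tolerance are made-up illustrative values, not taken from the text:

import numpy as np

# Made-up symmetric, positive-definite test system Ax = b.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.zeros(3)          # x_0: starting guess
r = b - A @ x            # r_0 = b - A x_0
s = r.copy()             # s_0 = r_0 (steepest descent direction)
eps = 1.0e-9             # error tolerance (assumed value)

for k in range(len(b)):  # at most n cycles in exact arithmetic
    As = A @ s
    alpha = (s @ r) / (s @ As)
    x = x + alpha * s
    r = b - A @ x
    if np.sqrt(r @ r) <= eps:
        break            # convergence criterion satisfied
    beta = -(r @ As) / (s @ As)
    s = r + beta * s

print(x)                       # conjugate gradient solution
print(np.linalg.solve(A, b))   # check against a direct solve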
It can be shown that the residual vectors $\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \ldots$ produced by the algorithm are mutually orthogonal; i.e., $\mathbf{r}_i \cdot \mathbf{r}_j = 0$, $i \neq j$. Now suppose that we have carried out enough iterations to have computed the whole set of n residual vectors. The residual resulting from the next iteration must be a null vector ($\mathbf{r}_{n+1} = 0$), because an n-dimensional space cannot contain more than n mutually orthogonal nonzero vectors; this indicates that the solution has been obtained. It thus appears that the conjugate gradient algorithm is not an iterative method at all, because it reaches the exact solution after n computational cycles. In practice, however, convergence is usually achieved in fewer than n iterations.
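Both properties are easy to observe numerically. The sketch below (again with made-up system values) reruns the loop while saving every residual, then prints the matrix of dot products $\mathbf{r}_i \cdot \mathbf{r}_j$, which comes out diagonal to within rounding, with the last residual essentially zero:

import numpy as np

# Made-up symmetric, positive-definite 4 x 4 test system.
A = np.array([[5.0, 1.0, 0.0, 0.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 1.0, 3.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])

x = np.zeros(4)
r = b - A @ x
s = r.copy()
residuals = [r]                # keep r_0, r_1, ... for inspection
for k in range(len(b)):
    As = A @ s
    alpha = (s @ r) / (s @ As)
    x = x + alpha * s
    r = b - A @ x
    residuals.append(r)
    beta = -(r @ As) / (s @ As)
    s = r + beta * s

R = np.array(residuals)
print(np.round(R @ R.T, 10))          # off-diagonal entries vanish
print(np.linalg.norm(residuals[-1]))  # residual after n cycles is ~0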
The conjugate gradient method is not competitive with direct methods in the solution of small sets of equations. Its strength lies in the handling of large, sparse systems (where most elements of A are zero). It is important to note that A enters the algorithm only through its multiplication by a vector, i.e., in the form Av, where v is a vector (either $\mathbf{x}_{k+1}$ or $\mathbf{s}_k$). If A is sparse, it is possible to write an efficient subroutine for the multiplication and pass it on to the conjugate gradient algorithm, as sketched below.
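For example, if A is tridiagonal it can be stored as three vectors and the product Av formed in O(n) operations. The storage scheme and function name in this sketch are illustrative assumptions, not taken from the text:

import numpy as np

def Av_tridiag(c, d, e, v):
    """Return A @ v for a tridiagonal A stored as three vectors:
    c = subdiagonal, d = diagonal, e = superdiagonal.
    Only O(n) work and storage; the zero elements are never touched."""
    Av = d * v
    Av[:-1] += e * v[1:]    # superdiagonal contributions
    Av[1:] += c * v[:-1]    # subdiagonal contributions
    return Av

# Quick check against a dense multiplication (illustrative values).
n = 5
c = np.full(n - 1, -1.0)    # subdiagonal
d = np.full(n, 2.0)         # diagonal
e = np.full(n - 1, -1.0)    # superdiagonal
A = np.diag(c, -1) + np.diag(d) + np.diag(e, 1)
v = np.arange(1.0, n + 1)
print(Av_tridiag(c, d, e, v))
print(A @ v)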
conjGrad
The function conjGrad shown below implements the conjugate gradient algorithm.
The maximum allowable number of iterations is set to n. Note that conjGrad calls the function Av(v), which returns the product Av; this multiplication routine must be supplied by the user.
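The following is a sketch of such a function under the interface just described, assuming NumPy; the argument order, tolerance default, and return values are assumptions consistent with the description above, not the book's exact listing:

import numpy as np

def conjGrad(Av, x, b, tol=1.0e-9):
    """Solve Ax = b by the conjugate gradient method.
    Av(v) -- user-supplied function returning the product A @ v.
    x     -- starting guess; b -- constant vector.
    Returns the solution and the number of iterations used."""
    n = len(b)
    r = b - Av(x)
    s = r.copy()
    for i in range(n):              # at most n iterations
        As = Av(s)
        alpha = (s @ r) / (s @ As)
        x = x + alpha * s
        r = b - Av(x)
        if np.sqrt(r @ r) < tol:
            break
        beta = -(r @ As) / (s @ As)
        s = r + beta * s
    return x, i

# Usage sketch: a dense matrix wrapped in a matvec function
# (illustrative values).
A = np.array([[ 4.0, -1.0,  1.0],
              [-1.0,  4.0, -2.0],
              [ 1.0, -2.0,  4.0]])
b = np.array([12.0, -1.0, 5.0])
x, numIter = conjGrad(lambda v: A @ v, np.zeros(3), b)
print(x, numIter)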