limited to a small number of iterations, even simpler algorithms such as
Gauss-Seidel (not covered here) tend to be significantly better than plain
CG. However, there is a trick up our sleeve that can speed this up, called
preconditioning. Preconditioned conjugate gradient (PCG) is what we will
be using.
More generally, CG takes more iterations the further A is from being
the identity matrix, I. It should be immediately obvious that solving a
system with the identity matrix is pretty easy: the solution of Ip = b
is just p = b! How exactly we measure how far A is from the identity
is beyond the scope of this topic (it's something called the condition number
of A). The idea behind preconditioning is that the solution of Ap = b is
the same as the solution of MAp = Mb for any invertible matrix M. If
M is approximately the inverse of A, so that MA is really close to being
the identity matrix, then CG should be able to solve the preconditioned
equations MAp = Mb really fast. PCG is just a clever way of applying CG
to these preconditioned equations without actually having to form them.
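To make this concrete, here is a small NumPy sketch (an illustration added here, not part of the original algorithm) that checks both claims on a toy system: any invertible M leaves the solution unchanged, and an M that approximates the inverse of A makes MA nearly the identity, with a correspondingly small condition number. The 1D Poisson-style matrix and the perturbed-inverse preconditioner are contrived purely for this demonstration.

    import numpy as np

    rng = np.random.default_rng(0)

    # A 1D Poisson-style matrix stands in for A: symmetric positive definite,
    # but badly conditioned once n is moderately large.
    n = 50
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = rng.standard_normal(n)

    # Any invertible M leaves the solution unchanged: Ap = b  <=>  MAp = Mb.
    M_any = np.diag(1.0 + rng.random(n))        # random but clearly invertible
    p1 = np.linalg.solve(A, b)
    p2 = np.linalg.solve(M_any @ A, M_any @ b)
    print(np.allclose(p1, p2))                  # True

    # If M approximates A^{-1}, then MA is close to the identity and far
    # better conditioned; a perturbed exact inverse plays the role of M here.
    M_good = np.linalg.inv(A) + 0.001 * rng.standard_normal((n, n))
    print(np.linalg.cond(A))                    # roughly 10^3 for n = 50
    print(np.linalg.cond(M_good @ A))           # close to 1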
Before we actually get to the details of PCG, we need to talk about
convergence. When do we know to stop? How do we check to see that
our current guess is close enough to the solution? Ideally we would just
measure the norm of the difference between our current guess and the exact
solution—but of course that requires knowing the exact solution! So we
will instead look at a vector called the residual:
r_i = b − Ap_i.
That is, if p_i is the i-th guess at the true solution, the i-th residual r_i is
just how far away it is from satisfying the equation Ap = b. When we
hit the exact solution, the residual is exactly zero.¹ Therefore, we stop
our iteration when the norm of the residual is small enough, below some
tolerance.
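As a sketch of how such a stopping test might look in code, here is a generic loop in Python; the iterative_solve wrapper, the step callback, and the tolerance value are hypothetical placeholders rather than anything prescribed here.

    import numpy as np

    def iterative_solve(A, b, step, tol=1e-6, max_iter=1000):
        # `step` stands in for one sweep of whatever iterative method is in
        # use (CG, Gauss-Seidel, ...) and returns an improved guess for p.
        p = np.zeros_like(b)
        for _ in range(max_iter):
            r = b - A @ p                  # residual r_i = b - A p_i
            if np.linalg.norm(r) < tol:    # close enough: stop iterating
                break
            p = step(A, b, p, r)
        return p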
That brings us to the next question: how small is small enough? And
what norm do you use to measure r_i? Think back to what r_i means
physically. These equations resulted from deriving that b − Ap is the negative
of the finite difference estimate of the divergence of u^(n+1), which we want
to be zero. Thus the residual measures exactly how much divergence there
will be in the velocity field after we've updated it with our current estimate
of the pressure. It seems sensible, then, to take the infinity norm of the
residual (the maximum absolute value of any entry) and compare that to
¹ In fact, it's not hard to see that the residual is just A times the error: r_i = A(p_exact − p_i).
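Here is a small NumPy check (again with a contrived system standing in for the pressure equations) of the footnote's identity, and of the infinity norm as the largest single entry of the residual:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD stand-in for A
    b = rng.standard_normal(n)

    p_exact = np.linalg.solve(A, b)
    p_i = p_exact + 0.01 * rng.standard_normal(n)   # an imperfect guess

    r_i = b - A @ p_i
    # Footnote identity: the residual is A times the error.
    print(np.allclose(r_i, A @ (p_exact - p_i)))    # True

    # Infinity norm = the largest absolute entry of the residual, i.e. the
    # worst remaining divergence in any single cell after the pressure update.
    print(np.linalg.norm(r_i, ord=np.inf))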