vast body of work, beyond one particular algorithm that is both fairly
efficient and extremely simple to implement, even for irregular domains,
called MICCG(0), or more fully, modified incomplete Cholesky conjugate
gradient, level zero. Quite a mouthful! Let's go through it slowly.
One of the many properties that A has is that it is symmetric positive
definite (SPD). Technically this means that qᵀAq > 0 for any non-zero
vector q.
Actually, before going on I should be a little more careful. A might
just be symmetric positive semi-definite, meaning that qᵀAq ≥ 0 (with
zero achieved for some non-zero q). If there is some fluid region entirely
surrounded by solid walls, with no empty air cells, then A will not be
strictly positive definite. In that case, A is singular in fact—it doesn't have
an inverse. That doesn't necessarily mean there isn't a solution, however.
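To make the singular case concrete, here is a small illustrative sketch (not the actual matrix from this chapter) of a one-dimensional analogue of A: a row of fluid cells closed off by solid walls at both ends, built with NumPy. The constant vector lies in its null space, so the matrix has no inverse, and qᵀAq is zero for a non-zero q:

    import numpy as np

    # Toy 1D analogue of A: a row of n fluid cells with solid walls at
    # both ends and no air cells (purely for illustration).
    n = 5
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A[0, 0] = 1.0    # leftmost cell has only one fluid neighbor
    A[-1, -1] = 1.0  # rightmost cell has only one fluid neighbor

    q = np.ones(n)                       # a non-zero constant vector
    print(A @ q)                         # all zeros: A q = 0, so A has no inverse
    print(min(np.linalg.eigvalsh(A)))    # smallest eigenvalue is (numerically) zero
    print(q @ A @ q)                     # qᵀAq = 0: semi-definite, not definite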
If the divergences (the right-hand side) satisfy a compatibility condition
then life is good and there is a solution. The compatibility condition is
simply that the velocities of the solid walls are compatible with the fluid
contained within being incompressible—i.e., the fluid-solid boundary faces
have wall velocities that add up to zero, so that the flow in is balanced
by the flow out—we will discuss how to ensure that this is the case at the
end of the chapter. In fact, not only will there be a solution, but there are
infinitely many solutions! It turns out you can take any solution for
pressure, add an arbitrary constant to it, and get another solution. However,
when we take the pressure gradient for the velocity update, the constants
cancel so we don't actually care which solution we get. They're all good.
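As a quick illustration of why the arbitrary constant is harmless, the following sketch (with made-up numbers, and any scale factors in the velocity update left out) checks that the discrete pressure differences used to update the velocities are unchanged when a constant is added to every pressure value:

    import numpy as np

    dx = 0.1
    p = np.array([3.0, 5.0, 4.0, 6.0])   # pressures in a row of cells (made-up numbers)
    c = 7.0                               # an arbitrary constant added to every cell

    grad_p  = (p[1:] - p[:-1]) / dx            # discrete pressure gradient at the faces
    grad_pc = ((p + c)[1:] - (p + c)[:-1]) / dx

    print(np.allclose(grad_p, grad_pc))   # True: the constant cancels in the update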
One particularly useful algorithm for solving symmetric positive semi-
definite linear systems is called the conjugate gradient algorithm, usually
abbreviated as CG. It's an iterative method, meaning that we start with
a guess at the solution and in each iteration improve on it, stopping when
we think we are accurate enough. CG chooses the iterative updates to the
guess to minimize a particular measure of the error and thus can be
guaranteed to converge to the solution eventually. Another very nice aspect
of CG, as compared to Gaussian elimination for example, is that each
iteration only involves multiplying A by a vector, adding vectors, multiplying
vectors by scalar numbers, and computing a few dot-products—all of which
are very easy to code, and most of which can be done with calls to highly
optimized standard libraries (more on this below).
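For reference, below is a minimal sketch of plain, unpreconditioned CG, assuming NumPy and a user-supplied apply_A that returns the product of A with a vector; the names here are illustrative choices, not the notation of this chapter. Notice that A enters only through one matrix-vector product per iteration; everything else is vector adds, scalar multiplies, and dot products.

    import numpy as np

    def conjugate_gradient(apply_A, b, tol=1e-6, max_iterations=200):
        # Solve A x = b for symmetric positive (semi-)definite A.
        # apply_A(q) must return the product A q.
        x = np.zeros_like(b)              # initial guess of all zeros
        r = b.copy()                      # residual r = b - A x
        if np.max(np.abs(r)) < tol:
            return x
        s = r.copy()                      # search direction
        sigma = np.dot(r, r)
        for _ in range(max_iterations):
            z = apply_A(s)                # the one matrix-vector product per iteration
            alpha = sigma / np.dot(s, z)
            x = x + alpha * s             # scalar-times-vector and vector adds
            r = r - alpha * z
            if np.max(np.abs(r)) < tol:
                break
            sigma_new = np.dot(r, r)      # a few dot products
            s = r + (sigma_new / sigma) * s
            sigma = sigma_new
        return x

The MICCG(0) algorithm developed in this chapter keeps this same overall structure, adding a preconditioning step to each iteration.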
The problem with CG, however, is that the larger the grid, the longer it
takes to converge. It can be shown that the number of iterations it takes to
converge to some desired accuracy is proportional to the width of the grid:
the maximum number of grid cells in any one direction. In practice, when