end at the south and north boundaries. The TDMA can be used for the strips along the j-lines. The sweeping order among the strips in each step is arranged so that the "latest" values at the interfaces are transferred from one strip to the next. This multiblock ADI method for a 2-D problem has almost the same efficiency as the single-block version. However, the domain is divided differently in the two steps, which requires transposing the matrices.
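The solver applied within each strip is the standard tridiagonal matrix algorithm (TDMA, also called the Thomas algorithm). A minimal sketch, with illustrative array arguments `a`, `b`, `c` for the sub-, main, and super-diagonals and `d` for the right-hand side:

```python
import numpy as np

def tdma(a, b, c, d):
    """Solve a tridiagonal system with the Thomas algorithm (TDMA).

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Returns the solution vector x.
    """
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In the multiblock ADI setting, one such solve is performed per strip per sweep direction.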
The multiblock ADI method shown in Fig. 8.8 can be run on parallel computers.
Its efficiency can be enhanced by applying the "red-black" ordering method among blocks: the blocks are "colored" red and black alternately, and the calculation sequence among blocks follows two steps similar to those used in the "red-black" ordered Gauss-Seidel method.
Figure 8.8 Strategy of domain decomposition for ADI method.
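The red-black coloring idea is easiest to illustrate at the point level, where the text applies it among blocks. Below is a sketch of a red-black ordered Gauss-Seidel sweep for the 2-D Laplace equation; the function name and grid setup are illustrative, not from the text:

```python
import numpy as np

def red_black_gauss_seidel(u, n_iter=200):
    """Red-black ordered Gauss-Seidel sweeps for the 2-D Laplace
    equation on a uniform grid; boundary values of u are held fixed.

    Points with (i + j) even are "red", (i + j) odd are "black".
    All red points are updated first, then all black points; within
    one color every update uses only the other color's values, so
    each half-sweep could run in parallel.
    """
    for _ in range(n_iter):
        for color in (0, 1):  # 0 = red step, 1 = black step
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    if (i + j) % 2 == color:
                        u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                          + u[i, j - 1] + u[i, j + 1])
    return u
```

Coloring whole blocks instead of points follows the same two-step pattern, with block solves replacing the point updates.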
The SIP method is recursive, making its extension from single to multiple blocks less straightforward. One may divide the entire domain into several subdomains, such as the four subdomains shown in Fig. 8.1. The global coefficient matrix is correspondingly split into a system of diagonal blocks A ii , which contain the elements connecting the points that belong to the i th subdomain, and off-diagonal blocks A ij ( i ≠ j ), which represent the interaction of subdomains i and j . Then the SIP method is applied in each subdomain, while the terms related to the blocks A ij are put in the source term. The global iteration matrix is selected so that the blocks are decoupled, i.e., M ij = 0 for i ≠ j . Each diagonal block matrix M ii is decomposed into L and U matrices in the
normal way. The multiblock SIP method can be parallelized. However, the informa-
tion on boundary conditions is transmitted more slowly to the interior in a multiblock
domain than in a single-block domain; thus, the convergence speed may deteriorate
with the increase in the number of blocks. The details can be found in Ferziger and
Peric (1995).
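The decoupling M ij = 0 can be sketched as a block iteration in which the off-diagonal contributions A ij x j are moved to the source term. In this sketch a direct solve stands in for the SIP incomplete LU treatment of each diagonal block; the function and argument names are illustrative:

```python
import numpy as np

def block_decoupled_solve(A_blocks, d_blocks, n_iter=50):
    """Block iteration illustrating the multiblock decoupling:
    off-diagonal blocks A_ij (i != j) are moved to the source term,
    and each diagonal block A_ii is then solved independently.

    Here each A_ii is solved directly; in the multiblock SIP method
    it would instead be handled by the incomplete LU factorization.
    A_blocks: nested list with A_blocks[i][j] = A_ij.
    d_blocks: list of right-hand-side segments d_i.
    """
    n = len(d_blocks)
    x = [np.zeros_like(d) for d in d_blocks]
    for _ in range(n_iter):
        x_new = []
        for i in range(n):
            # Move the coupling terms to the right-hand side
            rhs = d_blocks[i].copy()
            for j in range(n):
                if j != i:
                    rhs -= A_blocks[i][j] @ x[j]
            x_new.append(np.linalg.solve(A_blocks[i][i], rhs))
        x = x_new  # blocks are decoupled: solves could run in parallel
    return x
```

Because each block sees the other blocks' values only through the source term, information crosses block interfaces one iteration at a time, which is why convergence slows as the number of blocks grows.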
In addition, the speed-up of parallel computations depends on the information transfer among processors, which is machine-dependent. A detailed analysis of this problem can be found in Golub and Ortega (1993) and Shyy et al. (1997).