Following Sect. 7.4, we will use an explicit method based on finite differences for solving this two-dimensional diffusion equation. First, we introduce a uniform spatial mesh $(x_i = i\Delta x,\; y_j = j\Delta y)$, with $0 \le i \le n_x$, $0 \le j \le n_y$ and $\Delta x = 1/n_x$, $\Delta y = 1/n_y$. Then, the approximate solution of $u$ will be sought on the mesh points and at discrete time levels $t_\ell = \ell\Delta t$, $\ell > 0$.
(a) Show that an explicit numerical scheme for solving (10.22) on the inner mesh
    points at $t = t_{\ell+1}$ is of the following form:

    $$u^{\ell+1}_{i,j} = \alpha u^{\ell}_{i,j} + \beta\bigl(u^{\ell}_{i-1,j} + u^{\ell}_{i+1,j}\bigr) + \gamma\bigl(u^{\ell}_{i,j-1} + u^{\ell}_{i,j+1}\bigr) + \Delta t\, f_{i,j}, \qquad (10.23)$$

    where $1 \le i \le n_x - 1$, $1 \le j \le n_y - 1$ and

    $$\alpha = 1 - 2\left(\frac{\Delta t}{\Delta x^2} + \frac{\Delta t}{\Delta y^2}\right), \qquad \beta = \frac{\Delta t}{\Delta x^2}, \qquad \gamma = \frac{\Delta t}{\Delta y^2}.$$
(b) Derive the formulas for computing the numerical solutions on the four boundaries, following the discussions in Sects. 7.4.2 and 7.4.3.
(c) Implement the above explicit numerical scheme as a serial program in the C programming language, where the main computation given by (10.23) is implemented as a double-layered for-loop. For each of the four boundaries, a single-layered for-loop needs to be implemented according to (b).
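The double-layered loop of part (c) could be sketched as follows. This is one possible implementation, assuming the mesh values are stored row-major in 1D arrays of length $(n_x+1)(n_y+1)$; the names `diffusion_step` and `IDX` are illustrative, not taken from the text:

```c
#include <stdio.h>

/* Row-major indexing of mesh point (i,j) in a (nx+1) x (ny+1) array. */
#define IDX(i, j, ny) ((i) * ((ny) + 1) + (j))

/* One explicit time step of (10.23) on the inner mesh points.
 * u holds the solution at time level l, u_new receives level l+1,
 * and f holds the source-term values at the mesh points. */
void diffusion_step(double *u_new, const double *u, const double *f,
                    int nx, int ny, double dt, double dx, double dy)
{
    double beta  = dt / (dx * dx);
    double gamma = dt / (dy * dy);
    double alpha = 1.0 - 2.0 * (beta + gamma);  /* as in (10.23) */

    for (int i = 1; i < nx; i++)
        for (int j = 1; j < ny; j++)
            u_new[IDX(i, j, ny)] =
                  alpha * u[IDX(i, j, ny)]
                + beta  * (u[IDX(i - 1, j, ny)] + u[IDX(i + 1, j, ny)])
                + gamma * (u[IDX(i, j - 1, ny)] + u[IDX(i, j + 1, ny)])
                + dt * f[IDX(i, j, ny)];
}
```

The boundary updates of part (b) would then be four separate single-layered loops over $i=0$, $i=n_x$, $j=0$ and $j=n_y$, whose bodies depend on the boundary conditions of (10.22).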
(d) Parallelize the serial program using the OpenMP directive #pragma omp for. What are the obtained speedup results?
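For part (d), the directive can annotate the outer loop of the main computation, since the iterations are independent (distinct `u_new` and `u` arrays). A sketch, again with illustrative names; the speedup actually obtained depends on the thread count, mesh size and memory bandwidth:

```c
/* Row-major indexing, as in the serial sketch. */
#define IDX(i, j, ny) ((i) * ((ny) + 1) + (j))

/* OpenMP-parallelized version of the main loop of (10.23). Each thread
 * is assigned a chunk of the outer i-loop; no synchronization is needed
 * inside the loop because every (i,j) writes a distinct entry of u_new. */
void diffusion_step_omp(double *u_new, const double *u, const double *f,
                        int nx, int ny, double dt, double dx, double dy)
{
    double beta  = dt / (dx * dx);
    double gamma = dt / (dy * dy);
    double alpha = 1.0 - 2.0 * (beta + gamma);

    #pragma omp parallel for
    for (int i = 1; i < nx; i++)
        for (int j = 1; j < ny; j++)
            u_new[IDX(i, j, ny)] =
                  alpha * u[IDX(i, j, ny)]
                + beta  * (u[IDX(i - 1, j, ny)] + u[IDX(i + 1, j, ny)])
                + gamma * (u[IDX(i, j - 1, ny)] + u[IDX(i, j + 1, ny)])
                + dt * f[IDX(i, j, ny)];
}
```

Compiled without OpenMP support the pragma is ignored and the code runs serially, which makes it easy to compare timings for the speedup measurement.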
(e) Propose a two-dimensional domain decomposition for dividing the inner mesh points into $P_x \times P_y$ subdomains. More specifically, you should find a mapping $(i, j) \to (p_x, p_y)$ that can assign each inner mesh point $(i, j)$ to a unique subdomain with id $(p_x, p_y) \in [0, P_x - 1] \times [0, P_y - 1]$.
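One possible answer to part (e) is a block-wise mapping that cuts each index direction into nearly equal contiguous blocks; the sketch below is one valid choice among several (cyclic mappings would also satisfy the uniqueness requirement):

```c
/* Map inner mesh point (i,j), 1 <= i <= nx-1 and 1 <= j <= ny-1, to a
 * subdomain id (px,py) in [0,Px-1] x [0,Py-1]. The nx-1 inner indices
 * in x are split into Px contiguous blocks of nearly equal size via
 * integer division, and likewise in y. */
void map_point(int i, int j, int nx, int ny, int Px, int Py,
               int *px, int *py)
{
    *px = ((i - 1) * Px) / (nx - 1);
    *py = ((j - 1) * Py) / (ny - 1);
}
```

The mapping is onto (every subdomain id receives at least one point when $P_x \le n_x-1$ and $P_y \le n_y-1$) and assigns each inner point to exactly one subdomain.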
(f) On each subdomain, the assigned mesh points should be expanded with one
layer of additional points, which either lie on the actual physical boundaries
or work as ghost points toward the neighboring subdomains. For a subdomain
with id $(p_x, p_y)$, which values of the local $u^{\ell+1}$ solution should be sent to which neighboring subdomains?
(g) Implement a new parallel program using MPI based on the above domain
decomposition. The MPI_Sendrecv command can be used to enable inter-subdomain communication.
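The ghost-layer exchange of parts (f) and (g) could be sketched as below for the x-direction. The storage layout is an assumption not fixed by the text: the local solution `u` is stored row-major as a $(l_{nx}+2) \times (l_{ny}+2)$ array including the ghost layer, and `prev`/`next` are the ranks of the x-neighbors, with `MPI_PROC_NULL` used on a physical boundary so the calls degenerate to no-ops there:

```c
#include <mpi.h>

/* Exchange one ghost layer in the x-direction with MPI_Sendrecv.
 * Owned rows are i = 1..lnx; rows i = 0 and i = lnx+1 are ghost rows
 * (or physical boundary rows, in which case the neighbor rank is
 * MPI_PROC_NULL and the corresponding transfer is skipped). */
void exchange_x(double *u, int lnx, int lny, int prev, int next,
                MPI_Comm comm)
{
    int stride = lny + 2;

    /* first owned row -> prev; ghost row i = lnx+1 <- next */
    MPI_Sendrecv(&u[1 * stride + 1],         lny, MPI_DOUBLE, prev, 0,
                 &u[(lnx + 1) * stride + 1], lny, MPI_DOUBLE, next, 0,
                 comm, MPI_STATUS_IGNORE);
    /* last owned row -> next; ghost row i = 0 <- prev */
    MPI_Sendrecv(&u[lnx * stride + 1],       lny, MPI_DOUBLE, next, 1,
                 &u[0 * stride + 1],         lny, MPI_DOUBLE, prev, 1,
                 comm, MPI_STATUS_IGNORE);
}
```

With this row-major layout the rows exchanged in x are contiguous in memory; the analogous y-direction exchange transfers columns, which are strided and can be described with an `MPI_Type_vector` datatype.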
(h) An approach to hiding the communication overhead is to use so-called non-blocking communication commands in MPI. On a parallel system that is capable of carrying out communication tasks at the same time as computations, non-blocking communication calls can be used to initiate the communication (without waiting for its conclusion), followed immediately by computations. The simplest non-blocking MPI commands are MPI_Isend and MPI_Irecv, which upon return do not imply that the action of sending or receiving a message is completed. The standard MPI_Send and MPI_Recv commands are thus