iel_start    Starting element number per processor
ieq_start    Starting equation number per processor
numpe        Local processor number or rank
npes         Number of "processing elements"
ier          MPI parameter for error checking

These variables must not be declared again.
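For illustration, the sketch below shows one way nels_pp and iel_start could be obtained from a simple block distribution of nels elements over npes processors. The arithmetic is an assumption for illustration only and is not necessarily the scheme used by calc_nels_pp.

   ! Sketch only: block distribution of nels elements over npes processors,
   ! with the remainder spread over the first MOD(nels,npes) processors.
   ! numpe runs from 1 to npes, as in the text.
   nels_pp = nels/npes
   IF (numpe <= MOD(nels,npes)) nels_pp = nels_pp + 1
   iel_start = (numpe-1)*(nels/npes) + MIN(numpe-1,MOD(nels,npes)) + 1

With nels = 10 and npes = 3, processors 1, 2 and 3 would hold 4, 3 and 3 elements and start at elements 1, 5 and 8 respectively.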
12.2.3 MPI library routines
The same MPI routines are used in all programs. There are only a dozen or so; they are listed below with their purposes (see Appendix F for further details of the subroutines used in this chapter and their arguments):
MPI_INITIALIZE        Initialise MPI (hidden)
shutdown              Close MPI: must appear
DOT_PRODUCT_P         Distributed version of dot product
SUM_P                 Distributed version of array SUM
                      (N.B. We take the liberty of using capitals as if these
                      were part of Fortran.)
norm_p                Finds the l2 norm of a distributed vector
find_pe_procs         Finds how many processors are being used
calc_nels_pp          Finds number of elements per processor (variable)
calc_neq_pp           Finds number of equations per processor (variable)
reduce                Finds maximum of a distributed integer variable
make_ggl              Builds distributed g vectors (see Section 3.7.10 for
                      description of g)
gather                See Section 12.2.8
scatter               See Section 12.2.8
checon_par            Convergence check for distributed vectors
reindex_fixed_nodes   See Section 12.2.9
bcast_inputdata_pxxx  See Section 12.2.5
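To illustrate what a "distributed version" of an intrinsic means, the following sketch shows how a function in the spirit of DOT_PRODUCT_P could be built from the intrinsic DOT_PRODUCT and an MPI global sum; the real library routine may differ in detail (for example in the working precision used).

   FUNCTION dot_product_p(x_pp,y_pp) RESULT(global)
   ! Sketch: each processor forms the dot product of its own portions of
   ! the vectors, then MPI_ALLREDUCE sums the partial results so that
   ! every processor receives the full dot product.
     USE mpi
     IMPLICIT NONE
     REAL(8),INTENT(IN)::x_pp(:),y_pp(:)   ! local portions of the vectors
     REAL(8)::local,global
     INTEGER::ier
     local = DOT_PRODUCT(x_pp,y_pp)
     CALL MPI_ALLREDUCE(local,global,1,MPI_DOUBLE_PRECISION,MPI_SUM,        &
                        MPI_COMM_WORLD,ier)
   END FUNCTION dot_product_p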
12.2.4 The pp appendage
Distributed arrays and their upper bounds carry the appendage _pp. A difference from the serial programs is that it is more convenient to begin array addresses at 1 rather than 0, so the serial p(0:neq) becomes p_pp(neq_pp) in parallel.
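As a sketch, in a serial program and its parallel counterpart the vector p might be set up as follows (neq_pp being supplied by calc_neq_pp):

   ! Serial: the whole vector, with addresses starting at 0
   REAL(8),ALLOCATABLE::p(:)
   ALLOCATE(p(0:neq))

   ! Parallel: only this processor's share, with addresses starting at 1
   REAL(8),ALLOCATABLE::p_pp(:)
   ALLOCATE(p_pp(neq_pp))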
12.2.5 Reading and writing
The simple approach adopted here is that data are read, and results written, on a single (not necessarily the same) processor. Data, having been read on one processor, are then broadcast to all other processors by MPI routines such as bcast_inputdata_p121.
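The pattern underlying such a routine is sketched below: one processor reads the data and MPI_BCAST then copies each item to all of the other processors. The variable names are examples only, and the real bcast_inputdata_* routines have program-specific argument lists.

   ! Sketch: numpe==1 (MPI rank 0) reads the data file, then every item
   ! is broadcast from rank 0 to all other processors.
   IF (numpe == 1) READ(10,*) nels, nn, nip, limit, tol
   CALL MPI_BCAST(nels, 1,MPI_INTEGER,0,MPI_COMM_WORLD,ier)
   CALL MPI_BCAST(nn,   1,MPI_INTEGER,0,MPI_COMM_WORLD,ier)
   CALL MPI_BCAST(nip,  1,MPI_INTEGER,0,MPI_COMM_WORLD,ier)
   CALL MPI_BCAST(limit,1,MPI_INTEGER,0,MPI_COMM_WORLD,ier)
   CALL MPI_BCAST(tol,  1,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ier)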