operators using MPI_Op_create. The details of this function may be found in any
MPI reference, such as [19, 32].
After MPI_Allreduce returns, the variable sum at every process contains
the sum of the absolute values of the correlations between every pair of voxels. The
master process prints this value to standard output and returns to the function
main. All the compute processes also return to the function main, which calls
MPI_Finalize before returning. This completes the description of the MPI-based
parallel program that computes all-pair correlations.
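To make the reduction step concrete, the following is a minimal, self-contained sketch of how MPI_Allreduce combines per-process partial sums. The variable local_sum and the fabricated value assigned to it are illustrative stand-ins for the partial correlation sums computed by the actual program; this is not the chapter's exact listing.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double local_sum, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Placeholder: in the real program this would be the partial sum of
       absolute correlation values over this process's share of voxel pairs. */
    local_sum = (double)(rank + 1);

    /* Combine the partial sums; every process receives the global total. */
    MPI_Allreduce(&local_sum, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* Only the master process reports the result. */
    if (rank == 0)
        printf("sum of absolute correlations = %f\n", sum);

    MPI_Finalize();
    return 0;
}

Because MPI_Allreduce delivers the result to every process (unlike MPI_Reduce, which delivers it only to a designated root), the test of rank is needed solely to avoid printing the value once per process.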
This section has introduced nine basic MPI functions that are sufficient to write
simple but useful parallel programs. These functions, summarized in Table 5.2,
are MPI_Init, MPI_Finalize, MPI_Comm_size, MPI_Comm_rank, MPI_Send,
MPI_Recv, MPI_Bcast, MPI_Barrier, and MPI_Allreduce. They were introduced
through a series of programs, starting with a simple hello-world program.
Through these programs, some of the issues faced by parallel programmers
were discussed, including interprocess synchronization, consistency, the
importance of MPI semantics, data and work distribution, load balancing,
and program efficiency. The programs presented in this section demonstrated how
some of these issues may be addressed.
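For reference, a minimal hello-world program of the kind the series began with might look as follows. It is a sketch rather than the chapter's exact listing, and it exercises four of the nine functions: MPI_Init, MPI_Comm_size, MPI_Comm_rank, and MPI_Finalize.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    /* Initialize the MPI runtime before any other MPI call. */
    MPI_Init(&argc, &argv);

    /* Query the job size and this process's rank within MPI_COMM_WORLD. */
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("Hello from process %d of %d\n", rank, size);

    /* Clean up the MPI runtime; no MPI calls are allowed after this. */
    MPI_Finalize();
    return 0;
}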
5.4.2 Other MPI Features
While it is possible to write fairly complex MPI programs using the nine MPI
functions described earlier, the remaining MPI functions provide additional
functionality that is useful to more advanced programmers. Some of the key features
are briefly summarized next. More details can be found in an MPI reference
[19, 32] or in the formal MPI specifications [14, 28].
Communicators
All the MPI functions described earlier (with the exception of MPI_Init and
MPI_Finalize) require a communicator argument to be specified. The communicator
identifies the group of processes that participate in the operation.
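As a brief illustration of working with communicators other than MPI_COMM_WORLD, the following sketch uses MPI_Comm_split, an MPI function beyond the nine covered in this chapter, to divide the processes into two groups. The rank-parity grouping rule is arbitrary and chosen only for illustration; each process ends up with a separate rank within its subgroup.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_rank, sub_rank;
    MPI_Comm subcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Processes that pass the same color end up in the same new
       communicator; here even and odd ranks form two separate groups. */
    int color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

    /* Each process has its own rank within the new communicator. */
    MPI_Comm_rank(subcomm, &sub_rank);
    printf("World rank %d has rank %d in subgroup %d\n",
           world_rank, sub_rank, color);

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}

Collective operations such as MPI_Bcast or MPI_Allreduce can then be invoked on subcomm to involve only the processes of one subgroup.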
Table 5.2 A Summary of the Nine MPI Functions Introduced in This Chapter

Functionality           MPI Function
----------------------  ------------------------------------------------------------
Initialization          MPI_Init(int *argc, char ***argv)
Cleanup                 MPI_Finalize(void)
Get job size            MPI_Comm_size(MPI_Comm comm, int *size)
Get process rank        MPI_Comm_rank(MPI_Comm comm, int *rank)
Point-to-point send     MPI_Send(void *buffer, int count, MPI_Datatype datatype,
                          int destination, int tag, MPI_Comm comm)
Point-to-point receive  MPI_Recv(void *buffer, int count, MPI_Datatype datatype,
                          int sender, int tag, MPI_Comm comm, MPI_Status *status)
Synchronization         MPI_Barrier(MPI_Comm comm)
Broadcast               MPI_Bcast(void *buffer, int count, MPI_Datatype datatype,
                          int root, MPI_Comm comm)
Global reduction        MPI_Allreduce(void *sendbuffer, void *receivebuffer,
                          int count, MPI_Datatype datatype, MPI_Op operation,
                          MPI_Comm comm)