Asynchronous Communication
The functions MPI_Send and MPI_Recv provide the functionality of a blocking communication. The MPI_Send call blocks until the data has either been sent on the network or copied into the system buffers. The application is free to modify the buffer given as the input argument to the function MPI_Send as soon as the call returns. Similarly, the MPI_Recv function call blocks until the required data is available and has been copied into the receive buffer specified in the call.
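As a minimal illustration, the following sketch shows the blocking pair in use on a two-process run; the message value and tag are assumptions for the example, not taken from the text.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;
        /* Blocks until 'value' is on the wire or in a system buffer;
         * the buffer may be reused as soon as the call returns. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Blocks until the data has been copied into 'value'. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}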
For performance reasons and to obtain better overlap of computation and communication, it is desirable to have functions with which an application can initiate a communication asynchronously, monitor its progress, check the availability of the data, and get information about the state of the network. Several asynchronous communication functions are available to achieve this goal. These functions include MPI_Isend, MPI_Ibsend, MPI_Issend, and MPI_Irsend for different flavors of asynchronous send operations; MPI_Irecv for the asynchronous receive operation; MPI_Test, MPI_Testany, MPI_Testall, and MPI_Testsome for monitoring the progress of a previously initiated communication operation; MPI_Wait, MPI_Waitany, MPI_Waitall, and MPI_Waitsome for blocking until the specified communication operation is complete; MPI_Probe for checking if there is a message ready to be delivered to the process; and MPI_Cancel for canceling a previously initiated asynchronous communication.
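As an illustration of this overlap, the following sketch exchanges buffers with ring neighbors using MPI_Irecv and MPI_Isend, performs unrelated work, and then blocks with MPI_Waitall; the buffer size and the ring pattern are assumptions for the example, and MPI_Test could be polled periodically in place of the final wait.

#include <mpi.h>

#define N 1024

int main(int argc, char **argv) {
    int rank, size;
    double sendbuf[N], recvbuf[N];
    MPI_Request reqs[2];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int right = (rank + 1) % size;
    int left  = (rank + size - 1) % size;
    for (int i = 0; i < N; i++) sendbuf[i] = rank;

    /* Initiate the exchange asynchronously; both calls return
     * immediately and the transfers proceed in the background. */
    MPI_Irecv(recvbuf, N, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... computation that does not touch the buffers goes here ... */

    /* Block until both operations are complete. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    MPI_Finalize();
    return 0;
}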
Collective Communication Operations
In addition to MPI_Bcast and MPI_Allreduce, several other collective communication operations are available in MPI. The function MPI_Gather is used to aggregate data from all the processes of a communicator into a single large buffer of a specified root process. The scatter operation, using the function MPI_Scatter, is the opposite: it slices an array stored at a root process and distributes it to the other processes in the communicator. The functions MPI_Gatherv and MPI_Scatterv may be used when the size of the data differs across processes. The functions MPI_Allgather and MPI_Allgatherv are variants of MPI_Gather and MPI_Gatherv in which the data collected at the root process is simultaneously broadcast to all the processes. The functions MPI_Alltoall and MPI_Alltoallv are used for a more general communication pattern, in which each node has data for every other node. For example, the function MPI_Alltoall may be directly used for the transpose step of parallel FFT implementations [25].
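As a brief illustration, the following sketch distributes an array from the root with MPI_Scatter, transforms each slice locally, and reassembles the result with MPI_Gather; the chunk size and the doubling step are illustrative assumptions.

#include <mpi.h>
#include <stdlib.h>

#define CHUNK 4

int main(int argc, char **argv) {
    int rank, size;
    int *full = NULL, local[CHUNK];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {            /* root owns the whole array */
        full = malloc(size * CHUNK * sizeof(int));
        for (int i = 0; i < size * CHUNK; i++) full[i] = i;
    }

    /* Root slices 'full' and sends one chunk to every process. */
    MPI_Scatter(full, CHUNK, MPI_INT, local, CHUNK, MPI_INT,
                0, MPI_COMM_WORLD);

    for (int i = 0; i < CHUNK; i++) local[i] *= 2;  /* local work */

    /* Root aggregates the transformed chunks back into 'full'. */
    MPI_Gather(local, CHUNK, MPI_INT, full, CHUNK, MPI_INT,
               0, MPI_COMM_WORLD);

    if (rank == 0) free(full);
    MPI_Finalize();
    return 0;
}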
MPI-IO
The MPI-2 specifications [28] provide a natural approach to file I/O that is consistent with the overall design of MPI. MPI-IO supports basic file operations such as open, close, read, write, delete, and resize, in addition to enhanced I/O capabilities.
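As a minimal illustration of the basic operations, the following sketch collectively opens a shared file and has each process write one block at a rank-dependent offset; the file name and offset scheme are assumptions for the example.

#include <mpi.h>

#define N 128

int main(int argc, char **argv) {
    int rank;
    int buf[N];
    MPI_File fh;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (int i = 0; i < N; i++) buf[i] = rank;

    /* Collective open for writing; the file is created if absent. */
    MPI_File_open(MPI_COMM_WORLD, "data.out",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Each process writes its block at a rank-dependent offset. */
    MPI_Offset offset = (MPI_Offset)rank * N * sizeof(int);
    MPI_File_write_at(fh, offset, buf, N, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}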
The concept of the view of a file is central to the design of MPI-IO. A view is defined as a repetitive pattern of noncontiguous blocks of the file data that is accessible to a process and appears contiguous to it. A view is created using a displacement, an etype, and a filetype. An etype is an elementary unit of transfer in MPI-IO; all the file I/O is done in units that are multiples of an etype. An etype may be defined as a fixed
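To make the view mechanism concrete, the following sketch builds a strided filetype with MPI_Type_vector and installs it with MPI_File_set_view, so that each process's interleaved file blocks appear contiguous to it; the etype (MPI_INT), the filetype, and the displacement chosen here are illustrative assumptions.

#include <mpi.h>

#define N 64

int main(int argc, char **argv) {
    int rank, size, buf[N];
    MPI_File fh;
    MPI_Datatype filetype;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    for (int i = 0; i < N; i++) buf[i] = rank;

    /* Filetype: N blocks of one int, strided by the process count,
     * so the processes interleave their data in the file. */
    MPI_Type_vector(N, 1, size, MPI_INT, &filetype);
    MPI_Type_commit(&filetype);

    MPI_File_open(MPI_COMM_WORLD, "view.out",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* The displacement staggers the processes; with etype MPI_INT,
     * each process now sees its scattered blocks as contiguous. */
    MPI_Offset disp = (MPI_Offset)rank * sizeof(int);
    MPI_File_set_view(fh, disp, MPI_INT, filetype, "native",
                      MPI_INFO_NULL);

    MPI_File_write(fh, buf, N, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Type_free(&filetype);
    MPI_Finalize();
    return 0;
}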