Messages are typed and many data types are supported, including characters; short, regular, and long integers; single- and double-precision floating-point numbers; and so on. It is also possible to construct other types derived from these.
MPI supports an extensive set of communication operations. The most basic
one is used to send messages as follows:
MPI_Send(buffer, count, data_type, destination, tag, communicator)
This call sends a buffer containing count items of the specified data type to the destination process. The tag field labels the message so the receiver can ask to receive only messages bearing that tag. The last field identifies the process group the destination belongs to; the destination field is just an index into the list of processes in that group. The corresponding call for receiving a message is
MPI_Recv(&buffer, count, data_type, source, tag, communicator, &status)
which announces that the receiver is looking for a message of a certain type from a
certain source with a certain tag.
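To make these calls concrete, here is a minimal sketch of a two-process exchange in C. The variable names and the choice of sending a single integer are illustrative, and the program assumes it is launched with at least two processes (e.g., mpirun -np 2).

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* Send one int to process 1, tagged 0, in the default group. */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Receive one int from process 0 with a matching tag. */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("Process 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }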
MPI supports four basic communication modes. Mode 1 is synchronous, in
which the sender may not begin sending until the receiver has called MPI_Recv.
Mode 2 is buffered, in which this restriction does not hold. Mode 3 is standard,
which is implementation dependent and can be either synchronous or buffered.
Mode 4 is ready, in which the sender claims the receiver is available (as in synchronous), but no check is made. Each of these primitives comes in a blocking and
a nonblocking version, leading to eight primitives in all. Receiving has only two
variants: blocking and nonblocking.
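The four modes map onto distinct send calls: MPI_Ssend (synchronous), MPI_Bsend (buffered), MPI_Send (standard), and MPI_Rsend (ready), with nonblocking counterparts MPI_Issend, MPI_Ibsend, MPI_Isend, and MPI_Irsend; the two receive variants are MPI_Recv and MPI_Irecv. The fragment below is a sketch of a nonblocking standard-mode send, assuming value, rank, and a receiving process 1 are set up as in the previous example.

    MPI_Request request;
    MPI_Status status;

    /* Nonblocking standard-mode send: the call returns immediately, and
       the buffer must not be reused until MPI_Wait reports completion. */
    MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);

    /* ... useful computation can overlap the message transfer here ... */

    MPI_Wait(&request, &status);   /* block until the send has completed */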
MPI supports collective communication, including broadcast, scatter/gather,
total exchange, aggregation, and barrier. For all forms of collective communication, all the processes in a group must make the call, with compatible arguments; failing to do so is an error. A typical form of collective communication has the processes organized in a tree, with values propagating up from the leaves to the root and undergoing some processing at each step, for example, adding up the values or taking the maximum.
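As a sketch of this pattern, the fragment below sums one integer per process into process 0 using MPI_Reduce. The contribution local is a made-up value for illustration, rank is assumed set up as before, and every process in MPI_COMM_WORLD must execute the call.

    int local = rank + 1;   /* each process's contribution (illustrative) */
    int total = 0;

    /* Reduction toward process 0: values propagate up a tree inside the
       library, being added together at each step. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum over all processes: %d\n", total);

The other collectives follow the same all-must-call pattern, for example, MPI_Bcast for broadcast, MPI_Scatter and MPI_Gather for scatter/gather, MPI_Alltoall for total exchange, and MPI_Barrier for the barrier.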
A basic concept in MPI is the virtual topology, in which the user can arrange the processes of an application into a tree, ring, grid, torus, or other logical pattern. Such an arrangement provides a way to name communication paths and facilitates communication.
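For instance, a grid with wraparound (a torus) might be set up as sketched below; the 3 x 4 dimensions are made up and assume the job was started with exactly 12 processes.

    MPI_Comm grid;
    int dims[2] = {3, 4};      /* assumed 3 x 4 arrangement of 12 processes */
    int periods[2] = {1, 1};   /* wraparound in both dimensions: a torus */
    int my_rank, coords[2], up, down;

    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &grid);
    MPI_Comm_rank(grid, &my_rank);               /* rank within the topology */
    MPI_Cart_coords(grid, my_rank, 2, coords);   /* my (row, column) position */
    MPI_Cart_shift(grid, 0, 1, &up, &down);      /* neighbor ranks along rows */

    /* up and down can now be used directly as the source and destination
       arguments of MPI_Recv and MPI_Send. */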
MPI-2 adds dynamic processes, remote memory access, nonblocking collective
communication, scalable I/O support, real-time processing, and many other new
features that are beyond the scope of this discussion. In the scientific community, a battle raged for years between the MPI and PVM camps. The PVM side said that
PVM was easier to learn and simpler to use. The MPI side said that MPI does more and also pointed out that MPI is a formal standard with a standardization committee
and an official defining document. The PVM side agreed but claimed that the lack of a formal standard was not necessarily a bad thing.