buffering. It has the severe disadvantage that the sender remains blocked until the
receiver has received and acknowledged the message.
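The rendezvous behavior of this blocking scheme can be sketched in a few lines. This is a hypothetical simulation of the semantics, not MPI code; the names BlockingChannel, send, and recv are illustrative:

```python
import threading
import queue

# Sketch of blocking (unbuffered) message passing: send() does not
# return until the receiver has taken the message and acknowledged it.
class BlockingChannel:
    def __init__(self):
        self._slot = queue.Queue(maxsize=1)
        self._ack = threading.Event()

    def send(self, msg):
        self._slot.put(msg)    # hand the message over
        self._ack.wait()       # block until the receiver acknowledges receipt
        self._ack.clear()

    def recv(self):
        msg = self._slot.get() # take the message out
        self._ack.set()        # acknowledge, unblocking the sender
        return msg

received = []
ch = BlockingChannel()
t = threading.Thread(target=lambda: received.append(ch.recv()))
t.start()
ch.send("hello")               # returns only after recv() has run
t.join()
```

If the receiver never calls recv(), the sender stays blocked forever, which is exactly the disadvantage described above.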
In buffered message passing, when a message is sent before the receiver is
ready, the message is buffered somewhere, for example, in a mailbox, until the
receiver takes it out. Thus in buffered message passing, a sender can continue after
a send, even if the receiver is busy with something else. Since the message has
actually been sent, the sender is free to reuse the message buffer immediately. This
scheme reduces the time the sender has to wait. Basically, as soon as the system
has sent the message the sender can continue. However, the sender now has no
guarantee that the message was correctly received. Even if communication is
reliable, the receiver may have crashed before getting the message.
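The mailbox idea can be sketched as follows. Again this is a simulation of the semantics, not a real message-passing library; Mailbox and its methods are made-up names:

```python
import queue

# Sketch of buffered message passing: send() deposits a copy of the
# message in a mailbox and returns at once, so the sender may reuse
# its buffer immediately; the receiver empties the mailbox later.
class Mailbox:
    def __init__(self, capacity=16):
        self._box = queue.Queue(maxsize=capacity)

    def send(self, msg):
        # The mailbox stores its own copy, so the caller's buffer is
        # free again as soon as this returns.
        self._box.put(list(msg))

    def recv(self):
        return self._box.get()

mb = Mailbox()
buf = [1, 2, 3]
mb.send(buf)
buf[0] = 99          # safe: the mailbox holds a copy, not the caller's buffer
out = mb.recv()
print(out)           # -> [1, 2, 3]
```

Note that nothing here tells the sender whether the receiver ever picked the message up, which is the lack of a guarantee mentioned above.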
In nonblocking message passing, the sender is allowed to continue
immediately after making the call. All the library does is tell the operating system
to do the call later, when it has time. As a consequence, the sender is hardly
blocked at all. The disadvantage of this method is that when the sender continues
after the send, it may not reuse the message buffer, as the message may not yet
have been sent. Somehow it has to find out when it can reuse the buffer. One idea
is to have it poll the system to ask. The other is to get an interrupt when the buffer
is available. Neither of these makes the software any simpler.
Below we will briefly discuss a popular message-passing system available on
many multicomputers: MPI.
MPI—Message-Passing Interface
For quite a few years, the most popular communication package for
multicomputers was PVM (Parallel Virtual Machine) (Geist et al., 1994; Sunderam,
1990). In recent years it has been largely replaced by MPI (Message-Passing
Interface). MPI is much richer and more complex than PVM, with many more
library calls, many more options, and many more parameters per call. The original
version of MPI, now called MPI-1, was augmented by MPI-2 in 1997. Below we
will give a very cursory introduction to MPI-1 (which contains all the basics), then
say a little about what was added in MPI-2. For more information about MPI, see
Gropp et al. (1994) and Snir et al. (1996).
MPI-1 does not deal with process creation or management, as PVM does. It is
up to the user to create processes using local system calls. Once they have been
created, they are arranged into static, unchanging process groups. It is with these
groups that MPI works.
MPI is based on four major concepts: communicators, message data types,
communication operations, and virtual topologies. A communicator is a process
group plus a context. A context is a label that identifies something, such as a phase
of execution. When messages are sent and received, the context can be used to
keep unrelated messages from interfering with one another.
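The way a context isolates messages can be sketched as follows. This is a toy model of the concept, not MPI code; Process, Communicator, and the ctx labels are all illustrative names:

```python
import queue
from collections import defaultdict

# Sketch of the communicator idea: a communicator is a process group
# plus a context, and a receive in one context never matches a
# message sent in another.
class Process:
    def __init__(self):
        # One mailbox per context, so unrelated traffic cannot mix.
        self._boxes = defaultdict(queue.Queue)

    def deliver(self, ctx, msg):
        self._boxes[ctx].put(msg)

    def recv(self, ctx):
        return self._boxes[ctx].get()

class Communicator:
    def __init__(self, group, ctx):
        self.group = group    # a static, unchanging process group
        self.ctx = ctx        # a label, e.g. a phase of execution

    def send(self, dest_rank, msg):
        self.group[dest_rank].deliver(self.ctx, msg)

procs = [Process(), Process()]
phase1 = Communicator(procs, ctx="phase1")
phase2 = Communicator(procs, ctx="phase2")
phase2.send(1, "late")            # sent in the second phase
phase1.send(1, "early")           # sent in the first phase
m = procs[1].recv("phase1")       # phase2's message cannot intrude here
print(m)                          # -> early
```

Even though the phase-2 message was sent first, the phase-1 receive matches only phase-1 traffic, which is exactly the interference protection described above.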