FIGURE 6.6: General MPI program structure
A program's message-passing source code need not be modified when porting it to any platform that supports or is compliant with the MPI standard. In addition, MPI offers extensive message-passing functionality: MPI version 1 (MPI-1) defines over 115 message-passing routines, and MPI-2 extends these with other key capabilities, such as dynamic processes, one-sided communication, and parallel I/O. A variety of MPI implementations are available from both vendors and the public domain. These include MPICH/MPICH2 (Argonne National Laboratory), LAM/MPI (Indiana University, USA), and Open MPI (a collaborative project between academic and industrial institutions).
MPI is native to the ANSI C programming language. However, there have been several initiatives to provide MPI in other languages, such as C++ (a capability that MPI-2 itself provides; see [57]) and Java [79]. Figure 6.6 shows a common MPI program structure.
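As a concrete illustration of that structure, the following is a minimal sketch of an MPI program in C (an illustrative example, not code from the original text): it initializes the MPI environment, queries the communicator for the process rank and the total number of processes, performs its work, and then shuts MPI down.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        /* Initialize the MPI environment; no MPI call may precede this. */
        MPI_Init(&argc, &argv);

        /* Query this process's rank and the total number of processes. */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Parallel work and message passing would go here. */
        printf("Process %d of %d\n", rank, size);

        /* Terminate the MPI environment; no MPI call may follow this. */
        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, for example, mpirun -np 4, each of the four processes executes this same program and distinguishes its role by its rank.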
The MPI specification lends itself to virtually any distributed-memory parallel programming model. In addition, MPI is commonly used to implement (behind the scenes) shared-memory models, such as data parallelism, on distributed-memory architectures. MPI programs can run on a range of hardware platforms: distributed-memory machines, shared-memory machines, or even hybrid shared-distributed systems. In MPI, all parallelism is explicit; that is, the programmer is responsible for identifying the parallelism and implementing the parallel algorithm using MPI constructs. In addition, the number of tasks dedicated to running a parallel program is static: new tasks cannot be dynamically spawned during run time (although the dynamic process features of MPI-2 address this limitation).
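To make the notion of explicit parallelism concrete, the sketch below (again an illustrative example, not taken from the original text) shows how the programmer must orchestrate every data exchange by hand: process 0 sends an integer to process 1 through a matched MPI_Send/MPI_Recv pair, and no communication occurs unless the code explicitly requests it.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Process 0 explicitly sends one integer to process 1 (tag 0). */
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Process 1 must post a matching receive, or the exchange never happens. */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("Process 1 received %d from process 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }

This sketch must be run with at least two processes (e.g., mpirun -np 2); with fewer, the send posted by process 0 has no matching receiver.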