MPI_Send(void *buf, int count, MPI_Datatype datatype,
         int dest, int tag, MPI_Comm comm)
The first three arguments constitute the outgoing message, by specifying the
initial memory address of the array, the number of data elements, and the data
type. The dest argument gives the rank of the receiving process relative to the
comm communicator, whereas tag is an integer argument used to label the
particular message.
Correspondingly, the simplest MPI function for receiving a message is
MPI_Recv(void *buf, int count, MPI_Datatype datatype,
         int source, int tag, MPI_Comm comm, MPI_Status *status)
Compared with the MPI_Send function, the extra argument in the MPI_Recv
function is a pointer to a so-called MPI_Status object, which can be used to
check some details of a received message.
Below we will give two examples of MPI programming in the C language.
Compared with Examples 10.6 and 10.7, we will see that a programmer is
responsible for
more details of the parallelization. Communications have to be enforced explicitly
in MPI. Despite the extra programming effort, parallel MPI programs are usually
good with respect to data locality, because the local data structure owned by each
MPI process is small relative to the global data structure. In addition, the user
has full control over work division, which can be of great value for performance
enhancement.
For a beginner, it is important to realize that each MPI process executes the
same MPI program. The distinction between the processes, which are spawned by
some parallel runtime system, is through the unique process rank. The rank typi-
cally determines the work assignment for each process. Moreover, an
if
-test with
respect to a particular process rank can allow the chosen process to perform different
operations than the other processes.
Example 10.8.
The following is the most important part of an MPI implementation
of the composite trapezoidal integration rule (10.6):
#include <mpi.h>
/* code omitted for defining function f */
int main (int nargs, char **args)
{
int P, my_id, n, n_p, i_start_p, i, remainder;
double a, b, h, x, s_p, sum;
MPI_Init (&nargs, &args);
MPI_Comm_size(MPI_COMM_WORLD,&P);
MPI_Comm_rank(MPI_COMM_WORLD,&my_id);
n = 1000000; a = 0.; b = 1.;
remainder = (n-1)%P;