  n_p = (my_id<remainder) ? (n-1)/P+1 : (n-1)/P;
  i_start_p = 1 + my_id*((n-1)/P) + min(my_id,remainder);
  h = (b-a)/n;
  s_p = 0.; x = a + i_start_p*h;
  for (i=0; i<n_p; i++) {    /* local partial sum over this process's n_p points */
    s_p += f(x); x += h;
  }
  /* add up all the local sums; the result becomes available on every process */
  MPI_Allreduce (&s_p,&sum,1,MPI_DOUBLE,MPI_SUM,MPI_COMM_WORLD);
  sum += 0.5*(f(a)+f(b));    /* contributions from the two end points */
  sum *= h;
  MPI_Finalize();
  return 0;
}
The MPI functions MPI_Comm_size and MPI_Comm_rank give, respectively, the
number of processes P and the unique id (between 0 and P-1) of the current
process. The work division, i.e., the computation of n_p and i_start_p, follows the
formulas (10.8) and (10.9). For example, with n-1 = 10 interior points and P = 4
processes, remainder = 2, so processes 0 and 1 each get n_p = 3 points while
processes 2 and 3 each get n_p = 2 points. The for-loop implements the formula
(10.7) for computing s_p. The MPI_Allreduce function is a collective reduction
operation involving all the processes. Its effect is that all the local s_p values are
added up and the final result is stored in the sum variable on all processes. An
alternative is to use the MPI_Reduce function, which runs faster than
MPI_Allreduce, but the final result will only be available on a chosen process.
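To make the difference concrete, here is a minimal sketch of how the end of the
listing above could be rewritten with MPI_Reduce. Choosing process 0 as the root
and including <stdio.h> for the printf call are illustrative assumptions, not part
of the original program:

  /* sketch only: reduce the local sums onto a chosen root (rank 0 here) */
  MPI_Reduce (&s_p,&sum,1,MPI_DOUBLE,MPI_SUM,0,MPI_COMM_WORLD);
  if (my_id==0) {    /* sum is only valid on the root process */
    sum += 0.5*(f(a)+f(b));
    sum *= h;
    printf("computed integral: %g\n", sum);
  }
  MPI_Finalize();
  return 0;

On all other processes the sum variable is left untouched, which is why
MPI_Allreduce is used in the listing when every process needs the result.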
Example 10.9. Let us show below the most important part of an MPI implementation in C of Algorithm 10.1.
#include <mpi.h>
#include <malloc.h>

/*
  code omitted for defining functions I,f,D0,N1
*/

int main (int nargs, char **args)
{
  int P, my_id, n, n_p, i_start_p, i, k, m, remainder;
  int left_neighbor_id, right_neighbor_id;
  double dt, dx, alpha, t, *u_prev, *u;  /* u_prev, u: solution at two time levels */
  MPI_Init (&nargs, &args);
  MPI_Comm_size(MPI_COMM_WORLD,&P);
  MPI_Comm_rank(MPI_COMM_WORLD,&my_id);
  /*
    code omitted for choosing n and dt, computing dx, alpha etc.
  */
  remainder = (n-1)%P;
  n_p = (my_id<remainder) ? (n-1)/P+1 : (n-1)/P;
  i_start_p = my_id*((n-1)/P) + min(my_id,remainder);
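The rest of the listing is not shown here. As a rough, hedged sketch (not the
book's actual code), the time-stepping part of such a 1-D solver typically
exchanges the boundary values of u_prev with the two neighboring processes
before each local update. The layout below assumes that u and u_prev hold
n_p+2 entries, with u[0] and u[n_p+1] acting as ghost values, and it omits the
source term and the physical boundary conditions:

  /* hedged sketch only, under the assumptions stated above */
  left_neighbor_id = (my_id>0) ? my_id-1 : MPI_PROC_NULL;
  right_neighbor_id = (my_id<P-1) ? my_id+1 : MPI_PROC_NULL;
  for (k=0; k<m; k++) {
    /* swap ghost values with the two neighbors; MPI_PROC_NULL makes
       the calls no-ops on the physical boundaries */
    MPI_Sendrecv (&u_prev[1],1,MPI_DOUBLE,left_neighbor_id,0,
                  &u_prev[n_p+1],1,MPI_DOUBLE,right_neighbor_id,0,
                  MPI_COMM_WORLD,MPI_STATUS_IGNORE);
    MPI_Sendrecv (&u_prev[n_p],1,MPI_DOUBLE,right_neighbor_id,0,
                  &u_prev[0],1,MPI_DOUBLE,left_neighbor_id,0,
                  MPI_COMM_WORLD,MPI_STATUS_IGNORE);
    /* explicit finite difference update of the local points */
    for (i=1; i<=n_p; i++)
      u[i] = u_prev[i] + alpha*(u_prev[i-1]-2*u_prev[i]+u_prev[i+1]);
    /* let u become u_prev for the next time step */
    double *tmp = u_prev; u_prev = u; u = tmp;
  }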