1. offset = disp + (rank / 2) * 3 * 8 + (rank % 2) * 4
2. for (i=0; i<N; i++, offset+=8)
3. MPI_File_write_at(fh, offset, buf, 4, MPI_INT, &status);
(a) Write the 2D array using MPI independent I/O with explicit offsets.
1. int gsizes[2] = {5, 8}; /* global array size */
2. int subsizes[2] = {N, 4}; /* local array size */
3. int starts[2]; /* starting file offsets */
4. starts[0] = (rank / 2) * (gsizes[0] / 2 + 1);
5. starts[1] = (rank % 2) * (gsizes[1] / 2);
6. MPI_Type_create_subarray(2, gsizes, subsizes, starts,
7. MPI_ORDER_C, MPI_INT, &ftype);
8. MPI_Type_commit(&ftype);
(b) Create an MPI derived data type that maps local subarray to global array.
1. MPI_File_set_view(fh, disp, MPI_INT, ftype, "native", info);
2. MPI_File_write(fh, buf, N*4, MPI_INT, &status);
(c) Write the 2D array using MPI independent I/O with file view.
1. MPI_File_set_view(fh, disp, MPI_INT, ftype, "native", info);
2. MPI_File_write_all(fh, buf, N*4, MPI_INT, &status);
(d) Write the 2D array using MPI collective I/O.
FIGURE 13.4: MPI program fragments to write the 2D array in parallel as illustrated in Figure 13.3.
using independent functions allows MPI processes to make an unequal number of calls. Each call to MPI_File_write_at at line 3 is equivalent to a call to POSIX lseek with the given file offset, followed by a write of the same request amount. From an MPI-IO perspective, the program expresses the following I/O intent. First, the loop at line 2 asks the MPI-IO library to handle the requests one after another, in that exact order. Second, the use of independent functions tells MPI-IO that requests from one process can arrive independently of those from other processes, and expects MPI-IO to protect the data consistency of each individual request. Obviously, this interpretation is not exactly the same as the user's intent, in which the order does not matter and consistency applies to the whole 2D array. To prevent such misunderstanding, MPI-IO provides a feature named the file view to let users better convey their I/O intent.
13.3.2 MPI File View
An MPI file view defines the portion of a file that is "visible" to a process. A process can only read and write the data that is located in its file view. When a file is first opened, the entire file is visible to the process. A process's file view can be changed with the function MPI_File_set_view. When using individual file pointers, a process's file view can be different from others'. When using the shared file pointer, all processes must define the same view. A file view can
 