Using explicit offsets is also a thread-safe way of accessing the file, since there
are no separate seek and read/write functions.
MPI-IO also supports a third way that involves using a shared file pointer.
The shared file pointer is a common file pointer shared by all processes in
the communicator passed to the file open function. This file pointer can be
moved by calling MPI_File_seek_shared. The corresponding read/write functions
are MPI_File_read_shared and MPI_File_write_shared. This method
is useful for writing log files, for example. However, maintaining a shared file
pointer involves some overhead for the implementation. Hence, for performance
reasons, the use of these functions is generally discouraged.
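As an illustration, the following minimal sketch has every process append a short record to a common log file through the shared file pointer. The file name "app.log" and the message format are illustrative assumptions, not taken from the text above.

/* Each process appends one log line via the shared file pointer. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    char msg[64];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_File_open(MPI_COMM_WORLD, "app.log",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    snprintf(msg, sizeof(msg), "rank %d: step completed\n", rank);
    /* Each call advances the shared file pointer, so records from different
     * processes do not overwrite one another; their order in the file is
     * not deterministic. */
    MPI_File_write_shared(fh, msg, (int)strlen(msg), MPI_CHAR,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}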
13.2.2 Blocking and Nonblocking I/O
MPI-IO supports both blocking and nonblocking I/O functions. The
read/write functions mentioned above are all blocking functions, which block until
the specified operation is completed. Each of these functions also has a nonblocking
variant: MPI_File_iread, MPI_File_iwrite, MPI_File_iread_at,
MPI_File_iwrite_at, MPI_File_iread_shared, and MPI_File_iwrite_shared.
They return an MPI_Request object immediately after the call, similar to
MPI nonblocking communication functions. The user must call MPI_Test,
MPI_Wait, or their variants to test or wait for completion of these operations.
The nonblocking I/O functions offer the potential for overlapping I/O with
computation or communication in the program.
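A minimal sketch of this overlap pattern, assuming each process writes a private block of doubles at a rank-dependent explicit offset: the file name "out.dat", the buffer size, and the offset scheme are illustrative, not prescribed by MPI.

#include <mpi.h>

#define N 1024

int main(int argc, char **argv)
{
    MPI_File fh;
    MPI_Request req;
    MPI_Offset offset;
    double buf[N];
    int rank, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (i = 0; i < N; i++) buf[i] = (double)rank;

    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Start the write and return immediately. */
    offset = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_iwrite_at(fh, offset, buf, N, MPI_DOUBLE, &req);

    /* ... computation or communication that does not touch buf ... */

    /* Block until the I/O completes before reusing buf. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}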
13.3 File Access with User Intent
Besides the POSIX-equivalent basic I/O functions, MPI-IO contains
additional functions that can better convey the user's I/O intent. In our
terminology, a user's I/O intent refers to the user's expectation of how the I/O
operation should be carried out, and a user's I/O requirement refers to the end
result. Consider an example that describes and distinguishes between a user's
I/O intent and requirement. Figure 13.3 shows a 5 × 8 two-dimensional integer
array that is partitioned among four processes in a block-block pattern. Each
of process ranks 0 and 1 is assigned a subarray of size 3 × 4. Each of process
ranks 2 and 3 is assigned a subarray of size 2 × 4. (The use of a small array and
a small number of processes here is only for explanation purposes. In practice,
MPI-IO is used for large datasets and large system sizes.) The 2D array can
be considered a representation of the problem domain of a parallel application,
and the subarrays represent the sub-domains distributed among the MPI
processes. It is assumed that the user's intent is to write the entire 2D array
to a file in parallel and that the data layout in the file follows the array's canonical
order. Such I/O operations often occur during an application's checkpoint.