TABLE 9.3 Sample H5Part code to write multiple fields from
a time-varying simulation to a single file

if (serial)
    handle = H5PartOpenFile(filename, mode);
else
    handle = H5PartOpenFileParallel(filename, mode, mpi_comm);
H5PartSetNumParticles(handle, num_particles);
loop(step = 1, 2)
    // compute data
    H5PartSetStep(handle, step);
    H5PartWriteDataFloat64(handle, "px", data_px);
    H5PartWriteDataFloat64(handle, "py", data_py);
    H5PartWriteDataFloat64(handle, "pz", data_pz);
    H5PartWriteDataFloat64(handle, "x", data_x);
    H5PartWriteDataFloat64(handle, "y", data_y);
    H5PartWriteDataFloat64(handle, "z", data_z);
H5PartCloseFile(handle);
The calls offered by the H5Part API are completely independent of the standard for organizing data within the file. The file format supports the storage of multiple timesteps of datasets that contain multiple fields.
The data model for particle data allows storing multiple timesteps, where each timestep can contain several datasets of the same length. Typical particle data consists of the three-dimensional Cartesian positions of particles (x, y, z) as well as the corresponding three-dimensional momenta (px, py, pz). These six variables are stored as six HDF5 datasets. The type of a dataset can be either integer or real. H5Part also allows storing attribute information for the file and for individual timesteps.
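The attribute capability mentioned above can be sketched in the same pseudocode style as Table 9.3. The attribute routine names below merely follow the naming pattern of the H5PartWriteData* calls and are illustrative assumptions, not verbatim API:

    handle = H5PartOpenFile(filename, mode);
    // annotate the file as a whole (routine name illustrative)
    H5PartWriteFileAttribString(handle, "author", "simulation-team");
    // annotate a single timestep (routine name illustrative)
    H5PartSetStep(handle, 0);
    H5PartWriteStepAttribString(handle, "units", "SI");
    H5PartCloseFile(handle);

As with the datasets themselves, the library decides where in the HDF5 file these attributes are attached, so the application never manipulates HDF5 objects directly.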
Table 9.3 presents sample H5Part code for storing particle data with two
timesteps. The resulting HDF5 file with two timesteps is shown in Table 9.4.
These examples show the simplicity of an application that uses the H5Part
API to write or read H5Part files. One point is that there is only a one-line
difference between the serial and the parallel code. Another is that the H5Part
application is much simpler than an HDF5-only counterpart: this example
code need not set up data groups inside HDF5; that task is
performed inside the H5Part library.
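The read path is symmetric to the write path of Table 9.3. A minimal sketch in the same pseudocode style follows; the H5PartGet* and H5PartRead* names mirror the write calls shown above and should be taken as assumptions rather than exact signatures:

    handle = H5PartOpenFile(filename, read_mode);
    nsteps = H5PartGetNumSteps(handle);      // assumed query routine
    loop(step = 0, nsteps - 1)
        H5PartSetStep(handle, step);
        n = H5PartGetNumParticles(handle);   // assumed query routine
        // allocate data_x of length n, then read one field back
        H5PartReadDataFloat64(handle, "x", data_x);
    H5PartCloseFile(handle);

The same one-line substitution of H5PartOpenFileParallel would make this reader parallel, matching the pattern of the writer.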
9.4.1.3 Parallel I/O
A naive approach to writing data from a parallel program is to write one
file per processor. Although this approach is simple to implement and very
efficient on most cluster file systems, it leads to file management difficulties
when the data needs to be analyzed. One must either recombine these sep-
arate files into a single file or create unwieldy user interfaces that allow a