21.1 Canonical Use Case: ALE3D Restart and VisIt Visualization Workflow
ALE3D [7] is a multi-physics application utilizing arbitrary Lagrangian-Eulerian (ALE) techniques. It models fluid and elastic-plastic response on unstructured grids of hexahedra. ALE3D integrates a variety of multi-physics capabilities through an operator-splitting approach, including heat conduction, chemical kinetics with species diffusion, incompressible flow, and a wide range of material and chemistry models. Fully featured ALE3D simulations have been run with good scaling behavior on as many as 128,000 MPI tasks [5].
ALE3D uses Silo to produce restart files, plot files, time-history files, and mesh-to-mesh linking files. Restart files are used to restart ALE3D after execution has been terminated. Plot files are used in visualization and analysis. Time-history files capture high time-resolution data at specific user-defined points. Mesh-to-mesh linking files are produced periodically when workflows require ALE3D solution data to be integrated and exchanged with applications modeling other physical phenomena.
For restart and plot files, the key data objects ALE3D writes to the files are the main mesh, the material composition of the main mesh, and key physical variables (e.g., fields) defined on the main mesh such as pressure, velocity, mass, flux, etc.
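To make these objects concrete, the following is a minimal, illustrative sketch of writing one domain's mesh and a zone-centered field with Silo's C API. The file name, object names, and the single unit-cube hexahedron are invented for illustration and are not ALE3D's actual output layout.

#include <silo.h>

int main(void)
{
    /* One unit-cube hexahedron: 8 nodes, 1 zone. */
    double x[] = {0,1,1,0, 0,1,1,0};
    double y[] = {0,0,1,1, 0,0,1,1};
    double z[] = {0,0,0,0, 1,1,1,1};
    void  *coords[]     = {x, y, z};
    char  *coordnames[] = {"x", "y", "z"};

    int nodelist[]  = {0,1,2,3,4,5,6,7};  /* the hex's 8 node indices */
    int shapetype[] = {DB_ZONETYPE_HEX};
    int shapesize[] = {8};                /* nodes per hex            */
    int shapecnt[]  = {1};                /* one hex of this shape    */

    double pressure[] = {101.325};        /* one zone-centered value  */

    /* Hypothetical per-domain file name. */
    DBfile *dbfile = DBCreate("dom0000.silo", DB_CLOBBER, DB_LOCAL,
                              "one domain of a decomposed mesh", DB_HDF5);

    /* Connectivity (zonelist), then the UCD mesh that references it. */
    DBPutZonelist2(dbfile, "zonelist", 1, 3, nodelist, 8, 0, 0, 0,
                   shapetype, shapesize, shapecnt, 1, NULL);
    DBPutUcdmesh(dbfile, "mesh", 3, coordnames, coords, 8, 1,
                 "zonelist", NULL, DB_DOUBLE, NULL);

    /* A physical variable defined on the mesh, zone-centered. */
    DBPutUcdvar1(dbfile, "pressure", "mesh", pressure, 1, NULL, 0,
                 DB_DOUBLE, DB_ZONECENT, NULL);

    DBClose(dbfile);
    return 0;
}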
The main mesh is an unstructured cell data (UCD) mesh: an arbitrary arrangement of connected 3D hexahedral mesh elements (see Figure 21.1). For computation in a distributed, scalable parallel setting, the main mesh is decomposed into pieces called domains, ranging in size from 2,500 to 25,000 mesh elements, with roughly 10,000 being typical. To reduce communication in parallel computation, neighboring domains typically contain copies of each other's elements along their shared boundaries. In Silo parlance, these copies are called ghost elements.
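In the Silo API, ghost elements are identified when a domain's zonelist is written: zones are ordered so that ghost zones occupy the beginning and/or end of the list, and the lo_offset and hi_offset arguments of DBPutZonelist2 give the zero-based indices of the first and last real (non-ghost) zones. A fragment with invented counts, continuing the style of the sketch above:

/* Suppose this domain's zonelist holds 40 ghost zones, then 1,000
   real zones, then 60 more ghost zones. */
int nzones    = 40 + 1000 + 60;
int lo_offset = 40;              /* index of first real zone */
int hi_offset = 40 + 1000 - 1;   /* index of last real zone  */
DBPutZonelist2(dbfile, "zonelist", nzones, 3, nodelist, lnodelist, 0,
               lo_offset, hi_offset, shapetype, shapesize, shapecnt,
               1, NULL);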
The total number of elements in the main mesh is a commonly used metric for the scale, or size, of the problem being simulated. As problem size increases, typically the number of domains increases while the size of each domain (i.e., the per-domain element count) remains roughly the same; a 100-million-element problem at 10,000 elements per domain, for example, comprises 10,000 domains. When I/O performance and scaling are studied, the focus is on those Silo objects the application produces whose size varies with problem size.
Once decomposed, the main mesh is forevermore stored and exchanged via Silo in its decomposed state. All tools and applications used in Silo-enabled HPC workflows are designed to interact with the data in this decomposed state. In fact, all applications are designed so that any single MPI task can manage multiple domains simultaneously. For example, given a mesh decomposed into 60 domains, both ALE3D and VisIt can run one domain per task on 60 MPI tasks, 2:1 on 30 tasks, 3:1 on 20 tasks, or even 3:1 on 12 tasks together with 4:1 on 6 tasks, as the sketch below illustrates.
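To make the ratios concrete, here is a small, self-contained sketch (our own, not ALE3D or VisIt code) of dealing 60 domains out as evenly as possible over a given number of MPI tasks:

#include <stdio.h>

/* How many of `ndomains` domains task `rank` (0-based) manages when
   domains are dealt out as evenly as possible over `ntasks`. */
static int domains_on_task(int ndomains, int ntasks, int rank)
{
    int base  = ndomains / ntasks;   /* everyone gets at least this many */
    int extra = ndomains % ntasks;   /* first `extra` ranks get one more */
    return base + (rank < extra ? 1 : 0);
}

int main(void)
{
    int ndomains     = 60;
    int task_counts[] = {60, 30, 20, 18};

    for (int i = 0; i < 4; i++) {
        int ntasks = task_counts[i];
        printf("%2d tasks:", ntasks);
        for (int rank = 0; rank < ntasks; rank++)
            printf(" %d", domains_on_task(ndomains, ntasks, rank));
        printf("\n");
    }
    return 0;
}

With 18 tasks, for instance, the first six ranks manage four domains each and the remaining twelve manage three, which is the mixed 4:1 and 3:1 assignment described above.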