The performance of GLEAN was evaluated for two standard FLASH out-
put streams (checkpoint and plot data) on the Intrepid BG/P system. Work-
ing closely with the FLASH developers, GLEAN was designed to have an
API to capture the data semantics of FLASH including the AMR hierar-
chy. To interface with FLASH, its pNetCDF calls are translated into the appropriate
GLEAN API calls. GLEAN is enabled by setting an environment variable that is
passed to the FLASH executable at job launch; thus, GLEAN integrates with FLASH
without any modifications to the FLASH simulation code. On the staging nodes
(Eureka), GLEAN can be configured to write the data out asynchronously, either
with pNetCDF or transformed on-the-fly to HDF5.
This is possible because GLEAN captures the data semantics of FLASH.
By incorporating asynchronous data staging, a multifold improvement was
achieved in the I/O performance for FLASH at 32,000 cores on the BG/P
system [10].
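The mechanism described above, a launch-time switch that redirects an application's existing I/O calls without touching its source, can be illustrated with a small Python sketch. This is a hypothetical analogy, not GLEAN's actual implementation: the function names (`pnetcdf_write`, `glean_stage`) and the `USE_GLEAN` variable are invented for illustration.

```python
import os

def pnetcdf_write(variable, data):
    """Stand-in for a direct pNetCDF write (hypothetical)."""
    return f"pnetcdf wrote {variable}: {len(data)} values"

def glean_stage(variable, data):
    """Stand-in for shipping the data to staging nodes instead (hypothetical)."""
    return f"glean staged {variable}: {len(data)} values"

def get_writer():
    # An environment variable set at job launch selects the I/O path,
    # so the simulation code itself never changes.
    if os.environ.get("USE_GLEAN") == "1":
        return glean_stage
    return pnetcdf_write

os.environ["USE_GLEAN"] = "1"
writer = get_writer()
print(writer("density", [0.1, 0.2, 0.3]))
```

The simulation always calls the same writer; which backend actually runs is decided entirely by the environment at launch, mirroring how GLEAN slots in beneath FLASH's pNetCDF interface.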
18.3.3 Co-Visualization for PHASTA CFD Simulation
The PHASTA code [9] performs computational fluid dynamics (CFD) us-
ing a finite element discretization with a large-eddy-simulation-based tur-
bulence model. PHASTA uses an adaptive, unstructured tetrahedral grid. The
number and locality of elements change frequently, based on solution char-
acteristics and load balancing. Grid and field structures are stored in dynam-
ically allocated memory due to the frequent updates.
GLEAN's success is demonstrated by its ability to enable simulation-time
analysis and visualization of PHASTA running on 128,000 cores of Intrepid
(32 racks), with ParaView [7] running on 80 Eureka data-staging nodes. To
achieve this, GLEAN developers worked closely with PHASTA
developers to capture the unstructured tetrahedral mesh data model. Ad-
ditionally, to facilitate visualization of the PHASTA data using ParaView,
GLEAN supports ParaView's visualization meshes. On Eureka, GLEAN was
able to transform PHASTA's staged data into ParaView's mesh format on-
the-fly. ParaView is a widely used visualization toolkit on leadership-class
systems, so simulations can now use ParaView together with GLEAN to
visualize data at runtime.
Two additional scaling studies for co-visualization of the PHASTA sim-
ulation with ParaView [10] include a 416-million-element and a 3.32-billion-
element case study. For the 416-million-element case of PHASTA on 32 In-
trepid racks, with output data totaling about 20 GiB, GLEAN transferred
this data in about 0.6 s, achieving around 34 GiB/s. For the 3.32-billion-element
case (about 160 GB) using 32 racks and 80 Eureka nodes, GLEAN achieved around
41 GiB/s. GLEAN enabled PHASTA developers to visualize a live simulation
on 128,000 cores.
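The reported rate for the 416-million-element case follows directly from the figures in the text, as a quick back-of-the-envelope check shows:

```python
# Sanity check of the reported transfer rate for the 416-million-element
# PHASTA case: about 20 GiB moved in about 0.6 s (figures from the text).
data_gib = 20.0   # staged data volume, GiB
seconds = 0.6     # transfer time, s
rate = data_gib / seconds
print(f"{rate:.1f} GiB/s")  # ~33.3 GiB/s, consistent with "around 34 GiB/s"
```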