transform. They transform the data from the spatial domain into other domains in which the important information can be accessed more easily.
However, merely transferring large-scale data over the network for post-processing is already very time-consuming. A more efficient approach is to process the raw data directly in the main memory of the supercomputer on which the simulation runs. With this so-called in-situ processing, no data movement or storage is required between the simulation and the filtering stage of the visualization pipeline [28]. Extracted, and usually much more compact, representations can then be generated almost immediately and transmitted to the visualization environment.
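The following minimal sketch (in Python; the solver, the filter, and the streaming call are hypothetical stand-ins, not an actual framework API) illustrates this coupling: the raw field never leaves the simulation host, and only a compact extract is streamed to the frontend:

import numpy as np

def simulation_step(field):
    """Stand-in for one solver time step (updates the field in place)."""
    field += 0.01 * np.roll(field, 1, axis=0) - 0.01 * field

def extract_compact_representation(field):
    """In-situ filter: reduce the raw field to a small, transferable
    representation (here simply per-slab means instead of the full volume)."""
    return field.mean(axis=(1, 2))

def send_to_visualization(summary):
    """Stand-in for streaming the compact result to the visualization."""
    print(f"streamed {summary.nbytes} bytes instead of the full field")

field = np.random.rand(64, 64, 64)   # raw data stays on the simulation host
for step in range(3):
    simulation_step(field)
    summary = extract_compact_representation(field)  # reads the field in place
    send_to_visualization(summary)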
Compared to classical parallel post-processing approaches, sharing the simulation host introduces additional challenges for in-situ processing [28]:
Domain Decomposition: The decomposition is optimized for best simulation performance, but it is not necessarily optimal for visualization purposes. Repartitioning is not an option because of the high communication and data transmission costs. This fixed partitioning therefore strongly influences the scalability of the visualization algorithms.
Common Memory Usage: In order to avoid data duplication, the simulation and the visualization have to share the same data structures in memory. If the simulation uses most of the available memory, the visualization may only allocate a small extra amount for its own internal data structures (see the sketch following this list).
Post-Processing Time: The post-processing must not take too much time, so that the simulation itself is not slowed down.
Software Architecture: Since the visualization must work directly on the data
structures of the simulation, a common interface needs to be provided.
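The memory constraint can be illustrated with a minimal sketch (assuming NumPy-style arrays; all names are hypothetical): the visualization operates on views of the solver's arrays rather than on copies, and allocates only small structures of its own:

import numpy as np

sim_field = np.zeros((128, 128, 128), dtype=np.float32)  # owned by the solver

# A strided view shares the solver's buffer: no duplication of the raw data.
vis_view = sim_field[::2, ::2, ::2]
assert vis_view.base is sim_field    # same memory, nothing copied

# Only the visualization's own, much smaller results are newly allocated.
vmin, vmax = float(vis_view.min()), float(vis_view.max())
print(f"shared field: {sim_field.nbytes / 2**20:.1f} MiB, "
      f"extracted range: ({vmin}, {vmax})")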
A successful implementation has been presented by Moelder et al. [35]. The authors demonstrate an in-situ method for feature tracking that uses a low-cost, incremental prediction-and-morphing approach to track a turbulent vortex flow. In-situ rendering is utilized by Tu et al. [40] to visualize a tera-scale earthquake simulation: ray casting is performed independently on each parallel processor before the generated images are combined and streamed to the visualization frontend. In [42], Wagner et al. discuss how explorative analysis with freely movable cutplanes in interactive virtual environments can be supported by in-situ online monitoring (see Fig. 31.5).
The latter approach requires an update at least every 100 ms to interactively provide the required scalar field data on the cutplane. If an analytical cutplane algorithm is used to determine the sample points, all cells intersecting the cutplane have to be found. In this case, the load on each processing element depends heavily on the position of the cutplane and on the distribution of the cells (see Fig. 31.6, left).
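The cell search itself is straightforward to state: a cell intersects the plane n·x + d = 0 exactly if the signed distances of its corner vertices to the plane do not all have the same sign. A minimal sketch (assuming a hexahedral mesh given as vertex and cell-index arrays; all names are hypothetical):

import numpy as np

def cells_intersecting_plane(vertices, cells, normal, d):
    """vertices: (V, 3) points; cells: (C, 8) vertex indices of hexahedra;
    plane: n·x + d = 0. Returns a boolean mask over the cells."""
    dist = vertices @ normal + d      # signed distance of every vertex
    per_cell = dist[cells]            # (C, 8) corner distances per cell
    return (per_cell.min(axis=1) <= 0) & (per_cell.max(axis=1) >= 0)

# Tiny regular grid of 2 x 2 x 2 unit cells for demonstration.
g = np.arange(3, dtype=float)
verts = np.array([(x, y, z) for z in g for y in g for x in g])
def vid(x, y, z): return int(x + 3 * y + 9 * z)
cells = np.array([[vid(x + i, y + j, z + k)
                   for k in (0, 1) for j in (0, 1) for i in (0, 1)]
                  for z in range(2) for y in range(2) for x in range(2)])

mask = cells_intersecting_plane(verts, cells, np.array([1.0, 0.0, 0.0]), -0.5)
print(mask)   # True only for cells whose x-extent covers the plane x = 0.5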
To avoid a long and unpredictable extraction runtime, progressive sampling schemes can be applied instead. First, the needed information is sampled at only a few positions. By adding further sample points, the cutplane visualization is progressively refined, and the data streaming is stopped as soon as the interactivity threshold is reached (see Fig. 31.6, right).
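A minimal sketch of such a progressive scheme (with a stand-in scalar field; the 100 ms budget corresponds to the interactivity threshold mentioned above, and all names are hypothetical) refines a coarse sample grid on the cutplane until the time budget is exhausted:

import time
import numpy as np

def sample_field(points):
    """Stand-in for evaluating the scalar field at cutplane positions."""
    return np.sin(points[:, 0]) * np.cos(points[:, 1])

def progressive_cutplane(resolution=256, budget_s=0.1):
    """Coarse-to-fine sampling: halve the stride until the time budget
    is spent or full resolution is reached."""
    u = np.linspace(0.0, 1.0, resolution)
    image = np.full((resolution, resolution), np.nan)
    start, stride, reached = time.perf_counter(), resolution // 8, None
    while stride >= 1:
        ui, vi = np.meshgrid(np.arange(0, resolution, stride),
                             np.arange(0, resolution, stride), indexing="ij")
        pts = np.stack([u[ui].ravel(), u[vi].ravel()], axis=1)
        image[ui, vi] = sample_field(pts).reshape(ui.shape)  # refine in place
        reached = stride
        if time.perf_counter() - start > budget_s:  # interactivity threshold
            break                                   # hit: stop streaming
        stride //= 2
    return image, reached

image, stride = progressive_cutplane()
print(f"refined down to sample stride {stride} within the budget")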