(Figure: parallel Source → Heavy Work branches feed a common Final Steps stage and a Sink)
Fig. 31.4 Data parallelization speeds up the heavy workload of the post-processing pipeline. Each processor processes its own independent part of the source data, and the processed data is collected in a final step for further calculations
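A minimal sketch of the pattern in Fig. 31.4, assuming Python with the standard multiprocessing module; the heavy_work function and the four-way split are illustrative placeholders, not part of any particular post-processing system:

    from multiprocessing import Pool

    import numpy as np

    def heavy_work(chunk):
        # Placeholder for the expensive per-chunk post-processing step.
        return chunk ** 2

    if __name__ == "__main__":
        source = np.arange(1_000_000, dtype=np.float64)  # source data
        chunks = np.array_split(source, 4)               # one part per processor
        with Pool(processes=4) as pool:
            parts = pool.map(heavy_work, chunks)         # independent heavy work
        result = np.concatenate(parts)                   # final collection step

Because the chunks are independent, the heavy work scales with the number of processors; only the final collection step is serial.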
Today, many turnkey applications exist in this area. The most popular systems for scientific visualization of large-scale datasets are the open-source systems ParaView [25] and VisIt [6] as well as commercial tools such as EnSight, TecPlot, or FieldView. While these applications are geared toward desktop user interaction, another framework, Viracocha, focuses on the exploratory analysis of unsteady simulation datasets within immersive virtual environments [14]. Special interaction metaphors are implemented to interact with the virtual environment in a more natural way.
31.2.2 In-situ Processing
Typically, the data is stored on a file server after the simulation has been carried out. This allows an engineer to analyze the results whenever time permits, or to explore further findings later. Sometimes, however, one is already interested in intermediate results while a simulation is still running. In such cases, instead of storing the data, it can be transferred to a dedicated post-processing system for online monitoring directly after each computation step is completed. While this so-called co-processing is performed on the second computer cluster, the simulation can continue.
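A minimal sketch of this hand-off, again in Python; a multiprocessing queue stands in for the network transfer to the dedicated post-processing cluster, and simulate_step and monitor are illustrative placeholders:

    from multiprocessing import Process, Queue

    import numpy as np

    def simulate_step(step):
        # Placeholder for one computation step of the solver.
        return np.random.rand(1000)

    def monitor(queue):
        # Runs on the dedicated post-processing system (here: a second process).
        while True:
            step, field = queue.get()
            if field is None:
                break                  # shutdown signal received
            print(f"step {step}: mean = {field.mean():.4f}")  # online monitoring

    if __name__ == "__main__":
        queue = Queue()
        post = Process(target=monitor, args=(queue,))
        post.start()
        for step in range(10):
            field = simulate_step(step)
            queue.put((step, field))   # hand off; the simulation continues at once
        queue.put((-1, None))          # tell the monitor to shut down
        post.join()

The key design point is that the simulation never waits for the analysis: it hands off each completed step and immediately proceeds with the next one.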
The main purpose of co-processing is not only online monitoring but also reducing and reorganizing the huge amount of raw data to reasonable sizes and formats for persistent storage on file servers. Sub-sampling raw data is the simplest way to reduce simulation data. Further shrinking is possible by quantization or compression. Quantization compresses value ranges to single values and can be applied in different ways. To achieve high scalability, local and computationally cheap quantizers, such as the Jayant quantizer [21], should be preferred [13]. Complex global quantizations like the global Lloyd-Max method [15] or codebook-based methods like the Linde-Buzo-Gray algorithm [27] can be too computationally intense. In terms of cost and performance, however, transform-based compression is a better choice [28]. Popular encoding algorithms are the discrete cosine transform and the wavelet transform.
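A minimal sketch of these reductions in Python with NumPy and SciPy; the stride, the 8-bit depth, and the coefficient threshold are illustrative parameters, and the uniform scalar quantizer here merely stands in for the adaptive Jayant scheme, which adjusts its step size on the fly:

    import numpy as np
    from scipy.fft import dctn, idctn

    def subsample(field, stride=2):
        # Keep every stride-th sample along each axis.
        return field[::stride, ::stride]

    def quantize(field, bits=8):
        # Map the value range onto 2**bits levels (uniform scalar quantization).
        lo, hi = field.min(), field.max()
        levels = 2 ** bits - 1
        codes = np.round((field - lo) / (hi - lo) * levels).astype(np.uint8)
        return codes, lo, hi           # store codes plus range for decoding

    def dequantize(codes, lo, hi, bits=8):
        # Reconstruct approximate values from the stored codes.
        return codes / (2 ** bits - 1) * (hi - lo) + lo

    raw = np.random.rand(512, 512)     # one timestep of raw simulation data
    small = subsample(raw)             # 4x fewer samples
    codes, lo, hi = quantize(small)    # 8x fewer bits per value (float64 -> uint8)
    approx = dequantize(codes, lo, hi)

    # Transform-based compression sketch: drop small DCT coefficients.
    coeffs = dctn(small, norm="ortho")
    coeffs[np.abs(coeffs) < 0.1 * np.abs(coeffs).max()] = 0.0
    restored = idctn(coeffs, norm="ortho")

All of these operations are local and cheap per value or per block, which is what makes them attractive for in-situ use; the expensive global schemes cited above would stall the co-processing pipeline.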
 