Fig. 31.3 A classical visualization pipeline transforming raw data to images (stages: Simulation → Filtering → Mapping → Rendering → Display, producing Raw Data → Derived Data → Abstract Visual Objects → Image Data, with User Interaction feeding back into the stages)
applications (e.g. online monitoring, in-situ processing, computational steering).
However, the efficiency of a given task depends not only on the hardware
components used but also, in particular, on the algorithms, which have to be
tailored to the individual environments and requirements at hand. This is
examined in more detail in the following sections.
31.2.1 Parallel Post-processing
Post-processing for scientific visualization is mostly based on data flow networks
describing how to process raw data step by step [ 16 ]. Figure 31.3 shows a simplified
visualization pipeline along with the data flow between the pipeline modules. In
interactive applications, each stage of the pipeline can be manipulated by a user,
which triggers a re-execution of all subsequent stages.
In scientific visualization applications, raw data is often multi-variate and stored
in structured file formats. Filtering transforms this raw data, e.g. by extracting
sub-volumes, filling gaps, or smoothing data values. The derived data is then
mapped to abstract visualization primitives with extent in time and space. The
final transformation is the rendering step, which produces displayable 2D images
from the abstract visualization primitives.
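The stage sequence above can be sketched as a chain of functions. This is a minimal illustration only: the stage names follow Fig. 31.3, but the concrete operations (a 3-point moving average as the filter, bar heights as the visual primitives, ASCII bars as the "image") are assumptions chosen for brevity.

```python
def filtering(raw):
    """Derive data from raw values, e.g. smooth them with a 3-point moving mean."""
    return [sum(raw[max(0, i - 1):i + 2]) / len(raw[max(0, i - 1):i + 2])
            for i in range(len(raw))]

def mapping(derived):
    """Map derived values to abstract visual primitives (here: bar heights 0..8)."""
    lo, hi = min(derived), max(derived)
    span = (hi - lo) or 1.0
    return [round(8 * (v - lo) / span) for v in derived]

def rendering(primitives):
    """Render primitives to a displayable 'image' (here: one ASCII bar per value)."""
    return ["#" * h for h in primitives]

# Traversing the pipeline: raw data -> derived data -> primitives -> image.
raw_data = [1.0, 4.0, 2.0, 8.0, 5.0]
image = rendering(mapping(filtering(raw_data)))
```

In an interactive application, changing a parameter of one stage (e.g. the smoothing window in `filtering`) would re-run that call and every call after it, mirroring the re-execution behavior described above.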
When traversing this pipeline, the heavy workload occurs mainly in the filtering
and mapping stages. Thereafter, the large-scale data has been reduced to a size
manageable even for smaller computer systems. To speed up the processing time
of these first stages, parallelization strategies can be implemented on HPC clusters.
For large-scale simulations, however, task parallelism and pipeline parallelism are
not very promising approaches. In contrast, data parallelism, which assigns
partitions of the dataset to the available processing elements, has proven very
successful. If the domain is decomposed, each processor can load and process its
parts concurrently. After all processors have computed their partial results, the
extracted features have to be joined before they are processed sequentially by the
remaining pipeline stages (see Fig. 31.4 ). To enhance interactivity in virtual
environments, partial data already extracted on the back end may be streamed to
the rendering stage as soon as possible. Progressive multi-resolution data formats
are particularly suitable for such data streaming approaches: they allow early
previews of the overall result, which is steadily refined until all remaining
feature details have arrived [ 14 ].
 