lation of vector fields in unprecedented detail and results in extremely large datasets.
Here, data size itself poses an additional challenge to integration-based visualization.
Due to the highly non-linear, data-dependent nature of the integral curve approxima-
tion process, traditional approaches for distributing computation and data to leverage
available computing resources optimally are not applicable in this situation. Hence,
approaches developed for the analysis of large scalar field data, which are built on
decomposing a dataset and treating each part independently, do not generalize
directly to the vector field case. The choice of parallelization approach is further
complicated by the fact that the computational characteristics of integral curve
computation depend strongly on a variety of factors, such as vector field complexity,
data set size, seed set size, and integration length.
26.2.3 Parallel Integral Curve Computation
Computing many integral curves simultaneously is an embarrassingly parallel
problem, since the curves are mutually independent. In this respect, parallelization
is achieved in a straightforward manner by decomposing the overall set of inte-
gral curves. However, issues arise when data size grows. In the next section, we
will discuss the problems of computing integral curves on large data and aim at an
approximate characterization of integration-type problems.
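The seed-set decomposition described above can be sketched as follows. The rotational test field, the fourth-order Runge-Kutta integrator, the thread-based worker pool, and all numeric choices are illustrative assumptions, not any particular system's implementation:

```python
# Minimal sketch of seed-set decomposition for parallel integral curve
# computation. Field, integrator, and parameters are illustrative.
from concurrent.futures import ThreadPoolExecutor

def velocity(p):
    # Synthetic steady 2D field: rigid rotation about the origin,
    # whose exact integral curves are circles.
    x, y = p
    return (-y, x)

def rk4_step(p, h):
    # One fourth-order Runge-Kutta step along the field.
    k1 = velocity(p)
    k2 = velocity((p[0] + 0.5 * h * k1[0], p[1] + 0.5 * h * k1[1]))
    k3 = velocity((p[0] + 0.5 * h * k2[0], p[1] + 0.5 * h * k2[1]))
    k4 = velocity((p[0] + h * k3[0], p[1] + h * k3[1]))
    return (p[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            p[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def trace(seed, h=0.01, steps=500):
    # Each curve is integrated independently of all others.
    curve = [seed]
    for _ in range(steps):
        curve.append(rk4_step(curve[-1], h))
    return curve

# Decompose the problem over the seed set and map curves to workers.
seeds = [(r, 0.0) for r in (0.5, 1.0, 1.5, 2.0)]
with ThreadPoolExecutor() as pool:
    curves = list(pool.map(trace, seeds))
```

Because the curves never interact, the same map structure carries over unchanged to processes or cluster nodes, as long as every worker can access the vector field data its curves traverse.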
With respect to data size, early work on parallel integral curve computation
concentrated on out-of-core techniques, which are commonly used in large-scale
data applications where data sets exceed main memory. These algorithms aim at
optimal I/O performance when accessing data stored on disk. Ueng
et al. [20] presented a technique to compute streamlines in large unstructured grids
using an octree partitioning of the vector field data for fast fetching during streamline
construction. Taking a different approach, Bruckschen et al. [2] described a technique
for real-time particle traces of large time-varying data sets by isolating all integral
curve computation in a pre-processing stage. The output is stored on disk and can then
be efficiently loaded during the visualization phase. Similarly, PC cluster systems
were leveraged to accelerate advanced integration-based visualization algorithms,
such as time-varying Line Integral Convolution (LIC) volumes [13] or particle
visualization for very large data [6]. While such approaches can give good performance,
they do not generalize to more modern vector field visualization techniques such as
integral surfaces or Lagrangian Coherent Structure visualization.
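The precompute-then-load pattern can be sketched in spirit as follows; the placeholder advection rule, the per-time-step file layout, and the pickle-based serialization are hypothetical simplifications, not the actual data format of [2]:

```python
# Sketch of the precompute-then-load pattern: all particle positions are
# computed and serialized per time step in a pre-processing pass, so the
# visualization phase only performs fast sequential reads from disk.
# The advection rule and the file format here are hypothetical.
import os
import pickle
import tempfile

def advect(p, t):
    # Placeholder for real integral curve computation: shift each
    # particle by a time-dependent offset.
    return (p[0] + 0.1 * t, p[1])

def precompute(seeds, num_steps, path):
    # Pre-processing stage: append one list of positions per time step.
    with open(path, "wb") as f:
        for t in range(num_steps):
            pickle.dump([advect(p, t) for p in seeds], f)

def load_step(path, step):
    # Visualization stage: skip ahead and read a single time step.
    with open(path, "rb") as f:
        for _ in range(step):
            pickle.load(f)
        return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), "traces.bin")
precompute([(0.0, 0.0), (1.0, 1.0)], num_steps=10, path=path)
frame = load_step(path, 5)  # particle positions at time step 5
```

The design choice is the same trade-off the text describes: all integration cost is paid once up front, and the interactive phase is reduced to I/O, which is why the approach does not extend to techniques whose seed sets or surfaces are chosen at visualization time.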
In the following, we will introduce general integral curve problems as a basis
for modern integration-based flow visualization, describe their characteristics, and
discuss corresponding algorithms.
26.2.4 Problem Description and Classification
Given a vector field data set, a set of seed points in its domain, and parameters that
control the integration process, an integral curve problem consists of computing all
 