the work of Gavrilescu et al. [9]. The increased complexity of data ensembles, large simulation runs, and uncertainty in the data pose interesting visualization challenges. How shall we cope with the increased data and analysis complexity? Three of several possible directions are integrated views and interaction [3], comparative visualization [13], and fuzzy visualization [14]. With fuzzy visualization, techniques of information theory will play a bigger role in coping with large parameter spaces.
Currently, problem solving in visualization is typically algorithm-centric and thus imperative by definition. With increased data complexity it will probably become more declarative and thus more data- and image-centric, as domain experts have always been data-centric. A data-centric approach means that the user does not specify how data is mapped to images but declares which features of the data should be visible, and how, in the resulting images. This is like specifying pre- and post-conditions but not the instructions to get from the first to the second. An optimization process should then automatically figure out which algorithms and parameter settings best fulfill the user-defined declarations and constraints. Semantic layers [14] are a step in this direction.
Frameless rendering [5] efficiently renders animation sequences by updating pixels on a priority basis. At no point in time are all pixels of the image up-to-date, i.e., no complete frame is ever available, although the animation sequence as a whole evolves. Analogously to this concept, we foresee algorithmless visualizations in the sense that not a single algorithm is explicitly specified by the user in a specific application. For different features of the data and for different parts of the image, the most appropriate algorithm among a set of possible candidates might be selected automatically. Various combinations and integrations of visualization algorithms might be possible to best achieve the user's goals and declarations. Each pixel or voxel might get its own algorithm on demand.
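The following Python sketch illustrates the idea under assumed, simplified conditions: two toy candidate "algorithms", a hand-made selection criterion based on the local gradient, and a priority queue that updates the highest-error pixels first. None of these choices are prescribed by the text; they merely make the per-pixel, frameless flavor concrete.

import heapq
import numpy as np

W, H = 64, 64
image = np.zeros((H, W))
data = np.random.rand(H, W)          # stand-in for the underlying data field
gy, gx = np.gradient(data)           # local gradients, used as a toy criterion

def render_smooth(v):   return v          # e.g. suited to homogeneous regions
def render_edge(v):     return round(v)   # e.g. suited near sharp features

def pick_algorithm(x, y):
    """Choose the most appropriate candidate per pixel: here an assumed rule
    based on the local gradient magnitude of the data."""
    return render_edge if abs(gx[y, x]) + abs(gy[y, x]) > 0.5 else render_smooth

# Priority queue: pixels with the largest estimated error are updated first.
queue = [(-abs(data[y, x] - image[y, x]), (x, y))
         for y in range(H) for x in range(W)]
heapq.heapify(queue)

for _ in range(1000):                 # update only a budgeted number of pixels;
    _, (x, y) = heapq.heappop(queue)  # the image is never complete as a frame
    image[y, x] = pick_algorithm(x, y)(data[y, x])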
Interval arithmetic has long been used to cope with uncertainties due to rounding, measurement, and computation errors. Handling ensemble data in an analogous manner may lead to densely visualizing intervals or even distributions. While there are already some approaches to locally investigate visualization parameter spaces, not much has been done in terms of a global or topological analysis. For quantitative results, visualization algorithms will have to provide more stability and robustness analyses in the future. With the increased data complexity (massive, multiple, heterogeneous data), heuristic approaches and parameter space analyses will become even more important. This raises the need to visualize uncertain, fuzzy, and even contradictory information.
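To make the interval-arithmetic idea concrete, here is a minimal Python sketch that collapses ensemble members at a grid point into an interval and propagates such intervals through basic arithmetic. The class and the example data are illustrative assumptions; a real system would attach such intervals (or full distributions) to every sample of the visualization pipeline.

from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Sum of intervals: endpoints add independently.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product of intervals: bounded by the extreme endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

def ensemble_to_interval(samples):
    """Collapse ensemble members at one grid point into an interval."""
    return Interval(min(samples), max(samples))

# Example: an ensemble of three simulation runs at two grid points.
a = ensemble_to_interval([0.9, 1.1, 1.0])
b = ensemble_to_interval([-0.2, 0.1, 0.0])
print(a + b, a * b)   # propagated uncertainty, visualizable as a range per pixel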
Heuristics are often useful. But even if you do not (exactly) know what you are doing (this is what heuristics are about), you should make sure that what you are doing is safe. Safety concerns the robustness, stability, and sensitivity of an algorithm and its parameters. So heuristics are great when handled with care. This way your paths through the haunted swamps will be safe ones. We certainly agree with a statement by Voltaire: "Doubt is not a pleasant condition, but certainty is absurd."
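One simple, hypothetical way to make such a safety check concrete is to perturb a heuristic's parameters and measure how much its output changes. The following Python sketch does exactly that for a stand-in thresholding heuristic; the algorithm, the metric, and all numbers are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(42)

def heuristic(data, threshold):
    # Stand-in heuristic: a simple thresholding step.
    return data > threshold

def sensitivity(data, threshold, eps=0.01, trials=20):
    """Fraction of the output that flips under small parameter perturbations:
    a rough stability indicator (large values suggest an unsafe setting)."""
    base = heuristic(data, threshold)
    flips = [np.mean(heuristic(data, threshold + rng.uniform(-eps, eps)) != base)
             for _ in range(trials)]
    return float(np.mean(flips))

data = rng.random(10_000)
print(sensitivity(data, threshold=0.5))   # near 0: robust; large: sensitive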
Acknowledgments We would like to acknowledge the Bridge-Project SmartCT and the K-Project
ZPT (http://www.3dct.at) of the Austrian Research Promotion Agency (FFG). We also acknowledge
the ViMaL project supported by the Austrian Science Fund (FWF), grant no. P21695.