9.5 Query-Driven Visualization
The term query-driven visualization (QDV) refers to the process of limiting visual data analysis processing only to "data of interest." 34 In brief, QDV is about using software machinery, combined with flexible and highly useful interfaces, to help reduce the amount of information that needs to be analyzed. The basis for the reduction varies from domain to domain, but it boils down to which subset of the large dataset is really of interest for the problem being studied. This notion is closely related to that of feature detection and analysis, where features can be thought of as subsets of a larger population that exhibit some characteristic that is either intrinsic to individuals within the population (e.g., data points where there is high pressure and high velocity) or that is defined as a relation between individuals within the population (e.g., the temperature gradient changes sign at a given data point).
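To make this concrete, consider how such a query might look in practice. The sketch below is only illustrative and is not drawn from any system described here: it assumes the fields are held in NumPy arrays, and the field names, value ranges, and thresholds are hypothetical. It shows a compound range predicate selecting the "data of interest" before any visualization processing takes place.

    # Minimal sketch of a QDV-style query (illustrative only).
    # Field names, value ranges, and thresholds are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000                       # stand-in for a much larger dataset
    pressure = rng.random(n) * 200.0    # hypothetical pressure field
    velocity = rng.random(n) * 50.0     # hypothetical velocity-magnitude field

    # The "query": a boolean predicate identifying the data of interest.
    of_interest = (pressure > 190.0) & (velocity > 47.5)

    # Only the selected subset is passed on to visualization processing.
    selected = np.flatnonzero(of_interest)
    print(f"{selected.size} of {n} points selected "
          f"({100.0 * selected.size / n:.3f}% of the data)")

In a real QDV system the predicate would be evaluated by machinery designed to avoid touching all of the raw data, but the effect is the same: only the selected subset moves on to the rest of the pipeline.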
QDV is one approach to visual data analysis for problems of massive scale. Other common approaches focus on increasing the capacity of the visualization processing pipeline through greater levels of parallelism, scaling up existing techniques to accommodate larger data sizes. While effective in their primary objective of increasing capacity to accommodate larger problem sizes, these approaches have a fundamental problem: they do not necessarily increase the likelihood of scientific insight. By processing more data and creating a more complex image, such an approach can actually impede scientific understanding.
Let's examine this issue a bit more closely. First, let's assume that we're operating on a gigabyte-sized dataset (10^9 data points), and we're displaying the results on a monitor that has, say, 2 million pixels (2 × 10^6 pixels).
For the sake of discussion, let's assume we're going to create and display an isosurface of this dataset. Studies have shown that on the order of about N^(2/3) grid cells in a dataset of size N^3 will contain any given isosurface. 35 In our own work, we have found this estimate to be somewhat low; our results have shown the number to be closer to N^0.8 for N^3 data. Also, we have found that an average of about 2.4 triangles per grid cell results from the isocontouring algorithm. 36 If we use these two figures as lower and upper bounds, then for our gigabyte-sized dataset, we can reasonably expect on the order of between about 2.1 and 40 million triangles for many isocontouring levels. At a display resolution of about 2 million pixels, the result is a depth complexity (the number of objects at each pixel along all depths) of between 1 and 20.
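This arithmetic can be reproduced in a few lines. The sketch below (Python, not part of the original text) applies the N^(2/3) and N^0.8 estimates to the total number of data points, which is the reading implied by the quoted figures, and then divides the resulting triangle counts by the number of available pixels.

    # Back-of-the-envelope reproduction of the estimates quoted above.
    # Assumption: the N^(2/3) and N^0.8 cell-count estimates are applied to
    # the total number of data points (10^9).
    n_points = 10**9            # gigabyte-sized dataset: 10^9 data points
    n_pixels = 2 * 10**6        # display resolution: ~2 million pixels
    tris_per_cell = 2.4         # average triangles per intersected grid cell

    cells_low = n_points ** (2.0 / 3.0)     # ~1.0e6 cells (lower estimate)
    cells_high = n_points ** 0.8            # ~1.6e7 cells (upper estimate)

    tris_low = tris_per_cell * cells_low    # ~2.4e6 triangles
    tris_high = tris_per_cell * cells_high  # ~3.8e7 triangles

    # Depth complexity: triangles divided by available pixels.
    print(f"triangles: {tris_low:.2e} to {tris_high:.2e}")
    print(f"depth complexity: {tris_low / n_pixels:.1f} "
          f"to {tris_high / n_pixels:.1f}")

Running this gives roughly 2 to 40 million triangles and a depth complexity of roughly 1 to 20, matching the figures in the text.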
With increasing depth complexity come at least two types of problems.
First, more information is “hidden from view.” In other words, the nearest
object at each pixel hides all the other objects that are further away. Second, if
we do use a form of visualization and rendering that supports transparency—
so that we can, in principle, see all the objects along all depths at each pixel—
we are assuming that a human observer will be capable of distinguishing
among the objects in depth. At best, this latter assumption does not always hold.
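One way to see why distinguishing objects in depth becomes difficult is to look at how little each additional layer can contribute to the final pixel. The sketch below rests on an assumption the text does not make: standard front-to-back alpha compositing with a uniform, hypothetical per-surface opacity.

    # Rough illustration (not from the text) of why high depth complexity is
    # hard to resolve even with transparency: under front-to-back alpha
    # compositing with a uniform opacity per surface, each successive layer's
    # contribution to the final pixel falls off geometrically.
    alpha = 0.5                  # hypothetical per-layer opacity
    depth_complexity = 20        # upper bound estimated above
    remaining = 1.0              # fraction of the pixel still "unclaimed"
    for layer in range(1, depth_complexity + 1):
        contribution = remaining * alpha
        remaining -= contribution
        print(f"layer {layer:2d} contributes {contribution:.7f} "
              f"of the final pixel value")
    # By layer 20 the contribution is about 1e-6: effectively invisible.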