slicing or contouring algorithms) followed by a sink (typically a rendering
algorithm). When a pipeline is executed, data comes from the source and
flows from filter to filter until it reaches the sink. There are many variations
on this general design that include caching, how the execution takes place
(push versus pull), multiplicity in terms of sources and sinks, feedback loops
in the filters, reusing arrays from data object to data object to reduce memory
footprint, and optimizations like parallel-pipelined operation so that different
stages of the pipeline may operate concurrently.
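The source-to-sink flow described above can be sketched minimally in code. The class and method names below are illustrative only (not taken from any particular visualization toolkit); the sketch shows a pull-driven execution, where the sink requests data and each stage pulls from its upstream neighbor on demand.

```python
class Source:
    """Produces the initial data object (stands in for, e.g., a file reader)."""
    def update(self):
        return list(range(10))  # placeholder dataset

class Filter:
    """Transforms its upstream's output; here, a simple scaling step."""
    def __init__(self, upstream, factor):
        self.upstream = upstream
        self.factor = factor
    def update(self):
        # Pull execution: request data from upstream only when asked.
        return [x * self.factor for x in self.upstream.update()]

class Sink:
    """Consumes the final result (stands in for a rendering stage)."""
    def __init__(self, upstream):
        self.upstream = upstream
    def update(self):
        return self.upstream.update()

# Source -> filter -> filter -> sink; execution is triggered at the sink.
pipeline = Sink(Filter(Filter(Source(), 2), 3))
result = pipeline.update()
```

A push-driven variant would instead have the source initiate execution and hand results downstream; the caching and concurrency variations mentioned above would be layered onto the same stage interfaces.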
When the client asks the server to perform some operations on a data object,
each processor of the server sets up an identical data flow network; the networks differ only in the portion of the dataset that each processes. The majority of visualization operations are "embarrassingly parallel": the processing can occur in parallel with no communication between the parallel processes. For these
operations, the only concern is artifacts that can occur along the boundaries
of a chunk. For example, a stencil-based algorithm that is run in parallel may
require data from adjacent grid points that are owned by another processor.
The typical way to resolve this problem is to use redundant data located at the boundary, often referred to as ghost data.
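The ghost-data idea can be illustrated with a one-dimensional three-point stencil (NumPy is used here purely for illustration; the chunking scheme is a simplified assumption, not a specific system's layout). Each "processor" owns half of the array plus one redundant ghost point copied from its neighbor, so the results along the chunk boundary agree with a serial run.

```python
import numpy as np

def three_point_average(a):
    """Stencil that needs the left and right neighbor of every point it computes."""
    return (a[:-2] + a[1:-1] + a[2:]) / 3.0

data = np.arange(16, dtype=float)
serial = three_point_average(data)  # reference result from a single process

# Two "processors": each owns half the array plus one ghost point
# copied from the adjacent chunk owned by the other processor.
left  = data[:9]   # owns indices 0..7, ghost point at index 8
right = data[7:]   # owns indices 8..15, ghost point at index 7

parallel = np.concatenate([three_point_average(left),
                           three_point_average(right)])
# With the ghost points in place, the two partial results tile the
# serial result exactly; without them, values at indices 7 and 8
# could not be computed correctly by either chunk alone.
```

In a real distributed setting the ghost points would be exchanged by message passing rather than slicing a shared array, but the correctness argument is the same.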
9.2.1.3 Rendering and Remote/Distributed Visualization
Within the context of visualization software architectures, the majority of
SDM-related concerns reside in I/O and processing stages. The later stage of
visualization—rendering, where visualization results (geometry, 3D volumes,
etc.) are transformed into images—has its own unique set of visualization-
centric SDM-related issues.
As context, remote and distributed visualization applications can use one
of three general types of architectures as shown in Figure 9.1. A discussion of
the relative performance and usability merits of these different configurations
is presented in Reference 14. The important point here, within the context of
SDM-related issues, is that moving data across machine boundaries can be
a nontrivial task. The data might be raw data, as in the desktop-only con-
figuration; it might be geometric output produced by visualization tools, as
in the cluster isosurface configuration; or it might be raw image pixels, as in
the cluster render configuration. Unlike “traditional” data movement appli-
cations (e.g., ftp and its variants), the visualization use model often dictates
which pipeline partitioning will work best given a particular problem size and
set of machines/networks. For instance, if maximizing rendering interactiv-
ity of static data is the desired target, then one of the configurations that
uses desktop graphics hardware for rendering is the best choice, assuming the
problem will fit onto the desktop machine. If maximizing throughput is the
objective, for example, cycling through large, time-varying data, then the configuration where data I/O and processing are performed on a parallel machine and images are sent to the remote viewer is the best choice. The trend we see in