are now solutions to the global illumination problem that combine these two
approaches. See, for example, [NeuN95].
10.4  Volume Rendering
Up to now, the type of rendering of three-dimensional objects we have been discussing is sometimes called surface rendering, because it assumed that objects were represented by their boundary, and so that was what we had to display. The "interior" of an object was never considered. This approach works fine for objects that have well-defined surfaces, but it does not work so well when modeling natural phenomena such as clouds, fog, and smoke. Here light partially penetrates the objects, their interior becomes important, and volume rendering comes to the rescue. This and the next two sections give an overview of volume rendering and some of the algorithms it uses. Two general references are [Elvi92] and [LiCN98]. Elvins describes several additional algorithms.
The complete volume-rendering pipeline really consists of three parts: data acquisition, data classification, and the actual rendering algorithms. The data acquisition part will not be considered here. We assume that we have been given some volumetric data consisting of a cubical collection of voxels. Data classification is actually quite a tricky part of the overall process and refers to deciding on criteria for what part of the raw data to use. After one has decided on a classification, the last step is to display the geometry it represents.
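In practice the classification step is often realized as a transfer function that maps raw voxel values to opacities (and possibly colors). The following is a minimal sketch of that idea in Python; the function name and the threshold values are illustrative assumptions, not taken from the text:

```python
# Minimal sketch of data classification via a transfer function:
# map raw voxel densities to opacities. Thresholds are illustrative.

def classify(density, low=0.3, high=0.7):
    """Map a scalar density in [0, 1] to an opacity in [0, 1].

    Densities at or below `low` are treated as empty space, densities
    at or above `high` as fully opaque material, with a linear ramp
    in between.
    """
    if density <= low:
        return 0.0
    if density >= high:
        return 1.0
    return (density - low) / (high - low)
```

Real systems use richer transfer functions (for example, piecewise-linear opacity and color ramps edited interactively), but the principle is the same: classification decides which parts of the raw data contribute to the final image.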
In the past (see [KaCY93]), volume rendering was sometimes defined as a technique for visualizing volumes directly from the volumetric data without the use of surfaces or other explicit intermediate representations of the geometry. We shall use the term in a more general sense, so that it includes any technique that is used to render volumetric data. Some techniques do in fact involve constructing surfaces, such as the marching cubes algorithm described in Section 10.4.2.
We look first at direct volume rendering and ray-casting approaches. (The terms "ray tracing" and "ray casting" are usually used interchangeably, but some prefer the term "ray casting" in the volume rendering context because they give it the more restricted meaning that the rays are sent in only a single direction, in contrast to "ray tracing," which for them suggests that the rays bounce around in all directions in the scene.) Like visible surface algorithms, these can be classified as image precision or object precision algorithms.
Image Precision Volume Rendering. Here, we send out a three-dimensional ray for each pixel on the screen. For a parallel projection these rays would be perpendicular to the view plane. See Figure 10.13. The rays can be parameterized, with density values evaluated at points spaced at uniform intervals, or they could be discrete rays generated by a Bresenham-type algorithm (see Section 10.4.1). In either case, we would have to interpolate the density values at these points from the adjacent voxel data that we were given. One simplification in the case where we use discrete rays in a parallel projection is that we can use "templated" discrete rays. What this means is that we only need to compute one ray starting at one pixel; the relative movement from one voxel to another would then be the same for the rays at all the other pixels. One
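The uniform-sampling variant above can be sketched in Python as follows, assuming the voxel data is stored as a nested list indexed `data[z][y][x]` and that sample points stay inside the grid; all names and the grid layout are illustrative assumptions:

```python
# Sketch of image-precision ray casting for a parallel projection:
# sample each ray at uniform intervals and trilinearly interpolate
# the density at each sample point from the eight surrounding voxels.

import math

def trilinear(data, x, y, z):
    """Interpolate the density at a point from the adjacent voxel data."""
    x0, y0, z0 = int(math.floor(x)), int(math.floor(y)), int(math.floor(z))
    fx, fy, fz = x - x0, y - y0, z - z0
    d = 0.0
    # Weighted sum over the 8 corners of the enclosing voxel cell.
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                d += w * data[z0 + dz][y0 + dy][x0 + dx]
    return d

def cast_ray(data, origin, direction, step, n_samples):
    """Return densities sampled at uniform intervals along one ray."""
    samples = []
    for i in range(n_samples):
        x = origin[0] + i * step * direction[0]
        y = origin[1] + i * step * direction[1]
        z = origin[2] + i * step * direction[2]
        samples.append(trilinear(data, x, y, z))
    return samples
```

For a parallel projection, one would call `cast_ray` once per pixel with the same `direction` and an `origin` offset per pixel, which is precisely why the templated-ray optimization for discrete rays works: the voxel-to-voxel steps are identical for every ray.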