same opacity. Opacity values make it possible to extract and make visible different
parts inside an object. For example, if we want to see the bone structure, we
would make the other materials transparent. Transfer functions can also be used to
map densities to colors.
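For concreteness, here is a minimal sketch of such a transfer function in Python with NumPy; the control points, thresholds, and colors below are illustrative assumptions, not values from the text:

    import numpy as np

    # Assumed control points: at each density, an opacity and an RGB color;
    # values in between are interpolated linearly.  Low-density soft tissue
    # is nearly transparent, so the high-density bone shows through.
    DENSITIES = np.array([0.0, 0.3, 0.7, 1.0])
    OPACITIES = np.array([0.0, 0.05, 0.9, 1.0])
    COLORS    = np.array([[0.0, 0.0, 0.0],
                          [0.8, 0.5, 0.4],   # soft-tissue tint
                          [1.0, 1.0, 0.9],   # bone tint
                          [1.0, 1.0, 1.0]])

    def transfer(density):
        """Map an array of voxel densities in [0, 1] to (R, G, B, opacity)."""
        alpha = np.interp(density, DENSITIES, OPACITIES)
        rgb = np.stack([np.interp(density, DENSITIES, COLORS[:, c])
                        for c in range(3)], axis=-1)
        return np.concatenate([rgb, alpha[..., None]], axis=-1)

A ray caster would apply such a map to each sample taken along a viewing ray before compositing.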
Once transfer functions have been defined, the rendering is automatic. Unfortu-
nately, it is not possible to generate all such functions systematically, because
the features one is trying to extract may be hard to specify with a program. Defining
suitable transfer functions remains one of the difficult tasks in volume rendering. See
[RhyT01]. Extracting features from data is called segmentation; it sometimes
requires user input and, in the worst case, may have to be done entirely by hand. In
volume rendering it is basically a labeling procedure applied to voxels to indicate their
material type. It is a preprocessing step applied to the data before rendering, and
the segmentation information is stored along with the other voxel data, where it can
then be used by transfer functions.
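To illustrate how stored segmentation labels feed into transfer functions, here is a toy sketch in Python with NumPy; the threshold-based labeling and the lookup-table values are hypothetical stand-ins for a real segmentation:

    import numpy as np

    AIR, SOFT_TISSUE, BONE = 0, 1, 2   # hypothetical material labels

    def segment(density):
        """Toy threshold segmentation labeling each voxel with a material
        type; real segmentation often needs user input or manual work."""
        labels = np.full(density.shape, AIR, dtype=np.uint8)
        labels[density > 0.2] = SOFT_TISSUE
        labels[density > 0.7] = BONE
        return labels

    # A transfer function over stored labels is then a simple lookup table
    # of (R, G, B, opacity) rows, one per material.
    LUT = np.array([[0.0, 0.0, 0.0, 0.00],   # air: invisible
                    [0.8, 0.5, 0.4, 0.05],   # soft tissue: nearly transparent
                    [1.0, 1.0, 0.9, 0.95]])  # bone: opaque

    def apply_transfer(labels):
        return LUT[labels]                   # broadcasts over the whole volume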
Now, in many applications of volume rendering, the issue is not photorealism but
making data meaningful. For that reason, parallel projection is typically used in
volume rendering. In typical medical applications nothing is gained by perspective
views. Furthermore, perspective views are problematic: since the rays diverge,
they may miss voxels and create aliasing artifacts.
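As a rough sketch of why parallel projection behaves well, the following Python/NumPy fragment (the image-plane placement and spacing are assumptions for illustration) generates one ray per pixel with a single shared direction, so neighboring rays stay a constant distance apart at every depth instead of diverging:

    import numpy as np

    def parallel_rays(width, height, spacing=1.0):
        """One ray per pixel on the image plane, all sharing one direction;
        neighboring rays therefore stay exactly `spacing` apart at every
        depth, unlike perspective rays, which spread out with distance."""
        xs = (np.arange(width) - (width - 1) / 2) * spacing
        ys = (np.arange(height) - (height - 1) / 2) * spacing
        gx, gy = np.meshgrid(xs, ys, indexing="ij")
        origins = np.stack([gx, gy, np.zeros_like(gx)], axis=-1)  # image plane
        direction = np.array([0.0, 0.0, 1.0])   # common viewing direction
        return origins, direction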
To get shading one needs normals. To get these one typically uses the gradient
∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z)
of the density function f. The central difference operators
f_{i+1,j,k} − f_{i−1,j,k},   f_{i,j+1,k} − f_{i,j−1,k},   f_{i,j,k+1} − f_{i,j,k−1},
are the most common approximations to the partial derivatives ∂f/∂x, ∂f/∂y, and ∂f/∂z,
respectively, but there are others.
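As a sketch, these central differences can be computed for a whole volume at once; the following Python/NumPy fragment assumes unit voxel spacing and normalizes the gradient to obtain shading normals (only the direction matters for shading):

    import numpy as np

    def gradient_normals(f, eps=1e-12):
        """Approximate shading normals for a float density volume f[i, j, k]
        using the central differences above (unit voxel spacing assumed)."""
        gx = np.zeros_like(f); gy = np.zeros_like(f); gz = np.zeros_like(f)
        gx[1:-1, :, :] = f[2:, :, :] - f[:-2, :, :]   # f_{i+1,j,k} - f_{i-1,j,k}
        gy[:, 1:-1, :] = f[:, 2:, :] - f[:, :-2, :]   # f_{i,j+1,k} - f_{i,j-1,k}
        gz[:, :, 1:-1] = f[:, :, 2:] - f[:, :, :-2]   # f_{i,j,k+1} - f_{i,j,k-1}
        g = np.stack([gx, gy, gz], axis=-1)
        length = np.linalg.norm(g, axis=-1, keepdims=True)
        return g / np.maximum(length, eps)            # normalize the direction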
Finally, sometimes one knows that there are surfaces present in the volumetric
data. Two well-known approaches used in volume rendering try to construct a con-
ventional surface model S from the data that is then rendered in the ordinary way.
Approach 1. Here one proceeds in two stages: first one determines the curves that
make up the contour of S in each slice and then one tries to connect these contours
with surface patches, a process called skinning. Figuring out how to connect the con-
tours from one slice to the next is an especially tricky problem, because a contour
may consist of several curves. Chapter 14 will have more to say about finding con-
tours and skinning.
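For illustration, here is a much-simplified sketch of the two stages in Python, using scikit-image's measure.find_contours for the contour stage and assuming the easy case of a single closed contour per slice, which sidesteps the correspondence problem just mentioned:

    import numpy as np
    from skimage import measure

    def resample_closed(curve, n):
        """Resample a closed contour to n points evenly spaced by arc length."""
        closed = np.vstack([curve, curve[:1]])        # repeat first point to close
        seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
        t = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
        s = np.linspace(0.0, t[-1], n, endpoint=False)
        return np.column_stack([np.interp(s, t, closed[:, 0]),
                                np.interp(s, t, closed[:, 1])])

    def skin(volume, level, n=64):
        """Stage 1: find a contour in each slice; stage 2: connect consecutive
        contours with triangles.  Assumes one closed contour per slice."""
        rings = []
        for z, sl in enumerate(volume):               # volume[z] is a 2D slice
            contours = measure.find_contours(sl, level)
            if not contours:
                continue
            ring = resample_closed(max(contours, key=len), n)
            rings.append(np.column_stack([ring, np.full(n, float(z))]))
        verts = np.vstack(rings)
        tris = []
        for k in range(len(rings) - 1):               # band between slices k, k+1
            a, b = k * n, (k + 1) * n
            for i in range(n):
                j = (i + 1) % n
                tris += [[a + i, a + j, b + i], [a + j, b + j, b + i]]
        return verts, np.array(tris)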
Approach 2. In Approach 1 separate and independent algorithms are used in the
two stages. There are more general approaches to finding the surface S that work on