hence of all images at the same time. Therefore, the individual voxels of the dataset
must be selected, weighted, combined, and projected onto the image plane.
A frequently used technique for 3D visualization is the maximum intensity
projection (MIP). With this technique, images are generated by casting rays from the virtual camera through the viewing plane into the 3D volume data. For each pixel of the view plane, the voxel with the maximum intensity along the corresponding ray is displayed. MIP images do not convey depth relations reliably, but they allow assessment of contrast-enhanced vascular structures (these are often the voxels with the highest intensity; other structures are therefore effectively suppressed). The diagnosis of vascular structures is thus the most important application of MIP images.
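The per-pixel maximum described above can be sketched in a few lines of plain Python. The volume layout (a list of 2D slices, viewed along the depth axis) and the fixed viewing direction are illustrative assumptions; real implementations cast rays for arbitrary camera orientations.

```python
# Maximum intensity projection (MIP) along the depth axis of a small
# volume. For each pixel, only the brightest voxel along the ray is kept.

def mip(volume):
    """Project a volume (list of 2D slices) onto a 2D image by keeping,
    for each pixel, the maximum intensity encountered along the ray."""
    depth = len(volume)
    rows, cols = len(volume[0]), len(volume[0][0])
    image = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            image[r][c] = max(volume[d][r][c] for d in range(depth))
    return image

# A 2-slice, 2x2 example volume: the brighter voxel wins at every pixel.
volume = [
    [[10, 200],
     [30,  40]],
    [[50,  20],
     [90,  35]],
]
print(mip(volume))  # [[50, 200], [90, 40]]
```

Note how all depth information is discarded: the result is the same regardless of the order of the slices, which is exactly why MIP images convey depth relations poorly.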
Another visualization technique available in many CAD systems is surface rendering. Volumetric data consist of a very large number of individual voxels, and one approach to extracting information from such a large data set is to focus on a meaningful subset. Structures of interest in volumetric data are typically differentiated
from the surrounding image data by a boundary or a material interface, and this boundary typically passes through voxels that share the same or a similar intensity value. Hence, the resulting surface is called an isosurface. An isosurface can be
specified as an implicit surface, where the implicit function equals the isovalue (also
called threshold) or where the difference of the implicit function and the isovalue is
zero. The simplest example of an isosurface in volumetric data is a binary segmented image, in which the isovalue equals the foreground value of the segmentation.
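The implicit-surface definition above can be illustrated in one dimension: the isosurface passes between two neighboring samples wherever the difference between the implicit function and the isovalue changes sign. The sample values below are made up for illustration; surface-extraction algorithms such as those cited next apply the same sign test along voxel edges in 3D.

```python
# Locating an isosurface as the zero set of f(x) - isovalue, sketched
# in 1D: the surface lies between samples where the difference changes sign.

def isosurface_crossings(samples, isovalue):
    """Return index pairs (i, i+1) between which the isosurface lies."""
    crossings = []
    for i in range(len(samples) - 1):
        a = samples[i] - isovalue
        b = samples[i + 1] - isovalue
        if a == 0 or a * b < 0:   # zero or sign change => surface crossing
            crossings.append((i, i + 1))
    return crossings

intensities = [10, 40, 120, 80, 20]
print(isosurface_crossings(intensities, 100))  # [(1, 2), (2, 3)]
```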
Typically, such images are generated using one of the segmentation algorithms provided by the CAD system. Many techniques for surface-based visualization have been developed, such as contour tracing [43], the cuberille voxel representation [44], and polygonal isosurface extraction [45]. Usually, some lighting is applied to
produce shaded visualizations, which convey depth relations well.
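A common way such lighting is computed is diffuse (Lambertian) shading, where the brightness of a surface point depends on the angle between its normal and the light direction. The following is a minimal sketch; the vectors and the albedo parameter are illustrative assumptions, not part of any particular CAD system.

```python
# Diffuse (Lambertian) shading for a single surface point: brightness is
# proportional to the cosine between the surface normal and the light
# direction, clamped at zero when the light comes from behind.

import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def diffuse_shade(normal, light_dir, albedo=1.0):
    n = normalize(normal)
    l = normalize(light_dir)
    return albedo * max(0.0, sum(a * b for a, b in zip(n, l)))

print(diffuse_shade([0, 0, 1], [0, 0, 1]))  # 1.0 (light head-on)
print(diffuse_shade([0, 0, 1], [1, 0, 0]))  # 0.0 (light grazing)
```

Because the shaded intensity varies smoothly with surface orientation, such renditions convey depth relations much better than MIP images.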
In contrast to surface rendering, volume rendering produces semi-transparent ren-
ditions based on a transfer function. For volume rendering, two transfer functions are
defined: one for mapping intensity values to gray or color values (as in 2D visual-
ization), and one for mapping intensity values to transparency values. According to
these transfer functions, voxels are composited from front to back. Opaque voxels block all voxels behind them. If several semitransparent voxels project onto the same pixel, its gray value is determined as a weighted combination of the gray values of these voxels. Volume rendering does not produce any intermediate representation,
such as polygonal meshes. To emphasize this property, volume rendering is often
referred to as direct volume rendering (DVR), whereas surface rendering is an indi-
rect method of rendering volume data. The most important (i.e., the most frequently
used) direct volume rendering approaches are ray casting [ 46 ] and shear warp [ 47 ].
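The front-to-back compositing step can be sketched as follows for a single ray. The two transfer-function tables and the sample intensities are made-up examples; real systems use dense lookup tables and interpolate samples along the ray.

```python
# Front-to-back compositing of voxels along one ray, driven by two
# transfer functions: one maps intensity to a gray value, one to opacity.

gray_tf = {0: 0.0, 100: 0.5, 200: 1.0}    # intensity -> gray value
alpha_tf = {0: 0.0, 100: 0.4, 200: 1.0}   # intensity -> opacity

def composite_ray(intensities):
    """Accumulate gray value front to back; stop once fully opaque."""
    color, alpha = 0.0, 0.0
    for v in intensities:
        c, a = gray_tf[v], alpha_tf[v]
        color += (1.0 - alpha) * a * c    # weighted by remaining transparency
        alpha += (1.0 - alpha) * a
        if alpha >= 1.0:                  # an opaque voxel blocks all behind it
            break
    return color, alpha

print(composite_ray([100, 200, 100]))
```

In this example the second sample is fully opaque, so the loop terminates early and the third voxel contributes nothing; this early ray termination is a standard optimization in ray casting.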
All 3D visualization techniques are combined with interaction techniques that
allow the adjustment of arbitrary viewing directions and zooming into relevant
regions. To support depth perception, interactive rotation is essential. To convey the
current viewing direction, some sort of orientation indication is important. Often, a
so-called orientation cube is included in the 3D visualization and rotated together
with the 3D scene. Its faces are labeled “A” (anterior), “P” (posterior), “L” (left), “R” (right), “H” (head), and “F” (foot), which refer to the anatomical names of the viewing directions.