24.6.2 Multi-modal Rendering
Additional challenges arise in DVR of multiple datasets. Multi-modal rendering aims to combine two or more datasets of the same object into a single image. With several datasets in one scene, the challenge is to keep the clutter caused by the less interesting regions of each dataset to a minimum. In ultrasound, 3D Doppler data can be acquired simultaneously with 3D B-mode data. Jones et al. discuss several approaches to exploring and visualizing 4D Doppler data [31]: multi-planar rendering, which shows several slices at once; surface fitting of the Doppler data based on YCbCr color values, to improve the separation between Doppler and B-mode data; and a blending of multi-planar slice rendering into a DVR scene, where the DVR is rendered highly transparent and the slices provide better detail along the viewing direction. A different way of combining B-mode with Doppler data was presented by Petersch and Hönigmann [55]. They propose a one-level composite rendering approach that allows flow and tissue information to be blended arbitrarily, using silhouette rendering for the B-mode data and a mix of Phong-shaded DVR and silhouette rendering for the color Doppler data.
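The core of such composite approaches is a single ray-casting loop in which every sample mixes contributions from both co-registered volumes. The following is a minimal sketch of this idea, not the exact method of [55]: the per-sample colors and opacities are assumed to come from each modality's transfer function (with any silhouette or Phong shading already folded in), and the `blend` weight is a hypothetical user parameter.

```python
import numpy as np

def composite_two_modalities(tissue_samples, flow_samples, blend=0.5):
    """Front-to-back compositing of one ray through two co-registered
    volumes, mixing tissue (B-mode) and flow (Doppler) at every sample.

    tissue_samples, flow_samples: sequences of (rgb, alpha) pairs for the
    same ray positions, already classified and shaded per modality.
    blend: hypothetical user weight, 0 = tissue only, 1 = flow only.
    """
    color = np.zeros(3)
    alpha = 0.0
    for (c_t, a_t), (c_f, a_f) in zip(tissue_samples, flow_samples):
        # Per-sample mix of the two modalities (opacity-weighted colors).
        c = (1.0 - blend) * a_t * np.asarray(c_t) + blend * a_f * np.asarray(c_f)
        a = (1.0 - blend) * a_t + blend * a_f
        # Standard front-to-back "over" accumulation.
        color += (1.0 - alpha) * c
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:  # early ray termination
            break
    return color, alpha
```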
A newer technique for blending Doppler and B-mode data was introduced by Yoo et al. [82]. Instead of blending two separately rendered 2D images (post fusion) or blending the two volumes during rendering (composite fusion), they propose a combination of both called progressive fusion (PGF). Post fusion suffers from incorrect depth blending, while composite fusion terminates rays too early. PGF compensates for both problems by using an if-clause in the ray caster that adjusts the alpha-out value when compositing either the Doppler signal or the B-mode signal.
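As a rough illustration of this idea (a sketch under assumptions, not Yoo et al.'s exact formulation), the if-clause below composites the Doppler sample when flow is present and otherwise clamps the alpha written out for the B-mode sample, so that tissue in front of a flow signal cannot terminate the ray before the Doppler data is reached. The `alpha_cap` threshold and the flow-presence test are assumptions made for illustration.

```python
import numpy as np

def progressive_fusion_ray(samples, alpha_cap=0.95):
    """PGF-style compositing for one ray through co-registered B-mode
    and Doppler volumes sampled at the same positions.

    samples: sequence of (rgb_flow, a_flow, rgb_tissue, a_tissue).
    """
    color = np.zeros(3)
    alpha = 0.0
    for c_f, a_f, c_t, a_t in samples:
        if a_f > 0.0:
            # Flow present: composite the Doppler signal as-is.
            a = a_f
            c = a * np.asarray(c_f)
        else:
            # Tissue only: clamp the alpha-out so opacity accumulated from
            # B-mode alone stays below alpha_cap, keeping the ray alive
            # for deeper Doppler samples (avoids premature termination).
            a = min(a_t, max(0.0, (alpha_cap - alpha) / max(1.0 - alpha, 1e-6)))
            c = a * np.asarray(c_t)
        color += (1.0 - alpha) * c
        alpha += (1.0 - alpha) * a
        if alpha >= 0.999:  # full termination only once flow is composited
            break
    return color, alpha
```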
Burns et al. applied illustrative cut-aways to combined 3D freehand ultrasound and CT [7], providing a better spatial overview for the ultrasound images. To add more information to the 2D ultrasound image, Viola et al. proposed enhancing it with higher-order semantics [75], in this case in the form of a Couinaud segmentation. The segmentation is pre-defined in a CT dataset and visually verified using exploded views. To combine it with the ultrasound images, the CT dataset is co-registered to the ultrasound using a rigid transformation based on user-defined landmarks. The individual segments are then superimposed onto the ultrasound image, enabling the user to see directly which segments are being imaged.
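To make the mechanics concrete, the sketch below (hypothetical names throughout) maps ultrasound pixel positions into the CT label volume and reads out the segment under each pixel; it assumes the 4x4 rigid transform has already been estimated from the user-defined landmarks, e.g. by least-squares point matching.

```python
import numpy as np

def lookup_segments(us_points_mm, T_us_to_ct, labels, ct_spacing_mm):
    """Superimpose a CT-defined segmentation onto an ultrasound slice.

    us_points_mm:  (N, 3) positions of the slice pixels in ultrasound space.
    T_us_to_ct:    4x4 rigid transform from the landmark registration.
    labels:        3D integer volume of pre-defined Couinaud segments.
    ct_spacing_mm: voxel spacing of the CT label volume.
    """
    # Homogeneous coordinates, then rigid transform into CT space.
    pts = np.c_[us_points_mm, np.ones(len(us_points_mm))]
    ct_mm = (T_us_to_ct @ pts.T).T[:, :3]
    # Nearest-neighbour lookup in the label volume, clipped to its bounds.
    idx = np.clip(np.round(ct_mm / ct_spacing_mm).astype(int),
                  0, np.array(labels.shape) - 1)
    return labels[idx[:, 0], idx[:, 1], idx[:, 2]]
```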
To improve ultrasound video analysis, Angelelli et al. superimposed a degree-of-interest (DOI) distribution on the image [3]. The video sequence is presented as a function of time (x-axis), with the y-axis showing how much the current ultrasound image covers the DOI distribution.
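Interpreted as a plot, this reduces to one coverage value per frame. A minimal sketch, assuming each frame's imaged region is available as a boolean mask over the volume holding the DOI distribution (a hypothetical representation):

```python
import numpy as np

def doi_coverage_over_time(frame_masks, doi):
    """y-value per video frame: fraction of the DOI distribution covered
    by the region imaged in that frame.

    frame_masks: (T, X, Y, Z) boolean masks of each frame's footprint.
    doi:         (X, Y, Z) non-negative degree-of-interest distribution.
    """
    total = doi.sum()
    return np.array([doi[mask].sum() / total for mask in frame_masks])
```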
24.6.3 Shading and Illumination
Light is an indispensable part of the scenes we see in real life. In computer graphics, too, light sources and light-transport models have to be taken into account when rendering realistic scenes. In volume graphics, the problem of illumination and light transport has likewise been tackled by a number of researchers.