border extraction. A surface to be visualized is determined substantially
(and implicitly) by setting the opacity parameters.
(2) The essential part of volume rendering is calculating the mixture of
the information that has been accumulated and propagated to the current
point along a ray and the information given by the density value at the
current point. The ratio of the two types of information in the mixture is
controlled by the opacity value α_i at each point, and the resulting
mixture is sent to the next point along the ray. If α_i = 1, for example,
only the density value at the sample point P_i propagates; the brightness
values accumulated along the ray so far disappear there and do not
contribute to the density (brightness) value on the rendered image at all.
This corresponds to a completely opaque medium. Conversely, if α_i = 0,
the effect of the density value at the point P_i disappears there, which
means that the medium is completely transparent.
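The accumulation rule described above can be sketched as follows. This is a minimal illustration, assuming scalar brightness values and back-to-front traversal of the sample points; the function name is ours, not from the text:

```python
def composite_ray(values, opacities):
    """Back-to-front compositing along a single ray.

    values:    brightness (density) value c_i at each sample point P_i
    opacities: opacity alpha_i in [0, 1] at each sample point
    Both sequences are ordered from the point farthest from the
    viewer to the point nearest the viewer.
    """
    accumulated = 0.0
    for c, a in zip(values, opacities):
        # alpha_i = 1: only c_i survives (opaque medium);
        # alpha_i = 0: c_i is ignored (transparent medium).
        accumulated = a * c + (1.0 - a) * accumulated
    return accumulated
```

For instance, if the sample nearest the viewer has opacity 1, the result is exactly that sample's value and everything accumulated behind it is discarded, matching the opaque case in the text.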
(3) By utilizing the above property, we can extract component patterns by
thresholding density values or similar processing. This can be regarded as
inserting a segmentation procedure into the rendering procedure. It also
means that we cannot always avoid the problem of parameter selection
completely, since the selection of {α_i} substantially amounts to the
selection of threshold values.
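A minimal sketch of this correspondence between opacity selection and thresholding, assuming scalar density values; the interval bounds and the inside-opacity value are illustrative choices of ours, not prescribed by the text:

```python
def threshold_opacity(density, low, high, alpha_inside=0.8):
    """Opacity transfer by thresholding: a point whose density lies
    in [low, high] is treated as part of the component pattern and
    given a high opacity; all other points are made transparent.

    Choosing (low, high) here plays exactly the role of choosing a
    segmentation threshold inside the rendering procedure.
    """
    return alpha_inside if low <= density <= high else 0.0
```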
(4) Even if a surface seems to be extracted correctly, it is not guaranteed
that the surface or border is extracted exactly from the viewpoint of
shape. This is because the rendering is based on the opacity parameter
and is performed using only the density value of each point; no
information concerning shape is considered.
(5) Results are sensitive to random noise. Inappropriate settings of {α_i}
may cause apparently unexpected artifacts.
(6) We need not worry about failures in border surface extraction, because
border extraction algorithms are not required in the first place.
(7) Even if a border surface is perceived visually in a rendered image, the
location of the surface cannot be determined exactly, because border
extraction is not actually performed in volume rendering. The surface is
visible in the rendered image only to human vision. We can neither
designate an object using a border surface seen in a rendered image nor
obtain quantitative measurements from such a surface.
(8) The computation time tends to be longer for volume rendering than for
surface rendering, because the volume rendering procedure involves
accumulating density values along each ray. This problem is being overcome
by newer computers equipped with graphics engines (GPUs).
(9) Although objects lying on the same ray overlap, it should be avoided
that an object behind another object becomes completely invisible
(occurrence of occlusion). The exact result of rendering depends on the
setting of the parameters {α_i}.
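How the {α_i} settings control occlusion can be shown with a small self-contained example. This is a sketch under our own assumptions (two scalar-valued objects on one ray, back-to-front compositing); the numbers are illustrative:

```python
def composite(samples):
    """samples: (value, alpha) pairs ordered back to front along a ray."""
    acc = 0.0
    for c, a in samples:
        acc = a * c + (1.0 - a) * acc
    return acc

# A back object with value 10 and a front object with value 4,
# both fully opaque within themselves.
opaque_front = composite([(10.0, 1.0), (4.0, 1.0)])
# Lowering the front object's opacity to 0.5 lets the back
# object contribute to the rendered value instead of being occluded.
translucent_front = composite([(10.0, 1.0), (4.0, 0.5)])
```

With a fully opaque front object the back object is completely occluded; with α = 0.5 both objects contribute, which is why the opacity settings decide whether occlusion occurs.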