overinterpret the registered datasets. Once correspondence is determined
throughout the volume imaged by the two modalities, then one image can
be transformed into the coordinate system of the other. This calculation can
lead to further approximations or errors. Typically, there will be some blurring
of the transformed image. In our simple neurosurgical example, once any scal-
ing errors and geometric distortion produced by the scanners are corrected
as described in Chapter 5, the transformation will be very well approximated
by that of a “rigid body.” A rigid-body transformation, as the name suggests,
is one that changes position and orientation without changing shape or size
between the two scans.
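To make this concrete, a rigid-body transformation in three dimensions can be written as a rotation R (an orthogonal matrix with determinant 1) followed by a translation t. The sketch below, in Python with NumPy, uses an invented angle and offset; the assertion at the end confirms that distances between points, and hence shape and size, are preserved.

```python
# A minimal sketch of a 3D rigid-body transformation: a rotation R
# followed by a translation t. The angle, offset, and point values
# are invented for illustration.
import numpy as np

def rigid_transform(points, rotation, translation):
    """Map Nx3 points into the other image's coordinate system."""
    return points @ rotation.T + translation

# Example: rotate 10 degrees about the z-axis and shift by (5, -2, 1) mm.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([5.0, -2.0, 1.0])

pts = np.array([[10.0, 20.0, 30.0],
                [ 0.0,  0.0,  0.0]])
a, b = rigid_transform(pts, R, t)

# Distances between points are preserved, confirming that shape and
# size are unchanged by the transformation.
assert np.isclose(np.linalg.norm(a - b), np.linalg.norm(pts[0] - pts[1]))
```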
Finally, we have the process of combining or “fusing” the information in
the images in some useful or meaningful way. This process may be left
entirely to the clinician in his or her mind's eye, or simple visualization effects
may be used, including color or interactive fading in and out of one image's
contribution overlaid on the other. Alternatively, two cursors, called “linked
cursors,” might be used to indicate corresponding points in the two images.
Further computation or combined displays of fused information may be gen-
erated. Corresponding structures in the two images can be used to check the
transformation, while complementary information can be used to deduce
useful new information either by qualitative interpretation or by improving
the accuracy of measurement. This combination of information is sometimes
termed “data fusion,” a term originally coined for the combination of infor-
mation in computing systems for battlefield command and control.
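As a minimal sketch of one such fused display, the following snippet alpha-blends a color overlay onto a grayscale base image with Matplotlib. The two arrays are synthetic stand-ins for a pair of already-registered slices, and the fixed alpha value plays the role of the interactive fading control described above.

```python
# A minimal sketch of a simple fusion display: an alpha blend of two
# registered images (e.g., MR as the grayscale base with a functional
# image overlaid in color). The arrays are synthetic stand-ins.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
mr_slice = rng.random((128, 128))    # stand-in for an MR slice
pet_slice = rng.random((128, 128))   # stand-in for a registered overlay

alpha = 0.4  # overlay contribution; an interactive slider in practice
plt.imshow(mr_slice, cmap="gray")
plt.imshow(pet_slice, cmap="hot", alpha=alpha)
plt.title("Fused display: grayscale base with color overlay")
plt.axis("off")
plt.show()
```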
We might wish to use individual 3D images or the combined images for
navigation in image-guided surgery. Again, this requires a process of regis-
tration, and now correspondence is defined between image and physical
space within the patient in the operating room. We require a transformation that,
for each identified 3D point within the patient, gives the corresponding location
in the preoperative image of the tissue that occupied that point. If the tissue
has moved as a result of the intervention, then we would like to know by how
much and which tissue element now occupies this point.
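One standard way to establish this image-to-physical correspondence is paired-point rigid registration: fiducial markers are localized both in the preoperative image and on the patient in the operating room, and the least-squares rotation and translation between the two point sets is recovered with a singular value decomposition. The sketch below uses invented marker coordinates and checks the recovered transformation against a known ground truth.

```python
# A sketch of paired-point rigid registration via SVD, as commonly
# used to register image space to physical space from fiducial
# markers. All coordinates are invented for illustration.
import numpy as np

def register_rigid(image_pts, physical_pts):
    """Least-squares R, t such that R @ image_pts[i] + t ~= physical_pts[i]."""
    ci = image_pts.mean(axis=0)
    cp = physical_pts.mean(axis=0)
    H = (image_pts - ci).T @ (physical_pts - cp)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cp - R @ ci
    return R, t

# Hypothetical fiducial positions in the image (mm)...
img = np.array([[ 0.0,  0.0,  0.0],
                [50.0,  0.0,  0.0],
                [ 0.0, 60.0,  0.0],
                [ 0.0,  0.0, 40.0]])

# ...and the same fiducials in physical space, generated here from a
# known rotation and translation so the recovery can be verified.
theta = np.deg2rad(15.0)
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
true_t = np.array([12.0, -3.0, 7.0])
phys = img @ true_R.T + true_t

R, t = register_rigid(img, phys)
residuals = np.linalg.norm(img @ R.T + t - phys, axis=1)
print("mean fiducial registration error (mm):", residuals.mean())  # ~0 here
```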
This process of establishing correspondence becomes more complicated if
one of the images represents a projection of physical space, as is the case with
most optical images and conventional x-ray radiographs. These images are
called “projection” images. One point in a radiograph will correspond to
some combination of the x-ray attenuation values along the line in the patient
leading from the x-ray source to the imaging plane. This means that one point
in the radiograph will correspond to a line of points through a CT or MR image.
One point in the CT image will only correspond to a component of the inten-
sity seen at a point in the radiograph. In optical images, only the visible surface
will contribute to the image. Establishing correspondence between a pair of
points in two calibrated projection images allows the 3D position of the
underlying point to be determined. This is the basis of stereo-photogrammetry,
used widely outside medicine in robotics, nondestructive testing in industry,
analysis of remote or hostile environments, surveillance work, and analysis
of satellite images.
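The triangulation at the heart of stereo-photogrammetry can be sketched with the linear (direct linear transformation, DLT) method: given the 3x4 projection matrices of two calibrated views and a corresponding image point in each, the 3D position is the null-space solution of a small homogeneous system. The camera parameters and point below are invented for illustration.

```python
# A minimal sketch of linear (DLT) triangulation: given two calibrated
# projection matrices P1, P2 (3x4) and a corresponding point in each
# image, recover the 3D position. All numbers are invented.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Homogeneous linear triangulation of one corresponding point pair."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Two hypothetical cameras with identical intrinsics; the second is
# shifted along x, giving a stereo baseline of 100 mm.
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

X_true = np.array([30.0, -20.0, 500.0])          # ground-truth 3D point (mm)
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

print(triangulate(P1, P2, x1, x2))               # ~ [30., -20., 500.]
```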