We can usefully think of the two images as mappings of points in the patient within their field of view (or domain, Ω) to intensity values:

A : x_A ∈ Ω_A ↦ A(x_A)
B : x_B ∈ Ω_B ↦ B(x_B)
Because the images are likely to have different fields of view, the domains Ω_A and Ω_B will be different. This is a very important factor, which accounts for a good deal of the difficulty in devising accurate and reliable registration algorithms. We will return to this issue later in this section.
As the images A and B represent one object X, imaged with the same or different modalities, there is a relation between the spatial locations in A and B.
Image A is such that position x ∈ X is mapped to x_A, and image B maps x to x_B. The registration process involves recovering the spatial transformation T which maps x_B to x_A over the entire domain of interest, i.e., that maps from Ω_B to Ω_A within the overlapping portion of the domains. We refer to this overlap domain as Ω^T_{A,B}. This notation makes it clear that the overlap domain depends on the domains of the original images A and B, and also on the spatial transformation T. The overlap domain can be defined as:
Ω^T_{A,B} = { x_A ∈ Ω_A | T^{-1}(x_A) ∈ Ω_B }          (3.2)
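Equation 3.2 translates directly into a test applied at every voxel of A: a voxel x_A belongs to Ω^T_{A,B} exactly when its pre-image T^{-1}(x_A) falls inside the domain of B. Below is a minimal sketch for a rigid 2D transformation x_A = R x_B + t, with coordinates kept in voxel units for simplicity; the function name and these conventions are assumptions made for illustration.

```python
import numpy as np

def overlap_domain_mask(shape_A, shape_B, R, t):
    """Boolean mask over A's voxel grid marking the overlap domain (eq. 3.2).

    T maps coordinates of B into coordinates of A as x_A = R @ x_B + t, so
    T^{-1}(x_A) = R.T @ (x_A - t). A voxel of A is in the overlap domain when
    that pre-image lies inside B's domain.
    """
    ys, xs = np.meshgrid(np.arange(shape_A[0]), np.arange(shape_A[1]), indexing="ij")
    x_A = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)  # (N, 2)
    x_B = (x_A - t) @ R                                             # applies R.T row-wise
    inside = np.all((x_B >= 0) & (x_B <= np.array(shape_B) - 1), axis=1)
    return inside.reshape(shape_A)

# Example: B rotated by 10 degrees and shifted relative to A.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
mask = overlap_domain_mask((128, 128), (100, 100), R, t=np.array([20.0, 5.0]))
```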
Registration algorithms that make use of geometrical features in the images involve identifying features, such as sets of image points {x_A} and {x_B}, that correspond to the same physical entity visible in both images, and calculating T for these features.
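As an illustration of this feature-based approach (the specific algorithm is not prescribed by the text), a rigid T can be estimated from corresponding point sets {x_B} and {x_A} using the standard least-squares (Procrustes) solution based on the singular value decomposition:

```python
import numpy as np

def rigid_from_points(points_B, points_A):
    """Least-squares rigid transformation mapping points of B onto points of A.

    Returns (R, t) such that x_A ≈ R @ x_B + t, computed with the standard
    SVD (Procrustes) solution for corresponding point sets.
    """
    P = np.asarray(points_B, dtype=float)
    Q = np.asarray(points_A, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                        # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Corresponding landmark positions identified in both images (synthetic example).
x_B = np.array([[10.0, 12.0], [40.0, 15.0], [25.0, 60.0], [55.0, 50.0]])
true_R = np.array([[np.cos(0.2), -np.sin(0.2)], [np.sin(0.2), np.cos(0.2)]])
x_A = x_B @ true_R.T + np.array([5.0, -3.0])
R, t = rigid_from_points(x_B, x_A)                   # recovers true_R and (5, -3)
```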
Registration algorithms that work directly on image intensity values work differently. These algorithms nearly always iteratively determine the image transformation T that optimizes some measure of the similarity between the voxel intensities in the two images (a voxel similarity measure). At each iteration, they transform the image using the current estimate of T and recalculate a voxel similarity measure. Unless T is simply a translation by an integer number of pixels or voxels, the transformation carried out at each iteration involves interpolation between sample points.
For these algorithms, it is useful to introduce new notation for the transformation that maps both the position and the associated intensity value at that position. In this chapter, we use the notation T when mapping of position is all that is required, and 𝐓 when the intensity at a position is also taken into account. Any time that 𝐓 is used, the type of interpolation used by the algorithm is likely to alter the solution obtained. For example, throughout this chapter, we treat image A as the reference, or target, image and image B as the iteratively transformed, or source, image. We use the notation B^T to represent image B transformed using the current transformation estimate 𝐓. This image B^T is defined at the voxel coordinates of image A. The voxel values in B^T, of course, depend on the type of interpolation used.
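The sketch below illustrates one iteration of such an intensity-based scheme for the simplest case of a pure translation: B is resampled at the voxel coordinates of A (here with linear interpolation via SciPy's map_coordinates) to form B^T, and a voxel similarity measure is evaluated over the overlap domain. The sum of squared differences is used purely as a simple example of a similarity measure, and the helper names are illustrative; a full algorithm would wrap this step in an optimizer over the parameters of T.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def resample_B_onto_A(B, translation, shape_A):
    """Compute B^T: image B under the current translation estimate, sampled at
    the voxel coordinates of image A using linear interpolation.

    For a translation, T(x_B) = x_B + translation, so the pre-image of each
    A-grid point is T^{-1}(x_A) = x_A - translation. Also returns the
    overlap-domain mask (voxels of A whose pre-image lies inside B)."""
    ys, xs = np.meshgrid(np.arange(shape_A[0]), np.arange(shape_A[1]), indexing="ij")
    coords = np.stack([ys - translation[0], xs - translation[1]]).astype(float)
    upper = np.array(B.shape).reshape(-1, 1, 1) - 1
    overlap = np.all((coords >= 0) & (coords <= upper), axis=0)
    B_T = map_coordinates(B, coords, order=1, mode="constant", cval=0.0)
    return B_T, overlap

def ssd(A, B_T, overlap):
    """Sum of squared differences, a simple voxel similarity measure,
    evaluated over the overlap domain only."""
    return np.sum((A[overlap] - B_T[overlap]) ** 2)

# One iteration of an intensity-based scheme: transform B with the current
# estimate of T (here a pure translation) and recompute the similarity measure.
rng = np.random.default_rng(0)
A = rng.random((64, 64))
B = np.roll(A, shift=(-3, 2), axis=(0, 1))            # B is A shifted by a known amount
B_T, overlap = resample_B_onto_A(B, translation=(3.0, -2.0), shape_A=A.shape)
score = ssd(A, B_T, overlap)                          # near zero at the correct translation
```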