(a) For each voxel center in the transformed image, find the corresponding point in the moving image (transformation).
(b) Find the intensity at each voxel center by interpolating intensity values in the
moving image (interpolation).
The intensity interpolation does not depend on the registration method used (i.e., the same interpolation can be used for different registration methods). Interpolation algorithms that can be used include nearest neighbor, linear, quadratic, cubic, cubic B-spline, Gaussian and sinc interpolation.
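As an illustration of steps (a) and (b), the sketch below is a minimal example, assuming the moving image is held in a NumPy array and that the mapped coordinates of the voxel centers have already been computed; it evaluates intensities in the moving image with scipy.ndimage.map_coordinates, whose order argument selects the interpolation scheme.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Hypothetical inputs, invented for this sketch: a 3-D moving image and
# the points in moving-image voxel space that correspond to the voxel
# centers of the transformed image, stacked as a 3 x N array.
moving = np.random.rand(64, 64, 64)
points = np.random.rand(3, 1000) * 63.0

# The interpolation step is a plug-in choice, independent of the
# registration method that produced the mapped points.
nearest = map_coordinates(moving, points, order=0)  # nearest neighbor
linear  = map_coordinates(moving, points, order=1)  # (tri)linear
cubic   = map_coordinates(moving, points, order=3)  # cubic B-spline
```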
The main difference between the transformer used in image-based registration and the one used in biomechanics-based registration lies in the way the transformation is described. In the case of image-based registration, the transformation is described using methods ranging from simple transformation matrices (rigid, similarity or affine transformations) to B-splines, thin-plate splines or Bezier functions. All these methods require relatively low computational effort to perform the transformation at a given point. In the case of biomechanics-based registration methods, the transformation is defined as the deformation field computed at the nodes of the mesh. We look at how this deformation field can be used to perform the transformation in the next subsection.
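Before turning to that subsection, a brief sketch of the image-based case may help fix ideas: a single homogeneous transformation matrix maps any voxel center to its corresponding point in the moving image in closed form. The matrix values below are invented purely for illustration; a biomechanics-based method has no such closed-form expression and instead supplies displacements only at the mesh nodes.

```python
import numpy as np

# Hypothetical rigid transformation: a small rotation about the z axis
# plus a translation, written as a 4 x 4 homogeneous matrix.
theta = np.deg2rad(5.0)
T = np.array([[np.cos(theta), -np.sin(theta), 0.0,  2.0],
              [np.sin(theta),  np.cos(theta), 0.0, -1.5],
              [0.0,            0.0,           1.0,  0.5],
              [0.0,            0.0,           0.0,  1.0]])

def transform_point(T, p):
    """Map a voxel center p = (x, y, z) to a point in the moving image."""
    x, y, z = p
    return (T @ np.array([x, y, z, 1.0]))[:3]

print(transform_point(T, (10.0, 20.0, 30.0)))
```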
2.1 Performing the Transformation Using the Deformation Field from a Biomechanical Model
To illustrate our method, we consider the case of brain shift. The biomechanical model is constructed based on the moving image (preoperative image) (Fig. 1b). We can distinguish two main regions in the moving image: the part of the image which is included in the model (brain tissue, ventricles, tumor) and the part of the image which is not (all the rest of the image, including the skull, the skin, and the exterior of the head). The deformation field is computed only for the part of the image included in the model, and only that part of the image should be deformed (as the surrounding area, including the skull, is fixed). Because only part of the image is moving, there will be regions of the transformed image that have no correspondence in the moving image (for example, the region of the brain that collapses); see Fig. 2b in Sect. 3.
As presented before, the deformation field obtained from a biomechanical model is described as the displacements at every node of the computational grid (mesh). In order to find the corresponding point in the moving image for every voxel center in the deformed image, interpolation must be performed using the positions of the nodes. This is a 3D scattered data interpolation problem. The simplest 3D scattered data interpolation method is linear tetrahedron-based interpolation [16]. It involves connecting the nodes using a tetrahedral mesh (tessellation), associating voxel centers with tetrahedra (by determining within which tetrahedron each center is located), and then using barycentric coordinates for interpolation. The tetrahedral mesh can be created based on the original mesh used in the biomechanical model. Nearest
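As a concrete sketch of this tetrahedron-based interpolation (with invented node positions and displacements, and with the search for the containing tetrahedron omitted), one possible arrangement is to compute the barycentric coordinates of a voxel center with respect to the deformed tetrahedron and then apply the same weights to the original node positions, which yields the corresponding point in the moving image. Because the weights are linear within the tetrahedron, they can interpolate any nodal quantity (positions or displacements) in the same way.

```python
import numpy as np

def barycentric_coords(tet, p):
    """Barycentric coordinates of point p with respect to a tetrahedron.

    tet: 4 x 3 array of vertex coordinates, p: point (x, y, z).
    The four weights sum to one and are all non-negative when p lies
    inside the tetrahedron.
    """
    A = np.vstack([tet.T, np.ones(4)])       # 4 x 4 linear system
    b = np.append(np.asarray(p, float), 1.0)
    return np.linalg.solve(A, b)

# Invented example: original node positions (moving image) and the
# displacements computed for them by the biomechanical model.
nodes_orig = np.array([[0.0, 0.0, 0.0],
                       [1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])
disp = np.array([[0.10, 0.00, 0.00],
                 [0.00, 0.05, 0.00],
                 [0.00, 0.00, 0.10],
                 [0.05, 0.05, 0.00]])
nodes_def = nodes_orig + disp

# A voxel center of the transformed image, assumed (for this sketch)
# to have already been located inside this deformed tetrahedron.
voxel_center = np.array([0.30, 0.25, 0.25])
w = barycentric_coords(nodes_def, voxel_center)

# The same weights applied to the original node positions give the
# corresponding point in the moving image.
point_in_moving = w @ nodes_orig
```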