Detailed alignment
Automated detailed alignment then seeks to accurately determine the best surface-data-to-MRI transformation [12-14]. Starting from the best transformation of the previous step, the method solves a second minimization problem. In this case it measures the least-squares fit of the two data sets under the current estimated transformation (subject to a maximum distance allowed between a transformed data point and the nearest point on the skin surface, to discount the effects of outliers in the data). This minimization can again be solved using a gradient descent algorithm.
This process runs in about 10 seconds on a Sun UltraSPARC workstation. In essence, the method solves a truncated least-squares fit of the two data sets, refining the transformation obtained in the previous step.
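As a rough illustration of this refinement step, the following Python sketch minimizes a distance-capped (truncated) least-squares objective between the transformed surface points and the nearest MRI skin points. The function names, the parameterization, and the use of an off-the-shelf SciPy optimizer in place of the gradient descent described above are assumptions for illustration, not the original implementation.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial import cKDTree

    def rigid_transform(params, pts):
        # params = (rx, ry, rz, tx, ty, tz): rotation angles in radians, translation in mm
        rx, ry, rz, tx, ty, tz = params
        cx, sx = np.cos(rx), np.sin(rx)
        cy, sy = np.cos(ry), np.sin(ry)
        cz, sz = np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return pts @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])

    def truncated_cost(params, surface_pts, skin_tree, d_max):
        # Squared point-to-skin distances, capped at d_max to discount outliers
        d, _ = skin_tree.query(rigid_transform(params, surface_pts))
        return np.sum(np.minimum(d, d_max) ** 2)

    def refine_alignment(surface_pts, skin_pts, init_params, d_max=10.0):
        # Refine the initial transform by minimizing the truncated least-squares cost
        tree = cKDTree(skin_pts)
        res = minimize(truncated_cost, np.asarray(init_params, dtype=float),
                       args=(surface_pts, tree, d_max), method="Powell")
        return res.x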
Figure 6.5-4 Example of augmented reality visualization. Tumor and ventricles have been overlaid onto a live video view of the patient.
Stochastic perturbation
To ensure that the solution found by this process is not a local minimum, the method randomly perturbs the transformation and reruns the minimization. If the system converges to the same solution over several such trials, it terminates with this registration.
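A minimal sketch of this perturb-and-retry loop is given below. The trial count, perturbation magnitudes, and the policy of adopting a new solution and restarting the count when a trial lands elsewhere are all assumptions, since the text only specifies that the registration is accepted once several trials converge to the same solution; the refine argument stands for any refinement routine such as the sketch above.

    import numpy as np

    def perturb_and_verify(refine, params, trials=5, max_iters=50,
                           rot_sigma=0.05, trans_sigma=5.0, tol=1e-2):
        # refine: maps an initial 6-vector of transform parameters to refined parameters
        rng = np.random.default_rng()
        best = np.asarray(params, dtype=float)
        stable = 0
        for _ in range(max_iters):
            noise = np.concatenate([rng.normal(0.0, rot_sigma, 3),    # radians
                                    rng.normal(0.0, trans_sigma, 3)])  # millimetres
            candidate = refine(best + noise)
            if np.linalg.norm(candidate - best) <= tol:
                stable += 1                  # converged back to the same solution
                if stable >= trials:
                    return best              # stable over several trials: accept it
            else:
                best, stable = candidate, 0  # different solution; adopt it, restart count
        return best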
Verifying the registration
Three verification tools are used to inspect the registration results, as the objective functions optimized by the registration algorithm may not be sufficient to guarantee the correct solution. One verification tool overlays the MRI skin on the video image of the patient (Fig. 6.5-5), except that we animate the visualization by varying the blending of the MRI skin and video image. A second verification tool overlays the sensed data on the MRI skin, color-coding each sensed point by its distance to the nearest MRI skin point. Such a residual error display identifies possible biases remaining in the registration solution. A third verification tool compares locations of landmarks. Throughout the surgery, the surgeon uses the optically tracked probe to point to distinctive anatomical structures. The offset of the probe position from the actual point in the MR volume is then observed in the display. This serves to measure residual registration errors within the surgical cavity.
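The residual-error display of the second tool can be sketched as follows: each sensed point is colored on a green-to-red scale by its distance to the nearest MRI skin point after registration. The color mapping and the 5 mm saturation distance are illustrative assumptions, as are the names.

    import numpy as np
    from scipy.spatial import cKDTree

    def residual_colors(registered_pts, skin_pts, max_mm=5.0):
        # Distance from each registered data point to the nearest MRI skin point
        d, _ = cKDTree(skin_pts).query(registered_pts)
        t = np.clip(d / max_mm, 0.0, 1.0)          # 0 = on the skin, 1 = >= max_mm away
        colors = np.stack([t, 1.0 - t, np.zeros_like(t)], axis=1)  # RGB in [0, 1]
        return d, colors                           # clusters of red suggest a residual bias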
Camera calibration
The final stage of the process is to determine the relationship between a video camera viewing the patient and the patient's position. This can be accomplished by using a trackable probe to identify the positions of points on a calibration object in patient coordinates. By relating those coordinates to the observed positions in the video image, one can solve for the transformation relating the camera to the patient [12-14].
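One standard way to solve this 3D-to-2D calibration problem is a perspective-n-point (PnP) solver, sketched below with OpenCV. The intrinsic matrix K is assumed known from a prior camera calibration, at least four non-collinear calibration points are assumed, and the function and variable names are hypothetical rather than taken from the original system.

    import numpy as np
    import cv2

    def calibrate_camera_pose(patient_pts_3d, image_pts_2d, K, dist_coeffs=None):
        # Solve for the rotation/translation taking patient coordinates into the camera
        # frame, from probed calibration points and their observed pixel positions.
        if dist_coeffs is None:
            dist_coeffs = np.zeros(5)          # assume negligible lens distortion
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(patient_pts_3d, dtype=np.float64),
            np.asarray(image_pts_2d, dtype=np.float64),
            np.asarray(K, dtype=np.float64),
            dist_coeffs)
        if not ok:
            raise RuntimeError("PnP pose estimation failed")
        R, _ = cv2.Rodrigues(rvec)             # rotation vector -> 3x3 matrix
        return R, tvec.reshape(3)              # X_camera = R @ X_patient + t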
Augmented reality visualization
By coupling all of these transformations together, we
can provide visualizations of internal structures to the
surgeon. In particular, we can transform the segmented
MRI model (or any portions thereof) into the coordinate
frame of the patient, then render those structures
through the camera transformation, to create a synthetic
image of how those structures should appear in the
camera. This can then be mixed with a live video view to overlay the structures onto the actual image (Fig. 6.5-4).
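A simplified sketch of this overlay pipeline is shown below: segmented MRI points are mapped through the registration transform, the camera extrinsics, and the camera intrinsics, and then blended into the video frame. The point-splat "rendering" and all names are illustrative assumptions; a real system would render full surface models rather than points.

    import numpy as np

    def project_points(pts_mri, T_mri_to_patient, R_cam, t_cam, K):
        # MRI coords -> patient coords -> camera coords -> pixel coordinates
        pts_h = np.c_[pts_mri, np.ones(len(pts_mri))]
        pts_patient = (T_mri_to_patient @ pts_h.T).T[:, :3]   # 4x4 registration transform
        pts_cam = pts_patient @ R_cam.T + t_cam               # camera extrinsics
        uvw = pts_cam @ K.T                                    # camera intrinsics
        return uvw[:, :2] / uvw[:, 2:3]                        # perspective divide

    def overlay(video_frame, pixels, color=(0, 0, 255), alpha=0.5):
        # Splat projected structure points into the live video frame and alpha-blend
        h, w = video_frame.shape[:2]
        uv = np.round(pixels).astype(int)
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        out = video_frame.copy()
        out[uv[ok, 1], uv[ok, 0]] = (alpha * np.array(color) +
                                     (1 - alpha) * out[uv[ok, 1], uv[ok, 0]]).astype(out.dtype)
        return out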
6.5.2.3 Tracking subsystem
Tracking is the process by which objects are dynamically localized in the patient's coordinate system. Of particular interest to us is the tracking of medical instruments and the patient's head. The two most common methods of tracking are articulated arms and optical tracking. Articulated arms are attached to the head clamp or operating table and use encoders to accurately compute the angles of their joints and the resulting 3D positions of their end points. Such devices, though, may be bulky in the operating room and, because of their mechanical nature, are not as fault tolerant as other methods. Optical trackers use multiple cameras to triangulate the 3D location of flashing LEDs that may be mounted on any object to be tracked. Such devices are generally perceived as the most accurate, efficient, and reliable localization systems [2, 5]. Other methods such as acoustic or magnetic field sensing are being explored as well, but can be more sensitive to environmental effects. We use optical tracking (the Flashpoint system by IGT Inc., Boulder, CO, USA).
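The triangulation underlying such optical trackers can be illustrated with a standard linear (DLT) solution: given an LED's pixel position in two or more calibrated cameras with known 3x4 projection matrices, its 3D position follows from a small least-squares problem. This is a generic sketch, not the Flashpoint system's actual algorithm, and the names are hypothetical.

    import numpy as np

    def triangulate_led(pixel_coords, projection_matrices):
        # Linear (DLT) triangulation from >= 2 calibrated views.
        # pixel_coords: list of (u, v); projection_matrices: matching list of 3x4 arrays.
        rows = []
        for (u, v), P in zip(pixel_coords, projection_matrices):
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        A = np.stack(rows)
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]                     # dehomogenize to a 3D point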