because of the difference in the information available in the spaces being
aligned. In these cases, biomechanical models of the sort described in this
chapter have the potential to provide model-based registration techniques
that can align an image collected on one occasion with a very different image
or the coordinates of a surgical instrument at a later time. These models can
take into account the mechanical properties of the tissue, the operative con-
ditions, and the presence of surgical instruments such as retractors. Further-
more, sparse intraoperative information, such as photographs of the site of
resection or ultrasound images, can be used as constraints on the models to
improve accuracy.
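By way of illustration, the short Python sketch below shows one way sparse intraoperative measurements can drive a dense update of a preoperative image: a handful of measured surface displacements are interpolated with a thin-plate spline and used to warp a synthetic slice. The interpolation is a deliberate simplification standing in for the biomechanical models described in this chapter, and all point locations, displacement values, and variable names are hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

# Synthetic stand-in for one preoperative MR slice (purely illustrative).
ny, nx = 128, 128
yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
preop = np.exp(-((yy - 64.0) ** 2 + (xx - 64.0) ** 2) / (2.0 * 20.0 ** 2))

# Hypothetical sparse intraoperative measurements: pixel locations and the
# displacement (dy, dx) observed at each, e.g. tracked cortical surface points.
pts = np.array([[20.0, 64.0], [64.0, 20.0], [64.0, 108.0], [108.0, 64.0]])
disp = np.array([[6.0, 0.0], [0.0, 3.0], [0.0, -3.0], [-2.0, 0.0]])

# Extend the sparse measurements to a dense displacement field.  In the
# model-based setting discussed in the text, a biomechanical model
# constrained by these measurements would take the place of this step.
rbf = RBFInterpolator(pts, disp, kernel="thin_plate_spline")
grid = np.column_stack([yy.ravel(), xx.ravel()])
u = rbf(grid).reshape(ny, nx, 2)

# Warp the preoperative slice with the dense field (backward mapping),
# giving an "updated" image consistent with the sparse measurements.
coords = np.stack([yy - u[..., 0], xx - u[..., 1]])
updated = map_coordinates(preop, coords, order=1, mode="nearest")
```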
Modeling activities associated with surgical simulation have been explored
for a number of years, and important progress continues to be reported.1,2
The goal of these virtual reality experiences is to produce a video-like visual-
ization of a deforming tissue that appears to be real. For these simulations,
there is a fundamental shift from the traditional emphasis on predictive mod-
eling with secondary interest in computational speed to a situation where
these priorities are exactly reversed—the primary interest is speed of compu-
tation to enable real-time interaction, and physical accuracy is of less concern.
In contrast, the modeling efforts described in this chapter are intended to
impact intraoperative clinical decision making within the image-guided
framework; hence, there is an emphasis not only on modeling accuracy and
speed (although not at the real-time or video refresh rate) but also on the use
of both preoperative and intraoperative data to define and constrain
model parameters and outcomes.
The motivation for the deployment of such physically defined deformation
models of brain tissue has been the recognition and recent characterization
and quantification of brain motion during surgery. A number of studies have
tracked both surface and subsurface points in the brain and reported that
movements on the order of a centimeter or more can occur intraoperatively.3,4
During image-guided procedures this motion manifests as a dynamic regis-
tration error which erodes the effectiveness of image guidance when the
operating room (OR) is registered with the statically defined preoperative
image space. While it is clear that intraoperative brain motion is significant
and can severely compromise the added advantage of image-based surgical
navigation, strategies to address this source of error intraoperatively are in
early stages of development. One intriguing approach is to employ compu-
tational methods based on physical models of tissue deformation to compen-
sate for intraoperative brain motion.
A conceptually powerful paradigm, sketched in code after the list below, would be to:
• Update preoperative images intraoperatively by generating a patient-specific computational model based on segmentation of high-resolution preoperative scans
• Collect readily accessible but incomplete intraoperative information relevant to brain motion with low-cost tracking technology
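A minimal sketch of the model-update idea behind this paradigm is given below. Sparse measured displacements are extended over a toy two-dimensional domain by solving a Laplace problem for each displacement component, with zero motion on the outer boundary as a crude proxy for the skull. This is not the finite-element biomechanical modeling described in the chapter; it is a simplified stand-in, and the constraint locations and values are invented for illustration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 64                                  # toy n x n domain
idx = lambda i, j: i * n + j            # grid point -> unknown index

# Hypothetical sparse constraints: (row, col) -> measured (dy, dx).
constraints = {(10, 32): (4.0, 0.0), (32, 10): (0.0, 2.0), (32, 54): (0.0, -2.0)}

def harmonic_extension(component):
    """Extend one displacement component over the grid by solving
    Laplace's equation, with the sparse measurements and a fixed
    (zero-motion) outer boundary imposed as Dirichlet conditions."""
    A = sp.lil_matrix((n * n, n * n))
    b = np.zeros(n * n)
    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            if i in (0, n - 1) or j in (0, n - 1):   # outer boundary: u = 0
                A[k, k] = 1.0
            elif (i, j) in constraints:              # measured point: u = data
                A[k, k] = 1.0
                b[k] = constraints[(i, j)][component]
            else:                                    # interior: 5-point Laplacian
                A[k, k] = 4.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    A[k, idx(i + di, j + dj)] = -1.0
    return spsolve(A.tocsr(), b).reshape(n, n)

uy = harmonic_extension(0)   # dense displacement field, y component
ux = harmonic_extension(1)   # dense displacement field, x component
```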