facial simulations [15]. For example, simulations can be used to predict the soft-tissue
deformations following surgical alteration of the underlying jaw and skull shape
[16-18]. Similarly, computer-assisted planning of complex maxillofacial reconstructive
surgery has improved outcomes and reduced patient recovery time [19].
Basic questions pertaining to orofacial physiology, such as speech production,
can also be investigated with tissue-level simulations. Speech production involves
highly coordinated movements of the lips, jaw, and tongue. While speech movements
can be analyzed with experimental measurement techniques such as ultrasound,
MRI, electromagnetic articulography, electropalatography, and (to a limited
extent) electromyography, the principal anatomical structures of the vocal tract are
all mechanically coupled. Therefore, in order to understand the neural control of
speech articulations, one must account for the role of the intrinsic, coupled mechan-
ics of the articulators.
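As a minimal illustration of this coupling (not the authors' model; parameter values are purely illustrative), consider two lumped "articulators" joined by a shared spring. A force applied to only one of them still displaces the other, so the motion of any single articulator cannot be interpreted in isolation:

```python
def step_coupled(x1, v1, x2, v2, f1, dt=1e-3, m=0.05, k=20.0, kc=10.0, c=0.3):
    """One semi-implicit Euler step for two damped masses (e.g., 'jaw' and
    'tongue body') grounded by springs k and coupled by a shared spring kc.
    Only mass 1 receives the external force f1."""
    a1 = (f1 - k * x1 - kc * (x1 - x2) - c * v1) / m
    a2 = (-k * x2 - kc * (x2 - x1) - c * v2) / m
    v1 += a1 * dt; x1 += v1 * dt
    v2 += a2 * dt; x2 += v2 * dt
    return x1, v1, x2, v2

x1 = v1 = x2 = v2 = 0.0
for _ in range(3000):                 # 3 s of constant force on mass 1 only
    x1, v1, x2, v2 = step_coupled(x1, v1, x2, v2, f1=0.1)
print(abs(x2) > 1e-4)                 # the unforced articulator has moved too
```

At steady state the unforced mass settles at x2 = kc/(k + kc) · x1, so any coupling stiffness kc > 0 transmits motion between the two structures.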
For simulations to impact the applications described above, two particular considerations
must be addressed. First, many biomedical applications require models that are
representative of individual patients. Patient-specific modeling is commonly limited to
matching the size, shape, and kinematics of a model to a particular patient. For
tissue-scale models, the tissue properties should also be matched to the particular
patient. Second, face-tissue simulations require integration with the underlying skull
and jaw as well as the vocal tract articulators. This is particularly important for
modeling speech production, because the interactions between the lips and teeth, the
tongue and teeth, and the tongue and jaw are critical. Many applications also
require that simulations capture the dynamics of the face and vocal tract structures,
because breathing, mastication, swallowing, and speech production are all dynamic
acts. The effect of tissue dynamics is most pronounced during fast movements, such
as speech production; however, these effects can also have an impact on slower
movements.
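A simple sketch (again hypothetical, with illustrative parameters) shows why dynamic effects grow with movement rate: the inertial force m·a scales roughly with the square of the driving frequency, so a speech-rate gesture produces far larger inertial loads than the same gesture performed slowly:

```python
import math

def peak_inertial_force(freq_hz, steps=4000, dt=1e-3):
    """Drive a damped mass-spring 'tissue' element toward a sinusoidal
    target and return the peak inertial force |m*a| observed.
    Parameters are illustrative, not measured tissue properties."""
    m, k, c = 0.01, 50.0, 0.5        # kg, N/m, N*s/m
    x, v = 0.0, 0.0
    peak = 0.0
    for i in range(steps):
        target = 0.005 * math.sin(2 * math.pi * freq_hz * i * dt)  # 5 mm gesture
        a = (k * (target - x) - c * v) / m   # spring pulls toward target
        v += a * dt                          # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(m * a))
    return peak

# A speech-like 5 Hz movement incurs much larger inertial forces
# than the same 5 mm movement performed at 0.5 Hz.
print(peak_inertial_force(5.0) > peak_inertial_force(0.5))
```

In a quasi-static analysis these inertial terms are dropped entirely, which is why a dynamic formulation is needed once movements approach speech rates.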
In order to address these varied modeling requirements and to apply simulations
to scientific and clinical questions, we have been developing a set of biomechanical
modeling tools as well as a 3D dynamic model of the jaw, skull, tongue, and
face. These models were originally developed in the commercial software package
ANSYS (www.ansys.com, ANSYS, Inc., Canonsburg, PA) and were more recently
re-implemented in the in-house developed software package ArtiSynth
(www.artisynth.org, University of British Columbia, Vancouver, Canada). ArtiSynth
provides us with flexibility to incorporate state-of-the-art algorithms for very efficient
simulations, while ANSYS provides us with a reliable engineering package against
which we can corroborate our ArtiSynth simulation results. In this chapter, we pro-
vide a description of our tissue-scale modeling approach developed in the ArtiSynth
platform. We will focus on aspects of our approach that pertain to the dynamic cou-
pling between the face and the jaw at the tissue scale. We will also review our results
for muscle-driven simulations of speech movements and facial expressions.