learn and perform well in general conditions. We also exploit the statistics of the training data: the covariances of the key shapes are used as weights in the NURBS representation, which increases the likelihood of the generated trajectory. The details of temporal trajectory modeling are discussed in Chapter 5.
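To make the idea concrete, the following is a minimal sketch of a rational B-spline (NURBS) trajectory whose control points are key-shape coefficients and whose weights are derived from the training data. The specific weighting rule shown (inverse of each key shape's training variance, so that low-variance, i.e. more reliable, key shapes pull the curve more strongly) is an illustrative assumption, not necessarily the exact formulation used in this work; the control-point values and knot vector are likewise hypothetical.

```python
import numpy as np

def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion: i-th B-spline basis function of degree p at t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0:
        left = (t - knots[i]) / denom * bspline_basis(i, p - 1, t, knots)
    right = 0.0
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0:
        right = ((knots[i + p + 1] - t) / denom
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

def nurbs_point(t, ctrl, weights, knots, p):
    """Evaluate a NURBS curve at parameter t with per-control-point weights."""
    n = len(ctrl)
    basis = np.array([bspline_basis(i, p, t, knots) for i in range(n)])
    wb = weights * basis                      # weighted basis functions
    return (wb[:, None] * ctrl).sum(axis=0) / wb.sum()

# Hypothetical example: four key-shape coefficient vectors as control points.
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
var = np.array([0.1, 0.5, 0.5, 0.1])          # assumed per-key-shape variances
weights = 1.0 / (var + 1e-8)                  # assumed inverse-variance weights
knots = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]   # clamped knot vector, degree 2
pt = nurbs_point(0.25, ctrl, weights, knots, p=2)
```

Because the weights are positive, the generated point stays within the convex hull of the key-shape control points; the inverse-variance weighting simply biases the trajectory toward key shapes that were observed consistently in the training data.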
In our framework, we can alternatively infer facial derormation dynamics
from correlated signals, such as in speech-driven animation and visual face
tracking. In that case, the facial deformation is inferred from the input signals at each time instant. (See Chapter 4, Section 2 for animation driven by visual tracking, and Chapter 5, Section 1.3 for speech-driven animation.) The mapping
from the related signal to the facial deformation, however, can be many-to-many (as in speech-driven animation) or noisy (as in tracking), so the proposed temporal model remains useful. We plan to incorporate the dynamics model into speech-driven animation and visual face tracking in the future.
In this chapter, a geometric 3D facial motion model was introduced. Compared with handcrafted models, the proposed model is derived from motion capture data, so it can capture the characteristics of real facial motion more easily.
We have also discussed methods for applying the motion model to different
subjects and face models with different topologies. The applications of the
motion model in face analysis and synthesis will be presented in Chapter 4 and
Chapter 5, respectively.