polation models [Hong et al., 2001a, Tao and Huang, 1999], parameterized
models [Parke, 1974], physics-based models [Waters, 1987], and more re-
cently machine-learning-based models [Kshirsagar et al., 2001, Hong et al.,
2001b, Reveret and Essa, 2001]. Free-form interpolation models define a set of
points as control points, and then use the displacements of the control points to
interpolate the movements of arbitrary facial surface points. Popular interpolation
functions include affine functions [Hong et al., 2001a], splines, radial basis
functions, the Bezier volume model [Tao and Huang, 1999], and others.
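
As a rough sketch of how such free-form interpolation can be realized (the
function and parameter names below are illustrative, not taken from the cited
works), Gaussian radial basis functions can propagate control-point
displacements to the remaining surface vertices:

    import numpy as np

    def rbf_interpolate(control_pts, control_disp, surface_pts, sigma=2.0):
        """Propagate control-point displacements to all surface points with
        Gaussian radial basis functions (illustrative sketch only)."""
        # Pairwise distances among control points and the resulting kernel matrix.
        d_cc = np.linalg.norm(control_pts[:, None, :] - control_pts[None, :, :], axis=-1)
        phi_cc = np.exp(-(d_cc / sigma) ** 2)
        # Weights chosen so the interpolant reproduces the given control displacements.
        weights = np.linalg.solve(phi_cc, control_disp)            # shape (n, 3)
        # Evaluate the interpolant at every surface point.
        d_sc = np.linalg.norm(surface_pts[:, None, :] - control_pts[None, :, :], axis=-1)
        return np.exp(-(d_sc / sigma) ** 2) @ weights              # shape (m, 3)

    # Hypothetical toy data: 5 control points driving 100 surface vertices.
    ctrl = np.random.rand(5, 3)
    ctrl_disp = 0.1 * np.random.randn(5, 3)
    surface = np.random.rand(100, 3)
    deformed = surface + rbf_interpolate(ctrl, ctrl_disp, surface)
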
Parameterized models (such as Parke's model [Parke, 1974] and its descendants) use
facial-feature-based parameters in customized interpolation functions. Physics-based
muscle models [Waters, 1987] use dynamics equations to model facial muscles.
The face deformation can then be determined by solving those equations. Be-
cause of the high complexity of natural facial motion, these models usually
need extensive manual adjustments to achieve plausible facial deformation. To
approximate the space of facial deformation, researchers have proposed linear
subspaces based on the Facial Action Coding System (FACS) [Essa and Pentland, 1997, Tao
and Huang, 1999]. FACS [Ekman and Friesen, 1977] describes arbitrary facial
deformation as a combination of Action Units (AUs) of the face. Because AUs
are defined only qualitatively and contain no temporal information, they usually
have to be customized manually for computation. Brand [Brand, 2001] used low-
level image motion to learn a linear subspace model from raw video. However,
the estimated low-level image motion is noisy, so the derived model is less
realistic. With recent advances in motion capture technology, it is now possible
to collect large amounts of real human motion data. Researchers have therefore
turned to machine learning techniques that learn models from motion capture
data and thus capture the characteristics of real human motion. Some examples
of such approaches are discussed in Section 1.3.
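
As a concrete, hypothetical illustration of learning a linear deformation
subspace from motion capture data, the following sketch applies principal
component analysis (via the SVD) to a matrix of captured marker displacements;
the resulting basis vectors play a role analogous to data-driven AUs:

    import numpy as np

    def learn_deformation_basis(frames, num_modes=5):
        """Learn a linear deformation subspace from motion capture frames.
        frames: (T, 3N) array; each row holds N flattened 3D marker
        displacements relative to the neutral face (hypothetical layout)."""
        mean = frames.mean(axis=0)
        # SVD of the centered data; rows of vt are the principal deformation modes.
        _, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
        return mean, vt[:num_modes]

    def synthesize(mean, basis, coeffs):
        """A deformation in the subspace is the mean plus a weighted combination
        of the learned modes (analogous to combining AUs)."""
        return mean + coeffs @ basis

    # Hypothetical toy data: 200 captured frames of 30 markers (90 coordinates).
    frames = np.random.randn(200, 90)
    mean, basis = learn_deformation_basis(frames, num_modes=5)
    new_deformation = synthesize(mean, basis, np.array([0.5, -0.2, 0.0, 0.1, 0.3]))
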
1.2 Facial temporal deformation modeling
For face animation and tracking, temporal facial deformation also needs to
be modeled. A temporal facial deformation model describes the temporal trajectory
of facial deformation, given constraints at certain time instances. Waters and
Levergood [Waters and Levergood, 1993] used a sinusoidal interpolation scheme
for temporal modeling. Pelachaud et al. [Pelachaud et al., 1991] and Cohen
and Massaro [Cohen and Massaro, 1993] customized co-articulation functions
based on prior knowledge, to model the temporal trajectory between given key
shapes. Physics-based methods solve dynamics equations for these trajectories.
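
As a minimal sketch of such interpolation-based temporal modeling (an assumed
cosine-easing formulation, not necessarily the exact scheme of the cited
papers), two key shapes can be blended with a sinusoidal weight that has zero
velocity at the key times:

    import numpy as np

    def sinusoidal_blend(shape_a, shape_b, t, t0, t1):
        """Blend two key shapes over [t0, t1] with a cosine ease-in/ease-out
        weight, giving zero velocity at both key times (illustrative sketch)."""
        s = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)   # normalized time in [0, 1]
        w = 0.5 - 0.5 * np.cos(np.pi * s)             # 0 at t0, 1 at t1
        return (1.0 - w) * shape_a + w * shape_b

    # Hypothetical key shapes: two flattened facial deformations of 90 coordinates.
    key_a, key_b = np.zeros(90), np.ones(90)
    trajectory = [sinusoidal_blend(key_a, key_b, t, 0.0, 1.0)
                  for t in np.linspace(0.0, 1.0, 30)]
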
Recently, statistical methods have been applied to facial temporal deformation
modeling. Hidden Markov Models (HMMs) trained on motion capture data have
been shown to capture the dynamics of natural facial deformation [Brand, 1999].
Ezzat et al. [Ezzat et al., 2002] pose the trajectory modeling
problem as a regularization problem [Wahba, 1990]. The goal is to synthesize a