Face animations are essential to computer games, film making, online chat,
virtual presence, video conferencing, and many other applications.
Many methods have been proposed for modeling the 3D geometry of faces.
Traditionally, human face models were built using interactive design tools.
To reduce this labor-intensive manual work, researchers have applied prior
knowledge such as anthropometry [DeCarlo et al., 1998]. More recently, as 3D
sensing techniques have become available, more realistic models can be derived
from 3D measurements of faces. So far, the most popular commercially available
tools are those based on laser scanners. However, these scanners are usually
expensive. Moreover, the data are usually noisy, requiring extensive hand
touch-up and manual registration before the model can be used in analysis and
synthesis. Because inexpensive computers and image/video sensors are now
widely available, there is great interest in producing face models directly
from images.
in producing face models directly from images. In spite of progress toward
this goal, this type of techniques are still computationally expensive and need
In this topic, we will give an overview of these 3D face modeling techniques.
Then we will describe the tools in our iFACE system for building personalized
3D face models. The iFACE system is a 3D face modeling and animation system
developed on top of the 3D face processing framework. It takes Cyberware™
3D scanner data of a subject's head as input and provides a set of tools that
allow the user to interactively fit a generic face model to the scanner data.
Later in this topic, we show that these models can be used effectively in
model-based 3D face tracking and in 3D face synthesis such as text- and
speech-driven face animation.
2.3 Geometric-based facial motion modeling, analysis, and synthesis
Accurate face motion analysis and realistic face animation demand good
models of the temporal and spatial facial deformation. One type of approach
uses geometric models [Black and Yacoob, 1995, DeCarlo and Metaxas, 2000,
Essa and Pentland, 1997, Tao and Huang, 1999, Terzopoulos and Waters, 1990a].
A geometric facial motion model describes face geometry deformation at the
macrostructure level. The deformation of a 3D face surface can be represented
by the displacement vectors of face surface points (i.e., vertices).
In free-form interpolation models [Hong et al., 2001a, Tao and Huang, 1999],
the displacement vectors of certain control points are predefined using
interactive editing tools. The displacement vectors of the remaining face
points are then generated by interpolation functions, such as affine
functions, radial basis functions (RBFs), and Bézier volumes. In
physics-based models [Waters, 1987], the face vertex displacements are
generated by dynamics equations whose parameters are manually tuned. To
obtain a higher level of abstraction
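As a minimal sketch of the free-form interpolation idea, the following code
propagates predefined control-point displacements to all mesh vertices with a
Gaussian radial basis function. The Gaussian kernel, its width, and the
function name are illustrative assumptions, not the specific choices of the
cited systems:

```python
import numpy as np

def rbf_interpolate_displacements(control_pts, control_disp, vertices, sigma=0.5):
    """Interpolate 3D displacement vectors from a few control points
    to all face mesh vertices using Gaussian RBFs (illustrative sketch)."""
    # Pairwise distances between control points -> RBF kernel matrix
    d = np.linalg.norm(control_pts[:, None, :] - control_pts[None, :, :], axis=-1)
    Phi = np.exp(-(d / sigma) ** 2)
    # Solve for RBF weights; one linear system shared by the x, y, z axes
    W = np.linalg.solve(Phi, control_disp)          # shape: (n_control, 3)
    # Evaluate the interpolant at every mesh vertex
    dv = np.linalg.norm(vertices[:, None, :] - control_pts[None, :, :], axis=-1)
    return np.exp(-(dv / sigma) ** 2) @ W           # shape: (n_vertices, 3)

# Toy example: three control points, one of which is pushed along z;
# a fourth free vertex receives a blended displacement.
ctrl = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
disp = np.array([[0.0, 0.0, 0.1], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
verts = np.vstack([ctrl, [[0.5, 0.5, 0.0]]])
new_disp = rbf_interpolate_displacements(ctrl, disp, verts)
```

By construction the interpolant reproduces the prescribed displacements
exactly at the control points, while the remaining vertices deform smoothly
with distance, which is the behavior the free-form models above rely on.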