chin, and neck. There are several scale distances between facial features: head x, y, z;¹ chin to mouth and chin to eye; eye to forehead; eye x and y; and widths of the jaw, cheeks, nose bridge, and nostril. Other conformal parameters translate features of the face: chin in x and z; end of nose in x and z; eyebrow in z. Even these are not enough to generate all possible faces, although they can be used to generate a wide variety.
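The effect of such conformal parameters can be sketched as region-local transformations of the mesh. The sketch below is illustrative only: the function name, the vertex grouping, and the way a region is scaled about its centroid are assumptions in the spirit of Parke's parameterization, not his actual implementation.

```python
import numpy as np

def apply_conformal_params(vertices, region_mask, scale_xyz, translate_xz):
    """Scale a region of face vertices about its centroid, then
    translate it in x and z (e.g. moving the chin forward and up).
    This is a hypothetical illustration of conformal parameters."""
    v = vertices.copy()
    region = v[region_mask]
    centroid = region.mean(axis=0)
    # scale each axis independently about the region's centroid
    region = (region - centroid) * np.asarray(scale_xyz) + centroid
    region[:, 0] += translate_xz[0]   # translate in x
    region[:, 2] += translate_xz[1]   # translate in z
    v[region_mask] = region
    return v

# toy four-vertex patch; the first two vertices play the role of a "chin"
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 1.0, 1.0]])
mask = np.array([True, True, False, False])
out = apply_conformal_params(verts, mask, (1.2, 1.0, 1.0), (0.1, -0.05))
```

Vertices outside the masked region are untouched, so several such parameters can be applied in sequence to different regions of the same head.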
Parke's model was not developed from anatomical principles but from intuition gained from artistic renderings of the human face. Facial anthropometric statistics and proportions can be used to constrain the facial surface to generate realistic geometries of a human head [8]. Variational techniques can then be used to create realistic facial geometry from a deformed prototype that fits the constraints. This approach is useful for generating heads for a crowd scene or a background character. It may also be useful as a starting point for some other character; however, the result will be heavily influenced by the prototype used.
The MPEG-4 standard proposes tools for the efficient encoding of multimedia scenes. It includes a set of facial definition parameters (FDPs) [15] that are devoted mainly to facial animation for purposes of video teleconferencing. Figure 10.10 shows the feature points defined by the standard. Once the model is defined in this way, it can be animated by an associated set of facial animation parameters (FAPs) [14], also defined in the MPEG-4 standard. MPEG-4 defines 68 FAPs. The FAPs control rigid rotation of the head, eyeballs, eyelids, and mandible. Other low-level parameters indicate the translation of a corresponding feature point, with respect to its position in the neutral face, along one of the coordinate axes [7].
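A low-level FAP of this kind can be read as a signed displacement of one feature point along one axis, relative to the neutral face, with the magnitude expressed in a face-specific unit so that the same parameter stream animates differently proportioned heads. The sketch below is an assumption-laden illustration of that interpretation; the unit value, coordinates, and function name are hypothetical and are not taken from the standard's tables.

```python
def apply_fap(neutral_pos, axis, fap_value, unit):
    """Displace a feature point along one coordinate axis.
    neutral_pos: (x, y, z) of the point in the neutral face
    axis: 0 = x, 1 = y, 2 = z
    fap_value: encoded integer amplitude of the parameter
    unit: face-specific scale factor (hypothetical value here)"""
    pos = list(neutral_pos)
    pos[axis] += fap_value * unit
    return tuple(pos)

# displace a jaw feature point downward (negative y) by 30 units
neutral = (0.0, -4.0, 9.5)   # hypothetical coordinates
moved = apply_fap(neutral, axis=1, fap_value=-30, unit=0.01)
```

Because each parameter touches a single point, a deformation layer (e.g. skinning or interpolation between feature points) is still needed to move the surrounding mesh vertices with the feature point.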
One other interesting approach to generating a model of a face from a generic model is to fit it to images in a video sequence [8]. While not a technique developed for animation applications, it is useful for generating a model of the face of a specific individual. A parameterized model of a face is set up in
a three-dimensional viewing configuration closely matching that of the camera that produced the video
images. Feature points are located on the image of the face in the video and are also located on the three-
dimensional synthetic model. Camera parameters and face model parameters are then modified to more
closely match the video by using the pseudoinverse of the Jacobian. (The Jacobian is the matrix of
partial derivatives that relates changes in parameters to changes in measurements.) By computing
the difference in the measurements between the feature points in the image and the projected feature
points from the synthetic setup, the pseudoinverse of the Jacobian indicates how to change the para-
metric values to reduce the measurement differences.
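One such update step can be sketched as follows. This is a generic Gauss-Newton-style iteration using the pseudoinverse of a numerically estimated Jacobian, under the assumption that `project` maps parameter values to measured feature-point positions; the toy linear projection at the end is purely illustrative.

```python
import numpy as np

def fit_step(params, project, observed):
    """One parameter update: estimate the Jacobian by finite
    differences, then move the parameters in the direction that
    reduces the residual between observed and predicted points."""
    params = np.asarray(params, dtype=float)
    predicted = project(params)
    residual = observed - predicted          # measurement differences
    eps = 1e-6
    J = np.zeros((len(predicted), len(params)))
    for j in range(len(params)):
        p = params.copy()
        p[j] += eps                          # perturb one parameter
        J[:, j] = (project(p) - predicted) / eps
    # pseudoinverse maps measurement changes back to parameter changes
    return params + np.linalg.pinv(J) @ residual

# toy example: the "projection" is linear, so one step solves it exactly
A = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
project = lambda p: A @ p
observed = project(np.array([1.0, 2.0]))     # synthetic measurements
est = fit_step(np.zeros(2), project, observed)
```

With a real, nonlinear camera projection the step is repeated until the residual stops shrinking, and the camera parameters are typically stacked into the same parameter vector as the face-model parameters.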
10.2.2 Textures
Texture maps are very important in facial animation. Most objects created by computer graphics tech-
niques have a plastic or metallic look, which, in the case of facial animation, seriously detracts from the
believability of the image. Texture maps can give a facial model a much more organic look and can give
the observer more visual cues when interacting with the images. The texture map can be taken directly
from a person's head; however, it must be registered with the geometry. The lighting situation during
digitization of the texture must also be considered.
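Once the photograph is registered, each mesh vertex carries a (u, v) coordinate into the image and shading samples the image there. The bilinear lookup below is a standard sketch of that sampling step; the hard, model-specific work of assigning the (u, v) coordinates is assumed to have been done already.

```python
import numpy as np

def sample_bilinear(image, u, v):
    """Sample an H x W x C image at normalized (u, v) in [0, 1]
    using bilinear interpolation between the four nearest texels."""
    h, w = image.shape[:2]
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = image[y0, x0] * (1 - fx) + image[y0, x1] * fx
    bot = image[y1, x0] * (1 - fx) + image[y1, x1] * fx
    return top * (1 - fy) + bot * fy

# 2 x 2 single-channel "photo": sampling the center blends all four texels
img = np.array([[[0.0], [1.0]],
                [[1.0], [0.0]]])
c = sample_bilinear(img, 0.5, 0.5)
```

If the texture was photographed under directional lighting, the baked-in shading will conflict with the renderer's lights, which is why the capture lighting mentioned above matters.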
¹ In Parke's model, the z-axis is up, the x-axis is oriented from the back of the head toward the front, and the y-axis is from the middle of the head out to the left side.
 