Laser scanners have the added advantage of being able to capture color information that can be used to generate a texture map. This is particularly important with facial animation: a texture map can often cover flaws in the model and motion. Laser scanners also have drawbacks; they are expensive, bulky, and require a physical model.
Muraki [22] presents a method for fitting a blobby model (implicitly defined surface formed by summed, spherical density functions) to range data by minimizing an energy function that measures the difference between the isosurface and the range data. By splitting primitives and modifying parameters, the user can refine the isosurface to improve the fit.
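The core of such a fit can be sketched as follows. The sketch assumes Gaussian density primitives, a fixed isosurface threshold T, and a simple least-squares energy over the range points; the function names and parameter layout are illustrative rather than Muraki's exact formulation, which includes additional terms.

```python
import numpy as np
from scipy.optimize import minimize

T = 0.5  # assumed isosurface threshold

def density(points, centers, widths, weights):
    """Summed spherical Gaussian densities evaluated at an (N, 3) array of points."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (N, M) squared distances
    return (weights[None, :] * np.exp(-widths[None, :] * d2)).sum(axis=1)

def energy(params, range_points, num_blobs):
    """Least-squares penalty for range points that miss the isosurface f(x) = T."""
    p = params.reshape(num_blobs, 5)
    centers, widths, weights = p[:, :3], p[:, 3], p[:, 4]
    residual = density(range_points, centers, widths, weights) - T
    return float((residual ** 2).sum())

# Fit a single blob to stand-in range data; splitting a blob into two and
# re-running the minimization is how the fit would then be refined.
range_points = np.random.default_rng(0).random((200, 3))
init = np.array([0.5, 0.5, 0.5, 4.0, 1.0])  # center (x, y, z), width, weight
result = minimize(energy, init, args=(range_points, 1), method="L-BFGS-B")
```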
Models can also be generated from photographs. This has the advantage of not requiring the presence of the physical model once the photograph has been taken, and it has applications for video conferencing and compression. While most of the photographic approaches modify an existing model by locating feature points, a common method of generating a model from scratch is to take front and side images of a face on which grid lines have been drawn (Figure 10.9). Point correspondences can be
established between the two images either interactively or by locating common features automatically,
and the grid in three-space can be reconstructed. Symmetry is usually assumed for the face, so only one
side view is needed and only half of the front view is considered.
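The reconstruction step itself is straightforward under orthographic assumptions: the front view supplies x and y, the side view supplies z and y. The following sketch assumes the correspondences have already been established and that the face is centered on the x = 0 symmetry plane; the function names are illustrative only.

```python
import numpy as np

def reconstruct_grid(front_xy, side_zy):
    """Combine corresponding 2D grid points from a front view (x, y) and a
    side view (z, y) into 3D points; y is averaged since both views see it."""
    front_xy = np.asarray(front_xy, dtype=float)  # (N, 2): x, y
    side_zy = np.asarray(side_zy, dtype=float)    # (N, 2): z, y
    x = front_xy[:, 0]
    y = 0.5 * (front_xy[:, 1] + side_zy[:, 1])    # reconcile the shared coordinate
    z = side_zy[:, 0]
    return np.stack([x, y, z], axis=1)            # (N, 3)

def mirror_half(points):
    """Exploit the assumed facial symmetry: reflect one half across x = 0."""
    mirrored = points * np.array([-1.0, 1.0, 1.0])
    return np.vstack([points, mirrored])

# usage with two made-up correspondences
half = reconstruct_grid([[0.3, 1.2], [0.1, 0.9]], [[0.5, 1.2], [0.7, 0.9]])
full = mirror_half(half)
```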
Modifying an existing model is a popular technique for generating a face model. Of course, someone had to first generate a generic model. But once this is done, if it is created as a parameterized model and the parameters are well designed, the model can be used to try to match a particular face, to design
a face, or to generate a family of faces. In addition, the animation controls can be built into the model so
that they require little or no modification of the generic model for particular instances.
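As a rough illustration, a parameterized generic model can be as simple as a shared vertex set plus named parameters that deform regions of it. The parameter names and the per-region scaling rule below are hypothetical, not taken from any particular published model.

```python
import numpy as np

# A generic face mesh: vertex positions plus, for each named region,
# the indices of the vertices that region's parameter affects.
GENERIC_VERTICES = np.random.default_rng(1).random((100, 3))   # stand-in geometry
REGIONS = {"jaw_width": np.arange(0, 30),                      # hypothetical regions
           "nose_length": np.arange(30, 50),
           "forehead_height": np.arange(50, 80)}
REGION_AXIS = {"jaw_width": 0, "nose_length": 2, "forehead_height": 1}

def instantiate(params):
    """Produce a particular face by scaling each region along one axis.

    `params` maps parameter names to scale factors; 1.0 leaves the generic
    model unchanged, so a family of faces is just a family of parameter sets."""
    verts = GENERIC_VERTICES.copy()
    for name, scale in params.items():
        idx, axis = REGIONS[name], REGION_AXIS[name]
        center = verts[idx, axis].mean()
        verts[idx, axis] = center + scale * (verts[idx, axis] - center)
    return verts

# usage: two members of a "family" of faces derived from the same generic model
narrow_jaw = instantiate({"jaw_width": 0.8})
long_nose = instantiate({"jaw_width": 1.0, "nose_length": 1.3})
```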
One of the most often used approaches to facial animation employs a parameterized model originally created by Parke [23][24]. The parameters for his model of the human face are divided into two categories: conformational and expressive. The conformational parameters are those that distinguish one individual's head and face from another's. The expressive parameters are those concerned with animation of an individual's face; these are discussed later. Symmetry between the sides of the face is assumed. Conformational parameters control the shape of the forehead, cheekbone, cheek hollow,
FIGURE 10.9
Photographs from which a face may be digitized [25].