are able to recognize individuals from a large number of similar faces and to
detect very subtle changes in facial expressions. Therefore, the general acceptability of synthetic face images strongly depends on the 3-D head model used for
rendering. As a result, significant effort has been spent on the accurate modeling
of a person's appearance and his or her facial expressions (Parke et al., 1996).
Both problems are addressed in the following two sections.
3-D head models
In principle, most head models used for animation are based on triangle meshes
(Rydfalk, 1978; Parke, 1982). Texture mapping is applied to obtain a photorealistic
appearance of the person (Waters, 1987; Terzopoulos et al., 1993; Choi et al., 1994; Aizawa et al., 1995; Lee et al., 1995). With extensive use of today's
computer graphics techniques, highly realistic head models can be realized
(Pighin et al., 1998).
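Texture mapping assigns each mesh vertex a coordinate into a photograph of the person's face; the colour at any point inside a triangle is then found by blending the vertex coordinates. A minimal sketch of that blending step, using barycentric weights (function names are illustrative, not taken from any of the cited systems):

```python
def barycentric(p, a, b, c):
    # Barycentric weights of 2-D point p with respect to triangle (a, b, c).
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w0, w1, 1.0 - w0 - w1

def interpolate_uv(p, tri_xy, tri_uv):
    # Texture coordinate at p, blended from the triangle's vertex UVs.
    w = barycentric(p, *tri_xy)
    u = sum(wi * uv[0] for wi, uv in zip(w, tri_uv))
    v = sum(wi * uv[1] for wi, uv in zip(w, tri_uv))
    return u, v
```

A renderer repeats this lookup for every pixel a triangle covers, so the photograph appears "glued" onto the mesh even as the vertices move.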
Modeling the shape of a human head with polygonal meshes results in a
representation consisting of a large number of triangles and vertices which have
to be moved and deformed to show facial expressions. The face of a person,
however, has a smooth surface, and facial expressions result in smooth movements of surface points due to the anatomical properties of tissue and muscles.
These restrictions on curvature and motion can be exploited by splines which
satisfy certain continuity constraints. As a result, the surface can be represented
by a set of spline control points that is much smaller than the original set of
vertices in a triangle mesh. This has been exploited by Hoch et al. (1994), who use B-splines with about 200 control points to model the shape of human heads. In Ip et al. (1996), non-uniform rational B-splines (NURBS) represent the
facial surfaces. Both types of splines are defined on a rectangular topology and,
therefore, do not allow a local patch refinement in areas that are highly curved.
To overcome this restriction, hierarchical splines have been proposed for head modeling (Forsey et al., 1988), allowing a recursive subdivision of the rectangular patches in more complex areas.
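The compactness argument can be made concrete with a uniform cubic B-spline patch: a 4x4 grid of control points defines one smooth surface span, the basis functions sum to one, and adjacent spans join with C2 continuity, which is exactly the smoothness property exploited above. A sketch in plain Python, assuming no particular head-modeling system:

```python
def cubic_bspline_basis(t):
    # Uniform cubic B-spline basis functions on one span, t in [0, 1].
    # They sum to 1 (partition of unity) and give C2-continuous joins,
    # which is what makes spline faces deform smoothly.
    return (
        (1 - t) ** 3 / 6.0,
        (3 * t**3 - 6 * t**2 + 4) / 6.0,
        (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0,
        t**3 / 6.0,
    )

def surface_point(ctrl, u, v):
    # Evaluate one patch of a tensor-product B-spline surface from a
    # 4x4 grid of control points ctrl[i][j] = (x, y, z).
    bu, bv = cubic_bspline_basis(u), cubic_bspline_basis(v)
    x = y = z = 0.0
    for i in range(4):
        for j in range(4):
            w = bu[i] * bv[j]
            px, py, pz = ctrl[i][j]
            x += w * px
            y += w * py
            z += w * pz
    return (x, y, z)
```

Because each surface point depends on only 16 control points, moving one control point deforms a local neighbourhood smoothly instead of a single mesh vertex, which is why a few hundred control points can replace thousands of triangle vertices.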
Face, eyes, teeth, and the interior of the mouth can be modeled similarly with
textured polygonal meshes, but a realistic representation of hair is still not
available. Much work has been done in this field to model the fuzzy shape and reflection properties of hair. For example, single hair strands have been
modeled with polygonal meshes (Watanabe et al., 1992) and the hair dynamics
have been incorporated to model moving hair (Anjyo et al., 1992). However,
these algorithms are computationally expensive and will not be feasible for real-time applications in the near future. Image-based rendering techniques (Gortler et al.,
1996; Levoy et al., 1996) might provide new opportunities for solving this
problem.
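As a rough illustration of per-strand dynamics, a strand can be treated as a chain of point masses under gravity with fixed segment lengths. This is a common real-time simplification, not the specific method of Anjyo et al. (1992), and all parameters below are illustrative:

```python
# One hair strand as a chain of point masses, integrated with Verlet
# steps and rigid distance constraints enforced from the root outwards.
GRAVITY = (0.0, -9.8, 0.0)
SEGMENT = 0.05   # rest length between neighbouring points (illustrative)
DT = 1.0 / 60.0  # time step

def step(points, prev, root):
    # Verlet integration: new = 2*current - previous + acceleration*dt^2
    new = []
    for (x, y, z), (px, py, pz) in zip(points, prev):
        new.append((2 * x - px + GRAVITY[0] * DT * DT,
                    2 * y - py + GRAVITY[1] * DT * DT,
                    2 * z - pz + GRAVITY[2] * DT * DT))
    new[0] = root  # the root point stays attached to the scalp
    # Re-impose constant segment length along the chain.
    for i in range(1, len(new)):
        ax, ay, az = new[i - 1]
        bx, by, bz = new[i]
        dx, dy, dz = bx - ax, by - ay, bz - az
        d = (dx * dx + dy * dy + dz * dz) ** 0.5 or 1e-9
        s = SEGMENT / d
        new[i] = (ax + dx * s, ay + dy * s, az + dz * s)
    return new, points
```

Even this toy version must run once per frame for every strand, which hints at why full per-strand simulation of a head of hair was out of reach for the real-time systems discussed here.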