Virtual Character in MPEG-4 Version 1 and 2: Face and Body Animation
First efforts to standardize the animation of a human-like character (an avatar)
within MPEG-4 were finalized at the beginning of 1999. Published under the
name FBA (Face and Body Animation), they specify how to define and animate the avatar.
This section first describes the node specification for Face and Body objects, and
then describes how to create and animate an FBA-compliant avatar model. The
compression schemes for the animation parameters are presented in the third
subsection. Finally, local deformation issues are tackled.
Face and Body Animation Nodes Specification
A key concept in the MPEG-4 standard is the definition of the scene, in which text,
2D and 3D graphics, audio and video data can (co)exist and (inter)act. A scene
is represented as a tree, where each object in the scene is the instantiation of a
node or a set of nodes. The compressed representation of the scene is done through
the BInary Format for Scenes (BIFS) specification (ISO/IEC, 2001). The
transformation and grouping capabilities of the scene graph make it possible to
express spatial and temporal relationships between objects.
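
To make the tree structure concrete, the following minimal Python sketch models a scene as nested nodes, where grouping nodes carry a transform applied to their children; the class and field names are illustrative assumptions, not MPEG-4 or BIFS syntax.

# Minimal sketch of a scene tree: every object is an instance of a node,
# and grouping/transform nodes relate their children spatially.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class SceneNode:
    name: str
    translation: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    children: List["SceneNode"] = field(default_factory=list)

    def add(self, child: "SceneNode") -> "SceneNode":
        self.children.append(child)
        return child

    def walk(self, depth: int = 0):
        """Depth-first traversal, as a compositor or renderer would perform."""
        yield depth, self
        for child in self.children:
            yield from child.walk(depth + 1)


if __name__ == "__main__":
    root = SceneNode("Scene")
    group = root.add(SceneNode("Transform", translation=(0.0, 1.6, 0.0)))
    group.add(SceneNode("VideoObject"))   # illustrative media objects
    group.add(SceneNode("AudioSource"))
    for depth, node in root.walk():
        print("  " * depth + node.name)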
The first version of the standard addresses the animation of a virtual human
face, while Amendment 1 contains the specifications related to virtual human body
animation. In order to define and animate a human-like virtual character, MPEG-4
introduces the so-called FBA object. Conceptually, the FBA object consists of
two collections of nodes in a scene graph, grouped under the Face node
and the Body node (Figure 1), together with a dedicated compressed stream. The next
paragraph describes how these node hierarchies include the definition of the
geometry, the texture, the animation parameters and the deformation behaviour.
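
As an informal illustration of this grouping, the following Python sketch represents an FBA object as a Face node hierarchy, a Body node hierarchy and a reference to its dedicated compressed stream; the class names and the stream identifier field are illustrative assumptions, not normative MPEG-4 syntax.

# Structural sketch of the FBA object: two node collections of the scene
# graph, rooted at a Face node and a Body node, plus a stream reference.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    type_name: str                              # e.g. "Face", "FDP", "Body"
    children: List["Node"] = field(default_factory=list)


@dataclass
class FBAObject:
    face: Node                                  # root of the Face node hierarchy
    body: Node                                  # root of the Body node hierarchy
    animation_stream_id: Optional[int] = None   # dedicated compressed FBA stream


if __name__ == "__main__":
    avatar = FBAObject(
        face=Node("Face", children=[Node("FDP")]),
        body=Node("Body"),
        animation_stream_id=1,                  # illustrative identifier
    )
    print(avatar.face.type_name, avatar.body.type_name, avatar.animation_stream_id)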
The structure of the Face node (Figure 1a) allows the geometric representation
of the head as a collection of meshes, where the face itself consists of a single mesh
(Figure 2a). The shape and the appearance of the face are controlled by the FDP
(Facial Definition Parameter) node, through the faceSceneGraph node for the
geometry and the textureCoord and useOrthoTexture fields for the texture.
Moreover, a standardized set of control points is attached to the face mesh
through the featurePointsCoord field, as shown in Figure 3. These points control
the face deformation. The deformation model is further refined by attaching, through
the faceDefTables node, a parameterisation of the deformation function in the
neighbourhood of each control point.
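
The following Python sketch summarizes, under stated assumptions, the information carried by the FDP node as described above: the face geometry (faceSceneGraph), the texture data (textureCoord, useOrthoTexture), the standardized feature points (featurePointsCoord) and the per-control-point deformation tables (faceDefTables). Only the field names are taken from the text; the container classes, data types and example labels are illustrative.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]
Vec2 = Tuple[float, float]


@dataclass
class FaceDefTable:
    """Deformation of mesh vertices in the neighbourhood of one control point."""
    feature_point: str                                    # illustrative label
    affected_vertices: List[int] = field(default_factory=list)
    displacements: List[Vec3] = field(default_factory=list)


@dataclass
class FDP:
    face_scene_graph: object                              # mesh(es) of the head
    texture_coord: List[Vec2] = field(default_factory=list)
    use_ortho_texture: bool = False
    feature_points_coord: Dict[str, Vec3] = field(default_factory=dict)
    face_def_tables: List[FaceDefTable] = field(default_factory=list)


if __name__ == "__main__":
    fdp = FDP(face_scene_graph="face_mesh")               # placeholder geometry
    fdp.feature_points_coord["jaw"] = (0.0, -0.05, 0.09)  # illustrative point
    fdp.face_def_tables.append(
        FaceDefTable(feature_point="jaw", affected_vertices=[10, 11, 12])
    )
    print(len(fdp.feature_points_coord), "feature point(s) defined")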