carried by heterogeneous networks (broadcast, IP, mobile), available anywhere and on a wide range of devices (PCs, set-top boxes, PDAs, mobile phones), and profiled with respect to user preferences. All these requirements make the chain in which content is processed increasingly complex, and many different actors must interact: designers, service providers, network providers, device manufacturers, IPR holders, end-users and so on. For each of them, consistent interfaces should be defined on a stable and standardized basis.
Current work to provide 3D applications within a unified and interoperable framework has materialized in 3D graphics interchange standards such as VRML and in 2D/3D multimedia standards such as MPEG-4 (ISO/IEC, 2001). Each addresses, in a more or less coordinated way, the issue of virtual character animation. In the VRML community, the H-Anim group released three versions of its specification (1.0, 1.1 and 2001), while the SNHC sub-group of MPEG also released three versions: MPEG-4 Version 1 supports face animation, MPEG-4 Version 2 supports body animation, and MPEG-4 Part 16 addresses the animation of generic virtual objects. In MPEG-4, the specifications dealing with the definition and animation of avatars are grouped under the name FBA (Face and Body Animation), and those referring to generic models under the name BBA (Bone-based Animation). The next section analyses the main similarities and differences between these two standardization frameworks.
The VRML standard deals with a textual description of 3D objects and scenes. It focuses on the spatial representation of such objects, while their temporal behaviour is only weakly supported: the major animation mechanism consists of defining an interpolation between key-frames.
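To make this mechanism concrete, the short Python sketch below mimics the piecewise-linear behaviour of a VRML PositionInterpolator, which maps a fraction in [0, 1] to a blend of the two surrounding key values. The function and variable names are illustrative only, not part of the standard.

    def interpolate(keys, key_values, fraction):
        """Piecewise-linear key-frame interpolation in the style of a VRML
        PositionInterpolator: keys are increasing fractions in [0, 1] and
        key_values are the corresponding 3D positions."""
        if fraction <= keys[0]:
            return key_values[0]
        if fraction >= keys[-1]:
            return key_values[-1]
        for i in range(len(keys) - 1):
            if keys[i] <= fraction <= keys[i + 1]:
                t = (fraction - keys[i]) / (keys[i + 1] - keys[i])
                return tuple(a + t * (b - a)
                             for a, b in zip(key_values[i], key_values[i + 1]))

    # A hop between three key-frames:
    keys = [0.0, 0.5, 1.0]
    key_values = [(0, 0, 0), (0, 1, 0), (0, 0, 0)]
    print(interpolate(keys, key_values, 0.25))  # -> (0.0, 0.5, 0.0)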
The MPEG-4 standard, unlike the previous MPEG standards, not only provides highly efficient audio and video compression schemes but also introduces the fundamental concept of media objects (audio, visual, 2D/3D, natural and synthetic objects) that are composed into a multimedia scene. As established in July 1994, the MPEG-4 objectives focus on supporting new ways (notably content-based) of communicating, accessing and manipulating digital audiovisual data (Pereira, 2002). Thus, temporal and/or spatial behaviour can be associated with an object. The main functionalities proposed by the standard address the compression of each type of media object, hybrid encoding of natural and synthetic objects, universal content accessibility over various networks, and
interactivity for the end-user. In order to specify the spatial and temporal localisation of an object in the scene, MPEG-4 defines a dedicated language called BIFS (Binary Format for Scenes). BIFS inherits from VRML the representation of the scene, described as a hierarchical graph, and some dedicated tools, such as animation procedures based on interpolators, events routed to the nodes and sensor-based interactivity. In addition, BIFS introduces some new and advanced mechanisms, such as compression schemes to encode the scene description.
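As a rough illustration of the hierarchical scene graph and event-routing mechanism that BIFS inherits from VRML, the Python sketch below models nodes with named fields and a route that forwards a changed value from a source field to a destination field. All class, node and field names here are invented for the example; this is not an implementation of either standard.

    class Node:
        # Minimal scene-graph node: a name, named fields, and child nodes.
        def __init__(self, name, **fields):
            self.name = name
            self.fields = fields
            self.children = []

    class Route:
        # ROUTE src.src_field TO dst.dst_field, as in VRML/BIFS event routing.
        def __init__(self, src, src_field, dst, dst_field):
            self.src, self.src_field = src, src_field
            self.dst, self.dst_field = dst, dst_field

        def fire(self):
            # Forward the source field's current value to the destination field.
            self.dst.fields[self.dst_field] = self.src.fields[self.src_field]

    # A transformable object whose position is driven by an interpolator node.
    root = Node("Root")
    ball = Node("Ball", translation=(0, 0, 0))
    interp = Node("PosInterp", value_changed=(0, 1, 0))
    root.children += [ball, interp]

    Route(interp, "value_changed", ball, "translation").fire()
    print(ball.fields["translation"])  # -> (0, 1, 0)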