room acoustics and HRTFs. As it dynamically
moves from front to surround positions around
the listener's head, the sound is filtered in a num-
ber of ways. However, the listener can distinctly
recognize that the sound is from the same source
as it retains its fundamental timbral characteristic.
With exactly matched speakers, one would therefore expect the same to hold true for 5.1-surround sound, since a sound panned across its discrete speaker positions is simply like a natural source traveling around the listener's head. Unfortunately, this is not the case, as 5.1-surround samples the sound field at too few points. Sound designers therefore need to compensate for timbral instabilities by equalizing the signal as it reaches the surround speakers. This is a difficult task, as they cannot predict the type of room in which the listener's gaming system will be placed.
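One common way to apply such timbral compensation (a standard technique, not one prescribed here) is a shelving equalizer on the surround feeds. The sketch below uses the well-known Audio EQ Cookbook (RBJ) high-shelf biquad; the sample rate, corner frequency, and gain values are purely illustrative:

```python
import math

def rbj_high_shelf(fs, f0, gain_db, q=0.707):
    """High-shelf biquad coefficients (Audio EQ Cookbook formulas).

    Returns (b, a) with a[0] normalised to 1.0.
    """
    amp = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    root = 2 * math.sqrt(amp) * alpha
    b0 = amp * ((amp + 1) + (amp - 1) * cosw + root)
    b1 = -2 * amp * ((amp - 1) + (amp + 1) * cosw)
    b2 = amp * ((amp + 1) + (amp - 1) * cosw - root)
    a0 = (amp + 1) - (amp - 1) * cosw + root
    a1 = 2 * ((amp - 1) - (amp + 1) * cosw)
    a2 = (amp + 1) - (amp - 1) * cosw - root
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def magnitude_db(b, a, f, fs):
    """Magnitude response of the biquad at frequency f, in dB."""
    w = 2 * math.pi * f / fs
    zinv = complex(math.cos(w), -math.sin(w))  # z^-1 on the unit circle
    num = b[0] + b[1] * zinv + b[2] * zinv ** 2
    den = a[0] + a[1] * zinv + a[2] * zinv ** 2
    return 20 * math.log10(abs(num / den))

# Illustrative setting: boost the surround channels by 6 dB above 8 kHz
# to counter high-frequency loss at the rear speaker positions.
b, a = rbj_high_shelf(48000.0, 8000.0, 6.0)
```

Because the listening room is unknown, any such curve is at best an average correction; a calibrated setup would measure the response at the listening position instead.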
MPEG-4 and Spatial Sound

MPEG (Moving Picture Experts Group) is a working group of an ISO/IEC subcommittee that generates multimedia standards. In particular, MPEG defines the syntax of low-bitrate video and audio bit streams, and the operation of codecs. MPEG has been working for a number of years on the design of a complete multimedia toolkit that can generate platform-independent, dynamic, interactive media representations. This has become the MPEG-4 standard.

In this standard, the various media are encoded separately, allowing for better compression, the inclusion of behavioral characteristics, and user-level interaction. Instead of creating a new Scene Description Language (SDL), the MPEG organization decided to incorporate the Virtual Reality Modeling Language (VRML). VRML's scene description capabilities are not very sophisticated, so MPEG extended the functionality of the existing VRML nodes and incorporated new nodes with advanced features. Support for advanced sound within the scene graph was one of the areas developed further by MPEG. The Sound Node of MPEG-4 is quite similar to that of the VRML/Java 3D Sound Node. However, MPEG-4 contains a sound spatialization paradigm called Environmental Spatialisation of Audio (ESA). ESA can be divided into a Physical Model and a Perceptual Model.

Physical Model (see Table 2): This enables the rendering of source directivity, detailed room acoustics, and acoustic properties for geometrical objects (walls, furniture, and so on). Auralization, another term for the realisation of the physical model, has been defined as: "creating a virtual auditory environment that models an existent or non-existent space" (Väänänen, 1998).

Three Nodes have been devised to facilitate the physical modelling approach. These are AcousticScene, AcousticMaterial, and DirectiveSound.

Briefly, DirectiveSound is a replacement for the simpler Sound Node. It defines a directional sound source whose attenuation can be described in terms of distance and air absorption. The direction of the source is not limited to a directional vector or a particular geometrical shape.

The velocity of the sound can be controlled via the speedOfSound field; this can be used, for example, to create an instance of the Doppler effect. Attenuation over the distance field can now drop to -60 dB and can be frequency-dependent if the useAirabs field is set to TRUE. The spatialize field behaves the same as its counterpart in the Sound Node, with the addition that any reflections associated with this source are also spatially rendered. The roomEffect field controls the enabling of ESA and, if set to TRUE, the source is spatialized according to the environment's acoustic parameters.

AcousticScene is a node for generating the acoustic properties of an environment. It simply establishes the volume and size of the environment and assigns it a reverberation time. The auralization of the environment involves the processing of information from the AcousticScene and the
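The attenuation and Doppler behaviour that DirectiveSound exposes through its distance, useAirabs, and speedOfSound fields can be illustrated with a short sketch. This is not the normative MPEG-4 rendering algorithm: the 343 m/s default, the 1 m reference distance, and the per-metre absorption parameter are illustrative assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at ~20 °C; the role of the speedOfSound field

def doppler_frequency(f_source, radial_velocity, c=SPEED_OF_SOUND):
    """Observed frequency for a source moving at radial_velocity (m/s)
    toward (positive) or away from (negative) a stationary listener."""
    return f_source * c / (c - radial_velocity)

def distance_attenuation_db(distance, ref_distance=1.0,
                            air_abs_db_per_m=0.0, floor_db=-60.0):
    """Inverse-distance rolloff plus optional air absorption.

    air_abs_db_per_m stands in for frequency-dependent absorption
    (what useAirabs enables) for one band of interest; the result is
    clamped at the -60 dB floor mentioned above.
    """
    if distance <= ref_distance:
        return 0.0
    geometric = -20.0 * math.log10(distance / ref_distance)
    absorption = -air_abs_db_per_m * (distance - ref_distance)
    return max(geometric + absorption, floor_db)

# A 440 Hz source approaching at 20 m/s is heard noticeably sharp,
# and a distant source is attenuated but never below -60 dB.
shifted = doppler_frequency(440.0, 20.0)
far_level = distance_attenuation_db(1e6)
```

In a real renderer the absorption term would be evaluated per frequency band, which is why enabling useAirabs makes distant sources sound duller as well as quieter.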