ous updating of listener position, convolution must
be rapid and dynamic with no perceived artifacts
during the many transitions. Multi-processing
systems have aided in achieving more realistic
rendering, as have techniques such as optimized
fading between impulse response updates.
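The fading technique mentioned above can be sketched as a linear crossfade between the audio block rendered with the old impulse response and the block rendered with the new one, so that the transition produces no click. This is a minimal plain-Java illustration; the class and method names are illustrative, not part of any Java 3D API.

```java
// Sketch of crossfading between impulse response updates: the block
// rendered with the outgoing IR is faded out while the block rendered
// with the incoming IR is faded in over the same samples.
public final class IrCrossfade {

    /** Linearly blend oldRender into newRender across the block. */
    public static float[] crossfade(float[] oldRender, float[] newRender) {
        int n = Math.min(oldRender.length, newRender.length);
        float[] out = new float[n];
        for (int i = 0; i < n; i++) {
            float w = (float) i / (n - 1);          // weight ramps 0.0 -> 1.0
            out[i] = (1.0f - w) * oldRender[i] + w * newRender[i];
        }
        return out;
    }

    public static void main(String[] args) {
        float[] a = {1f, 1f, 1f, 1f, 1f};           // block from the old IR
        float[] b = {0f, 0f, 0f, 0f, 0f};           // block from the new IR
        float[] mix = crossfade(a, b);
        System.out.println(mix[0] + " " + mix[2] + " " + mix[4]); // 1.0 0.5 0.0
    }
}
```

In a real renderer both blocks would come from convolving the same input with the two impulse responses; only the blend step is shown here.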
Head-tracking technology also introduces
latency into a real-time processing system. The
tracking technologies typically employed are
optical, inertial, mechanical, ultrasonic, and
electromagnetic. However, new developments in
eye tracking and image recognition are being
explored to reduce the hardware encumbrance
placed on the user. These techniques
are also finding interesting applications in the
computer game industry by taking advantage of
the integrated webcam facilities built-in to most
modern consumer computers.
The implementation of spatial sound in the
Java 3D specification employs a hierarchy of
nodes consisting of:
Sound node
PointSound node
ConeSound node
BackgroundSound node
There are also two Java classes for defining
the aural attributes of an environment. These are
the Soundscape Node and the AuralAttributes
Object. Each node is defined in a SceneGraph.
The SceneGraph is a collection of nodes that
constitute the three-dimensional environment. The
application reads the nodes and their associated
parameters from the SceneGraph and constructs
the three-dimensional world with that information.
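The reading of sound nodes from a scene graph can be illustrated with a toy plain-Java model. The class names below only loosely mirror the Java 3D hierarchy; this is not the javax.media.j3d API, just a sketch of how an application might walk the graph and collect the sound nodes it finds.

```java
import java.util.ArrayList;
import java.util.List;

// Toy scene graph: generic nodes with children, some of which are
// sound nodes. Illustrative only; not the javax.media.j3d classes.
public class SceneGraphDemo {

    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
        Node add(Node child) { children.add(child); return this; }
    }

    static class SoundNode extends Node {
        SoundNode(String name) { super(name); }
    }

    /** Depth-first traversal collecting every sound node in the graph. */
    static void collectSounds(Node node, List<String> out) {
        if (node instanceof SoundNode) out.add(node.name);
        for (Node c : node.children) collectSounds(c, out);
    }

    public static void main(String[] args) {
        Node root = new Node("root")
            .add(new SoundNode("PointSound"))
            .add(new Node("group").add(new SoundNode("ConeSound")))
            .add(new SoundNode("BackgroundSound"));
        List<String> sounds = new ArrayList<>();
        collectSounds(root, sounds);
        System.out.println(sounds);   // [PointSound, ConeSound, BackgroundSound]
    }
}
```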
The BackgroundSound node is not a spatial
sound rendering node. Its purpose is to facilitate
the use of ambient background sounds within the
Java application. The audio input to this node is
normally a mono or stereo audio file.
Developing Spatial Sound
Environments
So far in this chapter we have explored some of
the key concepts in spatial sound and how they
might apply to computer games and VR environ-
ments. The next section examines a number of key
implementations of spatial sound. Where possible,
the emphasis is upon standardized implementations,
such as MPEG-4, Java 3D, and OpenSL ES,
which are stable, unlikely to change in the
near term, and have also informed the development
of other implementations. Other implementations
are introduced by virtue of their prevalence
within the industry.
Spatial Sound in Java 3D
The Sound Node itself does not address the
spatial rendering of the sound source: this is
accomplished in one of two ways: first, by
explicitly constructing the spatial attributes of the
sound using either the PointSound Node or the
ConeSound Node; second, by configuring
the acoustical characteristics of an environment
using the Soundscape Node.
The first technique, constructing the spatial
attributes, depends upon the type of sound
source being used. If the sound source radiates
uniformly (positional sound), then the PointSound
node should be used; otherwise the developer
should use the ConeSound node (directional
sound).
Distance attenuation, as implemented in the
Java 3D specification, employs distance attenu-
ation arrays, which modify the amplitude of po-
Java 3D Sound API
Although the Java 3D API specification was
originally intended for 3D graphics, it has proved
to be a suitable vehicle for the rendering of
three-dimensional sound. It makes sense from a
developer's point of view to keep all of the three-
dimensional functionality within the same API set.