LANGUAGE AND PROCESS
In a command like:
play(scrape.wav)
there is a definite article, a specific, singular
sound which exists as a data file or area of memory
containing a digitised recording. It is an allusion
to an atomic event. In this unqualified case, the
event time is implicitly now. We could choose to
“bind” the sound to an event, deferring it until
some condition is met, by saying something like:
if (moves(gate)) play(scrape.wav)
In the case of a single, one-shot sound, the game
logic is unaware of the sound length. The relationship
between the audio engine and game logic is
stateless, so any timing relationship between the
sound and visual elements must be predefined or
contrived at runtime. As a further refinement, a
looped sound can be playing or stopped. In such a
stateful system, the endpoint is indeterminate until
game logic explicitly issues a stop; meanwhile, the
looped sample repeats. As with MIDI, this leaves open
the possibility of a stuck sound unless a safety
timeout is provided.
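A safety timeout is straightforward to add on the engine side. The following is a minimal sketch, not taken from any particular engine (all names here are invented for illustration), of a looped-sound handle whose watchdog stops the loop when game logic stops refreshing it:

#include <chrono>
#include <cstdio>
#include <thread>

using Clock = std::chrono::steady_clock;

// Hypothetical stateful loop handle: game logic starts it and must
// either stop it or keep refreshing it. The watchdog supplies the
// safety timeout that a bare start/stop scheme lacks.
struct LoopedSound {
    bool playing = false;
    Clock::time_point lastRefresh;

    void start()   { playing = true; lastRefresh = Clock::now(); }
    void refresh() { lastRefresh = Clock::now(); }  // "still wanted"
    void stop()    { playing = false; }

    // Called by the audio engine every frame: if game logic has gone
    // silent (a crash, a dropped message), the loop kills itself.
    void update(Clock::duration timeout) {
        if (playing && Clock::now() - lastRefresh > timeout) {
            std::puts("watchdog: stopping stuck loop");
            stop();
        }
    }
};

int main() {
    LoopedSound loop;
    loop.start();                                  // game logic starts the loop...
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    loop.update(std::chrono::milliseconds(20));    // ...then goes silent
}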
For decades, this has been the dominant model
of game audio. Everything of significance can
be reduced to a single occurrence, an event, or
to a simple set of states. A multi-state example
might be an elevator that can be “starting”, “moving”,
“stopping”, or “stopped”. We say this is an
event-based sound system, and that each event
is bound to a sound asset, or to a simple post-
processed control of that asset. State transitions
are themselves events. Essentially, the entire
game audio system can be reduced to a matrix
of event-resource pairings.
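That reduction can be made concrete. The sketch below (asset names and table layout invented for illustration, not taken from any particular title) expresses the elevator's audio as exactly such a matrix, mapping (object, state-transition event) pairs to assets:

#include <cstdio>
#include <map>
#include <string>
#include <utility>

int main() {
    // The entire event-based audio system as a matrix of
    // event-resource pairings: (object, event) -> sound asset.
    std::map<std::pair<std::string, std::string>, std::string> bindings = {
        {{"elevator", "starting"}, "lift_start.wav"},
        {{"elevator", "moving"},   "lift_loop.wav"},  // looped while in state
        {{"elevator", "stopping"}, "lift_stop.wav"},
        {{"elevator", "stopped"},  ""},               // bound to silence
    };

    // Dispatching any event is one table lookup followed by play().
    auto it = bindings.find({"elevator", "starting"});
    if (it != bindings.end() && !it->second.empty())
        std::printf("play(%s)\n", it->second.c_str());
}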
Since the turn of this century, more sophisticated
approaches have appeared. Multi-state, multi-sample
sources borrow from music technology the principles
needed to create sampling plus synthesis (S+S),
wavetable, and velocity-mapped acoustic instruments.
This migration of audio technology, from music
synthesisers to game audio during the late 1990s and
early 21st century, can be seen as the renaissance of
synthesis. The “dark age” of sampling that followed
the abandonment of game audio synthesis chips like the
SID, AY8910, and YM2151 (Collins, 2008, pp. 31-60) is
over now that native synthesis capabilities are more
than adequate for realistic sound.
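Velocity mapping shows how little machinery this borrowing involves. Here is a minimal sketch, with invented file names and thresholds, that selects one of three recorded layers from a normalised impact velocity, in the way a sampler zone map does:

#include <cstdio>

// Hypothetical velocity map: each zone pairs a recorded layer with
// the upper bound of the velocity range it covers.
struct VelocityZone { double maxVelocity; const char* sample; };

const VelocityZone kZones[] = {
    {0.33, "door_soft.wav"},
    {0.66, "door_medium.wav"},
    {1.00, "door_hard.wav"},
};

// Select the sample layer for a normalised velocity in [0, 1].
const char* sampleFor(double velocity) {
    for (const VelocityZone& z : kZones)
        if (velocity <= z.maxVelocity) return z.sample;
    return kZones[2].sample;  // clamp anything above 1.0 to the top layer
}

int main() {
    std::printf("%s\n", sampleFor(0.8));  // -> door_hard.wav
}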
As mentioned earlier, it is perfectly possible to
approach the procedural concept with an implementation
using only pre-digitised waves. The line between
sampling and synthesis has never been a clear one,
and at present it is this area, hybrid
sampling-synthesis, that holds the most promise for
transitional-generation game audio technology during
the changeover from data-driven to fully procedural
systems. Practical transitional systems employed at
Electronic Arts and Disney are variations on granular
or wavetable S+S, with Steve Rockett's Blackrock team
attempting some ambitious work on vehicle engines.
Work done by Kenneth Young and others on the Media
Molecule title “LittleBigPlanet” shows many of the
structural and behavioural hallmarks of procedural
audio as applied to combinational sound that can be
configured for user-generated content (UGC). While
such endeavours stop short of fully procedural audio,
they are a valuable step in the right direction: they
establish the conceptual foundations necessary for
proper structural approaches, and they are properly
tested in publicly distributed titles. Along with
dynamic reconfigurability, which has a bearing on the
effectiveness of user-generated content, the most
important of these advances are the transition from
state/event triggering to continuous parameterisation
and the recognition of behavioural audio objects with
multi-dimensional parameters.

To take the above example of the screeching gate, we
might now express the condition as:
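One hedged reading of that condition, using invented accessors and mappings rather than any notation from the chapter itself, treats the gate as a behavioural audio object: a squeak model fed every frame with a continuous, multi-dimensional parameter stream rather than fired by a one-shot trigger.

#include <cmath>
#include <cstdio>
#include <initializer_list>

// Hypothetical behavioural audio object: the gate is neither
// "playing" nor "stopped". A squeak model receives continuous
// parameters from the physics engine, and sound emerges from
// behaviour.
struct GateSqueak {
    double amplitude = 0.0;
    double pitchHz   = 0.0;

    // Continuous, multi-dimensional control (mappings invented).
    void setParams(double angularVelocity, double friction, double load) {
        amplitude = std::fabs(angularVelocity) * friction;
        pitchHz   = 400.0 + 900.0 * load;
    }

    void render() {
        if (amplitude > 0.001)
            std::printf("squeak: amp=%.3f pitch=%.0f Hz\n", amplitude, pitchHz);
    }
};

int main() {
    GateSqueak gate;
    for (double v : {0.0, 0.4, 0.9, 0.2, 0.0}) {  // gate swings and settles
        gate.setParams(v, /*friction=*/0.8, /*load=*/0.5);
        gate.render();
    }
}

The design point is that starting and stopping disappear as explicit states: silence is simply the behaviour of a gate that is not moving.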