One of the key problems of computer game audio is the loss of control that a sound designer has over the playback of sound during the gameplay of a complex game. Two or more different (or identical) audio files might play simultaneously due to gameplay events triggered by the player or the game system. This can lead to a chaotic sonic environment, the "logjam" of sound that Murch (1998) describes in relation to film sound (see also Cancellaro, 2006; Childs, 2007; Marks, 2001; Prince, 1996; Wallén, 2008). Such a "logjam" does not support or enhance gameplay and can become very tiresome to listen to during even short sessions of play. To prevent sounds from losing their definition, and thereby their semantic value, a game audio designer needs to plan and structure the game audio as much as possible, and doing so under these conditions is a major challenge.
The sound designer can design and deliver the sounds to a game, but the player is the one person in control of the play button. The goal for a sound designer should therefore be to retain as much control over the final sonic environment as possible, even though it is hard to define exactly when the sounds will be played. Since game sounds are usually triggered on demand by events in the game, they cannot be edited and mixed in a fashion similar to the mixing of film sound.
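Because playback is event-driven, any "mixing" decisions must be expressed as rules before the game runs. A minimal sketch of this idea, not taken from the chapter and with all names, priorities, and the voice cap chosen purely for illustration, is a trigger handler that limits how many sounds may play at once and evicts the least important one rather than stacking everything up:

```python
# Illustrative sketch only: event-triggered playback with a voice limit,
# so simultaneous sounds cannot pile into a "logjam". All names and
# numbers here are hypothetical assumptions, not the chapter's model.

import heapq

MAX_VOICES = 4  # cap on simultaneously playing sounds


class Sound:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority  # higher = more important to gameplay


# Currently "playing" sounds, kept as a min-heap so the least
# important sound is always at index 0.
active = []


def on_game_event(sound):
    """Called whenever a gameplay event requests a sound."""
    entry = (sound.priority, sound.name)
    if len(active) < MAX_VOICES:
        heapq.heappush(active, entry)
    elif sound.priority > active[0][0]:
        # Evict the least important playing sound instead of stacking up.
        heapq.heappushpop(active, entry)
    # Otherwise drop the new sound; it would only blur the mix.


for s in [Sound("footsteps", 1), Sound("ambience", 0),
          Sound("gunshot", 5), Sound("dialogue", 9), Sound("ui_click", 2)]:
    on_game_event(s)

print(sorted(active, reverse=True))
```

Here the low-priority ambience is displaced once the voice budget is full, which is one simple way a designer can pre-encode a mixing decision that a film mixer would otherwise make by hand.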
In other words, the sonic environment has to be laid out beforehand. To avoid a big, undefined wall of sound, the sounds have to be reasonably compatible with each other. One could compare sound layers with a jigsaw puzzle: to complete it, each piece must fit with the surrounding pieces. If a number of pieces are stacked on top of each other, the pieces at the bottom will be covered and no longer clearly visible. On the other hand, as Chion (1994) has noted, sounds may be superimposed on top of each other without listeners conceptualizing them as stemming from different environments (pp. 45–46). The problem is that similar sounds will blur into one another. By using the entire dynamic and frequency range, as well as the panorama and the distribution of cognitive load over the brain, the sonic environment is more likely to remain clear and distinct. Perhaps not every sound has to be as loud as possible (Thom, 1999). If every sound is evaluated and given values for a set of variables, such as dynamic range, dominant frequency, and cognitive load, the sonic environment becomes easier to visualize. This is what our combined model does.
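The tagging idea above can be sketched in a few lines of code. The profile fields follow the variables the chapter names (dynamic range, dominant frequency, cognitive load), but the clash rule, its thresholds, and the example sounds are illustrative assumptions, not the chapter's combined model:

```python
# Hypothetical sketch: tag each sound with the variables named in the
# text, then flag pairs that risk blurring together. The clash rule and
# all thresholds/values below are assumptions for illustration only.

from dataclasses import dataclass
from itertools import combinations


@dataclass
class SoundProfile:
    name: str
    dynamic_range_db: float   # loud-to-quiet span of the file
    dominant_freq_hz: float   # where most spectral energy sits
    cognitive_load: int       # 1 (background) .. 5 (demands attention)


def clashes(a, b, freq_ratio=1.5, load_budget=7):
    """Two sounds 'clash' if their dominant frequencies are close
    (within a 1.5x ratio) or their combined cognitive load is high."""
    lo, hi = sorted((a.dominant_freq_hz, b.dominant_freq_hz))
    return hi / lo < freq_ratio or a.cognitive_load + b.cognitive_load > load_budget


library = [
    SoundProfile("rain_ambience", 12.0, 400.0, 1),
    SoundProfile("engine_hum",    10.0, 350.0, 2),
    SoundProfile("dialogue",      20.0, 1500.0, 5),
    SoundProfile("alarm",          6.0, 2000.0, 4),
]

for a, b in combinations(library, 2):
    if clashes(a, b):
        print(f"potential blur: {a.name} + {b.name}")
```

Even a crude check like this makes the planned sonic environment visible before any sounds are triggered in-game: the rain and the engine hum occupy nearby frequency bands, and the dialogue and alarm both sit high in the spectrum, so each pair would need separation in frequency, panning, or timing.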
So far we have identified a number of key problems in the analysis and production of computer game audio:

• There is a general lack of functional models for the analysis of computer game audio.
• There is also a general lack of functional models for the production of game audio.
• The loss of control that a sound designer has over the playback of the audio in the gameplay of a complex game may lead to a chaotic blur of sounds which makes them lose their definition and hence their semantic value.
• When two or more sounds play simultaneously, the clarity of the mix depends on the type of sounds, which points to the nature of the relationship between encoded and embodied sounds.

Furthermore:

• Sound is often an abstraction to game designers, graphical artists, and programmers, due to a lack of consistent and communicable terminology.
The overall purpose of this chapter is therefore to present a model (Figures 3 to 8) that solves these problems and makes it possible to plan the audio layering of computer games in terms of:

• The relationship between encoded and embodied sounds
• Cognitive load