…ances and so on of language production, although language and its use is a prototypical example of the highly encoded sounds that Murch's model emphasizes.
THE COMBINED MODEL FOR GAME AUDIO
This section introduces the different parts of the
combined model for the layering of computer game
audio. The combined model makes it possible to
categorize the different sounds for any part of a
game in a number of ways. Such a categorization could span from relative dynamic range and dominant frequency areas, or "encoded sound" versus "embodied sound" (Murch, 1998), to whether a sound belongs to the diegesis of the game, is part of the interface, belongs to the activity of playing, or belongs to the setting in general. If, for example, many "encoded sounds" are used, such as spoken language in a game, it is necessary to be attentive to the total sonic environment in which these "encoded sounds" take place and to plan an acoustic niche for the dialogue, with few interfering sounds playing simultaneously within the same frequency span. If many "embodied sounds" are needed, such as music combined with ambient sounds designating the environment, it will be necessary to make them work together by shaping the sounds so that they fit and allow each other a concurrent presence.
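To make such a categorization concrete, the sketch below shows one possible representation in Python. It is illustrative only: the class names, the 0.0-1.0 encoding scale and the niche-conflict rule are assumptions made for this sketch, not part of the IEZA-framework or of Murch's model.

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    """The four IEZA categories."""
    INTERFACE = "Interface"
    EFFECT = "Effect"
    ZONE = "Zone"
    AFFECT = "Affect"

class Band(Enum):
    """Dominant frequency area of a sound."""
    BASS = "bass"
    MIDRANGE = "midrange"
    TREBLE = "treble"

@dataclass
class Sound:
    name: str
    category: Category
    band: Band
    encoding: float  # 0.0 = fully embodied (periphery), 1.0 = fully encoded (centre)

def niche_conflicts(dialogue, others):
    """Sounds that occupy the same frequency span as the dialogue and
    play at the same time would compete for its acoustic niche."""
    return [s for s in others if s.band == dialogue.band]

# Example: midrange ambience intrudes on the midrange dialogue niche.
briefing = Sound("mission briefing", Category.EFFECT, Band.MIDRANGE, 0.9)
wind = Sound("wind ambience", Category.ZONE, Band.MIDRANGE, 0.2)
print(niche_conflicts(briefing, [wind]))
```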
As Figures 1 to 3 above show, we have taken from the IEZA-framework the basic differentiation of game audio into Interface, Effect, Zone and Affect sounds. We have also kept the framework's horizontal axis, which differentiates sounds on the setting versus activity scale, and its vertical axis, which describes sound as diegetic or non-diegetic.
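If the quadrant layout of the IEZA-framework is taken as given (Effect: diegetic/activity, Zone: diegetic/setting, Interface: non-diegetic/activity, Affect: non-diegetic/setting; this assignment is read from Figures 1 to 3 and is an assumption of this sketch), a sound's position on the two axes can be derived from its category in the sketch above:

```python
def axes(sound):
    """Derive a sound's position on the two IEZA axes from its category."""
    diegetic = sound.category in (Category.EFFECT, Category.ZONE)
    activity = sound.category in (Category.EFFECT, Category.INTERFACE)
    return diegetic, activity
```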
The IEZA-framework is intact within our model; Murch's conceptual model has, however, been visually adapted. The centre of the circle equates to the left-hand foot of Murch's arch (violet/encoded) and, moving away from the centre, Murch's spectrum is traversed until the periphery of the circle, which equates to red/embodied. The more centrally a sound is placed, the higher its level of encoding; the more peripherally it is placed, the lower its level of encoding. This is a clear difference from both the IEZA-framework and Murch's conceptual model: the former does not itself allow such a visual differentiation, while the latter does not, apart from Effects, place sounds in specific categories (such as Interface, Zone and Affect), nor does it place a sound on the vertical axis of diegetic versus non-diegetic or on the horizontal axis of setting versus activity. Murch does write about the setting versus activity scale, but his conceptual model has no structure that visualizes this aspect clearly.

Combining the two models, that is, the IEZA-framework and the conceptual model, makes it easier to understand beforehand, and in a more detailed manner, what is happening in the sound environment. The sound designer does not need to use actual sounds: they may be derived from a game script or storyboard prior to production.⁴ If the sound designer places many sounds in the centre of the model, she is most likely to produce a cognitive overload for the player, because the sounds in the centre are encoded and need more intellectual processing to be meaningful and distinct (a heuristic check along these lines is sketched after the list below). Since controlling dominant frequencies is one way to distinguish one sound stream from another, we have chosen to make this quality of sound visible within the model. We have also chosen to use three basic primitives, as Figures 4 and 5 illustrate:
• A circle = a sound in which the bass frequencies are dominant
• A square = a sound in which the midrange frequencies are dominant
• A triangle = a sound in which the treble/high frequencies are dominant.
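Continuing the sketch above, the three primitives can be mapped to dominant bands, and the placement rule for the centre of the model can be turned into a simple heuristic. The shape keys, the 0.7 cut-off and the limit of three encoded sounds are illustrative assumptions, not values given by the model:

```python
# Each drawn primitive encodes the dominant frequency area of a sound.
SHAPE_TO_BAND = {
    "circle": Band.BASS,
    "square": Band.MIDRANGE,
    "triangle": Band.TREBLE,
}

def overload_risk(sounds, cutoff=0.7, limit=3):
    """Flag a design in which many sounds sit near the encoded centre;
    such sounds need more intellectual processing to stay distinct."""
    return sum(1 for s in sounds if s.encoding > cutoff) > limit
```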
These three basic primitives were chosen since they seemed natural, but this is not to say that