framework successfully in game audio courses at
the University of Skövde with good and promising
results. Nevertheless, we have also realized that
the model does not cover all the important issues
involved in creating sonic richness while at the
same time avoiding the smearing of sound all over
the sonic environment. The key problem is that
the IEZA-framework does not, in itself, produce
a visualization of the cognitive load, the relation
between the semantic value of different sounds,
the relation between encoded and embodied
sounds, the dominant frequencies of a sound file
or its loudness. Combined with Murch's (1998)
conceptual model, the IEZA-framework can be
part of a more elaborate tool for the production
and analysis of computer games. We have now
covered the first node of our combined model and
it is time to take a closer look at the second node:
Murch's conceptual model of film sound.
Before addressing Murch's conceptual model we
need to elaborate the statement that humans are
biased towards listening for voices (Chion, 1994,
p. 6). Chion states that: “Sound in film is, above
all, voco- and verbocentric because human be-
ings in their habitual behavior are as well” (p. 6).
He suggests 3 different listening modes: causal,
semantic and reduced listening. We first listen in
order to identify the cause of a sound—causal
listening—and, when identified, we listen to find
the meaning of the sound—semantic listening.
Reduced listening is a special case that is not
discussed in this chapter.
What, then, is Chion's suggestion about how
listening to a cinematic soundtrack works
with regard to the 3 different types of sound,
that is, speech, effects, and music?
If the scene has dialogue, our hearing analyzes
the vocal flow into sentences, words—hence, lin-
guistic units. Our perceptual breakdown of noises
will proceed by distinguishing sound events, the
more easily if there are isolated sounds. For a
piece of music we identify the melodies, themes,
and rhythmic patterns, to the extent that our musi-
cal training permits. In other words, we hear as
usual, in units not specific to cinema that depend
entirely on the type of sound and the chosen level
of listening (semantic, causal, reduced).
The same thing obtains if we are obliged to sepa-
rate out sounds in the superimposition and not in
their succession. In order to do so we draw on a
multitude of indices and levels of listening: dif-
ferentiating masses and acoustic qualities, doing
causal listening, and so on. (Chion, 1994, p. 45)
MURCH'S CONCEPTUAL MODEL
One central point made by Murch in his work on
the conceptual model (1998) is that just as audible
sound may be placed on a scale ranging from
approximately 20 Hz to 20,000 Hz, a sound may
also be placed on a conceptual scale from Encoded
to Embodied covering a spectrum from speech to
music via sound effects in order to avoid a
"logjam" of sounds. This dimension of film sound is
the reason for our choice of Murch's conceptual
model as the second node of our combined model
of computer game audio. The IEZA-framework
does not, in itself, categorize the different sounds
on a scale from encoded to embodied, and no refer-
ences to Murch's conceptual model of film sound
are made in Huiberts' and van Tol's article (2008).
Example from Murch (1998)
1. Violet - Dialogue
2. Cyan/Green - Linguistic/Rhythmic Effects
(e.g. footsteps, door knocks etc)
3. Yellow - Equally Balanced Effects
4. Orange - Musical Effects (e.g. atmospheric
tonalities)
5. Red - Music.
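Murch's color bands can be read as an ordered lookup table over the encoded-embodied scale. The following is a minimal sketch of that idea; the category names follow the list above, but the numeric positions (0.0 for fully encoded speech, 1.0 for fully embodied music) and the function name `classify` are our own illustrative assumptions, not values or terms given by Murch.

```python
# Murch's spectrum as an ordered table: (colour band, sound type, position).
# Positions on the encoded-embodied scale are illustrative assumptions only.
MURCH_SPECTRUM = [
    ("violet", "dialogue", 0.0),
    ("cyan/green", "linguistic/rhythmic effects", 0.25),
    ("yellow", "equally balanced effects", 0.5),
    ("orange", "musical effects", 0.75),
    ("red", "music", 1.0),
]

def classify(position):
    """Return the colour band whose position is nearest on the scale."""
    return min(MURCH_SPECTRUM, key=lambda band: abs(band[2] - position))[0]

# A mostly encoded sound falls in the violet (dialogue) band.
print(classify(0.1))
```

Representing the bands as an ordered table rather than fixed categories reflects Murch's point that sounds occupy a continuous spectrum, with sound effects mediating between the encoded and embodied poles.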