Non-Linearity and Interactivity
where the human subjective auditory perception differs greatly from the actual physical situation. Possible causes include the listener's attention, stress, auditory acuity, body sounds and resonances, hallucination, and so forth.
All this indicates that diegetic music has to be
handled on the same layer as sound effects and
definitely not on the “traditional” non-diegetic
music layer. In the gaming scenario, it falls under
the responsibility of the audio engine that renders
the scene's soundscape. Audio Application Programming Interfaces (APIs) currently in use are,
for instance, OpenAL (Loki & Creative, 2009),
DirectSound as part of DirectX (Microsoft, 2009),
FMOD Ex (Firelight, 2009), and AM3D (AM3D,
2009). An approach to sound rendering based on
graphics hardware is described by Röber, Kaminski, and Masuch (2007) and Röber (2008). A
further audio API that is especially designed for
the needs of mobile devices is PAudioDSP by
Stockmann (2007).
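As a rough illustration of what handing diegetic music to the audio engine can look like, the following C++ sketch positions a music source in the 3D scene with OpenAL; the buffer content (a generated tone standing in for streamed music), the stage coordinates, and the omitted error handling are all simplifications, not part of any particular engine.

// Minimal sketch: a diegetic music source rendered by OpenAL as part of
// the scene's soundscape. Buffer content and error handling are simplified.
#include <AL/al.h>
#include <AL/alc.h>
#include <cmath>
#include <vector>

int main() {
    // Open the default device and create a context (a real engine owns these).
    ALCdevice*  device  = alcOpenDevice(nullptr);
    ALCcontext* context = alcCreateContext(device, nullptr);
    alcMakeContextCurrent(context);

    // Placeholder audio data: one second of a 440 Hz tone at 44.1 kHz.
    // A game would stream the recorded or generated music here instead.
    const double twoPi = 6.283185307179586;
    std::vector<ALshort> pcm(44100);
    for (std::size_t i = 0; i < pcm.size(); ++i)
        pcm[i] = static_cast<ALshort>(32000.0 * std::sin(twoPi * 440.0 * i / 44100.0));

    ALuint buffer = 0;
    alGenBuffers(1, &buffer);
    alBufferData(buffer, AL_FORMAT_MONO16, pcm.data(),
                 static_cast<ALsizei>(pcm.size() * sizeof(ALshort)), 44100);

    // The diegetic source sits at the performers' position in the scene, so
    // distance attenuation and panning come from the engine, not from a mix.
    ALuint source = 0;
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, static_cast<ALint>(buffer));
    alSource3f(source, AL_POSITION, 12.0f, 0.0f, -4.0f); // hypothetical stage position
    alSourcef(source, AL_REFERENCE_DISTANCE, 2.0f);
    alSourcei(source, AL_LOOPING, AL_TRUE);
    alSourcePlay(source);

    // ... game loop would run here ...

    alDeleteSources(1, &source);
    alDeleteBuffers(1, &buffer);
    alcMakeContextCurrent(nullptr);
    alcDestroyContext(context);
    alcCloseDevice(device);
    return 0;
}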
It is not enough, though, to play the music
back with the right acoustics, panorama, and
filtering effects. Along the lines of “more real than reality”, it is often a good idea to reinforce the live impression by including a certain degree of imperfection. The wow and flutter of a record
player may cause pitch bending effects. There can
be interference with the radio reception resulting
in crackling and static noise. Not to mention the
irksome things that happen to every musician, even professionals, at live performances: fluctuations in intonation, asynchrony in ensemble playing, and wrong notes, to name just a few.
Those things hardly ever happen on CD. In the
recording studio, musicians can repeat a piece
again and again until one perfect version comes out, or until enough material is recorded to cut together a perfect version during postproduction. At live performances, however, all of this happens and cannot be corrected afterwards. Including such imperfections in the
performance of diegetic music makes for a more
authentic live impression.
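At the playback level, one simple way to approximate such imperfections is to modulate the pitch of the music source over time. The sketch below is only an illustration (the source handle and the per-frame call are assumed to come from the surrounding engine): it drives OpenAL's AL_PITCH with a slow sine for wow and a small random jitter for flutter.

// Rough sketch: wow (slow periodic drift) and flutter (fast random jitter)
// of a record player, approximated by modulating an OpenAL source's pitch.
#include <AL/al.h>
#include <cmath>
#include <cstdlib>

void applyWowFlutter(ALuint musicSource, float elapsedSeconds) {
    const float wowDepth     = 0.010f; // about +/-1% pitch drift
    const float wowRateHz    = 0.5f;   // one slow wobble every two seconds
    const float flutterDepth = 0.003f; // small, fast jitter

    const float twoPi = 6.2831853f;
    float wow     = wowDepth * std::sin(twoPi * wowRateHz * elapsedSeconds);
    float flutter = flutterDepth *
                    (2.0f * static_cast<float>(std::rand()) / RAND_MAX - 1.0f);

    // AL_PITCH of 1.0 means unmodified playback; small deviations bend the pitch.
    alSourcef(musicSource, AL_PITCH, 1.0f + wow + flutter);
}

// Called once per frame, e.g.: applyWowFlutter(recordPlayerSource, gameTime);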
In the gaming context in particular, however, this authenticity is lost when the player listens to the same piece more than once. A typical situation in a game: the player re-enters a scene several times, and the diegetic music always starts with the same piece, as if the performers had paused and waited until the player came back. This can be
experienced, for example, in the adventure game
Gabriel Knight: Sins of the Fathers (Sierra, 1993)
when walking around in Jackson Square. Such a
déjà vu effect robs the virtual world of credibility.
The performers, even if not audible, must continue playing their music, and when the player returns, he must have missed parts of it.
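One way to achieve this, sketched below under the simplifying assumption that the performers' whole set sits in a single buffer of known length, is to keep a virtual performance clock running while the player is out of earshot and to resume playback at the corresponding offset when he returns; the structure and function names are illustrative, not part of any of the APIs cited above.

// Sketch: the (inaudible) performers keep playing while the player is away.
// A virtual clock tracks how far the set has progressed, and playback is
// resumed at that offset instead of restarting the piece from the top.
#include <AL/al.h>
#include <chrono>
#include <cmath>

struct DiegeticPerformance {
    ALuint source;            // OpenAL source carrying the diegetic music
    double setLengthSeconds;  // total length of the performers' set
    std::chrono::steady_clock::time_point setStart;

    // Call once when the performance conceptually begins (e.g. on level load).
    void begin() { setStart = std::chrono::steady_clock::now(); }

    // Call when the player re-enters earshot: jump to where the performers
    // "really" are by now, so parts of the music have audibly been missed.
    void resumeForListener() const {
        double elapsed = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - setStart).count();
        double offset = std::fmod(elapsed, setLengthSeconds);
        alSourcef(source, AL_SEC_OFFSET, static_cast<ALfloat>(offset));
        alSourcePlay(source);
    }
};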
Another very common situation in which the player rehears a piece of music occurs when getting stuck in a scene for a certain time. The performers, however, play one and the same piece over and over again. In some games they start again when they reach the end; in others, the music loops seamlessly. Both are problematic because it becomes evident that there is no more music.
The end of the world is reached in some way and
there is nothing beyond. A possible solution is to extend the corpus of available pieces and go through it either sequentially or randomly, in the manner of a music box. But the pieces can still recur multiple times. In these cases it is important that the performances are not exactly identical. A radio transmission does not always crackle at the same point within the piece, and musicians try to give a better performance with each attempt. They focus on the mistakes they made last time and make new ones instead. This means that the game has to generate ever-new performances.
Examples of systems that can generate expressive performances are:

- the rule-based KTH Director Musices by Friberg, Bresin, and Sundberg (2006)
- the machine learning-based YQX by Flossmann, Grachten, and Widmer (2009)
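By way of illustration only (this is not how the cited systems work), the following sketch shows the selection side of such variety: the next piece is drawn at random while avoiding an immediate repeat, and each performance receives a fresh random seed that a renderer could use to place crackle, intonation slips, and other imperfections differently on every pass.

// Sketch: randomized piece selection without immediate repeats, plus a new
// seed per performance so that imperfections fall differently each time.
#include <cstddef>
#include <cstdint>
#include <random>

struct PerformancePlan {
    std::size_t pieceIndex;  // which piece of the corpus to perform
    std::uint32_t seed;      // drives the placement of crackle, slips, etc.
};

class DiegeticJukebox {
public:
    explicit DiegeticJukebox(std::size_t corpusSize)
        : corpusSize_(corpusSize), last_(corpusSize), rng_(std::random_device{}()) {}

    PerformancePlan next() {
        std::uniform_int_distribution<std::size_t> pick(0, corpusSize_ - 1);
        std::size_t piece = pick(rng_);
        if (corpusSize_ > 1) {
            while (piece == last_) piece = pick(rng_);  // avoid repeating the previous piece
        }
        last_ = piece;
        return { piece, static_cast<std::uint32_t>(rng_()) };
    }

private:
    std::size_t corpusSize_;
    std::size_t last_;   // starts out of range so the first draw is unconstrained
    std::mt19937 rng_;
};

A performance renderer would then consume the seed, for example to decide where a radio crackle occurs or which ensemble entry comes in slightly late, so that even a recurring piece never sounds exactly the same twice.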