Grimshaw: There are a number of ways to approach this. What will the picture be in 1 year, 10 years, 20 years? How will the technology change? What interfaces/outputs might we have? Do we need realism (and what type of realism)? What will change in the player's perception? How will sound design change? Are there ethical questions involved in biofeedback? And so on.
I'll start this off by saying that games of the future will conduct a form of dialogue between player and sound, where the sound itself becomes an active, participating character in the gameworld, working in tandem with the player to heighten his/her experience and immersion. This will be achieved through biofeedback, whereby the game constantly monitors the player's immediate affect and latent emotion (through EEG, GSR, ECG, EMG etc. -- devices such as the Nia and Emotiv headsets are tending in this direction) and responds by synthesising new sounds and processing NPC speech. In the former case, parameters of sound such as frequency, timbre, intensity and ADSR envelope will be modified; in the latter, pitch, stress, rhythm and so on will be modified (I'm imagining real-time synthesis of NPC text files with an emotive envelope/pattern applied according to player state and game context).
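To make that loop concrete, here is a minimal sketch in Python of how monitored affect might drive both synthesis parameters and NPC prosody. It assumes normalised (0.0 to 1.0) readings have already been extracted from the biosensors; every name, weighting and mapping below is a hypothetical illustration, not an existing API.

    # A minimal sketch of a biofeedback-to-sound mapping. Assumes the game
    # already exposes normalised (0.0-1.0) readings derived from EEG, GSR,
    # ECG and EMG channels; all names and constants are illustrative.

    from dataclasses import dataclass

    @dataclass
    class SynthParams:
        base_freq_hz: float   # fundamental of the synthesised sound
        intensity: float      # 0.0-1.0 output gain
        attack_s: float       # ADSR attack time in seconds
        release_s: float      # ADSR release time in seconds

    @dataclass
    class SpeechProsody:
        pitch_shift: float    # semitones relative to neutral NPC delivery
        rate: float           # 1.0 = neutral speaking rate
        tremor_depth: float   # 0.0-1.0 'worried tremor' modulation depth

    def estimate_arousal(gsr: float, heart_rate: float, emg: float) -> float:
        """Crude weighted blend of sensor channels into one arousal value."""
        return min(1.0, 0.5 * gsr + 0.3 * heart_rate + 0.2 * emg)

    def map_affect_to_sound(arousal: float) -> tuple[SynthParams, SpeechProsody]:
        """Higher arousal -> sharper envelopes, louder and higher sounds,
        and more agitated NPC speech; lower arousal -> the reverse."""
        synth = SynthParams(
            base_freq_hz=220.0 * (1.0 + arousal),        # rises with tension
            intensity=0.4 + 0.6 * arousal,
            attack_s=0.20 - 0.15 * arousal,              # faster onset when tense
            release_s=1.0 - 0.5 * arousal,
        )
        prosody = SpeechProsody(
            pitch_shift=3.0 * arousal,
            rate=1.0 + 0.25 * arousal,
            tremor_depth=max(0.0, arousal - 0.5) * 2.0,  # tremor only when high
        )
        return synth, prosody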
So, the game engine senses the player is not frightened enough? Up the 'fear' controller to alter the synthesis of sounds and add a worried tremor to NPC speech. Perhaps this should be taken the other way: the player is about to have a heart attack, so an emotion governor kicks in to calm them down by synthesising soothing sounds (after all, presumably game companies don't want to be sued). Players can be kept at just the right point on the emotional rollercoaster.
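Such an emotion governor amounts to a feedback controller that keeps measured arousal inside a safe band. A minimal proportional-control sketch, assuming the same normalised arousal estimate as above; the thresholds and gain are invented for illustration.

    # A minimal 'emotion governor' sketch: a proportional controller that
    # nudges the fear setting so measured arousal stays inside a target band.
    # All thresholds and the gain are illustrative assumptions.

    def govern_fear(fear: float, arousal: float,
                    low: float = 0.4, high: float = 0.8,
                    gain: float = 0.3) -> float:
        """Return the fear-controller setting for the next update tick.

        Above the band: back off towards soothing sounds and calmer prosody.
        Below the band: push harder (the player is not frightened enough).
        Inside the band: hold the player on the emotional rollercoaster.
        """
        if arousal > high:
            fear -= gain * (arousal - high)   # calm an over-aroused player
        elif arousal < low:
            fear += gain * (low - arousal)    # ratchet up the scares
        return max(0.0, min(1.0, fear))

Called once per update, the returned value could feed the 'fear' input of the synthesis and prosody mappings sketched earlier.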
Of course, before all this is possible, there needs to be substantial research into what it is about sound that induces fear (or happiness, sadness etc.). This will need to take in context, past experience of the player, and their culture/society (various literature reports that different nationalities find very different engine sounds exciting and sporty -- Ferrari for Italians, Porsche for Germans). The game setup menu might have faders for nationality, gender and age.
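Those faders could do no more than bias asset selection. A minimal sketch using the engine-sound example above; the asset names and weightings are invented, and a real title would need the researched per-culture data the previous paragraph calls for.

    # A minimal sketch of profile-biased asset selection for the engine-sound
    # example. Asset names and weightings are invented for illustration.

    ENGINE_SOUND_WEIGHTS = {
        # nationality fader -> weighting over 'exciting and sporty' engines
        "italian": {"ferrari_v12.wav": 0.7, "porsche_flat6.wav": 0.3},
        "german":  {"ferrari_v12.wav": 0.3, "porsche_flat6.wav": 0.7},
    }

    def pick_engine_sound(nationality: str) -> str:
        """Pick the engine sample the player's profile weights highest."""
        weights = ENGINE_SOUND_WEIGHTS.get(nationality.lower())
        if weights is None:
            return "generic_engine.wav"  # fall back when no profile data exists
        return max(weights, key=weights.get)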
This type of technology will void the requirement for game sound designers because it will be the players (their psychophysiological state) who design the sound on the fly. Some role might remain for the creation of specific sounds, but sound designers will find their role greatly diminished in the games industry. Of course, it also opens up new, creative avenues outside the industry -- the technology described above leads to the possibility of designing/creating sound by 'thinking' about it. Presumably, the most creative soundscapes of the future will be thought by the most creative minds.
O Keeffe: Perhaps in 20 years, sound will be considered less important, characterised as an interference with gameplay. In the real world, sound within urban spaces is constantly being categorised as noise. If we continue down the path of highlighting postmodern soundscapes as noisy environments, we justify silent virtual spaces: places in which to escape the sound of the real.
Grimshaw: Rather like 'silent' rooms in busy
corporate and academic campuses.
Liljedahl: There is also the concept of “perceptualization”, which is interesting in view of the very well established “visualization” and the, at least in some communities, well established “auralization”. In the future, sound, graphics and other media types will be more integrated. Today, what we see and what we hear do not necessarily match. One example is room acoustics. When