modelling. The second section takes a look at the
evolution of physical modelling including many
lessons which are to be learned from the physical
modelling of musical instruments. Some specific
techniques are discussed with extra emphasis
placed on two techniques, modal synthesis and
digital waveguide synthesis, which are particularly
useful in real-time applications. The third section
presents, in chronological order, the projects that
have made advances in the area of physical model-
ling for sound synthesis in computer games and
virtual environments. The last section considers directions that future research may take, as well as an industry perspective on the technique.
The earliest computer games to include sound
effects synthesised them using whatever hardware
was available at the time. The sounds produced
were very much influenced by the limitations
of the hardware. Although by today's standards
they could not be considered realistic, they were
marketed as such and a “drive towards realism
[…] is a trend we shall see throughout the history
of game sound” (Collins, 2008, p. 9). “By 1980,
arcade manufacturers included dedicated sound
chips known as programmable sound generators,
or PSGs into their circuit boards” (Collins, 2008,
p. 12). Early consoles also had sound chips that
developers used to synthesise sound and again
the hardware limitations influenced their work.
However, as computers became more powerful,
developers began to utilise recorded samples
in their quest for realism. Andy Farnell, author
of Designing Sound (2008), explains, “[e]arly
consoles and personal computers had synthesiser
chips that produced sound effects and music in
real-time, but once sample technology matured
it quickly took over because of its perceived
realism thus synthesised sound was relegated to
the scrapheap” (p. 298). Karen Collins (2008) gives a comprehensive account of the earliest chips that performed synthesis, up to the first systems capable of playing CD-quality samples.
When sound effects are required in a virtual
environment today, sample playback is the most
common method of producing them (McCuskey,
2003) and it is widely used for good reason. Raghu-
vanshi, Lauterbach, Chandak, Manocha, and Lin
(2007) state that the method of sample playback is
“simple and fast”, meaning it is computationally
inexpensive and straightforward to implement (p.
68). The method takes advantage of known sound
design techniques that have been refined through
a long history of use in the movie industry. In the
introduction to Real Sound Synthesis for Interac-
tive Applications, Perry Cook (2002) concedes
that, with much effort having gone into improv-
ing the sample-based approach, “the state of the
art has indeed advanced to the point of absolute
realism, at least for single sounds triggered only
once” (p. xi). However, there are many drawbacks
with sample playback and, in an interactive en-
vironment, it cannot provide “absolute realism”.
The sounds heard in reality are produced
as a result of many factors and information on
these factors is contained within the sounds. For
example, a piece of wood struck near its centre sounds different from one struck close to its edge. If struck harder, it will not only sound louder but will also have a different quality (Cook, 2002, p. xiii). When continuous contact is maintained, for example during rolling or scraping, yet another range of sounds is produced. The object used to excite the piece of wood also has an influence.
While the sonic differences can be subtle, we in-
tuitively perceive the conditions that cause them
and therefore these factors are important in creating
realistic audio. One recording of a piece of wood
being struck may be enough to provide realistic
audio in a pre-determined scenario but it will be
inadequate in a fully interactive environment.
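To make these dependencies concrete, the short Python sketch below is a minimal, illustrative modal-synthesis model of a struck bar, the kind of technique discussed earlier in this chapter. It is not taken from any of the cited systems: the mode frequencies, decay times and sinusoidal mode shapes are assumed values, chosen only to show how strike position and force can be mapped to the amplitudes of individual modes.

import numpy as np

def struck_bar(strike_pos, strike_force, dur=1.0, sr=44100):
    # Minimal modal-synthesis sketch of a struck bar.
    # strike_pos: normalised strike position along the bar (0..1).
    # strike_force: normalised strike force (0..1).
    # The mode frequencies, decay times and sinusoidal mode shapes below
    # are illustrative assumptions, not measurements of a real object.
    t = np.arange(int(dur * sr)) / sr
    freqs = [440.0, 1210.0, 2380.0, 3900.0]   # modal frequencies (Hz)
    decays = [0.8, 0.5, 0.3, 0.2]             # decay time constants (s)
    out = np.zeros_like(t)
    for n, (f, tau) in enumerate(zip(freqs, decays), start=1):
        # Striking near a node of a mode excites that mode only weakly, so
        # each mode is weighted by its (idealised) shape at the strike point.
        weight = abs(np.sin(n * np.pi * strike_pos))
        # A harder strike is louder and, in this sketch, slightly brighter.
        gain = strike_force * (1.0 + 0.5 * strike_force * n / len(freqs))
        out += weight * gain * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
    return out / max(np.max(np.abs(out)), 1e-9)

In this toy model, striking at strike_pos = 0.5 silences the even modes, while moving the strike point towards the edge brings them back in, mirroring the position-dependent behaviour described above.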
A partial solution to this problem is to use
multiple recordings. We could, for example, record
a block of wood being struck at various points
with varying forces using different objects. When
its virtual counterpart is excited, an algorithm
can then select the most suitable sample to play back, or interpolate between the most appropriate samples. However, this approach can quickly become impractical, because covering every combination of strike position, force and exciting object multiplies the number of recordings that must be made and stored.
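As a rough illustration of the selection logic this implies, the Python sketch below picks, from a hypothetical library of recordings tagged with strike position and force, the closest matches to a requested excitation and crossfades them with inverse-distance weights. The library structure, field names and weighting scheme are assumptions made for this example, not a description of any particular engine.

import numpy as np

# Hypothetical sample library: each recording is tagged with the strike
# position and force at which it was captured. Silent placeholder buffers
# stand in for the recorded audio.
sample_library = [
    {"pos": 0.5, "force": 0.3, "audio": np.zeros(44100)},
    {"pos": 0.5, "force": 0.9, "audio": np.zeros(44100)},
    {"pos": 0.1, "force": 0.3, "audio": np.zeros(44100)},
]

def select_and_blend(pos, force, library, k=2):
    # Distance of each tagged recording from the requested excitation.
    dists = np.array([np.hypot(s["pos"] - pos, s["force"] - force)
                      for s in library])
    # Indices of the k closest recordings.
    nearest = np.argsort(dists)[:k]
    # Inverse-distance weights, normalised so the crossfade sums to one.
    weights = 1.0 / (dists[nearest] + 1e-6)
    weights /= weights.sum()
    # Weighted mix of the chosen recordings (assumed equal length and rate).
    return sum(w * library[i]["audio"] for w, i in zip(weights, nearest))

# Example: an impact near the edge of the object, struck fairly hard.
impact_sound = select_and_blend(pos=0.15, force=0.7, library=sample_library)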