aiming and shooting at targets, painting gestural curves, or nudging objects of different types in a two- or three-dimensional scene.
Although the player triggers each event manually, he does not have to be the only one playing. An accompaniment can run autonomously in the background, like a pianist who goes along with a singer or a rock band that sets the stage for a guitar solo. Repetitive structures (ostinato, vamp, riff) are therefore often applied. Such endlessly looping patterns can become tedious over a longer period. Variation techniques like those explained in the previous section can introduce more diversity. Alternatively, non-repetitive material can be applied. Precomposed music is of limited length and must therefore be sufficiently long; generated music, by contrast, is subject to no such restriction. However, non-repetitive accompaniment comes with a further problem: it lacks musical predictability and thereby hampers a player's smooth performance. This can be avoided. Repetitive schemes can change after a certain number of iterations (for example, play riff A four times, B eight times, and C four times). The changes can be prepared in such a way that the player is warned; a well-known example is the drum roll crescendo that erupts in a climactic crash. Furthermore, tonally close chord relations can relax strict harmonic repetition without losing the predictability of appropriate pitches.
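
Such a scheme is simple enough to sketch in code. The following Python fragment is only an illustration of the idea; the riff names and the play_pattern and play_fill callbacks are assumptions made for the example, not taken from any actual game:

import itertools

RIFF_SCHEME = [("A", 4), ("B", 8), ("C", 4)]   # (riff name, number of repetitions)

def run_accompaniment(play_pattern, play_fill):
    """Cycle through the scheme endlessly, warning the player before each change."""
    for riff, repeats in itertools.cycle(RIFF_SCHEME):
        for _ in range(repeats):
            play_pattern(riff)   # one full iteration of the riff
        play_fill()              # warning cue, e.g. a drum roll crescendo into a crash

Because the fill always precedes a pattern change, the player can anticipate the transition even though the overall material keeps moving forward.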
The player can freely express himself against this background. But should he really be allowed to do anything? If so, should he also be allowed to perform badly and interfere with the music? In order not to discourage a proportion of the customers, lower difficulty settings can be offered. The freedom of interaction can be restricted to only those possibilities that yield pleasant, satisfactory results. A context-sensitive component in the event generation can act just like a driving aid system that prevents some basic mistakes: pitch values can automatically be aligned to the current diatonic scale in order to harmonize, and a time delay can be used to fit each event perfectly to the underlying meter and rhythmic structure. Advanced difficulty settings can be like driving without such safety systems; these are most interesting for trained players who want to experiment with a bigger range of possibilities.
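
As a rough illustration of such a driving aid, the following Python sketch snaps incoming player events to a diatonic scale and to a metric grid on an assisted difficulty setting, and passes them through unchanged otherwise. The Event type, the C major scale constant, and the sixteenth-note grid are assumptions made for the example:

from dataclasses import dataclass

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]   # pitch classes of the current diatonic scale

@dataclass
class Event:
    pitch: int    # MIDI pitch number
    time: float   # onset time in beats

def snap_pitch(pitch, scale=C_MAJOR):
    """Move the pitch to the nearest pitch class of the scale (circular distance)."""
    pc = pitch % 12
    nearest = min(scale, key=lambda s: min(abs(s - pc), 12 - abs(s - pc)))
    delta = nearest - pc
    if delta > 6:
        delta -= 12
    elif delta < -6:
        delta += 12
    return pitch + delta

def snap_time(time, grid=0.25):
    """Shift the onset to the nearest point of the rhythmic grid (here sixteenths)."""
    return round(time / grid) * grid

def correct(event, assisted):
    if not assisted:   # advanced difficulty: no safety systems
        return event
    return Event(snap_pitch(event.pitch), snap_time(event.time))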
Interaction with high-level structures is less direct. The characteristic feature of this approach is the autonomy of the music: it plays back by itself and reacts to user behaviour. While the previously described musical instruments are perceived rather as tool-like objects, in this approach the impression of a musical diegesis, a virtual world filled with entities that dwell there and react and interact with the player, is much stronger. User interaction affects the arrangement of the musical material or the design principles which define the way the material is generated. In Amplitude (in standard gameplay mode) it is the arrangement. The songs are divided into multiple parallel tracks. A track represents a conceptual aspect of the song, like bass, vocals, synth, or percussion, and each track can be activated for a certain period by passing a skill test. This test, too, derives from melodic and rhythmic properties of the material to be activated. The goal is to activate them all.
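
This arrangement logic can be outlined in a few lines. The sketch below is not Harmonix's implementation, only a minimal model of the mechanic just described, with parallel named tracks that are each unmuted for a limited period once their skill test is passed:

from dataclasses import dataclass

@dataclass
class Track:
    name: str                   # conceptual aspect: "bass", "vocals", "synth", ...
    active_until: float = 0.0   # song position (in beats) until which it sounds

class Arrangement:
    def __init__(self, tracks):
        self.tracks = tracks

    def on_skill_test_passed(self, name, now, duration):
        """A passed test activates the named track for a limited period."""
        for track in self.tracks:
            if track.name == name:
                track.active_until = now + duration

    def audible(self, now):
        """The tracks currently contributing to the mix."""
        return [t.name for t in self.tracks if t.active_until > now]

song = Arrangement([Track("bass"), Track("vocals"), Track("synth"), Track("percussion")])
song.on_skill_test_passed("bass", now=0.0, duration=16.0)   # bass sounds for 16 beats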
The music in Amplitude is precomposed and, thus, relatively invariant. Each run ultimately leads to the same destination music. Other approaches generate the musical material just in time while it is performed. User interaction affects the parameterization of the generation process, which results in different output. For this constellation of autonomous generation and interaction, Chapel (2003) coined the term Active Musical Instrument, an instrument for real-time performance and composition that actively interacts with the user: “The system actively proposes musical material in real-time, while the user's actions [...] influence this ongoing musical output rather than have the task to initiate each sound” (p. 50). Chapel states that an Active Instrument can be constructed around any generative algorithm.
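
Taking that statement at face value, an Active Instrument can be sketched around even a trivial generative core. In the following Python outline (a hypothetical stand-in using a random walk, with all names invented for the example), the generator proposes material on every time step while user actions merely adjust its parameters rather than triggering individual sounds:

import random

class ActiveInstrument:
    """Generation runs by itself; the user only steers its parameters."""

    def __init__(self):
        self.params = {"density": 0.5, "register": 60, "step": 2}

    def set_param(self, name, value):
        # called from the controller/UI instead of triggering notes directly
        self.params[name] = value

    def tick(self):
        """Called on every time step; autonomously proposes a pitch or a rest."""
        if random.random() > self.params["density"]:
            return None                               # rest
        self.params["register"] += random.randint(-self.params["step"],
                                                   self.params["step"])
        return self.params["register"]                # MIDI pitch to be played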
The first such instrument was developed by
Chadabe (1985). While music is created autono-