revealed to them in novel ways, as with a mirror. It is clearly participatory, can lead to novelty through interaction, and is autonomous in its capability to independently infer and reproduce style. The OMax system of Assayag et al. (2006) uses a similar framework of behavioural modelling, but is more geared towards the construction of improvising behaviours beyond those gathered from a performer in real time. As such it can also exhibit leadership.
In terms of our PQf wiring diagrams, such systems are complete Live Algorithms (Fig. 6.1H), typically operating in MIDI or another symbolic music domain: the f system operates directly on such symbolic data, in tandem with some kind of stored representation of a responsive behavioural strategy, such as a Markov model. Note that here, as in other cases, the symbolic form of the data flows p and q means that f can easily be simulated in simpler virtual environments. This can be practical for training purposes.
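As a minimal sketch of such a stored behavioural strategy, the following illustrates a first-order Markov model over MIDI pitch numbers: transitions are counted from an input stream (the p flow) and a stylistically related response is generated by walking the transition table (the q flow). All names and the toy pitch sequence are illustrative, not drawn from any of the systems discussed.

```python
import random
from collections import defaultdict

def train_markov(pitches):
    """Count first-order transitions between MIDI pitch numbers."""
    transitions = defaultdict(list)
    for a, b in zip(pitches, pitches[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, rng=random.Random(0)):
    """Walk the transition table to imitate the learned style."""
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:  # dead end: restart from any known pitch
            choices = list(transitions)
        out.append(rng.choice(choices))
    return out

# Hypothetical pitch stream "heard" on the p flow
heard = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62]
model = train_markov(heard)
response = generate(model, start=60, length=8)  # emitted on the q flow
print(response)
```

Because the representation is purely symbolic, exactly such an f can be exercised against recorded or synthetic streams offline, which is the simulation-for-training convenience noted above.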
A number of related systems provide frameworks that straddle the range of behaviours from shadowing to negotiation. Research into granular audio analysis and resynthesis offers a lower-level alternative to MIDI and introduces timbral information to an agent's perceptual world. Casey (2005) proposes a method for dissecting sequences of audio into acoustic lexemes: strings of short timbral/tonal categories. Based on this principle, Casey's Soundspotter system (Casey 2009) can be used to match incoming audio from one source with pre-analysed audio from another, offering rich creative potential. Schwarz's CataRT system uses a similar mechanism, providing a scatter-plot interface to a corpus of pre-analysed audio data (Schwarz et al. 2006).
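The matching step underlying both systems can be sketched as nearest-neighbour retrieval in a feature space: each corpus unit is reduced to a short feature vector, and each incoming analysis frame selects the closest unit for playback. This is a simplified illustration, not the actual Soundspotter or CataRT implementation; the two-dimensional features and toy values are assumptions for clarity.

```python
import math

def nearest_unit(frame, corpus):
    """Index of the corpus unit whose feature vector is
    closest (Euclidean distance) to the incoming frame."""
    return min(range(len(corpus)),
               key=lambda i: math.dist(frame, corpus[i]))

# Toy corpus: each unit is a feature vector, e.g. (loudness, brightness)
corpus = [
    [0.1, 0.2],  # unit 0: quiet, dark
    [0.9, 0.8],  # unit 1: loud, bright
    [0.5, 0.5],  # unit 2: medium
]

# Feature frames analysed from incoming live audio
incoming = [[0.15, 0.25], [0.8, 0.7], [0.55, 0.45]]
playlist = [nearest_unit(f, corpus) for f in incoming]
print(playlist)  # → [0, 1, 2]
```

Swapping the corpus while keeping the matcher fixed is what allows input from one source to be shadowed by material from another.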
In its raw form, Soundspotter offers a powerful new kind of shadowing (more powerful than the MIDI domain, given the kinds of timbral transformation and within-note control it allows), and can be considered more a novel timbral effect or creative tool than a Live Algorithm. This fits the scheme of Fig. 6.1E. The Soundspotter framework, however, provides a firm foundation for more generative and interactive use, as demonstrated in Frank, developed by Plans Casal and Morelli (2007), which introduces a generative process based on a coevolutionary algorithm, effectively introducing a novel f operating on feature data. As with MIDI data, here the data flows p and q take the form of (lower-level) symbolic data (lexical, in Casey's terms; Casey 2005), meaning that there is a convenient model for embedding different f's in a stable musical context. Although Frank does not directly map input to output, it is able to take advantage of the shadowing nature of the Soundspotter system, for example by giving the impression of echoes of musical activity from the audio input. Britton's experiments with chains of feedback in CataRT have likewise explored the generative capabilities inherent in Schwarz's concatenative synthesis framework (Schwarz et al. 2006).
Thus, whilst MIDI is a well-established domain based on musical notation in the Western music tradition, timbral analysis and acoustic lexemes indicate new ways for music to be transformed into a conceptual space and then retransformed into sound. These principles of transformation are key to the formulation of a Live Algorithm, central to which is the identification and isolation of an abstract nested behavioural module, f, which enjoys some degree of transferability between contexts.