onto a large display and the audio output is channeled through multiple speakers.
In addition to control via multi-touch, the system responds to Open Sound Control
(OSC) [65] messages generated from musical events created on another computer
and sent over the network. In turn, fluid vectors and various fluid parameters
can be transmitted wirelessly via OSC to influence the composition.
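To make this kind of OSC coupling concrete, the following Python sketch (using the
python-osc library) shows how fluid vectors and a simulation parameter might be
transmitted to a composition machine on the local network. The address patterns,
host, and parameter names are illustrative assumptions rather than details of the
installation itself.

    # Hedged sketch: the OSC addresses ("/fluid/vector", "/fluid/viscosity"),
    # host, and port below are assumptions for illustration, not the values
    # used by the actual installation.
    from pythonosc.udp_client import SimpleUDPClient

    # Composition machine assumed to be listening elsewhere on the network.
    client = SimpleUDPClient("192.168.1.20", 9000)

    def send_fluid_state(x, y, dx, dy, viscosity):
        """Broadcast one fluid vector and a simulation parameter as OSC messages."""
        # OSC arguments are sent as a flat list of floats.
        client.send_message("/fluid/vector", [float(x), float(y), float(dx), float(dy)])
        client.send_message("/fluid/viscosity", float(viscosity))

    # Example: transmit a single vector sampled from the simulation grid.
    send_fluid_state(0.25, 0.75, 0.01, -0.02, 0.0005)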
13.6 The Annular Genealogy Project
Annular Genealogy is a more ambitious project based on generative systems that in-
volves multi-user interaction. It is an interactive multimedia artwork for two per-
formers using multi-channel speakers, a projector, and a tablet computer. The audio
and visual components of the artwork explore feedback processes that the perform-
ers interactively shape into appealing, transient structures. The composition engine
generates output via a stochastic sequencer that uses Brownian motion as a guiding
metaphor. Similarly, the visualization engine, based on the Fluid Automata system,
depicts colored fluid energy as a representation of dynamic, ephemeral structures.
In addition to exploring these feedback processes independently of each other, each
engine also directly influences the other via networked communication: both the
visual and audio processes broadcast data via OSC messages which then influence
various parameters of the composition and/or visualization. Finally, even the physi-
cal interactions are fed into the generative system as contact microphones are used
to pick up the tapping and other ambient sounds made during the interaction. The
performance aims to bring the layers of feedback into a cohesive compositional
experience. These interconnected feedback layers include: the generation of new
musical motifs from the processing of the output sound; the
generation of visual forms from the processing of the output graphics; the vector
positions that govern the displacement of the visual forms used as inputs to control
music parameters; and the sequencing parameters controlling the generation of the
composition used as inputs to control image processing parameters [26].
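To illustrate the sequencing idea, the sketch below approximates a Brownian-motion
sequencer as a bounded Gaussian random walk over pitch and note density, broadcasting
each step over OSC so that the same values can also drive the visualization. The
mapping, address patterns, and ranges are assumptions chosen for illustration; they
are not taken from the Annular Genealogy implementation.

    # Hedged sketch of a Brownian-motion-style stochastic sequencer; the OSC
    # addresses and parameter ranges are assumptions, not the artwork's own.
    import random
    import time

    from pythonosc.udp_client import SimpleUDPClient

    visual_engine = SimpleUDPClient("127.0.0.1", 9001)  # assumed host and port

    def clamp(value, lo, hi):
        return max(lo, min(hi, value))

    def run_sequencer(steps=64, step_seconds=0.25):
        """Walk pitch and note density with small Gaussian increments (a discrete
        stand-in for Brownian motion) and broadcast each sounding step via OSC."""
        pitch = 60.0     # start near middle C (MIDI note number)
        density = 0.5    # probability that a given step actually sounds
        for _ in range(steps):
            # Brownian-style increments: small, zero-mean Gaussian steps.
            pitch = clamp(pitch + random.gauss(0.0, 1.5), 36.0, 84.0)
            density = clamp(density + random.gauss(0.0, 0.05), 0.0, 1.0)
            if random.random() < density:
                # The values that trigger a note also feed the visualization,
                # closing one of the feedback loops described above.
                visual_engine.send_message("/seq/pitch", pitch)
                visual_engine.send_message("/seq/density", density)
            time.sleep(step_seconds)

    if __name__ == "__main__":
        run_sequencer()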
Other recent multimedia installations have also featured generative compositions
that make use of feedback mechanisms between the audio and visual components.
For instance, Karen Curley's Licht und Klang is an audio-visual installation that
generates sounds via optical sensors that use the refractions of light through oil and
water as inputs into sound generation software [18]. A work by Joel Ryan and Ray
Edgar called LINA features both musical and visual output based on mappings from
a single CA system [14, 21]. Various electro-acoustic ensembles have explored the
use of networked feedback as a tool for improvised performance. Most famously, the
new media ensemble, The Hub, creates multimedia performances based on sets of
rules that transform signals passed between performers and that are then presented
in aural and visual domains [35]. Annular Genealogy similarly creates synaesthetic
output based on a syncretic fusion of audio and visual feedback loops.
By supplying a multi-touch and live coding environment as an interface to, and
influence on, the generative processes, another layer of feedback is added in which
the performer responds to and shapes the multimedia output. That is, the performers
 