are participants in a compositional process. The generative software serves to create some structures independently of the performers; it is the role of the performers to guide the generative processes toward more compositionally interesting output and away from output that is overly repetitive, monochromatic, garish, or otherwise less satisfactory. Likewise, the audio and visual engines, via the various feedback processes, continually push against the explicit control of the performers. Overall, the composition is defined by a network of nested feedback loops that link the performer and the algorithm to create an inherent aesthetic tension between the generative and the interactive, the performed and the composed, the random and the intended. Figure 13.10 shows audience members inside an installation of Annular Genealogy in the AlloSphere Research Facility, an immersive CAVE-like environment housed at UC Santa Barbara [2].
While some of the results of interconnecting multiple feedback layers are unpredictable, the performers nonetheless begin to develop an intuition for how their actions will update the overall composition. For example, while there is no direct mapping between the visualization data and the resulting changes to the compositional structures, after some experience with the iPad interface it becomes clear that certain gestures during certain kinds of passages produce a particular shaping of the composition. It was also interesting to re-conceive the performers' role as "guiders" of aesthetics, rather than as creators. A direction for future versions of the artwork would be to highlight more explicitly the effect that an interaction has as it is transmuted from one medium to the other.
Fig. 13.10 Photograph of viewers wearing 3D active stereo glasses within an installation of
Annular Genealogy inside the AlloSphere Research Facility at UC Santa Barbara