Given this, we believe it is currently appropriate to study stakeholder groups separately, as we do in the following sections.
1.4 Observer Issues with the General Public
We introduce here three notions, namely essential behaviours, the humanity gap and software accounting for its actions. We believe these are important in understanding how people generally react to the idea of software being creative, and thus are important in managing and shaping those reactions. To end the section, we present a case study in handling public perception of creativity in software, and we introduce another notion, namely that of accountable unpredictability.
A working definition of the field of Computational Creativity research, as a subfield of Artificial Intelligence research, given in [28] is as follows:
The philosophy, science and engineering of computational systems which, by taking on
particular responsibilities, exhibit behaviours that unbiased observers would deem to be
creative.
While this definition is not universally accepted (with a challenge to focus on system-level creativity rather than individual responsibilities given in [29]), variations of it have been used to describe the field for many years.
The usage of the word 'unbiased' in the above definition hints at a problem encountered in evaluating projects where generative software produces artefacts (poems, paintings, sonatas, recipes, theorems, etc.) for human consumption. In particular, people generally have natural biases against, but also occasionally in favour of, artefacts produced by computers over those produced by people. Notably, negative, so-called 'silicon' biases have been observed under experimental conditions [30, 31]. Hence, in stipulating that observers must be unbiased, the definition above emphasises a scientific approach to evaluating progress in the building of creative systems, whereby experimental conditions are imposed to rule out, or otherwise cater for, such biases. One such experimental setup is the Turing-style comparison test,
where computer-generated and human-produced artefacts are mixed and audience
members make choices between them with zero context given about the processes
involved in their production. It is seen as a milestone moment if audiences cannot
tell the difference between the artefacts produced by people and those produced by
a computer. We believe there are many problems in the application of such tests in
the general context of presenting the processing and products of creative software,
as expanded in the subsections below.
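
To make the comparison-test criterion more concrete, the following sketch shows one way the outcome of such a forced-choice study might be checked against chance-level guessing. This is an illustrative sketch only: the sample figures are invented, and the use of an exact two-sided binomial test (and the helper name binomial_two_sided_p) is our assumption rather than a procedure prescribed above.

import math

def binomial_two_sided_p(successes: int, trials: int, p: float = 0.5) -> float:
    # Exact two-sided binomial test: sum the probabilities of all outcomes
    # that are no more likely than the observed one under chance guessing.
    def prob(k: int) -> float:
        return math.comb(trials, k) * p ** k * (1 - p) ** (trials - k)
    p_obs = prob(successes)
    total = sum(prob(k) for k in range(trials + 1) if prob(k) <= p_obs + 1e-12)
    return min(1.0, total)

# Invented example data: in 100 paired trials, audience members picked out
# the human-produced artefact correctly 54 times.
correct, trials = 54, 100
p_value = binomial_two_sided_p(correct, trials)
print(f"correct identifications: {correct}/{trials}, p = {p_value:.3f}")

A non-significant result of this kind shows only that the audience's discrimination did not exceed chance in that particular sample; it does not by itself establish that the artefacts are indistinguishable, which is in keeping with the caution expressed above about such tests.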