therefore imagine an experiment where we present the processes and products from
creative software to different stakeholder groups and assess their reaction to see if
there is indeed a difference in how different groups react, learning from analyses
of the results. Hypothesis 2 encompasses much of our philosophical position on the
notion of creativity being essentially contested and secondary in nature. One can
imagine restricting participants in an experiment to fairly constrained groups, and
testing whether there is general (healthy) disagreement about the nature of creativity
in people and software or not, and further testing whether there is more consensus
about software being uncreative. To properly test Hypothesis 2, we would need to
ask participants about the essential behaviours—such as intentionality, learning and
reflection—they perceive to be taking place in software and see how it affects their
perception of uncreativity in the system.
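As a back-of-the-envelope sketch of how such disagreement might be quantified (all group names and ratings below are hypothetical, not drawn from any actual study), within-group consensus could be measured as the spread of Likert-scale creativity ratings:

```python
from statistics import stdev

def disagreement(ratings):
    """Spread of 1-7 Likert ratings of perceived creativity.

    Higher values indicate less consensus within the group.
    """
    return stdev(ratings)

# Hypothetical responses to "how creative is the software?" (1 = not at all, 7 = very)
artists = [2, 6, 1, 7, 3, 5]    # wide (healthy) disagreement
engineers = [2, 3, 2, 3, 2, 3]  # near consensus that the software is uncreative

print(disagreement(artists) > disagreement(engineers))  # -> True
```

Comparing such spread statistics across stakeholder groups would indicate whether disagreement about creativity is general or concentrated in particular groups.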
Our third hypothesis makes a bold statement: that blind comparison tests damage
the long-term goal of embedding creative software in society, by emphasising the
evident humanity gap. If this effect is true, it would be borne out by a Turing-style test
where, when people are told that it was software that produced an artefact that they
particularly liked, they are then asked whether their perception of the creative
act and/or the artefact has changed in light of the new knowledge. More pointed
questions about the nature of any change in perception could lead to insights about
how to manage the humanity gap in future projects. This would lead into an
experiment to address Hypothesis 4, where computer-generated artefacts were
presented as re-imagined pieces with specific management of the relative lack of
humanity in the
generation of the artefacts. The re-imagining would specifically include commen-
taries and other framing information produced by the creative system. If Hypothesis
4 is correct, people would appreciate the re-imagined versions of artefacts more than
those presented merely as computer-generated versions from the human oeuvre.
By proposing that random number generation detracts from an experience of a
creative act, whereas more accountable unpredictability can benefit the experience,
Hypothesis 5 is more specific than those preceding it. We can imagine an experiment
where one set of participants are told that a particularly impressive creative act (in
terms of the processing performed and/or the resulting artefacts) was because of a
random event, and another set are given interesting framing information about what
led—in a non-random way—to the same unpredictably good creative act. If the latter
group appreciated the creative act and its results more than the former group, the truth
of the hypothesis would be upheld.
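A minimal sketch of how the two groups' appreciation ratings might be compared is given below, using a one-sided permutation test; the group names and data are hypothetical, and the significance threshold is an assumption for illustration:

```python
import random
from statistics import mean

def permutation_p(group_a, group_b, n_iter=10_000, seed=0):
    """One-sided permutation test: estimate the chance of a mean
    difference at least as large as the observed one arising from
    random relabelling of participants."""
    rng = random.Random(seed)
    observed = mean(group_a) - mean(group_b)
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = mean(pooled[:len(group_a)]) - mean(pooled[len(group_a):])
        if diff >= observed:
            hits += 1
    return hits / n_iter

# Hypothetical appreciation ratings (1-10) of the same creative act:
framing_group = [8, 7, 9, 8, 7, 9, 8]  # given accountable framing information
random_group = [5, 6, 5, 7, 6, 5, 6]   # told the act arose from a random event

print(permutation_p(framing_group, random_group) < 0.05)  # -> True
```

A result below the chosen threshold would support the hypothesis that accountable unpredictability is appreciated more than randomness.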
We have already started work on testing Hypothesis 6, i.e., that the formalism
presented in [47] can capture notions of progress when building creative systems.
That is, we have used the formalism to capture abstracted timelines leading to the
building of certain creative systems, and timelines where that software operates
and produces artefacts of value. However, to convince the Computational Creativity
researcher stakeholders of the value of the formalism, we need to work with them
to capture the essence of their approaches to implementing and operating creative
software. Moreover, our audience evaluation model is far from complete. We plan
to employ the criteria specified in [55] for more fine-grained evaluations of the
quality, novelty and typicality of artefacts. We will also import audience reflection