now argue that the software shows some degree of appreciation when it paints. That
is, it appreciates the emotion being expressed by the sitter, and it has an appreciation
of the way in which its painting styles can be used to possibly heighten the emotional
content of portraits.
1.4.3 Scene Construction
Referring back to the creativity tripod described in the guiding principles above,
we note that through the non-photorealistic rendering and the emotional modelling
projects, we could claim that the software has both skill and appreciation. Hence,
for us to argue in our own terms that the software should be considered creative,
we needed to implement some behaviours which might be described as imaginative.
To do so, we took further inspiration from Cohen's AARON system, specifically its
ability to construct the scenes that it paints. It was our intention to improve upon
AARON's scene generation abilities by building a teaching interface to The Painting Fool that allows people to specify the nature of a generic scene, from which the software can then produce instantiations. As described below, we have experimented with numerous techniques in order to provide people with a range of methods with which to train the software. These include AI methods such as evolutionary search and constraint solving; exemplar-based methods, where the user teaches the software by example; and third-party methods such as context-free design grammars for generating parts of scenes.
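To illustrate the third-party route, the general shape of a context-free design grammar can be sketched as follows. The rule names, productions, and expansion strategy here are hypothetical illustrations of the technique, not taken from The Painting Fool itself:

```python
import random

# Hypothetical grammar: non-terminals expand into scene parts; symbols
# absent from the table ("sky", "facade", "roof") are terminal elements.
GRAMMAR = {
    "scene": [["sky", "skyline"]],
    "skyline": [["building"], ["building", "skyline"]],
    "building": [["facade", "roof"]],
}

def expand(symbol, rng, depth=0, max_depth=8):
    """Recursively expand a symbol into a list of terminal scene elements."""
    rules = GRAMMAR.get(symbol)
    if rules is None:
        return [symbol]  # terminal: an actual scene element
    # Near the depth limit, prefer the shortest production so expansion halts.
    production = min(rules, key=len) if depth >= max_depth else rng.choice(rules)
    parts = []
    for s in production:
        parts.extend(expand(s, rng, depth + 1, max_depth))
    return parts

rng = random.Random(0)
print(expand("scene", rng))
```

Because `skyline` may re-invoke itself, each run with a different seed yields a skyline of a different number of buildings, which is the appeal of grammars for generating repetitive scene parts.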
We describe a scene as a set of objects arranged prior to the production of a
painterly rendition. This could be the arrangement of objects for a still life, the orchestration of people for a photograph, or the invention of a cityscape. In practical terms, this entails the generation of a segmentation prior to its being rendered with simulated paints and pencils. This problem is most naturally split into, firstly, the generation of the overall placement of elements within a scene (for instance, the positions of trees in a landscape) and, secondly, the generation of the individual scene elements (the trees themselves, composed of segments for their trunks, their leaves, and so on). While this split is appealing, we did not develop separate techniques for each aspect. Instead, we implemented a layering system whereby each segment of one segmentation can be repeatedly replaced by potentially multiple segments, and any segmentation generation technique can be used to generate the substitutions. This adds much power and, as the example pictures below show, allows for the specification of a range of different scene types.
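The layering idea can be sketched in miniature as follows: each segment is handed to a generator that replaces it with finer segments, and the refinement step can be applied again to the result. The `Segment` class, the generator signature, and the `tree` example are hypothetical illustrations, not The Painting Fool's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Segment:
    label: str
    # bounding box within the scene
    x: float
    y: float
    w: float
    h: float

def refine(segmentation: List[Segment],
           generators: Dict[str, Callable[["Segment"], List["Segment"]]]
           ) -> List[Segment]:
    """Replace each segment with its generator's output; keep it otherwise."""
    out: List[Segment] = []
    for seg in segmentation:
        gen = generators.get(seg.label)
        out.extend(gen(seg) if gen else [seg])
    return out

# Example generator: split a 'tree' segment into trunk and leaves segments.
def tree_parts(seg: Segment) -> List[Segment]:
    trunk = Segment("trunk", seg.x + seg.w * 0.4, seg.y, seg.w * 0.2, seg.h * 0.5)
    leaves = Segment("leaves", seg.x, seg.y + seg.h * 0.4, seg.w, seg.h * 0.6)
    return [trunk, leaves]

scene = [Segment("tree", 0, 0, 10, 20), Segment("sky", 0, 0, 100, 50)]
refined = refine(scene, {"tree": tree_parts})
print([s.label for s in refined])  # → ['trunk', 'leaves', 'sky']
```

Because `refine` takes any mapping from labels to generators, repeated calls can substitute segments at each layer, and any segmentation generation technique can serve as a generator, which is the source of the flexibility claimed above.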
Our first exploration of scene generation techniques involved evolving the placement of scene elements according to a user-defined fitness function. Working with the cityscape scene of the tip of Manhattan as an inspiring example (in the words of Ritchie (2007)), we defined a fitness function based on seven correlations between the parameters defining a rectangle, over the set of rectangles forming the cityscape scene. For instance, we specified that there needed to be a positive correlation between a building's height and width, so that the rectangles retained the correct proportions. We similarly specified that the distance of a rectangle from the centre of