Fig. 11.1 Examples of images created using an Evolutionary Engine and heuristic AJS
Some constraints were applied to the different formula components so as to explore these ideas in an evolutionary context, in the following way:
\[
\mathit{fitness} = \frac{\min(\alpha, CV)^{a}}{\max\bigl(\beta,\, CP(t_1) \times CP(t_0)\bigr)^{b} \times \max\Bigl(\gamma,\, \frac{CP(t_1) - CP(t_0)}{CP(t_1)}\Bigr)^{c}}
\tag{11.2}
\]
where α, β and γ are constants defined by the user.
These constraints are necessary to ensure that the evolutionary algorithm does not focus exclusively on one of the components of the formula. This could make it converge to images with maximum visual complexity (e.g. white noise images), disregarding entirely the processing complexity estimates, or to images with minimal processing complexity estimates (e.g. pure white). It was not necessary to make additional changes to prevent the situation where CP(t1) ≈ 0, because these images have very low fitness and are, therefore, already avoided by the evolutionary algorithm.
It is important to notice that the situations where CP(t1) ≈ 0 or CP(t1) − CP(t0) ≈ 0, although theoretically possible, never occurred when using natural imagery.
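The guarded fitness of Eq. (11.2) can be computed directly from the three complexity estimates. The Python sketch below only illustrates the formula as reconstructed above; it is not the implementation used in the reported experiments. The values cv, cp_t0 and cp_t1 are assumed to be supplied by the system's complexity estimators, and all constant values shown are placeholders.

```python
def fitness(cv, cp_t0, cp_t1,
            a=1.0, b=1.0, c=1.0,
            alpha=1.0, beta=0.01, gamma=0.01):
    """Constrained fitness of Eq. (11.2).

    cv                 -- visual complexity estimate, CV
    cp_t0, cp_t1       -- processing complexity estimates, CP(t0) and CP(t1)
    a, b, c            -- exponents of the formula
    alpha, beta, gamma -- user-defined constants (placeholder values here)
    """
    # min(alpha, CV)^a caps the reward for raw visual complexity, so that
    # maximally complex images (e.g. white noise) cannot dominate fitness.
    numerator = min(alpha, cv) ** a

    # max(beta, CP(t1) * CP(t0))^b floors the processing-complexity term,
    # so near-zero estimates (e.g. pure white images) cannot push the
    # denominator towards zero.
    term_b = max(beta, cp_t1 * cp_t0) ** b

    # CP(t1) ~ 0 never occurred with natural imagery; the explicit guard
    # below is only there so this sketch cannot divide by zero.
    if cp_t1 == 0:
        return 0.0

    # max(gamma, (CP(t1) - CP(t0)) / CP(t1))^c floors the relative change
    # in processing complexity between t0 and t1.
    term_c = max(gamma, (cp_t1 - cp_t0) / cp_t1) ** c

    return numerator / (term_b * term_c)
```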
Machado and Cardoso (2002) carried out various experiments using a Genetic Programming engine and formula (11.2) as the fitness function.
The results achieved with this autonomous evolutionary art system are quite striking (Machado and Cardoso 2002). In spite of its shortcomings (e.g. it only deals with greyscale images), it allows the evolution of a wide variety of images with different aesthetic merits. Figure 11.1 shows the fittest images from several independent runs.
11.3.2 Learning AJSs
Based on the results described in the previous section, we developed a learning AJS. The system consists of two modules: a Feature Extractor (FE) and an adaptive classifier.

The FE performs an analysis of the input images by collecting a series of low-level feature values, most of which are related to image complexity. The values that result from the feature extractor are normalised between −1 and 1. These values are the inputs of the classifier, which is made up of a feed-forward artificial neural network with one hidden layer. For training purposes, we resorted to SNNS (Stuttgart Neural Network Simulator).
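As a rough sketch of this two-module pipeline, the code below normalises a vector of hypothetical feature values to [−1, 1] and passes it through a one-hidden-layer feed-forward network. The feature set, layer sizes and weights are illustrative assumptions; in the actual system the network was trained with SNNS, which is not reproduced here.

```python
import numpy as np

def normalise(features, lo, hi):
    """Rescale raw feature values to [-1, 1], the range produced by the FE.
    lo and hi are per-feature bounds, assumed to be known."""
    features = np.asarray(features, dtype=float)
    return 2.0 * (features - lo) / (hi - lo) - 1.0

class FeedForwardClassifier:
    """Minimal one-hidden-layer feed-forward ANN (forward pass only).
    The weights shown here are random stand-ins for trained parameters."""

    def __init__(self, n_inputs, n_hidden, n_outputs, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(scale=0.1, size=(n_hidden, n_outputs))
        self.b2 = np.zeros(n_outputs)

    def forward(self, x):
        hidden = np.tanh(x @ self.w1 + self.b1)     # hidden layer
        return np.tanh(hidden @ self.w2 + self.b2)  # output layer

# Example: 12 hypothetical complexity-related features, one output score.
raw_features = np.zeros(12)                   # stand-in for FE measurements
x = normalise(raw_features, lo=-5.0, hi=5.0)  # map into [-1, 1]
score = FeedForwardClassifier(12, 8, 1).forward(x)
```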