Table 11.2 Success rate in validation set (the results are averages of 50 independent runs)

Picasso vs. Van Gogh    92.1 %
Picasso vs. Monet       91.5 %
Van Gogh vs. Monet      89.9 %
Table 11.3 Confusion matrix (the results are averages of 50 independent runs; rows: actual author, columns: predicted author)

            Picasso    Monet      Van Gogh
Picasso     87.59 %     2.59 %     9.81 %
Monet        4.76 %    70.24 %    25.00 %
Van Gogh     4.11 %     6.60 %    89.29 %
of the painting. Since we avoided any manual pre-processing of the images, the
frames were not removed. The images were gathered from different sources, and the
dataset will be made available for research purposes, thus enabling other researchers
to compare their results with ours.
The experimental results are averages of 50 independent runs using different
training and validation sets. In each run, 90 % of the images were randomly selected
to train the artificial neural network; the remaining images were used as the validation
set to assess its performance. Training of the artificial neural network was stopped
after a predetermined number of learning steps. All results reported refer to
performance on the validation set.
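As an illustration of this protocol, the following Python sketch repeats random 90/10 splits, trains a small feed-forward network for a fixed number of iterations, and averages the validation accuracy. The feature representation, network size, and iteration count are placeholders, not the configuration used in the experiments.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def average_validation_accuracy(features, labels, n_runs=50, val_fraction=0.1):
    """Average validation accuracy over n_runs independent random splits."""
    accuracies = []
    for run in range(n_runs):
        # 90 % of the images train the network; the rest form the validation set.
        X_train, X_val, y_train, y_val = train_test_split(
            features, labels, test_size=val_fraction, random_state=run)
        # Training stops after a predetermined number of learning steps
        # (max_iter is a placeholder, not the value used in the chapter).
        net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=run)
        net.fit(X_train, y_train)
        accuracies.append(net.score(X_val, y_val))
    return float(np.mean(accuracies))
```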
Table 11.2 presents the results obtained in a two-class author classification task.
As can be observed, discriminating between the works of Van Gogh and Monet was
the biggest challenge. Conversely, Pablo Picasso's works were easily distinguished
from those of Monet and Van Gogh.
In Table 11.3 we present the confusion matrix for this experiment, which reinforces
the previous findings. There is a significant drop in performance in the correct
identification of Claude Monet's works. The smaller number of paintings by this
artist may explain the difficulty in correctly learning how to recognise his style.
A more detailed analysis of this experiment is currently in preparation.
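For reference, a confusion matrix such as the one in Table 11.3 can be accumulated as in the sketch below, which row-normalises the matrix of each run and averages over runs. This is one plausible reading of the reported averages, not necessarily the exact procedure followed; the function name and arguments are illustrative.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def averaged_confusion_matrix(y_true_runs, y_pred_runs, class_names):
    """Row-normalised confusion matrices averaged over runs, as percentages."""
    per_run = []
    for y_true, y_pred in zip(y_true_runs, y_pred_runs):
        cm = confusion_matrix(y_true, y_pred, labels=class_names).astype(float)
        cm /= cm.sum(axis=1, keepdims=True)  # each row sums to 1 (per-author rates)
        per_run.append(cm)
    return 100.0 * np.mean(per_run, axis=0)  # rows: actual author, columns: prediction
```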
Overall, the results indicate that the considered set of metrics and classifier system
are able to distinguish between the signatures (in the sense used by Cope 1992)
of different authors. It cannot be stated that the AJS bases its judgement, at least
exclusively, on aesthetic principles. It can, however, be stated that it is able to
perform stylistic classification in the considered experimental settings. Even if we
could demonstrate that the system was following aesthetic principles, this would not
ensure that those principles are sufficient to perform aesthetic value assessments.
Conversely, if the system performed poorly when distinguishing between works with
different aesthetic properties, this would cast serious doubt on its ability to perform
aesthetic evaluation. Thus, good performance on an author identification task does
not ensure the ability to perform aesthetic evaluation, but it is arguably a prerequisite.