In Shannon's information theory, the statistical measure of information in a message is defined as

H = -\sum_{i=1}^{r} p_i \log p_i .   (3.3)
The assumption for the derivation of this formula in Shannon and Weaver (1963)
is that all the p i are probabilities. They determine the statistical properties of a
source sending out messages that are constructed according to the probabilities of
the source.
This explanation may not mean much to the reader. For one, information theory
is no longer popular outside of certain technical contexts. Moreover, it was overestimated in the days when the world was hoping for a great unifying theory. The
measure H gives an indication of what we learn when one specific event (out of a
set of possible events) has occurred, and we know what the other possible events
could be.
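To make the measure concrete, the following minimal Python sketch computes H for a given distribution. The function name shannon_entropy and the choice of the base-2 logarithm are illustrative assumptions, not part of the original text; the sketch presumes the input is a list of probabilities summing to one.

```python
import math

def shannon_entropy(probabilities):
    """Shannon measure H = -sum(p_i * log2(p_i)), in bits.

    Assumes the inputs are genuine probabilities: non-negative values
    that sum to 1. Terms with p_i == 0 contribute nothing to the sum.
    """
    return -sum(p * math.log2(p) for p in probabilities if p > 0)
```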
Take as an example the throwing of dice in a typical board game. As we know,
there are six possible events, which we can identify by the numbers 1, 2, 3, 4, 5,
and 6. Each one of the six events occurs with the same probability, i.e. 1/6. Using
Shannon's formula for the information content of the source “dice”, we get
H = -\log(1/6) = -(\log 1 - \log 6) = \log 6 \approx 2.6   (3.4)
(the logarithm must be taken to base 2). The result is measured in bits and
must be interpreted thus: when one of the possible results of the throw has appeared,
we gain between two and three bits of information. This, in turn, says that between
two and three decisions of a “yes or no” nature have been taken. The Shannon
measure of information is a measure of the uncertainty that has disappeared when
and because the event has occurred.
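Using the sketch above, the dice example can be checked directly; the exact value is \log_2 6 \approx 2.585 bits, which the text rounds to 2.6.

```python
# Six equally likely outcomes, each with probability 1/6.
dice = [1 / 6] * 6
print(shannon_entropy(dice))  # 2.584962500721156 -> between two and three bits
```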
Information aesthetics, founded by Max Bense and Abraham A. Moles (Bense 1965, Moles 1968) and further developed in more detail by others (Gunzenhäuser 1962, Frank 1964), boldly and erroneously ignored the difference between frequency and probability. To repeat, probabilities of a sign schema characterise an
ideal source. Frequencies, however, are the results of empirical measurements of several, but only finitely many, messages or events (images in our case). As such, frequencies are only estimates of probabilities.
Information aesthetics wanted to get away from subjective value judgement. Information aesthetic criteria were to be objective. Aspects of the observer were excluded, at least in Max Bense's approach. Empirical studies from the 1960s and later
were, however, not about aesthetic sources but about individual pieces. In doing so, the difference between theory and practice, between the infinite class and the individual instance, between probability and frequency, had to be neglected by replacing theoretical probability
with observed frequency, thus p_i = f_i. This opened up the possibility of measuring the
object without any observer being present. However, the step also gave up aesthetics
as the theory of sensual perception.
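The substitution p_i = f_i can be illustrated by a short sketch that reuses the shannon_entropy function from above; the list of colour names stands in for the signs of one individual picture and is invented purely for illustration.

```python
from collections import Counter

def empirical_entropy(signs):
    """Entropy estimate obtained by replacing the unknown probabilities
    p_i of an ideal source with the relative frequencies f_i observed
    in one finite sample (one individual piece)."""
    counts = Counter(signs)
    total = len(signs)
    return shannon_entropy(n / total for n in counts.values())

# Hypothetical "signs" of a single picture, e.g. its colours.
pixels = ["red", "red", "blue", "green", "red", "blue"]
print(empirical_entropy(pixels))  # an estimate for this piece, not the source's H
```

Nothing in this computation involves an observer or an underlying source; it describes only the one object at hand, which is precisely the shift described above.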
Now, the program Generative Aesthetics I accepted as input a set of constraints of the following kind. For each sign (think of colour), a measure of surprise and a measure of conspicuity (defined by Frank in 1964) could be constrained to an interval of