FIGURE 7.6: Example of normalized RGB probe function: (a) Original image (Martin
et al., 2001), (b) normalized R over subimages of size 5×5, and (c) normalized R over
subimages of size 10×10.
Shannon introduced entropy (also called information content) as a measure of the amount
of information gained by receiving a message from a finite codebook of messages (Pal and
Pal, 1991). The idea was that the gain of information from a single message is proportional
to the probability of receiving the message. Thus, receiving a message that is highly unlikely
gives more information about the system than a message with a high probability of
transmission. Formally, let the probability of receiving message i of n messages be p_i;
then the information gain of a message can be written as
∆I = log(1/p_i) = −log(p_i),    (7.6)
and the entropy of the system is the expected value of the gain and is calculated as
H = −∑_{i=1}^{n} p_i log(p_i).
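As a quick illustration of Eq. 7.6 and the entropy sum, the following sketch (not from the text; base-2 logarithms are assumed, so values are in bits, and the function names are chosen here for illustration) computes the information gain of single messages and the entropy of a small codebook.

```python
import math

def information_gain(p):
    """Information gained from a message received with probability p (Eq. 7.6)."""
    return -math.log2(p)

def shannon_entropy(probabilities):
    """Expected information gain H = -sum_i p_i log2(p_i) over a finite codebook."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A rare message carries more information than a common one.
print(information_gain(0.01))   # ~6.64 bits
print(information_gain(0.5))    # 1 bit

# A uniform codebook of four messages attains the maximum entropy log2(4) = 2 bits;
# a highly skewed codebook has entropy near 0.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # 2.0
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))   # ~0.24
```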
This concept can easily be applied to the pixels of a subimage. First, the subimage is
converted to greyscale using Eq. 7.5. Then, the probability of the occurrence of grey level
i can be defined as p_i = h_i/T_s, where h_i is the number of pixels in the subimage that
take grey level i, and T_s is the total number of pixels in the subimage. Information
content provides a measure of the variability of the pixel intensity levels within the image
and takes on values in the interval [0, log L], where L is the number of grey levels in the
image. A value of 0 is produced when an image contains all the same intensity levels and
the highest value occurs when each intensity level occurs with equal frequency (Seemann,
2002). An example of this probe function is given in Fig. 7.7. Note that these images were
formed by multiplying the value of Shannon's entropy by 32: since L = 256, the maximum
entropy is log₂ 256 = 8, and scaling by 32 maps the result onto the full greyscale display range.
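A minimal sketch of this probe function, assuming NumPy, a greyscale subimage with integer values in [0, 255], and base-2 logarithms (consistent with the maximum value of 8 quoted above); the helper name entropy_probe is chosen here for illustration:

```python
import numpy as np

def entropy_probe(subimage, levels=256):
    """Shannon entropy of a greyscale subimage: p_i = h_i / T_s, H = -sum p_i log2 p_i."""
    hist, _ = np.histogram(subimage, bins=levels, range=(0, levels))
    p = hist / hist.sum()           # p_i = h_i / T_s
    p = p[p > 0]                    # skip empty grey levels (0 log 0 := 0)
    return -np.sum(p * np.log2(p))  # value in [0, log2(levels)]

# Example: a 10x10 subimage of uniformly random grey levels has high entropy
# (at most log2(100) ≈ 6.6 here, since only 100 pixels are available), while a
# constant subimage has entropy 0.
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(10, 10))
flat = np.full((10, 10), 128)
print(entropy_probe(noisy))
print(entropy_probe(flat))          # 0.0
# For display, as in Fig. 7.7, the value can be scaled by 32 to span the grey range.
print(32 * entropy_probe(noisy))
```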
7.4.4 Pal's entropy
Work in (Pal and Pal, 1991, 1992) shows that Shannon's definition of entropy has some
limitations. Shannon's definition of entropy suffers from the following problems: it is undefined
when p_i = 0; in practice, the information gain tends to lie at the limits of the interval
[0, 1]; and, statistically speaking, a better measure of ignorance is 1 − p_i rather than 1/p_i (Pal
and Pal, 1991). As a result, a new definition of entropy was proposed with the following
desirable properties:
P1: ∆I(p_i) is defined at all points in [0, 1].