or $f(i,j) = i$ and $f(i-1,j) = k$; $\delta = 0$, otherwise.
b. Conditional Entropy of a Partitioned Image
The entropy of an n-state system as defined by Shannon [151] is
$$H = -\sum_{i=1}^{n} p_i \ln p_i, \qquad (2.5)$$
where $\sum_{i=1}^{n} p_i = 1$, $0 \le p_i \le 1$, and $p_i$ is the probability of the $i$-th state of the system. Such a measure is claimed to give information about the actual probability structure of the system.
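As an illustration, the following is a minimal sketch of (2.5) applied to an image, assuming an 8-bit grayscale image stored as a NumPy array; the function name `shannon_entropy` and the choice of treating gray levels as the system's states are ours, not from the source.

```python
import numpy as np

def shannon_entropy(image: np.ndarray) -> float:
    """Shannon entropy (2.5), treating each gray level as a state.

    p_i is the relative frequency of gray level i in the histogram.
    Zero-probability levels are dropped: ln p_i is undefined there,
    and the limit p ln p -> 0 makes their contribution zero.
    """
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))
```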
Some drawbacks of (2.5) were pointed out by Pal and Pal [131], and the following expression for entropy was suggested:
$$H = \sum_{i=1}^{n} p_i\, e^{1-p_i}, \qquad (2.6)$$
where $\sum_{i=1}^{n} p_i = 1$ and $0 \le p_i \le 1$. The term $-\ln p_i$, i.e., $\ln(1/p_i)$, in (2.5), or $e^{1-p_i}$ in (2.6), is called the gain in information from the occurrence of the $i$-th event. Thus, one can write
$$H = \sum_{i=1}^{n} p_i\, I(p_i), \qquad (2.7)$$
where $I(p_i) = \ln(1/p_i)$ or $e^{1-p_i}$, depending on the definition used.
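A small sketch of the common form (2.7), with the gain function $I(p_i)$ selectable so that both (2.5) and (2.6) are covered; the function name and the `gain` switch are hypothetical choices for illustration.

```python
import numpy as np

def generalized_entropy(p: np.ndarray, gain: str = "log") -> float:
    """H = sum_i p_i * I(p_i), as in (2.7).

    gain="log": I(p) = ln(1/p), Shannon's (2.5).
    gain="exp": I(p) = e^(1 - p), Pal and Pal's (2.6).
    """
    p = np.asarray(p, dtype=float)
    if not (np.isclose(p.sum(), 1.0) and np.all((p >= 0) & (p <= 1))):
        raise ValueError("p must be a probability distribution")
    if gain == "log":
        nz = p[p > 0]  # p_i ln(1/p_i) -> 0 as p_i -> 0
        return float(np.sum(nz * np.log(1.0 / nz)))
    if gain == "exp":
        return float(np.sum(p * np.exp(1.0 - p)))
    raise ValueError("gain must be 'log' or 'exp'")
```

For a uniform two-state system, the logarithmic gain gives $\ln 2 \approx 0.693$, while the exponential gain gives $e^{0.5} \approx 1.649$.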
Considering two experiments $A(a_1, a_2, \cdots, a_m)$ and $B(b_1, b_2, \cdots, b_n)$ with respectively $m$ and $n$ possible outcomes, the conditional entropy of $A$ given that $b_l$ has occurred in $B$ is
$$H(A|b_l) = \sum_{k=1}^{m} p(a_k|b_l)\, I\big(p(a_k|b_l)\big), \qquad (2.8)$$
where $p(a_k|b_l)$ is the conditional probability of occurrence of $a_k$ given that $b_l$ has occurred. We can write the entropy of $A$ conditioned by $B$ as
$$
\begin{aligned}
H(A|B) &= \sum_{l=1}^{n} p(b_l)\, H(A|b_l) \\
       &= \sum_{l=1}^{n} \sum_{k=1}^{m} p(b_l)\, p(a_k|b_l)\, I\big(p(a_k|b_l)\big) \qquad (2.9) \\
       &= \sum_{l=1}^{n} \sum_{k=1}^{m} p(a_k, b_l)\, I\big(p(a_k|b_l)\big),
\end{aligned}
$$
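A sketch of (2.9) computed from a joint probability table, using the identity $p(a_k, b_l) = p(b_l)\, p(a_k|b_l)$ from the last line above; the array layout (rows for outcomes of $A$, columns for outcomes of $B$) and the function name are assumptions for illustration.

```python
import numpy as np

def conditional_entropy(p_joint: np.ndarray, gain: str = "log") -> float:
    """H(A|B) as in (2.9), from p_joint[k, l] = p(a_k, b_l).

    Each term is p(a_k, b_l) * I(p(a_k | b_l)), where the conditional
    probabilities are recovered from the joint table's column marginals.
    """
    p_joint = np.asarray(p_joint, dtype=float)
    p_b = p_joint.sum(axis=0)                       # marginal p(b_l)
    p_cond = p_joint / np.where(p_b > 0, p_b, 1.0)  # p(a_k | b_l)
    if gain == "log":
        mask = p_joint > 0  # zero-probability cells contribute nothing
        return float(np.sum(p_joint[mask] * np.log(1.0 / p_cond[mask])))
    # exponential gain e^(1 - p), as in Pal and Pal's definition
    return float(np.sum(p_joint * np.exp(1.0 - p_cond)))
```

As a quick check, for two independent uniform binary experiments, `np.full((2, 2), 0.25)` gives $H(A|B) = \ln 2$ under the logarithmic gain, matching $H(A)$ as expected when $B$ carries no information about $A$.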