FIGURE 8.5  The binary entropy function Hb(p), plotted against p.
Find a lower bound for the average mutual information:
I(X; Y) = H(X) − H(X | Y)                                   (59)
        = H(X) − H(X ⊕ Y | Y)                               (60)
        ≥ H(X) − H(X ⊕ Y)       from Equation (12)          (61)
In the second step, we have used the fact that if we know Y, then knowing X we can obtain X ⊕ Y, and vice versa, as X ⊕ Y ⊕ Y = X.
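Both manipulations can be checked numerically: H(X | Y) = H(X ⊕ Y | Y) because the map (X, Y) → (X ⊕ Y, Y) is invertible, and dropping the conditioning can only increase the entropy. The following is a small Python sketch of that check, using an arbitrary illustrative joint distribution that is not taken from the text:

```python
import math

# Illustrative joint distribution P(X, Y) over {0, 1} x {0, 1} (assumed values).
P = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

def H(pmf):
    """Entropy in bits of a pmf given as a dict of probabilities."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

# H(X | Y) = H(X, Y) - H(Y)
P_Y = {y: sum(p for (x, y2), p in P.items() if y2 == y) for y in (0, 1)}
H_X_given_Y = H(P) - H(P_Y)

# H(X xor Y | Y) = H(X xor Y, Y) - H(Y); (x, y) -> (x ^ y, y) is a bijection,
# so the joint entropy is unchanged and the two conditional entropies agree.
P_ZY = {}
for (x, y), p in P.items():
    P_ZY[(x ^ y, y)] = P_ZY.get((x ^ y, y), 0.0) + p
H_Z_given_Y = H(P_ZY) - H(P_Y)

# Marginal of Z = X xor Y; conditioning can only reduce entropy.
P_Z = {z: sum(p for (z2, y), p in P_ZY.items() if z2 == z) for z in (0, 1)}

print(H_X_given_Y, H_Z_given_Y)   # equal, as used in step (60)
print(H_Z_given_Y <= H(P_Z))      # True: H(X xor Y | Y) <= H(X xor Y), as in (61)
```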
Let us look at the terms on the right-hand side of (61):

H(X) = −p log2 p − (1 − p) log2 (1 − p) = Hb(p)             (62)
where Hb(p) is called the binary entropy function and is plotted in Figure 8.5. Note that Hb(p) = Hb(1 − p).
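As a small illustrative sketch (added here, not code from the text), Hb(p) and its symmetry can be computed directly:

```python
import math

def binary_entropy(p: float) -> float:
    """Hb(p) = -p log2 p - (1 - p) log2 (1 - p), in bits; Hb(0) = Hb(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.1, 0.25, 0.5):
    # Symmetry Hb(p) = Hb(1 - p); the maximum of 1 bit is reached at p = 0.5.
    print(p, binary_entropy(p), binary_entropy(1 - p))
```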
Given that H(X) is completely specified by the source probabilities, our task now is to find the conditional probabilities {P(xi | yj)} such that H(X ⊕ Y) is maximized while the average distortion E[d(xi, yj)] ≤ D.
H(X ⊕ Y) is simply the binary entropy function Hb(P(X ⊕ Y = 1)), where

P(X ⊕ Y = 1) = P(X = 0, Y = 1) + P(X = 1, Y = 0)            (63)
Therefore, to maximize H(X ⊕ Y), we would want P(X ⊕ Y = 1) to be as close as possible to one-half. However, the selection of P(X ⊕ Y) also has to satisfy the distortion constraint.
The distortion is given by

E[d(xi, yj)] = 0 × P(X = 0, Y = 0) + 1 × P(X = 0, Y = 1) + 1 × P(X = 1, Y = 0) + 0 × P(X = 1, Y = 1)
             = P(X = 0, Y = 1) + P(X = 1, Y = 0)
             = P(Y = 1 | X = 0) p + P(Y = 0 | X = 1)(1 − p)  (64)
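Comparing (63) and (64), P(X ⊕ Y = 1) is exactly the average distortion, which is what couples the maximization of H(X ⊕ Y) to the constraint. As a minimal sketch of that trade-off (an added illustration; the values p = 0.3 and D = 0.1 and the grid search are assumptions, not from the text), the following searches over the two conditional probabilities appearing in (64), keeps only the choices that meet the distortion budget, and reports the largest attainable Hb(P(X ⊕ Y = 1)) together with the corresponding lower bound H(X) − H(X ⊕ Y) from (61):

```python
import math

def binary_entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Assumed example values (not from the text): source probability and distortion budget.
p, D = 0.3, 0.1

best = None
steps = 200
for i in range(steps + 1):
    for j in range(steps + 1):
        a = i / steps                      # a = P(Y = 1 | X = 0)
        b = j / steps                      # b = P(Y = 0 | X = 1)
        distortion = a * p + b * (1 - p)   # Equation (64)
        if distortion > D:
            continue                       # violates E[d(xi, yj)] <= D
        p_xor = a * p + b * (1 - p)        # Equation (63): the same expression
        h_xor = binary_entropy(p_xor)
        if best is None or h_xor > best[0]:
            best = (h_xor, a, b)

h_xor, a, b = best
print("max Hb(P(X xor Y = 1)) subject to E[d] <= D:", h_xor)
print("lower bound H(X) - H(X xor Y):", binary_entropy(p) - h_xor)
```

With these example values the search settles on P(X ⊕ Y = 1) = D, so the reported bound comes out as Hb(p) − Hb(D).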