Gaussian, and the constraint can be satisfied if $E[(X - Y)^2] = D$. Therefore, $h(X - Y)$ is the differential entropy of a Gaussian random variable with variance $D$, and the lower bound becomes

\[
I(X; Y) \ge \frac{1}{2}\log\left(2\pi e \sigma^2\right) - \frac{1}{2}\log\left(2\pi e D\right) \qquad (74)
\]
\[
= \frac{1}{2}\log\frac{\sigma^2}{D} \qquad (75)
\]
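As a quick numerical illustration (not worked in the text), taking logarithms to base 2 so that rate is measured in bits, a source variance of $\sigma^2 = 1$ and a distortion of $D = 0.25$ give a lower bound of $\frac{1}{2}\log_2 4 = 1$ bit per sample.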
This average mutual information can be achieved if $Y$ is zero mean Gaussian with variance $\sigma^2 - D$, and
\[
f_{X|Y}(x \mid y) = \frac{1}{\sqrt{2\pi D}} \exp\left(-\frac{(x - y)^2}{2D}\right) \qquad (76)
\]
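The following is a minimal simulation sketch, not part of the text: it assumes the forward construction $Y \sim N(0, \sigma^2 - D)$ and $X = Y + Z$ with $Z \sim N(0, D)$ independent of $Y$, and checks numerically that the reproduction variance and the distortion come out as claimed. The variable names and the values of $\sigma^2$, $D$, and the sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, D, n = 1.0, 0.25, 1_000_000  # illustrative values, not from the text

# Y ~ N(0, sigma^2 - D); X given Y = y is N(y, D), i.e. X = Y + Z with Z ~ N(0, D)
y = rng.normal(0.0, np.sqrt(sigma2 - D), size=n)
x = y + rng.normal(0.0, np.sqrt(D), size=n)

print("Var(X)       ~", np.var(x))               # close to sigma^2 = 1.0
print("E[(X - Y)^2] ~", np.mean((x - y) ** 2))   # close to D = 0.25, the target distortion
```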
For $D > \sigma^2$, if we set $Y = 0$, then

\[
I(X; Y) = 0 \qquad (77)
\]

and

\[
E\left[(X - Y)^2\right] = \sigma^2 < D \qquad (78)
\]
Therefore, the rate distortion function for the Gaussian source can be written as
\[
R(D) =
\begin{cases}
\dfrac{1}{2}\log\dfrac{\sigma^2}{D} & \text{for } D \le \sigma^2 \\[1ex]
0 & \text{for } D > \sigma^2
\end{cases}
\qquad (79)
\]
We plot the rate distortion function for $\sigma^2 = 1$ in Figure 8.6.
Like the differential entropy for the Gaussian source, the rate distortion function for the
Gaussian source also has the distinction of being larger than the rate distortion function for any
other source with a continuous distribution and the same variance. This is especially valuable
because for many sources it can be very difficult to calculate the rate distortion function. In
these situations, it is helpful to have an upper bound for the rate distortion function. It would be
very nice if we also had a lower bound for the rate distortion function of a continuous random
variable. Shannon described such a bound in his 1948 paper [3], and it is appropriately called the Shannon lower bound. We will simply state the bound here without derivation (for more information, see [112]).
The Shannon lower bound for a random variable $X$ and the magnitude error criterion

\[
d(x, y) = |x - y| \qquad (80)
\]
is given by
\[
R_{SLB}(D) = h(X) - \log(2eD) \qquad (81)
\]
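For example (this particular case is not worked out here), a Laplacian source with density $f_X(x) = \frac{1}{2b}e^{-|x|/b}$ has differential entropy $h(X) = \log(2eb)$, so under the magnitude error criterion the bound becomes $R_{SLB}(D) = \log(2eb) - \log(2eD) = \log(b/D)$.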
If we use the squared error criterion, the Shannon lower bound is given by
\[
R_{SLB}(D) = h(X) - \frac{1}{2}\log\left(2\pi e D\right). \qquad (82)
\]
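As a check (again, not stated explicitly here), substituting the differential entropy of a Gaussian random variable, $h(X) = \frac{1}{2}\log(2\pi e \sigma^2)$, into this expression gives $R_{SLB}(D) = \frac{1}{2}\log(\sigma^2/D)$, which matches Equation (79) for $D \le \sigma^2$; the Shannon lower bound is therefore tight for the Gaussian source under the squared error criterion.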
 