Theorem 3.5 (Gibbs inequality for random variables): Let $X$ and $Y$ be two $n$-dimensional random vectors with densities $p_X$ and $p_Y$. If $p_X \log p_X$ and $p_X \log p_Y$ are integrable, then
$$H(X) \le -\int_{\mathbb{R}^n} p_X \log p_Y,$$
and equality holds if and only if $p_X = p_Y$.
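As a quick numerical sanity check of the Gibbs inequality (an illustration added here, not part of the original text), the following sketch approximates $H(X)$ and $-\int p_X \log p_Y$ on a grid for two one-dimensional Gaussian densities; the grid bounds, step size, and distribution parameters are arbitrary choices.

```python
import numpy as np

# Sketch: numerically compare H(X) = -∫ p_X log p_X with the cross term
# -∫ p_X log p_Y for two 1-D Gaussian densities (entropies in nats).
# Grid limits, step size, and the chosen parameters are illustrative.
x = np.linspace(-12.0, 12.0, 240001)
dx = x[1] - x[0]

def gauss_pdf(t, mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

p_x = gauss_pdf(x, 0.0, 1.0)   # density of X
p_y = gauss_pdf(x, 1.0, 2.0)   # density of Y (different from p_X)

H_x      = -np.sum(p_x * np.log(p_x)) * dx   # H(X)
cross_xy = -np.sum(p_x * np.log(p_y)) * dx   # -∫ p_X log p_Y

print(f"H(X) = {H_x:.4f}  <=  -int p_X log p_Y = {cross_xy:.4f}")
assert H_x <= cross_xy + 1e-9   # Gibbs inequality; equality only for p_X = p_Y
```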
The entropy measures the “disorder” of a random variable in the sense that it is maximal for maximal disorder:
Lemma 3.8: Let $A \subset \mathbb{R}^n$ be measurable with finite Lebesgue measure $\lambda(A) < \infty$. Then the maximum of the entropies of all $n$-dimensional random vectors $X$ with density functions having support in $A$ and for which $H(X)$ exists is attained exactly at the random vector $X^*$ that is uniformly distributed in $A$.
So the random vector $X^*$ with density $p^* := \lambda(A)^{-1}\chi_A$ satisfies: all $X$ as above with density $p_X \neq p^*$ satisfy
$$H(X) < H(X^*) = \log \lambda(A).$$
Proof: Let $X$ be as above with density $p_X$. The Gibbs inequality for $X$ and $X^*$ then shows that
$$H(X) \le -\int_{\mathbb{R}^n} p_X \log p^* = -\log\frac{1}{\lambda(A)} \int_A p_X = \log \lambda(A) = H(X^*),$$
and equality holds if and only if $p_X = p^*$.
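The following sketch (an added illustration, not from the original text) checks Lemma 3.8 numerically for $A = [0, 2]$, comparing the grid-approximated entropy of the uniform density $p^*$ with that of a triangular density supported in the same set; the interval and the competing density are arbitrary choices.

```python
import numpy as np

# Sketch: on A = [0, 2] (so λ(A) = 2), the uniform density should attain the
# maximal entropy log λ(A) = log 2, while any other density on A stays below it.
x = np.linspace(0.0, 2.0, 200001)[1:-1]    # open grid, avoids log(0) at the edges
dx = 2.0 / 200000

p_uniform = np.full_like(x, 0.5)               # p*(x) = 1/λ(A) = 1/2
p_tri     = np.where(x <= 1.0, x, 2.0 - x)     # triangular density on [0, 2]

def entropy(p):
    return -np.sum(p * np.log(p)) * dx         # grid approximation of H

print(f"H(uniform)    = {entropy(p_uniform):.4f}  (log 2 = {np.log(2.0):.4f})")
print(f"H(triangular) = {entropy(p_tri):.4f}  (analytically 1/2)")
```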
For a given random vector $X$ in $L^2$, denote by $X_{\mathrm{gauss}}$ the Gaussian random vector with mean $E(X)$ and covariance $\mathrm{Cov}(X)$. Lemma 3.9 is the generalization of the above lemma to the non-finite case. It shows that the Gaussian has maximal entropy among all random vectors with the same first- and second-order moments.
Lemma 3.9: Given an $L^2$-random vector $X$, the following inequality holds:
$$H(X_{\mathrm{gauss}}) \ge H(X).$$
Another information-theoretic function measuring the distance from a Gaussian can be defined using this lemma.
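As an added illustration of Lemma 3.9 (not from the original text), the sketch below compares the entropy of a uniform random variable on $[0, 1]$ with the closed-form entropy of the Gaussian having the same mean and variance; the nonnegative gap $H(X_{\mathrm{gauss}}) - H(X)$ is an example of such a distance from a Gaussian (commonly called negentropy).

```python
import numpy as np

# Sketch: X uniform on [0, 1], so E(X) = 1/2, Var(X) = 1/12 and H(X) = log 1 = 0.
# X_gauss is the Gaussian with the same mean and variance; its differential
# entropy has the closed form 0.5 * log(2 * pi * e * sigma^2)  (in nats).
var_x   = 1.0 / 12.0
H_x     = 0.0                                         # entropy of the uniform on [0, 1]
H_gauss = 0.5 * np.log(2.0 * np.pi * np.e * var_x)    # ≈ 0.1765

print(f"H(X)       = {H_x:.4f}")
print(f"H(X_gauss) = {H_gauss:.4f}")
print(f"difference = {H_gauss - H_x:.4f}  (>= 0 by Lemma 3.9)")
```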