centered, we get
\[
\operatorname{Cov}(WX) = E(WXX^\top W^\top) = WCW^\top = D^{-1/2} V^\top C V D^{-1/2} = D^{-1/2} D D^{-1/2} = I.
\]
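As a minimal numerical sketch of this computation (assuming NumPy; the mixing matrix, dimensions, seed, and sample size are illustrative choices, not from the text), the whitening matrix $W = D^{-1/2}V^\top$ can be built from an eigendecomposition $C = VDV^\top$ of the sample covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative correlated data: mix 3 independent signals (hypothetical setup).
A = rng.normal(size=(3, 3))                    # arbitrary mixing matrix
X = A @ rng.normal(size=(3, 10_000))           # shape (n, N): n variables, N samples
X -= X.mean(axis=1, keepdims=True)             # center, as assumed in the text

C = np.cov(X)                                  # sample covariance C
d, V = np.linalg.eigh(C)                       # eigendecomposition C = V diag(d) V^T
W = np.diag(d ** -0.5) @ V.T                   # whitening matrix W = D^{-1/2} V^T

# Cov(WX) = W C W^T should be the identity, up to floating-point error.
print(np.allclose(W @ C @ W.T, np.eye(3)))     # True
```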
If $V$ is another whitening transformation of $X$, then
\[
I = \operatorname{Cov}(VX) = \operatorname{Cov}(VW^{-1}WX) = (VW^{-1})\operatorname{Cov}(WX)(VW^{-1})^\top = (VW^{-1})(VW^{-1})^\top,
\]
so $VW^{-1} \in O(n)$.
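This orthogonal indeterminacy is easy to check numerically: composing any whitening matrix with an orthogonal $Q$ yields another whitening transformation. A short self-contained sketch (again with NumPy and illustrative data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative covariance and one whitening matrix W, as in the sketch above.
C = np.cov(rng.normal(size=(3, 3)) @ rng.normal(size=(3, 10_000)))
d, V = np.linalg.eigh(C)
W = np.diag(d ** -0.5) @ V.T

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal Q in O(3)
W2 = Q @ W                                     # compose with an orthogonal matrix
print(np.allclose(W2 @ C @ W2.T, np.eye(3)))   # True: W2 also whitens
```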
So decorrelation clearly gives insight into the structure of a random
vector but does not yield a unique transformation. We will therefore
turn to a more stringent constraint.
Definition 3.8: A finite sequence $(X_i)_{i=1,\dots,n}$ of random functions with values in the probability spaces $\Omega_i$ with $\sigma$-algebras $\mathcal{A}_i$ is called independent if
\[
P\{X_1 \in A_1, \dots, X_n \in A_n\} := P\Bigl(\bigcap_{i=1}^{n} X_i^{-1}(A_i)\Bigr) = \prod_{i=1}^{n} P\{X_i \in A_i\}
\]
for all $A_i \in \mathcal{A}_i$, $i = 1,\dots,n$. A random vector $X$ is called independent if the family $(X_i)_i := (\pi_i \circ X)_i$ of its components is independent.
Here $\pi_i$ denotes the projection onto the $i$-th coordinate. If $X$ is a random vector with density $p_X$, then it is independent if and only if the density factorizes into one-dimensional functions, that is,
\[
p_X(x_1, \dots, x_n) = p_{X_1}(x_1) \cdots p_{X_n}(x_n)
\]
for all $(x_1, \dots, x_n) \in \mathbb{R}^n$. Here, the $p_{X_i}$ are also often called the marginal densities of $X$.
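To illustrate the factorization, here is a small check (a sketch assuming SciPy; the diagonal-covariance Gaussian and the test point are arbitrary choices): for a random vector with independent Gaussian components, the joint density at a point equals the product of the marginal densities.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Independent Gaussian components: diagonal covariance (illustrative parameters).
p_joint = multivariate_normal(mean=[0.0, 0.0], cov=np.diag([1.0, 4.0]))

x = np.array([0.3, -1.2])                      # arbitrary test point in R^2
lhs = p_joint.pdf(x)                           # p_X(x1, x2)
rhs = norm(0.0, 1.0).pdf(x[0]) * norm(0.0, 2.0).pdf(x[1])   # p_X1(x1) * p_X2(x2)
print(np.isclose(lhs, rhs))                    # True: the density factorizes
```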
Note that independence is a purely probability-theoretic notion. Examples of independent random vectors will be given later.
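The defining product rule of Definition 3.8 can also be estimated by Monte Carlo for two independent components (a sketch with NumPy; the distributions and the events $A_1$, $A_2$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000                                  # number of Monte Carlo samples

x1 = rng.normal(size=N)                        # X1 ~ N(0, 1), drawn independently
x2 = rng.uniform(-1.0, 1.0, size=N)            # X2 ~ U(-1, 1), drawn independently

in_a1 = (x1 >= 0.0) & (x1 <= 1.0)              # event {X1 in A1}, A1 = [0, 1]
in_a2 = (x2 >= -0.5) & (x2 <= 0.5)             # event {X2 in A2}, A2 = [-0.5, 0.5]

joint = np.mean(in_a1 & in_a2)                 # estimate of P{X1 in A1, X2 in A2}
product = np.mean(in_a1) * np.mean(in_a2)      # estimate of P{X1 in A1} P{X2 in A2}
print(joint, product)                          # agree up to Monte Carlo error
```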
Definition 3.9: Given an $n$-dimensional random vector $X$ and an $m$-dimensional random vector $Y$, both with densities, the joint density $p_{X,Y}$ is the density of the $(n+m)$-dimensional random vector $(X, Y)$. For given $y_0 \in \mathbb{R}^m$