Digital Signal Processing Reference
and setting:

\[
\rho_{i,j} = \frac{E\{Y_i Y_j\}}{\sqrt{E\{Y_i^2\}\, E\{Y_j^2\}}}
\]
We find:

\[
R_Y =
\begin{pmatrix}
\sigma_{Y_0} & 0 & \cdots & 0 \\
0 & \sigma_{Y_1} & & \vdots \\
\vdots & & \ddots & 0 \\
0 & \cdots & 0 & \sigma_{Y_{M-1}}
\end{pmatrix}
\tilde{R}_Y
\begin{pmatrix}
\sigma_{Y_0} & 0 & \cdots & 0 \\
0 & \sigma_{Y_1} & & \vdots \\
\vdots & & \ddots & 0 \\
0 & \cdots & 0 & \sigma_{Y_{M-1}}
\end{pmatrix}
\]
with:

\[
\tilde{R}_Y =
\begin{pmatrix}
1 & \rho_{0,1} & \cdots & \rho_{0,M-1} \\
\rho_{1,0} & 1 & & \vdots \\
\vdots & & \ddots & \rho_{M-2,M-1} \\
\rho_{M-1,0} & \cdots & \rho_{M-1,M-2} & 1
\end{pmatrix}
\]
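This factorization of the correlation matrix into standard deviations and a normalized correlation matrix can be sketched numerically. The matrix values below are illustrative, not taken from the text:

```python
# Hypothetical 2x2 correlation matrix R_Y of a zero-mean vector Y
# (example values chosen for illustration).
R_Y = [[4.0, 1.2],
       [1.2, 1.0]]

M = len(R_Y)
# Standard deviations: sigma_{Y_k} = sqrt(E{Y_k^2}) = sqrt(R_Y[k][k])
sigma = [R_Y[k][k] ** 0.5 for k in range(M)]

# Normalized correlations: rho_{i,j} = E{Y_i Y_j} / (sigma_i * sigma_j)
R_tilde = [[R_Y[i][j] / (sigma[i] * sigma[j]) for j in range(M)]
           for i in range(M)]

# The diagonal of R_tilde is all ones, and Sigma * R_tilde * Sigma
# recovers R_Y entry by entry.
for i in range(M):
    assert abs(R_tilde[i][i] - 1.0) < 1e-12
    for j in range(M):
        assert abs(sigma[i] * R_tilde[i][j] * sigma[j] - R_Y[i][j]) < 1e-12
```

The off-diagonal entries of `R_tilde` are exactly the correlation coefficients ρ_{i,j}; here ρ_{0,1} = 1.2 / (2 × 1) = 0.6.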
The determinant is written as:

\[
\det R_Y = \prod_{k=0}^{M-1} \sigma_{Y_k}^2 \,\det \tilde{R}_Y \qquad [3.12]
\]
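The determinant factorization in [3.12] follows from det(ABC) = det A · det B · det C with the two diagonal factors each contributing the product of the σ's. A quick numeric check on a hypothetical 2×2 case (values chosen for illustration):

```python
def det2(A):
    """Determinant of a 2x2 matrix given as nested lists."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Hypothetical values: sigma_0 = 2, sigma_1 = 1, rho_{0,1} = 0.6
sigma = [2.0, 1.0]
rho = 0.6
R_tilde = [[1.0, rho], [rho, 1.0]]
R_Y = [[sigma[0] ** 2,            sigma[0] * sigma[1] * rho],
       [sigma[0] * sigma[1] * rho, sigma[1] ** 2]]

# Equation [3.12]: det R_Y = (prod_k sigma_k^2) * det R_tilde
prod_sigma2 = sigma[0] ** 2 * sigma[1] ** 2
assert abs(det2(R_Y) - prod_sigma2 * det2(R_tilde)) < 1e-12
```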
As:

\[
\det R_Y = \det R_X \,\det\!\left(T T^t\right) = \det R_X
\]

because the transform is assumed to be orthogonal, we look for the transformation T that minimizes the geometric mean of the powers while keeping the determinant of R_Y constant. Equation [3.12] shows that the optimum transformation maximizes the determinant of the matrix \(\tilde{R}_Y\). If the eigenvalues of \(\tilde{R}_Y\) are \(\lambda_0, \cdots, \lambda_{M-1}\), we know [HAY 91] that these eigenvalues are real and non-negative, since \(\tilde{R}_Y\) was defined to be non-negative, and that their sum is equal to M: the trace of a matrix is invariant under orthogonal transformations, and the trace of \(\tilde{R}_Y\), whose diagonal entries are all 1, is M. We have:
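The determinant invariance under an orthogonal T can be verified with a rotation matrix, for which T Tᵗ = I. The input matrix and rotation angle below are illustrative:

```python
import math

def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Hypothetical input correlation matrix R_X and an orthogonal
# (rotation) transform T with angle 30 degrees.
R_X = [[4.0, 1.2],
       [1.2, 1.0]]
t = math.radians(30)
T = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]
T_t = [[T[j][i] for j in range(2)] for i in range(2)]  # transpose

# R_Y = T R_X T^t: the determinant is unchanged since det(T T^t) = 1
R_Y = matmul(matmul(T, R_X), T_t)
assert abs(det2(R_Y) - det2(R_X)) < 1e-9
```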
\[
\det \tilde{R}_Y = \prod_{k=0}^{M-1} \lambda_k \le \left( \frac{1}{M} \sum_{k=0}^{M-1} \lambda_k \right)^M = 1 \qquad [3.13]
\]
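The bound in [3.13] can be checked on any set of real, non-negative eigenvalues summing to M. The eigenvalue list below is hypothetical:

```python
# Hypothetical eigenvalues of R_tilde_Y: real, non-negative, summing to M
lams = [1.6, 0.9, 0.5]
M = len(lams)
assert abs(sum(lams) - M) < 1e-12  # trace constraint: sum of eigenvalues = M

# det R_tilde_Y is the product of the eigenvalues
det_R_tilde = 1.0
for lam in lams:
    det_R_tilde *= lam

# Equation [3.13]: product <= (arithmetic mean)^M = 1^M = 1
arith_mean = sum(lams) / M
assert det_R_tilde <= arith_mean ** M
assert det_R_tilde <= 1.0
```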
by applying, once again, the property that the arithmetic mean of a set of real,
non-negative numbers is greater than or equal to the geometric mean. Equality is
reached when all the eigenvalues are equal to 1, that is, when the components of
the transformed vector are decorrelated.
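For M = 2 the bound can be made explicit. With a single correlation coefficient ρ, the matrix \(\tilde{R}_Y\) has eigenvalues 1 + ρ and 1 − ρ, so \(\det \tilde{R}_Y = 1 - \rho^2 \le 1\), with equality exactly when ρ = 0, i.e. when the two components are decorrelated. A quick check with illustrative values of ρ:

```python
# For each correlation coefficient rho, the eigenvalues of the 2x2
# matrix R_tilde = [[1, rho], [rho, 1]] are 1 + rho and 1 - rho.
for rho in [0.0, 0.3, 0.9]:
    det_R_tilde = (1.0 + rho) * (1.0 - rho)  # = 1 - rho^2
    assert det_R_tilde <= 1.0                # bound [3.13]
    if rho == 0.0:
        assert det_R_tilde == 1.0            # equality iff decorrelated
```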