Digital Signal Processing Reference
Table 21.2. Majorization: summary
Some of the key points about majorization are summarized here; P denotes the size of the vectors.
1. y majorizes x (y ≻ x) if ∑_{k=0}^{n} y[k] ≥ ∑_{k=0}^{n} x[k] for 0 ≤ n ≤ P − 2, and ∑_{k=0}^{P−1} y[k] = ∑_{k=0}^{P−1} x[k] (Definition 21.3).

2. [1 0 ... 0]^T ≻ [y_0 y_1 ... y_{P−1}]^T ≻ [1/P 1/P ... 1/P]^T for y_k ≥ 0 and ∑_k y_k = 1 (Lemma 21.1).
3. y ≻ x if and only if ∑_{k=0}^{P−1} g(y_k) ≥ ∑_{k=0}^{P−1} g(x_k) for all continuous convex functions g(x) (Theorem 21.2).
4. y ≻ x if and only if x = (T_1 T_2 ... T_{P−1}) y for some sequence of T-transforms T_k (end of Sec. 21.5.2).
5. y ≻ x if and only if there exists a doubly stochastic matrix A such that x = Ay (Theorem 21.9).
6. y ≻ x if and only if there exists an orthostochastic matrix A such that x = Ay (Theorem 21.10).
7. A is doubly stochastic if and only if y majorizes Ay for every real vector y (Theorem 21.8).
8. For Hermitian A, the vector of eigenvalues (λ_k) majorizes the vector of diagonal elements (a_kk) (Theorem 21.5).
9. For Hermitian A, ∑_{k=0}^{P−1} 1/(1 + λ_k) ≥ ∑_{k=0}^{P−1} 1/(1 + a_kk) (Ex. 21.9).
10. For Hermitian matrices A_1 and A_2, the sum of eigenvalues majorizes the eigenvalues of the sum, i.e., λ(A_1) + λ(A_2) ≻ λ(A_1 + A_2) (Theorem 21.6).
Note: All functions, arguments, and constants are real-valued.
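As a concrete illustration of Definition 21.3, the following Python sketch (the function name `majorizes` and the test vectors are our own, not from the text) checks the partial-sum condition directly after sorting both vectors in decreasing order, then verifies items 2 and 8 on small examples.

```python
# Sketch (not from the text): a direct check of Definition 21.3.
# majorizes(y, x) tests whether y majorizes x: after sorting both
# vectors in decreasing order, the partial sums must satisfy
#   sum_{k=0}^{n} y[k] >= sum_{k=0}^{n} x[k]   for 0 <= n <= P-2,
# with the full sums (n = P-1) equal.

def majorizes(y, x, tol=1e-12):
    ys = sorted(y, reverse=True)   # decreasing rearrangement
    xs = sorted(x, reverse=True)
    sy = sx = 0.0
    for k in range(len(ys) - 1):   # partial sums, 0 <= n <= P-2
        sy += ys[k]
        sx += xs[k]
        if sy < sx - tol:
            return False
    return abs(sum(ys) - sum(xs)) <= tol   # totals must agree

# Item 2: for a probability vector y, [1,0,...,0] majorizes y,
# and y majorizes the uniform vector [1/P,...,1/P].
y = [0.5, 0.3, 0.2]
print(majorizes([1.0, 0.0, 0.0], y))       # True
print(majorizes(y, [1/3, 1/3, 1/3]))       # True

# Item 8 on the 2x2 real symmetric matrix A = [[2,1],[1,2]]:
# its eigenvalues are 3 and 1, its diagonal is (2, 2),
# and (3,1) majorizes (2,2).
print(majorizes([3.0, 1.0], [2.0, 2.0]))   # True
```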
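The doubly stochastic characterization in item 5 can also be checked numerically in one direction. The sketch below (the matrix A and vector y are hypothetical examples, not from the text) builds a simple T-transform, which is doubly stochastic, and confirms that y majorizes x = Ay by comparing partial sums.

```python
# Sketch (not from the text): one direction of item 5.  A doubly
# stochastic matrix (nonnegative entries, every row and column summing
# to 1) averages the components of y, so y majorizes x = Ay.

def partial_sums_decreasing(v):
    # cumulative sums of v sorted in decreasing order
    out, acc = [], 0.0
    for t in sorted(v, reverse=True):
        acc += t
        out.append(acc)
    return out

# A = 0.5*I + 0.5*(swap of the first two coordinates): a T-transform,
# hence doubly stochastic (example matrix chosen for illustration).
A = [[0.5, 0.5, 0.0],
     [0.5, 0.5, 0.0],
     [0.0, 0.0, 1.0]]
y = [4.0, 1.0, 2.0]
x = [sum(A[i][j] * y[j] for j in range(3)) for i in range(3)]  # x = Ay
print(x)  # [2.5, 2.5, 2.0]

py, px = partial_sums_decreasing(y), partial_sums_decreasing(x)
print(all(a >= b - 1e-12 for a, b in zip(py, px)))  # True: y dominates
print(abs(py[-1] - px[-1]) < 1e-12)                 # True: totals agree
```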