transforms. Note that, by proving a recursion of the distance to orthonormality $\|W^T(k)W(k) - I_r\|_{\mathrm{Fro}}^2$ from a nonorthogonal matrix $W(0)$, it has been shown in [10] that the latter algorithm is numerically stable, in contrast to the former.
Instead of deriving a stochastic approximation algorithm from a specific orthonormalization matrix $G(k+1)$, an analogy with Oja's algorithm (4.30) has been used in [54] to derive the following algorithm

$$
W(k+1) = W(k) \pm \mu_k \left[ x(k)x^T(k)W(k) - W(k)\,V\,W^T(k)\,x(k)x^T(k)W(k)\,V^{-1} \right]. \qquad (4.54)
$$
It has been proved in [55] that, for tracking the dominant eigenvectors (i.e., with the sign $+$), the eigenvectors $\{\pm u_1, \ldots, \pm u_r\}$ are the only locally asymptotically stable points of the ODE associated with (4.54). But to the best of our knowledge, no complete theoretical performance analysis of these three algorithms (4.52), (4.53), and (4.54) has been carried out until now, except in [27], which gives the asymptotic distribution of the estimated principal eigenvectors.
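As an illustration, one iteration of the update (4.54) can be sketched in NumPy as follows; the function name, the particular diagonal choice of $V$, and the step size are illustrative assumptions, not from the source.

```python
import numpy as np

def oja_like_update(W, x, V, mu, sign=1.0):
    # One step of (4.54): W(k+1) = W(k) ± mu_k [x x^T W - W V W^T x x^T W V^{-1}]
    # W: (n, r) current estimate, x: (n,) data sample,
    # V: (r, r) diagonal weighting matrix, mu: step size;
    # sign=+1.0 corresponds to tracking the dominant eigenvectors.
    y = W.T @ x                                   # r-vector W^T(k) x(k)
    grad = np.outer(x, y) - W @ V @ np.outer(y, y) @ np.linalg.inv(V)
    return W + sign * mu * grad
```

Computing $y = W^T(k)x(k)$ once lets both terms be formed from rank-one outer products, avoiding explicit $n \times n$ matrices.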
If now the matrix $G(k+1)$ performs the Gram-Schmidt orthonormalization on the columns of $W'(k+1)$, an algorithm, denoted the stochastic gradient ascent (SGA) algorithm, is obtained if the successive columns of the matrix $W(k+1)$ are expanded, assuming $\mu_k$ is sufficiently small. By omitting the $O(\mu_k^2)$ terms in this expansion [51], we obtain the following algorithm
$$
w_i(k+1) = w_i(k) + \alpha_i \mu_k \left[ I_n - w_i(k)w_i^T(k) - \sum_{j=1}^{i-1}\left(1 + \frac{\alpha_j}{\alpha_i}\right) w_j(k)w_j^T(k) \right] x(k)x^T(k)\,w_i(k) \quad \text{for } i = 1, \ldots, r \qquad (4.55)
$$
where here $V = \mathrm{Diag}(\alpha_1, \alpha_2, \ldots, \alpha_r)$ with $\alpha_i$ arbitrary strictly positive numbers.
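The columnwise update (4.55) can be sketched in NumPy as follows; the function name and the particular gains `alpha` are illustrative assumptions.

```python
import numpy as np

def sga_update(W, x, alpha, mu):
    # One SGA step (4.55), applied column by column.
    # W: (n, r) with columns w_1(k), ..., w_r(k), x: (n,) data sample,
    # alpha: (r,) strictly positive gains, mu: step size.
    n, r = W.shape
    W_new = W.copy()
    for i in range(r):
        # Build I_n - w_i w_i^T - sum_{j<i} (1 + alpha_j/alpha_i) w_j w_j^T
        # from the columns at time k.
        P = np.eye(n) - np.outer(W[:, i], W[:, i])
        for j in range(i):
            P -= (1.0 + alpha[j] / alpha[i]) * np.outer(W[:, j], W[:, j])
        # w_i(k+1) = w_i(k) + alpha_i mu_k P x(k) x^T(k) w_i(k)
        W_new[:, i] = W[:, i] + alpha[i] * mu * (P @ x) * (x @ W[:, i])
    return W_new
```

Note that every term on the right-hand side uses the columns at time $k$, so all new columns are computed from the unmodified `W`.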
The so-called generalized Hebbian algorithm (GHA) is derived from Oja's algorithm (4.30) by replacing the matrix $W^T(k)x(k)x^T(k)W(k)$ of Oja's algorithm by its diagonal and superdiagonal part only
$$
W(k+1) = W(k) + \mu_k \left\{ x(k)x^T(k)W(k) - W(k)\,\mathrm{upper}\!\left[ W^T(k)x(k)x^T(k)W(k) \right] \right\}
$$
in which the operator $\mathrm{upper}$ sets all subdiagonal elements of a matrix to zero. When written columnwise, this algorithm is similar to the SGA algorithm (4.55) with $\alpha_i = 1$, $i = 1, \ldots, r$, with the difference that there is no coefficient 2 in the sum
$$
w_i(k+1) = w_i(k) + \mu_k \left[ I_n - \sum_{j=1}^{i} w_j(k)w_j^T(k) \right] x(k)x^T(k)\,w_i(k) \quad \text{for } i = 1, \ldots, r. \qquad (4.56)
$$
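GHA can be sketched in NumPy in both its matrix form, with the $\mathrm{upper}$ operator implemented via `np.triu`, and the columnwise form (4.56); function names are illustrative. Since (4.56) is the exact columnwise expansion of the matrix recursion, the two forms coincide, which serves as a sanity check.

```python
import numpy as np

def gha_update_matrix(W, x, mu):
    # Matrix form: W + mu {x x^T W - W upper[W^T x x^T W]},
    # where upper zeroes all subdiagonal elements (np.triu).
    y = W.T @ x
    return W + mu * (np.outer(x, y) - W @ np.triu(np.outer(y, y)))

def gha_update_columns(W, x, mu):
    # Columnwise form (4.56):
    # w_i(k+1) = w_i(k) + mu [I_n - sum_{j<=i} w_j(k) w_j^T(k)] x x^T w_i(k)
    n, r = W.shape
    W_new = W.copy()
    P = np.eye(n)
    for i in range(r):
        P -= np.outer(W[:, i], W[:, i])   # P = I_n - sum_{j<=i} w_j w_j^T
        W_new[:, i] = W[:, i] + mu * (P @ x) * (x @ W[:, i])
    return W_new
```

Because $\mathrm{upper}[y y^T]$ restricts column $i$ of the correction to the terms $j \le i$, each column of the matrix update reproduces exactly the partial sum appearing in (4.56).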