Digital Signal Processing Reference
Exercises 4.11 and 4.12) that the above criterion has a global maximum that is attained when and only when \( W = U_r Q \), where \( U_r = [u_1, \ldots, u_r] \) and \( Q \) is an arbitrary \( r \times r \) orthogonal matrix, and that all the other stationary points are saddle points. Taking the gradient of (4.46) [which is given explicitly by (4.92)], the following gradient ascent algorithm has been proposed in [48] for updating the estimate \( W(k) \):
\[
W(k+1) = W(k) + \mu_k \left\{ C(k)\, W(k) \left[ W^T(k)\, C(k)\, W(k) \right]^{-1} - W(k) \right\}. \qquad (4.47)
\]
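To make the iteration concrete, here is a minimal NumPy sketch of one step of the update (4.47); the fixed covariance `C`, the constant step size `mu`, and the toy dimensions are illustrative assumptions, since in the adaptive setting \( C(k) \) and \( \mu_k \) vary with \( k \):

```python
import numpy as np

def gradient_ascent_step(W, C, mu):
    """One iteration of the gradient ascent update (4.47):
    W <- W + mu * (C W [W^T C W]^{-1} - W)."""
    CW = C @ W
    # Solve (W^T C W) Z = (C W)^T rather than forming the inverse explicitly.
    Z = np.linalg.solve(W.T @ CW, CW.T)
    return W + mu * (Z.T - W)

# Toy usage: track the r = 2 dominant subspace of a fixed diagonal covariance,
# whose dominant eigenvectors are the first two coordinate axes.
C = np.diag([5.0, 4.0, 1.0, 0.5, 0.1])
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 2))
for _ in range(300):
    W = gradient_ascent_step(W, C, mu=0.1)

# At a global maximum W = U_r Q with Q orthogonal, so the orthogonal
# projector onto span(W) equals U_r U_r^T = diag(1, 1, 0, 0, 0).
P_est = W @ np.linalg.solve(W.T @ W, W.T)
```

Note that only the projector onto the span of \( W \) is identifiable here: any orthogonal rotation \( Q \) of the columns gives another global maximum, consistent with the fixed points \( W = U_r Q \).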
Using the recursive estimate \( C(k) = \sum_{i=0}^{k} \beta^{k-i}\, x(i)\, x^T(i) \) of (4.18), and the projection approximation \( W^T(k)\, x(i) \approx W^T(i)\, x(i) \) for all \( 0 \le i \le k \) introduced in [71], the update (4.47) becomes
\[
W(k+1) = W(k) + \mu_k \left[ \left( \sum_{i=0}^{k} \beta^{k-i}\, x(i)\, y^T(i) \right) \left( \sum_{i=0}^{k} \beta^{k-i}\, y(i)\, y^T(i) \right)^{-1} - W(k) \right], \qquad (4.48)
\]
where, by definition, \( y(i) = W^T(i)\, x(i) \). Consequently, similarly to the PAST algorithm, the standard RLS techniques used in adaptive filtering can be applied. According to the numerical experiments presented in [38], this algorithm performs very similarly to the PAST algorithm, with the same complexity. Finally, we note that it has been proved in [48] that the points \( W = U_r Q \) are the only asymptotically stable points of the ODE (see Subsection 4.7.1) associated with the gradient ascent algorithm (4.47), and that the attraction set of these points is the domain \( \{ W \text{ such that } W^T C W > 0 \} \). However, to the best of our knowledge, no complete theoretical performance analysis of algorithm (4.48) has been carried out so far.
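The exponentially weighted sums in (4.48) lend themselves to recursive accumulation, which is what makes RLS-style implementations possible. Below is a minimal sketch under assumed toy data and parameter values (`mu`, `beta`, the data model, and the initializations are all illustrative); a practical implementation would instead propagate the inverse of the second sum via the matrix inversion lemma, as in standard RLS:

```python
import numpy as np

def nic_update(W, Cxy, Cyy, x, mu, beta):
    """One step of (4.48), with Cxy ~ sum_i beta^(k-i) x(i) y(i)^T and
    Cyy ~ sum_i beta^(k-i) y(i) y(i)^T accumulated recursively, where
    y(i) = W(i)^T x(i) is the projection approximation."""
    y = W.T @ x
    Cxy = beta * Cxy + np.outer(x, y)
    Cyy = beta * Cyy + np.outer(y, y)
    # W <- W + mu * (Cxy Cyy^{-1} - W); solve a system instead of inverting.
    W = W + mu * (np.linalg.solve(Cyy.T, Cxy.T).T - W)
    return W, Cxy, Cyy

# Toy usage: data concentrated on an r = 2 dimensional subspace plus weak noise.
rng = np.random.default_rng(1)
n, r = 5, 2
U = np.linalg.qr(rng.standard_normal((n, r)))[0]  # true dominant subspace
W = 0.1 * rng.standard_normal((n, r))
Cxy = np.zeros((n, r))
Cyy = np.eye(r)  # small regularizing start keeps the solve well posed
for _ in range(3000):
    x = 3.0 * (U @ rng.standard_normal(r)) + 0.01 * rng.standard_normal(n)
    W, Cxy, Cyy = nic_update(W, Cxy, Cyy, x, mu=0.5, beta=0.99)

# The projector onto span(W) should approach the true projector U U^T.
P_est = W @ np.linalg.solve(W.T @ W, W.T)
err = np.linalg.norm(P_est - U @ U.T)
```

Per step this costs only a few matrix-vector products plus one small \( r \times r \) solve, which is what keeps the complexity comparable to PAST.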
4.6 EIGENVECTORS TRACKING
Although the adaptive estimation of the dominant or minor subspace through the estimate \( W(k) W^T(k) \) of the associated projector is of most importance for subspace-based algorithms, there are situations where the associated eigenvalues are simple ( \( \lambda_1 > \cdots > \lambda_r > \lambda_{r+1} \) or \( \lambda_n < \cdots < \lambda_{n-r+1} < \lambda_{n-r} \) ) and the desired estimated orthonormal basis of this space must form an eigenbasis. This is the case for the statistical technique of principal component analysis in data compression and coding, for optimal feature extraction in pattern recognition, for optimal fitting in the total least squares sense, and for the Karhunen-Loève transformation of signals, to mention only a few examples.
In these applications, \( \{ y_1(k), \ldots, y_r(k) \} \) or \( \{ y_n(k), \ldots, y_{n-r+1}(k) \} \), with \( y_i(k) = w_i^T(k)\, x(k) \) where \( W(k) = [w_1(k), \ldots, w_r(k)] \) or \( W(k) = [w_n(k), \ldots, w_{n-r+1}(k)] \), are the estimates of the \( r \) first principal or the \( r \) last minor components of the data \( x(k) \). To derive such adaptive estimates, the stochastic approximation algorithms that have been proposed stem from adaptations of the iterative constrained maximizations (4.5) and minimizations (4.6) of Rayleigh quotients; the weighted subspace
 