gradient-search procedure or the power method (4.9), where the step $w'_{i+1} = C_x w_i$ of (4.9) is replaced by $w'_{i+1} = (I_n - \mu C_x) w_i$ (with $0 < \mu < 1/\lambda_1$), yields (4.23) after the same derivation, but with the sign of the stepsize $\mu$ reversed:
$$w(k+1) = w(k) - \mu \{[I_n - w(k)w^T(k)]\, x(k)x^T(k)\, w(k)\}. \qquad (4.25)$$
The associated eigenvalue $\lambda_n$ can also be derived from the minimization of $J(\lambda) = (\lambda - u_n^T C_x u_n)^2$ and is consequently obtained by (4.24) as well, where $w(k)$ is issued from (4.25).
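As a mechanical illustration of these recursions, the minor-component update (4.25) and the companion eigenvalue recursion (4.24) can be sketched as follows. This is a minimal sketch under an assumed data model (zero-mean Gaussian $x(k)$ with a diagonal covariance $C_x$ chosen for illustration); all names and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, n_iter = 4, 1e-3, 1000

# Illustrative data model: x(k) zero-mean with covariance C_x (diagonal here,
# so C_x = L L^T with L = sqrt(C_x)).
C_x = np.diag([3.0, 1.0, 0.5, 0.1])
L = np.sqrt(C_x)

w = rng.standard_normal(n)
w /= np.linalg.norm(w)
lam = 0.0
for k in range(n_iter):
    x = L @ rng.standard_normal(n)
    y = x @ w                           # x^T(k) w(k)
    # Minor-component update (4.25): Oja's rule with the sign of mu reversed.
    # [I_n - w w^T] x x^T w expands to x*y - w*y^2 since w^T x = y.
    w = w - mu * (x * y - w * y * y)
    # Eigenvalue recursion in the spirit of (4.24), driven by w(k) from (4.25).
    lam = lam + mu * (y * y - lam)
```

Note that, as discussed in the convergence analysis of Section 4.7, recursion (4.25) by itself diverges, so this sketch only illustrates the form of the update, not a usable minor-component tracker.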
These heuristic approaches are derived from iterative computational techniques
issued from numerical methods recalled in Section 4.2, and need to be validated by
convergence and performance analysis for stationary data x ( k ). These issues will be
considered in Section 4.7. In particular, it will be proved that the coupled stochastic approximation algorithms (4.23) and (4.24), in which the stepsize $\mu$ is decreasing, converge to the pair $(\pm u_1, \lambda_1)$, in contrast to the stochastic approximation algorithm (4.25), which diverges. Then, due to the possible accumulation of rounding errors, the
algorithms that converge theoretically must be tested through numerical experiments
to check their numerical stability in stationary environments. Finally, extensive Monte Carlo simulations must be carried out with various stepsizes, initialization conditions, SNRs, and parameter configurations in nonstationary environments.
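As a sketch of such a numerical experiment, the coupled recursions (4.23)-(4.24) with a decreasing stepsize can be simulated on synthetic stationary data. The data model, stepsize schedule, and parameter values below are illustrative assumptions, not the chapter's prescribed experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Synthetic stationary data: x(k) zero-mean with a known diagonal covariance,
# so the dominant eigenpair (u_1, lambda_1) = (e_1, 3.0) is available for checking.
C_x = np.diag([3.0, 1.0, 0.5, 0.1])
L = np.sqrt(C_x)

w = rng.standard_normal(n)
w /= np.linalg.norm(w)
lam = 0.0
for k in range(1, 20001):
    mu = 1.0 / (100.0 + k)               # decreasing stepsize
    x = L @ rng.standard_normal(n)
    y = x @ w
    w = w + mu * (x * y - w * y * y)     # Oja's rule (4.23)
    lam = lam + mu * (y * y - lam)       # eigenvalue recursion (4.24)

alignment = abs(w[0])                    # |<w, u_1>|, close to 1 if w -> +/- u_1
```

Repeating such a run over many seeds, stepsize schedules, and covariance configurations is the kind of Monte Carlo study described above; `alignment` should concentrate near 1 and `lam` near $\lambda_1$, in line with the convergence claim for (4.23)-(4.24).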
4.5 SUBSPACE TRACKING
In this section, we consider the adaptive estimation of dominant (signal) and minor
(noise) subspaces. To derive such algorithms from the linear algebra material recalled
in Subsections 4.2.3, 4.2.4, and 4.2.5, similarly as for Oja's neuron, we first note that the general orthogonal iteration step (4.12), $W_{i+1} = \mathrm{Orthonorm}\{C W_i\}$, allows for the following variant for adaptive implementation
$$W_{i+1} = \mathrm{Orthonorm}\{(I_n + \mu C) W_i\}$$
where $\mu > 0$ is a small parameter known as the stepsize, because $I_n + \mu C$ has the same eigenvectors as $C$, with associated eigenvalues $(1 + \mu\lambda_i)_{i=1,\ldots,n}$. Noting that $I_n - \mu C$ also has the same eigenvectors as $C$, with associated eigenvalues $(1 - \mu\lambda_i)_{i=1,\ldots,n}$ arranged in exactly the opposite order to $(\lambda_i)_{i=1,\ldots,n}$ for $\mu$ sufficiently small ($\mu < 1/\lambda_1$), the general orthogonal iteration step (4.12) allows for the following second variant of this iterative procedure, which converges to the $r$-dimensional minor subspace of $C$ if $\lambda_{n-r} > \lambda_{n-r+1}$:
$$W_{i+1} = \mathrm{Orthonorm}\{(I_n - \mu C) W_i\}.$$
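Both variants can be sketched with a thin QR factorization playing the role of Orthonorm. This is a sketch under the assumption that $C$ is known exactly (the batch setting); the function name and parameter values are illustrative:

```python
import numpy as np

def orthonorm_step(C, W, mu, minor=False):
    # One variant step: W <- Orthonorm{(I_n +/- mu*C) W}, with Orthonorm
    # implemented as the Q factor of a thin QR factorization.
    n = C.shape[0]
    B = np.eye(n) - mu * C if minor else np.eye(n) + mu * C
    Q, _ = np.linalg.qr(B @ W)
    return Q

rng = np.random.default_rng(2)
n, r, mu = 6, 2, 0.1                       # mu < 1/lambda_1 = 0.2 here
# Illustrative C with known eigenvalues, so lambda_{n-r} > lambda_{n-r+1} holds.
Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))
C = Q0 @ np.diag([5.0, 4.0, 3.0, 2.0, 0.5, 0.2]) @ Q0.T

W, _ = np.linalg.qr(rng.standard_normal((n, r)))
for _ in range(300):
    W = orthonorm_step(C, W, mu, minor=True)
# The columns of W now approximately span the r-dimensional minor subspace of C.
```

With `minor=False` the same step converges instead to the $r$-dimensional dominant subspace, since $(1 + \mu\lambda_i)$ preserves the ordering of the $(\lambda_i)$.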
When the matrix $C$ is unknown and, instead, the data sequence $x(k)$ is available sequentially, we can replace $C$ by an adaptive estimate $C(k)$ (see Section 4.3.2). This leads
 