Digital Signal Processing Reference
to the adaptive orthogonal iteration algorithm
$$W(k+1) = \text{Orthonorm}\{[I_n \pm \mu_k C(k)]\, W(k)\} \qquad (4.26)$$
where the $+$ sign generates estimates of the signal subspace (if $\lambda_r > \lambda_{r+1}$) and the
$-$ sign estimates of the noise subspace (if $\lambda_{n-r} > \lambda_{n-r+1}$). Depending on the choice of the
estimate C ( k ) and of the orthonormalization (or approximate orthonormalization),
we can obtain alternative subspace tracking algorithms.
We note that the maximization or minimization in (4.7) of $J(W) \stackrel{\text{def}}{=} \mathrm{Tr}(W^T C W)$ subject to the constraint $W^T W = I_r$ can be solved by a constrained gradient-descent technique. Because $\nabla_W J = 2C(k)W$, we obtain the following Rayleigh quotient-based algorithm
$$W(k+1) = \text{Orthonorm}\{W(k) \pm \mu_k C(k) W(k)\} \qquad (4.27)$$
whose general expression is the same as general expression (4.26) derived from the
orthogonal iteration approach. We will denote this family of algorithms as the
power-based methods. It is interesting to note that a simple sign change enables one
to switch from the dominant to the minor subspaces. Unfortunately, similarly to
Oja's neuron, many minor subspace algorithms will be unstable or stable but non-
robust (i.e., numerically unstable with a tendency to accumulate round-off errors
until their estimates are meaningless), in contrast to the associated majorant subspace
algorithms. Consequently, the literature on minor subspace tracking techniques is very limited compared to the wide variety of methods that exist for the tracking of majorant subspaces.
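To make the power-based family concrete, the following sketch applies update (4.26)/(4.27) with the $+$ sign to a synthetic covariance matrix whose eigenvalues satisfy $\lambda_r > \lambda_{r+1}$. The matrix, dimensions, and step size are illustrative assumptions, and QR factorization stands in for the generic Orthonorm{.} operator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 2

# Synthetic covariance with a clear gap lambda_r > lambda_{r+1},
# the condition under which (4.26) converges to the signal subspace.
eigvals = np.array([10.0, 8.0, 1.0, 0.5, 0.2, 0.1])
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
C = U @ np.diag(eigvals) @ U.T

def power_step(W, C, mu, sign=+1.0):
    # One power-based iteration: orthonormalize [I +/- mu*C] W.
    # QR factorization plays the role of the Orthonorm{.} operator.
    Q, _ = np.linalg.qr(W + sign * mu * (C @ W))
    return Q

W = np.linalg.qr(rng.standard_normal((n, r)))[0]  # random orthonormal start
for _ in range(100):
    W = power_step(W, C, mu=0.1, sign=+1.0)       # + sign: dominant subspace

# Distance between estimated and true dominant subspaces,
# measured via the difference of orthogonal projectors.
err = np.linalg.norm(W @ W.T - U[:, :r] @ U[:, :r].T)
```

Flipping `sign` to $-1$ targets the minor subspace instead, which, as noted above, is the regime where instability and non-robustness typically arise.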
4.5.1 Subspace Power-based Methods
Clearly the simplest selection for C ( k ) is the instantaneous estimate x ( k ) x T ( k ), which
gives rise to the Data Projection Method (DPM), first introduced in [70], where the
orthonormalization is performed using the Gram-Schmidt procedure:

$$W(k+1) = \text{GS-Orth.}\{W(k) \pm \mu_k\, x(k)x^T(k)\, W(k)\}. \qquad (4.28)$$
In nonstationary situations, estimates (4.19) or (4.20) of the covariance $C_x(k)$ of $x(k)$ at time $k$ have been tested in [70]. For this algorithm to converge, we need to select a stepsize $\mu$ such that $\mu \ll 1/\lambda_1$ (see e.g. [29]). To satisfy this requirement (nonstationary situations included), and because most of the time we have $\mathrm{Tr}[C_x(k)] \gg \lambda_1(k)$, the following two normalized step sizes have been proposed in [70]:
$$\mu_k = \frac{\mu}{\|x(k)\|^2} \quad\text{and}\quad \mu_k = \frac{\mu}{\sigma_x(k)} \quad\text{with}\quad \sigma_x(k+1) = \nu\,\sigma_x(k) + (1-\nu)\,\|x(k)\|^2$$

where $\mu$ may be close to unity and where the choice of $\nu \in (0,1)$ depends on the rapidity of the change of the parameters of the observation signal model (4.15).
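A minimal sketch of the DPM recursion (4.28) with the first normalized step size, $\mu_k = \mu/\|x(k)\|^2$, is given below. The low-rank-plus-noise data model, the value of $\mu$, and the use of QR in place of explicit Gram-Schmidt orthonormalization are illustrative assumptions, not choices made in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 6, 2

# Observations x(k) with r strong components along U[:, :r] plus weak
# noise, so Tr[C_x] is dominated by the signal part (illustrative model).
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
amps = np.array([10.0, 8.0])

def dpm_step(W, x, mu):
    # DPM update (4.28) with the normalized step mu_k = mu / ||x(k)||^2;
    # QR is used here in place of explicit Gram-Schmidt orthonormalization.
    mu_k = mu / (x @ x)
    Q, _ = np.linalg.qr(W + mu_k * np.outer(x, x @ W))
    return Q

W = np.linalg.qr(rng.standard_normal((n, r)))[0]
for _ in range(2000):
    x = U[:, :r] @ (amps * rng.standard_normal(r)) \
        + 0.1 * rng.standard_normal(n)
    W = dpm_step(W, x, mu=0.5)   # mu chosen close to unity, as suggested

# Distance between the tracked and true signal subspaces.
err = np.linalg.norm(W @ W.T - U[:, :r] @ U[:, :r].T)
```

Note that the normalization makes the per-sample correction $\mu\,(x/\|x\|)(x/\|x\|)^T W$ bounded regardless of the input power, which is what allows $\mu$ to be chosen near unity.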