Digital Signal Processing Reference
In general, subspace and component-based methods rely on batch computations, such as the eigenvalue decomposition (EVD) of the sample covariance matrix or the singular value decomposition (SVD) of the data matrix. However, these two approaches are ill suited to adaptive applications that track nonstationary signal parameters: the repeated estimation of the subspace or of the eigenvectors becomes a real computational burden, since each update requires O(n^3) operations, where n is the dimension of the vector-valued
data sequence. Before briefly reviewing the main contributions to adaptive subspace or eigenvector estimation, let us first classify these algorithms by their computational complexity. If r denotes the rank of the principal (dominant) or minor subspace to be estimated, then, since usually r ≪ n, the traditional classification is as follows: algorithms requiring O(n^2 r) or O(n^2) operations per update are classified as high complexity; algorithms requiring O(nr^2) operations as medium complexity; and algorithms requiring O(nr) operations as low complexity. This last category is the most important from a real-time implementation point of view, and schemes belonging to it are also known in the literature as fast subspace tracking algorithms. It should be mentioned
that methods in the high complexity class usually exhibit faster convergence rates than those of the other two classes. Since the paper by Owsley [56], which first introduced an adaptive procedure for estimating the signal subspace with O(n^2 r) operations, the signal processing literature on subspace or eigenvector tracking has grown extremely rich. The survey paper [20] is an excellent review of results up to 1990; it covers only the first two classes, since the last class did not yet exist when it was written. The most popular algorithm of the medium complexity class was proposed by Karasalo in [40]. In [20], it is stated that this dominant subspace algorithm offers the best performance-to-cost ratio and thus serves as a point of reference for subsequent algorithms by many authors. The merger of signal
processing and neural networks in the early 1990s [39] brought much attention to a method originated by Oja [50] and subsequently applied by many others; the Oja method requires only O(nr) operations at each update. It is clearly this continued interest in the subject, together with significant recent developments, that gave rise to the third class. It is beyond the scope of this chapter to give a comprehensive survey of all the contributions; instead, we focus on some of them. The interested reader may refer to [29, pp. 30-43] for an exhaustive literature review and to [8] for tables giving the exact computational complexities and a convergence-based ranking of recent subspace tracking algorithms. In the present work, we mainly emphasize the low complexity class, for both dominant and minor subspace tracking and dominant and minor eigenvector tracking, while briefly addressing the most important schemes of the other two classes. For these algorithms, we focus on their derivation from different iterative procedures drawn from linear algebra, and on their theoretical convergence and performance in stationary environments. Many important issues, such as finite precision effects on their behavior (e.g., possible numerical instabilities due to roundoff error accumulation), the various adaptive step-size strategies, and the tracking capabilities of these algorithms in nonstationary environments, are left aside. The interested reader may refer to the simulation sections of the papers that deal with these issues.
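To make the contrast concrete, the batch route described at the start of this section can be sketched in a few lines of NumPy: the dominant r-dimensional subspace is obtained from an EVD of the sample covariance matrix or, equivalently, from an SVD of the data matrix. These are the O(n^3) computations that fast subspace trackers avoid repeating at every update. The dimensions, data model, and variable names below are illustrative, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, r = 8, 500, 2  # data dimension, number of snapshots, subspace rank (illustrative)

# Synthetic data: an r-dimensional signal embedded in weak noise.
A = rng.standard_normal((n, r))                         # mixing matrix (hypothetical)
X = A @ rng.standard_normal((r, T)) + 0.1 * rng.standard_normal((n, T))

# Batch route 1: EVD of the sample covariance matrix.
C = X @ X.T / T
eigvals, eigvecs = np.linalg.eigh(C)                    # eigenvalues in ascending order
W_evd = eigvecs[:, -r:]                                 # dominant r-dimensional subspace

# Batch route 2: SVD of the data matrix itself.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
W_svd = U[:, :r]

# Both bases span the same subspace: the singular values of W_evd^T W_svd
# are the cosines of the principal angles, all (numerically) equal to one.
overlap = np.linalg.svd(W_evd.T @ W_svd, compute_uv=False)
print(np.allclose(overlap, 1.0, atol=1e-8))
```

Either route costs O(n^3) (or O(n^2 T) for the SVD of the full data matrix) each time it is rerun, which is what motivates the recursive updates surveyed above.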
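As a minimal illustration of the low complexity class, the following sketch implements Oja's subspace rule mentioned above, whose update touches only O(nr) numbers per snapshot. The step size, data model, and iteration count are our assumptions for a stationary toy problem; practical trackers typically add orthonormalization steps or adaptive step-size strategies, which, as noted above, we leave aside here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, mu = 8, 2, 0.01                        # dimension, rank, fixed step size (illustrative)

# Stationary data with an r-dimensional dominant subspace (columns of A, unit norm).
A = rng.standard_normal((n, r))
A /= np.linalg.norm(A, axis=0)
W = np.linalg.qr(rng.standard_normal((n, r)))[0]   # random orthonormal initial basis

for _ in range(5000):
    x = A @ rng.standard_normal(r) + 0.1 * rng.standard_normal(n)
    y = W.T @ x                               # r inner products: O(nr)
    W += mu * np.outer(x - W @ y, y)          # Oja subspace update: O(nr) per snapshot

# W should now (approximately) span the dominant subspace, i.e. the column space of A.
P = A @ np.linalg.pinv(A)                     # orthogonal projector onto the true subspace
residual = np.linalg.norm(W - P @ W) / np.linalg.norm(W)
print(residual)                               # relative residual, small after convergence
```

Note that no eigendecomposition is ever formed: each snapshot refines W directly, which is precisely what places such schemes in the O(nr) fast subspace tracking class.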
 