The derivation and analysis of algorithms for subspace tracking require a minimum background in linear algebra and matrix analysis. For this reason, Section 2 recalls the standard linear algebra material needed in this chapter. Section 3 then presents the observation model under study, fixes the main notation, and states the problems of adaptive estimation and tracking of principal or minor subspaces (or eigenvectors). Oja's neuron is introduced in Section 4 as a preliminary example showing that adaptive subspace or component algorithms are derived empirically from various adaptations of standard iterative computational techniques drawn from numerical methods. Sections 5 and 6 introduce different adaptive algorithms for principal (or minor) subspace and component analysis, respectively. As with Oja's neuron, the majority of these algorithms can be viewed as heuristic variations of the power method. These heuristic approaches need to be validated by convergence and performance analyses. Several tools to address these points in a stationary environment, such as the stability of the ordinary differential equation (ODE) associated with a stochastic approximation algorithm and the Gaussian approximation, are given in Section 7. Some illustrative applications of principal and minor subspace tracking in signal processing are given in Section 8. Section 9 contains some concluding remarks. Finally, some exercises are proposed in Section 10, essentially to prove some properties and relations introduced in the other sections.
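As a point of reference for the algorithms discussed later, the following sketch illustrates the classical power method mentioned above. It is a minimal NumPy illustration (the function name, tolerance, and toy matrix are only assumptions made here for the example), not one of the adaptive algorithms derived in this chapter.

```python
import numpy as np

def power_method(C, num_iter=200, tol=1e-10, rng=None):
    """Classical power iteration: estimate the dominant eigenvector of a
    symmetric non-negative definite matrix C by repeated multiplication
    by C followed by normalization."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.standard_normal(C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(num_iter):
        w_new = C @ w
        w_new /= np.linalg.norm(w_new)
        if np.linalg.norm(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return w

# Example: dominant eigenvector of a toy 2x2 covariance matrix
C = np.array([[3.0, 1.0], [1.0, 2.0]])
u1 = power_method(C)
print(u1, u1 @ C @ u1)  # eigenvector estimate and its Rayleigh quotient (close to the largest eigenvalue)
```

The adaptive algorithms of Sections 5 and 6 can be read as stochastic, data-driven counterparts of this deterministic iteration, where the exact covariance matrix is replaced by instantaneous or recursively estimated quantities.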
4.2 LINEAR ALGEBRA REVIEW
In this section, several useful notions from linear algebra, such as the EVD, the QR decomposition, and the variational characterization of eigenvalues/eigenvectors of real symmetric matrices, and from matrix analysis, such as a class of standard iterative subspace computational techniques, are recalled. Finally, a characterization of the principal subspace of a covariance matrix derived from the minimization of a mean square error completes this section.
4.2.1 Eigenvalue Decomposition
Let C be an n × n real symmetric [resp. complex Hermitian] matrix, which is also non-negative definite because C represents a covariance matrix throughout this chapter. Then there exists (see e.g. [37, Sec. 2.5]) an orthonormal [resp. unitary] matrix U = [u_1, ..., u_n] and a real diagonal matrix D = Diag(λ_1, ..., λ_n) such that C can be decomposed¹ as follows
"
#
¼ X
¼ X
n
n
C ¼ UDU T
l i u i u i
resp : , UDU H
l i u i u i
,
:
(4 : 1)
1
1
¹ Note that for non-negative definite real symmetric or complex Hermitian matrices, this EVD is identical to the SVD, where the associated left and right singular vectors are identical.
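As a quick numerical illustration of (4.1), the following NumPy sketch (the sample covariance construction and variable names are illustrative assumptions, not taken from the chapter) computes the EVD of a covariance matrix with numpy.linalg.eigh and checks both the factorization C = U D Uᵀ and its rank-one expansion, as well as the coincidence with the SVD noted in the footnote.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 4))          # 1000 observations of a 4-dimensional real signal
C = (X.T @ X) / X.shape[0]                  # sample covariance matrix: symmetric, non-negative definite

lam, U = np.linalg.eigh(C)                  # eigenvalues (ascending) and orthonormal eigenvectors
D = np.diag(lam)

# C = U D U^T ...
print(np.allclose(C, U @ D @ U.T))

# ... and equivalently the sum of rank-one terms sum_i lambda_i u_i u_i^T
C_sum = sum(lam[i] * np.outer(U[:, i], U[:, i]) for i in range(C.shape[0]))
print(np.allclose(C, C_sum))

# For a non-negative definite symmetric matrix, the EVD coincides with the SVD
Us, s, Vt = np.linalg.svd(C)
print(np.allclose(np.sort(s), np.sort(lam)))
```

In the complex Hermitian case, the same checks hold with conjugate transposes in place of transposes.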