following two chapters. Among the nonneural methods, the first to use such an
idea in a recursive TLS algorithm was Davila [50]; there the desired eigenvector
was updated with a correction vector chosen as a Kalman filter gain vector, and
the scalar step size was determined by minimizing the Rayleigh quotient (RQ).
Bose et al. [11] applied recursive TLS to reconstruct high-resolution images from
undersampled low-resolution noisy multiframes. In [200] the minimization is
performed with the conjugate gradient method, which requires more operations
per iteration but is very fast overall and is particularly well suited to large matrices.
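Both families of methods ultimately minimize the same quantity. As a point of reference for the recursive algorithms above and the neural networks described next, the short NumPy sketch below (the function name and the synthetic data are purely illustrative, not taken from the cited works) computes the TLS solution of Ax ≈ b by minimizing the Rayleigh quotient of C = [A b]ᵀ[A b]: the minimizer is the eigenvector of the smallest eigenvalue, and rescaling it so that its last component equals −1 maps it onto the TLS hyperplane.

```python
import numpy as np

def tls_via_minor_eigenvector(A, b):
    """TLS solution of A x ~ b from the minor eigenvector of C = [A b]^T [A b].

    Minimizing the Rayleigh quotient RQ(v) = (v^T C v) / (v^T v) gives the
    eigenvector of the smallest eigenvalue; scaling it so that its last
    component equals -1 yields the TLS solution x.
    """
    Ab = np.column_stack([A, b])
    C = Ab.T @ Ab
    eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    v = eigvecs[:, 0]                      # minor eigenvector, unit norm
    if abs(v[-1]) < 1e-12:
        raise ValueError("nongeneric TLS problem: last component is ~0")
    x_tls = -v[:-1] / v[-1]                # map back to the TLS hyperplane
    rq = v @ C @ v / (v @ v)               # minimized Rayleigh quotient
    return x_tls, rq

# Synthetic check: noise in both the data matrix and the observation vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true + 0.01 * rng.standard_normal(100)
A_noisy = A + 0.01 * rng.standard_normal(A.shape)
x_tls, rq = tls_via_minor_eigenvector(A_noisy, b)
```

The recursive and conjugate gradient methods cited above update this eigenvector iteratively rather than recomputing a full eigendecomposition at every step.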
The neural networks applied to TLS problems can be divided into two categories:
the neural networks for the MCA, which are described in Chapter 2,
and the neural networks that iterate only in the TLS hyperplane and therefore
give the TLS solution directly, which are described in Chapter 3. They are the
following:
The Hopfield-like neural network of Luo, Li, and He [120,121]. This network
is made up of 3(m + n) + 2 neurons (m and n are the dimensions of the
data matrix) grouped in a main network and in four subnetworks; the output
of the main network gives the TLS solution, and the available data matrix
and observation vector are taken directly as the interconnections and the bias
current of the network; thus, the principal limit of this network is the fact that
it is linked to the dimensions of the data matrix and cannot be used without
structural changes for other TLS problems. The authors demonstrate² the
stability of the network for batch operation: that is, working on all the
equations together (as opposed to sequential or online operation, which
updates at every equation presentation). The initial conditions cannot be null.
The network is based on an analog circuit architecture with continuous-
time dynamics. The authors apply the network to the TLS linear prediction
frequency estimation problem.
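The analog circuit of [120,121], with its main network and four subnetworks, is not reproduced here. The sketch below is only a minimal discrete-time analogue of the kind of batch gradient dynamics such a continuous-time network realizes: an Euler step stands in for the circuit time constant, and the cost E_TLS(x) = ||Ax − b||²/(1 + ||x||²), the step size, and the iteration count are assumptions made for illustration, not the authors' design.

```python
import numpy as np

def tls_gradient_flow(A, b, x0, dt=1e-3, steps=20000):
    """Euler discretization of the batch gradient flow dx/dt = -grad E_TLS(x),
    with E_TLS(x) = ||A x - b||^2 / (1 + ||x||^2).

    Batch operation: every step uses all equations (all rows of A and b) at
    once, as opposed to a sequential/online presentation of single equations.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        r = A @ x - b                       # residuals of all equations
        denom = 1.0 + x @ x
        grad = 2.0 * (A.T @ r) / denom - 2.0 * (r @ r) * x / denom ** 2
        x -= dt * grad                      # Euler step of the continuous flow
    return x

# Illustrative call with a nonzero initial state, as the cited network requires:
# x_hat = tls_gradient_flow(A, b, x0=0.1 * np.ones(A.shape[1]))
```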
The linear neuron of Gao, Ahmad, and Swamy [63,64]. This is a single linear
neuron associated with a constrained anti-Hebbian learning law, which
follows from the linearization of E_TLS, so it is correct enough for small
gains and, above all, for weight norms much smaller than 1. The weight
vector gives the TLS solution after the learning phase. It has been applied
to adaptive FIR and IIR parameter estimation problems. In the first problem,
after learning, the output of the neuron gives the error signal of the adaptive
filter; this is a useful property because in a great many adaptive filter
applications the error signal has the same importance as the filter parameters and
other signal quantities. From now on, this neuron will be termed TLS GAO.
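A minimal sketch of an online, constrained anti-Hebbian update of this kind follows. It assumes the instantaneous rule w ← w − αε(a − εw), with ε = aᵀw − b, which is what linearizing the instantaneous cost ε²/(1 + ||w||²) yields when ||w|| ≪ 1; the exact gain sequence and any normalization used in [63,64] may differ.

```python
import numpy as np

def tls_neuron_anti_hebbian(A, b, alpha=1e-3, epochs=50, w0=None, seed=0):
    """Single linear neuron trained with a constrained anti-Hebbian rule.

    For each presented equation a_k^T w ~ b_k:
        eps = a_k @ w - b_k                  # neuron output: the error signal
        w  -= alpha * eps * (a_k - eps * w)  # anti-Hebbian term + constraint
    The rule comes from linearizing eps^2 / (1 + ||w||^2), so it is accurate
    only for small gains alpha and for ||w|| << 1; after training, w
    approximates the TLS solution and eps the adaptive-filter error signal.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    w = np.zeros(n) if w0 is None else np.asarray(w0, dtype=float).copy()
    for _ in range(epochs):
        for k in rng.permutation(m):         # sequential (online) presentation
            eps = A[k] @ w - b[k]
            w -= alpha * eps * (A[k] - eps * w)
    return w
```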
The linear neurons of Cichocki and Unbehauen [21,22]. The authors propose
linear neurons with different learning laws to deal with OLS, TLS, and DLS
problems. For each algorithm, an appropriate cost energy E(x) to be minimized
is defined.
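For concreteness, the cost energies most commonly associated with the three problems can be written side by side, as in the sketch below (illustrative function names; the energies actually proposed in [21,22] may be written differently): E_OLS(x) = ||Ax − b||² when the noise is assumed only in b, E_TLS(x) = ||Ax − b||²/(1 + ||x||²) when it is assumed in both A and b, and E_DLS(x) = ||Ax − b||²/||x||² when it is assumed only in A.

```python
import numpy as np

def e_ols(x, A, b):
    """Ordinary least squares energy: noise only in the observation vector b."""
    r = A @ x - b
    return r @ r

def e_tls(x, A, b):
    """Total least squares energy: noise in both the data matrix A and b."""
    r = A @ x - b
    return (r @ r) / (1.0 + x @ x)

def e_dls(x, A, b):
    """Data least squares energy: noise only in the data matrix A."""
    r = A @ x - b
    return (r @ r) / (x @ x)
```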
² This demonstration is not original, because the authors rediscover the well-known theorem of
stability for gradient flows (see, e.g., [84, p. 19]).