hyperplane and therefore only the first n components of eqs. (4.35)
and (4.36) are meaningful. The n-dimensional TLS GAO ODE is

dx/dt = -Rx + (x^T R x) x    (4.39)
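The flow (4.39) can be checked numerically. The sketch below uses a forward-Euler discretization of dx/dt = -Rx + (x^T R x) x on a sample autocorrelation matrix; the per-step renormalization is an added numerical stabilization, not part of the original learning law, and the data, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 3))
R = A.T @ A / 200.0                  # sample autocorrelation matrix

x = rng.standard_normal(3)
x /= np.linalg.norm(x)
eta = 0.05                           # small gain, per the derivation's assumption
for _ in range(2000):
    # Euler step of dx/dt = -Rx + (x^T R x) x
    x = x - eta * (R @ x - (x @ R @ x) * x)
    x /= np.linalg.norm(x)           # renormalization added for stability only

w, V = np.linalg.eigh(R)
v_min = V[:, 0]                      # eigenvector of the smallest eigenvalue
print(abs(x @ v_min))                # overlap with the minor eigenvector, close to 1
```

The flow drives x toward the minor eigenvector of R, which is the TLS solution direction in the augmented space.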
Considering the derivation of the TLS GAO learning law in [64, p. 722] as a
gradient descent algorithm using a linearization (which implies the assumptions
of small gains and, above all, weight norms much smaller than 1) of the local
gradient vector of the instantaneous estimate of the TLS energy function (4.8),
it is easy to demonstrate the following proposition.
Proposition 88 (TLS GAO as a TLS EXIN Simplification) The TLS GAO
learning law can be derived as a linearization of the TLS EXIN learning law for
weight norms much smaller than 1.
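The proposition can be illustrated numerically. The sketch below assumes the TLS EXIN instantaneous update has the form w ← w - α γ (x - γ w) with γ = (w^T x - y)/(1 + ||w||²); this specific form, the data, and the gain are assumptions for illustration. Dropping the 1/(1 + ||w||²) factor, valid for ||w|| ≪ 1, gives the linearized (TLS GAO-style) step:

```python
import numpy as np

def tls_exin_step(w, x, y, alpha):
    # Assumed TLS EXIN instantaneous update: w <- w - alpha*gamma*(x - gamma*w)
    gamma = (w @ x - y) / (1.0 + w @ w)
    return w - alpha * gamma * (x - gamma * w)

def linearized_step(w, x, y, alpha):
    # Linearization for ||w|| << 1: the 1/(1 + ||w||^2) factor is dropped
    delta = w @ x - y
    return w - alpha * delta * (x - delta * w)

rng = np.random.default_rng(1)
x, y, alpha = rng.standard_normal(4), 0.3, 0.01
w_small = 1e-3 * rng.standard_normal(4)   # ||w|| << 1: the two steps agree
w_large = 5.0 * rng.standard_normal(4)    # large ||w||: the two steps diverge
d_small = np.linalg.norm(tls_exin_step(w_small, x, y, alpha)
                         - linearized_step(w_small, x, y, alpha))
d_large = np.linalg.norm(tls_exin_step(w_large, x, y, alpha)
                         - linearized_step(w_large, x, y, alpha))
print(d_small, d_large)
```

For small weight norms the discrepancy between the two updates is negligible; outside that regime it grows by orders of magnitude, consistent with the proposition's restriction.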
The most important differences between TLS EXIN and TLS GAO are:
1. The operation cost per iteration is smaller for TLS GAO.
2. As pointed out in Remark 40, the TLS GAO learning law does not represent
the gradient flow of an error function; this rules out batch methods such as
CG, SCG, and BFGS, which are available for TLS EXIN.
3. The dependence of TLS GAO on strongly constraining assumptions limits
its dynamic behavior. Indeed, violation of the small-gain assumption, typical
in the transient, implies that TLS EXIN has a better transient and a faster
and more accurate response than TLS GAO. Violation of the small-weight
assumption restricts the field of validity of the initial conditions of TLS
GAO, in terms of accuracy and dynamic behavior.
4.5 TLS APPLICATION: ADAPTIVE IIR FILTERING
From the point of view of linear system identification, the TLS neurons can be
applied to the parameter estimation of adaptive FIR 2 and IIR filters when both
the observed input and output data are contaminated with additive noise [27].
Adaptive IIR filters have lower filter orders and effectively model a wider variety
of systems [172] than FIR filters. According to the choice of the prediction error,
there are two approaches to adaptive IIR filtering:
1. The output-error method , which minimizes the output error. If the mini-
mization is performed using the LS criterion, it gives a consistent result but
2 For the FIR parameter estimation, the TLS neurons are fed with the input signal s ( t ) and its
n − 1 delayed values s ( t − 1 ) , s ( t − 2 ) , ... , s ( t − n + 1 ) ; the desired signal is b ( t ) .
After learning, the weights give the FIR parameters, and the error signal δ represents the error
signal of the adaptive filter, which, in many adaptive filter applications, has the same importance
as the filter parameters [63,64].
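The data arrangement described in this footnote can be sketched as follows. For illustration, the neural learning is replaced by a closed-form SVD-based TLS solution; the filter coefficients, signal length, and noise level below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3                                  # number of FIR taps
h_true = np.array([0.5, -0.3, 0.2])    # hypothetical FIR parameters
s = rng.standard_normal(500)           # input signal s(t)
b = np.convolve(s, h_true, mode="full")[: len(s)]  # desired signal b(t)

# Regressor rows [s(t), s(t-1), ..., s(t-n+1)], as in the footnote
T = len(s)
A = np.column_stack([np.concatenate([np.zeros(k), s[: T - k]]) for k in range(n)])
A_noisy = A + 0.01 * rng.standard_normal(A.shape)   # additive noise on the input
b_noisy = b + 0.01 * rng.standard_normal(T)         # additive noise on the output

# Closed-form TLS via the SVD of [A | b], standing in for the neural learning:
# the solution lies along the right singular vector of the smallest singular value
_, _, Vt = np.linalg.svd(np.column_stack([A_noisy, b_noisy]))
v = Vt[-1]
w_tls = -v[:n] / v[n]                  # estimated FIR parameters
print(w_tls)                           # close to h_true
```

After this fit, the weights play the role of the learned FIR parameters, and the residual plays the role of the adaptive filter's error signal.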