is devised, which contains the classical minimization [e.g., the minimization of $E_{\mathrm{TLS}}(x)$ for the TLS] plus regularization terms for ill-conditioned problems and nonlinear functions for the robustness of the method. Then the system of differential equations describing the gradient flow of this energy is implemented in an analog network for the continuous-time learning law and in a digital network for the discrete-time learning law.
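In outline, and with notation that is illustrative rather than taken from the source (the regularization weight $\lambda$, the regularizer $\Omega$, the flow gain $\mu$, and the step size $\alpha$ are assumptions), the construction is

$$E(x) \;=\; E_{\mathrm{TLS}}(x) \;+\; \lambda\,\Omega(x), \qquad \frac{dx(t)}{dt} \;=\; -\mu\,\nabla_x E\bigl(x(t)\bigr) \quad \text{(continuous time, analog)},$$

$$x(k+1) \;=\; x(k) \;-\; \alpha\,\nabla_x E\bigl(x(k)\bigr) \quad \text{(discrete time, digital)},$$

where, for a robust variant, the squared residuals inside $E_{\mathrm{TLS}}$ are replaced by a nonlinear function of the residuals.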
The analog network consists of analog integrators, summers, and multipliers; the network is driven by independent source signals (zero-mean, high-frequency, uncorrelated i.i.d. random signals) multiplied by the incoming data $a_{ij}$, $b_i$ ($i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$) from $[A; b]$. The artificial neuron, with an on-chip adaptive learning algorithm, allows both simultaneous processing of the complete input information and a sequential strategy.
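As a purely numerical illustration of this simultaneous, continuous-time processing, the following sketch integrates the gradient flow with forward Euler; the Rayleigh-quotient TLS cost, the flow gain, and the step size are assumptions drawn from the standard neural TLS literature, not the circuit equations of the analog network:

```python
import numpy as np

# Hedged sketch: forward-Euler integration of the continuous-time
# gradient flow dx/dt = -mu * grad E_TLS(x), processing all rows of
# [A; b] simultaneously.  E_TLS is the Rayleigh-quotient TLS cost,
# an assumed form; mu and dt are illustrative constants.

rng = np.random.default_rng(1)
m, n = 100, 3
x_true = 0.2 * rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = A @ x_true + 0.02 * rng.standard_normal(m)

def grad_e_tls(x):
    """Gradient of E_TLS(x) = ||Ax - b||^2 / (1 + ||x||^2)."""
    r = A @ x - b
    s = 1.0 + x @ x
    return (2.0 / s) * (A.T @ r - (r @ r / s) * x)

x = np.zeros(n)          # initial weights
mu, dt = 1.0, 1e-3       # flow gain and Euler step (illustrative)
for _ in range(20000):
    x -= mu * dt * grad_e_tls(x)

print("max |x - x_true| =", np.abs(x - x_true).max())
```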
In the digital neuron, the difference equations of the gradient flow are implemented in CMOS switched-capacitor (SC) technology. The neurons for the TLS do not work on the exact gradient flow but on its linearization:³ it gives the same learning law as TLS GAO for a particular choice of the independent signals.
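A minimal sketch of such a linearized, sequential law is given below; the update $x \leftarrow x - \alpha\,\delta\,(a - \delta x)$ with $\delta = a^{\mathrm T}x - b$ is the form usually associated with TLS GAO in this literature, and the synthetic data and constants are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of a sequential, linearized TLS learning law:
#   x <- x - alpha * delta * (a - delta * x),  delta = a^T x - b.
# The rule is the form usually attributed to TLS GAO; alpha and the
# synthetic data are illustrative, not taken from the source.

rng = np.random.default_rng(0)
m, n = 200, 4
x_true = 0.1 * rng.standard_normal(n)  # keep ||x|| << 1 (cf. footnote 3)
A = rng.standard_normal((m, n))
b = A @ x_true + 0.01 * rng.standard_normal(m)

x = np.zeros(n)
alpha = 0.02
for epoch in range(100):
    for i in rng.permutation(m):       # one row [a_i; b_i] at a time
        delta = A[i] @ x - b[i]
        x -= alpha * delta * (A[i] - delta * x)

print("max |x - x_true| =", np.abs(x - x_true).max())
```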
The DLS learning law is introduced empirically, without justification. The examples of [22] are used as a benchmark for comparison purposes.
Remark 40 TLS GAO and the linear neurons of Cichocki and Unbehauen for the TLS problems have learning laws that are not gradient flows of the error function, because of the linearization; this forbids the use of acceleration techniques based on the Hessian of the error function and of the conjugate gradient method.
³ It is evident that, as for TLS GAO, the norm of the weight must be much less than unity for the linearization to hold; curiously, the authors do not mention this limit.
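The content of Remark 40 and of footnote 3 can be made explicit. Assuming the instantaneous Rayleigh-quotient cost that is standard in this literature (an assumption, since the source does not restate it here),

$$E_i(x) \;=\; \frac{\delta_i^2}{1+\|x\|^2}, \qquad \delta_i = a_i^{\mathrm T}x - b_i,$$

$$\nabla E_i(x) \;=\; \frac{2\,\delta_i}{1+\|x\|^2}\left(a_i - \frac{\delta_i}{1+\|x\|^2}\,x\right) \;\approx\; 2\,\delta_i\,(a_i - \delta_i\,x) \quad \text{for } \|x\|^2 \ll 1.$$

The linearized direction on the right coincides with $\nabla E_i$ only in this limit; as a vector field it is not the gradient of the error function, so techniques that presuppose a true gradient, such as Hessian-based acceleration and the conjugate gradient method, do not apply.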