of a priori knowledge about the initial conditions (i.e., the null initial conditions are the universal choice for convergence).
Remark 124 (Energy Invariant) The restriction of the energy function E_GeTLS EXIN to the unit hypersphere is an invariant for every GeTLS problem (i.e., with respect to ζ).
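This invariance can be checked directly: the ζ-dependent denominator of the GeTLS cost reduces to a constant when x^T x = 1. The short Python check below is a sketch assuming the cost has the form E(x) = (Ax − b)^T (Ax − b) / [2((1 − ζ) + ζ x^T x)]; whatever the exact normalization, the argument only needs the factor (1 − ζ) + ζ x^T x to reduce to 1 on the unit hypersphere. The data and the sampled values of ζ are illustrative.

    import numpy as np

    def getls_energy(x, A, b, zeta):
        # GeTLS-style cost: squared residual over a denominator that blends
        # OLS (zeta = 0), TLS (zeta = 1/2), and DLS (zeta = 1).
        r = A @ x - b
        return float(r @ r) / (2.0 * ((1.0 - zeta) + zeta * float(x @ x)))

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 3))
    b = rng.standard_normal(20)

    x = rng.standard_normal(3)
    x /= np.linalg.norm(x)  # restrict x to the unit hypersphere

    # Remark 124: on the unit hypersphere the cost no longer depends on zeta.
    print([getls_energy(x, A, b, z) for z in (0.0, 0.25, 0.5, 0.75, 1.0)])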
5.5.4 Examples
The first example [24] deals with the DLS benchmark problem in [195] considered in Section 2.10. A line model a_1 x + a_2 y = 1, where a_1 = 0.25 and a_2 = 0.5, is fitted to an observation data set by estimating its parameters. The set of equations is created by adding Gaussian noise of zero mean and variance σ^2 = 0.5 to x and y. For the comparison, the learning rate follows the same law as in Section 2.10: it is initially constant and equal to 0.01, is then reduced linearly to 0.001 during the first 500 iterations, and afterward is held constant. DLS EXIN and DLS scheduling EXIN are compared. The initial conditions [0 1]^T are chosen for the DLS EXIN neuron by using the a priori information that they are in the domain of convergence. Scheduling for the other neuron is
ζ(j) = 1 − 1.1/(j + 0.1)                                   (5.139)
where j ≥ 1 is the iteration number (hyperbolic scheduling). Figure 5.21 shows the transient: DLS scheduling EXIN reaches the value 0.5 some 600 iterations before DLS EXIN because of the possibility of working with very low weights. Figure 5.22 shows the dynamic behavior of the two neurons: they are similar, but DLS scheduling EXIN has smaller elongations. Concluding, DLS scheduling EXIN behaves only slightly better than DLS EXIN, but it is always guaranteed to converge.
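The experiment can be reproduced in outline with the Python sketch below. The line parameters, the noise variance, the learning-rate law, and the scheduling of eq. (5.139) are taken from the text; everything else (the number and range of the data points, the number of iterations, the near-null initial weights of the scheduled neuron, and the plain stochastic-gradient form of the update on the GeTLS cost) is an illustrative assumption rather than the exact EXIN learning rule.

    import numpy as np

    rng = np.random.default_rng(1)

    # Benchmark of Section 2.10: line a1*x + a2*y = 1 with a1 = 0.25, a2 = 0.5,
    # observed through zero-mean Gaussian noise of variance 0.5 on x and y.
    a_true = np.array([0.25, 0.5])
    n_pts = 1000
    t = rng.uniform(0.0, 5.0, n_pts)
    pts = np.column_stack([t, (1.0 - a_true[0] * t) / a_true[1]])
    pts += rng.normal(0.0, np.sqrt(0.5), pts.shape)
    A, b = pts, np.ones(n_pts)

    def learning_rate(j):
        # Simple reading of the law of Section 2.10: 0.01 at the start,
        # reduced linearly to 0.001 over the first 500 iterations, then constant.
        return 0.01 - (0.01 - 0.001) * min(j, 500) / 500

    def zeta_schedule(j):
        # Hyperbolic scheduling of eq. (5.139): OLS-like at the first step,
        # tending to 1 (pure DLS) as j grows.
        return 1.0 - 1.1 / (j + 0.1)

    def getls_sgd_step(x, a_i, b_i, zeta, alpha):
        # One stochastic-gradient step on the GeTLS cost
        # (a_i^T x - b_i)^2 / (2 ((1 - zeta) + zeta x^T x)):
        # a sketch of the learning rule, not the exact EXIN implementation.
        gamma = a_i @ x - b_i
        d = (1.0 - zeta) + zeta * (x @ x)
        grad = (gamma / d) * a_i - zeta * (gamma ** 2 / d ** 2) * x
        return x - alpha * grad

    # DLS EXIN: zeta fixed at 1, initial conditions [0, 1]^T (inside the
    # domain of convergence); DLS scheduling EXIN: very low initial weights.
    x_dls = np.array([0.0, 1.0])
    x_sched = np.full(2, 1e-3)
    for j in range(1, 3001):
        i = rng.integers(n_pts)
        alpha = learning_rate(j)
        x_dls = getls_sgd_step(x_dls, A[i], b[i], 1.0, alpha)
        x_sched = getls_sgd_step(x_sched, A[i], b[i], zeta_schedule(j), alpha)

    print("DLS EXIN:           ", x_dls)
    print("DLS scheduling EXIN:", x_sched)

The scheduled neuron can start from nearly null weights because its early steps are close to OLS; this is the possibility of working with very low weights exploited in Figure 5.21.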
Remark 125 (Null Observation Vector) The DLS scheduling cannot have null initial conditions when the observation vector (b) is null; indeed, the first step (OLS, ζ = 0) has x^T A^T A x as an energy function, which is null for null initial conditions. The problem of the lack of universal initial conditions can be circumvented by inserting one or more OLS learning steps in the scheduling in order to enter the domain of convergence. If the initial conditions are very small, one OLS step is enough to assure the convergence.
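A minimal sketch of this remedy is to prepend a fixed number of OLS steps (ζ = 0) to the hyperbolic law before ζ is allowed to grow. The helper below is hypothetical: its name, its default number of OLS steps, and the way the schedule is restarted are illustrative choices, not prescriptions from the text.

    def zeta_schedule_b0(j, n_ols_steps=5):
        # Hypothetical schedule for a null observation vector b: hold zeta = 0
        # (pure OLS learning steps, starting from very small nonnull weights)
        # for a few iterations, then hand over to the hyperbolic law (5.139).
        if j <= n_ols_steps:
            return 0.0
        return 1.0 - 1.1 / ((j - n_ols_steps) + 0.1)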
5.6 ACCELERATED MCA EXIN NEURON (MCA EXIN+)
As anticipated in Sections 2.5.3 and 2.11, the DLS scheduling EXIN can be
used to improve the MCA EXIN neuron. Indeed, the DLS energy cost for b = 0
becomes [see eq. (5.21)]
E_DLS(x) = (1/2) (x^T A^T A x)/(x^T x) = (1/2) r(x, A^T A),   x ∈ ℜ^n − {0}        (5.140)
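The link with MCA is direct: up to the factor 1/2, eq. (5.140) is the Rayleigh quotient of A^T A, whose minimum over nonzero x is attained at the eigenvector associated with the smallest eigenvalue, that is, the minor component sought by MCA EXIN. The short Python check below, on illustrative random data, verifies this identification.

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((50, 4))
    R = A.T @ A  # the matrix whose minor component is sought

    def e_dls(x):
        # DLS cost of eq. (5.140) for b = 0: half the Rayleigh quotient of A^T A.
        return 0.5 * (x @ R @ x) / (x @ x)

    eigvals, eigvecs = np.linalg.eigh(R)       # ascending eigenvalues
    x_minor = eigvecs[:, 0]                    # minor component of A^T A

    x_rand = rng.standard_normal(4)
    print(e_dls(x_minor), 0.5 * eigvals[0])    # equal: the minimum of E_DLS
    print(e_dls(x_rand) >= e_dls(x_minor))     # any other direction costs more

Hence minimizing E_DLS drives the weight vector toward the minor component, and the DLS scheduling (with the precaution of Remark 125, since here b = 0) is the idea behind the accelerated neuron MCA EXIN+ of this section.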