for $T$ iterations, where the $L_\infty$ or Chebyshev norm has been taken into account; $T$ is in general equal to the number of vectors of the training set or to a predefined number in the case of online learning, and $\varepsilon = 10^{-10}$.
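As a concrete illustration, the test above can be sketched in a few lines of Python. The quantity being monitored is stated on the preceding page; the sketch assumes it is the weight increment $\Delta w(k)$, and the closure-based `chebyshev_stop` helper is ours, not the book's code:

```python
def chebyshev_stop(eps=1e-10, T=100):
    """Fire once the monitored increment stays below eps in the
    Chebyshev (L-infinity) norm for T consecutive iterations."""
    count = 0
    def check(delta_w):
        nonlocal count
        # Chebyshev norm: largest absolute component of the increment.
        count = count + 1 if max(abs(d) for d in delta_w) < eps else 0
        return count >= T
    return check
```

For example, with `T = 3` the test fires only on the third consecutive small increment, which matches the "for T iterations" wording.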
For SCG TLS EXIN: The learning law stops if either of the following two criteria is satisfied:

1. $\|Ax(k) - b\|_2^2 < \varepsilon_m$, where $\varepsilon_m$ is the machine error (valid only for compatible or underdetermined systems).
2. Define $E(k) = \|Ax(k) - b\|_2^2$, the error at iteration $k$; then

$$|\Delta E(k)| = |E(k) - E(k-1)| < \varepsilon \qquad (5.150)$$

for $T$ iterations, where $\varepsilon = 10^{-5}$ and $T = \max(3, n)$, $n$ being the dimension of the weight vector.
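The pair of tests can be sketched as follows; the `scg_stop` helper and its bookkeeping are our own illustration under the definitions above, not the book's implementation:

```python
def scg_stop(A, b, eps_m=2.2e-16, eps=1e-5):
    """Criterion 1: squared residual E(k) = ||A x - b||_2^2 below the
    machine error eps_m (compatible/underdetermined systems only).
    Criterion 2 (5.150): |E(k) - E(k-1)| < eps for T = max(3, n)
    consecutive iterations."""
    n = len(A[0])
    T = max(3, n)
    prev, count = [None], [0]
    def check(x):
        # Residual r = A x - b and its squared 2-norm E(k).
        r = [sum(a * xj for a, xj in zip(row, x)) - bi
             for row, bi in zip(A, b)]
        E = sum(ri * ri for ri in r)
        if E < eps_m:                 # criterion 1
            return True
        if prev[0] is not None and abs(E - prev[0]) < eps:
            count[0] += 1             # criterion 2: consecutive stalls
        else:
            count[0] = 0
        prev[0] = E
        return count[0] >= T
    return check
```

Feeding the exact solution of a compatible system triggers criterion 1 immediately, while a stalled iterate triggers criterion 2 after $T$ unchanged errors.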
For BFGS TLS EXIN: By using (2.123), the learning law stops when

$$\|\Delta w(k)\|_1 = \sum_{i=1}^{n} |\Delta w_i(k)| < n\varepsilon \qquad (5.151)$$

for $T = n$ iterations ($n$ is the dimension of the weight vector), where the $L_1$ or absolute-value norm has been taken into account and $\varepsilon = 10^{-10}$.
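A minimal sketch of test (5.151) follows; reading the tested quantity as the weight increment $\Delta w(k)$ is our interpretation of the criterion, and the `bfgs_stop` helper is illustrative only:

```python
def bfgs_stop(eps=1e-10):
    """Stop once the L1 norm of the weight increment stays below n*eps
    for T = n consecutive iterations, n being the weight dimension."""
    count = 0
    def check(delta_w):
        nonlocal count
        n = len(delta_w)
        # ||dw||_1 = sum of absolute components, compared with n*eps.
        count = count + 1 if sum(abs(d) for d in delta_w) < n * eps else 0
        return count >= n
    return check
```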
5.8 SIMULATIONS FOR THE GeTLS EXIN NEURON
The first simulation deals with the benchmark problem in [195] considered in Sections 2.10 and 5.5.4. First, $m$ points are taken from this line with $0 < x < 5$ by uniform sampling. Then the noisy observation set is generated by adding Gaussian noise with zero mean and variance 2.5 to these points. Figure 5.27
shows the temporal evolution of the estimates for the inverse iteration (dashed)
and scheduling (solid) methods for m = 1000. The inverse iteration method
requires fewer iterations, but the cost per iteration is much larger. This is apparent from Table 5.3, where the problem is solved for several values of $m$. The scheduling technique works far better for large $m$: for $m \geq 400$, the computational cost remains nearly the same for a given accuracy. This phenomenon is easily explained by the observation that the GeTLS error cost (5.6) derives from the (weighted) Rayleigh quotient of $C^T C$, so its minimization operates on the column space of $C$, implying that it depends primarily on $n$ and not on $m$ [24].
This is a very important advantage for large and very large systems of linear
equations [33,61].
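The dimensional argument can be seen directly in code: whatever the number of equations $m$, the matrix $C^T C$ entering the Rayleigh quotient keeps the same $(n+1)\times(n+1)$ size. The sketch below uses synthetic data, and the chosen sizes are illustrative assumptions:

```python
import numpy as np

# C is the m x (n+1) augmented data matrix; the Rayleigh quotient of
# C^T C acts on an (n+1) x (n+1) matrix whose size is independent of m.
rng = np.random.default_rng(0)
n = 3                                    # illustrative weight dimension
for m in (100, 1000, 10000):
    C = rng.standard_normal((m, n + 1))  # synthetic data, m rows
    G = C.T @ C
    assert G.shape == (n + 1, n + 1)     # same size for every m
```

This is why the per-iteration work of the scheduling technique barely changes as $m$ grows.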
The next simulations deal with the benchmark problem (5.57) introduced in
Section 5.3.3 and considered subsequently in the analysis of the GeTLS stability.
In this section we compare the GeTLS EXIN neuron with the other principal
TLS neurons. The first simulations, whose results are shown in Table 5.4, deal with $\sigma = 0$.