40). Evidently, the DLS approach needs the largest number of iterations. All
transients are far better than in [22, Ex. 1]; for the TLS and DLS approaches
this is a consequence of the approximated learning law of these neurons, whereas
for the OLS approach it is a consequence of the particular presentation of the
inputs.
5.7.3 Mixed OLS-TLS Problems
The mixed OLS-TLS problems introduced in Section mix are solved by the
GeTLS EXIN linear neuron (MADALINE for the multidimensional case) by
working on the training set. The procedure is as follows (compare with [98,
Alg. 3.2]):
1. Preprocessing of the training set: column permutations, so that the first
columns of the data matrix are the ones known exactly, followed by QR
factorization.
2. Training of GeTLS EXIN for ζ = 0.5 (TLS): null initial conditions avoid
the need for a numerical rank determination (see Section 5.4).
3. Training of GeTLS EXIN for ζ = 0 (OLS).
4. Postprocessing of the solution (multidimensional case): inverse permuta-
tions.
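The four steps above can be sketched as follows. This is a minimal illustration, not the GeTLS EXIN neuron itself: the two training phases (steps 2 and 3) are replaced by a direct SVD-based TLS solve and a triangular OLS back-substitution on the QR factors, and the function name, the `n1` parameter (number of exactly known columns), and the `perm` argument are illustrative choices, not from the source.

```python
import numpy as np

def mixed_ols_tls(A, b, n1, perm=None):
    """Sketch of the mixed OLS-TLS pre/post-processing pipeline.

    A : (m, n) data matrix; after the optional column permutation
        `perm`, its first n1 columns are assumed known exactly.
    b : right-hand side. Returns the solution in the ORIGINAL
        column order (step 4 undoes the permutation of step 1).
    """
    A = np.asarray(A, dtype=float)
    if perm is not None:
        A = A[:, perm]                       # step 1a: column permutation
    m, n = A.shape
    # Step 1b: QR factorization of the augmented matrix [A | b]
    _, R = np.linalg.qr(np.column_stack([A, b]))
    R11 = R[:n1, :n1]                        # exact-columns block
    R12 = R[:n1, n1:]
    R22 = R[n1:, n1:]                        # block carrying the TLS part
    # Stand-in for GeTLS training with zeta = 0.5 (TLS on R22):
    # the TLS solution comes from the last right singular vector.
    _, _, Vt = np.linalg.svd(R22)
    v = Vt[-1]
    x2 = -v[:-1] / v[-1]
    # Stand-in for GeTLS training with zeta = 0 (OLS back-substitution
    # for the exactly known columns).
    z = np.append(x2, -1.0)                  # [x2; -1] convention
    x1 = np.linalg.solve(R11, -R12 @ z)
    x = np.concatenate([x1, x2])
    if perm is not None:                     # step 4: inverse permutation
        perm = np.asarray(perm)
        inv = np.empty_like(perm)
        inv[perm] = np.arange(n)
        x = x[inv]
    return x
```

For a compatible system the routine recovers the exact solution regardless of the permutation, which makes the inverse-permutation bookkeeping of step 4 easy to check.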
The same idea of pre- and postprocessing can be applied to other problems, such
as the GeTLS problem in [98, Sec. 10.3] (preprocess the training set by using
error equilibration matrices) and the general TLS formulation in [74] (preprocess
the training set by using the nonsingular diagonal matrices D and T ).
5.7.4 Note on the Choice of the Numerical Stop Criteria
As pointed out in this chapter, the usual stop criteria cannot be used for
the MCA and TLS learning laws, because the error cost does not go to zero at
convergence. Indeed, the minimum of the MCA error cost is equal to the smallest
eigenvalue of the autocorrelation matrix, and the minimum of the GeTLS error
cost depends on the incompatibility of the linear equations and on the validity
of the linear modeling [for TLS the minimum is equal to σ²_{n+1}, as shown in
eq. (1.21)]. Hence, it is necessary to detect the flatness of the curves represent-
ing either the weight components or the error function. In all simulations the
following numerical stop criteria have been chosen:
For TLS EXIN, TLS GAO, and all MCA linear neurons: Define

Δw(k) = w(k) − w(k − 1)        (5.148)

where Δw(k) ∈ ℝⁿ and k is the iteration. Then the learning law stops when

‖Δw(k)‖ ≤ ε max_i |w_i(k)|        (5.149)
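Reading eq. (5.149) as a relative flatness test (the weight increment must be negligible compared with the largest weight component), the criterion can be sketched as a small helper. The function name and the tolerance argument `tol` are hypothetical names introduced here for illustration.

```python
import numpy as np

def flatness_stop(w_new, w_old, tol=1e-6):
    """Relative stop criterion in the spirit of eqs. (5.148)-(5.149):
    stop when the weight increment Delta w(k) = w(k) - w(k-1) is
    small relative to the largest weight component of w(k)."""
    w_new = np.asarray(w_new, dtype=float)
    w_old = np.asarray(w_old, dtype=float)
    dw = w_new - w_old                       # eq. (5.148)
    # eq. (5.149): ||dw|| <= tol * max_i |w_i(k)|
    return np.linalg.norm(dw) <= tol * np.max(np.abs(w_new))
```

Such a relative test is insensitive to the overall scale of the weight vector, which matters here because, as noted above, the error cost itself does not vanish at convergence and so cannot be thresholded directly.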