5.5.2.1 Householder Transformation This direct method has been proposed in [52] and is equivalent to the method in [51]. Compute a Householder transformation matrix $Q$ such that $Q^T b = (\|b\|_2, 0, \ldots, 0)^T$. Then the problem is reduced to solving

$$Q^T A x = (\|b\|_2, 0, \ldots, 0)^T \qquad (5.133)$$

Define $a_{r,1}$ as the first row of $Q^T A$. Let $\bar{A}$ be the matrix consisting of the last $m - 1$ rows of $Q^T A$. Find $x$ such that

$$\bar{A} x = 0, \qquad a_{r,1} x = \|b\|_2 \qquad (5.134)$$
When $\bar{A}$ is perturbed, all that can be said is $\bar{A} x \approx 0$. In this case, the closest lower-rank approximation $\hat{A}$ to $\bar{A}$ (in the sense that $\|\bar{A} - \hat{A}\|_F^2$ is minimized) is computed, together with its null vector, say $v$. This $v$ is parallel to the right singular vector corresponding to the smallest singular value of $\bar{A}$. Thus, $x$ is a scaled version of $v$ such that $a_{r,1} x = \|b\|_2$. If $v$ is orthogonal to the first row of $Q^T A$ (nongeneric case), no solution exists. In this case, additional constraints must be added to this formulation.
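The following NumPy sketch makes these steps concrete (the function name dls_householder and the tolerance tol are illustrative choices, not from [51] or [52]): it reflects $b$ onto the first coordinate axis, splits $Q^T A$ into its first row $a_{r,1}$ and the block $\bar{A}$ of the remaining rows, takes the right singular vector of the smallest singular value of $\bar{A}$ as the null vector $v$, and scales it so that $a_{r,1} x = \|b\|_2$, flagging the nongeneric case.

import numpy as np

def dls_householder(A, b, tol=1e-12):
    """DLS solution via a Householder reflection (illustrative sketch)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    beta = np.linalg.norm(b)                 # ||b||_2
    # Householder vector u: Q = I - 2 u u^T maps b onto beta * e1
    e1 = np.zeros(m)
    e1[0] = 1.0
    u = b - beta * e1
    if np.linalg.norm(u) < tol:              # b already along e1, so Q = I
        QtA = A.copy()
    else:
        u /= np.linalg.norm(u)
        QtA = A - 2.0 * np.outer(u, u @ A)   # Q^T A (here Q is symmetric)
    a_r1 = QtA[0]                            # first row of Q^T A
    Abar = QtA[1:]                           # last m - 1 rows of Q^T A
    # Right singular vector of the smallest singular value of Abar:
    # the null vector of its closest lower-rank approximation.
    v = np.linalg.svd(Abar)[2][-1]
    denom = a_r1 @ v
    if abs(denom) < tol:                     # v orthogonal to a_r1: nongeneric case
        raise ValueError("nongeneric DLS problem: no solution in this formulation")
    return (beta / denom) * v                # scale so that a_r1 @ x = ||b||_2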
5.5.3 DLS Scheduling
If GeTLS EXIN is used, the DLS problem requires a finite parameter $\zeta$, but the corresponding cost function is not regular at the origin. In this chapter it has been proved that, for every choice of $\zeta$ less than 1, the null initial conditions always guarantee convergence. Hence, if no a priori information is given, the only way to assure convergence to the DLS solution is to choose a parameter very close, but not equal, to 1; this closeness influences the accuracy of the solution. To allow the null initial conditions without loss of accuracy, DLS scheduling is proposed.
Note that the GeTLS EXIN weight vector follows the solution locus toward the DLS solution if its parameter $\zeta$ is made variable, increasing from 0 to 1 according to a predefined law (scheduling). The first iteration, for $\zeta = 0$, is taken in the OLS direction; therefore, both the null initial conditions and the weights after the first iteration lie in the domain of convergence. This new position is also in the DLS domain because it lies in the direction of the solution locus. Slowly changing $\zeta$ then makes the weight vector follow the hyperbola branch containing the solution locus in every $x$ plane. Furthermore, for a not too high initial learning rate, the GeTLS neuron starts from very low weight values at the first iteration, which accelerates the learning law. This idea is illustrated in Figure 5.20. Indeed, expressing the GeTLS cost error function (5.6) as
GeTLS cost error function (5.6) as
ζ
T
1
2 (
Ax
b
)
(
Ax
b
)
E GeTLS EXIN
(
x ,
ζ ) =
(5.135)
y (ζ )
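As an illustration of the scheduling, the following minimal NumPy sketch performs gradient descent on (5.135) while ramping $\zeta$ from 0 (OLS) to 1 (DLS). It assumes, per the GeTLS definition (5.6), that $y(\zeta) = (1 - \zeta) + \zeta\, x^T x$; batch gradient descent stands in here for the sequential GeTLS EXIN learning law, and the linear ramp is only one possible predefined law.

import numpy as np

def getls_dls_scheduling(A, b, n_epochs=2000, lr=1e-3):
    """Gradient descent on (5.135) with zeta scheduled from 0 (OLS) to 1 (DLS)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(A.shape[1])                  # null initial conditions
    for k in range(n_epochs):
        zeta = min(1.0, k / (0.8 * n_epochs))  # linear ramp, then hold at 1 (DLS)
        r = A @ x - b                          # residual Ax - b
        y = (1.0 - zeta) + zeta * (x @ x)      # assumed y(zeta) from (5.6)
        # Gradient of r^T r / (2 y) with respect to x
        grad = (A.T @ r) / y - (zeta * (r @ r) / y**2) * x
        x -= lr * grad
    return x

At $k = 0$ (with $\zeta = 0$ and $x = 0$) the update reduces to the OLS gradient step, matching the argument above; by the time $\zeta$ reaches 1, the weights have left the origin, so $y(\zeta)$ stays away from zero.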