4.3.2.1 Backtracking Algorithm  The constants $\eta \in (0, 1)$ and $\tau$, $\bar{\tau}$ $(0 < \tau < \bar{\tau} < 1)$ are given.
1. Set $\alpha = 1$.
2. Test the relation $E(w(t) + \alpha d(t)) \leq E(w(t)) + \eta\alpha \nabla E^{T}(w(t))\, d(t)$.
3. If the relation is not satisfied, choose a new $\alpha$ in $[\tau\alpha, \bar{\tau}\alpha]$ and go to step 2. If it is satisfied, set $\alpha(t) = \alpha$ and $w(t+1) = w(t) + \alpha(t) d(t)$.
Several procedures have been utilized to choose a new trial value of $\alpha$ in step 3. The classical Goldstein–Armijo method simply multiplies the old value of $\alpha$ by $\frac{1}{2}$ or some other constant in $(0, 1)$. It is also possible to choose the new $\alpha$ as the minimizer of a polynomial interpolating already computed function and derivative values, subject to its lying in the interval $[\tau\alpha, \bar{\tau}\alpha]$. The BFGS method with backtracking is globally and superlinearly convergent on uniformly convex problems [13].
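As a concrete reference, here is a minimal Python sketch of the procedure; the names `backtracking`, `E`, and `grad_E` are illustrative, not taken from the text, and the new trial value is produced by the classical halving rule, i.e., the Goldstein–Armijo constant $\frac{1}{2}$:

```python
import numpy as np

def backtracking(E, grad_E, w, d, eta=1e-4, shrink=0.5, max_iter=50):
    """Armijo backtracking: shrink alpha until the sufficient-decrease
    relation E(w + alpha*d) <= E(w) + eta*alpha*grad_E(w)^T d holds."""
    alpha = 1.0                    # step 1: start from the unit step
    E0 = E(w)
    slope = grad_E(w) @ d          # directional derivative; negative for a descent direction
    for _ in range(max_iter):
        if E(w + alpha * d) <= E0 + eta * alpha * slope:  # step 2: test the relation
            return alpha           # step 3: relation satisfied, accept alpha(t) = alpha
        alpha *= shrink            # step 3: new trial value (Goldstein-Armijo halving)
    return alpha                   # fallback after max_iter reductions

# toy usage on E(w) = ||w||^2 with the steepest-descent direction
E = lambda w: float(w @ w)
grad_E = lambda w: 2.0 * w
w0 = np.array([1.0, -2.0])
print(backtracking(E, grad_E, w0, d=-grad_E(w0)))  # accepted step length: 0.5
```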
For an $n$-dimensional quadratic form, the sequence of matrices $G(t)$ is guaranteed to converge exactly to the inverse of the true Hessian matrix after $n$ steps, and the quasi-Newton algorithm would find the exact minimum of the quadratic form after $n$ steps, provided that the line minimizations are performed exactly. In the following, simulations of the BFGS TLS with backtracking will be shown.
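The $n$-step property is easy to verify numerically. The sketch below (an illustration under the stated assumptions, not one of the book's simulations) runs the BFGS inverse-Hessian recursion with exact line minimizations on a random strictly convex quadratic form $E(w) = \frac{1}{2}w^{T}Aw - b^{T}w$:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # symmetric positive definite Hessian
b = rng.standard_normal(n)

w = np.zeros(n)                        # starting point
G = np.eye(n)                          # initial inverse-Hessian approximation
I = np.eye(n)

for _ in range(n):
    g = A @ w - b                      # gradient of E(w) = 0.5 w^T A w - b^T w
    d = -G @ g                         # quasi-Newton direction
    alpha = -(g @ d) / (d @ A @ d)     # exact line minimization on a quadratic
    s = alpha * d                      # step taken
    w = w + s
    y = (A @ w - b) - g                # change in the gradient
    rho = 1.0 / (y @ s)
    G = (I - rho * np.outer(s, y)) @ G @ (I - rho * np.outer(y, s)) \
        + rho * np.outer(s, s)         # BFGS update of the inverse Hessian

print(np.allclose(G, np.linalg.inv(A)))       # True: G equals the true inverse Hessian
print(np.allclose(w, np.linalg.solve(A, b)))  # True: exact minimum after n steps
```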
4.4 COMPARISON WITH TLS GAO
As anticipated in Section 1.14, the only existing neural network giving the TLS solution directly is the linear neuron of Gao et al. [63,64]. Using the notation of eq. (4.2), the $(n+1)$-dimensional TLS GAO discrete learning law is

$$x(t+1) = x(t) - \alpha(t)\, y(t)\left[\xi(t) + x(t)\,\xi_{n+1}(t)\right] \qquad (4.35)$$
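Before turning to the associated ODE, a minimal sketch of one iteration of (4.35) may help fix the notation; the function name and the choice of $y(t)$ as the linear neuron output $x^{T}(t)\xi(t)$ are assumptions made for illustration, since the output definition is not restated here:

```python
import numpy as np

def tls_gao_step(x, xi, alpha):
    """One step of the discrete law (4.35):
    x(t+1) = x(t) - alpha(t) * y(t) * [xi(t) + x(t) * xi_{n+1}(t)],
    with y(t) taken as the neuron output x(t)^T xi(t) (an assumption)."""
    y = x @ xi
    return x - alpha * y * (xi + x * xi[-1])

rng = np.random.default_rng(0)
n = 3
xi = np.append(rng.standard_normal(n), rng.standard_normal())  # augmented input [a_i; b_i]
x = np.append(rng.standard_normal(n), -1.0)                    # initial condition x_{n+1}(0) = -1
x = tls_gao_step(x, xi, alpha=0.01)
print(x[-1])  # -1.0: the last component is unchanged (cf. [64, Theorem 2] below)
```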
The corresponding ODE is

$$\frac{dx(t)}{dt} = -\bar{R}\,x(t) - x(t)\,\bar{r}^{T}x(t) \qquad (4.36)$$

where

$$\bar{r} = E\left[\xi(t)\,\xi_{n+1}(t)\right] \qquad (4.37)$$

$$\bar{R} = E\left[\xi(t)\,\xi^{T}(t)\right] = \begin{bmatrix} R & r \\ r^{T} & E\left(b_{i}^{2}\right) \end{bmatrix} \qquad (4.38)$$
and $R = E\left(a_{i}a_{i}^{T}\right)$, $r = E(b_{i}a_{i})$. In [64, Theorem 2] it is shown that, taking $x_{n+1}(0) = -1$ as the initial condition, $x(t)$ always lies in the TLS hyperplane.
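This invariance can be read directly off (4.35): for the $(n+1)$th component the update reduces to

$$x_{n+1}(t+1) = x_{n+1}(t) - \alpha(t)\,y(t)\,\xi_{n+1}(t)\left[1 + x_{n+1}(t)\right]$$

and the bracketed factor vanishes whenever $x_{n+1}(t) = -1$, so the weight vector never leaves the TLS hyperplane.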