where

\[
\gamma(t) = \frac{\delta(t)}{(1-\zeta) + \zeta\, x^T(t)\, x(t)} \tag{5.11}
\]
Equation (5.10) represents the training law of the GeTLS EXIN linear neuron. It is a linear unit with n inputs (vector a_i), n weights (vector x), one output (scalar y_i = a_i^T x), and one training error [scalar δ(t)]. With this architecture, training is considered as supervised, b_i being the target. The same is true for the TLS EXIN neuron (see Section 4.1).
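Equation (5.10) itself falls outside this excerpt, so the following is a minimal sketch only, assuming the learning law x(t+1) = x(t) − α(t) γ(t) [a_i − ζ γ(t) x(t)] with δ(t) = a_i^T x(t) − b_i, a form consistent with eq. (5.11) and with the expanded update in the proof of Proposition 95 below. The helper name `getls_exin` and all parameter values are illustrative, not from the text.

```python
import numpy as np

def getls_exin(A, b, zeta, alpha=0.01, epochs=200, seed=0):
    """Sequential GeTLS EXIN training (a sketch; the law
    x(t+1) = x(t) - alpha*gamma*(a_i - zeta*gamma*x) is ASSUMED here,
    with gamma = delta / ((1 - zeta) + zeta * x^T x) as in eq. (5.11))."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(epochs):
        for i in rng.permutation(m):
            a_i, b_i = A[i], b[i]
            delta = a_i @ x - b_i                          # training error delta(t)
            gamma = delta / ((1 - zeta) + zeta * (x @ x))  # eq. (5.11)
            x -= alpha * gamma * (a_i - zeta * gamma * x)
    return x

# zeta = 0 reduces GeTLS to ordinary least squares, so the trained
# weights should approach the OLS solution on synthetic data.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=200)
x_ols = np.linalg.lstsq(A, b, rcond=None)[0]
x_neuron = getls_exin(A, b, zeta=0.0)
```

For ζ = 0 the denominator in eq. (5.11) is 1 and the update collapses to the LMS rule, which is why the OLS comparison is a natural sanity check.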
Proposition 95 (n-Dimensional GeTLS ODE)  The nth-dimensional ODE of GeTLS EXIN is

\[
\frac{dx}{dt} = \frac{1}{(1-\zeta) + \zeta x^T x}
\left[ -(Rx - r) + \zeta\, \frac{x^T R x - 2 x^T r + s}{(1-\zeta) + \zeta x^T x}\, x \right] \tag{5.12}
\]

where x ∈ ℝ^n, R = E(a_i a_i^T), r = E(b_i a_i), and s = E(b_i^2).
Proof. Replacing eq. (5.11) in eq. (5.10) yields

\[
x(t+1) = x(t) - \frac{\alpha(t)}{\bigl[(1-\zeta) + \zeta x^T(t) x(t)\bigr]^2}
\bigl[ (1-\zeta)\, a_i a_i^T x(t) - (1-\zeta)\, b_i a_i
+ \zeta\, x^T(t) x(t)\, a_i a_i^T x(t)
- \zeta\, b_i\, x^T(t) x(t)\, a_i
- \zeta\, x^T(t) a_i\, a_i^T x(t)\, x(t)
+ 2\zeta\, b_i\, x^T(t) a_i\, x(t)
- \zeta\, b_i^2\, x(t) \bigr]
\]
Defining R = E(a_i a_i^T), r = E(b_i a_i), and s = E(b_i^2), and averaging according to the stochastic approximation theory yields eq. (5.12).
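The averaging step can be checked numerically: with sample moments standing in for R, r, and s, the mean of the per-sample update direction coincides exactly with the right-hand side of eq. (5.12). In this sketch the update form −γ(a_i − ζγx) is assumed (it is the form whose expansion appears in the proof); the function names are illustrative.

```python
import numpy as np

def getls_ode_rhs(x, R, r, s, zeta):
    """Right-hand side of the GeTLS EXIN ODE, eq. (5.12)."""
    D = (1 - zeta) + zeta * (x @ x)
    N = x @ R @ x - 2 * (x @ r) + s
    return (-(R @ x - r) + zeta * N / D * x) / D

def mean_stochastic_step(x, A, b, zeta):
    """Average over samples of the update direction -gamma*(a_i - zeta*gamma*x),
    the ASSUMED form of the learning law, consistent with the proof."""
    deltas = A @ x - b
    D = (1 - zeta) + zeta * (x @ x)
    gammas = deltas / D
    steps = -gammas[:, None] * A + zeta * (gammas**2)[:, None] * x
    return steps.mean(axis=0)

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 4))
b = rng.normal(size=50)
R, r, s = A.T @ A / 50, A.T @ b / 50, b @ b / 50
x = rng.normal(size=4)
for zeta in (0.0, 0.5, 1.0):
    # Averaging the stochastic step reproduces the ODE right-hand side
    assert np.allclose(mean_stochastic_step(x, A, b, zeta),
                       getls_ode_rhs(x, R, r, s, zeta))
```

The agreement is an algebraic identity, not an approximation, because E(δ a_i) = Rx − r and E(δ²) = xᵀRx − 2xᵀr + s hold exactly for the sample moments.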
As a particular case, the nth-dimensional TLS EXIN ODE is

\[
\frac{dx}{dt} = \frac{1}{1 + x^T x}
\left[ -(Rx - r) + \frac{x^T R x - 2 x^T r + s}{1 + x^T x}\, x \right] \tag{5.13}
\]
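Since eq. (5.13) is the gradient flow of the TLS cost, its stable equilibrium should be the TLS solution. The following sketch (with sample moments standing in for R, r, s, and illustrative parameter values) integrates eq. (5.13) by forward Euler and compares the result against the standard SVD-based closed form x = (AᵀA − σ²I)⁻¹Aᵀb, where σ is the smallest singular value of [A; b].

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 100, 3
A = rng.normal(size=(m, n))
b = A @ np.array([0.5, -1.0, 2.0]) + 0.05 * rng.normal(size=m)

# Sample moments playing the role of R, r, s in eq. (5.13)
R, r, s = A.T @ A / m, A.T @ b / m, b @ b / m

def tls_ode_rhs(x):
    """Right-hand side of the TLS EXIN ODE, eq. (5.13)."""
    D = 1 + x @ x
    N = x @ R @ x - 2 * (x @ r) + s
    return (-(R @ x - r) + N / D * x) / D

# Euler integration of the ODE, started from the OLS solution
x = np.linalg.lstsq(A, b, rcond=None)[0]
for _ in range(20000):
    x = x + 0.01 * tls_ode_rhs(x)

# Closed-form TLS solution via the SVD of the augmented matrix [A; b]
sigma = np.linalg.svd(np.column_stack([A, b]), compute_uv=False)[-1]
x_tls = np.linalg.solve(A.T @ A - sigma**2 * np.eye(n), A.T @ b)
```

The equilibrium condition (R − E·I)x = r with E the minimal cost value rearranges to (AᵀA − σ²I)x = Aᵀb, which is why the two solutions coincide.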
5.2.2 Validity of the TLS EXIN ODE
Given ξ(t) = [a_i^T, b_i]^T ∈ ℝ^{n+1} as input to the MCA EXIN (see Section 4.1), its autocorrelation matrix becomes

\[
R' = E\bigl[\xi(t)\xi^T(t)\bigr]
= \begin{bmatrix} R & r \\ r^T & s \end{bmatrix}
\approx \frac{1}{m}\,[A; b]^T [A; b]
= \frac{1}{m} \begin{bmatrix} A^T A & A^T b \\ b^T A & b^T b \end{bmatrix} \tag{5.14}
\]
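Equation (5.14) can be illustrated directly: the sample autocorrelation of the augmented input [a_i; b_i] recovers the block structure [R r; rᵀ s]. A minimal sketch with synthetic data (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 60, 4
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

# Sample moments of the GeTLS data
R = A.T @ A / m          # E(a_i a_i^T)
r = A.T @ b / m          # E(b_i a_i)
s = b @ b / m            # E(b_i^2)

# Autocorrelation matrix of the augmented input xi_i = [a_i; b_i], eq. (5.14)
Ab = np.column_stack([A, b])   # the m x (n+1) matrix [A; b]
R_aug = Ab.T @ Ab / m          # (1/m) [A; b]^T [A; b]

# Block structure [[R, r], [r^T, s]]
blocks = np.block([[R, r[:, None]], [r[None, :], np.array([[s]])]])
```

Here `R_aug` and `blocks` agree entry by entry, which is the content of eq. (5.14) with sample averages in place of expectations.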