Theorem 97 (DLS EXIN Convergence) The DLS EXIN learning law, expressing the minimization of the error cost (5.21), converges, under the usual stochastic approximation assumptions, to the DLS solution [i.e., eq. (1.55)].
Proof. Express eq. (5.24) as
$$\frac{dx}{dt} = \frac{1}{\left( x^T x \right)^2}\, Q\!\left( x, R, r, \sigma^2 \right) \qquad (5.25)$$

where

$$Q\!\left( x, R, r, \sigma^2 \right) = -\left( Rx - r \right) x^T x + \left( x^T R x \right) x - 2 \left( x^T r \right) x + \sigma^2 x \qquad (5.26)$$
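As a quick numerical illustration (a minimal sketch with assumed random test data A and b, not the book's code), the averaged flow (5.25) can be integrated by forward Euler; since (5.25) is a gradient flow of the DLS cost, the cost $(Ax - b)^T (Ax - b) / (x^T x)$ should decrease along the trajectory. Here $\sigma^2$ is taken as $b^T b$, consistent with eq. (5.30) below; the step size and iteration count are arbitrary choices.

```python
import numpy as np

# Minimal sketch (assumed test data, not from the book): integrate the
# averaged DLS EXIN flow (5.25) with forward Euler and watch the DLS cost
# (Ax - b)^T (Ax - b) / (x^T x) decrease along the trajectory.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4))
b = rng.standard_normal(50)
R, r, sigma2 = A.T @ A, A.T @ b, b @ b        # sigma2 = b^T b (cf. eq. (5.30))

def Q(x):
    # Eq. (5.26): the numerator of the right-hand side of eq. (5.25)
    return (-(R @ x - r) * (x @ x) + (x @ R @ x) * x
            - 2 * (x @ r) * x + sigma2 * x)

def dls_cost(x):
    e = A @ x - b
    return (e @ e) / (x @ x)

x = rng.standard_normal(4)                    # random nonzero start
print("initial cost:", dls_cost(x))
for _ in range(200_000):
    x = x + 1e-4 * Q(x) / (x @ x) ** 2        # Euler step of eq. (5.25)
print("final cost:", dls_cost(x))             # should be much smaller
```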
The critical points are given by $Q\!\left( x, R, r, \sigma^2 \right) = 0$, which, under the transformation

$$x = \sigma^2 \left( r^T v \right)^{-1} v \qquad (5.27)$$

becomes

$$Q\!\left( v, R, r, \sigma^2 \right) = 0 \qquad (5.28)$$
which implies that
$$-\frac{\sigma^6 \left( v^T v \right) R v}{\left( r^T v \right)^3} + \frac{\sigma^4 \left( v^T v \right) r}{\left( r^T v \right)^2} + \frac{\sigma^6 \left( v^T R v \right) v}{\left( r^T v \right)^3} - \frac{\sigma^4\, v}{r^T v} = 0$$
After some algebra,
$$\left( R - \frac{r r^T}{\sigma^2} \right) v = \underbrace{\frac{v^T \left( R - \frac{r r^T}{\sigma^2} \right) v}{v^T v}}_{\text{Rayleigh quotient}}\, v \qquad (5.29)$$

Hence, the critical points $v_c$ are the eigenvectors of the matrix $R - r r^T / \sigma^2$.
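This claim is easy to probe numerically (reusing the assumed data and the function Q from the sketch above): every eigenvector of $R - r r^T / \sigma^2$, pushed through the transformation (5.27), should be a zero of Q.

```python
# Reusing A, b, R, r, sigma2, Q from the sketch above: each eigenvector of
# R - r r^T / sigma2, mapped through eq. (5.27), is a critical point of (5.25).
import numpy as np

w, V = np.linalg.eigh(R - np.outer(r, r) / sigma2)
for v in V.T:                        # orthonormal eigenvectors (columns of V)
    x_c = sigma2 * v / (r @ v)       # eq. (5.27); assumes r^T v != 0
    print(np.linalg.norm(Q(x_c)))    # ~ 0 up to floating-point rounding
```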
Following the proof of Theorem 60, it is easy to show that in the transformed
space, under the usual assumptions, the DLS EXIN learning law converges to
the eigenvector $v_{\min}$ associated with the minimum eigenvalue. Using eq. (5.14), it holds that, on average,
$$R - \frac{r r^T}{\sigma^2} = A^T A - \frac{A^T b\, b^T A}{b^T b} = A^T \left[ I - b \left( b^T b \right)^{-1} b^T \right] A \qquad (5.30)$$

The DLS EXIN learning law therefore converges, on average, to the minimum eigenvector of $A^T \left[ I - b \left( b^T b \right)^{-1} b^T \right] A$, and transforming back through eq. (5.27) yields $x = \sigma^2 v_{\min} / \left( r^T v_{\min} \right)$, which is the DLS solution of eq. (1.55).
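To close the loop numerically (same assumed data as above): eq. (5.30) can be checked directly, and the minimum eigenvector of $A^T \left[ I - b (b^T b)^{-1} b^T \right] A$, mapped back through (5.27), should agree with the limit x of the Euler iteration in the first sketch.

```python
# Same assumed data as above: check eq. (5.30) and recover the DLS solution
# from the minimum eigenvector of A^T (I - b (b^T b)^{-1} b^T) A.
import numpy as np

P = np.eye(len(b)) - np.outer(b, b) / (b @ b)       # projector orthogonal to b
M = A.T @ P @ A
print(np.allclose(M, R - np.outer(r, r) / sigma2))  # eq. (5.30): True

w, V = np.linalg.eigh(M)                  # eigenvalues in ascending order
v_min = V[:, 0]
x_dls = sigma2 * v_min / (r @ v_min)      # back through eq. (5.27); cf. eq. (1.55)
print(np.linalg.norm(x - x_dls) / np.linalg.norm(x_dls))  # small if the Euler run converged
```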