in [141,186,194]. The corresponding averaging ODE is given by

$$
\frac{d\,w_j(t)}{dt} = -\,w_j^T(t)\,w_j(t)\left[I + \sum_{i>j} w_i(t)\,w_i^T(t)\right] R\,w_j(t) + w_j^T(t)\left[I + \sum_{i>j} w_i(t)\,w_i^T(t)\right] R\,w_j(t)\;w_j(t) \qquad (3.24)
$$

Defining

$$
R_j = \left[I + \sum_{i>j} w_i(t)\,w_i^T(t)\right] R \qquad (3.25a)
$$

$$
\alpha_j(t) = w_j^T(t)\,w_j(t) \qquad (3.25b)
$$

$$
\varphi_j(t) = w_j^T(t)\,R_j\,w_j(t) \qquad (3.25c)
$$

eq. (3.24) becomes

$$
\frac{d\,w_j(t)}{dt} = -\,\alpha_j(t)\,R_j\,w_j(t) + \varphi_j(t)\,w_j(t) \qquad (3.26)
$$
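The substitution of the definitions (3.25a)-(3.25c) into the expanded ODE can be checked numerically: the two right-hand sides below are the same expression written out term by term and in compact form. The matrix R and the weight vectors are arbitrary illustrative choices, not from the source.

```python
import numpy as np

# Sanity check: substituting (3.25a)-(3.25c) into the expanded ODE (3.24)
# yields the compact form (3.26).  R and the vectors are illustrative only.
rng = np.random.default_rng(0)
n = 4

A = rng.standard_normal((n, n))
R = A @ A.T                                  # symmetric "autocorrelation" matrix

w_j = rng.standard_normal(n)                 # vector under adaptation
higher = [rng.standard_normal(n) for _ in range(2)]   # the w_i with i > j

S = sum(np.outer(w_i, w_i) for w_i in higher)

# Right-hand side of (3.24), written out term by term:
rhs_324 = (-(w_j @ w_j) * ((np.eye(n) + S) @ R @ w_j)
           + (w_j @ ((np.eye(n) + S) @ R @ w_j)) * w_j)

# Definitions (3.25a)-(3.25c) and the compact right-hand side of (3.26):
R_j = (np.eye(n) + S) @ R
alpha_j = w_j @ w_j
phi_j = w_j @ R_j @ w_j
rhs_326 = -alpha_j * (R_j @ w_j) + phi_j * w_j

assert np.allclose(rhs_324, rhs_326)
```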
Equation (3.26) is characterized by the invariant norm property (2.27) for $j < N$ (evidently, in a first approximation).
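The norm invariance can be seen directly in the single-unit case ($j = N$, so $R_j = R$): the right-hand side of (3.26) is orthogonal to $w_j$, since $w_j^T(-\alpha_j R_j w_j + \varphi_j w_j) = -\alpha_j \varphi_j + \varphi_j \alpha_j = 0$. A short Euler integration illustrates both the (approximately) constant norm and the convergence of the direction to the minor eigenvector; the matrix R, the initial vector, the step size, and the iteration count below are illustrative choices, not from the source.

```python
import numpy as np

# Euler integration of the single-unit case of (3.26) (j = N, so R_j = R):
#   dw/dt = -(w^T w) R w + (w^T R w) w
# R, the start vector, dt, and the step count are illustrative only.
R = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 0.5]])

w = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # unit-norm start
dt = 0.01
for _ in range(6000):
    w = w + dt * (-(w @ w) * (R @ w) + (w @ R @ w) * w)

# Norm stays (approximately) invariant: dw/dt is orthogonal to w,
# so only the O(dt^2) Euler error perturbs it.
final_norm = np.linalg.norm(w)

# The direction converges to the minor eigenvector of R.
eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
z_min = eigvecs[:, 0]
overlap = abs(w @ z_min)               # close to 1 after convergence
```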
It is instructive to summarize the convergence theorems for MSA LUO. In
[122], the following theorem appears.
Theorem 84 (MSA LUO Convergence 1) If the initial values of the weight vectors satisfy $w_j^T(0)\,z_j \neq 0$ for $j = N, N-1, \ldots, N-M+1$, then

$$
\operatorname{span}\left\{ w_{N-M+1}(f),\, w_{N-M+2}(f),\, \ldots,\, w_N(f) \right\} = \operatorname{span}\left\{ z_{N-M+1},\, z_{N-M+2},\, \ldots,\, z_N \right\} \qquad (3.27)
$$

where $w_j(f) = \lim_{t \to \infty} w_j(t)$. In other words, MSA LUO finds a basis for the
minor subspace, but not the eigenvector basis.
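The subspace statement (3.27) can be illustrated numerically for N = 3, M = 2: first integrate the flow (3.26) for $w_3$ (with $R_3 = R$), then for $w_2$ with the deflated matrix $R_2 = [I + w_3 w_3^T]R$, and compare the span of the results with the true minor subspace through orthogonal projectors. The sequential (rather than fully coupled) integration and the diagonal R are simplifications chosen for transparency; they are not from the source.

```python
import numpy as np

# Illustrative check of (3.27) with N = 3, M = 2.  R is chosen diagonal so
# the minor subspace is span{e_2, e_3}; all numerical choices are illustrative.
R = np.diag([3.0, 1.5, 1.0])
dt, steps = 0.01, 6000

def integrate(R_j, w):
    """Euler integration of dw/dt = -(w^T w) R_j w + (w^T R_j w) w."""
    for _ in range(steps):
        w = w + dt * (-(w @ w) * (R_j @ w) + (w @ R_j @ w) * w)
    return w

w3 = integrate(R, np.array([0.5, 0.6, 0.4]))          # j = N: R_N = R
R2 = (np.eye(3) + np.outer(w3, w3)) @ R               # deflation (3.25a)
w2 = integrate(R2, np.array([0.4, 0.7, 0.3]))         # j = N - 1

# Compare span{w2, w3} with the true minor subspace via projectors.
Q, _ = np.linalg.qr(np.column_stack([w2, w3]))
P = Q @ Q.T
P_true = np.diag([0.0, 1.0, 1.0])
projector_error = np.abs(P - P_true).max()            # small after convergence
```

Note that the individual vectors need not converge to the eigenvectors themselves; only the projector comparison, which depends on the span alone, is guaranteed by the theorem.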
The proof of this theorem exploits the analogy of the MSA LUO learning law with the PSA LUO learning law cited in Section 3.3.3 and uses the corresponding convergence proof. Considering that only the part of the demonstration concerning the orthogonality of the converged weight vectors to the remaining nonminor eigenvectors can be applied to MSA, the authors cannot demonstrate convergence to the minor components themselves. Nevertheless, in [121], the following theorem appears.