Oja's linear neurons have been combined with PCA linear neurons for implementing the dual subspace pattern recognition (DSPR) method [194], in which a pattern class is represented not only by its subspace, whose basis vectors are principal components, but also by its complementary subspace, whose basis vectors are minor components.
2.2.3.2 Luo-Unbehauen-Cichocki Learning Law  Luo et al. [124] propose the following rule (LUO):

$$w(t+1) = w(t) - \alpha(t)\left[w^{T}(t)\,w(t)\,y(t)\,x(t) - y^{2}(t)\,w(t)\right] \qquad (2.25)$$
with the corresponding averaging ODE,
$$\frac{dw(t)}{dt} = -w^{T}(t)\,w(t)\,R\,w(t) + w^{T}(t)\,R\,w(t)\,w(t) \qquad (2.26)$$
together with the following theorem.
Theorem 49 (Asymptotic Stability)  If $w(0)$ satisfies $w^{T}(0)\,z_{n} \neq 0$ and $\lambda_{n}$ is single, then

$$\|w(t)\|_{2}^{2} = \|w(0)\|_{2}^{2} \qquad \forall\, t > 0 \qquad (2.27)$$

$$\lim_{t\to\infty} w(t) = \pm\|w(0)\|_{2}\, z_{n} \qquad (2.28)$$
Proof. See [124, App., pp. 296-297].
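The two claims of Theorem 49 can be checked numerically by integrating the averaged ODE (2.26) with a small Euler step. The sketch below is a minimal NumPy simulation; the covariance $R$, its eigenvalues, the step size, and the iteration count are illustrative choices, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative covariance R with a distinct smallest eigenvalue lambda_n = 0.5
eigvals = np.array([3.0, 2.0, 0.5])
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q @ np.diag(eigvals) @ Q.T
z_n = Q[:, 2]                      # minor component of R

w = rng.standard_normal(3)         # generic w(0), so w(0)^T z_n != 0
norm0 = np.linalg.norm(w)

alpha = 1e-3                       # small step approximating the ODE flow
for _ in range(20000):
    Rw = R @ w
    # Euler step of (2.26): dw/dt = -(w^T w) R w + (w^T R w) w
    w = w - alpha * ((w @ w) * Rw - (w @ Rw) * w)

cos = abs(w @ z_n) / np.linalg.norm(w)          # alignment with z_n, eq. (2.28)
drift = abs(np.linalg.norm(w) - norm0) / norm0  # norm invariance, eq. (2.27)
print(cos, drift)
```

Norm invariance holds because the right-hand side of (2.26) is orthogonal to $w(t)$, so the discrete Euler iteration preserves the norm up to $O(\alpha^{2})$ per step.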
In [124, App., pp. 296-297], Luo et al. claim eq. (2.27) as a new discovery and demonstrate its validity even for OJAn, unaware that this equation was already well known and had been derived earlier by one of the authors of OJAn [142].
In [18] an MCA algorithm is presented; however, it coincides with the LUO algorithm (see [123]).
2.2.3.3 Feng-Bao-Jiao Learning Law  In [56] the following rule (FENG or FENG1$^{3}$) is introduced:

$$w(t+1) = w(t) - \alpha(t)\left[w^{T}(t)\,w(t)\,y(t)\,x(t) - w(t)\right] \qquad (2.29)$$
The corresponding averaging ODE is given by

$$\frac{dw(t)}{dt} = -w^{T}(t)\,w(t)\,R\,w(t) + w(t) \qquad (2.30)$$
3 Another rule is also given in the paper, but it does not work as well.
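An equilibrium of (2.30) satisfies $w^{T}w\,Rw = w$, so $w$ must be an eigenvector of $R$ whose associated eigenvalue $\lambda$ fixes the norm as $\|w\|^{2} = 1/\lambda$; the stable equilibrium lies along the minor component, giving $\|w\|^{2} = 1/\lambda_{n}$. The sketch below checks this numerically with the same kind of illustrative covariance as before (eigenvalues, step size, and iteration count are arbitrary assumptions for the demonstration).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative covariance with smallest eigenvalue lambda_n = 0.5
eigvals = np.array([3.0, 2.0, 0.5])
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q @ np.diag(eigvals) @ Q.T
z_n = Q[:, 2]                      # minor component of R

w = rng.standard_normal(3)         # generic initial weight vector
alpha = 1e-3
for _ in range(40000):
    # Euler step of (2.30): dw/dt = -(w^T w) R w + w
    w = w - alpha * ((w @ w) * (R @ w) - w)

cos = abs(w @ z_n) / np.linalg.norm(w)
sq_norm = w @ w                    # equilibrium requires ||w||^2 = 1/lambda_n = 2
print(cos, sq_norm)
```

Note the contrast with LUO: the FENG iteration does not preserve the initial norm; instead the norm is determined by the data, settling at $1/\lambda_{n}$ regardless of $w(0)$.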