Table 3.4  PCA learning laws

         PCA OJA    PCA OJAn         PCA LUO        PCA EXIN
  a_1    1          1                ‖w(t)‖^2       1/‖w(t)‖^2
  a_2    1          1/‖w(t)‖^2      1              1/‖w(t)‖^4
Indeed, the formulation of the PCA and PSA EXIN learning laws (here PSA is understood as finding the principal components) derives from the corresponding equations for MCA and MSA simply by reversing a sign. As a consequence, all corresponding theorems are also valid here. The same can be said for OJA, OJAn, and LUO, even though historically they were originally conceived for PCA. According to the general analysis for PCA in [121, p. 229], all these PCA algorithms can be represented by the general form
w(t + 1) = w(t) + a_1 α(t) y(t) x(t) - a_2 α(t) y^2(t) w(t)        (3.33)

and the particular cases are given in Table 3.4. The corresponding statistically averaged differential equation is the following:

dw(t)/dt = a_1 R w(t) - a_2 [w^T(t) R w(t)] w(t)        (3.34)
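As an illustration, the general update (3.33) can be sketched in a few lines of code, with the pair (a_1, a_2) from Table 3.4 selecting the particular law. The helper name pca_step, the toy data, and the step size below are illustrative assumptions, not from the text:

```python
import numpy as np

def pca_step(w, x, alpha, law="OJA"):
    """One iteration of the general PCA update (3.33):
    w <- w + a1*alpha*y*x - a2*alpha*y^2*w, with y = w^T x.
    The coefficients a1, a2 follow Table 3.4."""
    y = w @ x
    n2 = w @ w  # ||w(t)||^2
    if law == "OJA":
        a1, a2 = 1.0, 1.0
    elif law == "OJAn":
        a1, a2 = 1.0, 1.0 / n2
    elif law == "LUO":
        a1, a2 = n2, 1.0
    elif law == "EXIN":
        a1, a2 = 1.0 / n2, 1.0 / n2**2
    else:
        raise ValueError(law)
    return w + a1 * alpha * y * x - a2 * alpha * y**2 * w

# Toy experiment (illustrative): zero-mean data whose principal
# direction is the first coordinate axis (std devs 3 > 1 > 0.5).
rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 3)) * np.array([3.0, 1.0, 0.5])
w = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
for x in X:
    w = pca_step(w, x, alpha=1e-3, law="OJA")
print(np.abs(w) / np.linalg.norm(w))  # close to [1, 0, 0]
```

For PCA OJA, a_1 = a_2 = 1, so the weight direction aligns with the principal eigenvector while the weight norm settles near √(a_1/a_2) = 1 in this run.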
From the point of view of convergence, to a first approximation it holds that

lim_{t→∞} w(t) = √(a_1/a_2) z_1        (3.35)

which is valid under the condition w^T(0) z_1 ≠ 0.
Regarding the divergence analysis, eq. (2.105) still holds exactly for PCA. Hence, all its consequences remain valid; in particular, PCA LUO diverges at the finite time given by eq. (2.111). On the contrary, eq. (2.116) is no longer valid; for PCA OJA it must be replaced by

dp/dt = +2 λ_min (1 - p) p,    p(0) = p_0        (3.36)
whose solution is given by

p(t) = 1 / [1 + ((1 - p_0)/p_0) e^{-2 λ_min t}]        (3.37)
Hence,

lim_{t→∞} p(t) = 1        (3.38)
and neither divergence nor sudden divergence happens.
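As a quick numerical check (the values of λ_min and p_0 below are illustrative assumptions, not from the text), the closed-form solution (3.37) can be verified against the logistic differential equation (3.36) and its limit (3.38):

```python
import numpy as np

lam_min, p0 = 0.5, 0.1  # illustrative parameter values
t = np.linspace(0.0, 20.0, 2001)

# Closed-form solution (3.37) of the logistic equation (3.36).
p = 1.0 / (1.0 + (1.0 - p0) / p0 * np.exp(-2.0 * lam_min * t))

# Check (3.36): dp/dt = 2*lam_min*(1 - p)*p, via finite differences.
dpdt = np.gradient(p, t)
rhs = 2.0 * lam_min * (1.0 - p) * p
print(np.max(np.abs(dpdt - rhs)))  # small discretization error
print(p[0], p[-1])  # p(0) = p0, and p(t) approaches 1 as t grows
```

The residual is only finite-difference error, and p(t) rises monotonically from p_0 toward 1, consistent with the absence of divergence noted above.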