eigenvalues are 0, 1, and 2. The initial weight components have been chosen
randomly in the interval [0, 1]. The OJA learning law (2.16) can be interpreted
in the following way:
\[
\mathbf{w}(t+1) = \mathbf{w}(t) - \underbrace{\alpha(t)\, y(t)\, \mathbf{x}(t)}_{\text{anti-Hebbian}} + \alpha(t)\, y^{2}(t)\, \mathbf{w}(t) \tag{2.127}
\]
and compared with the FENG learning law (2.29):
\[
\mathbf{w}(t+1) = \mathbf{w}(t) - \alpha(t)\, \|\mathbf{w}(t)\|^{2}\, y(t)\, \mathbf{x}(t) + \alpha(t)\, \mathbf{w}(t) \tag{2.128}
\]
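To make the comparison concrete, here is a minimal Python sketch of a single update under each rule, written directly from (2.127) and (2.128); the function names and the NumPy setting are illustrative assumptions, not the book's notation:

import numpy as np

def oja_mca_step(w, x, alpha):
    """One OJA MCA update (2.127): the anti-Hebbian term -alpha*y*x
    plus the stabilizing term +alpha*y^2*w."""
    y = w @ x
    return w - alpha * y * x + alpha * y**2 * w

def feng_step(w, x, alpha):
    """One FENG update (2.128): the anti-Hebbian term is scaled by
    ||w||^2, and the added term +alpha*w lacks the y^2 factor."""
    y = w @ x
    return w - alpha * (w @ w) * y * x + alpha * w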
In approaching convergence, (2.31) holds, so $\|\mathbf{w}(t)\|^{2} \to 1/\lambda_{n}$. Hence, the FENG law becomes
\[
\mathbf{w}(t+1) \approx \mathbf{w}(t) - \underbrace{\gamma(t)\, y(t)\, \mathbf{x}(t)}_{\text{anti-Hebbian}} + \lambda_{n}\, \gamma(t)\, \mathbf{w}(t) \tag{2.129}
\]
where $\gamma(t) = \alpha(t)/\lambda_{n}$. With regard to OJA, the term $y^{2}(t)$ does not appear. According to Oja [138], this term implies a normalization of the weight vector and has a stabilizing effect on the anti-Hebbian law. Indeed, for small $\gamma$,
\[
\mathbf{w}(t+1) = \frac{1}{1-\gamma}\left[\mathbf{w}(t) - \gamma\, y(t)\, \mathbf{x}(t)\right] = \mathbf{w}(t) - \gamma\, y(t)\, \mathbf{x}(t) + \gamma\, \mathbf{w}(t) + O(\gamma^{2}) \tag{2.130}
\]
which means that the FENG learning law, for which $0 \leq \gamma < 1$, amplifies the anti-Hebbian law by the factor $1/(1-\gamma) > 1$, and this effect is most relevant just as the weight vector approaches the MC direction. From (2.130), the relative error of the FENG estimate can be deduced:
\[
\varepsilon_{\mathrm{FENG}} = \frac{\|\mathbf{w}(t+1) - \mathbf{w}(t)\|}{\|\mathbf{w}(t)\|} = \gamma\, \frac{\|\mathbf{w}(t+1)\|}{\|\mathbf{w}(t)\|} \tag{2.131}
\]
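This reading of (2.131) follows from (2.130) under the assumption that the anti-Hebbian fluctuation is negligible near convergence: rewriting (2.130) as
\[
(1-\gamma)\, \mathbf{w}(t+1) = \mathbf{w}(t) - \gamma\, y(t)\, \mathbf{x}(t) + O(\gamma^{2})
\quad \Longrightarrow \quad
\mathbf{w}(t+1) - \mathbf{w}(t) = \gamma\, \mathbf{w}(t+1) - \gamma\, y(t)\, \mathbf{x}(t) + O(\gamma^{2}),
\]
and dropping the $\gamma\, y(t)\, \mathbf{x}(t)$ term gives $\|\mathbf{w}(t+1) - \mathbf{w}(t)\| \approx \gamma\, \|\mathbf{w}(t+1)\|$.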
Then, decreasing $\gamma$, the oscillations around the MC direction decay. Indeed, the term $y^{2}(t)$ in (2.127) tends, on average, to $\mathbf{w}^{T}\mathbf{x}\mathbf{x}^{T}\mathbf{w} = \mathbf{w}^{T}R\,\mathbf{w}$. Very near convergence, $\mathbf{w}^{T}R\,\mathbf{w} \to \lambda_{n}\, \mathbf{w}^{T}\mathbf{w}$. If $\mathbf{w}^{T}\mathbf{w} \to 1/\lambda_{n}$, as for FENG [see eq. (2.31)], it can be stated that $y^{2} \to 1$, so after a certain number of iterations, in general very large, the anti-Hebbian rule becomes as constrained as in OJA. This phenomenon is apparent in Figure 2.11, which shows the last iterations of the example of Figure 2.10.
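The limit $y^{2} \to 1$ can be checked numerically. The sketch below is a minimal FENG simulation under assumed settings (a toy autocorrelation matrix with eigenvalues 2, 1, and 0.5, chosen so that $\lambda_{n} > 0$ and $\|\mathbf{w}\|^{2} \to 1/\lambda_{n}$ stays finite, and a $1/t$-type step-size schedule); it is not the book's experiment of Figures 2.10 and 2.11:

import numpy as np

rng = np.random.default_rng(0)

# Assumed toy spectrum: lambda_n = 0.5 > 0, so ||w||^2 -> 1/lambda_n = 2.
eigvals = np.array([2.0, 1.0, 0.5])
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthonormal basis
R = Q @ np.diag(eigvals) @ Q.T                     # autocorrelation matrix
L = np.linalg.cholesky(R)                          # to sample x with E[xx^T] = R

w = rng.uniform(0.0, 1.0, 3)        # initial weights in [0, 1], as in the text
for t in range(1, 200_001):
    x = L @ rng.standard_normal(3)
    y = w @ x
    alpha = 1.0 / (100.0 + t)       # assumed decaying learning rate
    w = w - alpha * ((w @ w) * y * x - w)    # FENG rule (2.128)

print("||w||^2       =", w @ w)       # approaches 1/lambda_n = 2
print("E[y^2] = w'Rw =", w @ R @ w)   # approaches lambda_n/lambda_n = 1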
Theorem 71 (Time Constant) In the limit of the FENG ODE approximation (2.30), that is, in the converging phase to the minor component, the decay time constant is the same as for LUO and, close to convergence, depends only on the spectrum of the autocorrelation matrix.