In reality, as for OJA, the second condition is contained in the first. Indeed,
considering the case 0 ≤ γ < 1, it holds that
cos² ϑ_xw ≥ 1/(2p) + (1/(p ‖x(t)‖²)) (1 − 1/(γ p ‖x(t)‖²))        (2.179)
which is more restrictive than (2.178). Figure 2.13 shows this condition through the limit angle σ (the arccosine of the square root of the bound). This angle is proportional to 1/(2p) (as for OJA), so an increasing weight modulus (as happens when the weight approaches the MC direction for λ_n > 1) better respects the condition. A decrease in γ has a similar positive effect.
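To make the roles of p = ‖w‖² and of the angle ϑ_xw concrete, the following sketch (Python with NumPy, purely illustrative) evaluates cos² ϑ_xw against only the dominant 1/(2p) part of the bound for a given weight and input; the function name, the restriction to the 1/(2p) term, and the sample vectors are assumptions made for illustration, not material from the book.

```python
import numpy as np

def stability_margin(w, x):
    """Illustrative check of the dominant 1/(2p) part of the condition:
    returns cos^2(theta_xw) - 1/(2p), with p = ||w||^2.
    A positive margin means the (approximate) condition is respected."""
    p = float(np.dot(w, w))                        # squared weight modulus p
    cos2 = np.dot(x, w) ** 2 / (np.dot(x, x) * p)  # cos^2 of the angle between x and w
    return cos2 - 1.0 / (2.0 * p)

# A larger weight modulus loosens the bound, as noted in the text.
x = np.array([1.0, 0.2, -0.3])
for scale in (0.5, 1.0, 2.0):
    w = scale * np.array([0.9, 0.1, -0.2])
    print(scale, stability_margin(w, x))
```

Since cos² ϑ_xw does not change when w is rescaled while 1/(2p) shrinks, the printed margin grows with the weight modulus, which is exactly the qualitative behavior described above.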
2.7.5 Conclusions
In summary, with respect to the possible instabilities there is a clear distinction
between two groups of learning laws. The first contains LUO, OJAn, and MCA
EXIN and is characterized by the presence of negative instability, which in general
occurs for values of ϑ_xw typical of the transient state. The second contains
OJA, OJA+, and FENG and is characterized by stability in the transient state.
This analysis justifies the following proposition.
Proposition 73 (Bias/Variance) LUO, OJAn, and MCA EXIN are iterative
algorithms with high variance and low bias. However, OJAn and, above all,
LUO require many more iterations to converge than MCA EXIN, because they
have more fluctuations around the MC direction and cannot be stopped earlier
by a stop criterion that, as seen in Section 2.6.2.6, requires the flatness of the
weight time evolution. On the contrary, OJA, OJA+, and FENG are algorithms
with low variance and high bias. However, FENG has larger fluctuations and is
unreliable for near-singular matrices.
Obviously, for all the algorithms, the presence of outliers worsens the dynamic
stability.
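The stop criterion invoked above is described only as requiring flatness of the weight time evolution; the sketch below is one hedged way such a test could look, halting when the weight modulus varies by less than a tolerance over a sliding window. The window length, tolerance, and function name are illustrative assumptions, not the actual criterion of Section 2.6.2.6.

```python
import numpy as np

def is_flat(weight_history, window=50, tol=1e-4):
    """Illustrative flatness test: the relative spread of ||w(t)|| over the
    last `window` iterations must fall below `tol`."""
    if len(weight_history) < window:
        return False
    moduli = np.array([np.linalg.norm(w) for w in weight_history[-window:]])
    return (moduli.max() - moduli.min()) / moduli.mean() < tol

# Inside a generic MCA training loop one would append w to weight_history at
# each iteration and stop as soon as is_flat(weight_history) returns True;
# high-variance laws (LUO, OJAn) reach that point later than MCA EXIN.
```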
Remark 74 (Fluctuations Comparison) When the weight modulus is well below 1, the LUO neuron learning process has the smallest fluctuations ( transient ) compared with OJAn and MCA EXIN ( the worst transient ) . Increasing the weight modulus decreases the differences among the neurons, which vanish on the unit sphere. For larger moduli, LUO remains the neuron with the fewest fluctuations.
Remark 75 (Fluctuations and Weight Moduli) Working with weight moduli
less than 1 yields smaller fluctuations.
2.8 NUMERICAL CONSIDERATIONS
2.8.1 Computational Cost
The MCA learning laws are iterative algorithms with different computational
costs per iteration, which are evaluated here in floating-point operations (flops).
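As an illustration of how a per-iteration cost can be counted, the sketch below performs one MCA EXIN-style update, written here as the stochastic Rayleigh-quotient gradient step w ← w − α (y/p)(x − (y/p)w) with y = xᵀw and p = ‖w‖², and annotates each line with its approximate flop count; the exact form of the update and the bookkeeping conventions are assumptions for illustration and may differ from the tabulation given in the text.

```python
import numpy as np

def mca_exin_step(w, x, alpha):
    """One MCA EXIN-style iteration (illustrative sketch).
    Every line costs O(n) flops, so the update is linear in the dimension n."""
    y = float(np.dot(x, w))          # n multiplications + (n - 1) additions
    p = float(np.dot(w, w))          # n multiplications + (n - 1) additions
    g = (y / p) * (x - (y / p) * w)  # about 2n multiplications + n additions + 2 divisions
    return w - alpha * g             # n multiplications + n additions

# Example usage (random data, for illustration only)
rng = np.random.default_rng(0)
w = rng.normal(size=5)
x = rng.normal(size=5)
w = mca_exin_step(w, x, alpha=0.01)
```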