Figure 2.15 Line fitting for noise variance equal to 0.1: plot of the index parameter ρ (expressed in decibels) for the OJA, OJAn, LUO, and MCA EXIN learning laws. The values are averaged using a temporal mask with width equal to 500 iterations, except in the first 500 iterations, in which the mask is equal to the maximum number of iterations. (See insert for color representation of the figure.)
Figure 2.15 shows the plot of the index parameter. The accuracy is very good and, in particular, MCA EXIN has the best ρ for most of the iterations (recall that the initial weight norm is low, which is the best choice, as a consequence of Remark 64). The figure also shows its larger fluctuations. As anticipated by the theory, LUO has the worst behavior. Figure 2.16 shows the same plot for a higher level of noise (variance equal to 2.5). Evidently, the accuracy is lower, but the analysis of the MCA neuron properties remains valid. Figure 2.17 shows the first iterations: MCA EXIN is the fastest and LUO is the slowest algorithm.
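The index parameter in these plots can be checked numerically. Below is a minimal sketch, assuming (as an illustration, not the book's exact definition) that ρ(t) is the squared sine of the angle between the weight vector w(t) and the true minor eigenvector of R, expressed in decibels, and reading the caption's averaging mask as a window that grows from the start during the first 500 iterations; the helper names rho_db and smooth are hypothetical.

```python
import numpy as np

def rho_db(w, v_min):
    """Direction-error index in dB: squared sine of the angle between
    the weight w and the true minor eigenvector v_min (an assumed form
    of the index, not necessarily the book's exact definition)."""
    c = w @ v_min / (np.linalg.norm(w) * np.linalg.norm(v_min))
    sin2 = max(1.0 - c * c, 1e-300)  # avoid log10(0) at exact convergence
    return 10.0 * np.log10(sin2)

def smooth(values, width=500):
    """Temporal averaging mask of width 500 iterations; during the first
    `width` iterations the window simply grows from the start (one
    plausible reading of the caption)."""
    values = np.asarray(values, dtype=float)
    return np.array([values[max(0, t - width + 1):t + 1].mean()
                     for t in range(len(values))])
```

Applying smooth to the per-iteration rho_db values reproduces the kind of averaged curves shown in Figures 2.15 to 2.17.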
The following simulations use, as data, a zero-mean Gaussian random vector x(t) generated from an autocorrelation matrix R whose spectrum is chosen in advance. The goal of this approach is to analyze the behavior of the MCA laws with respect to the dimensionality n of the data and the conditioning of R. In the first group of simulations, the components of the initial weight vector are chosen randomly in [0, 1]. λ_n is always equal to 1. The other eigenvalues are given by the law λ_i = n/i; then the condition number κ₂(R) = λ₁/λ_n = n increases with n, but R always remains a well-conditioned matrix.
Table 2.3 shows, for the four MCA laws, the best results^15 in terms of total cost in flops, obtained for each value of n. Except for EXIN, all the other laws diverge for low values of n: from OJA, which diverges for n as low as 7, to LUO, which diverges for n = 10. This problem can be explained by the choice of initial conditions: as the number of components increases, the initial weight modulus increases (for components drawn uniformly in [0, 1], the expected squared modulus is n/3) and quickly becomes greater than σ = 0.
15. For each value of n, several experiments were performed by changing the learning rate (initial and final values, monotonic decreasing law), and only the best result for each MCA law is reported.