where λ_min = 0.1235 < 1. For simplicity, α(t) = const = 0.01. Every 100 steps, an outlier is presented. It is generated with the autocorrelation matrix

\[
R = \begin{bmatrix}
\sigma^2 & 0 & 0 \\
0 & \sigma^2 & 0 \\
0 & 0 & \sigma^2
\end{bmatrix}
\qquad (3.15)
\]
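As a concrete illustration of the outlier model, the following is a minimal sketch of a generator consistent with Eq. (3.15): since the autocorrelation matrix is σ²I₃, a zero-mean Gaussian with covariance σ²I₃ is one natural choice (zero mean and Gaussianity are assumptions here; the text fixes only the autocorrelation matrix). The period of 100 steps and σ² = 100 are taken from the experiment described below; the function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

SIGMA2 = 100.0        # sigma^2 = 100, as in the experiment below
OUTLIER_PERIOD = 100  # an outlier is presented every 100 steps

def outlier(dim=3, sigma2=SIGMA2):
    """One outlier with autocorrelation matrix sigma^2 * I (Eq. 3.15).
    Zero mean is an assumption; the text fixes only the autocorrelation."""
    return rng.normal(scale=np.sqrt(sigma2), size=dim)

def input_at_step(t, nominal_sampler, dim=3):
    """Training input at step t: an outlier every 100 steps, otherwise a
    sample drawn from the nominal data distribution."""
    if t > 0 and t % OUTLIER_PERIOD == 0:
        return outlier(dim)
    return nominal_sampler()
```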
The norm of the difference between the computed weight and the right MC unit eigenvector is called ε_w and represents the accuracy of the neuron. After 10 trials under the same conditions (initial conditions at [0.1, 0.1, 0.1]^T, α(t) = const = 0.01) and for σ² = 100, NMCA EXIN yields, on the average, after 10,000 iterations, λ = 0.1265 and ε_w = 0.1070 (the best result yields λ = 0.1246 and ε_w = 0.0760). A plot of the weights is given in Figure 3.1. Observe the peaks caused
by the outliers and the following damping. Figure 3.2 shows the corresponding
behavior of the squared weight norm, which is similar to the MCA EXIN dynamic
behavior. In the same simulation for σ² = 100, NOJA+ yields, on the average, after 10,000 iterations, λ = 0.1420 and ε_w = 0.2168. This poorer accuracy is
caused by its learning law, which is an approximation of the true Rayleigh quotient gradient. In the presence of strong outliers, this simplification is less valid.
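To make this distinction concrete, the sketch below evaluates the exact Rayleigh quotient and its gradient, the quantity that the EXIN-type rules descend and that NOJA+ only approximates, together with the accuracy measure ε_w defined above. The sign alignment in accuracy() is an added convention (eigenvectors are defined only up to sign) and is not stated in the text.

```python
import numpy as np

def rayleigh_quotient(w, R):
    """lambda = (w^T R w) / (w^T w), the quantity reported as lambda."""
    return float(w @ R @ w) / float(w @ w)

def rq_gradient(w, R):
    """Exact Rayleigh-quotient gradient: (2 / w^T w) * (R w - lambda * w).
    NOJA+ descends an approximation of this; EXIN-type rules the true form."""
    n2 = float(w @ w)
    lam = float(w @ R @ w) / n2
    return (2.0 / n2) * (R @ w - lam * w)

def accuracy(w, v_min):
    """epsilon_w: norm of the difference between the computed weight and the
    unit minor-component eigenvector v_min (sign-aligned by convention)."""
    v = np.sign(float(w @ v_min)) * v_min
    return float(np.linalg.norm(w - v))
```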
In fact, for lower σ², the two neurons have nearly the same accuracy. Figure 3.3 shows the sudden divergence phenomenon for the robust version of LUO. Here the experimental conditions are the same, but the initial conditions are the exact solution: it shows that the solution is not stable even in the robust version.
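For completeness, a minimal driver for the protocol above (10 trials, w(0) = [0.1, 0.1, 0.1]^T, α(t) = 0.01, 10,000 iterations, an outlier every 100 steps), reusing the helpers sketched earlier. The update shown is plain stochastic descent of the Rayleigh-quotient gradient on the instantaneous estimate x xᵀ; it is a generic stand-in, not the actual NMCA EXIN or robust LUO law. Tracking the squared weight norm mirrors the behavior plotted in Figures 3.2 and 3.3.

```python
def run_trial(R_nominal, nominal_sampler, v_min, alpha=0.01, n_steps=10_000):
    """One trial: stochastic Rayleigh-quotient descent with periodic outliers.
    The update is a stand-in; the actual NMCA EXIN law differs."""
    w = np.array([0.1, 0.1, 0.1])   # initial conditions from the text
    sq_norms = []                   # squared weight norm, cf. Figs. 3.2-3.3
    for t in range(1, n_steps + 1):
        x = input_at_step(t, nominal_sampler)
        w = w - alpha * rq_gradient(w, np.outer(x, x))
        sq_norms.append(float(w @ w))
    return rayleigh_quotient(w, R_nominal), accuracy(w, v_min), sq_norms

# Average lambda and epsilon_w over 10 trials, as in the text:
# results = [run_trial(R, sampler, v_min) for _ in range(10)]
```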
Figure 3.1 NMCA EXIN weights in the presence of outliers.