Figure 5.22 Line fitting without preprocessing for a noise variance of 0.5: plot of the index parameter (expressed in decibels). The values are averaged using a temporal mask whose width equals the number of iterations, up to a maximum of 500. (See insert for color representation of the figure.)
As shown in Section 2.7, large fluctuations of the weights imply that the learning law increases the estimation error; when this increase is too large, it makes the weight vector deviate drastically from normal learning, which may result in divergence or an increased learning time. This is a serious problem for MCA EXIN (see Section 74) when the initial conditions are infinitesimal. On the other hand, MCA EXIN converges faster for smaller initial conditions (Theorem 62).
Recalling these observations and Remark 125, the improvements of MCA EXIN + with respect to MCA EXIN are:
- Smoother dynamics, because the weight path in every plane z_i z_j remains near the hyperbola branch containing the solution locus
- Faster convergence, because of the smaller DLS scheduling fluctuations, which reduce the settling time
- Better accuracy, because of the small deviations from the solution
The next simulations deal with the benchmark in [195] considered in Sections 2.10 and 5.5.3. MCA EXIN + uses the same normalized training set as the MCA neurons. The first simulation uses a learning rate α(t) such that α(0) = 0.01; it is then reduced linearly to 0.001 over the first 500 iterations and afterward kept constant. The initial conditions are [0.1, 0.1]^T. The additive noise is Gaussian with σ² = 0.5. The scheduling is linear from ζ(0) = 0 to ζ(5000) = 1. Figures 5.23 and 5.24 show, respectively, the plot of the index parameter (expressed in decibels) and
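The learning-rate and scheduling profiles used in this simulation can be sketched as follows (a minimal illustration; the function names and the piecewise-linear form are assumptions based on the description above, not the book's code):

```python
def alpha(t, a0=0.01, a1=0.001, t1=500):
    """Learning rate: linear decay from a0 to a1 over the first t1
    iterations, then held constant at a1 (as described in the text)."""
    if t >= t1:
        return a1
    return a0 + (a1 - a0) * t / t1

def zeta(t, t_end=5000):
    """DLS scheduling parameter: linear ramp from zeta(0) = 0 to
    zeta(t_end) = 1, then held at 1."""
    return min(t / t_end, 1.0)
```

At each iteration t, the update would use alpha(t) as the step size while zeta(t) sweeps the cost from the MCA to the DLS regime, which is the scheduling mechanism MCA EXIN + relies on.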