OJA+. It behaves exactly like OJA because, owing to the problem of the empty space for increasing n (it is not an exact RQ gradient), its weight modulus does not converge to 1.
FENG. It is always unstable, although for higher n the instability appears later because of the advantage of starting with lower initial conditions. For high n, the phenomenon of numerical divergence is also possible.
Other simulations can be found in [24].
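To make these remarks concrete, the following is a minimal simulation sketch in the spirit of the experiments summarized above (it is not taken from [24]): it iterates the anti-Hebbian Oja (OJA) update w ← w − αy(x − yw), with y = wᵀx, on synthetic data and tracks the squared weight modulus and the alignment with the minor component. The dimension, data model, learning rate, and initial conditions are illustrative choices only.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data: the last coordinate axis has the smallest
# variance, so it is the true minor component (MC) direction.
n = 10                                # input dimension (illustrative)
scales = np.linspace(3.0, 0.3, n)     # decreasing standard deviations
X = rng.standard_normal((20000, n)) * scales

alpha = 1e-3                          # learning rate (illustrative)
w = 0.1 * rng.standard_normal(n)      # low-modulus initial conditions

modulus = []
for x in X:
    y = w @ x
    w = w - alpha * y * (x - y * w)   # anti-Hebbian Oja (OJA) update
    modulus.append(w @ w)

cos_mc = abs(w[-1]) / np.linalg.norm(w)
print(f"|cos| with the true MC direction: {cos_mc:.3f}")
print(f"squared weight modulus, start -> end: {modulus[0]:.4f} -> {modulus[-1]:.4f}")

Monitoring the squared weight modulus in this way is what exposes the convergence to 0 or 1, the slow drift, or the (sudden) divergence that distinguish the laws compared in this chapter; swapping in another learning rule changes only the update line.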
2.11 CONCLUSIONS
Minor component analysis is becoming more and more important, not only in signal processing but, above all, in data analysis (orthogonal regression, TLS). Important and very promising applications in computer vision (estimation of the parameters of the essential matrix in structure-from-motion problems) can be found in [24].
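As a reminder of how the minor component enters orthogonal regression, here is a minimal batch TLS sketch that uses a plain eigendecomposition of the sample covariance instead of a neural learning law; the function name and the test data are illustrative only. The unit normal of the best orthogonal-fit line is the eigenvector associated with the smallest eigenvalue of the centered data covariance, that is, the minor component.

import numpy as np

def tls_line_fit(points):
    # Orthogonal (TLS) fit of a line a*x + b*y + c = 0 to 2-D points:
    # the normal (a, b) is the minor component of the centered covariance.
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # minor component
    a, b = normal
    c = -normal @ centroid
    return a, b, c

# Noisy samples of the line y = 2x + 1, with errors in both coordinates
rng = np.random.default_rng(1)
x = np.linspace(-5, 5, 200)
pts = np.column_stack([x, 2 * x + 1]) + 0.2 * rng.standard_normal((200, 2))
print(tls_line_fit(pts))   # normal roughly proportional to (2, -1)

A neural MCA law estimates the same minor component adaptively from a stream of samples rather than from a precomputed covariance matrix.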
This chapter is not only the presentation of a novel neural MCA algorithm that overcomes the drawbacks of the other neural MCA laws, but, above all, a complete theory of neural MCA. Indeed, the analysis is carried out from the stochastic, asymptotic, differential-geometric, and numerical points of view. Three types of divergence are presented and studied. An equivalence analysis is presented. The relevance of the RQ degeneracy property is shown, and very important consequences are deduced from it. For the first time, all the algorithms are tested on problems of higher dimensionality. A real application is presented briefly.
From this chapter, the following conclusions can be drawn for each of the already existing MCA learning laws, assuming the choice of low initial conditions (justified by Remarks 56 and 64):
LUO. It has a slow convergence to the MC direction and then diverges [order O(α²)]. It also suffers from sudden divergence, which noisy data can bring forward. It cannot be stopped reliably and is very sensitive to outliers. It is a high-variance/low-bias algorithm, but it oscillates too much around the solution. It works badly for medium-dimensional data.
OJAn. It has a slow convergence (faster than LUO) to the MC direction and then diverges [order O(α²), but more slowly than LUO]. The rate of divergence depends exponentially on the level of noise in the data (λ). It cannot be stopped reliably and is very sensitive to outliers. It is a high-variance/low-bias algorithm, but it oscillates too much around the solution. It works badly for medium-dimensional data.
OJA. It has a very slow convergence to the MC direction, and its squared weight modulus change is of order O(α). The weights decrease to 0 (in the case of initial conditions of modulus greater than 1, there is sudden divergence instead).