2.2 MINOR COMPONENTS ANALYSIS
Definition 46 (Minor Components) The eigenvectors that correspond to the
smallest eigenvalues of the autocorrelation matrix of the data vectors are defined
as the minor components [195].
The minor components are the directions in which the data have the smallest
variances (the principal components are the directions in which the data have the
largest variances). Expressing data vectors in terms of the minor components is
called minor components analysis (MCA).
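As a concrete illustration of this definition, the following sketch (assuming NumPy and a synthetic data matrix X that is not from the text) extracts the minor components as the eigenvectors of the sample autocorrelation matrix associated with its smallest eigenvalues.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data matrix: 500 data vectors of dimension 4 (illustration only).
X = rng.standard_normal((500, 4)) @ np.diag([3.0, 2.0, 1.0, 0.3])

# Sample autocorrelation matrix of the data vectors.
R = X.T @ X / X.shape[0]

# eigh returns eigenvalues of a symmetric matrix in ascending order.
eigvals, eigvecs = np.linalg.eigh(R)

k = 1
minor_components = eigvecs[:, :k]   # directions of smallest variance
print(eigvals)
print(minor_components.ravel())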
2.2.1 Some MCA Applications
There are many MCA applications, especially in adaptive signal processing.
MCA has been applied to frequency estimation [128,129], bearing estimation
[167], digital beamforming [77], moving-target indication [107], and clutter
cancellation [5]. It has also been applied to TLS algorithms for parameter
estimation [63,64]. Xu et al. [195] showed the relationship of MCA to curve
and surface fitting under the TLS criterion; as a result, MCA is employed
increasingly in many engineering fields, such as computer vision (see [24]).
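To illustrate the MCA-TLS connection mentioned above, the sketch below fits a straight line to synthetic 2-D points in the total least squares sense: the minor component of the centered data serves as the normal of the fitted line. The data, noise level, and variable names are hypothetical, not taken from the text.

import numpy as np

rng = np.random.default_rng(1)
# Hypothetical noisy points near the line y = 0.5 x + 1 (illustration only).
t = np.linspace(0.0, 10.0, 200)
pts = np.column_stack([t, 0.5 * t + 1.0]) + 0.1 * rng.standard_normal((200, 2))

mean = pts.mean(axis=0)
C = (pts - mean).T @ (pts - mean) / len(pts)   # covariance of the centered points

# The minor component (smallest-eigenvalue eigenvector) is the TLS line normal.
eigvals, eigvecs = np.linalg.eigh(C)
n = eigvecs[:, 0]
d = n @ mean          # line equation: n[0]*x + n[1]*y = d
print("normal:", n, "offset:", d)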
2.2.2 Neural Answers
Many neural networks have been proposed to solve the MCA task. The only
nonlinear one is the Hopfield network of Mathew and Reddy [128,129]. The authors
construct a constrained energy function, using a penalty function, to minimize the
Rayleigh quotient (RQ). The neurons have sigmoidal activation functions. However,
the structure of the network is problem dependent (the number of neurons equals
the dimension of the eigenvectors); in addition, the trace of the covariance matrix
must be estimated in order to select appropriate penalty factors. All other existing
neural networks are made up of a single linear neuron.
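As a rough illustration of the idea behind such RQ-minimizing approaches (not Mathew and Reddy's actual network, which uses sigmoidal neurons and a problem-dependent structure), the sketch below minimizes a penalized Rayleigh quotient by plain gradient descent; the matrix R, step size, and penalty factor are arbitrary choices made for the example.

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
R = A.T @ A / 4.0     # toy symmetric "autocorrelation" matrix (illustration only)

def penalized_rq_gradient(w, R, mu):
    ww = w @ w
    rq = (w @ R @ w) / ww                    # Rayleigh quotient
    grad_rq = 2.0 * (R @ w - rq * w) / ww    # gradient of the RQ
    grad_pen = 4.0 * mu * (ww - 1.0) * w     # penalty keeps the weight norm near 1
    return grad_rq + grad_pen

w = rng.standard_normal(4)
for _ in range(5000):
    w -= 0.01 * penalized_rq_gradient(w, R, mu=1.0)

w /= np.linalg.norm(w)
print(w)                            # approaches a minor eigenvector (up to sign)
print(np.linalg.eigh(R)[1][:, 0])   # reference eigenvector of smallest eigenvalue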
2.2.3 MCA Linear Neurons: An Overview
In building a neural network, linear units are the simplest to use. They are often
considered to be uninteresting because only linear functions can be computed in
linear networks, and a network with several layers of linear units can always be
collapsed into a linear network without hidden layers by multiplying the weights
in the proper fashion. Nevertheless, linear units offer very important advantages. Oja
[138] found that a simple linear neuron with an unsupervised constrained
Hebbian learning rule can extract the principal component from stationary input
data. Later, Linsker [115-117] showed that in a layered feedforward network of
linear neurons with random inputs and Hebbian learning, spatial-opponent cells,
orientation-selective units, and orientation columns emerge in successive layers,
much like the organization of the mammalian primary visual cortex. Contrary to the