6. Eigenvectors (basis images) of the cross-correlation matrix C are the vectors v_i (N by M), such that C v_i = e_i v_i, where the e_i are the corresponding eigenvalues.
7. Since C is a Hermitian matrix, all of its eigenvalues are real. In a high-dimensional image space the energy is concentrated mainly in the subspace spanned by the first few eigenvectors. A significant compression can therefore be achieved by retaining only the eigenvectors with large eigenvalues; this subspace captures as much variation of the training set as possible with a smaller number (M' <= M) of eigenfaces.
8. Project each input vector (face image) onto the basis vectors to find a set of M' coefficients that describe the contribution of each basis vector in the subspace.
9. Thus, an input vector of size N is reduced to a new representation (feature vector) of size M', which can be used for classification; a minimal sketch of steps 6-9 is given below.
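The following Python sketch illustrates steps 6-9 under simple assumptions: a real-valued data matrix and the sample covariance used as the cross-correlation matrix C. The function name and array shapes are illustrative choices for this example, not the notation of an original implementation.

```python
import numpy as np

def eigenface_features(X, m_reduced):
    """Illustrative PCA/eigenface feature extraction (steps 6-9).

    X         : (M, N) array, M flattened training images of N pixels each.
    m_reduced : number M' <= M of eigenfaces to keep.
    """
    X_centred = X - X.mean(axis=0)                 # remove the mean image
    C = (X_centred.T @ X_centred) / X.shape[0]     # N x N cross-correlation matrix

    # Steps 6-7: eigen-decomposition; C is symmetric/Hermitian, so eigenvalues are real.
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1][:m_reduced]  # indices of the largest eigenvalues
    basis = eigvecs[:, order]                      # N x M' basis images (eigenfaces)

    # Steps 8-9: project each image onto the basis -> M'-dimensional feature vectors.
    features = X_centred @ basis                   # M x M'
    return basis, features
```

For large images it is usually cheaper to diagonalise the smaller M by M matrix X X^T and map its eigenvectors back to image space; the direct N by N decomposition is kept here only for clarity.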
7.2.3 Independent Component Analysis
Independent component analysis (ICA) in the real domain has emerged as a powerful solution to the blind source separation and feature extraction problems. PCA aims at decomposing a linear mixture of independent source signals into uncorrelated components using only second-order statistics, and uncorrelatedness is a weaker condition than independence. The basis images found by PCA therefore depend only on pairwise relationships between pixels in the image database, while higher-order relationships still appear in the joint distribution of the basis images (PCA coefficients). In typical pattern classification problems, significant information may be contained in these higher-order relationships among pixels, so it is desirable to use methods that are sensitive to higher-order statistics in order to obtain better basis images. PCA also implicitly assumes Gaussian sources, which makes it inadequate because real-world data often do not follow a Gaussian distribution. ICA in the real domain (ICA or R ICA) is a method for transforming a multidimensional random vector into components that are both statistically independent and non-Gaussian [35, 37, 54]. An important principle of ICA estimation is the maximization of the non-Gaussianity of linear combinations of the observed mixture variables, which yields the independent components. This transformation makes the essential features of the image data more visible and accessible. ICA can be seen as a natural way to 'fine tune' PCA. Studies have observed that ICA-defined subspaces encode more information about image/data identity than PCA-defined subspaces. The feature vectors obtained from R ICA are as independent as possible and do not contain redundant data; it is therefore expected that this reduced-dimension data will be rich in features. ICA yields far better distinctiveness among classes of objects.
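As a concrete illustration of this idea, the sketch below extracts ICA features from a set of flattened images with scikit-learn's FastICA, which estimates independent components by maximizing non-Gaussianity. The array shapes, component count, and placeholder data are assumptions for this example and are not taken from the R ICA implementation described in the text.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder data standing in for real face images:
# X is an M x N array of M flattened training images with N pixels each.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 64 * 64))       # M = 40 images, N = 4096 pixels

# Treat images as random variables and pixels as observations,
# so FastICA is fit on the N x M transpose of X.
ica = FastICA(n_components=20, random_state=0)
S = ica.fit_transform(X.T)                   # N x 20: columns are independent basis images
A = ica.mixing_                              # M x 20: row i = ICA coefficients of image i
```

Here each row of A is the ICA feature vector of one training image, analogous to the M'-dimensional PCA feature vector above, but its components are statistically independent rather than merely uncorrelated.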
The feature extraction problem with one of the architectures of ICA or R ICA [33, 38] can be stated as follows: given a set of training images X (M by N), the images are treated as random variables and the pixels as observations. The image data thus consists of M variables (images) that have been observed together, and the number of observations corresponds to the dimensionality of an image. R ICA determines an independent rather than merely uncorrelated image decomposition, and thus provides a more powerful