the eigenvectors of AA^T (an M × M matrix), which is easier to obtain, since M ≪ N.
6. PCA has the property of packing the energy into the least number of principal components. The associated eigenvalues are used to rank the eigenvectors according to their usefulness in characterizing the variation among the images. The eigenvectors (PCs) corresponding to the higher eigenvalues (i.e., the subspace of basis images of size M′) carry significant information for representation.
7. Select the M′ (generally M′ ≪ M) eigenvectors (transformation matrix) from the basis images corresponding to the highest eigenvalues to extract facial features efficiently.
8. The new representation of an image is computed by projecting that image (input vector) onto the subspace of basis images. After applying the projection, the input vector in an N-dimensional space is reduced to a feature vector in an M′-dimensional subspace.
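The steps above can be sketched numerically. This is a minimal illustration with assumed shapes and variable names (random data standing in for face images), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 10, 64            # M training images, each vectorized to N pixels
X = rng.random((M, N))   # rows are vectorized images

A = X - X.mean(axis=0)            # mean-subtracted data matrix
C_small = A @ A.T                 # M x M surrogate of the N x N covariance
evals, evecs = np.linalg.eigh(C_small)
order = np.argsort(evals)[::-1]   # rank eigenvectors by eigenvalue (step 6)
evals, evecs = evals[order], evecs[:, order]

M_prime = 4                       # keep the top M' components (step 7)
U = A.T @ evecs[:, :M_prime]      # lift to N-dimensional basis images
U /= np.linalg.norm(U, axis=0)    # normalize each basis column

# Step 8: project an image onto the subspace of basis images
features = (X[0] - X.mean(axis=0)) @ U   # N-dim input -> M'-dim feature vector
print(features.shape)                    # (4,)
```

The projection reduces each 64-pixel image to a 4-element feature vector while retaining the directions of greatest variation.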
7.2.2 Feature Extraction with CPCA
The PCA in the complex domain (CPCA) presented here for feature extraction of image data is a generalization of PCA to complex variables. The complex variables are formed from the original data and their Hilbert transform. In the Hilbert transformation, the amplitude of each spectral component is unchanged, but each component's phase is advanced by π/2 [12]. Complex principal components are determined from the complex cross-correlation matrix of the image data matrix Z. The basic idea of CPCA presented in [8, 42] has been used to factor the image matrix into a set of orthonormal basis vectors.
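The phase-advance property of the Hilbert transform can be checked numerically. The sketch below (an assumed illustration, not the authors' code) builds the analytic signal of a single cosine component via the FFT and verifies that the transform keeps the amplitude while shifting the phase by π/2, turning the cosine into a sine:

```python
import numpy as np

n = 256
t = np.arange(n)
x = np.cos(2 * np.pi * 5 * t / n)   # a single spectral component

# FFT-based analytic signal: zero negative frequencies, double positives
Xf = np.fft.fft(x)
h = np.zeros(n)
h[0] = h[n // 2] = 1.0
h[1:n // 2] = 2.0
z = np.fft.ifft(Xf * h)             # z = x + i * Hilbert(x)

hx = z.imag                         # the Hilbert transform of x
# cos advanced in phase by pi/2 is sin: same amplitude, shifted phase
print(np.allclose(hx, np.sin(2 * np.pi * 5 * t / n), atol=1e-10))  # True
```

The real part of z recovers the original data and the imaginary part is its Hilbert transform, which is exactly how the complex variables for CPCA are formed.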
7.2.2.1 Algorithm
The well-known steps of the PCA (RPCA) technique for extracting features have been extended to the complex domain, i.e., CPCA. The basic steps in the CPCA algorithm for feature extraction can be stated as follows:
1. Collect the images in the data matrix X (M × N). Find the mean-subtracted data matrix, A = X − X_avg.
2. Determine the complex image data matrix Z using the Hilbert transformation.
3. Compute the cross-correlation matrix C = Z†Z, where Z† denotes the complex conjugate transpose of Z.
4. Find the eigenvectors of C. But for even a moderate-sized image (N = p × q), the dimension of C will be pq × pq. Hence, the calculations will be computationally expensive and intractable.
5. The problem can be circumvented by considering the eigenvectors v_i of ZZ†, such that ZZ† v_i = e_i v_i. The vector v_i is of size M, and there are M such eigenvectors. The calculations are greatly reduced from the order of the number of pixels (N) in the image to the order of the number of images (M) in the training set, since M ≪ N.
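The five steps above can be sketched compactly. This is an assumed illustration (random data, NumPy-only FFT analytic signal, hypothetical variable names), showing the small M × M eigenproblem being solved in place of the pq × pq one:

```python
import numpy as np

def analytic(a):
    """FFT-based analytic signal along the last axis (even length assumed)."""
    n = a.shape[-1]
    F = np.fft.fft(a, axis=-1)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.fft.ifft(F * h, axis=-1)

rng = np.random.default_rng(1)
M, p, q = 8, 8, 8                 # M images of p x q pixels, N = pq
N = p * q
X = rng.random((M, N))

A = X - X.mean(axis=0)            # step 1: mean-subtracted data
Z = analytic(A)                   # step 2: Z = A + i * Hilbert(A)

# Steps 3-5: eigenvectors of the small M x M matrix Z Z† instead of
# the N x N cross-correlation matrix Z† Z
S = Z @ Z.conj().T                # M x M, Hermitian
evals, V = np.linalg.eigh(S)
V = V[:, np.argsort(evals)[::-1]] # rank by eigenvalue

U = Z.conj().T @ V                # lift v_i to N-dim complex basis images
U /= np.linalg.norm(U, axis=0)
features = Z[0] @ U.conj()        # project one image onto the complex basis
print(U.shape, features.shape)    # (64, 8) (8,)
```

Diagonalizing the 8 × 8 matrix ZZ† instead of the 64 × 64 matrix Z†Z yields the same non-trivial eigenvectors after lifting by Z†, which is what makes the computation tractable when M ≪ N.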