In principal component analysis, every image is expressed as a linear combination of a set of basis vectors, i.e., eigenimages, that best describe the variation of intensities from their mean. When a given image is projected onto this lower-dimensional eigenspace, a set of $R$ eigenface coefficients is obtained, which gives a parameterization of the signal's distribution.
Obtaining the principal components of an image signal, i.e., the eigenimages, can be posed as an eigenvalue problem. Suppose that the training set consists of $M$ mean-removed image vectors $x_m$. Then the eigenimages $\phi_m$, $m = 1, \ldots, M$, can be computed as the eigenvectors of the covariance matrix

X = \frac{1}{M} \sum_{m=1}^{M} x_m x_m^T .
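As a minimal sketch of this eigenvalue computation (all names, array shapes, and the Gram-matrix shortcut are illustrative assumptions, not from the source):

```python
import numpy as np

def eigenimages(images):
    """Compute eigenimages from a training set.

    images: (M, d) array of M vectorized face images of dimension d.
    Returns (mean, eigvals, phi), with eigenpairs sorted by decreasing
    eigenvalue; phi has shape (d, M), one eigenimage per column.
    """
    M, d = images.shape
    mean = images.mean(axis=0)
    A = images - mean                       # mean-removed image vectors x_m

    # Covariance X = (1/M) sum_m x_m x_m^T; its nonzero eigenpairs can be
    # recovered from the much smaller (M, M) Gram matrix, since usually d >> M.
    gram = A @ A.T / M
    vals, vecs = np.linalg.eigh(gram)       # returned in ascending order
    order = np.argsort(vals)[::-1]          # sort by decreasing eigenvalue
    vals, vecs = vals[order], vecs[:, order]

    # Map Gram eigenvectors back to image space and normalize each column.
    phi = A.T @ vecs                        # (d, M) eigenimages
    phi /= np.linalg.norm(phi, axis=0, keepdims=True) + 1e-12
    return mean, vals, phi
```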
Each eigenimage $\phi_m$ is associated with an eigenvalue $\lambda_m$, and the principal components are given by the first $R$ eigenimages associated with the $R$ largest eigenvalues when ordered by magnitude. Usually the reduced dimension $R$ is much smaller than $M$, and the $r$-th eigenimage coefficient $w_r$ is obtained by the projection

w_r = \phi_r^T y

for a given test image vector $y$.
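Continuing the sketch above, the projection that yields the first $R$ eigenimage coefficients of a test image is a single matrix-vector product (mean removal is assumed to match the training stage):

```python
def eigenface_coefficients(y, mean, phi, R):
    """Coefficients w_r = phi_r^T (y - mean) for r = 1, ..., R."""
    return phi[:, :R].T @ (y - mean)
```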
The eigenface coefficients, when computed for every frame $i$ of a given test sequence, constitute the face texture feature vectors $f_i$, $i = 1, \ldots, K$. The face images in the training set are all used first to obtain the eigenspace. The training set contains a number of image sequences, say $L$, from each speaker class $\lambda_c$. Let $g_{c,n}$, $n = 1, \ldots, N_c$, denote the feature vectors of the images belonging to class $\lambda_c$ in the training set. Then the minimum distance between these two sets of feature vectors can be used as a similarity metric between the speaker class and the unknown person:

D_c = \min_{i,\,n} \| f_i - g_{c,n} \| .   (9)
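A direct way to evaluate the minimum-distance metric of (9) between the test-sequence features $\{f_i\}$ and one class's training features is sketched below; the Euclidean norm and the array layout are assumptions:

```python
def min_distance(F_test, G_class):
    """Minimum pairwise Euclidean distance between two feature sets.

    F_test:  (K, R) eigenface coefficient vectors of the test sequence.
    G_class: (N, R) feature vectors of one speaker class's training images.
    """
    diffs = F_test[:, None, :] - G_class[None, :, :]   # (K, N, R)
    return np.sqrt((diffs ** 2).sum(axis=-1)).min()
```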
The similarity metric defined in (9) can also be expressed as a probabilistic likelihood by making use of the Gibbs distribution: given the face texture feature vectors $f_1, \ldots, f_K$, the class-conditional probability of the feature set can be written as

P(f_1, \ldots, f_K \mid \lambda_c) = \frac{1}{Z}\, e^{-\alpha D_c} ,

where $Z$ is a normalization constant and $\alpha > 0$ a scaling parameter.
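Reusing min_distance from the sketch above, the Gibbs-form likelihood might be computed as follows; the scale alpha and normalizing over the candidate classes are illustrative assumptions:

```python
def class_likelihoods(F_test, class_feature_sets, alpha=1.0):
    """Gibbs-form likelihoods proportional to exp(-alpha * D_c), one per class."""
    D = np.array([min_distance(F_test, G) for G in class_feature_sets])
    scores = np.exp(-alpha * (D - D.min()))   # shift exponent for numerical stability
    return scores / scores.sum()
```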