of the surface, that is, it captures the information pertaining to the Gaussian curvature but
not that pertaining to the mean curvature of the facial surface. On the one hand, this limits
the discriminative information; on the other hand, it largely removes the influence of
expression variations on the retained discriminative information.
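As a reminder of the quantities involved, the minimal sketch below (illustrative values only, not from the cited work) contrasts the two curvatures at a surface point with principal curvatures k1 and k2: the Gaussian curvature K = k1 k2 is intrinsic and so is largely preserved under the approximately isometric bending caused by many expressions, whereas the mean curvature H = (k1 + k2)/2 is extrinsic and is what such a representation discards.

```python
# Minimal sketch (illustrative only): Gaussian vs. mean curvature
# expressed through the principal curvatures k1, k2 at a surface point.
def gaussian_curvature(k1, k2):
    return k1 * k2          # intrinsic: unchanged by isometric bending

def mean_curvature(k1, k2):
    return 0.5 * (k1 + k2)  # extrinsic: the part this representation drops

# Example: a saddle-like point on the facial surface.
k1, k2 = 0.2, -0.1
print(gaussian_curvature(k1, k2), mean_curvature(k1, k2))
```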
Subspace Modeling of Expressions
PCA subspaces have been used in different ways and for different purposes in 3D face
recognition. PCA is mostly applied to the facial data for dimensionality reduction. When
used in this manner, it captures the facial surface data and any expression variations
together in the PCA subspace/features. This blend of the discriminative information
with the expression variations becomes a source of error if not addressed at a later stage in
the recognition system. In the work by McCool et al. (2006), the variations between pairs of
facial PCA feature vectors (differences of feature vectors) are modeled using
Gaussian mixture models (GMMs). The GMMs provide a probabilistic means for computing
a similarity measure between a pair of facial scans (from which a feature vector difference
is formed). In matching a probe to a gallery of scans, a feature vector difference is formed
between the probe and each of the gallery scans. The identity of the probe is deemed to be
that of the gallery scan that produces the feature vector difference with the highest similarity
measure.
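A minimal sketch of this matching scheme is given below, assuming PCA feature vectors have already been extracted from the scans. The GMM (here scikit-learn's GaussianMixture, with illustrative dimensions and random placeholder data) is fitted offline to training feature differences, and its log-likelihood serves as the similarity measure; these are assumptions for illustration, not the exact configuration of the original work.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Offline: fit a GMM to differences of PCA feature vectors taken from
# training pairs (placeholder random data stands in for real features).
rng = np.random.default_rng(0)
train_diffs = rng.normal(size=(500, 30))         # 500 training differences, 30-D features
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(train_diffs)

# Online: match a probe to the gallery scan whose feature difference
# with the probe is most likely under the GMM.
def identify(probe_feat, gallery_feats):
    diffs = gallery_feats - probe_feat           # one difference per gallery scan
    similarities = gmm.score_samples(diffs)      # per-sample log-likelihood as similarity
    return int(np.argmax(similarities))          # index of best-matching gallery scan

probe = rng.normal(size=30)
gallery = rng.normal(size=(10, 30))
print("matched gallery index:", identify(probe, gallery))
```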
In the work by Al-Osaimi et al. (2009), a PCA subspace is constructed from image differences
between pairs of pose-corrected range images. Each range image difference is obtained by
subtracting the neutral face from a non-neutral range image of the same subject, and the
training data includes such pairs from many subjects. Before the computation of the
range image difference, the two range images are registered to each other using ICP on
the basis of only the semi-rigid regions of the face (the forehead and the nose). This reduces
the effects of expression deformations on the registration and therefore improves the accuracy
of the image difference. Since each training pair belongs to the same subject, the PCA subspace
represents only expression deformations in this case. Projecting an unseen probe difference d
(between a probe image and a gallery scan) on the PCA subspace results in the separation of
the expression deformations from the discriminative information since the projection retains
only the expression deformations. The projected range image difference is then reconstructed
as d̂ and subtracted from the original range image difference, e = d − d̂. The vector e, which
represents the discriminative information, is then post-processed by thresholding the pixels
with high absolute values, and finally a dissimilarity measure is computed from the post-processed
vector. The approach has achieved a much higher recognition accuracy compared with
the approach that models PCA features.
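The separation step can be sketched as follows, assuming the expression subspace has already been learned offline: U holds the leading PCA eigenvectors of the training range-image differences and mu their mean. The names, the clipping-based post-processing, and the Euclidean dissimilarity are illustrative stand-ins, not the exact choices of the paper.

```python
import numpy as np

def expression_invariant_dissimilarity(d, U, mu, clip=5.0):
    """d: range-image difference (probe minus gallery), flattened to a vector.
    U: columns are the leading PCA eigenvectors of training differences.
    mu: mean of the training differences."""
    coeffs = U.T @ (d - mu)          # project onto the expression subspace
    d_hat = U @ coeffs + mu          # reconstructed, expression-only part
    e = d - d_hat                    # residual: the discriminative information
    e = np.clip(e, -clip, clip)      # post-process pixels with high absolute values
    return float(np.linalg.norm(e))  # dissimilarity between probe and gallery

# Toy usage with random data standing in for pose-corrected range images.
rng = np.random.default_rng(1)
n_pixels, n_modes = 1024, 20
U, _ = np.linalg.qr(rng.normal(size=(n_pixels, n_modes)))  # orthonormal basis
mu = rng.normal(size=n_pixels)
d = rng.normal(size=n_pixels)
print(expression_invariant_dissimilarity(d, U, mu))
```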
Elasticity-based Modeling of Expressions
In the work by Kakadiaris et al. (2007), a generic model of the 3D face, called the annotated
facial model (AFM) (Kakadiaris et al., 2005), is elastically deformed/fitted to probe and gallery
facial scans (which may be under facial expression). For matching, the features are extracted
from the fitted models rather than from the facial scans. The AFM is basically an average facial
mesh with marked facial regions and a (u, v) parameterization. The AFM tends to have no person