according to their sensitivities to facial expressions. They applied sparse representations to the
collected low-level features and achieved good results on a challenging data set (GavabDB; Moreno
and Sanchez, 2004b). This approach, however, required a training step.
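As a rough illustration of the sparse-representation step, the sketch below codes a probe feature vector over a dictionary of gallery feature vectors via iterative soft-thresholding (ISTA); the dictionary, the lam parameter, and the feature vectors are hypothetical, and the cited work may well have used a different l1 solver.

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.1, iters=200):
    """Sparse coding of a probe feature vector y over a dictionary D whose
    columns are gallery feature vectors, via iterative soft-thresholding:
    minimises 0.5 * ||D x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# Hypothetical usage: the class whose atoms reconstruct y with the smallest
# residual would be chosen as the identity.
rng = np.random.default_rng(1)
D = rng.normal(size=(20, 8)); D /= np.linalg.norm(D, axis=0)
y = D[:, 3] + 0.01 * rng.normal(size=20)
x = ista_sparse_code(D, y)
```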
Along similar lines, Wang et al. (2010) computed a signed shape difference map (SSDM)
between two aligned 3D faces as an intermediate representation for shape comparison.
On the SSDMs, they used three kinds of features to encode both the local similarity
and the change characteristics between facial shapes: Haar-like, Gabor, and local binary
pattern (LBP) features. The most discriminative local features were selected optimally by
boosting and trained as weak classifiers, which were then assembled into three collective
strong classifiers.
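A minimal sketch of the SSDM idea, assuming the two scans have already been converted to registered depth images of equal size; the function names and the simple 8-neighbour LBP used to describe the map are illustrative, not Wang et al.'s actual pipeline (no Haar-like or Gabor features, no boosting).

```python
import numpy as np

def signed_shape_difference_map(depth_a, depth_b):
    """Toy signed shape difference map between two aligned depth images:
    positive where face A protrudes beyond face B, negative where it recedes."""
    return depth_a.astype(np.float64) - depth_b.astype(np.float64)

def lbp_8_1(img):
    """Basic 8-neighbour local binary pattern codes for the interior pixels,
    one of the three feature types mentioned above."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:,   2:],   img[2:,   1:-1],
                  img[2:,   0:-2], img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c).astype(np.uint8) << bit)
    return codes

# Hypothetical usage on two pre-registered 64x64 depth maps.
depth_a = np.random.rand(64, 64)
depth_b = np.random.rand(64, 64)
ssdm = signed_shape_difference_map(depth_a, depth_b)
hist, _ = np.histogram(lbp_8_1(ssdm), bins=256, range=(0, 256), density=True)
```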
McKeon and Russ (2010) used a 3D Fisherface region ensemble approach. After face
registration using the ICP algorithm, the Fisherface approach seeks to improve classification
by maximizing the ratio of the between-class scatter to the within-class scatter.
Twenty-two regions were used as input to the 3D Fisherfaces. To select the most discriminative
regions, sequential forward search (Sun et al., 2008) was used.
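The Fisher criterion at the core of this step can be sketched as follows: a generic scatter-ratio maximization on hypothetical per-region feature vectors, not McKeon and Russ's implementation (in practice a PCA projection usually precedes it so that the within-class scatter is well conditioned).

```python
import numpy as np

def fisher_directions(X, y, n_dims=1, ridge=1e-6):
    """Toy Fisher discriminant: build within-class (Sw) and between-class (Sb)
    scatter matrices and keep the directions maximizing the Sb/Sw ratio."""
    X = np.asarray(X, dtype=np.float64)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        Sw += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    Sw += ridge * np.eye(d)                 # keep Sw invertible for small samples
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_dims]].real

# Hypothetical usage: 3 subjects, 4 region feature vectors each.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 5))
y = np.repeat([0, 1, 2], 4)
projected = X @ fisher_directions(X, y, n_dims=2)
```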
Huang et al. (2010) proposed to use multiscale LBP, jointly with the shape index, as a new
representation for the 3D face. They then extracted Scale-Invariant Feature Transform (SIFT)-based
local features. The matching also involved a holistic constraint on the facial components and
their configuration.
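As a sketch of the shape-index part of that representation, the function below evaluates the usual formula SI = 1/2 - (1/pi) * arctan((k1 + k2)/(k1 - k2)) from per-point principal curvatures; curvature estimation, the multiscale LBP, and the SIFT extraction are left out.

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index from per-point principal curvatures (k1 >= k2); values in
    [0, 1] categorise the local surface type (cup, rut, saddle, ridge, cap).
    arctan2 keeps umbilic points (k1 == k2) finite."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    return 0.5 - np.arctan2(k1 + k2, k1 - k2) / np.pi

# Hypothetical usage: a shape-index histogram over a patch of random curvatures.
rng = np.random.default_rng(0)
kmax = rng.normal(size=(32, 32))
kmin = kmax - np.abs(rng.normal(size=(32, 32)))
si = shape_index(kmax, kmin)
hist, _ = np.histogram(si, bins=16, range=(0.0, 1.0), density=True)
```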
In Cook et al. (2006), Log-Gabor templates were used to exploit the wealth of
information available in human faces by constructing multiple observations of a subject, which
were classified independently and combined through score fusion. Gabor features were more recently
used in Moorthy et al. (2010) on automatically detected fiducial points.
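A minimal sketch of the score-fusion step, assuming each observation (e.g., each Log-Gabor template) has already produced a vector of match scores over the gallery; min-max normalization followed by a weighted sum rule is one common choice, not necessarily the fusion rule of Cook et al.

```python
import numpy as np

def fuse_scores(score_lists, weights=None):
    """Simple sum-rule score fusion: min-max normalise each matcher's scores
    over the gallery, then take a (weighted) average."""
    n = len(score_lists)
    weights = weights or [1.0 / n] * n
    fused = np.zeros(len(score_lists[0]))
    for w, s in zip(weights, score_lists):
        s = np.asarray(s, dtype=np.float64)
        rng = s.max() - s.min()
        fused += w * ((s - s.min()) / rng if rng > 0 else np.zeros_like(s))
    return fused

# Hypothetical usage: scores from two observations over a gallery of 5 subjects;
# the highest fused score gives the best match.
fused = fuse_scores([[0.2, 0.9, 0.4, 0.1, 0.3],
                     [0.3, 0.8, 0.5, 0.2, 0.1]])
best_match = int(np.argmax(fused))
```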
In Chang et al. (2006) and Mian et al. (2007a), the focus was on matching nose regions, again
using ICP. To avoid discarding deformable parts of the face that carry discriminative
information, the authors in Faltemier et al. (2008a) proposed to use a set of 38 face regions
that densely cover the face, fusing the scores and decisions after performing ICP on each
region.
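A bare-bones point-to-point ICP, of the kind run independently on each cropped region before fusing the residuals; the region cropping and the 38-region scheme themselves are not reproduced, and the returned mean nearest-neighbour distance is just one plausible per-region match score.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    """Point-to-point ICP: repeatedly pair each source point with its nearest
    destination point and solve for the rigid transform (R, t) minimising the
    pairing error via the SVD (Kabsch) solution."""
    src = np.asarray(src, dtype=np.float64).copy()
    dst = np.asarray(dst, dtype=np.float64)
    tree = cKDTree(dst)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        dists, idx = tree.query(src)
        matched = dst[idx]
        mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    # Mean nearest-neighbour distance, usable as the region's match score.
    return R_total, t_total, float(np.mean(dists))
```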
In Queirolo et al. (2010a), circular and elliptical areas around the nose were used
together with the forehead and the entire face region for authentication. The surface
interpenetration measure (SIM) was used for the matching. Taking advantage of invariant face
regions, a simulated annealing approach was used to handle expressions.
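A generic simulated annealing loop of the kind used to search the registration parameters; the cost function here is a toy stand-in for 1 - SIM, since computing the actual surface interpenetration measure requires the meshes.

```python
import numpy as np

def simulated_annealing(cost, x0, step=0.05, T0=1.0, cooling=0.95, iters=500, seed=0):
    """Generic simulated annealing minimiser: random perturbations are always
    accepted when they lower the cost, and occasionally accepted when they raise
    it, with a probability that shrinks as the temperature T decays."""
    rng = np.random.default_rng(seed)
    x = best = np.asarray(x0, dtype=np.float64)
    c = c_best = cost(x)
    T = T0
    for _ in range(iters):
        candidate = x + rng.normal(scale=step, size=x.shape)
        c_new = cost(candidate)
        if c_new < c or rng.random() < np.exp(-(c_new - c) / T):
            x, c = candidate, c_new
            if c < c_best:
                best, c_best = x.copy(), c
        T *= cooling
    return best, c_best

# Hypothetical usage: tune a 6-DoF pose (3 rotations, 3 translations) to minimise
# a toy quadratic standing in for 1 - SIM(pose).
pose, cost_val = simulated_annealing(lambda p: np.sum((p - 0.3) ** 2), np.zeros(6))
```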
In Alyuz et al. (2008b), the authors proposed to use average region models (ARMs) locally
to handle missing data and expression-induced deformations. They manually
divided the facial area into several meaningful components, and registration of faces was carried
out by separate dense alignments to the corresponding ARMs. A strong limitation of this approach is
the need for manual segmentation.
5.3.3 Partial Face Matching
Many of the 3D face recognition methods proposed in the last few years have focused
on face recognition in the presence of expression variations, reporting very high accuracies on
benchmark databases such as the FRGC v2.0 data set (Phillips et al., 2005). However, only a
few solutions explicitly addressed the problem of 3D face recognition when only a part of
the facial scan is available (partial face matching), or when parts of the face are occluded by hair,