glasses, scarves, hand gestures, and such. In a traditional face recognition experiment, both the
probe and gallery scans are assumed to be acquired cooperatively so as to precisely represent
the whole face. In contrast, there is increasing interest in developing solutions that enable
recognition in uncooperative scenarios. In such cases, acquisition of the probe scan
is performed in suboptimal conditions that can yield a nonfrontal face scan, missing parts, or
occlusions.
In general, global approaches cannot effectively manage partial face matching, whereas local
approaches have the potential to cope with the problem. To manage missing data obtained
by randomly removing certain regions from frontal scans, Bronstein et al. (2006a) proposed a
canonical representation of the face, which exploits the isometry invariance of the face surface.
On a small database of 30 subjects, they reported high recognition rates, but no side scans
were used for recognition. Alyuz et al. (2008a) proposed a part-based 3D face recognition
method that operates in the presence of both expression variations and occlusions. The
approach is based on the use of average region models (ARMs) for registration. Under variations, such as those
caused by occlusions, the method can identify noisy regions and discard them. Savran et
al. (2008) tested the performance of this approach on the Bosphorus 3D face database.
However, a strong limitation of this solution was the use of manually annotated landmarks
that were used for face alignment and region segmentation. Faltemier et al. (2008b) used a
set of 38 overlapping regions that densely cover the face around the nose and selected the
best-performing subset of 28 regions to perform matching using the ICP algorithm. They
reported a recognition experiment accounting for missing parts in the probe faces. However, in
this case, too, region segmentation across different facial scans strongly relied on the accurate
identification of the nose tip. More recently, a method that addresses the partial matching
problem has been proposed in Perakis et al. (2009). This is achieved by using an automatic
facial landmark detector to estimate the pose of the facial scan, so as to mark regions of missing
data and to roughly register the scan with an annotated face model (AFM) (Kakadiaris
et al., 2007c). The AFM is fitted using a deformable model framework that exploits facial
symmetry where data are missing. Wavelet coefficients extracted from a geometry image
derived from the fitted AFM are used for the match. Experiments have been performed using
the FRGC v2.0 gallery scans and side scans with 45° and 60° rotation angles as probes. In
Drira et al. (2010), the facial surface is represented as a collection of radial curves originating
from the nose tip and face comparison is obtained by the elastic matching of the curves. A
quality control permits the exclusion of corrupted radial curves from the match, thus enabling
recognition even in the case of missing data. Results of partial matching are given for the 61
left and 61 right side scans of the GAVAB data set (Moreno and Sanchez, 2004a).
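The radial-curve idea can be sketched as follows. This is an illustrative approximation, not the implementation of Drira et al.: it assumes the face is given as an unordered 3D point cloud with a known nose tip, bins points by their angle around the nose tip in the frontal plane, and applies a simple quality check that discards curves with too few samples (mimicking the exclusion of corrupted curves under missing data).

```python
import numpy as np

def extract_radial_curves(points, nose_tip, n_curves=40, angle_tol=None):
    """Group face points (Nx3) into radial curves emanating from the nose tip.

    Hypothetical sketch: points are binned by their angle around the nose
    tip in the XY (frontal) plane; each bin, sorted by radial distance,
    approximates one radial curve.
    """
    if angle_tol is None:
        angle_tol = np.pi / n_curves          # half the angular spacing
    rel = np.asarray(points, float) - np.asarray(nose_tip, float)
    angles = np.arctan2(rel[:, 1], rel[:, 0]) # angle in the frontal plane
    radii = np.hypot(rel[:, 0], rel[:, 1])    # distance from the nose tip
    curve_angles = np.linspace(-np.pi, np.pi, n_curves, endpoint=False)
    curves = []
    for a in curve_angles:
        # wrapped angular difference to the ray direction a
        diff = np.abs(np.angle(np.exp(1j * (angles - a))))
        idx = np.where(diff < angle_tol)[0]
        idx = idx[np.argsort(radii[idx])]     # order samples along the ray
        curves.append(np.asarray(points)[idx])
    return curves

def usable_curves(curves, min_points=5):
    """Quality control: drop curves with too few samples (e.g., missing data)."""
    return [c for c in curves if len(c) >= min_points]
```

In this sketch, recognition under missing data reduces to comparing only the curves that survive the quality check in both probe and gallery; the elastic curve-matching step itself is beyond this fragment.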
Local approaches based on regions are limited by the need to identify some facial landmarks
to define the regions of the face to be matched. In addition, because parts of these regions
can be missing or occluded, the extraction of region descriptors is difficult; hence, region
comparison is often performed using rigid (ICP) or elastic (deformable model) registration.
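The rigid (ICP) registration used for region comparison can be summarized by the following minimal sketch: it alternates brute-force nearest-neighbour correspondence with a closed-form (Kabsch/SVD) rigid update, returning the residual error as a crude match score. Production systems add k-d tree search and outlier rejection; this is a didactic sketch, not any cited system's implementation.

```python
import numpy as np

def icp_rigid(src, dst, n_iter=30):
    """Minimal point-to-point ICP aligning src (Mx3) onto dst (Nx3)."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        # 1. nearest-neighbour correspondences (brute force)
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        # 2. best rigid transform for these correspondences (SVD/Kabsch)
        mu_s, mu_n = src.mean(0), nn.mean(0)
        H = (src - mu_s).T @ (nn - mu_n)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T                 # proper rotation (det = +1)
        t = mu_n - R @ mu_s
        # 3. apply the update and accumulate the total transform
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    rmse = float(np.sqrt(d2.min(axis=1).mean()))  # residual alignment error
    return R_total, t_total, rmse
```

In a region-based matcher, the final `rmse` between a probe region and the corresponding gallery region serves as a dissimilarity score; lower residuals indicate a better match.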
Methods that use keypoints of the face promise to solve some of these limitations. In particular,
a few recent works have shown that local descriptors computed around salient keypoints can
be usefully applied to describe 3D objects and faces. In Mian et al. (2008), a 3D keypoint
detector and descriptor inspired by the scale-invariant feature transform (SIFT) (Lowe, 2004)
was designed and used to perform 3D face recognition through a hybrid 2D+3D approach
that also uses the SIFT detector and descriptor to index 2D texture face images. In Mayo and
Zhang (2009), SIFT detectors are used to detect and represent salient points in multiple 2D