applied to studying shapes of anatomical parts using medical images (Miller and Younes, 2001;
Grenander and Miller, 1998). The set of nonrigid deformations can be subdivided into linear
and nonlinear deformations. Nonlinear deformations imply local stretching, compression, and
bending of surfaces to match each other and are also referred to as elastic deformations. Earlier
attempts at elastic matching used graphs built on the basis of texture images of faces (Kotropoulos
et al., 2000).
Kakadiaris et al. (2007a) used an annotated face model to study geometrical variability
across faces. The annotated face model was deformed elastically to fit each face, thus allowing
the annotation of its different anatomical areas, such as the nose, eyes, and mouth. During elastic
registration, the points of the annotated 3D face reference model were
shifted according to elastic constraints so as to match the corresponding points of 3D target
models in a gallery. Similar morphing was performed for each query face. Then, face matching
was performed by comparing the wavelet coefficients of the deformation images obtained
from morphing. This approach was automatic. Similar approaches were based on manually
annotated models (Lu and Jain, 2006, 2008; Mpiperis et al., 2008a).
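To make the matching step of such elastic registration approaches concrete, the following Python sketch compares wavelet coefficients of precomputed deformation images, that is, 2D maps of the per-vertex displacement produced when the annotated reference model is morphed onto a scan. This is not the implementation of Kakadiaris et al. (2007a); the deformation images, grid size, wavelet choice, and L1 scoring rule are assumptions made only for illustration, and PyWavelets supplies the transform.

import numpy as np
import pywt

def wavelet_signature(deformation_image, wavelet="haar", level=3):
    # Multi-level 2D wavelet decomposition of the deformation image,
    # flattened into a single feature vector.
    coeffs = pywt.wavedec2(deformation_image, wavelet=wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)
    return arr.ravel()

def match_score(probe_image, gallery_image):
    # Smaller score means more similar deformation fields
    # (L1 distance between wavelet coefficient vectors).
    return np.abs(wavelet_signature(probe_image) - wavelet_signature(gallery_image)).sum()

# Hypothetical usage with placeholder 64x64 deformation images.
rng = np.random.default_rng(0)
gallery = {f"subject_{i}": rng.normal(size=(64, 64)) for i in range(5)}
probe = rng.normal(size=(64, 64))
best_match = min(gallery, key=lambda name: match_score(probe, gallery[name]))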
Lu and Jain (2008) presented an approach that is robust to self-occlusions (due to large pose
variations) and expressions. Three-dimensional deformations learned from a small control
group were transferred to the 3D models with neutral expression in the gallery: the cor-
responding deformation was synthesized in each 3D neutral model to generate a deformed
template. Matching was then performed by fitting the deformable model to a given test scan,
a step formulated as the minimization of a cost function.
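A minimal sketch of this kind of fitting is given below. It assumes the deformable model is a neutral mesh plus a linear combination of learned deformation modes and minimizes a simple data-plus-regularization cost with SciPy; the mode representation, cost terms, and optimizer are illustrative assumptions rather than the exact formulation of Lu and Jain (2008).

import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def fit_deformable_model(neutral, modes, scan_pts, reg=0.1):
    # neutral: (N, 3) template vertices; modes: (K, N, 3) learned deformation
    # modes; scan_pts: (M, 3) points of the test scan.
    tree = cKDTree(scan_pts)

    def cost(alpha):
        # Apply the linear combination of deformation modes to the template.
        deformed = neutral + np.tensordot(alpha, modes, axes=1)
        # Data term: distance of each deformed vertex to its nearest scan point.
        dists, _ = tree.query(deformed)
        # Regularizer penalizes large deformation coefficients.
        return np.mean(dists**2) + reg * np.sum(alpha**2)

    res = minimize(cost, x0=np.zeros(modes.shape[0]), method="Powell")
    return res.x, cost(res.x)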
ter Haar and Veltkamp (2010) proposed a multiresolution approach to semi-automatically
build seven morphable expression models and one morphable identity model from scratch.
The proposed algorithm automatically selects the proper pose, identity, and expression such
that the final model instance accurately fits the 3D face scan.
A strong limitation of these approaches is that, for some of them, the fiducial landmarks needed
during expression learning have to be extracted manually; such methods are usually semi-
automatic and rarely fully automatic.
Local Regions / Features Approaches
A different way proposed in the literature to handle expression variations is to match parts or
regions of faces rather than whole faces. Several notable local techniques were proposed in
Gordon (1992) and Moreno et al. (2005), where the authors employed surface areas, curvatures
around facial landmarks, distances, and angles between them with a nearest neighbor classifier.
In Lee et al. (2005), the authors used ratios of distances and angles between eight fiducial
points as input to a support vector machine classifier. Euclidean/geodesic distances
between anthropometric fiducial points were employed as features in Gupta et al. (2007) along
with linear discriminant analysis classifiers. However, successful automated detection of the
fiducial points is critical for all of these methods.
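The sketch below illustrates this family of methods under assumed inputs: pairwise Euclidean distances between detected fiducial points form the feature vector, and a linear discriminant analysis classifier from scikit-learn predicts identity. The landmark count, the random training data, and the omission of geodesic distances are simplifications for illustration, not the setup of Gupta et al. (2007).

import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def distance_features(landmarks):
    # landmarks: (L, 3) array of 3D fiducial points for one face.
    # Returns the vector of all pairwise Euclidean distances.
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

# Hypothetical training data: 20 scans with 8 landmarks each, 10 subjects.
rng = np.random.default_rng(1)
faces = rng.normal(size=(20, 8, 3))
labels = np.repeat(np.arange(10), 2)
X = np.stack([distance_features(f) for f in faces])
clf = LinearDiscriminantAnalysis().fit(X, labels)
predicted_identity = clf.predict(distance_features(faces[0])[None, :])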
In Mahoor and Abdel-Mottaleb (2009) and Mousavi et al. (2008), the authors presented
approaches based on low-level geometric features and reported results on neutral faces, but the
performance decreased when expression variations were introduced. Using similar features,
the authors in Li et al. (2009) proposed a feature pooling and ranking scheme
to collect various types of low-level geometric features, such as curvatures, and rank them