In their fitting method, they manually select seven corresponding face features on their model
and in the depth scan. A morphable model of expressions was proposed by Lu and Jain (2008).
Starting from an existing neutral scan, they use their expression model to adjust the vertices
in a small region around the nose to obtain a better fit of the neutral scan to a scan with a
certain expression. Amberg et al. (2008) built a PCA model from 270 identity vectors and a
PCA model from 135 expression vectors and combined the two into a single morphable face
model. They fitted this model to 3D scans of both the UND (University of Notre Dame) and
GAVAB (Grupo de Algorítmica para la Visión Artificial y la Biometría) face sets (see next
section), and used the acquired model coefficients for expression-invariant face matching with
considerable success.
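To make the merged linear model concrete, here is a minimal sketch of combining an identity PCA model and an expression PCA model into a single morphable model by stacking their components. All names are hypothetical, and it is assumed that both models share the same mean shape and vertex ordering; the actual construction of Amberg et al. (2008) may differ in detail:

```python
import numpy as np

def combine_pca_models(mean, id_basis, id_sigma, exp_basis, exp_sigma):
    """Stack identity and expression components into one linear model.

    mean      : (3n,)      shared mean face (stacked xyz coordinates)
    id_basis  : (m_id, 3n) identity eigenvectors,   id_sigma  : (m_id,)
    exp_basis : (m_ex, 3n) expression eigenvectors, exp_sigma : (m_ex,)

    Returns the mean plus the concatenated basis and sigmas, so that a
    single coefficient vector of length m_id + m_ex describes a face.
    """
    basis = np.vstack([id_basis, exp_basis])
    sigma = np.concatenate([id_sigma, exp_sigma])
    return mean, basis, sigma
```

A coefficient vector fitted with such a model splits into an identity part and an expression part, which is what makes expression-invariant matching on the identity coefficients possible.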
Non-statistical deformation models have been proposed as well. Huang et al. (2006) introduced
a global-to-local deformation framework to deform a shape with an arbitrary dimension (2D,
3D or higher) to a new shape of the same class. They show their framework's applicability
to 3D faces, for which they deform an incomplete source face to a target face. Kakadiaris
et al. (2006) deform an annotated face model to scan data. Their deformation is driven by
triangles of the scan data attracting the vertices of the model. The deformation is restrained
by stiffness, mass, and damping matrices that control the resistance, velocity, and acceleration
of the model's vertices. Whitmarsh et al. (2006) fit a parameterized CANDIDE face model to
scan data by optimizing shape and action parameters. The advantage of such deformable models
is that they are not limited to the statistical variations of the example shapes, so the deformation
is less restricted. However, this is also their disadvantage, because these models cannot
rely on statistics in the case of noise and missing data.
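The stiffness/mass/damping formulation mentioned above amounts to integrating a damped mechanical system. As a generic, simplified sketch of such physics-based deformation (not the actual implementation of Kakadiaris et al. (2006); all names are hypothetical), one semi-implicit Euler step pulling model vertices toward corresponding scan points could look like this:

```python
import numpy as np

def deformation_step(x, v, targets, dt=0.01, mass=1.0,
                     stiffness=50.0, damping=5.0):
    """One semi-implicit Euler step of a damped deformation.

    x, v, targets : (n, 3) arrays of vertex positions, velocities, and
    the scan points attracting them. Stiffness scales the attraction
    force, damping resists velocity, and mass controls the resulting
    acceleration of the model vertices.
    """
    force = stiffness * (targets - x) - damping * v
    v = v + dt * force / mass   # acceleration updates velocity
    x = x + dt * v              # new velocity updates positions
    return x, v
```

Iterating this step moves the model vertices toward the scan while the stiffness and damping terms keep the motion stable, mirroring the roles of resistance, velocity, and acceleration described above.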
4.2 Data Sets
In this chapter, we fit a morphable face model, defined as $S_{\mathrm{inst}} = \bar{S} + \sum_{i=1}^{m} w_i \sigma_i s_i$, to 3D
scan data. By doing this, we obtain a clean model of the face scan, which we can use to identify
3D faces. The scans that we fit the morphable face model to are the 3D face scans of the
UND (Chang et al., 2005), a subset of the GAVAB (Moreno and Sánchez, 2004; ter Haar et al.,
2008) and a subset of the Binghamton University 3D Facial Expression (BU-3DFE) (Yin et al.,
2006) databases. The UND set contains 953 frontal range scans of 277 different subjects with
mostly neutral expression. The GAVAB set consists of nine low-quality scans for each of its
61 subjects, including scans for different poses and expressions. From this set, we selected, per
subject, four neutral scans, namely the two frontal scans and the scans in which the subject looks
up and down. Scan data acquired from these poses differ in point-cloud density and completeness,
and show relatively small facial changes. The BU-3DFE set was developed for facial expression
classification. This set contains, for each of its 100 subjects, one neutral scan and 24 expression
scans at different intensity levels. From this set, we selected the neutral scans and the
lowest-intensity expression scans (anger, disgust, fear, happiness, sadness, and surprise at level 1).
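As a concrete illustration of the reconstruction $S_{\mathrm{inst}} = \bar{S} + \sum_{i=1}^{m} w_i \sigma_i s_i$ defined at the start of this section, a minimal sketch follows. Only the formula itself comes from the text; the function and variable names, and the toy dimensions in the usage example, are hypothetical:

```python
import numpy as np

def morphable_instance(mean_shape, eigenvectors, sigmas, weights):
    """Reconstruct a face instance from the morphable model.

    mean_shape   : (3n,)   stacked xyz coordinates of the mean face
    eigenvectors : (m, 3n) principal components s_i of the shape model
    sigmas       : (m,)    standard deviations sigma_i of the components
    weights      : (m,)    model coefficients w_i estimated during fitting
    """
    return mean_shape + (weights * sigmas) @ eigenvectors

# Toy example: m = 2 components on n = 2 vertices.
mean = np.zeros(6)
s = np.eye(2, 6)            # two orthonormal components
sigma = np.array([2.0, 0.5])
w = np.array([1.0, -1.0])   # coefficients, as recovered by fitting
print(morphable_instance(mean, s, sigma, w).reshape(-1, 3))
```

Fitting the model to a scan amounts to estimating the coefficient vector $w$; the reconstructed instance then serves as the clean face model used for identification.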
Although the currently used morphable model is based on faces with neutral expressions only,
it makes sense to investigate the performance of our face model fitting under changes in pose
and expression. These variations in 3D scan data, which are typical of a non-cooperative
scanning environment, allow us to evaluate our 3D face recognition methods.
We aim at 3D face recognition, so we need to segment the face from each scan. For that,
we employ our pose normalization method (ter Haar and Veltkamp, 2008), which normalizes
the pose of each face scan.
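The pose normalization itself is described in ter Haar and Veltkamp (2008); as a rough stand-in only, a generic PCA-based alignment of a scan's point cloud might look as follows. This is not the cited method, which differs in detail and is more robust:

```python
import numpy as np

def pca_align(points):
    """Roughly normalize the pose of a face scan: translate the point
    cloud to its centroid and rotate its principal axes onto x, y, z.

    points : (n, 3) array of scan points.
    """
    centered = points - points.mean(axis=0)
    # Eigenvectors of the 3x3 covariance matrix give the principal axes
    # (np.linalg.eigh returns them in ascending eigenvalue order).
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    # Project onto the axes, largest-variance axis first; note that the
    # axis signs remain ambiguous in this simple sketch.
    return centered @ vecs[:, ::-1]
```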