information to perform facial expression recognition. A few years ago, the first solutions to
automatically perform facial expression recognition based on 3D face scans were proposed
using very small databases and categorizing only a few facial expressions (Ramanathan et al.,
2006). The availability of new facial expression databases, like the BU-3DFE database (Yin et al., 2006) and the Bosphorus database collected at Boğaziçi University (Savran et al., 2008), has now propelled research on this topic. In particular, the BU-3DFE database
has become the de facto standard for comparing facial expression recognition algorithms.
This is because, unlike other 3D face data sets, the BU-3DFE database provides a precise categorization of facial scans according to Ekman's six basic facial expressions plus the neutral one, and also provides different levels of expression intensity (see also the description in Section 5.2).
In the following paragraphs, the problem of facial expression recognition is introduced by first reviewing the most recent and influential state-of-the-art solutions and then presenting some specific solutions in detail. For the characterizing features of the main 3D face databases for expression analysis, we refer the reader to Section 5.2.
5.4.1 3D Facial Expression Recognition: State of the Art
Most of the works on 3D facial expression recognition can be categorized as based either on a generic facial model or on feature classification.
In the first category, a general 3D face model (template model) is trained with prior knowledge, such as feature points, shape and texture variations, or local geometry labels. A dense
correspondence between points of 3D faces is usually required to build the template model.
For example, in Ramanathan et al. (2006) a correspondence is established between faces with an expression and their neutral counterparts by minimizing an energy function. A morphable expression model (MEM) is constructed by applying PCA to different expressions, so that new expressions can be projected onto points in a low-dimensional space spanned by the eigen-expressions obtained from the MEM. Expression classification is performed by comparing the Euclidean distances among projected points in the eigen-expression space, and a recognition rate of over 97% is reported on a small, private data set (just 25 subjects with 4 expressions per subject). An approach inspired by advances in artificial intelligence techniques, such as ant colony optimization (ACO) and particle swarm optimization (PSO), is proposed in Mpiperis et al. (2008c). In this work, anatomical correspondence between faces is first established using a generic 3D deformable model and the 83
manually detected facial landmarks of the BU-3DFE database. Then, surface points are used
as a basis for classification, according to a set of classification rules that are discovered by an
ACO/PSO-based rule-discovery algorithm. Evaluated on the BU-3DFE database, the algorithm achieved a total recognition rate of 92.3%. In Mpiperis et al. (2008b), face
recognition and facial expression recognition are performed jointly by decoupling identity
and expression components with a bilinear model. An elastically deformable model algorithm
that establishes correspondence among a set of faces is proposed. Construction of the model
relies on manually identified landmarks that are used to establish point correspondences in
the training stage. Fitting these models to unknown faces enables face recognition invariant
to facial expressions and facial expression recognition with unknown identity. A quantitative evaluation of the technique is conducted on the BU-3DFE database, with an overall recognition rate of 90.5%.
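The eigen-expression idea behind the MEM of Ramanathan et al. (2006) can be sketched as follows. This is a toy illustration, not the authors' implementation: face scans are flattened to vectors, PCA over a training set yields a low-dimensional eigen-expression basis, and a probe is classified by its Euclidean distance to per-expression centroids in that space. All dimensions and the synthetic data are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_eigen_expressions(X, n_components):
    """X: (n_samples, n_points) matrix of flattened face-scan coordinates."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data gives the principal directions (eigen-expressions).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def project(x, mean, basis):
    # Coordinates of a face vector in the eigen-expression space.
    return basis @ (x - mean)

# Toy data: 3 expression classes, 10 noisy samples each (hypothetical shapes).
prototypes = rng.normal(size=(3, 300))
X = np.vstack([p + 0.05 * rng.normal(size=(10, 300)) for p in prototypes])
y = np.repeat(np.arange(3), 10)

mean, basis = fit_eigen_expressions(X, n_components=5)
Z = np.array([project(x, mean, basis) for x in X])
centroids = np.array([Z[y == c].mean(axis=0) for c in range(3)])

def classify(x):
    # Nearest centroid under Euclidean distance in eigen-expression space.
    z = project(x, mean, basis)
    return int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

probe = prototypes[1] + 0.05 * rng.normal(size=300)
print(classify(probe))
```

In the actual MEM setting the input vectors come from densely corresponded 3D scans, so the same point index refers to the same facial location across faces; the sketch skips that correspondence step.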
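The bilinear decoupling of identity and expression used in Mpiperis et al. (2008b) can also be illustrated with a toy sketch. Here a random core tensor stands in for a trained model, and the dimensions are hypothetical, so this is only a sketch of the idea: a face vector is modeled bilinearly from an identity vector and an expression vector, the two factors are fitted to a probe by alternating least squares, and the expression is read off by comparing directions, since the factors are only recovered up to scale.

```python
import numpy as np

rng = np.random.default_rng(1)
P, I, E = 200, 4, 3             # points, identity dims, expression dims (toy)
W = rng.normal(size=(P, I, E))  # stand-in for a trained core tensor

def synthesize(a, b):
    # Bilinear model: y_p = sum_ij W[p, i, j] * a[i] * b[j]
    return np.einsum('pij,i,j->p', W, a, b)

def fit(y, n_iter=500):
    """Alternating least squares for a (identity) and b (expression)."""
    a, b = np.ones(I), np.ones(E)
    for _ in range(n_iter):
        Mb = np.einsum('pij,j->pi', W, b)   # fix b, solve for a
        a = np.linalg.lstsq(Mb, y, rcond=None)[0]
        Ma = np.einsum('pij,i->pj', W, a)   # fix a, solve for b
        b = np.linalg.lstsq(Ma, y, rcond=None)[0]
    return a, b

# Hypothetical expression vectors, as if learned from labeled training data.
expressions = rng.normal(size=(3, E))

a_true = rng.normal(size=I)
y = synthesize(a_true, expressions[2])  # probe: expression 2, unknown identity

a_hat, b_hat = fit(y)
# (a, b) is recovered only up to scale (and sign), so compare directions.
cos = np.abs(expressions @ b_hat) / (
    np.linalg.norm(expressions, axis=1) * np.linalg.norm(b_hat))
print(int(np.argmax(cos)))
```

The scale ambiguity handled here by the absolute cosine is intrinsic to bilinear factorizations; the published method resolves identity and expression with a trained elastically deformable model rather than a random tensor.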