M-step: for the facial motion region, estimate the translation vector based on equations (7.10) and (7.12). Then we construct the vector and project it using equation (7.13). Finally, the constrained estimate is given by equation (7.14).
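The M-step above can be sketched in code. Since equations (7.10)–(7.14) are not reproduced here, this is only a minimal illustration under two assumptions: the translation is estimated as the mean motion over the facial region, and the constrained estimate is obtained by projecting the residual motion onto a learned orthonormal subspace. All names below are hypothetical.

```python
import numpy as np

def m_step_constrained(flow, basis, mean_shape):
    """Hypothetical sketch of the M-step described above.

    flow:       (N, 2) array of per-pixel motion vectors in the facial region
    basis:      (D, K) orthonormal subspace basis (e.g. learned by PCA)
    mean_shape: (D,) mean of the training motion vectors

    Returns the estimated translation and the constrained motion estimate.
    """
    # Estimate the translation vector as the mean motion in the region
    # (a stand-in for equations (7.10) and (7.12), which are not shown here).
    t = flow.mean(axis=0)

    # Construct the residual motion vector after removing the translation.
    v = (flow - t).ravel()

    # Project onto the subspace (a stand-in for equation (7.13)) ...
    coeffs = basis.T @ (v - mean_shape)

    # ... and reconstruct the constrained estimate (a stand-in for (7.14)).
    v_constrained = mean_shape + basis @ coeffs
    return t, v_constrained.reshape(flow.shape)
```

With an orthonormal basis, the projection step is the least-squares fit of the residual motion within the subspace, which is how such shape constraints are commonly enforced.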
2. Experimental Results
We evaluate the efficacy of the proposed hybrid motion analysis method
by using the extracted features in a facial expression classification task. The
publicly available CMU Cohn-Kanade expression database [Kanade et al., 2000]
is used. From the database, we selected 47 subjects who have at least 4 coded
expression sequences. Overall, the selected database contains 2981 frames.
The subjects are 72% female and 28% male, and 89% Euro-American, 9%
Afro-American, and 2% Asian. Several different lighting conditions are present
in the selected database. The image size of all the data is 640 × 480. For the
Cohn-Kanade database, Tian and Bolle [Tian and Bolle, 2001] achieved a high
neutral face detection rate using geometric features only, which indicates that
the database does not contain expressions with little geometric motion yet large
texture variation. Using geometric features only on this database, Cohen et al.
[Cohen et al., 2003] reported good recognition results for happiness and surprise,
but considerably more confusion among anger, disgust, fear, and sadness. In this
section, we present experimental results showing that the proposed method
improves the performance for these four expressions.
We select seven exemplars: six expressions and neutral. The six
expressions are anger, disgust, fear, happiness, sadness, and surprise. In our
experiments, we first assign a neutral vs. non-neutral probability using a neural
network similar to that of [Tian and Bolle, 2001], which achieved a recognition
rate of 92.8% for neutral. For the remaining exemplars, we use 4 components
for each GMM. The tracking results are used to perform facial expression
classification. Although this classifier may not be as
good as more sophisticated classifiers such as those in [Bartlett et al., 1999,
Cohen et al., 2003, Donato et al., 1999, Zhang et al., 1998], it can be used as
a test-bed to measure the relative performance of different features and of the
proposed adaptation algorithm.
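The classification stage described above, one GMM per expression exemplar with a feature vector assigned to the exemplar whose mixture yields the highest likelihood, can be sketched as follows. This is a minimal illustration, not the chapter's exact formulation: it assumes diagonal-covariance mixtures with already-fitted parameters, and all function names are hypothetical.

```python
import numpy as np

def gmm_log_likelihood(x, weights, means, variances):
    """Log-likelihood of feature vector x under a diagonal-covariance GMM.

    weights:   (K,) mixture weights, summing to 1
    means:     (K, D) per-component means
    variances: (K, D) per-component diagonal variances
    """
    # Per-component Gaussian log-densities under diagonal covariance.
    diff2 = (x - means) ** 2 / variances                      # (K, D)
    log_norm = -0.5 * np.sum(np.log(2 * np.pi * variances), axis=1)
    comp_ll = log_norm - 0.5 * diff2.sum(axis=1)              # (K,)
    # Numerically stable log-sum-exp over the weighted components.
    a = np.log(weights) + comp_ll
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

def classify(x, models):
    """Pick the exemplar whose GMM assigns x the highest likelihood."""
    return max(models, key=lambda name: gmm_log_likelihood(x, *models[name]))
```

In practice the per-exemplar mixtures (4 components each in the text) would be fitted with EM on the training features; maximum-likelihood selection over the fitted models then serves as the simple test-bed classifier described above.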
In the first experiment, we compare the classification performance of using
geometric features only against using both geometric and ratio-image-based
appearance features. We use 60% of each person's data as training data and
the rest as test data; thus it is a person-dependent test. In all the experiments
we have done, the geometric-feature-only method and the hybrid-feature method
give similar results for “happiness” and “surprise”. That means these two
expressions have distinct geometric features, so appearance features are not
crucial for them. This