Table 5.7  Classification results using local shape analysis and several classifiers

                    MultiBoost LDA   MultiBoost NB   MultiBoost NN   SVM-Linear
Recognition rate        98.81%          98.76%          98.07%         97.75%
scaled to the true physical dimensions of the captured human faces. Following a setup similar to that of Gong et al. (2009), we randomly divided the 60 subjects into two sets: a training set containing 54 subjects (648 samples) and a test set containing 6 subjects (72 samples).
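A minimal sketch of such a subject-independent split is given below; the subject identifiers, the random seed, and the figure of 12 scans per subject (implied by the 648/54 and 72/6 sample counts) are illustrative assumptions rather than details taken from the original protocol.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Random;

    public class SubjectSplit {
        public static void main(String[] args) {
            // Hypothetical subject identifiers 1..60 (the real IDs are dataset-specific).
            List<Integer> subjects = new ArrayList<>();
            for (int id = 1; id <= 60; id++) {
                subjects.add(id);
            }

            // Random, subject-independent partition: 54 training subjects vs. 6 test
            // subjects, so that no identity appears in both sets. Assuming 12 scans per
            // subject, this yields 648 training samples and 72 test samples.
            Collections.shuffle(subjects, new Random(42));
            List<Integer> trainSubjects = new ArrayList<>(subjects.subList(0, 54));
            List<Integer> testSubjects = new ArrayList<>(subjects.subList(54, 60));

            System.out.println("Training subjects (" + trainSubjects.size() + "): " + trainSubjects);
            System.out.println("Test subjects (" + testSubjects.size() + "): " + testSubjects);
        }
    }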
To drive the classification experiments, we arbitrarily choose a set of six reference subjects, each with their six basic facial expressions. We point out that the selected reference scans appear in neither the training set nor the testing set. These references, with their respective expressive scans corresponding to the highest intensity level, are taken to play the role of representative models for each of the six classes of expressions. From each reference subject, we derive a facial expression recognition experiment.
Several facial expression recognition experiments were conducted, changing the reference each time. Using the Waikato Environment for Knowledge Analysis (Weka) (Hall et al., 2009), we applied the MultiBoost algorithm with three weak classifiers, namely Linear Discriminant Analysis (LDA), Naive Bayes (NB), and Nearest Neighbor (NN), to the extracted features, and achieved average recognition rates of 98.81%, 98.76%, and 98.07%, respectively. We also applied the linear SVM classifier and achieved an average recognition rate of 97.75%. The resulting recognition rates are summarized in Table 5.7.
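As a rough illustration of this setup, the sketch below uses Weka's Java API with the MultiBoostAB meta-classifier and 10-fold cross-validation. The feature file name is hypothetical, Naive Bayes stands in here for the weak learner (the LDA and Nearest Neighbor weak classifiers would be plugged in analogously), and this is not the authors' exact configuration.

    import java.util.Random;
    import weka.classifiers.Classifier;
    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.classifiers.functions.SMO;
    import weka.classifiers.meta.MultiBoostAB;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class ExpressionClassification {
        public static void main(String[] args) throws Exception {
            // Load the extracted local shape features (hypothetical ARFF file);
            // the last attribute is assumed to be the expression class label.
            Instances data = DataSource.read("shape_features.arff");
            data.setClassIndex(data.numAttributes() - 1);

            // MultiBoost with Naive Bayes as the weak classifier; other weak
            // learners are set the same way via setClassifier(...).
            MultiBoostAB multiBoostNB = new MultiBoostAB();
            multiBoostNB.setClassifier(new NaiveBayes());

            // Linear SVM: Weka's SMO uses a degree-1 polynomial (linear) kernel by default.
            SMO svmLinear = new SMO();

            // 10-fold cross-validation, as in the reported experiments.
            for (Classifier clf : new Classifier[] { multiBoostNB, svmLinear }) {
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(clf, data, 10, new Random(1));
                System.out.printf("%s: %.2f%% correct%n",
                        clf.getClass().getSimpleName(), eval.pctCorrect());
                System.out.println(eval.toMatrixString("Confusion matrix"));
            }
        }
    }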
We note that these rates are obtained by averaging, for each classifier (including MultiBoost-LDA), the respective recognition rates of 10 independent, randomly partitioned runs (10-fold cross-validation). We also note that different selections of the reference scans do not significantly affect the recognition results, and there is no large variation in the recognition rate values; the reported results represent the average over the six performed experiments. The MultiBoost-LDA classifier achieves the highest recognition rate and outperforms the other classifiers in terms of accuracy. This is mainly due to the capability of the LDA-based classifier to transform the features into a more discriminative space, which results in a better linear separation between the facial expression classes.
The average confusion matrix for the best-performing classification, obtained using MultiBoost-LDA, is given in Table 5.8.
Table 5.8  Average confusion matrix given by MultiBoost-LDA classifier

         AN       DI       FE       HA       SA       SU
AN     97.92     1.11     0.14     0.14     0.69     0.0
DI      0.56    99.16     0.14     0.0      0.14     0.0
FE      0.14     0.14    99.72     0.0      0.0      0.0
HA      0.56     0.14     0.0     98.60     0.56     0.14
SA      0.28     0.14     0.0      0.0     99.30     0.28
SU      0.14     0.56     0.0      0.0      1.11    98.19