5.4 Facial Expression Analysis
As machines become increasingly involved in everyday human life and share both living and
work spaces with people, they need to become more intelligent in terms of understanding
human moods and emotions. Embedding machines with systems capable of recognizing human
emotions and mental states is precisely the focus of the affective computing and
human-machine interaction research communities. The following are some interesting areas
where automatic facial expression recognition systems find application:
Human-machine interface: Facial expression is a channel of communication like many others
(e.g., the speech signal). Detecting emotions is natural for humans, but it is a very difficult
task for machines; the purpose of an emotion recognition system is therefore to exploit
emotion-related knowledge in such a way that human-machine communication is improved and
machines and robots become more human-like.
Medical care field: Facial expressions are a direct means of identifying when specific
mental processes (e.g., pain, depression) occur.
Psychological field: Expression detection is tremendously useful for the analysis of
human psychology.
Security field: Decoding the language of micro-expressions is crucial for establishing or
undermining credibility, and for detecting deception in suspects during interrogations.
This is because a micro-expression is a momentary, involuntary facial expression that
people unconsciously display when hiding an emotion.
Education field: Pupils' facial expressions inform the teacher of the need to adjust the
instructional message.
The first studies on this subject date back to the early 1970s with the pioneering work of
Ekman (1972). These studies evidenced that a number of basic facial expressions exist
that can be categorized into six classes, namely anger, disgust, fear, happiness, sadness, and
surprise, plus the neutral expression. This categorization of facial expressions has also been
shown to be consistent across different ethnicities and cultures; hence, these expressions are
in some sense "universally" recognized.
In further studies, Ekman and Friesen (1977) also defined the Facial Action Coding System
to code facial expressions through the movement of face points, as described by the action
units. This work inspired many researchers to analyze facial expressions in 2D by tracking
facial features (e.g., facial landmarks) and measuring the amount of movement these
landmarks undergo from expression to expression in still images and videos. Almost all of
the methods developed in 2D use distributions of these facial landmarks, or distances between
them, as features input to classification systems; the outcome of the classifier is then one of
the facial expression classes. These approaches mainly differ in the facial features
selected and in the classifier used to distinguish among the different facial expressions.
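As a minimal sketch of the landmark-distance idea described above, the following Python snippet computes all pairwise Euclidean distances between a set of 2D facial landmarks and uses them as a feature vector. The landmark coordinates and their interpretation (mouth corners, lip centers) are purely hypothetical illustrations; a real system would obtain landmarks from a face tracker and feed such features to a trained classifier.

```python
import numpy as np

def landmark_distance_features(landmarks):
    """Pairwise Euclidean distances between 2D facial landmarks.

    landmarks: (n, 2) array of (x, y) positions.
    Returns a flat vector of the n*(n-1)/2 inter-landmark distances,
    a common choice of feature for 2D expression classification.
    """
    diffs = landmarks[:, None, :] - landmarks[None, :, :]   # (n, n, 2)
    dist = np.sqrt((diffs ** 2).sum(axis=-1))               # (n, n)
    iu = np.triu_indices(len(landmarks), k=1)               # upper triangle
    return dist[iu]

# Toy example: four hypothetical mouth landmarks
# (left corner, right corner, upper-lip center, lower-lip center).
neutral = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 1.0], [2.0, -1.0]])
smile   = np.array([[-0.5, 0.5], [4.5, 0.5], [2.0, 1.0], [2.0, -1.0]])

f_neutral = landmark_distance_features(neutral)
f_smile = landmark_distance_features(smile)

# The first feature is the mouth-corner distance, which grows
# when the corners move apart in the "smile" configuration.
print(f_smile[0] > f_neutral[0])
```

In practice such feature vectors, extracted per frame or per still image, would be passed to a discriminative classifier (e.g., an SVM or a neural network) trained on labeled examples of the six basic expressions plus neutral.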
Recently, there has been a progressive shift from 2D to 3D in face analysis approaches,
mainly motivated by the robustness of the 3D facial shape to illumination changes, pose,
and scale variations. Although many studies have appeared to perform 3D face recognition
(Berretti et al., 2010c; Gupta et al., 2010; Kakadiaris et al., 2007c; Mian et al., 2008; Queirolo
et al., 2010b; Samir et al., 2009b), very few have taken advantage of the 3D facial geometric