Ekman and Friesen developed a system for describing “all visually distinguishable facial movements,” called the Facial Action Coding System (FACS) (Ekman & Friesen, 1978). FACS is an anatomically oriented coding system, based on the definition of “Action Units” (AUs) of a face that cause facial movements. A single Action Unit may combine the movement of two or more muscles or, conversely, the movement of one muscle may be split across several Action Units.
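As a rough illustration of this coding scheme, the sketch below codes an expression as a set of Action Units. The AU numbers and descriptions follow standard FACS numbering, but the expression-to-AU combinations shown are only commonly cited examples, not the sole valid codings.

```python
# A minimal sketch of FACS-style coding: each Action Unit (AU) names a
# facial movement, and an observed expression is coded as a set of AUs.
# The expression-to-AU mappings below are illustrative examples only.

ACTION_UNITS = {
    1: "inner brow raiser",
    2: "outer brow raiser",
    4: "brow lowerer",
    5: "upper lid raiser",
    6: "cheek raiser",
    12: "lip corner puller",
    15: "lip corner depressor",
    26: "jaw drop",
}

# An expression is coded as a combination of AUs, e.g. a smile with
# raised cheeks (often associated with happiness) as AU6 + AU12.
EXAMPLE_EXPRESSIONS = {
    "happiness": {6, 12},
    "sadness": {1, 4, 15},
    "surprise": {1, 2, 5, 26},
}

def describe(expression: str) -> str:
    """Expand a coded expression into its constituent facial movements."""
    aus = EXAMPLE_EXPRESSIONS[expression]
    parts = ", ".join(f"AU{n} ({ACTION_UNITS[n]})" for n in sorted(aus))
    return f"{expression}: {parts}"

print(describe("happiness"))  # happiness: AU6 (cheek raiser), AU12 (lip corner puller)
```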
The FACS model has inspired the derivation of the facial definition and animation parameters in the framework of MPEG-4 (Tekalp & Ostermann, 2000). In particular, the Facial Definition Parameter (FDP) set was designed to allow the definition of a facial shape and texture, eliminating the need to specify the topology of the underlying geometry, while the Facial Animation Parameter (FAP) set was designed to animate faces, reproducing expressions, emotions and speech pronunciation.
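A minimal sketch of how FAP-style parameters might be represented follows. The idea of expressing FAP values in face-specific units (FAPUs) derived from key facial distances, scaled by a fixed 1/1024 fraction, matches the spirit of the standard, but the class names, the single "open_jaw" parameter, and the sample measurements are illustrative assumptions, not the normative MPEG-4 syntax.

```python
# A sketch of MPEG-4-style facial animation parameters (FAPs). FAP
# values are expressed in face-specific units (FAPUs) derived from key
# distances on the model, so the same FAP stream can animate
# differently proportioned faces. Names and values here are
# illustrative; the full standard defines 68 FAPs.

from dataclasses import dataclass

@dataclass
class FAPU:
    """Face-specific measurements (in model units) used to scale FAPs."""
    mouth_width: float        # MW: distance between the mouth corners
    mouth_nose_sep: float     # MNS: mouth-to-nose separation
    eye_sep: float            # ES: distance between the eyes

@dataclass
class FAP:
    """One animation parameter: a displacement in FAPU-relative units."""
    name: str
    value: int                # encoded, dimensionless FAP value
    unit: str                 # which FAPU the value is expressed in

def displacement(fap: FAP, fapu: FAPU) -> float:
    """Convert an encoded FAP value into a model-space displacement."""
    scale = {
        "MW": fapu.mouth_width,
        "MNS": fapu.mouth_nose_sep,
        "ES": fapu.eye_sep,
    }[fap.unit]
    # The FAPU is a fixed fraction (1/1024) of the measured distance;
    # the encoded value is an integer multiple of that unit.
    return fap.value * (scale / 1024.0)

# e.g. a jaw-opening FAP expressed in mouth-nose-separation units:
face = FAPU(mouth_width=60.0, mouth_nose_sep=30.0, eye_sep=65.0)
open_jaw = FAP(name="open_jaw", value=512, unit="MNS")
print(displacement(open_jaw, face))  # 15.0 model units
```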
Affective Facial Expression Analysis
There is a long history of interest in the problem of recognizing emotion from facial expressions (Ekman & Friesen, 1978), and face perception has been studied extensively over the last twenty years (Davis & College, 1975). The salient issues
in emotion recognition from faces are parallel in some respects to the issues
associated with voices, but divergent in others.
In the context of faces, the task has almost always been to classify examples of
archetypal emotions. That may well reflect the influence of Ekman and his
colleagues, who have argued robustly that the facial expression of emotion is
inherently categorical. More recently, morphing techniques have been used to
probe states that are intermediate between archetypal expressions. These studies do reveal effects consistent with a degree of categorical structure in the domain of facial expression, but the effects are not particularly large, and there may be alternative ways of explaining them, notably by considering how category terms and facial parameters map onto activation-evaluation space (Karpouzis, Tsapatsoulis & Kollias, 2000).
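To make the last point concrete, the sketch below places category terms at illustrative coordinates in activation-evaluation space and labels an intermediate, morph-like point by proximity. The coordinates are assumptions chosen for illustration, not empirically measured values.

```python
# A sketch of mapping category terms onto the 2-D activation-evaluation
# space: each archetypal label gets a coordinate (evaluation =
# negative..positive, activation = passive..active), and an
# intermediate expression, e.g. one produced by morphing, is labelled
# by the nearest category. Coordinates are illustrative only.

import math

EMOTION_COORDS = {            # (evaluation, activation) in [-1, 1]
    "happiness": ( 0.8,  0.5),
    "surprise":  ( 0.4,  0.9),
    "anger":     (-0.7,  0.8),
    "fear":      (-0.5,  0.5),
    "sadness":   (-0.7, -0.4),
}

def nearest_category(evaluation: float, activation: float) -> str:
    """Label a point in activation-evaluation space by the closest term."""
    return min(
        EMOTION_COORDS,
        key=lambda e: math.dist(EMOTION_COORDS[e], (evaluation, activation)),
    )

# A morph 40% of the way from "fear" toward "surprise" still falls
# closest to the "fear" region:
w = 0.4
mid = tuple(w * s + (1 - w) * f for s, f in
            zip(EMOTION_COORDS["surprise"], EMOTION_COORDS["fear"]))
print(mid, "->", nearest_category(*mid))  # -> fear
```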
Analysis of the emotional expression of a human face requires a number of pre-processing steps that attempt to detect or track the face; to locate characteristic facial regions such as the eyes, mouth and nose; to extract and follow the movement of facial features, such as characteristic points in these regions; or to model facial gestures using anatomical information about the face.
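As one possible realization of the first two of these steps (the text does not prescribe a particular detector), the sketch below uses OpenCV's stock Haar cascades to detect the face and then localize the eye regions inside the detected face rectangle; the file name face.jpg is a hypothetical input.

```python
# Face detection followed by facial-region localization, using
# OpenCV's bundled Haar cascades. Restricting the eye search to the
# detected face rectangle reduces both cost and false positives.

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def locate_facial_regions(image_path: str):
    """Detect faces, then localize eye regions within each face."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]      # search only inside the face
        eyes = eye_cascade.detectMultiScale(roi)
        # report eye boxes in full-image coordinates
        results.append({
            "face": (x, y, w, h),
            "eyes": [(x + ex, y + ey, ew, eh)
                     for (ex, ey, ew, eh) in eyes],
        })
    return results

print(locate_facial_regions("face.jpg"))  # hypothetical input image
```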
Facial features can be viewed (Ekman & Friesen, 1975) as static (such as skin color), slowly varying (such as permanent wrinkles), or rapidly varying (such as raising the eyebrows) with respect to their evolution over time. Detecting the position and shape of the mouth, eyes and eyelids, and extracting related features, are the targets of techniques applied to still images of humans. It has, however, been