Expression Analysis Frameworks for Facial Motion Understanding
Systems that analyze faces from monocular images are designed to deliver motion
information at the level of detail most suitable for their final application.
Some of the most significant differences among the techniques found in the
literature lie in the animation semantics they use to describe face actions.
Some systems aim to provide very high-level face motion and expression data in
the form of emotion semantics, for instance detecting joy, fear, or happiness
on faces. Others provide generic motion data that describes the action of the
facial features, for example detecting open or closed eyes. Still others
estimate, with varying accuracy, the 3D motion of the overall face, producing
very low-level face animation parameters.
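The three levels of semantics can be made concrete as data structures. The
following minimal sketch in Python uses hypothetical names (the label set,
feature names, and parameter names are illustrative, not taken from the text)
to show how the same observed face could be described at each level:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict


class Emotion(Enum):
    """High-level semantics: an emotion category (hypothetical label set)."""
    JOY = "joy"
    FEAR = "fear"
    HAPPINESS = "happiness"


@dataclass
class FeatureAction:
    """Mid-level semantics: what a facial feature is doing."""
    feature: str   # e.g. "left_eye", "mouth"
    state: str     # e.g. "open", "closed"


@dataclass
class LowLevelMotion:
    """Low-level semantics: raw face animation parameters."""
    parameters: Dict[str, float]  # parameter name -> displacement


# The same observed face, described at each of the three levels:
emotion_level = Emotion.JOY
feature_level = [FeatureAction("mouth", "open"),
                 FeatureAction("left_eye", "closed")]
low_level = LowLevelMotion({"jaw_drop": 0.40, "lip_corner_raise": 0.70})
```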
In an analysis-synthesis scheme for generating face animation, the analysis and
synthesis parts must share the same level of semantics. The more specific the
motion information produced by the analysis, the fewer free interpretations the
face animation (FA) system has to make. Replicating the exact motion of the
person being analyzed requires very detailed action information. If we generate
only rough data about the face actions, we can obtain customized face motion
only if the person's expressive behavior has been studied beforehand and the FA
system already holds the specific details of that individual.
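This trade-off can be sketched in code. In the following minimal, hypothetical
example (the function and parameter names are illustrative, not from the text),
detailed low-level input leaves the synthesizer nothing to interpret, whereas
rough emotion-level input only works if a previously studied per-person profile
supplies the missing details:

```python
def synthesize_from_low_level(parameters):
    """Detailed input: the synthesizer simply applies every parameter."""
    for name, value in parameters.items():
        print(f"apply {name} = {value:+.2f}")


def synthesize_from_emotion(emotion, person_profile):
    """Rough input: the synthesizer must fill in the details itself,
    which is only faithful if a per-person profile provides them."""
    parameters = person_profile.get(emotion)
    if parameters is None:
        raise ValueError(f"no stored expression profile for {emotion!r}")
    synthesize_from_low_level(parameters)


# The profile plays the role of the previously studied behavior.
profile = {"joy": {"jaw_drop": 0.40, "lip_corner_raise": 0.70}}
synthesize_from_emotion("joy", profile)
```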
Face motion and expression analysis methods are difficult to classify because
many of them share common processing characteristics. Despite this, we have
grouped them according to the precision of the motion information they generate
and the importance of the role that synthesis plays during analysis.
Methods that Retrieve Emotion Information
Humans detect and interpret faces and facial expressions in a scene with little
or no effort. The systems discussed in this section accomplish this task
automatically. The main concern of these techniques is to classify the observed
facial expressions in terms of generic facial actions or emotion categories,
not to recover the face animation that would be needed to reproduce those
expressions synthetically.
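A classifier of this kind can be as simple as a set of rules mapping detected
feature actions to an emotion label, with no animation parameters involved. The
following sketch is hypothetical (the rules and feature names are invented for
illustration, not taken from any cited system):

```python
from typing import List, Tuple

# Each detected action is a (feature, state) pair.
Action = Tuple[str, str]


def classify_expression(actions: List[Action]) -> str:
    """Map generic facial actions straight to an emotion category."""
    states = set(actions)
    if ("mouth_corners", "raised") in states:
        return "joy"
    if ("eyes", "wide_open") in states and ("eyebrows", "raised") in states:
        return "fear"
    return "neutral"


print(classify_expression([("mouth_corners", "raised")]))  # -> "joy"
```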
Yacoob has explored the use of local parameterized models of image motion for
recognizing the non-rigid and articulated motion of human faces. These models
provide a description of the motion in terms of a small number of parameters
that are intuitively related to the motion of the facial features.
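One standard instance of such a local parameterized model is the six-parameter
affine model, which describes the dense motion of an image region with just six
numbers; the translation terms and the linear terms (divergence, curl,
deformation) relate directly to how a feature region moves. The sketch below
shows this model under that assumption; it is not a reproduction of the exact
parameterization used in the cited work:

```python
import numpy as np


def affine_flow(params, xs, ys):
    """Dense motion (u, v) at coordinates (xs, ys) from six affine
    parameters [a0..a5]. a0, a3 are translation; the linear terms
    capture divergence, rotation, and deformation of the region."""
    a0, a1, a2, a3, a4, a5 = params
    u = a0 + a1 * xs + a2 * ys
    v = a3 + a4 * xs + a5 * ys
    return u, v


# Example: a slight horizontal stretch plus a small upward translation,
# the kind of compact region description these models provide.
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
u, v = affine_flow([0.0, 0.1, 0.0, -0.05, 0.0, 0.0], xs, ys)
```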