Although the face is considered the main “demonstrator” of a user's emotion (Ekman
& Friesen, 1975), recognizing the accompanying gesture increases confidence in the
result of the facial expression subsystem. In the current implementation, the two
subsystems are combined as a weighted sum: let b_k be the degree of belief that the
observed sequence presents the k-th emotional state, obtained from the facial
expression analysis subsystem, and EI_k be the corresponding emotional state
indicator, obtained from the affective gesture analysis subsystem; then the overall
degree of belief d_k is given by:
d_k = w_1 · b_k + w_2 · EI_k        (6)
where the weights w_1 and w_2 account for the reliability of the two subsystems
with respect to emotional state estimation. In this implementation we use
w_1 = 0.75 and w_2 = 0.25. These values allow the affective gesture analysis
subsystem to contribute in cases where the facial expression analysis subsystem
produces ambiguous results, while at the same time leaving the latter as the main
contributor to the overall decision.
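As an illustration, the following Python sketch computes this fusion per emotional
state. The emotion labels, variable names, and example values are assumptions made
for illustration only; they are not taken from the chapter.

# Illustrative sketch of the weighted fusion in Equation (6); emotion labels
# and example numbers are assumptions, not the chapter's actual data.

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

W_FACE = 0.75     # w_1: weight of the facial expression analysis subsystem
W_GESTURE = 0.25  # w_2: weight of the affective gesture analysis subsystem

def fuse_beliefs(face_beliefs, gesture_indicators):
    """Combine per-emotion beliefs b_k (face) and indicators EI_k (gesture)
    into overall degrees of belief d_k = w_1 * b_k + w_2 * EI_k."""
    return {
        emotion: W_FACE * face_beliefs[emotion] + W_GESTURE * gesture_indicators[emotion]
        for emotion in EMOTIONS
    }

# Example: an ambiguous facial result that the gesture subsystem disambiguates.
face = {"anger": 0.05, "disgust": 0.02, "fear": 0.40, "joy": 0.03,
        "sadness": 0.05, "surprise": 0.45}
gesture = {"anger": 0.0, "disgust": 0.0, "fear": 0.1, "joy": 0.0,
           "sadness": 0.0, "surprise": 0.9}

fused = fuse_beliefs(face, gesture)
print(max(fused, key=fused.get))  # -> "surprise"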
For the input sequence shown in Figure 3, the affective gesture analysis
subsystem consistently selected “surprise.” This reinforced the output of the
facial expression analysis subsystem, which was around 85%.
Conclusions and Future Work
In this chapter, we described a holistic approach to emotion modeling and
analysis and its application in MMI. Beginning from a symbolic representation of
human emotions in this context, based on their expression via facial expressions
and hand gestures, we showed that it is possible to transform quantitative feature
information from video sequences into an estimate of a user's emotional state.
This transformation is based on a fuzzy rule architecture that takes into account
knowledge of emotion representation and the intrinsic characteristics of human
expression. Input to these rules consists of features extracted and tracked from
the input data, i.e., facial features and hand movement. While these features can
be used for simple representation purposes, e.g., animation or task-based
interfacing, our approach is closer to the target of affective computing: the
features are utilized to provide feedback on the user's emotional state while in
front of a computer.
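To make the idea of such a rule concrete, the sketch below shows one possible fuzzy
rule mapping tracked features to a “surprise” belief. The feature names, membership
functions, thresholds, and the rule itself are illustrative assumptions, not the
chapter's actual rule base.

# Minimal, illustrative sketch of a fuzzy rule mapping tracked features to an
# emotion belief. All names and thresholds are hypothetical.

def ramp(x, low, high):
    """Simple piecewise-linear membership: 0 below `low`, 1 above `high`."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def surprise_belief(eyebrow_raise, mouth_opening, hand_speed):
    """Rule sketch: IF eyebrows raised AND mouth open AND hands move fast
    THEN surprise. Conjunction via min, as in Mamdani-style inference."""
    raised = ramp(eyebrow_raise, 0.2, 0.6)    # normalized eyebrow displacement
    open_mouth = ramp(mouth_opening, 0.3, 0.7)
    fast_hands = ramp(hand_speed, 0.4, 0.8)
    return min(raised, open_mouth, fast_hands)

print(surprise_belief(0.7, 0.8, 0.9))  # -> 1.0 (strong "surprise" activation)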
Future work in the affective modeling area includes enriching the gesture
vocabulary with more affective gestures and feature-based descriptions.