determining which emotional state could be associated with the
performed gesture. Those Bayesian classifiers are used before and
after performing data fusion between modalities. In the first case, the
classifiers are applied separately to speech, face, and gesture features.
The separate results are then combined via a voting algorithm. In the
second case, a single classifier receives the features from all
modalities as input.
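The two strategies correspond to what is commonly called late fusion (per-modality classifiers combined by voting) and early fusion (feature-level concatenation into a single classifier). The sketch below illustrates both, assuming scikit-learn's GaussianNB as the Bayesian classifier; the feature matrices and labels are hypothetical placeholders, not the original study's data.

```python
# Minimal sketch of the two fusion strategies, assuming GaussianNB as
# the Bayesian classifier. X_speech, X_face, X_gesture, and y are
# hypothetical stand-ins for real per-modality features and labels.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n = 200                                   # number of training samples
y = rng.integers(0, 3, size=n)            # 3 emotion classes (illustrative)
X_speech = rng.normal(size=(n, 10))       # per-modality feature matrices
X_face = rng.normal(size=(n, 8))
X_gesture = rng.normal(size=(n, 6))

# Late fusion: one classifier per modality, combined by majority vote.
clfs = [GaussianNB().fit(X, y) for X in (X_speech, X_face, X_gesture)]

def vote(sample_parts):
    """Majority vote over per-modality predictions for one sample."""
    preds = [clf.predict(x.reshape(1, -1))[0]
             for clf, x in zip(clfs, sample_parts)]
    return np.bincount(preds).argmax()

# Early fusion: a single classifier over the concatenated features.
X_all = np.hstack([X_speech, X_face, X_gesture])
fused_clf = GaussianNB().fit(X_all, y)
```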
Sanghvi et al. (2011) evaluate children's emotional reactions while
playing chess with a robot by analyzing their upper body movements
and posture. To do so, computer vision algorithms (e.g., CAMShift)
are applied to extract the children's body silhouette from the input
video. From the silhouette, the body lean angle (frontal/backward),
the curvature of the back, the Quantity of Motion, and a Contraction
Index are determined.
Then, the authors compute the 1st, 2nd, and 3rd derivatives of each
feature's time-series and the histograms of each derivative. The result
is a stream of features that is classified using 63 classifiers. The best
results are obtained by the ADTree and OneR classifiers, and they
show that the Quantity of Motion and the 2nd derivative of the movement
features are the most significant indicators for discriminating the user's
emotional state.
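As a rough illustration of this derivative-plus-histogram pipeline, the sketch below computes the first three finite-difference derivatives of a single feature's time-series and summarizes each as a histogram; the bin count and the synthetic Quantity of Motion series are assumptions made for the example.

```python
# Illustrative sketch: for one movement feature's time-series, compute
# the 1st-3rd derivatives and summarize each as a histogram. The bin
# count and the synthetic series are assumptions for illustration.
import numpy as np

def derivative_histograms(series, n_bins=10):
    """Return concatenated histograms of the 1st-3rd derivatives."""
    feats = []
    d = np.asarray(series, dtype=float)
    for _ in range(3):
        d = np.diff(d)                    # finite-difference derivative
        hist, _ = np.histogram(d, bins=n_bins)
        feats.append(hist)
    return np.concatenate(feats)          # one feature vector per series

# Example: a Quantity of Motion signal sampled over time (synthetic).
qom = np.cumsum(np.random.default_rng(1).normal(size=100))
print(derivative_histograms(qom).shape)   # (30,) with n_bins=10
```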
Kleinsmith and Bianchi-Berthouze (2011) review psychological and
neuroscientific work demonstrating the importance of both gesture
movement and form in the process of human affect recognition.
For example, by showing participants emotional videos both upright
and upside down, researchers have found that emotion recognition
from the inverted stimuli remains possible but at a lower rate, revealing the
contribution of form in the process. The authors present a system for
recognition of affective postures based on sequences of static postures.
Non-acted expressions of affect are collected by motion-capturing
the body joint rotations of video game players at the moment a game
is won or lost. The system is composed of two separate modules: the
first for the classification of static postures and the second for the
classification of posture sequences. Each posture is described as a
vector of body joint rotations. The input of the posture classification
module is a static posture, while the output is a probability
distribution over the label set {defeated, triumphant, neutral}. A
decision rule is then applied to sequences of static postures,
computing the cumulative probability with which each label appears
in the sequence.
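A minimal sketch of such a sequence-level decision rule follows, under the assumption that it simply accumulates the per-frame distributions and picks the label with the highest total; the per-frame probabilities are hypothetical stand-ins for the static-posture classifier's output.

```python
# Sketch of the cumulative-probability decision rule over a sequence
# of static postures. The per-frame distributions below are
# hypothetical stand-ins for the posture classifier's output.
import numpy as np

LABELS = ("defeated", "triumphant", "neutral")

def classify_sequence(frame_probs):
    """frame_probs: (n_frames, 3) array of per-posture distributions."""
    cumulative = np.asarray(frame_probs).sum(axis=0)  # per-label total
    return LABELS[int(cumulative.argmax())]

# Example: three consecutive posture classifications.
probs = [[0.2, 0.7, 0.1],
         [0.3, 0.5, 0.2],
         [0.4, 0.4, 0.2]]
print(classify_sequence(probs))           # -> "triumphant"
```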
Previous work by the same authors (Kleinsmith et al., 2011)
focuses on the recognition of non-acted affective states based
solely on body postures. In particular, they present models built
on a low-level description of the body configuration. Each body posture