what can be assembled out of the pre-recorded behavior. Neff et al.
(2008) aim at generating character-specific gesture style capturing the
individual differences of human speakers. Based on statistical gesture
profiles learned from annotated multimodal behavior, the system takes
arbitrary texts as input and produces synchronized conversational
gestures in the style of a particular speaker. The resulting gesture
animations succeed in making an embodied character look more lively
and natural and have empirically been shown to be consistent with a
given performer's style. The approach does not need to account for the meaning-carrying functions of gestures, since it focuses on discourse gestures and beats.
In summary, research on automatic gesture production has either
emphasized general patterns in the formation of iconic gestures, or
concentrated on individual gesturing patterns. In the next section, a modeling approach will be presented which goes beyond these by accounting for both systematic commonalities across speakers and idiosyncratic patterns of individual speakers.
4. The GNetIc Generation Approach for Iconic Gesture
A recent approach to generating iconic gesture forms from an underlying imagistic representation of content is the Generation Network for Iconic Gestures (GNetIc; Bergmann and Kopp, 2009a). It employs a formalism called Bayesian decision networks (BDNs), also termed Influence Diagrams (Howard and Matheson, 2005), which supplements standard Bayesian networks with decision nodes. This formalism provides a representation
of a finite sequential decision problem, combining probabilistic
and rule-based decision making. GNetIc is a feature-based account of gesture generation, i.e., gestures are represented in terms of characterizing features such as their representation technique and physical form features. These make up the outcome variables in the model
that divide into chance variables, quantified by conditional probability distributions conditioned on other variables ('gesture occurrence' (G), 'representation technique' (RT), 'handedness' (H), and 'handshape' (HS)), and decision variables, used where sufficient data is lacking but a sound theoretical account is available. The latter are modeled by way of explicit rules in the respective decision nodes (the gesture form features 'palm orientation' (PO), 'back-of-hand orientation' (BoH), 'movement type' (MT), and 'movement direction' (MD)). Factors that potentially contribute to any of these choices are treated as input variables. So far, three different factors have been incorporated
into this model: discourse context, the previously performed gesture,
and features of the referent. The probabilistic part of the network is
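The combination of chance nodes quantified by conditional probability tables and decision nodes governed by explicit rules can be sketched in code. The following is a minimal toy illustration of the decision-network idea, not the published GNetIc model: all node names, factor values, probabilities, and rules here are invented placeholders.

```python
import random

# Input variables of the network (illustrative values only).
# The previous gesture is listed as an input factor in the text,
# but this toy sketch does not condition on it.
context = {
    "discourse": "introducing",   # hypothetical: vs. "maintaining"
    "prev_gesture": None,
    "referent_shape": "longish",  # hypothetical referent feature
}

# --- Chance nodes: conditional probability tables (toy numbers) ---
CPT_GESTURE = {  # P(gesture occurs | discourse context)
    "introducing": 0.8,
    "maintaining": 0.3,
}
CPT_TECHNIQUE = {  # P(representation technique | referent shape)
    "longish": {"drawing": 0.6, "shaping": 0.3, "posturing": 0.1},
    "round":   {"drawing": 0.2, "shaping": 0.6, "posturing": 0.2},
}

def sample(dist, rng):
    """Draw one value from a {value: probability} table."""
    r, acc = rng.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r < acc:
            return value
    return value  # fall through on floating-point rounding

def generate_gesture(ctx, rng):
    # Chance variable G: does a gesture occur at all?
    if rng.random() >= CPT_GESTURE[ctx["discourse"]]:
        return None
    # Chance variable RT: sample a representation technique.
    technique = sample(CPT_TECHNIQUE[ctx["referent_shape"]], rng)
    # --- Decision nodes: rule-based form-feature choices ---
    # Toy rule: drawing gestures trace the outline, palm down.
    if technique == "drawing":
        palm, movement = "palm-down", "trace-outline"
    else:
        palm, movement = "palm-toward-referent", "static"
    return {"technique": technique, "palm": palm, "movement": movement}

rng = random.Random(7)
print(generate_gesture(context, rng))
```

The point of the sketch is the division of labor: variables with learned distributions are sampled from tables, while form features without sufficient data are fixed deterministically by rules, mirroring the split between chance and decision nodes described above.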