referent features are linked to morphological gesture features by an
intermediate level of image description features. These explicate the
imagistic content of iconic gestures in terms of separable, qualitative
features that describe the meaningful geometric and spatial properties
of entities.
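A minimal sketch of how such a layered feature representation could be organized as a data structure is given below; the class names, attributes, and toy mapping rules are illustrative assumptions, not the schema of the system described above.

from dataclasses import dataclass, field

# Hypothetical sketch of the three feature levels; all names are illustrative.

@dataclass
class ReferentFeatures:
    """Properties of the entity the speaker refers to."""
    entity: str
    shape: str   # e.g. "round", "elongated"
    size: str    # e.g. "large", "small"

@dataclass
class ImageDescriptionFeatures:
    """Intermediate, qualitative description of the imagistic content."""
    geometric_properties: list[str] = field(default_factory=list)
    spatial_properties: list[str] = field(default_factory=list)

@dataclass
class MorphologicalGestureFeatures:
    """Concrete form features of the resulting iconic gesture."""
    handshape: str
    movement: str
    orientation: str

def map_referent_to_gesture(ref: ReferentFeatures) -> MorphologicalGestureFeatures:
    # Two-step mapping: referent -> image description -> gesture morphology.
    image = ImageDescriptionFeatures(
        geometric_properties=[ref.shape],
        spatial_properties=[f"extent:{ref.size}"],
    )
    handshape = "curved hand" if "round" in image.geometric_properties else "flat hand"
    movement = "large arc" if "extent:large" in image.spatial_properties else "small stroke"
    return MorphologicalGestureFeatures(handshape, movement, "palm toward listener")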
3.2 Customized gesture generation
The research reviewed so far is devoted to building general models of
gesture use, i.e. it incorporates only systematic inter-personal patterns
of gesture use. What these systems do not consider is individual
variation, which is investigated by another line of research.
Hartmann et al. (2006) identified six expressivity parameters of gesture
quality as an intermediate level of behavior parameterization between
holistic, qualitative communicative functions such as mood, personality,
and emotion on the one hand, and low-level animation parameters
like joint angles on the other hand. These parameters were applied
to the gesture engine of the embodied agent GRETA, whose library
of known prototype gestures is tagged for communicative function.
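To make the notion of an intermediate parameterization concrete, the following Python sketch scales the joint angles of a stored prototype gesture by expressivity values derived from a holistic state. The parameter names, the mood-to-parameter mapping, and the scaling rule are assumptions made for illustration; they do not reproduce the published GRETA engine.

from dataclasses import dataclass

@dataclass
class Expressivity:
    spatial_extent: float      # amplitude of the movement
    temporal_extent: float     # speed/duration of the stroke
    fluidity: float            # smoothness between consecutive keyframes
    power: float               # acceleration of the stroke
    repetition: float          # tendency to repeat the stroke
    overall_activation: float  # overall amount of behavior produced

def expressivity_for(mood: str) -> Expressivity:
    """Map a holistic, qualitative state to intermediate parameters (toy values)."""
    if mood == "excited":
        return Expressivity(0.8, 0.6, 0.3, 0.9, 0.7, 0.9)
    return Expressivity(-0.2, -0.3, 0.6, -0.4, 0.0, 0.3)

def scale_keyframe(joint_angles: dict[str, float], e: Expressivity) -> dict[str, float]:
    """Modulate prototype joint angles by spatial extent (0.0 keeps the prototype)."""
    gain = 1.0 + 0.5 * e.spatial_extent
    return {joint: angle * gain for joint, angle in joint_angles.items()}

# Usage: widen an "excited" beat gesture relative to its stored prototype.
prototype = {"shoulder_flexion": 40.0, "elbow_flexion": 70.0}
print(scale_keyframe(prototype, expressivity_for("excited")))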
Recently, Mancini and Pelachaud (2010) implemented the concept of
expressivity parameters to create distinctive behavior patterns for
embodied conversational agents. Their proposed algorithm generates
nonverbal behavior for a given communicative intention and emotional
state, driven by the agent's general behavior tendency ('Baseline') and
modulated by dynamic factors such as the current emotional state, the
relation with the interlocutor, physical constraints, and social roles.
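The Baseline idea can be illustrated with a small sketch in which the agent's default expressivity values are shifted by whatever dynamic factors are currently active. The factor names, parameter names, and numeric offsets are invented for illustration and are not Mancini and Pelachaud's actual model.

# The agent's general behavior tendency ('Baseline'): default expressivity values.
BASELINE = {"spatial_extent": 0.2, "power": 0.1, "fluidity": 0.5}

# Additive offsets applied while a dynamic factor is active (illustrative values).
MODULATIONS = {
    "angry":             {"spatial_extent": +0.4, "power": +0.6, "fluidity": -0.3},
    "formal_role":       {"spatial_extent": -0.3, "power": -0.2, "fluidity": +0.2},
    "close_to_listener": {"spatial_extent": -0.2},
}

def modulated_behavior(active_factors: list[str]) -> dict[str, float]:
    """Start from the Baseline and clamp each modulated value to [-1, 1]."""
    values = dict(BASELINE)
    for factor in active_factors:
        for param, delta in MODULATIONS.get(factor, {}).items():
            values[param] = max(-1.0, min(1.0, values[param] + delta))
    return values

print(modulated_behavior(["angry", "formal_role"]))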
Rehm et al. (2008) presented another variant that makes use of the
gestural expressivity parameters to generate culture-specific gestures.
Differences between cultures were identified and integrated into
a probabilistic model for generating agent behaviors. In a Bayesian
network, the culture to be simulated is connected to dimensions of
culture as a middle layer, and these culture-specific dimensions are in
turn connected to the gesture parameters.
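A toy version of such a layered probabilistic model is sketched below: the chosen culture determines the probabilities of cultural dimensions, which in turn determine the probabilities of gesture parameter values. The culture labels, the Hofstede-style dimension names, and all probabilities are invented for illustration; the actual network in Rehm et al. (2008) is derived from empirical data.

import random

# P(dimension is "high" | culture); illustrative numbers only.
P_DIMENSION = {
    "culture_A": {"individualism": 0.8, "power_distance": 0.3},
    "culture_B": {"individualism": 0.3, "power_distance": 0.7},
}

# P(spatial extent is "expansive" | individualism, power_distance)
P_SPATIAL_EXTENT = {
    ("high", "high"): 0.5,
    ("high", "low"):  0.7,
    ("low", "high"):  0.2,
    ("low", "low"):   0.4,
}

def sample_gesture_parameter(culture: str) -> str:
    """Sample a gesture expressivity value through the middle layer of dimensions."""
    dims = {
        name: "high" if random.random() < p else "low"
        for name, p in P_DIMENSION[culture].items()
    }
    key = (dims["individualism"], dims["power_distance"])
    return "expansive" if random.random() < P_SPATIAL_EXTENT[key] else "contracted"

print(sample_gesture_parameter("culture_A"))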
3.3 Data-based gesture generation
In another line of research, data-driven methods are employed to
simulate individual speakers' gesturing behavior. Stone et al. (2004)
recombine motion-captured pieces with new speech samples to
recreate coherent multimodal utterances. Units of communicative
performance are re-arranged while retaining temporal synchrony and
communicative coordination that characterize people's spontaneous
delivery. The range of possible utterances is naturally limited to