main theoretical representations. It also includes the representation
of Action Tendencies (Frijda, 1986). As no common vocabulary has
been agreed upon by emotion theoreticians, the choice of vocabulary
for an emotion within each of these representations is left open. It is
possible to use an existing vocabulary from the literature or one
tailored to a particular application. EmotionML provides tags for
specifying information such as the intensity of the felt emotion, its
temporal course, and the events to which it refers. Coping strategies linked
to appraisals are considered within EMA (Marsella and Gratch,
2009).
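For illustration, a minimal EmotionML fragment along these lines might look as follows. It follows the W3C EmotionML 1.0 syntax; the chosen vocabulary ("big6"), category, values and event URI are only examples:

```xml
<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
  <!-- One felt emotion: its category, how strongly it is felt,
       its temporal course, and the event it refers to -->
  <emotion start="0" duration="2000">
    <category name="fear"/>
    <intensity value="0.7"/>
    <reference uri="#approaching-dog" role="triggeredBy"/>
  </emotion>
</emotionml>
```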
3.2 Communicative intentions
When communicating, we convey not only our emotions but also
our epistemic states, our attitudes, information about the world,
and discursive and semantic information. FML is meant to encode any
factor relevant to planning verbal and non-verbal behavior (NVB).
The scope of FML has been discussed (Heylen et al., 2008), but no
attempt at formalization has yet been made, partly because of the
extensive variety of theories of communication. Examples do exist,
such as APML (De Carolis et al., 2004), which was built on Isabella
Poggi's theory of communicative acts (Poggi, 2007). Bickmore (2008)
proposed including meta-information characterizing interaction types
(e.g. encouragement, empathy, task-oriented), which he called Frames.
He also advocated including information on the interpersonal relations
between speakers and those with whom they interact. Indeed,
communication is a process in which several partners sense, plan and
adapt to each other continuously, and an interaction involves several
exchanges of speaking turns. A turn-taking description should
therefore also be included (Heylen et al., 2008; Kopp et al., 2006).
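Since no standard FML formalization exists, the following is a purely illustrative sketch of how a communicative act, a Bickmore-style interaction frame, interpersonal-relation information, and a turn-taking intention might be marked up together. Every tag and attribute name here is hypothetical:

```xml
<!-- Hypothetical FML-like fragment; not a standardized format -->
<fml>
  <performative type="inform" target="user"/>     <!-- communicative act -->
  <frame type="empathy"/>                         <!-- interaction type (Frame) -->
  <relationship with="user" familiarity="high"/>  <!-- interpersonal relation -->
  <turn action="take"/>                           <!-- turn-taking intention -->
</fml>
```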
3.3 Behavior Mark-up Language—BML
The idea behind BML is to describe multimodal behavior independently
of the specific body and animation parameters characterizing a given
virtual character. Its description should also be independent of the
animation player used in an application. BML therefore defines
behavior at a symbolic level. Several modalities
are considered: face, gaze, arm gesture, posture, etc. Each encoded
behavior contains two types of information: a description of its shape
and of its temporal course. Whenever possible, these descriptions rely
on theoretical works. For example, facial expressions are described as