emotions and attitudes, own communication and partner interaction
management, information exchange, etc. In speech- and language-related
annotations, the levels correspond to different modalities and
communicative phenomena. The verbal linguistic level categories
concern phonemes, morphemes, words, and utterances, accompanied by
phonetic and prosodic features, as well as paralinguistic vocalizations
such as laughs and coughs. Pragmatic and discourse level aspects
usually form their own level and include categories such as topic,
focus, new information, discourse structure, and rhetorical relations,
which are linked to the correlated words and sentences, analogously to
interaction categories such as the speakers' communicative intentions,
dialogue acts, feedback, turn-taking, and sequencing. Multimodal
annotation levels concern hand-gestures, head movements, body
posture, facial expressions, and eye gaze, encoding, at various levels of
detail, the movements of the fingers, eyes, mouth, and legs. Various affective
displays and social behavior are also part of multimodal annotation
schemes, comprising, in particular, emotions and personality traits,
engagement, dominance, cooperation, etc. The attributes concerning
the properties of the observed phenomena need not be fine-grained,
but they need to capture the significant and relevant aspects of the
phenomena (expressive adequacy) and to be explicit and consistent.
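The layered model described above, in which time-aligned elements on different levels carry category labels and form/function attributes, can be sketched as a simple data structure. This is an illustrative sketch only, not the format of any particular annotation tool; all names are assumptions made for the example.

```python
from dataclasses import dataclass, field

# Illustrative sketch (not any specific tool's format): a time-aligned
# annotation element on a named level, e.g. "words" or "head_movement".
@dataclass
class Annotation:
    level: str        # annotation level, e.g. "words", "head_movement"
    start: float      # start time in seconds
    end: float        # end time in seconds
    value: str        # category label defined by the scheme's guidelines
    attributes: dict = field(default_factory=dict)  # form/function features

# A multimodal annotation is then a collection of such elements that
# can be queried per level or per time span.
class Tier:
    def __init__(self):
        self.elements: list[Annotation] = []

    def add(self, ann: Annotation) -> None:
        self.elements.append(ann)

    def on_level(self, level: str) -> list[Annotation]:
        """All elements belonging to one annotation level."""
        return [a for a in self.elements if a.level == level]

    def overlapping(self, start: float, end: float) -> list[Annotation]:
        """Elements whose time span intersects [start, end]."""
        return [a for a in self.elements if a.start < end and a.end > start]
```

Querying by overlap is what lets a head movement be linked to the co-occurring word, in the way the discourse and interaction categories above are linked to their correlated words and sentences.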
Each annotation scheme has guidelines that explain and exemplify
the meanings of the categories, based on theoretical assumptions, and
also practical needs of research projects. For instance, the MUMIN
annotation scheme (Allwood et al., 2007) focuses on the general
form and function features of multimodal elements, and it is used in
the Nordic NOMCO project (Navarretta et al., 2012), which aims to
create comparable annotated resources for the languages involved in
the project, in order to investigate specific communicative functions
of hand and head gesturing in the neighboring countries. Other
frameworks have aimed at the registration of facial movements (Ekman
and Friesen, 1978) or hand gestures (Duncan, 2004), or at studying emotions
as expressed by facial movements (Ekman and Friesen, 2003). Some
of these schemes can be used to annotate gestures within different
scientific settings as in the construction of virtual agents (see below)
or within different scientific domains (psychopathology, education).
Dialogue act annotation has mainly been driven by the practical
needs of the task at hand, and it has thus been difficult to
compare annotations, or the systems built on these
annotations, across projects. Although speech act theory, as developed
by Austin and Searle, has generally been the basis for the schemes,
the particular application and interaction task have influenced the
set of dialogue acts. Efforts on
standardization, based on a range of dialogue act annotation schemes
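The comparability problem can be illustrated with a minimal sketch: two projects tag the same exchange with their own label sets, and a mapping onto a shared inventory is needed before the annotations can be compared. The labels and the mapping below are purely illustrative assumptions for the example, not taken from any published scheme or standard.

```python
# A task-specific annotation of a short two-party exchange. The label
# set ("question", "feedback_positive", "answer") is illustrative only;
# real schemes define their own inventories in their guidelines.
utterances = [
    {"speaker": "A", "text": "Are you coming to the meeting?",
     "dialogue_act": "question"},
    {"speaker": "B", "text": "Mm-hm.",
     "dialogue_act": "feedback_positive"},
    {"speaker": "B", "text": "I'll be there at ten.",
     "dialogue_act": "answer"},
]

def acts_by_speaker(utts, speaker):
    """Collect the dialogue acts produced by one speaker, in order."""
    return [u["dialogue_act"] for u in utts if u["speaker"] == speaker]

# A hypothetical mapping from scheme-specific labels onto a shared
# inventory; standardization efforts aim at such shared label sets so
# that annotations from different projects become comparable.
SHARED = {
    "question": "infoRequest",
    "answer": "inform",
    "feedback_positive": "autoPositive",
}

def to_shared(utts):
    """Re-express the annotation in the shared label set."""
    return [SHARED.get(u["dialogue_act"], "other") for u in utts]
```

Once both projects' labels are mapped into the shared set, agreement between annotations, and between the systems trained on them, can be measured directly.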