coarticulation model determines the synchronization between the mouth movements and the
synthesized voice. The 3D head is created with a Virtual Reality Modeling Language (VRML)
model.
LUCIA Tisato et al. (2005) is an MPEG-4 talking head based on the INTERFACE platform Cosi et al. (2003). Like the previous work, LUCIA consists of a VRML model of a female head. It speaks Italian thanks to the FESTIVAL Speech Synthesis System Cosi et al. (2001), and its animation engine is based on a modified Cohen-Massaro coarticulation model. A 3D MPEG-4 model of a human head is used to implement an intelligent agent called SAMIR (Scenographic Agents Mimic Intelligent Reasoning) Abbattista et al. (2004), which serves as a support system for web users. In Liu et al. (2008) a talking head is used to create a man-car-entertainment interaction system, whose facial animation is based on a mouth gesture database.
One of the most important features of conversations between human beings is the capability to generate and understand humor: “Humor is part of everyday social interaction between humans” Dirk (2003). Since having a conversation means engaging in a kind of social interaction, conversational agents should also be capable of understanding and generating humor. This leads to the concept of computational humor, which deals with the automatic generation and recognition of humor.
Verbally expressed humor has been analyzed in the literature, concerning in particular very short expressions (jokes) Ritchie (1998): a one-liner is a short sentence with comic effect, simple syntax, intentional use of rhetoric devices (e.g., alliteration, rhyme), and frequent use of creative language constructions Stock & Strapparava (2003). Since during a conversation the user utters short sentences, one-liners, jokes or gags are good candidates for the generation of humorous sentences. As a consequence, techniques from the computational humor literature regarding one-liners can be adapted to the design of a humorous conversational agent.
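Rhetoric devices such as alliteration can be approximated with simple surface features. As a rough illustration (a hypothetical helper, not the recognizer used in this chapter), the following sketch scores a sentence by the fraction of words sharing the most frequent initial letter:

```python
import re

def alliteration_score(sentence, min_run=3):
    """Crude alliteration proxy: fraction of words sharing the most
    frequent initial letter, if at least min_run words share it.
    Illustrative only; real systems would work on phonemes."""
    words = re.findall(r"[a-z']+", sentence.lower())
    if not words:
        return 0.0
    counts = {}
    for w in words:
        counts[w[0]] = counts.get(w[0], 0) + 1
    best = max(counts.values())
    return best / len(words) if best >= min_run else 0.0

# 6 of 8 words start with "p"
print(alliteration_score("Peter Piper picked a peck of pickled peppers"))
```

A rhyme feature could be sketched analogously by comparing word endings, though a phonetic dictionary would be needed for reliable results.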
In recent years the interest in creating humorous conversational agents has grown. As an example, in Sjobergh & Araki (2009) a humorous Japanese chatbot is presented, implementing different humor modules, such as a database of jokes and conversation-based joke generation and recognition modules. Other works Rzepka et al. (2009) focus on the detection of emotions in user utterances and on pun generation.
In this chapter we illustrate a humorous conversational agent, called EHeBby, equipped with a realistic talking head. The conversational agent is capable of generating humorous expressions, proposing riddles to the user, telling jokes, and answering the user ironically. Moreover, during the conversation the chatbot can detect the presence of humorous expressions, listening to and judging the user's jokes, and reacts by adapting the visual expression of its talking head to the perceived level of humor. The talking head offers a realistic presentation layer that mixes emotions and speech capabilities during the conversation: it shows a smiling expression if it considers the user's sentence “funny”, an indifferent expression if it does not perceive any humor in the joke, and an angry expression if it considers the joke in poor taste. In the following paragraphs we illustrate both the talking head features and the humorous agent brain.
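The mapping from perceived humor to facial expression can be sketched as follows; the function name and numeric threshold are illustrative assumptions, since the chapter gives no concrete values:

```python
def head_expression(humor_level, poor_taste=False):
    """Map a perceived humor level in [0, 1] to a talking-head
    expression. Threshold of 0.5 is an illustrative assumption."""
    if poor_taste:           # joke judged in poor taste
        return "angry"
    if humor_level >= 0.5:   # humor detected in the sentence
        return "smiling"
    return "indifferent"     # no humor perceived

print(head_expression(0.8))                   # smiling
print(head_expression(0.1))                   # indifferent
print(head_expression(0.9, poor_taste=True))  # angry
```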
2. EHeBby architecture
The system is composed of two main components, as shown in figure 1: a reasoner module and a Talking Head (TH) module. The reasoner processes the user's question by means of the A.L.I.C.E. (Artificial Linguistic Internet Computer Entity) engine ALICE (2011), which has been extended in order to manage humorous and emotional features in conversation.
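A minimal sketch of this two-module pipeline might look as follows; the pattern matching, humor scoring, and class names are placeholders standing in for the extended A.L.I.C.E. engine and the MPEG-4/VRML face, not the system's actual implementation:

```python
class Reasoner:
    """Stands in for the extended A.L.I.C.E. engine: matches the
    user sentence against AIML-like categories and estimates a
    humor level. The knowledge base here is a toy placeholder."""

    def __init__(self):
        self.categories = {"tell me a joke": "Here is a riddle for you..."}

    def reply(self, sentence):
        answer = self.categories.get(sentence.lower().strip(),
                                     "Tell me more.")
        # Placeholder humor estimate; the real module judges jokes.
        humor_level = 0.8 if "joke" in sentence.lower() else 0.0
        return answer, humor_level


class TalkingHead:
    """Presentation layer: would drive the animated face and speech
    synthesis; here it only reports the chosen expression."""

    def show(self, answer, humor_level):
        expression = "smiling" if humor_level > 0.5 else "indifferent"
        return f"[{expression}] {answer}"


reasoner, head = Reasoner(), TalkingHead()
answer, level = reasoner.reply("Tell me a joke")
print(head.show(answer, level))
```

The key design point is the separation of concerns: the reasoner decides *what* to say and how humorous the exchange is, while the TH module decides *how* to render it visually and acoustically.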
In