6 Conclusions and Future Work
This work proposes a model of long-term reasoning that integrates emotions, drives, preferences and personality traits in autonomous agents, based on AI planning. The emotional state is modeled as two functions: valence and arousal. This two-dimensional model was chosen because it is simpler than, yet offers the same representational capabilities as, other emotional models; nevertheless, a different emotional model could now be integrated without difficulty. Actions produce variations in the valence depending on the agent's personality and preferences. The goal is to generate plans that maximize the valence while satisfying the agent's needs or drives. Given that current planners only deal with monotonic metric functions, we converted the non-monotonic valence into a monotonic one, v-valence. The experimental results show that the quality of the solutions (measured as the value of the valence) improves when the deliberative model is used compared to the reactive one. This increase in solution quality implies a more realistic behaviour of the agent.
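The conversion from a non-monotonic valence to a monotonic metric can be illustrated with a minimal sketch. The transformation below is an illustrative assumption, not the paper's actual implementation: each action's valence change is rewritten as a non-negative "shortfall" against an assumed per-action upper bound, so the accumulated metric only grows and minimizing it is equivalent to maximizing total valence gain.

```python
# Minimal sketch (assumed transformation): turn a non-monotonic valence
# into a monotonically non-decreasing metric ("v-valence") usable by
# planners whose metric functions may only increase.

DELTA_MAX = 1.0  # assumed upper bound on any single action's valence gain

def apply_action(valence, v_valence, delta):
    """Apply an action whose effect changes valence by `delta`.

    `valence` may rise or fall (non-monotonic), but `v_valence`
    accumulates the shortfall (DELTA_MAX - delta), which is always
    non-negative, so it never decreases. Minimizing v-valence over a
    plan is then equivalent to maximizing the total valence gain.
    """
    valence += delta
    v_valence += DELTA_MAX - delta  # monotonically non-decreasing
    return valence, v_valence

# A toy plan: three actions with mixed emotional effects.
valence, v_valence = 0.0, 0.0
for delta in [0.4, -0.2, 0.6]:
    valence, v_valence = apply_action(valence, v_valence, delta)

print(round(valence, 2))    # net valence: 0.8
print(round(v_valence, 2))  # monotonic metric: 3 * 1.0 - 0.8 = 2.2
```

Note that v-valence preserves the plan ordering induced by valence: for a fixed plan length, the plan with the lowest v-valence is exactly the one with the highest total valence gain.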
The proposed model is the first step in the development of a richer and more complex architecture. In the near future, we would like to include new actions in the domain, especially those related to processes of social interaction, by adding a component that reasons about multi-agent interaction and collaboration. Another line of future work is to model the idea of well-being, which will lead the agent to keep all its needs below a certain level over time. The physiological well-being of the agent will influence its emotional state by altering the value of the valence. This idea is closely related to the idea of continuous planning to control the behaviour of virtual agents [1].
References
1. Avradinis, N., Aylett, R.S., Panayiotopoulos, T.: Using Motivation-Driven Continuous Plan-
ning to Control the Behaviour of Virtual Agents. In: Balet, O., Subsol, G., Torguet, P. (eds.)
ICVS 2003. LNCS, vol. 2897, pp. 159-162. Springer, Heidelberg (2003)
2. Aylett, R.S., Louchart, S., Dias, J., Paiva, A., Vala, M.: FearNot! - an experiment in emergent narrative, pp. 305-316 (2005)
3. Bach, J., Vuine, R.: The AEP Toolkit for Agent Design and Simulation. In: Schillo, M., Klusch, M., Müller, J., Tianfield, H. (eds.) MATES 2003. LNCS (LNAI), vol. 2831, pp. 38-49. Springer, Heidelberg (2003)
4. Bates, J.: The role of emotion in believable agents. Communications of the ACM 37, 122-
125 (1994)
5. Blythe, J., Reilly, W.S.: Integrating reactive and deliberative planning for agents. Technical
report (1993)
6. Breazeal, C.: Biologically Inspired Intelligent Robots. SPIE Press (2003)
7. Cañamero, D.: Modeling motivations and emotions as a basis for intelligent behavior. In: First International Symposium on Autonomous Agents (Agents 1997), pp. 148-155. The ACM Press, New York (1997)
8. Cañamero, D.: Designing Emotions for Activity Selection in Autonomous Agents. MIT Press (2003)
9. Cavazza, M., Lugrin, J., Pizzi, D., Charles, F.: Madame Bovary on the holodeck: Immersive interactive storytelling. In: Proceedings of the ACM Multimedia 2007. The ACM Press, Augsburg (2007)