Lingenfelser, F., J. Wagner and E. André. 2011. A systematic discussion of
fusion techniques for multi-modal affect recognition tasks. In Proceedings
of the 13th International Conference on Multimodal Interfaces (ICMI 2011),
pp. 19-26.
Martin, J.C., R. Veldman and D. Béroule. 1998. Developing multimodal
interfaces: A theoretical framework and guided propagation networks. In
H. Bunt, R.J. Beun and T. Borghuis (eds.), Multimodal Human-Computer
Communication (Vol. 1374, pp. 158-187). Berlin: Springer Verlag.
Martin, J.-C., S. Buisine, G. Pitel and N.O. Bernsen. 2006. Fusion of Children's
Speech and 2D Gestures when Conversing with 3D Characters. Signal
Processing, Special Issue on Multimodal Human-Computer Interfaces,
86(12): 3596-3624.
Martinovsky, B. and D. Traum. 2003. Breakdown in Human-Machine Interaction:
The Error is the Clue. Proceedings of the ISCA Tutorial and Research
Workshop on Error Handling in Dialogue Systems, pp. 11-16.
Mehlmann, G. and E. André. 2012. Modeling multimodal integration with event
logic charts. In Proceedings of the 14th ACM International Conference on
Multimodal Interfaces, ICMI 2012, Santa Monica, USA, October 22-26,
pp. 125-132.
Mehrabian, A. 1980. Basic Dimensions for a General Psychological Theory:
Implications for Personality, Social, Environmental, and Developmental
Studies. Cambridge, MA: Oelgeschlager, Gunn & Hain.
Oviatt, S.L. 1999. Mutual disambiguation of recognition errors in a multimodal
architecture. In Williams, Marian G. and Mark W. Altom (eds.), Proceedings
of the CHI '99 Conference on Human Factors in Computing Systems: The
CHI is the Limit, Pittsburgh, PA, USA, May 15-20, pp. 576-583. ACM.
Rich, C., B. Ponsleur, A. Holroyd and C.L. Sidner. 2010. Recognizing engagement
in human-robot interaction. In Proceedings of Human-Robot Interaction
(HRI 2010), pp. 375-382.
Sandbach, G., S. Zafeiriou, M. Pantic and L. Yin. 2012. Static and dynamic
3D facial expression recognition: A comprehensive survey. Image and Vision
Computing, 30(10): 683-697.
Sanghvi, J., G. Castellano, I. Leite, A. Pereira, P.W. McOwan and A. Paiva.
2011. Automatic analysis of affective postures and body motion to detect
engagement with a game companion. Human-Robot Interaction (HRI
2011), pp. 305-312.
Scherer, S., S. Marsella, G. Stratou, Y. Xu, F. Morbini, A. Egan, A. Rizzo and
L.-P. Morency. 2012. Perception markup language: Towards a standardized
representation of perceived nonverbal behaviors. In Y. Nakano, M. Neff,
A. Paiva and M. Walker (eds.), Intelligent Virtual Agents (IVA 2012), LNCS
7502, pp. 455-463. Berlin/Heidelberg: Springer.
Sowa, T., M. Latoschik and S. Kopp. 2001. A communicative mediator in a
virtual environment: Processing of multimodal input and output. Proc.