REFERENCES
Bangalore, S. and M. Johnston. 2009. Robust Understanding in Multimodal
Interfaces. Computational Linguistics, 35(3): 345-397.
Bolt, R.A. 1980. Put-that-there: Voice and Gesture at the Graphics
Interface. Proceedings of the 7th Annual Conference on Computer
Graphics and Interactive Techniques, SIGGRAPH '80, pp. 262-270. ACM,
New York, NY.
Bosma, W. and E. André. 2004. Exploiting Emotions to Disambiguate Dialogue
Acts. Proceedings of the 9th International Conference on Intelligent User
Interfaces, IUI '04, pp. 85-92. ACM, New York, NY.
Burger, B., I. Ferrané, F. Lerasle and G. Infantes. 2011. Two-handed gesture
recognition and fusion with speech to command a robot. Autonomous
Robots, 32(2): 129-147.
Busso, C., Z. Deng, S. Yildirim, M. Bulut, C.M. Lee, A. Kazemzadeh, S. Lee,
U. Neumann and S. Narayanan. 2004. Analysis of emotion recognition
using facial expressions, speech and multimodal information. International
Conference on Multimodal Interfaces (ICMI 2004), pp. 205-211.
Caridakis, G., G. Castellano, L. Kessous, A. Raouzaiou, L. Malatesta, S.
Asteriadis and K. Karpouzis. 2007. Multimodal emotion recognition from
expressive faces, body gestures and speech. Artificial Intelligence and
Innovations (AIAI 2007), pp. 375-388.
Chen, F., N. Ruiz, E. Choi, J. Epps, A. Khawaja, R. Taib and Y. Wang. 2012.
Multimodal Behaviour and Interaction as Indicators of Cognitive Load.
ACM Transactions on Interactive Intelligent Systems, 2(4), Article 22,
pp. 1-36.
Crook, N., D. Field, C. Smith, S. Harding, S. Pulman, M. Cavazza, D. Charlton,
R. Moore and J. Boye. 2012. Generating context-sensitive ECA responses to
user barge-in interruptions. Journal on Multimodal User Interfaces, 6: 13-25.
Dimitriadis, D.B. and J. Schroeter. 2011. Living rooms getting smarter with
multimodal and multichannel signal processing. IEEE SLTC Newsletter,
Summer 2011 edition.
http://www.signalprocessingsociety.org/technical-committees/list/sl-tc/spl-nl/2011-07/living-room-of-the-future/
D'Mello, S.K. and J. Kory. 2012. Consistent but modest: A meta-analysis on
unimodal and multimodal affect detection accuracies from 30 studies.
International Conference on Multimodal Interaction (ICMI 2012), pp. 31-38.
Eyben, F., M. Wöllmer, M.F. Valstar, H. Gunes, B. Schuller and M. Pantic. 2011.
String-based audiovisual fusion of behavioural events for the assessment
of dimensional affect. Automatic Face and Gesture Recognition (FG 2011),
pp. 322-329.
Gilroy, S.W., M. Cavazza and V. Vervondel. 2011. Evaluating multimodal
affective fusion using physiological signals. Intelligent User Interfaces
(IUI 2011), pp. 53-62.
Gilroy, S.W., M. Cavazza, R. Chaignon, S.-M. Mäkelä, M. Niranen, E. André, T.
Vogt, J. Urbain, M. Billinghurst, H. Seichter and M. Benayoun. 2008. E-tree:
Emotionally driven augmented reality art. ACM Multimedia, pp. 945-948.