of the International Workshop on Information Presentation and Natural
Multimodal Dialogue (IPNMD 2001). Verona, Italy: ITC/IRST, pp. 71-74.
Stiefelhagen, R., H. Ekenel, C. Fügen, P. Gieselmann, H. Holzapfel, F. Kraft,
and A. Waibel. 2007. Enabling multimodal human-robot interaction for
the Karlsruhe humanoid robot. IEEE Transactions on Robotics, Special Issue
on Human-Robot Interaction, 23(5): 840-851.
Sun, Y., Y. Shi, F. Chen and V. Chung. 2009. Skipping spare information in
multimodal inputs during multimodal input fusion. Proceedings of the
14th International Conference on Intelligent User Interfaces, pp. 451-456,
Sanibel Island, USA.
Sun, Y., H. Prendinger, Y. Shi, F. Chen, V. Chung and M. Ishizuka. 2008. The
hinge between input and output: Understanding the multimodal input
fusion results in an agent-based multimodal presentation system. CHI
'08 Extended Abstracts on Human Factors in Computing Systems, pp.
3483-3488, Florence, Italy.
Visser, T., D. Traum, D. DeVault and R. op den Akker. 2012. Toward a model
for incremental grounding in spoken dialogue systems. Proceedings of the
12th International Conference on Intelligent Virtual Agents.
Vogt, T. and E. André. 2005. Comparing feature sets for acted and spontaneous
speech in view of automatic emotion recognition. Proceedings of the 2005
IEEE International Conference on Multimedia and Expo, ICME 2005, July 6-9,
2005, pp. 474-477. Amsterdam, The Netherlands.
Vogt, T., E. André and N. Bee. 2008. EmoVoice—A framework for online
recognition of emotions from voice. In André, Elisabeth, Laila Dybkjær,
Wolfgang Minker, Heiko Neumann, Roberto Pieraccini, and Michael Weber
(eds.), Perception in Multimodal Dialogue Systems, 4th IEEE Tutorial
and Research Workshop on Perception and Interactive Technologies for
Speech-Based Systems, PIT 2008, Kloster Irsee, Germany, June 16-18,
2008, Proceedings, volume 5078 of Lecture Notes in Computer Science,
pp. 188-199. Springer.
Wagner, J., E. André, F. Lingenfelser and J. Kim. 2011a. Exploring fusion
methods for multimodal emotion recognition with missing data. IEEE
Transactions on Affective Computing, 2(4): 206-218.
Wagner, J., F. Lingenfelser, N. Bee and E. André. 2011b. Social signal
interpretation (SSI)—A framework for real-time sensing of affective and
social signals. KI - Künstliche Intelligenz, 25(3): 251-256.
Wahlster, W. 2003. Towards symmetric multimodality: Fusion and fission of
speech, gesture, and facial expression. In Günter, Andreas, Rudolf Kruse,
and Bernd Neumann (eds.), KI 2003: Advances in Artificial Intelligence,
26th Annual German Conference on AI, KI 2003, Hamburg, Germany,
September 15-18, 2003, Proceedings, volume 2821 of Lecture Notes in
Computer Science, pp. 1-18. Springer.