criteria. By using such events, not only is the amount of processing reduced, which is of extreme importance in vision applications, but the process of gesture segmentation is also considerably facilitated. Events can also be intuitive for users, who thereby develop a mental model of how the interface works: for example, a valid gesture always starts from one location and ends in another; a given posture specifies the beginning of a command, while discarding it ends the recording process; a touch or a tap is clearly associated with the intent to interact. The chapter presented an overview of such events that can be detected in video sequences, with specific references to working systems.
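To make this segmentation strategy concrete, the following minimal Python sketch (an illustration, not a system described in the chapter) records a hand trajectory while a designated command posture is held and emits the segmented gesture when the posture is discarded; the posture label "command_posture" and the per-frame event interface are assumptions introduced here for illustration only.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class GestureSegmenter:
    """Event-driven gesture segmentation: a designated posture starts
    the recording, and discarding that posture ends it."""
    recording: bool = False
    trajectory: List[Tuple[float, float]] = field(default_factory=list)

    def on_frame(self, posture: str,
                 position: Tuple[float, float]) -> Optional[List[Tuple[float, float]]]:
        # Called once per video frame with the detected posture label and
        # the tracked hand position; returns a completed gesture or None.
        if posture == "command_posture":  # hypothetical label for the start posture
            if not self.recording:
                self.recording = True  # start event: the command begins here
                self.trajectory = []
            self.trajectory.append(position)
            return None
        if self.recording:
            # End event: the posture was discarded, so recording stops and
            # only the segmented trajectory is passed on for recognition.
            self.recording = False
            gesture, self.trajectory = self.trajectory, []
            return gesture
        return None

Frames falling outside the start and end events are simply skipped, which is where the reduction in processing comes from: a recognizer only ever sees the trajectory delimited by the two events.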
While specific events ease both the system's processing and the user's input task, it is interesting to note how the system's reaction, or feedback, is itself interpreted by humans as an important event in the interaction process. Such an event marks a specific interaction point which represents an agreement in communication. The way a system reacts is deciphered and represented by the human operator at multiple levels, including affect, behavior, and cognition, which, in turn, influence the operator's response. This has implications for the way feedback should be reported, as well as for the understanding of human-computer interaction by means of general human-human communication.