Brenton, H., Gillies, M., Ballin, D., & Chatting, D. J. (2005, September 5). The Uncanny Valley: Does it exist? Paper presented at the HCI 2005, Animated Characters Interaction Workshop, Napier University, Edinburgh, UK.
Farnell, A. (2011). Behaviour, structure and causality in procedural audio. In Grimshaw, M. (Ed.), Game sound technology and player interaction: Concepts and developments. Hershey, PA: IGI Global.
Browning, T. (Producer/Director). (1931). Dracula [Motion picture]. United States: Universal Pictures.
Ferber, D. (2003, September). The man who mistook his girlfriend for a robot. Popular Science. Retrieved April 7, 2009, from http://iiae.utdallas.edu/news/pop_science.html
Busso, C., & Narayanan, S. S. (2006). Interplay between linguistic and affective goals in facial expression during emotional utterances. In Proceedings of the 7th International Seminar on Speech Production, 549-556.
Freud, S. (1919). The Uncanny. In The standard edition of the complete psychological works of Sigmund Freud (Vol. 17, pp. 219-256). London: Hogarth Press.
Calleja, G. (2007). Revising immersion: A conceptual model for the analysis of digital game involvement. In Proceedings of Situated Play, DiGRA 2007 Conference, 83-90.
Gaver, W. W. (1993). What in the world do we hear? An ecological approach to auditory perception. Ecological Psychology, 5(1), 1-29. doi:10.1207/s15326969eco0501_1
Cao, Y., Faloutsos, P., Kohler, E., & Pighin, F. (2004). Real-time speech motion synthesis from recorded motions. In R. Boulic & D. K. Pai (Eds.), Eurographics/ACM SIGGRAPH Symposium on Computer Animation (2004), 345-353.
Gouskos, C. (2006). The depths of the Uncanny Valley. GameSpot. Retrieved April 7, 2009, from http://uk.gamespot.com/features/6153667/index.html
Chion, M. (1994). Audio-vision: Sound on screen (C. Gorbman, Trans.). New York: Columbia University Press.
Grant, K. W., van Wassenhove, V., & Poeppel, D. (2004). Detection of auditory (cross-spectral) and auditory-visual (cross-modal) synchrony. Speech Communication, 44(1/4), 43-53. doi:10.1016/j.specom.2004.06.004
Edworthy, J., Loxley, S., & Dennis, I. (1991). Improving auditory warning design: Relationship between warning sound parameters and perceived urgency. Human Factors, 33(2), 205-231.
Green, R. D., MacDorman, K. F., Ho, C. C., & Vasudevan, S. K. (2008). Sensitivity to the proportions of faces that vary in human likeness. Computers in Human Behavior, 24(5), 2456-2474. doi:10.1016/j.chb.2008.02.019
Ekman, I., & Kajastila, R. (2009, February 11-13). Localisation cues affect emotional judgements: Results from a user study on scary sound. Paper presented at the AES 35th International Conference, London, UK.
Grey Matter [Indie arcade game]. (2008). McMillen, E., Refenes, T., & Baranowsky, D. (Developers). San Francisco, CA: Kongregate.
Emily Project. (2008). Santa Monica, CA: Image Metrics, Ltd.
Grimshaw, M. (2008a). The acoustic ecology of the first-person shooter: The player experience of sound in the first-person shooter computer game. Saarbrücken, Germany: VDM Verlag Dr. Mueller.
Faceposer [Facial animation tool, part of the Source SDK]. (2008). Bellevue, WA: Valve Corporation.