12.1 Introduction
The sensations accompanying walking on natural ground surfaces in real-world environments are rich, multimodal, and highly evocative of the settings in which they occur [110]. For these reasons, foot-based human-computer interaction represents a new means of interacting in Virtual Reality (VR), with potential applications in areas such as architectural visualization, immersive training, rehabilitation, or entertainment. However, floor-based multimodal (visual, auditory, tactile) information displays have only recently begun to be investigated [108]. Research in this area has remained limited by the lack of efficient interfaces and interaction techniques capable of capturing touch via the feet over a distributed display. Related research on virtual and augmented reality environments has mainly focused on the problem of natural navigation in virtual environments [56, 81, 96]. A number of haptic interfaces enabling omnidirectional in-place locomotion in virtual environments have been developed [46], but known solutions either limit freedom in walking or are highly complex and costly.
The rendering of multimodal cues combining visual, auditory, and haptic feedback has rarely been exploited when walking in a virtual environment. Many aspects of touch sensation in the feet have been studied in the prior scientific literature, including its roles in the sensorimotor control of balance and locomotion over different terrains. However, considerably less is known about how the nature of the ground itself is perceived, and how its different sensory manifestations (touch, sound, visual appearance) and those of the surroundings contribute to the perception of properties of natural ground surfaces, such as their shape, irregularity, or material composition, and of our movement upon them. Not surprisingly, then, in the human-computer interaction and virtual reality communities, little research attention has been devoted to the simulation of multisensory aspects of walking surfaces in ways that could parallel the emerging understanding that has, in recent years, enabled more natural means of human-computer interaction with the hands, via direct manipulation, grasping, tool use, and palpation of virtual objects and surfaces.
The present chapter reviews recent interactive techniques that have contributed to the development of multimodal rendering of walking in virtual worlds by reproducing virtual experiences of walking on natural ground surfaces. These experiences are enabled primarily through the rendering and presentation of virtual multimodal cues of ground properties, such as texture, inclination, shape, material, or other affordances in the Gibsonian sense [37]. The related work presented in this chapter is organized around the hypothesis that walking, by enabling rich interactions with floor surfaces, consistently conveys enactive information that manifests itself through multimodal cues, especially via the haptic and auditory channels. To better distinguish this investigation from prior work, we adopt a perspective in which vision plays a primarily integrative role, linking locomotion to obstacle avoidance, navigation, balance, and the understanding of details occurring at ground level. For this reason, we do not detail the visual rendering of walking over virtual grounds itself.