12.3 From Haptic to Multimodal Rendering
12.3.1 Introduction
12.3.1.1 Walking and Haptic Feedback in Virtual Environments
Virtual reality applications aim to simulate digital environments with which users
can interact, perceiving the effects of their actions in real time through different
sensory modalities. Current VR applications draw primarily on vision and hearing.
Haptic feedback, which aims to reproduce the forces, motions, and cutaneous
sensations felt via the sense of touch, is rarely incorporated, especially in those VR
applications in which users are able to walk.
Adding even low-fidelity tactile feedback to an existing visual and auditory
environment can heighten the sense of presence in a VR simulation, and the potential
gains can, in some cases, be larger than those obtained by improving feedback from a
single existing modality, such as the visual display [91].
High-frequency information in mechanical signals often closely links the haptic
and auditory modalities, since both types of stimuli originate in the same physical
contact interactions. During walking, individuals can thus be said to perform
simultaneous auditory and haptic probing of the ground surface and environment.
As demonstrated in recent literature, walkers are able to perceptually distinguish
ground surfaces using either discriminative touch via the feet or audition [39].
Consequently, the approaches to haptic and auditory rendering reviewed in this
chapter share common features, and the two types of display can be regarded as
partially interchangeable.
An important component of haptic sensation is movement. Walking is arguably
the most intuitive means of self-motion within a real or virtual environment. In
most research on virtual environments, users are constrained to remain seated or
to stand in place, which can have a negative impact on the sense of immersion
[90]. Consequently, there has been much recent interest in enabling users of such
environments to navigate by walking. One feasible, but potentially cumbersome and
costly, solution to this problem is to develop motorized interfaces that allow normal
walking movements to be used to change position within a virtual world. Motorized
treadmills have been extensively used to enable movement in one dimension, and
this paradigm has been extended to allow omnidirectional locomotion through an
array of treadmills revolving around a larger one [49]. Another configuration consists
of a pair of robotic platforms beneath the feet that are controlled so as to provide
support during virtual foot-ground contact while keeping the user in place. A third
consists of a spherical cage that rotates as the user walks inside it [46].
The reader may refer to the chapter by Iwata in this volume for further discussion
of these designs. The range of motion, forces, and speeds required to
simulate omnidirectional motion make these devices intrinsically large, challenging
to engineer, and costly to produce. In addition, while they are able to simulate the