In what has been called the “virtual reality model” of interaction [30, Chap. 20] or the highest level of “interaction fidelity” [20], the user would just carry out actions
in the VE in the same manner as she would in the real world. The system would track
the body of the user and recreate virtual images, sounds and other cues that mimicked
cues and sensations that the user would get from the analogous real situation. The user
would see the VE from a first-person point of view and would be able to effect natural
interactions with her body. If the user wanted to pick up an object, she could reach
out her hand and grasp the object and lift it. More importantly for this topic, if she
wanted to pick up an object that was out of reach, she could walk over and pick it up.
Of course, the virtual reality model is an ideal because the hardware and software
we use can't simulate cues anywhere near as rich as those of the real world. Even if the participant isn't walking or moving, our technologies for simulating tactile and force cues are very limited: they can provide only a few points of contact or a small area of stimulation, whereas the task could involve the whole body of the user. If the
user walks or otherwise moves, then there is a much more pressing problem: virtual
reality devices only allow actual movement within a small area, for various reasons: the display might be static (e.g., confined to a small room), tethered (e.g., a wired head-mounted display), or otherwise limited through infrastructure (e.g., a tracking system
that only functions within a bounded region). We will discuss such technologies
in greater detail later in the chapter, but there is a more pertinent question: how
can we simulate real walking, giving the impression of unconstrained motion, when
physical motion is actually limited? What devices can give the impression of walking
on different surface types, over long distances or on different inclines?
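One common compromise, sketched below under assumed names and values, is to apply a translation gain: the user's limited physical steps are amplified so that virtual travel covers more ground than the tracked area allows. The function and gain value here are illustrative, not a technique from this chapter.

```python
def apply_translation_gain(prev_physical, curr_physical, virtual_pos, gain=2.0):
    """Map a step in the tracked (physical) space to a larger step in the VE.

    prev_physical, curr_physical, virtual_pos: (x, z) tuples in metres.
    gain: how much farther the user travels virtually than physically.
    Returns the updated virtual position.
    """
    # Displacement measured by the tracking system this frame.
    dx = curr_physical[0] - prev_physical[0]
    dz = curr_physical[1] - prev_physical[1]
    # Scale the physical displacement before applying it to the viewpoint.
    return (virtual_pos[0] + gain * dx, virtual_pos[1] + gain * dz)

# With gain=2.0, one metre of real walking yields two metres of virtual travel.
pos = apply_translation_gain((0.0, 0.0), (1.0, 0.0), (5.0, 5.0))
```

Gains like this stretch the usable range of a small tracked area, but at the cost of some naturalness: large gains are noticeable and can be disorienting.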
There are two fundamental problems: walking implicitly requires a very complex simulation of the walking surface, and walking involves inducing momentum in a moving object: the walker's body. The first requirement might
only be solved by what Sutherland called the ultimate display [33]. In his seminal paper he described a display in which the computer would control the existence of matter, so that a displayed bullet could potentially be fatal. Thus the ultimate walking
display would be a display that could simulate any surface by creating that surface.
However, even the ultimate display doesn't directly solve the second problem: creating momentum in an object. Researchers are only just starting to tackle the problem of configurable surfaces (e.g., see Chaps. 9 and 17 in this volume); the problem of momentum is recognized, and some attempts have been made to simulate it by pushing on the body (e.g., see Chap. 6 in this volume).
Reproducing natural walking is thus one of the toughest challenges in human-computer interaction. We can try to imitate real walking, but we will be limited either in the range of movement we can support or in the naturalness of the interaction. The alternative is
to provide interaction techniques that produce movement, or travel, through the VE
using other metaphors and devices.
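A minimal sketch of one such metaphor is steering travel: rather than walking, the user holds a button and glides through the VE along the current view (or pointer) direction. All names and parameters below are illustrative assumptions, not an API from this chapter.

```python
import math

def steer(position, yaw_deg, speed, dt, button_held):
    """Advance the viewpoint along the horizontal view direction.

    position: (x, z) in metres; yaw_deg: heading in degrees (0 = +x axis);
    speed: travel speed in metres per second; dt: frame time in seconds.
    Movement only occurs while the travel button is held.
    """
    if not button_held:
        return position
    yaw = math.radians(yaw_deg)
    # Integrate velocity over the frame along the heading direction.
    return (position[0] + speed * dt * math.cos(yaw),
            position[1] + speed * dt * math.sin(yaw))

# Holding the button for one 0.1 s frame at 2 m/s while facing along +x.
p = steer((0.0, 0.0), 0.0, 2.0, 0.1, True)
```

Such techniques decouple virtual travel from physical locomotion entirely, trading the naturalness of walking for unlimited range within any tracked area.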
In this chapter we outline the broad range of displays and devices that are used for
travel techniques in VEs. Other parts of this book focus on the reproduction of natural walking through sophisticated devices. We will place these in the context of supporting the general travel task in a broad range of VE systems.