devices such as joysticks and mice to travel through a VE and look around, and
are provided with minimal body-based sensory information about their movement.
Given the difficulty that many users encounter when trying to learn spatial layouts in
desktop VEs [1], which only provide visual information, it is likely that “walking”
interfaces could have a widespread and beneficial impact on VE applications.
This chapter is divided into four main parts. The first summarizes the characteristics
of VE applications from a navigational perspective, by mapping them onto different
scales of environment (model vs. small vs. large). The second identifies attributes
of ecological validity that should be considered when applying the results of navigation
research to a given VE application. The third, and most substantive, part reviews
experimental studies that have investigated the effect of body-based information on
navigation, focusing on studies that investigated the rotational and/or translational
components of body-based information, rather than different cues (proprioception,
vestibular, and efference copy) [2]. These studies are categorized according to the type
of navigation participants performed while acquiring knowledge of the environment
(single-route vs. whole-environment), the scale of the environment (small vs. large),
the environment's spatial extent, and the richness of the visual scene. The chapter
concludes by using these research results to identify the types of navigation interface
that are suited to different applications, and highlight areas in which further research
is needed.
5.2 Applications of Virtual Environments
From a navigational perspective, VE applications [3-5] may be divided into three
broad categories (see Table 5.1). The categories are defined by the scale of the
environment in spatial cognition terms [6].
In the first category are model-scale applications, where users look around while
remaining in one position (model-scale spaces, which in the real world would be
placed on a table top, can be seen and reached from a single place). Examples
include designing the layout of the cockpit of a car and training communication
between the pilot and winch-man of search and rescue helicopters. Head-mounted
displays (HMDs) are ideal for these applications, because they allow users to look
around naturally by turning their head, with positional changes lying within the
bounds of low-cost tracking devices (say, a 1 m³ volume). This means that effective navigation
interfaces for these applications do not require a walking interface, so they are not
considered further until the Conclusions section of this chapter.
The second category is small-scale applications, where users can resolve all of
the detail necessary for navigation from a single place (e.g., any position in a room),
but have to travel through the VE during usage. Examples range from analyzing the
ease with which an engine may be assembled, or a control room layout for visibility,
to being a witness in a virtual identity parade (a police lineup conducted using
avatars in a VE). In these applications it is typically straightforward for users to
 