17.2.2 Relevance to Virtual Reality
Virtual reality (VR) aims to digitally simulate man-made or natural environments that users can perceive through one or more sensory modalities and interact with by moving or otherwise acting in real time. To improve the realism of the virtual environment, and users' sense of presence within it, it is often desirable to preserve as many features of the environment, through as many modalities, as possible. When users are permitted to navigate by walking within the virtual environment, as illustrated in the other chapters of this book, it can be desirable to represent features of the virtual ground surface and, particularly in simulated natural environments, the dynamical and material-dependent aspects of interactions between the foot and the ground, such as the sense of soft materials like snow or sand deforming underfoot. The sensations that accompany walking on natural ground surfaces in real-world environments (sand in the desert, or snow in winter) are multimodal and can greatly reinforce users' sense of presence in the virtual settings in which they occur [38]. Limited previous research has addressed foot-based interaction with deformable ground surfaces in virtual reality [38]. This may be due to a lack of efficient techniques for capturing foot-floor contact interactions and rendering them over distributed floor areas, and to the emerging nature of the applications involved. Some newly developed methods for accomplishing these tasks are presented in this chapter and in that of Marchal et al. in this book.
Related research on virtual and augmented reality environments has focused on the problem of natural navigation in virtual environments. Solutions such as walking in place [34] and redirected walking [30] map kinematically tracked movements onto a user's coordinates in a virtual environment (VE); see Chap. 11 for a review. A number of haptic interfaces enabling omnidirectional in-place locomotion in VEs, based on treadmills or other mechanisms, have been developed; these can serve some of the same purposes as augmented floor surfaces in permitting their users to navigate virtual environments. The design of such locomotion interfaces, and several examples of them, are reviewed in Chap. 9 of this book and in earlier literature [16], and consequently they are not discussed here. Instead, this chapter focuses on effectively stationary ground surfaces that are augmented with auditory, visual, and/or tactile feedback.
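The movement mapping underlying techniques such as redirected walking can be sketched as follows. The function, parameter names, and gain values here are illustrative assumptions for exposition, not taken from the cited systems: the user's real turning and stepping increments are scaled by gains before being applied to the virtual pose, so that a large VE can be traversed within a smaller tracked area.

```python
import math

def redirected_step(virt_pos, virt_heading, real_step, real_turn,
                    translation_gain=1.0, rotation_gain=1.2):
    """Map one tracked movement increment onto the user's virtual pose.

    Illustrative sketch: rotation_gain amplifies real turning and
    translation_gain scales real step length, subtly steering the user
    so the virtual path need not fit inside the physical tracked area.
    """
    # Apply the (assumed) rotation gain to the user's real turn.
    new_heading = virt_heading + rotation_gain * real_turn
    # Scale the real step length and advance along the virtual heading.
    dist = translation_gain * real_step
    new_pos = (virt_pos[0] + dist * math.cos(new_heading),
               virt_pos[1] + dist * math.sin(new_heading))
    return new_pos, new_heading
```

With both gains set to 1 the mapping reduces to ordinary one-to-one tracking; gains slightly above or below 1 are what create the redirection effect.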
One example is the shoe-based Step WIM interface of LaViola et al. [19] (Fig. 17.3), which introduced foot gestures, performed with a pair of electronically instrumented shoes, for controlling navigation in a larger virtual environment. The system operated with reference to a visual map display of the surrounding virtual environment, and provided no auditory or haptic feedback.
 