the rotation of the virtual scene relative to the tracked space. It is not a goal of MC
to make the rotation undetectable by users.
Redirected walking (RDW) [27-29] is a technique that exploits the imprecision of human perception of self-motion, that is, a person's sense of their own movement based on sensory cues other than vision. RDW modifies the direction of the user's gaze by imperceptibly rotating the virtual scene around the user and redirecting the user's (future) path back into the tracked space. Unlike MC, RDW was designed to make rotation undetectable to the user. RDW achieves undetectable rotation by exploiting the visual-vestibular crossover described above. The vestibular system is dominant over the visual system at head frequencies greater than 0.07 Hz, approximately one head turn over a 14 s period, so users fail to notice mismatched real and virtual scene rotation while turning their heads faster than this crossover frequency. For this reason, an integral part of the design for RDW was to make users frequently turn their heads.
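The per-frame logic behind this masking can be sketched as follows. This is a minimal illustration in Python, assuming yaw-only scene rotation and per-frame head angular velocity readings; the gain and cap values are chosen for illustration and are not Razzaque's published parameters:

```python
import math

# Illustrative constants, not Razzaque's published parameters.
HEAD_TURN_GAIN = 0.1                   # fraction of head rotation reused
MAX_INJECTED_RATE = math.radians(2.0)  # cap on injected rotation (rad/s)

def redirect_step(scene_yaw, head_yaw_rate, steer_sign, dt):
    """Rotate the virtual scene a little extra while the user turns
    their head, so the added rotation is masked by the turn itself.

    scene_yaw     -- current yaw of the virtual scene (radians)
    head_yaw_rate -- user's head angular velocity (rad/s)
    steer_sign    -- +1 or -1: which way to rotate the scene to steer
                     the user's future path back into the tracked space
    dt            -- frame time (seconds)
    """
    injected_rate = min(HEAD_TURN_GAIN * abs(head_yaw_rate),
                        MAX_INJECTED_RATE)
    return scene_yaw + steer_sign * injected_rate * dt
```

Scaling the injected rotation by the head's own angular speed keeps the extra motion small whenever the head is nearly still, which is when the visual system would otherwise dominate and reveal the manipulation.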
Razzaque's environments and tasks depended on static waypoints, locations that defined the user's virtual route within the VE, for two reasons. First, a series of waypoints predetermined the user's sequence of goal locations; knowing the future goal locations lets the system always determine what part of the virtual scene should be rotated into the tracked space. Second, waypoints are a mechanism designed to make people look around: users had to turn their heads to find the next waypoint. This enabled the RDW algorithm to rotate the virtual scene (during head turns) and redirect the user's next path direction, i.e., the path to the next waypoint, into the tracked space.
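A short sketch of how a known waypoint lets the system choose the rotation direction: `steering_sign` is a hypothetical helper, and both positions are assumed to be 2D tracked-space coordinates with the tracked-space center at the origin.

```python
import math

def steering_sign(user_pos, waypoint_pos):
    """Pick the scene-rotation direction (+1 or -1) that swings the
    path to the next waypoint back toward the tracked-space center.

    user_pos, waypoint_pos -- (x, z) positions in tracked space;
    the tracked-space center is assumed to be the origin.
    """
    to_waypoint = math.atan2(waypoint_pos[1] - user_pos[1],
                             waypoint_pos[0] - user_pos[0])
    to_center = math.atan2(-user_pos[1], -user_pos[0])
    # Signed angular error between the two bearings, wrapped to (-pi, pi].
    error = math.atan2(math.sin(to_center - to_waypoint),
                       math.cos(to_center - to_waypoint))
    return 1 if error >= 0 else -1
```

The returned sign would feed a rotation step like the one sketched earlier, so that each head turn nudges the upcoming path segment back inside the tracked boundary.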
Waypoints provided a simple solution to one of the most challenging parts of implementing a redirection system: predicting the user's future direction. Although waypoints enable RDW, they limit applications to those that have predetermined paths and task-related reasons for users to turn their heads.
Newer implementations of redirection have added dynamic controllers: Peck and her colleagues controlled the amount of rotation added to the virtual scene based on the rotation speed of the user's head [21, 22]; Neth et al. controlled the curvature gain based on the user's walking speed [18]; and Hodgson et al. altered the redirection amounts based on both the user's linear and angular velocities [11]. Chapter 10 provides a detailed description of how to modify the view transformation in redirection systems.
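As a rough illustration of how such controllers combine their inputs, the sketch below adds a head-turn term in the spirit of Peck et al. [21, 22] to a walking-speed curvature term in the spirit of Neth et al. [18], echoing Hodgson et al.'s use of both velocities [11]; the gain values and the simple sum are assumptions for illustration, not the published controllers:

```python
def injected_rotation_rate(head_yaw_rate, walk_speed,
                           rot_gain=0.1, curvature_gain=0.05):
    """Extra scene rotation to inject this frame (rad/s).

    head_yaw_rate  -- head angular velocity (rad/s)
    walk_speed     -- walking speed (m/s)
    rot_gain       -- fraction of head rotation reused (illustrative)
    curvature_gain -- radians of rotation per meter walked
                      (illustrative); this term bends a straight
                      virtual path into a real-world arc
    """
    return rot_gain * abs(head_yaw_rate) + curvature_gain * walk_speed
```

The curvature term redirects even a user walking with a still head, which is what frees dynamic controllers from the waypoint requirement described above.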
Additional studies and techniques have explored determining the appropriate amount
of redirection that can be added at any instant [ 15 , 32 ], how to steer the user within
the environment [ 11 , 21 , 22 , 27 ], and how to predict the user's future direction [ 13 ,
21 , 22 ].
Finally, a method presented by Suma et al. harnesses change blindness techniques by altering part of the scene model when the user is not looking at that part of the scene [34]. For example, the location of a door to a room may change from one wall to another while the user is not looking at it; by walking to the relocated door in the virtual space, the user is guided to walk in a different direction in the physical space.
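A change of this kind must only be committed while the changed geometry is outside the user's view; the sketch below tests that condition with a simple field-of-view check. The `door.move_to` call and the object layout are hypothetical, standing in for whatever scene API an application uses:

```python
import math

def maybe_relocate_door(door, new_wall, user_pos, user_yaw,
                        fov=math.radians(110)):
    """Relocate the door only while the user cannot see it.

    door     -- object with an (x, z) `position` and a `move_to`
                method (hypothetical scene API)
    new_wall -- destination wall for the door
    user_pos -- user's (x, z) position; user_yaw -- gaze yaw (radians)
    fov      -- assumed horizontal field of view of the display
    """
    angle_to_door = math.atan2(door.position[1] - user_pos[1],
                               door.position[0] - user_pos[0])
    # Signed angle between gaze and door bearing, wrapped to (-pi, pi].
    offset = math.atan2(math.sin(angle_to_door - user_yaw),
                        math.cos(angle_to_door - user_yaw))
    if abs(offset) > fov / 2:   # door is outside the view frustum
        door.move_to(new_wall)  # safe to alter the scene model
        return True
    return False
```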
 