Biomedical Engineering Reference
In-Depth Information
closed, or does not have a target in the virtual scene, it is possible that the user may
stray off the path planned with curvature gains.
In general, such curvature gains can be applied not only to yaw rotations, but also
to pitch and roll rotations, e.g., to simulate slopes in a virtual scene [17]. Moreover,
such virtual camera rotations can be applied time-dependently, i.e., not caused by
translational or rotational movements of the user in the VR laboratory, which can be
described as a simple extension of the above mapping. However, anecdotal evidence
suggests that virtual rotations that are not coupled to self-motions are usually easily
detectable by users, and potentially distracting [23, 32].
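As a sketch of this mechanism, a time-dependent rotation gain can be implemented as a small constant yaw offset added every frame, independent of the user's own motion. The rotation rate OMEGA and the function name below are illustrative assumptions, not values from the text:

```python
import math

# Illustrative constant rotation rate (assumed value, not from the text).
OMEGA = math.radians(2.0)  # inject 2 degrees of virtual yaw per second

def time_dependent_yaw(yaw_virtual: float, dt: float) -> float:
    """Rotate the virtual camera by OMEGA * dt each frame, regardless of
    the user's translational or rotational self-motion."""
    return yaw_virtual + OMEGA * dt
```

Because the injected rotation is not coupled to any self-motion, such an offset tends to be noticeable, consistent with the anecdotal evidence cited above.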
A Basic Redirection Controller
Sophisticated implementations of unrestricted virtual walking with redirection techniques, i.e., redirection controllers, are usually based on information about the extents
of the physical workspace, the structure of the virtual scene, and assumptions about
typical user behavior. For instance, if a user is turning towards a door in a virtual
building model, redirection controllers may predict the user's future virtual path
to determine how to optimally scale rotations and compress distances, as well as to
apply curvature gains, such that the user will be able to walk through the virtual door,
without being able to detect applied manipulations [5, 20, 34]. However, in many
cases such optimizations with virtual path prediction are not possible, e.g., when no
information about the virtual scene is available. Some redirection controllers can be
adapted to such cases, including works by the research groups of Razzaque et al.
[23], Field and Vamplew [7], Peck et al. [22], Williams et al. [38, 39], Steinicke
et al. [34], and Nitzsche et al. [20, 26].
A basic redirection controller can be implemented using only curvature gains.
For each rendering frame n ∈ ℕ we read the current two-dimensional head position
(x_r(n), z_r(n)) ∈ ℝ², and compute the current two-dimensional view direction
(v_x(n), v_z(n)) ∈ ℝ², with ‖(v_x(n), v_z(n))‖ = 1, in the physical workspace (see Fig. 10.3).
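Obtaining this normalized two-dimensional view direction from a tracked 3D forward vector can be sketched as follows (assuming y is the vertical axis; the function name is ours):

```python
import math

def view_direction_2d(forward_3d):
    """Project a tracked 3D head forward vector onto the floor plane and
    normalize it, yielding a unit-length 2D view direction in the (x, z)
    plane. forward_3d is an (x, y, z) tuple; y is assumed vertical."""
    vx, vz = forward_3d[0], forward_3d[2]
    norm = math.hypot(vx, vz)
    if norm == 0.0:  # degenerate case: user looks straight up or down
        return (0.0, 0.0)
    return (vx / norm, vz / norm)
```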
Based on the prediction that the user will walk in the virtual view direction [1], we
try to map the user's real movements onto a circular path in the physical workspace
with largest possible radius, in order to minimize applied curvature manipulations.
We accomplish that by computing the strafe view direction
(s_x(n), s_z(n)) = (−v_z(n), v_x(n)) ∈ ℝ², i.e., the direction orthogonal to the view
direction in the physical workspace, and solving the optimization problem of finding the point
(x_r(n), z_r(n)) + r · (s_x(n), s_z(n)) ∈ ℝ², with r ∈ ℝ, that is located within the physical workspace
and provides the largest circle through the current user position (x_r(n), z_r(n)), while
maintaining at least the same distance to all boundaries of the interaction space and
all obstacles in the laboratory (including a small safety offset, see Fig. 10.4). Mapping
user movements onto this computed maximal circle in the physical workspace
corresponds to applying a curvature gain of g_C = 1/|r|, using the formulas described
above. That means, for each frame the user is redirected onto the optimal circle
in the physical workspace, assuming the user will walk straight in the computed
view direction. This simple approach allows practitioners to implement a reasonable
redirection controller.
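The per-frame optimization can be sketched for the special case of a rectangular, obstacle-free workspace centered at the origin (this restriction, the default safety offset, and all names below are illustrative assumptions; the text allows arbitrary boundaries and obstacles):

```python
def largest_circle_radius(p, v, half_w, half_h, safety=0.2):
    """Find the signed radius r of the largest circle through the user
    position p = (x, z) whose center lies at p + r * strafe and which
    stays inside the workspace shrunken by a safety offset.
    v is the unit 2D view direction; the strafe direction is orthogonal
    to it. Returns 0.0 if no circle fits. The curvature gain applied in
    this frame is then g_C = 1 / |r|."""
    sx, sz = -v[1], v[0]                 # strafe direction
    w, h = half_w - safety, half_h - safety
    best = 0.0
    for sign in (1.0, -1.0):             # try circles on both sides
        lo, hi = 0.0, 2.0 * max(w, h)    # bisect the feasible radius
        for _ in range(60):
            r = 0.5 * (lo + hi)
            cx = p[0] + sign * r * sx    # candidate circle center
            cz = p[1] + sign * r * sz
            # The circle fits iff its radius does not exceed the center's
            # distance to every boundary of the shrunken workspace.
            if r <= min(w - abs(cx), h - abs(cz)):
                lo = r
            else:
                hi = r
        if lo > abs(best):
            best = sign * lo
    return best
```

With these assumed defaults, a user standing at the center of a 10 m × 10 m workspace and looking along z would be redirected onto a circle of radius about 2.4 m, i.e., g_C ≈ 1/2.4; re-solving the problem every frame keeps the user on the momentarily optimal circle.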