a button. Alternatively, the start and stop might be implicit in the user's
manipulation of a joystick. In our terminology, the indicate-position sub-task effects a
change in translation of the Display Center, either directly, or by giving the dynamics
of change over time (i.e., giving a velocity or acceleration). The task decomposition
shows that this can be achieved in three ways: specify position, specify velocity,
and specify acceleration. Each of these can in turn be achieved in many ways. For
example, specifying a position can be achieved by selecting a target, giving a route,
or by continuous specification. The last of these is the most common in real-time,
interactive systems: the user can point or gaze towards the target and effect a control
to start and continue travelling. There would be a similar breakdown for orientation:
e.g., one might set a target orientation, one might turn using a direct angular control,
or one might set a rotation speed.
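To make the three options concrete, the following is a minimal sketch of how position, velocity, and acceleration specification each drive the Display Center translation per update cycle. All names here are illustrative, not taken from any particular system or toolkit.

```python
def update_display_center(pos, vel, dt, *, new_pos=None, new_vel=None, accel=None):
    """Advance the Display Center translation by one update cycle.

    Exactly one of new_pos / new_vel / accel is expected per cycle:
      - new_pos: specify position directly (e.g., select a target and jump there)
      - new_vel: specify velocity (e.g., joystick deflection sets the speed)
      - accel:   specify acceleration (velocity then integrates over time)
    pos and vel are (x, y, z) tuples; dt is the update interval in seconds.
    """
    if new_pos is not None:           # specify position: jump, zero the velocity
        return new_pos, (0.0, 0.0, 0.0)
    if new_vel is not None:           # specify velocity: replace it directly
        vel = new_vel
    elif accel is not None:           # specify acceleration: integrate velocity
        vel = tuple(v + a * dt for v, a in zip(vel, accel))
    # Integrate position from the (possibly updated) velocity.
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    return pos, vel
```

For instance, a joystick mapped to velocity would call this with `new_vel` each cycle, whereas a "select target" technique would pass `new_pos` once.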
In looking at the options here, we can make a connection to Fig. 7.17, where the
input devices had different dimensionality that might map conveniently to the differ-
ent options here. Obviously a joystick is a common choice for specifying velocity,
whereas a mouse might be better deployed for relative rotation or clicking to select
targets in the world.
We examine some common control configurations in the following sections. The
reader is also referred to Bowman et al. [5].
7.4.2 Direct Self Motion Control Techniques
In this section we cover direct control, where the user has continuous control over the
direction of travel from a first-person view at every update cycle of the simulation.
The most obvious technique for travel is gaze-directed steering [22]. This is the
default travel technique in many immersive systems, and it is also found in many
3D games. When the user makes the relevant control input (e.g., presses a button or
moves the joystick forwards), the Display Center coordinate system moves forward
along the direction of gaze. In a desktop virtual reality system, this would be through
the center of the screen. In an immersive VE system, this is typically along a line
midway between the two eye lines. There are many variants depending on
the control input as suggested previously. The velocity of travel might be constant,
a joystick deflection might control the velocity of travel, or it might set an acceleration.
The movement might ease in or ease out when the control is changed. If a joystick is
used, the forward/backward axis typically controls movement along the direction of travel,
while the other axis might map to strafing (sideways movement) or turning (rotation),
or it might be ignored. Finally, the actual direction of travel might be clamped to
two dimensions, or constrained so that the user stays at a set height above the ground.
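The per-cycle update described above can be sketched as follows. This is a hedged illustration, assuming a tracker that reports the gaze direction as yaw/pitch angles and a joystick whose forward deflection lies in [-1, 1]; the function and parameter names are hypothetical.

```python
import math

def gaze_directed_step(position, yaw, pitch, deflection, dt,
                       max_speed=2.0, clamp_to_ground=True):
    """Move the Display Center along the gaze direction for one update cycle.

    position:   current (x, y, z) of the Display Center
    yaw, pitch: gaze heading and elevation in radians
    deflection: joystick forward/backward axis in [-1, 1]; here it scales
                velocity linearly (one of the variants discussed in the text)
    clamp_to_ground: if True, travel is restricted to the horizontal plane,
                keeping the user at a fixed height above the ground
    """
    # Unit gaze vector from yaw and pitch.
    dx = math.cos(pitch) * math.sin(yaw)
    dy = math.sin(pitch)
    dz = math.cos(pitch) * math.cos(yaw)
    if clamp_to_ground:
        # Project onto the horizontal plane and renormalize.
        dy = 0.0
        norm = math.hypot(dx, dz) or 1.0
        dx, dz = dx / norm, dz / norm
    speed = max_speed * deflection          # negative deflection moves backward
    x, y, z = position
    return (x + dx * speed * dt, y + dy * speed * dt, z + dz * speed * dt)
```

Constant-velocity travel corresponds to passing a fixed deflection of 1.0 while a button is held; an acceleration-based variant would instead accumulate `speed` across cycles.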
Gaze-directed steering has the advantage that it is simple to explain to users, but
it has the major disadvantage that the user must look in the direction they intend
to travel. There are obvious variants: one could use any other coordinate system or
relation between tracked points to set the travel direction. The most obvious ones are
flying in the direction of pointing with a hand tracker [22], direction of gaze recorded