Fig. 5. Two different strategies for highlighting the space that a dragged object would occupy on a surface: the 3D bounding box (right) and the projected area (left).
4.1 Free navigation
Free navigation follows the classical first-person model with four degrees of freedom. Users control the orientation of the viewing vector through the yaw and pitch angles; as is usual in computer games, rolling is not allowed. The pitch angle is restricted to a parameterized range, between −50 and 50 degrees, which allows looking at the floor and the ceiling but forbids complete turns. In addition, users control the camera position by moving it in a plane parallel to the floor at a fixed height; jumping and crouching are not allowed. The movement follows the direction of the projection of the viewing vector onto that plane, so it is not possible to walk backwards. Users can also stop and restart the camera movement. The movement has constant speed, except for a short acceleration at its beginning and a deceleration at its end. The camera is controlled with the mouse, which specifies the viewing vector and the path orientation, and with the space bar, which starts and stops the motion. This scheme has the advantage of requiring only two types of input (mouse movement and the space bar), which makes it suitable for patients with neuropsychological impairments.
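A minimal sketch of this control scheme might look as follows. The pitch clamp, the fixed height, the forward-only motion along the projected view vector, and the short acceleration and deceleration ramps come from the description above; the concrete speed and acceleration values are assumptions for illustration.

```python
import math

PITCH_LIMIT = math.radians(50)  # parameterized range, here +/-50 degrees


class FreeCamera:
    def __init__(self, x, y, z):
        self.pos = [x, y, z]   # y is the fixed height above the floor
        self.yaw = 0.0
        self.pitch = 0.0
        self.moving = False
        self.speed = 0.0
        self.max_speed = 2.0   # metres per second (assumed value)
        self.accel = 4.0       # ramp rate for the short accel/decel (assumed)

    def look(self, dyaw, dpitch):
        """Mouse input: update the view direction; roll is never touched."""
        self.yaw = (self.yaw + dyaw) % (2 * math.pi)
        self.pitch = max(-PITCH_LIMIT, min(PITCH_LIMIT, self.pitch + dpitch))

    def toggle_motion(self):
        """Space bar: start or stop walking."""
        self.moving = not self.moving

    def update(self, dt):
        # Short acceleration at the start of the motion, deceleration at the end.
        target = self.max_speed if self.moving else 0.0
        step = self.accel * dt
        self.speed += max(-step, min(step, target - self.speed))
        # Move along the projection of the view vector onto the floor plane,
        # so the height stays fixed and walking backwards is impossible.
        self.pos[0] += math.cos(self.yaw) * self.speed * dt
        self.pos[2] += math.sin(self.yaw) * self.speed * dt
```

With only `look` and `toggle_motion` exposed to the user, the whole interface reduces to the two inputs mentioned above: mouse movement and the space bar.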
4.2 Assisted navigation
The aim of the assisted navigation mode is to let users indicate where they want to go and then drive them to that location automatically. The focus is thus put on the destination rather than on the path towards it, so navigation is decoupled from interaction.
This assistance can be implemented in two ways: by computing only the final camera position, or by computing the whole camera path towards that position. In the first case, the transition from one view to the next is very abrupt, so we reserve it for the transition from one scenario to another. In this work, we focus on the second mode and compute the full camera path and orientation.
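The second mode can be sketched as a sequence of keyframes that interpolate both position and heading from the current camera pose to the destination. The straight-line path, the smoothstep easing, and the step count below are illustrative assumptions; a real scene would route the path around obstacles, for example with a navigation mesh.

```python
import math


def ease(t):
    """Smoothstep: gentle acceleration and deceleration along the path."""
    return t * t * (3 - 2 * t)


def camera_path(start, goal, start_yaw, goal_yaw, steps=60):
    """Return (position, yaw) keyframes that drive the camera from start
    to goal while turning towards the goal orientation."""
    frames = []
    # Interpolate the heading along the shortest angular arc.
    dyaw = (goal_yaw - start_yaw + math.pi) % (2 * math.pi) - math.pi
    for i in range(steps + 1):
        t = ease(i / steps)
        pos = tuple(a + (b - a) * t for a, b in zip(start, goal))
        frames.append((pos, start_yaw + dyaw * t))
    return frames
```

Playing these frames back at a fixed rate yields a smooth, fully automatic transition, in contrast with the abrupt jump of the position-only variant.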
To indicate the target location, users click on it. If the target is reachable from the avatar's position, i.e. if it lies closer than the avatar's estimated arm length, the system interprets the click as a request to interact (to open, pick, put or transform); otherwise, the click is interpreted as a navigation request and the system drives the camera towards the target.
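This dispatch reduces to a single distance test against the estimated arm length. The function name and the concrete reach value below are hypothetical; only the decision rule comes from the text.

```python
import math

ARM_REACH = 0.8  # estimated arm length in metres (assumed value)


def handle_click(avatar_pos, target_pos):
    """Dispatch a click: interact when the target is within arm's reach,
    otherwise navigate towards it first."""
    if math.dist(avatar_pos, target_pos) <= ARM_REACH:
        return "interact"   # open, pick, put or transform the object
    return "navigate"       # drive the camera to the target
```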