Sometimes it is convenient to distinguish between two flavors of VR, namely,
passive (nonimmersive) versus immersive virtual reality (IVR). An example of the
former would be running a flight simulator on a computer screen. In the case of IVR,
a user would feel surrounded by the computer-generated environment and would be
able to walk through it and interact with it. There are two standard implementations
of IVR. One uses a head-mounted display and the other puts the user in a “cave.” The
original cave environment was the CAVE system described in [CrSD93]. IVR can be
further distinguished by how much freedom a user has and what the constraints are.
What mobility does a user have? Can one “walk” around the environment? What is
the field of view?
LaViola ([VFLL00]) divides three-dimensional user interaction in a VE into three
categories:
Navigation. This includes physical movement (e.g., actual walking, walking in place
on a treadmill, riding a stationary vehicle like a bicycle), manual viewpoint manipu-
lation (e.g., hand motions determine movement), steering (e.g., gaze-directed motion; see the sketch after this list), target-based travel (the user specifies a destination), and route planning (e.g., the user specifies a path to follow by drawing on a map).
Selection and Manipulation. A basic approach here would be a virtual hand or a
cursor that tracks one's hand. One can also implement indirect control via widgets
(e.g., handle widgets that allow rotation, translation, . . .). A third way is via physical
props.
Application Control. This can be achieved, for example, with graphical menus,
voice commands, gestural interaction, or tools.
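The steering style of navigation mentioned above can be made concrete with a small sketch: on each frame in which navigation is active, the viewpoint is advanced along the current gaze direction. The function below is only an illustrative sketch, not code from [VFLL00]; the gaze_directed_step name, the speed parameter, and the use of NumPy vectors are assumptions made for the example.

import numpy as np

def gaze_directed_step(position, gaze_dir, speed, dt, navigating):
    # Advance the viewpoint along the current gaze direction.
    #   position   - current viewpoint position (length-3 sequence)
    #   gaze_dir   - gaze direction from head tracking (need not be unit length)
    #   speed      - travel speed in world units per second (assumed parameter)
    #   dt         - time since the last frame, in seconds
    #   navigating - True while the user holds the "move" control
    if not navigating:
        return np.asarray(position, dtype=float)
    direction = np.asarray(gaze_dir, dtype=float)
    norm = np.linalg.norm(direction)
    if norm == 0.0:
        return np.asarray(position, dtype=float)   # no usable gaze this frame
    return np.asarray(position, dtype=float) + speed * dt * (direction / norm)

# One frame at 30 frames per second, moving forward along the gaze.
new_pos = gaze_directed_step([0.0, 1.7, 0.0], [0.0, 0.0, -1.0],
                             speed=1.5, dt=1.0 / 30.0, navigating=True)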
Brooks ([Broo99]) argues that four technologies are needed for VR to be
successful:
(1) To provide multiple channels of sensory information, one needs visual, auditory, haptic, and tactile displays that immerse a user in the virtual world while at the same time blocking out any contradictory sensory impressions from the real world.
(2) Graphics rendering systems need to be able to sustain motion at 20-30 frames per second (see the frame-budget sketch after this list).
(3) Tracking systems need to be capable of continually reporting a user's position
and orientation.
(4) One has to be able to create and maintain large databases of models in the
virtual world.
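The frame-rate requirement in (2) translates into a fixed time budget per frame: roughly 33-50 ms for tracking, simulation, and rendering combined. The loop below is a minimal sketch of that bookkeeping, assuming a 30 frames-per-second target; poll_tracker and render_frame are hypothetical callables standing in for the tracking and rendering stages of an actual system.

import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS          # roughly 33 ms per frame at 30 fps

def run_frames(n_frames, poll_tracker, render_frame):
    # Run n_frames frames and report any frame whose tracking + rendering
    # work exceeds the per-frame budget.
    for i in range(n_frames):
        start = time.perf_counter()
        head_pose = poll_tracker()       # user's current position and orientation
        render_frame(head_pose)          # draw the scene for that pose
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET:
            print(f"frame {i}: {elapsed * 1000:.1f} ms exceeds the "
                  f"{FRAME_BUDGET * 1000:.0f} ms budget")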
Resolution will have to increase before VR looks real. A user would want to get the same sensations from the virtual environment as from a real one; in particular, there should be force feedback. A big early problem was latency. In 1994 it tended to be 250-500 ms, which was much too large, because flight simulators have shown that a latency of more than 50 ms is perceptible. The latency problem is especially noticeable for head rotations.
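A rough way to see why latency is so noticeable for head rotations is to convert it into angular misregistration: during a rotation at a given angular speed, a latency of t seconds leaves the displayed image lagging the head by (speed x t) degrees. The snippet below works this out for the figures quoted above; the 100 degrees-per-second head speed is only an assumed, illustrative value.

# Angular misregistration caused by latency during a head rotation:
#   error (degrees) = head angular speed (degrees/second) * latency (seconds)
head_speed_deg_per_s = 100.0             # assumed, illustrative head rotation speed

for latency_ms in (50.0, 250.0, 500.0):
    error_deg = head_speed_deg_per_s * (latency_ms / 1000.0)
    print(f"latency {latency_ms:3.0f} ms -> image lags the head by {error_deg:4.1f} degrees")

At the 50 ms threshold cited above the lag is about 5 degrees; at the 250-500 ms latencies typical of 1994 it grows to 25-50 degrees, which is clearly visible.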
Motion capture can be achieved via optical or magnetic tracking systems, with or without wires, or by using an exoskeleton. Virtual humans are very difficult because their motions are very complex. Models for a virtual environment are not easy to come by. They usually come either from a CAD system