Glueck et al. [GCA+09] demonstrate how to improve the output side of interaction to make it easier for the user to understand the placement of objects in a 3D modeling system (see Figure 21.17) by showing their relationship to a ground plane (with a multiscale grid to assist in understanding size). This, in turn, is related to the work of Herndon et al. [HZR+92], in which shadows of an object are projected on three walls, and the user can drag the shadow on any wall to induce a corresponding motion of the object (see Figure 21.18).
Figure 21.17: Position pegs give cues about the vertical position of objects. Transparent peg bases indicate objects below the plane. Pink pegs, like the one closest to the central grid-crossing, represent assemblies rather than individual objects. (Courtesy of Michael Glueck and Azam Khan ©2009 ACM, Inc. Reprinted by permission.)
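A minimal sketch of the shadow-dragging idea of Herndon et al. (the names and the choice of the floor plane are assumptions for illustration): a drag applied to the object's shadow on the floor is confined to that plane and applied directly to the object, leaving its height unchanged.

def drag_floor_shadow(object_pos, drag_dx, drag_dz):
    """Apply a drag of the object's floor shadow to the object itself.

    The floor shadow only encodes x and z, so dragging it moves the object
    in the ground plane and leaves its height (y) alone. This is an
    illustrative sketch of the shadow-widget idea, not the original code.
    """
    x, y, z = object_pos
    return (x + drag_dx, y, z + drag_dz)

# Dragging the shadow 0.3 units along x slides the airplane 0.3 units in x.
print(drag_floor_shadow((1.0, 2.5, -4.0), 0.3, 0.0))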
All of these techniques show off possibilities for improved navigation and manipulation in a particular context (3D CAD and modeling); for navigation in a 3D environment (e.g., in video games), rather different approaches make sense. In CAD, for instance, you may want to be able to pass through surfaces to reach hidden surfaces on which you will then perform further operations, while in video games it's typical to prevent players from passing through walls, and the control is often primarily 2D (forward-back and turn-left-or-right), with height above the floor determined by typical human dimensions. While generally understood camera, motion, and object controls may evolve (just as some standard controls have evolved in 2D), we anticipate that application- or domain-specific controls will continue to be developed.
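To make the game-style control scheme concrete, here is a minimal sketch (in Python, with hypothetical names) of a camera confined to the floor plane: the only degrees of freedom are forward-back motion and turning, and the eye height is fixed at roughly human scale.

import math
from dataclasses import dataclass

@dataclass
class WalkCamera:
    """First-person camera constrained to the ground plane (illustrative sketch)."""
    x: float = 0.0
    z: float = 0.0
    heading: float = 0.0      # radians; 0 means looking down the -z axis
    eye_height: float = 1.7   # meters, roughly human eye level

    def turn(self, d_heading: float) -> None:
        self.heading += d_heading

    def walk(self, distance: float) -> None:
        # Move along the current heading; height never changes.
        self.x += distance * math.sin(self.heading)
        self.z -= distance * math.cos(self.heading)

    def eye_position(self) -> tuple:
        return (self.x, self.eye_height, self.z)

cam = WalkCamera()
cam.turn(math.radians(90))   # turn to the right
cam.walk(2.0)                # walk two meters forward
print(cam.eye_position())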
The form of interaction also depends on the device you're using: A user in a virtual reality system typically adjusts the view by moving his or her head and body, although there are many alternatives, like the World-in-Miniature approach [PBBW95], in which the VR user holds in one hand a miniature version of the world and moves a miniature camera with the other, thus establishing a new point of view for the full-size world.
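A sketch of the geometric core of that idea, under the simplifying assumption that the miniature is a uniformly scaled, translated copy of the world (the function and parameter names are invented for the example): the pose of the hand-held camera prop, expressed in miniature coordinates, is re-expressed in world coordinates to obtain the new full-size viewpoint.

from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple        # (x, y, z)
    heading: float         # radians; orientation kept 1D to keep the sketch short

def miniature_to_world(prop: Pose, wim_origin: tuple, wim_scale: float) -> Pose:
    """Map a camera prop posed inside the miniature to a full-size camera pose.

    Assumes the miniature is the world uniformly scaled by wim_scale and
    translated so that the world origin sits at wim_origin (hypothetical setup).
    """
    px, py, pz = prop.position
    ox, oy, oz = wim_origin
    world_pos = ((px - ox) / wim_scale,
                 (py - oy) / wim_scale,
                 (pz - oz) / wim_scale)
    return Pose(world_pos, prop.heading)   # orientation is unaffected by uniform scale

# A prop held 5 cm from the miniature's origin in a 1:100 model corresponds
# to a viewpoint 5 m from the world origin.
print(miniature_to_world(Pose((0.05, 0.0, 0.0), 0.0), (0.0, 0.0, 0.0), 0.01))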
In some contexts, camera control can be inferred from other aspects of the application. He et al. [HCS96] describe a “virtual cinematography” tool that uses various film idioms to automatically choose views of scenes containing multiple interacting people. For instance, in a film, when two people begin talking to each other, we typically see them both in profile; as the conversation proceeds, we typically see jump cuts between reciprocal over-the-shoulder views. Idioms like this can be used to automatically place the virtual camera in a scene with interacting people, or to assist in virtual storytelling, etc.
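As a hedged illustration of such an idiom (the function name, offsets, and angles below are invented for this sketch, not taken from He et al.), an over-the-shoulder shot can be encoded as a rule that places the camera behind and slightly to one side of one speaker, aimed at the other.

import math

def over_the_shoulder(speaker, listener, back=0.6, side=0.4, up=0.2):
    """Place a camera behind `speaker`, offset to one side, looking at `listener`.

    speaker and listener are (x, y, z) head positions; the offsets (in meters)
    are illustrative values, not parameters of the cited system.
    """
    sx, sy, sz = speaker
    lx, ly, lz = listener
    # Direction from speaker toward listener, flattened to the ground plane.
    dx, dz = lx - sx, lz - sz
    length = math.hypot(dx, dz) or 1.0
    fx, fz = dx / length, dz / length
    # Perpendicular direction, to the right of the speaker's line of sight.
    rx, rz = fz, -fx
    eye = (sx - back * fx + side * rx,
           sy + up,
           sz - back * fz + side * rz)
    return eye, listener   # camera position and look-at target

eye, target = over_the_shoulder((0.0, 1.6, 0.0), (2.0, 1.6, 0.0))
print(eye, "looking at", target)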
In general, the success of these methods comes from integrating context, letting the user express intent, and building expert knowledge into the interface. For camera control, for instance, the viewer typically doesn't really want to dolly the camera. Instead, she wants to get a closer look at something; dollying the camera is a means to an end. The Unicam system provides a gesture to say, “Give me an oblique view of this object from slightly above it,” for instance, and generates the camera transition to that view automatically. Similarly, the virtual cinematography system incorporates expert knowledge into the design of view transitions so that the user need not consider anything except which person to look at. In general, there's a cognitive advantage to interfaces that let a user express intent rather than the action needed to achieve that intent.
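Here is a minimal sketch of what expressing intent can mean in code (the function name, angles, and distances are illustrative, not Unicam's actual interface): the user asks for an oblique, slightly elevated view of an object, and the system derives a camera pose from the object's bounding sphere and animates toward it.

import math

def oblique_view_of(center, radius, azimuth_deg=30.0, elevation_deg=20.0):
    """Return an eye point giving an oblique, slightly-above view of a bounding
    sphere (center, radius). The angles and distance are illustrative defaults."""
    cx, cy, cz = center
    dist = 3.0 * radius                  # far enough back to frame the object
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    eye = (cx + dist * math.cos(el) * math.sin(az),
           cy + dist * math.sin(el),
           cz + dist * math.cos(el) * math.cos(az))
    return eye, center                   # camera position and look-at target

def lerp(a, b, t):
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

# Animate the camera from its current position toward the requested view.
current_eye = (10.0, 1.0, 0.0)
goal_eye, target = oblique_view_of(center=(0.0, 0.0, 0.0), radius=1.0)
for frame in range(5):
    t = (frame + 1) / 5
    print(lerp(current_eye, goal_eye, t), "looking at", target)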
Surprisingly often, the technology of interaction is closely tied to the rest of graphics. Pick correlation, for instance, is most easily implemented with a ray-scene intersection test, the very same thing we optimized for making efficient ray-casting renderers. Keeping a virtual camera from passing through walls by surrounding it with a sphere that's constrained to lie in empty space uses the underlying technology of collision detection and response to ensure that the sphere doesn't pass through any scene geometry.
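A minimal sketch of pick correlation in this style (the scene representation and names are assumptions for the example): the pixel under the cursor has already been turned into a ray, and the same ray-sphere intersection a ray caster uses returns the closest object hit.

import math

def pick(ray_origin, ray_dir, spheres):
    """Return the index of the closest sphere hit by the ray, or None.

    spheres is a list of (center, radius) pairs; ray_dir is assumed to be
    normalized. This is the same intersection test a ray-casting renderer uses.
    """
    best_t, best_i = float("inf"), None
    ox, oy, oz = ray_origin
    dx, dy, dz = ray_dir
    for i, ((cx, cy, cz), r) in enumerate(spheres):
        # Solve |o + t*d - c|^2 = r^2 for the smallest positive t.
        lx, ly, lz = ox - cx, oy - cy, oz - cz
        b = 2.0 * (dx * lx + dy * ly + dz * lz)
        c = lx * lx + ly * ly + lz * lz - r * r
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue                      # ray misses this sphere
        t = (-b - math.sqrt(disc)) / 2.0
        if 0.0 < t < best_t:
            best_t, best_i = t, i
    return best_i

scene = [((0.0, 0.0, -5.0), 1.0), ((0.0, 0.0, -3.0), 0.5)]
print(pick((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), scene))   # prints 1: the nearer sphere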
Figure 21.18: Dragging any one of the three “shadows” of the airplane makes the airplane itself move. (Courtesy of the Brown Graphics Group, ©1992 ACM, Inc. Reprinted by permission.)