29.3.2 Interaction with Large Displays
A major conceptual challenge in using large displays for visualization is providing
efficient and natural paradigms for user interaction. On large displays, standard
window system interaction metaphors break down [25]. A simple example is moving
windows on the screen, which is trivial on regular displays but much harder on large
displays, where far greater pixel distances must be covered. Moreover, mouse and
keyboard interaction is cumbersome with large displays, because it usually requires
the user to sit down in front of the display. Large displays make walking around and
gesturing natural, whereas classical interaction paradigms lack this naturalness [49].
At the same time, high-resolution displays have been shown to improve users' ability
to compare and find targets, and to outperform lower-resolution displays in navigation
tasks [2].
Large displays invite metaphors such as that of the standard whiteboard [51], which
affords informal writing, sketching, and space management [22]. These display
devices also invite collaborative approaches to visualization [52] and multi-user
interaction [28]. Related approaches have been developed for collaborative visual-
ization on tabletop displays [58], which often employ tracking via overhead-mounted
cameras, as in the LambdaTable system [37].
Common interaction approaches leverage tracked objects or pointers, such as the
VisionWand [8], which provides the user with a passive wand that is tracked in 3D
and used together with visual widgets overlaid on the display. The physical size of
large displays can become a significant problem for such approaches, one that can be
circumvented by interaction metaphors that do not require direct, within-reach
pointing. Examples are infrared laser tracking [10] and the LumiPoint system [14],
which can track multiple pen-like objects with video cameras.
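Both wand- and laser-based techniques reduce pointing to the same geometric step: intersecting a tracked 3D ray with the display surface. The following sketch illustrates that step under simplifying assumptions (a planar display and a calibrated tracker reporting the pointer's pose in display coordinates); it is a generic illustration, not the implementation of any of the systems cited above.

```python
import numpy as np

def wand_to_cursor(wand_pos, wand_dir, plane_point, plane_normal):
    """Intersect the pointer's 3D ray with the display plane.

    All arguments are 3D numpy arrays; wand_dir need not be normalized.
    Returns the 3D hit point on the plane, or None if the pointer is
    parallel to the display or aimed away from it.
    Assumes tracker and display share one calibrated coordinate frame.
    """
    denom = np.dot(wand_dir, plane_normal)
    if abs(denom) < 1e-9:          # ray parallel to the display plane
        return None
    t = np.dot(plane_point - wand_pos, plane_normal) / denom
    if t < 0:                      # display lies behind the pointer
        return None
    return wand_pos + t * wand_dir

# Example: a wall in the x/y plane at z = 0, pointer held 2 m in front.
hit = wand_to_cursor(np.array([0.5, 1.2, 2.0]),   # pointer position
                     np.array([0.0, 0.0, -1.0]),  # pointing direction
                     np.array([0.0, 0.0, 0.0]),   # point on the wall
                     np.array([0.0, 0.0, 1.0]))   # wall normal
print(hit)  # -> [0.5 1.2 0. ]
```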
Whiteboard-style interaction can also be achieved by giving users small touch
screens and a pen [51]. More recently, similar interaction capabilities have been
provided by multi-touch tablets such as the Apple iPad, where multi-touch gestures
steer the visualization on the display wall [1]. Instead of relying on pointer objects,
gestures can also be recognized directly, for example as multi-finger gestural
input [40]; a minimal sketch of such gesture-driven view control is given after this
paragraph. Multi-touch input technologies can broadly be categorized as optical,
capacitive, or resistive [55]. Painting gestures can also be supported seamlessly
across the boundaries of multiple displays by stitching the individual parts
together [23]. Gesture recognition can further be combined with speech
recognition [48, 49]. Recent approaches employ other technological advances, such
as the inexpensive gyroscopes and accelerometers found in mobile phones and video
game consoles, for example using the peripherals of the Nintendo Wii video game
console for stroke-based rendering [21].
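As a concrete illustration of how multi-touch gestures can steer a view, the sketch below derives zoom and pan updates from two touch points tracked over consecutive frames, the essence of a pinch gesture. It assumes, hypothetically, an input device that reports matched finger positions per frame; it is not implied to be the method of any cited system.

```python
import math

def pinch_update(prev_pts, cur_pts):
    """Derive zoom and pan deltas from two tracked touch points.

    prev_pts, cur_pts: [(x, y), (x, y)] for the same two fingers in
    consecutive frames. Returns (scale, dx, dy) to apply to the view.
    """
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def centroid(pts):
        return (sum(p[0] for p in pts) / 2, sum(p[1] for p in pts) / 2)

    # Spreading the fingers apart yields scale > 1 (zoom in),
    # pinching them together yields scale < 1 (zoom out).
    scale = dist(*cur_pts) / max(dist(*prev_pts), 1e-9)
    (px, py), (cx, cy) = centroid(prev_pts), centroid(cur_pts)
    return scale, cx - px, cy - py   # centroid drift -> pan

# Example: fingers spread apart while drifting right.
print(pinch_update([(100, 200), (200, 200)],
                   [(90, 200), (230, 200)]))  # -> zoom 1.4x, pan +10 px
```

Because the zoom factor is the ratio of the current to the previous finger distance, repeated per-frame updates compose multiplicatively into a smooth, continuous zoom.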
 