private. Pen switching is also intuitive and provides clear visual feedback. How-
ever, the user has to buy several digital pens. Moreover, research shows that people
tend to use one single pen rather than switching between many different pens [91].
Switching between different pen modes using one single pen, e.g. by pressing a bar-
rel button on the pen, might alleviate this issue but only allows for a small number
of visibilities.
A third method relies on buttons that are printed on the printout. Each button rep-
resents one visibility level. By tapping on that button before writing the annotation,
the respective visibility is set. The advantage is that a large number of visibilities
can be supported using a single pen. Moreover, performing a simple pen tap is
quick and can be easily included into the annotation process. However, no visual
feedback on the visibility level is available on the printed document.
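The core of the button-based method is mapping a pen tap, reported in page coordinates, to the printed button it hit. The following is a minimal sketch of such hit-testing; the button geometry, level names, and function names are illustrative assumptions, not CoScribe's actual layout.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Button:
    """A printed button region on the paper sheet (coordinates in mm).
    Geometry and visibility names are hypothetical."""
    x: float
    y: float
    width: float
    height: float
    visibility: str

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

# One printed button per visibility level (illustrative layout).
TOOLBAR: List[Button] = [
    Button(10, 140, 15, 6, "private"),
    Button(27, 140, 15, 6, "group"),
    Button(44, 140, 15, 6, "public"),
]

def visibility_for_tap(px: float, py: float) -> Optional[str]:
    """Return the visibility level of the tapped button,
    or None if the tap landed outside the toolbar."""
    for button in TOOLBAR:
        if button.contains(px, py):
            return button.visibility
    return None
```

Because the decision reduces to a point-in-rectangle test on known coordinates, recognition is deterministic, which is what makes the method robust compared with gesture classification.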
A fourth method consists of classifying notes by performing specific pen ges-
tures. While this requires memorizing specific gestures, a potentially large number
of visibility levels can be supported. The gestures also visually communicate which
visibility has been defined. However, as gesture recognition involves uncertainty,
the user must be provided with instant feedback on whether a gesture has been
correctly recognized. This requires specific digital pens that can perform gesture
recognition and provide visual feedback directly on the pen, even in mobile condi-
tions without another computer around. Currently only Livescribe pens have these
capabilities. Moreover, the system has to distinguish gestures from ordinary hand-
writing or drawings. For this purpose, related research suggests using additional
hardware such as a foot pedal or a second pen [75].
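The foot-pedal suggestion amounts to a simple routing rule: strokes made while the pedal is held are interpreted as gestures, all others as ordinary handwriting. A minimal sketch of that rule, with class and method names of our own choosing:

```python
class StrokeRouter:
    """Routes pen strokes to gesture or handwriting processing
    based on the state of a foot pedal (hypothetical sketch)."""

    def __init__(self):
        self.pedal_down = False
        self.annotations = []   # ordinary handwriting strokes
        self.gestures = []      # strokes to feed the gesture recognizer

    def on_pedal(self, down: bool) -> None:
        self.pedal_down = down

    def on_stroke(self, stroke) -> None:
        # Each stroke is classified by the pedal state at write time,
        # so no recognizer has to guess the user's intent.
        if self.pedal_down:
            self.gestures.append(stroke)
        else:
            self.annotations.append(stroke)
```

The design trades extra hardware for certainty: the ambiguity is resolved by an explicit user action rather than by statistical classification.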
All of these methods can be combined with CoScribe and are compatible with the
collaborative visualizations presented in this chapter. In our current implementation,
we opted for the button-based method. It would have been too costly to equip the large
number of participants in our field studies with multiple pens each to allow for pen-
based differentiation. Moreover, at the time of our evaluations, Livescribe pens were
not yet available on the market¹; so in the case of gesture-based differentiation, no
immediate feedback on the success or failure of recognition could have been given.
A toolbar containing several printed “buttons” is printed in the center region of
each paper sheet (see Fig. 5.5). A visibility is associated with an individual annotation
by tapping with the pen on the corresponding button before writing the annotation.
Moreover, a visibility level can be set or modified later on by making two consecu-
tive pen taps on the button and on the annotation. While no graphical feedback on
the tagging is provided on the printed slide unless the user makes additional mark-
ings, the visibility level is visualized with specific colors in the CoScribe viewer
and on subsequent printouts. The viewer contains buttons similar to those on the
printouts for defining or modifying visibilities.
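The two tap interactions described above (arming a visibility before writing, and retagging an existing annotation with a button tap followed by an annotation tap) can be sketched as a small state machine. All names below are our own illustrative choices, not CoScribe's actual API.

```python
class VisibilityTagger:
    """Interprets pen taps on toolbar buttons and annotations
    (hypothetical sketch of the interaction logic)."""

    def __init__(self, default: str = "private"):
        self.pending = None      # armed by the most recent button tap
        self.current = default   # visibility applied to new annotations
        self.tags = {}           # annotation id -> visibility level

    def tap_button(self, visibility: str) -> None:
        # A button tap both arms a possible retag and selects the
        # visibility for the next annotation written.
        self.pending = visibility
        self.current = visibility

    def tap_annotation(self, annotation_id: str) -> None:
        # Second tap of a button/annotation pair: retag an existing note.
        if self.pending is not None:
            self.tags[annotation_id] = self.pending
            self.pending = None

    def write_annotation(self, annotation_id: str) -> None:
        # Newly written ink inherits the currently selected visibility.
        self.tags[annotation_id] = self.current
        self.pending = None
```

A short usage example: tapping the "public" button and then writing an annotation tags it public; a later "private" button tap followed by a tap on that annotation retags it private.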
A correct interpretation is guaranteed, since determining the pen position on a printed
button involves no uncertainty. Most pens provide graphical feedback to the user
at the moment the button area is tapped on (Fig. 5.6). Hence, the user can be sure the
classification has been correctly recognized. A problem of most current Anoto pens
¹ Even current Livescribe pens only partially support our concept, since they do not allow for
live streaming of pen data.