This “worst case” results in an average of 15.2 annotations shared by the user's own group and an additional 8 annotations for 100 participants (or 41 annotations for 500 participants). These high numbers apply only to a very small number of slides (only the top 10 % of slides average more than 2.8 annotations per user). To cope with an excessively large number of public annotations in these situations, the annotations could be filtered automatically. For instance, while personal and shared group annotations are visualized as discussed above, only those public annotations that members of the author's group have classified as relevant would be displayed to the entire audience. Another approach could consist of automatically summarizing the contents of annotations; this, however, requires quite reliable handwriting recognition.
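The relevance-based filtering described above can be sketched as follows. This is a minimal illustration, not CoScribe's actual implementation; the data model (`Annotation`, the `visibility` field, the mapping from authors to groups) is assumed for the example.

```python
from dataclasses import dataclass, field

# Hypothetical data model -- field names are illustrative, not CoScribe's API.
@dataclass
class Annotation:
    author: str
    visibility: str                      # "private", "group", or "public"
    relevance_votes: set = field(default_factory=set)  # users who marked it relevant

def visible_to_audience(annotations, author_groups):
    """Keep only those public annotations that at least one member of the
    author's own group has classified as relevant."""
    shown = []
    for a in annotations:
        if a.visibility != "public":
            continue
        group = author_groups.get(a.author, set())
        if a.relevance_votes & group:     # any vote from the author's group?
            shown.append(a)
    return shown
```

Personal and group annotations would still be rendered through the normal visualization; only the audience-wide view passes through this filter.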
5.4 Evaluation and Discussion
We conclude this chapter with the results of three evaluation studies of CoScribe.
The studies aim at evaluating CoScribe's annotation functionality. In two user stud-
ies, we evaluated the usability of CoScribe for within-lecture annotations and post-
lecture review. Our main goal was to examine whether the interaction techniques
are efficient, reliable, easy to learn and easy to use. A second main goal of the user
studies was to assess user satisfaction in order to examine if the novel interaction
techniques and visualizations are accepted by the users. In this respect, subjective
feedback was an important instrument. Further, we aimed to gather initial user experiences and feedback on potential improvements. In a third study, we focused
on handwriting recognition. We analyzed the recognition accuracy of handwritten
annotations that were made on lecture slides. Based on these results, we present an
approach that considerably increases the recognition accuracy for domain-specific
terms.
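One common way to improve recognition of domain-specific terms is to rescore the recognizer's candidate list against a lexicon of terms extracted from the lecture slides. The following sketch illustrates that general idea under stated assumptions; the function names, the n-best input format, and the bonus weight are hypothetical and not taken from the study.

```python
def rerank_nbest(nbest, domain_terms, bonus=0.3):
    """Pick the best candidate from a recognizer's n-best list, boosting the
    score of any candidate found in a domain-specific lexicon.

    nbest        -- list of (word, confidence) pairs from the recognizer
    domain_terms -- set of lowercased terms, e.g. extracted from slide text
    bonus        -- illustrative additive boost for in-lexicon candidates
    """
    rescored = [
        (word, score + (bonus if word.lower() in domain_terms else 0.0))
        for word, score in nbest
    ]
    return max(rescored, key=lambda pair: pair[1])[0]
```

With such a scheme, a technical term that the generic recognizer ranks second can win out over a more frequent everyday word, because it also appears on the slides.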
5.4.1 Study I: Field Study of Lecture Annotation
A first user study examined the use of CoScribe for annotating documents in the
field, as a tool for annotating lecture slides during regular computer science lec-
tures. Our goal was to assess the ease of learning and the ease of use of the printed
user interface and of the interaction techniques for making annotations and for clas-
sifying them with visibilities. A further question was whether the techniques can be
easily integrated into the ecology of lecture notetaking, which is characterized by a
high degree of intrinsic cognitive load. Moreover, we analyzed the annotations made by the participants in order to assess the types and frequencies of annotations made during the lectures.