the preview function, which visualizes all shared comments or all comments of a
specific user in a down-scaled manner.
In the interviews, three participants stated that a list view of all annotations
should complement this view to support users in systematically reviewing all
annotations. One participant suggested a function for filtering out unimportant
slides, which would allow users to create a printed summary of the lecture that
contains only the slides classified as important.
5.4.3 Study III: Performance of Handwriting Recognition
A substantial advantage of digital over traditional handwriting is the possibility
to recognize handwritten text and to offer full-text search within handwritten notes.
In this section, we analyze the recognition accuracy for handwritten annotations and
present an approach that significantly increases the recognition performance for
domain-specific terms.²
Method
For the evaluation, we used the Microsoft handwriting recognition engine that is part
of Windows XP Tablet PC Edition [118]. It is an on-line handwriting recognition
engine [119] that recognizes text from digital ink. In contrast to off-line handwriting
recognition, on-line recognition relies not only on the visual image of the handwriting
but also on spatio-temporal data, i.e., the temporal sequence of the two-dimensional
coordinates of the writing is available and used for recognition. The engine utilizes
a built-in dictionary of words of a given language, which can be extended or replaced
by other dictionaries. It automatically segments a set of pen traces into individual
words and separates text from graphics. For each word, a best guess and up to nine
alternates with lower confidence scores are returned. The version available at the
time of our experiments was not trainable to an individual user's handwriting.
We used the German dictionary for our experiments.
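
Since the engine exposes a best guess and up to nine alternates per word, domain-specific
vocabulary that is known in advance (for example, terms taken from the lecture slides) can
be used to choose among these candidates. The following Python sketch only illustrates this
idea: the Candidate record, the candidate list, and the term list are hypothetical, and the
selection rule is a generic heuristic rather than the approach evaluated in this study.

from dataclasses import dataclass
from typing import List, Set

@dataclass
class Candidate:
    """One recognition hypothesis for a single handwritten word."""
    text: str
    confidence: float  # higher means more confident

def pick_word(candidates: List[Candidate], domain_terms: Set[str]) -> str:
    """Choose the recognized text for one segmented word.

    `candidates` holds the engine's best guess first, followed by up to
    nine alternates with lower confidence scores.  If any candidate is a
    known domain-specific term, prefer the most confident such match;
    otherwise fall back to the best guess.
    """
    in_domain = [c for c in candidates if c.text.lower() in domain_terms]
    if in_domain:
        return max(in_domain, key=lambda c: c.confidence).text
    return candidates[0].text  # the engine's best guess

# Hypothetical recognizer output for one word of a German annotation.
candidates = [
    Candidate("Hashing", 0.61),
    Candidate("Hashtabelle", 0.58),
    Candidate("Haschen", 0.40),
]
domain_terms = {"hashtabelle", "b-baum", "heapsort"}  # e.g. taken from the slides
print(pick_word(candidates, domain_terms))  # -> "Hashtabelle"

With the actual engine, a comparable effect can also be achieved by extending its built-in
dictionary with such domain-specific terms, as mentioned above.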
First, we evaluated the baseline performance of the handwriting recognition engine.
We used a set of annotations in German that we had collected during the field study
of CoScribe. From a set of 679 annotations that 10 students had made on the lecture
slides of a computer science lecture, we removed annotations that contained only
drawings, whose text varied extremely in size (characters differing by more than
100 % in height), or that were illegible to human readers. Of this reduced set, we
randomly chose 169 handwritten annotations. The text of these annotations was
manually transcribed.
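
To make these selection criteria concrete, the following Python sketch shows one possible
form of the filtering and sampling step. The Annotation record and its fields are
hypothetical placeholders for the data collected in the field study, not the actual
CoScribe data structures.

import random
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Annotation:
    """Hypothetical record for one handwritten annotation from the field study."""
    char_heights: List[float]   # heights of the individual characters
    contains_text: bool         # False for annotations that are pure drawings
    legible: bool               # judged by a human reader

def is_usable(a: Annotation) -> bool:
    """Exclusion criteria of the baseline evaluation: drop drawings,
    illegible annotations, and annotations whose characters differ
    by more than 100 % in height."""
    if not a.contains_text or not a.legible:
        return False
    if a.char_heights and max(a.char_heights) > 2 * min(a.char_heights):
        return False
    return True

def sample_test_set(annotations: List[Annotation], n: int = 169,
                    seed: Optional[int] = None) -> List[Annotation]:
    """Randomly choose n annotations from the reduced set."""
    usable = [a for a in annotations if is_usable(a)]
    return random.Random(seed).sample(usable, min(n, len(usable)))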
Common metrics used to evaluate the performance of speech recognition and
handwriting recognition engines are the word error rate (WER) and the character
error rate (CER). Both are based on the edit distance between the recognizer output
and a manual reference transcription, relating the number of substituted, deleted,
and inserted units to the length of the reference.
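
These error rates can be computed with a standard Levenshtein distance. The following
minimal Python sketch is a generic reference implementation, not the evaluation code
used in the study; the example strings are invented.

from typing import List

def edit_distance(ref: List[str], hyp: List[str]) -> int:
    """Levenshtein distance between two token sequences
    (substitutions, deletions and insertions each cost 1)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev_diag, d[0] = d[0], i
        for j, h in enumerate(hyp, start=1):
            cur = min(d[j] + 1,              # deletion
                      d[j - 1] + 1,          # insertion
                      prev_diag + (r != h))  # substitution or match
            prev_diag, d[j] = d[j], cur
    return d[len(hyp)]

def word_error_rate(reference: str, recognized: str) -> float:
    """WER: word-level edit distance divided by the number of reference words."""
    ref_words = reference.split()
    return edit_distance(ref_words, recognized.split()) / len(ref_words)

def character_error_rate(reference: str, recognized: str) -> float:
    """CER: character-level edit distance divided by the reference length."""
    return edit_distance(list(reference), list(recognized)) / len(reference)

# Manual transcription vs. a hypothetical recognizer output.
print(word_error_rate("binäre Suche in der Hashtabelle",
                      "binäre Sache in der Nachtfalle"))  # -> 0.4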
² I gratefully acknowledge the graduate student Jie Zhou, with whom I conducted this
study in collaboration.