Lambrix and Edberg [Lambrix and Edberg 2003] performed a user evaluation of the matching tools PROMPT and Chimaera [McGuinness et al. 2000] for the specific use case of merging ontologies in bioinformatics. The experiment involved eight users, four with computer science backgrounds and four with biology backgrounds. The participants were given a number of tasks to perform, a printed user manual, and the software's help system for support. They were also instructed to "think aloud" while an evaluator took notes during the experiment. Afterward, the users completed a questionnaire about their experience. The tools were evaluated with the same precision and recall measures used in the previously described PROMPT experiment [Noy and Musen 2002], while the user interfaces were evaluated using the REAL (Relevance, Efficiency, Attitude, and Learnability) approach [Löwgren 1994]. PROMPT outperformed Chimaera under both criteria; however, the participants found learning how to merge ontologies equally difficult in either tool. They found it particularly difficult to perform non-automated procedures in PROMPT, such as creating user-defined merges.
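For reference, precision and recall over alignments are computed in the standard way: treating an alignment as a set of correspondences, precision is the fraction of suggested correspondences that appear in the reference alignment, and recall is the fraction of reference correspondences that were found. A minimal sketch (the correspondence pairs below are hypothetical):

```python
def precision_recall(found, reference):
    """Precision and recall of a computed alignment against a reference.

    Both arguments are sets of correspondences, here represented as
    (source_entity, target_entity) pairs.
    """
    correct = found & reference
    precision = len(correct) / len(found) if found else 0.0
    recall = len(correct) / len(reference) if reference else 0.0
    return precision, recall

# Hypothetical example: two of the three suggested correspondences
# appear in the reference alignment.
found = {("Author", "Writer"), ("Book", "Volume"), ("Page", "Leaf")}
reference = {("Author", "Writer"), ("Book", "Volume"), ("Chapter", "Section")}
print(precision_recall(found, reference))  # (0.666..., 0.666...)
```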
The third experiment evaluated PROMPT and the alternative user interface CogZ. The experiment focused on evaluating the cognitive support provided by the tools in terms of their effectiveness, efficiency, and satisfaction [Falconer 2009]. The researchers assigned participants eighteen matching and comprehension tasks, nine to be performed with each tool. The evaluators then measured the time each participant took to complete a task and the accuracy with which they performed it. Participant satisfaction was measured via exit interviews and the System Usability Scale [Brooke 1996].
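The System Usability Scale yields a score between 0 and 100 from ten five-point Likert items: odd-numbered (positively worded) items contribute their response minus 1, even-numbered (negatively worded) items contribute 5 minus their response, and the summed contributions are scaled by 2.5 [Brooke 1996]. A minimal sketch of this scoring (the example responses are hypothetical):

```python
def sus_score(responses):
    """System Usability Scale score (0-100) from ten 1-5 Likert responses.

    Items are in questionnaire order: odd-numbered items are positively
    worded, even-numbered items are negatively worded.
    """
    assert len(responses) == 10
    contributions = [
        r - 1 if i % 2 == 0 else 5 - r  # 0-based index: even index = odd item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical responses from one participant.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```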
This last experiment was significantly more comprehensive than the previous studies. The researchers used quantitative analysis to measure the differences in participant performance across the tasks, and qualitative approaches to help explain those differences. Furthermore, the design of the experiment was guided by an underlying theory that the authors had previously proposed [Falconer and Storey 2007b].
7 Discussion
In this section, we return to the ontology tools discussed in our survey. We pro-
vide a brief summary of these tools in terms of their visual paradigms, plugins, and
algorithm support (see Table 2.1 ).
Table 2.1 provides a high-level comparison of the surveyed tools; however, a more detailed comparison and evaluation is needed. In the next section, we discuss this need more deeply, along with other challenges facing the area of interactive techniques for ontology matching.