create matchings interactively through the web application. The online community can comment on the matchings, and discuss and refine them. There are currently more than 30,000 such matchings available [Noy et al. 2008].
One important aspect of BioPortal's matching support is that both the ontologies and the matchings are available via web services. This is an important distinction from the early work of Zhdanova and Shvaiko. By making these resources readily consumable by anyone who wishes to use this information, BioPortal greatly lowers the barrier to entry for applications that need matchings. The consuming applications do not need to be concerned with updates to the ontologies or matchings, as those are handled by BioPortal and immediately available via the services. The services also potentially promote feedback on, and improvement of, the matchings in BioPortal, since it is in the consuming applications' best interest to improve the matchings. Without the services, by contrast, if the matchings were simply downloaded, consumers could make local changes without contributing them back to the community.
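As an illustration of this service-based consumption model, the sketch below parses a JSON response from a hypothetical matching web service. The payload shape and field names are assumptions for illustration only, not BioPortal's actual API; the point is that a consumer re-fetches the payload rather than maintaining a local copy.

```python
import json

# Hypothetical payload, shaped like what a matching web service might
# return. Field names ("mappings", "source", "target", "relation") are
# illustrative, not BioPortal's actual response format.
SAMPLE_RESPONSE = json.dumps({
    "mappings": [
        {"source": "ncit:Neoplasm", "target": "snomed:Tumor",
         "relation": "skos:closeMatch"},
        {"source": "ncit:Gene", "target": "go:Gene",
         "relation": "skos:exactMatch"},
    ]
})

def parse_mappings(payload: str) -> list[tuple[str, str, str]]:
    """Extract (source, target, relation) triples from a service response.

    Because the service is the single point of update, a consumer that
    re-fetches this payload always sees the community's latest matchings.
    """
    data = json.loads(payload)
    return [(m["source"], m["target"], m["relation"])
            for m in data["mappings"]]

print(parse_mappings(SAMPLE_RESPONSE))
```

A real client would obtain the payload over HTTP on each use, so corrections made through the web application propagate to every consumer without any local re-download step.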
A community-based web approach to collecting and sharing matchings holds great potential. However, this area of study is still very new. To the best of our knowledge, researchers have not yet performed any evaluation to determine whether users can be motivated to contribute to such projects and whether such an approach is feasible. In the next section, we survey existing user-based evaluations and experiments that have been carried out in the ontology matching community. These experiments have mostly focused on the differences between two tools or on how users interpret the automatic suggestions computed by the underlying algorithms.
6 Experiments and Evaluation
As our survey of tools in this chapter demonstrates, the development of semi-automatic tools for ontology matching has been gaining momentum. However, evaluation of such tools is still very much in its infancy. There have been only a handful of user-based evaluations carried out in this area. All of these experiments have involved the PROMPT system.
The first experiment was led by the authors of the PROMPT tool. The experiment
concentrated on evaluating the correspondence suggestions provided by the tool by
having several users merge two ontologies. The researchers recorded the number
of steps, suggestions followed, suggestions that were not followed, and what the
resulting ontologies looked like. Precision and recall were used to evaluate the qual-
ity of the suggestions: precision was the fraction of the tool's suggestions that the
users followed and recall was the fraction of the operations performed by the users
that were suggested by the tool. The experiment involved only four users, which was too small a sample to draw any meaningful conclusions. The authors stated that “[w]hat we really need is a larger-scale experiment that compares tools with similar sets of pragmatic criteria” [Noy and Musen 2002, p. 12].
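The precision and recall measures used in that experiment can be sketched as a short computation. The operation names below are invented for illustration; they are not data from the actual study.

```python
def suggestion_precision_recall(suggested, performed):
    """Precision/recall as defined in the PROMPT experiment:
    precision = fraction of the tool's suggestions the user followed,
    recall    = fraction of the user's operations the tool suggested."""
    suggested, performed = set(suggested), set(performed)
    followed = suggested & performed  # suggestions the user acted on
    precision = len(followed) / len(suggested) if suggested else 0.0
    recall = len(followed) / len(performed) if performed else 0.0
    return precision, recall

# Toy session: the tool proposed four operations, the user performed
# five, three of which matched suggestions (hypothetical values).
p, r = suggestion_precision_recall(
    ["merge:A", "merge:B", "merge:C", "merge:D"],
    ["merge:A", "merge:B", "merge:C", "rename:E", "delete:F"],
)
print(p, r)  # 0.75 0.6
```

Note that recall here is measured against the user's final operations rather than a gold-standard alignment, so it reflects how well the tool anticipated the users, not alignment correctness.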