8 Challenges and Next Steps
As our survey in this chapter demonstrates, researchers are developing more and more
interactive approaches for supporting semiautomatic ontology matching. Many
desktop tools for both ontology and schema matching make use of a similar visual
representation of matchings: the line-based metaphor for representing a
correspondence. This approach is attractive because it is easy to understand what the
visualization is attempting to convey. However, previous studies have indicated large
variation in the usability of such an approach [Falconer and Storey 2007a; Falconer
2009]. It appears that visual support for matching is not as simple as copying this
particular interface style. What is ultimately needed to help users make efficient and
effective matching decisions is a combination of features and support techniques
that assist with a user's workflow.
Most of the tools in this research area have not been based on theoretical find-
ings from behavioral user studies. They have instead often evolved from a need
for some level of interaction with the underlying algorithm. However, without tool
evaluations or underlying theories, it is impossible to pinpoint the exact features
that lead to a more usable tool. Researchers must address this lack of evaluation and
theoretical foundations.
In 2005, a group of researchers started the Ontology Alignment Evaluation
Initiative (OAEI)10 to provide a standard platform on which developers can compare
and evaluate their ontology matching approaches. OAEI provides benchmark
matching datasets that enable developers of different matching systems to compare
their results. At the moment, OAEI evaluates only automatic approaches. We must
extend this evaluation framework to compare and contrast interactive tools as well.
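To make concrete what such a benchmark comparison involves, the following minimal sketch (in Python; not OAEI's actual evaluation code, and the Correspondence type and toy data are purely illustrative assumptions) scores a tool's produced alignment against a reference alignment using the standard precision, recall, and F-measure:

from dataclasses import dataclass

@dataclass(frozen=True)
class Correspondence:
    """A single match between a source and a target entity (illustrative)."""
    source: str
    target: str

def evaluate(produced, reference):
    """Standard precision/recall/F-measure of a produced alignment against
    a reference alignment, both given as sets of correspondences."""
    true_positives = len(produced & reference)
    precision = true_positives / len(produced) if produced else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# Toy example: a reference alignment and one tool's output.
reference = {Correspondence("Person", "Human"), Correspondence("hasName", "name")}
produced = {Correspondence("Person", "Human"), Correspondence("hasAge", "age")}
print(evaluate(produced, reference))  # (0.5, 0.5, 0.5)

Because every system is scored against the same reference alignments, results from different matching systems become directly comparable.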
Such evaluation will require the development of a standardized comparison
framework and evaluation protocols. Comparing interactive tools is more
challenging than comparing automatic tools for several reasons. First, the evaluation
of interactive tools is more expensive because it requires the participation of domain
experts in creating the matchings. Second, the participation of humans in the
evaluation introduces inevitable bias and differences in the level of expertise and
interests of the users who perform the matchings; familiarity with some tools might
bias users toward particular approaches and paradigms. Third, as our survey shows,
the tools vary significantly in the types of input that they take and the types of
analysis that they perform during the interactive stages. To compare the tools, we
must not only characterize these differences but also develop protocols that allow
us to evaluate unique aspects of the tools while keeping the comparison meaningful.
We will also need common interfaces that enable evaluators to provide similar
initial conditions for the tools, such as an initial set of matchings, and to compare
the results, such as the matchings produced by the users.
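As one way to picture such a common interface, here is a purely hypothetical sketch (reusing the Correspondence and evaluate definitions from the previous sketch; no such standard API exists yet) of the contract an evaluation harness might impose, in which every tool receives the same initial conditions and must return the user-produced matchings in a comparable form:

from abc import ABC, abstractmethod

class InteractiveMatchingTool(ABC):
    """Hypothetical contract an evaluation harness could impose on tools."""

    @abstractmethod
    def initialize(self, source_ontology, target_ontology, initial_matchings):
        """Load the two ontologies and seed the tool with the same
        algorithm-suggested matchings that every other tool receives."""

    @abstractmethod
    def run_session(self):
        """Run one interactive session with a domain expert and return the
        set of correspondences the user finally accepted."""

def compare_tools(tools, source_ontology, target_ontology,
                  initial_matchings, reference):
    """Give every tool identical initial conditions, then score each
    user-produced alignment against the same reference."""
    for tool in tools:
        tool.initialize(source_ontology, target_ontology, initial_matchings)
        produced = tool.run_session()
        print(type(tool).__name__, evaluate(produced, reference))

Fixing only the inputs and outputs, rather than the interaction itself, would leave each tool free to differ in its interactive analysis while keeping the final comparison meaningful.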
This evaluation would also face some of the same challenges that OAEI faces.
For example, there are many strong tools from both industry and research, yet many
are not publicly available, making even informal comparisons challenging.
10. http://oaei.ontologymatching.org