may require more effort than others [MacKenzie et al. 1991]; for example, a point-and-click is much easier than dragging or typing. Weights can be assigned to each type of action to build a cost model for quantifying the total required effort.
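To make the idea concrete, the following is a minimal sketch of such a weighted cost model in Python; the particular weights, the action names, and the logging of a design session as a list of actions are illustrative assumptions rather than values from the cited work.

# Minimal sketch of a weighted action cost model. The weights and
# the action vocabulary below are illustrative assumptions.
ACTION_WEIGHTS = {
    "click": 1.0,   # point-and-click: cheapest action
    "drag": 2.5,    # dragging: more effort
    "type": 4.0,    # typing: most effort per action
}

def total_effort(action_log):
    """Sum the weighted costs of all actions the designer performed."""
    return sum(ACTION_WEIGHTS[action] for action in action_log)

# Example: a design session logged as a sequence of actions.
session = ["click", "click", "drag", "type", "click"]
print(total_effort(session))  # 1 + 1 + 2.5 + 4 + 1 = 9.5

Under such a model, a tool that accomplishes the same mapping task with a lower total effort score would be considered more efficient.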
One of the limitations of the above model is that it does not distinguish between clicks leading to the final mapping design and corrective actions, such as undo or delete operations. It assumes that the mapping designer is familiar with the mapping tool and makes no mistakes. Another limitation is that the model does not capture the time the designer spends thinking. A mapping tool that requires the designer to think for a long time before designing the mapping with only a few clicks should not be considered more efficient than one that requires less thinking but a few more clicks. A final limitation is that the model does not consider features such as presentation layout, visual aids, and access to frequently used tasks.
In the area of schema integration, the Thalia benchmark [Hammer et al. 2005] can be used for objectively evaluating the capabilities of integration technology by taking into account, besides the correctness of the solution, the amount of programmatic effort (i.e., the complexity of external functions) needed to resolve any heterogeneity. For a fair comparison, any measurement of the needed effort must be done on the implementations of the twelve queries that Thalia provides. However, Thalia does not provide any specification of how this “effort” is to be measured.
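Since Thalia leaves the effort metric open, one hypothetical way to instantiate it is to score each of the twelve query implementations by the size of the external functions it requires; the sketch below, including the line-counting proxy, is an assumption made for illustration and is not part of the benchmark.

# Hypothetical effort measure for a Thalia-style evaluation: score each
# query implementation by the non-blank, non-comment lines of its
# external functions. Thalia itself does not prescribe such a metric.
def function_effort(source_code):
    """Count non-blank, non-comment lines as a crude complexity proxy."""
    return sum(1 for ln in source_code.splitlines()
               if ln.strip() and not ln.strip().startswith("#"))

def benchmark_effort(query_to_functions):
    """Total effort per query over all external functions it uses."""
    return {query: sum(function_effort(src) for src in funcs)
            for query, funcs in query_to_functions.items()}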
7 Measuring Effectiveness
Measuring the effectiveness of a mapping or matching tool means measuring whether (or to what degree) the tool fulfills its expectations for a given task. In the case of matching, an expert user typically knows what the correct matches are, and the matching tool is expected to find them. Thus, evaluating its effectiveness boils down to a comparison between the expected set of matches and the set of matches that the tool generated. The situation is slightly different for mapping systems. Since the expected output of a mapping system is a set of mappings that is used to generate the target (or global) instance, evaluating whether the mapping system has fulfilled its expectations can be done by checking whether the generated mappings can produce the expected target instance, or how close the instance that the generated mappings produce is to the expected one. This comparison can be done either extensionally, by comparing instances, or intensionally, by comparing the generated transformation expressions, i.e., the mappings. In this section, we provide an overview of metrics that have been used in the literature for measuring such effectiveness.
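For the matching case, the set comparison described above is commonly summarized by overlap measures such as precision and recall. The following is a minimal sketch; representing a match as a (source element, target element) pair is an assumption made for illustration.

# Minimal sketch of comparing an expected match set against a tool's
# output, assuming matches are encoded as (source, target) pairs.
def precision_recall(expected, generated):
    correct = expected & generated  # matches the tool got right
    precision = len(correct) / len(generated) if generated else 0.0
    recall = len(correct) / len(expected) if expected else 0.0
    return precision, recall

expected = {("emp.name", "person.fullname"), ("emp.dob", "person.birthdate")}
generated = {("emp.name", "person.fullname"), ("emp.id", "person.ssn")}
print(precision_recall(expected, generated))  # (0.5, 0.5)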
7.1 Supported Scenarios
One way to evaluate a matching or mapping tool is by counting the percentage of scenarios from a provided list that it can successfully implement. A basic assumption is that there is an oracle providing the ground truth for each of these