11.2.4 Context-Level Evaluation
Sometimes the ontology is part of a larger collection of ontologies that may reference one another (e.g., one ontology may use a class or concept declared in another), for example on the web or within an institutional library of ontologies. This context can be used to evaluate the ontology in various ways. For example, the Swoogle search engine of Ding et al. [5] uses cross-references between
example, the Swoogle search engine of Ding et al. [5] uses cross-references between
semantic-web documents to define a graph and then compute a score for each ontol-
ogy in a manner analogous to PageRank used by the Google web search engine. The
resulting “ontology rank” is used by Swoogle to rank its query results. A similar ap-
proach used in the OntoKhoj portal of Patel et al. [21]. In both cases an important
difference in comparison to PageRank is that not all “links” or references between
ontologies are treated the same. For example, if one ontology defines a subclass of
a class from another ontology, this reference might be considered more important
than if one ontology only uses a class from another as the domain or range of some
relation.
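To illustrate the idea, the following Python sketch computes a weighted, PageRank-style score over a small graph of ontology cross-references. The reference graph, the link weights, and the ontology names are illustrative assumptions rather than the actual Swoogle or OntoKhoj algorithms; the only point carried over is that different kinds of references can be weighted differently.

def ontology_rank(references, damping=0.85, iterations=50):
    """references: dict mapping ontology -> {referenced ontology: link weight}.
    Returns a PageRank-style score for every ontology in the graph."""
    nodes = set(references)
    for targets in references.values():
        nodes.update(targets)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for source, targets in references.items():
            total = sum(targets.values())
            if total == 0:
                continue
            # Distribute the source's score over its outgoing references,
            # proportionally to each reference's weight. Ontologies with no
            # outgoing references simply do not pass their score on (a
            # simplification; real implementations handle such "dangling"
            # nodes explicitly).
            for target, weight in targets.items():
                new_rank[target] += damping * rank[source] * weight / total
        rank = new_rank
    return rank

# Hypothetical repository: O1 subclasses a class declared in O2 (weight 2.0),
# while O3 merely reuses classes from O1 and O2 as domains or ranges (weight 1.0).
refs = {
    "O1": {"O2": 2.0},
    "O3": {"O2": 1.0, "O1": 1.0},
}
for onto, score in sorted(ontology_rank(refs).items(), key=lambda x: -x[1]):
    print(onto, round(score, 3))

With these weights, O2 accumulates the largest score because it is the target of the “strongest” reference, which is exactly the behaviour a weighted variant of PageRank is meant to capture.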
Alternatively, the context for evaluation may be provided by human experts;
for example, Supekar [26] proposes that an ontology be enhanced with metadata
such as its design policy, how it is being used by others, as well as “peer reviews”
provided by users of this ontology. A suitable search engine could then be used to
perform queries on this metadata and would aid the user in deciding which of the
many ontologies in a repository to use. The downside of this approach is that it relies almost entirely on manual human effort, both to provide the annotations and to use them when evaluating and selecting an ontology.
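A minimal sketch of how such metadata-driven selection might look is given below, assuming hypothetical metadata fields (design policy, number of known users, peer-review ratings) and an equally hypothetical filtering query; Supekar's proposal does not prescribe these particular fields or this interface.

from dataclasses import dataclass, field

@dataclass
class OntologyMetadata:
    name: str
    design_policy: str                      # free-text description of how it was built
    used_by: int                            # number of known applications reusing it
    peer_reviews: list = field(default_factory=list)  # reviewer ratings on a 1-5 scale

    @property
    def average_review(self):
        return sum(self.peer_reviews) / len(self.peer_reviews) if self.peer_reviews else 0.0

def select(candidates, min_reviews=2, min_rating=3.5):
    """Keep ontologies with enough sufficiently positive peer reviews,
    ordered by how widely they are already reused."""
    eligible = [m for m in candidates
                if len(m.peer_reviews) >= min_reviews and m.average_review >= min_rating]
    return sorted(eligible, key=lambda m: m.used_by, reverse=True)

repository = [
    OntologyMetadata("travel-v2", "built following a documented design policy",
                     used_by=14, peer_reviews=[4, 5, 4]),
    OntologyMetadata("travel-v1", "ad hoc", used_by=3, peer_reviews=[3]),
]
for m in select(repository):
    print(m.name, round(m.average_review, 2))

The sketch also makes the drawback visible: every field has to be filled in and kept up to date by people before such a query returns anything useful.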
11.2.5 Application-Based Evaluation
Typically, the ontology will be used in some kind of application or task. The outputs
of the application, or its performance on the given task, might be better or worse
depending partly on the ontology used in it. Thus one might argue that a good
ontology is one which helps the application in question produce good results on
the given task. Ontologies may therefore be evaluated simply by plugging them
into an application and evaluating the results of the application. This is elegant
in the sense that the output of the application might be something for which a
relatively straightforward and non-problematic evaluation approach already exists.
For example, Porzel and Malaka [22] describe a scenario where the ontology, with its relations (both is-a and others), is used primarily to determine how closely related the meanings of two concepts are. The task is a speech recognition problem in which there may be several hypotheses about what a particular word in the sentence really means; a hypothesis should be coherent, meaning that the interpretations of the individual words should be concepts that are relatively closely related to each other.
Thus the ontology is used to measure distance between concepts and thereby to
assess the coherence of hypotheses (and choose the most coherent one). Evaluation
of the final output of the task is relatively straightforward: the proposed interpretations of the sentences are simply compared with a gold standard provided by humans.
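The sketch below illustrates this use of an ontology as a distance measure, in the spirit of the scenario just described. The toy concept graph, the shortest-path distance, and the average-pairwise-closeness coherence score are illustrative assumptions, not the actual method of Porzel and Malaka.

from collections import deque
from itertools import combinations

# Toy concept graph: undirected edges derived from is-a and other relations.
EDGES = {
    ("castle", "building"), ("church", "building"), ("building", "landmark"),
    ("river", "waterway"), ("waterway", "landmark"),
    ("riverbank", "river"), ("financial_institution", "building"),
}
GRAPH = {}
for a, b in EDGES:
    GRAPH.setdefault(a, set()).add(b)
    GRAPH.setdefault(b, set()).add(a)

def distance(c1, c2):
    """Shortest-path distance between two concepts in the ontology graph."""
    if c1 == c2:
        return 0
    seen, frontier = {c1}, deque([(c1, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nxt in GRAPH.get(node, ()):
            if nxt == c2:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return float("inf")  # concepts in disconnected parts of the ontology

def coherence(interpretation):
    """Average pairwise closeness of the concepts assigned to the words;
    higher values mean the interpretation is more coherent."""
    pairs = list(combinations(interpretation, 2))
    if not pairs:
        return 0.0
    return sum(1.0 / (1 + distance(a, b)) for a, b in pairs) / len(pairs)

# Two hypotheses for a sentence about sightseeing near a castle and a river,
# differing in how the ambiguous word "bank" is interpreted.
hypotheses = [
    ["castle", "river", "riverbank"],              # "bank" read as the river bank
    ["castle", "river", "financial_institution"],  # "bank" read as a bank branch
]
best = max(hypotheses, key=coherence)
print(best, round(coherence(best), 3))

Evaluating the whole pipeline then reduces to counting how often the chosen interpretation matches the human-provided gold standard, which is exactly the kind of straightforward comparison the application-based approach relies on.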
An approach like this can elegantly side-step the various complications of on-
tology evaluation and translate them to the problem of evaluating the application
output, which is often simpler. However, this approach to ontology evaluation also has drawbacks of its own.