some parallels to test-driven development, in which more time is spent crafting
the tests (in this case, the competency questions) than on the ontology authoring
itself, to ensure that all the required reasoning is possible and the expected outcomes
are met. For example, a competency question might be “Find all the places with agri-
culture as a specified purpose that are adjacent to watercourses.” To return the test
results, the ontology needs to include concepts like Place, Agriculture, Purpose, and
Watercourse, as well as relationships like “adjacent.” But, we also need to make sure
the links in the logic chain are included so that the question will return examples of
Farms adjacent to Streams. That is, statements like “Every Farm is a kind of Place
that has purpose Agriculture” and “Every Stream is a kind of Watercourse” will
need to be included so that facts such as “Manor Farm is adjacent to Kingfisher
Stream” will cause “Manor Farm” to be included in the results.
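The logic chain described above can be sketched with a toy triple store in plain Python (identifiers such as `hasPurpose` and `adjacentTo` are illustrative stand-ins, not Merea Maps' actual model, and the hand-rolled subclass closure stands in for what an ontology reasoner would infer):

```python
# Toy triple store illustrating why the competency question needs the full
# logic chain: Farm subClassOf Place, Stream subClassOf Watercourse, and
# "Every Farm has purpose Agriculture". All names are illustrative.

triples = {
    ("Farm", "subClassOf", "Place"),
    ("Stream", "subClassOf", "Watercourse"),
    ("Farm", "hasPurpose", "Agriculture"),   # class-level statement
    ("ManorFarm", "type", "Farm"),
    ("KingfisherStream", "type", "Stream"),
    ("ManorFarm", "adjacentTo", "KingfisherStream"),
}

def classes_of(individual):
    """All classes of an individual, following subClassOf transitively."""
    closed = {c for (s, p, c) in triples if s == individual and p == "type"}
    frontier = set(closed)
    while frontier:
        supers = {sup for (s, p, sup) in triples
                  if p == "subClassOf" and s in frontier}
        frontier = supers - closed
        closed |= supers
    return closed

def purposes_of(individual):
    """Purposes asserted on the individual or inherited from its classes."""
    subjects = {individual} | classes_of(individual)
    return {o for (s, p, o) in triples if p == "hasPurpose" and s in subjects}

def competency_question():
    """Places with purpose Agriculture that are adjacent to a watercourse."""
    return [s for (s, p, o) in triples
            if p == "adjacentTo"
            and "Place" in classes_of(s)
            and "Agriculture" in purposes_of(s)
            and "Watercourse" in classes_of(o)]

print(competency_question())  # ['ManorFarm']
```

Removing either subclass statement makes the query return nothing, which is exactly the kind of gap that running competency questions against the ontology is meant to expose.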
Where competency questions differ from traditional software engineering prac-
tices of test-driven development, however, is that we may well also be hoping for
unexpected outcomes of a reasoner; particularly when integrating two ontologies,
new information may be discovered serendipitously. It is therefore almost impossible
to write questions that test for these unexpected outcomes, and competency questions
of this nature clearly cannot be defined early in the authoring process. It is
important, though, that they are written as early as possible and developed in
step with the ontology, being built on and expanded as development progresses.
10.4.3 Building a Lexicon and Glossary
There are two principal sources that are used to build the lexicon and glossary: docu-
mentation and domain experts. The involvement of domain experts, either through
direct involvement in the ontology authoring processes (something that we would
strongly recommend) or indirectly through interviewing, can be especially enlight-
ening. This is because documentary sources are often incomplete, out of date, and
contradictory; documents that describe working practices or specifications may not
reflect what is actually done (raising a separate question: which needs to be
corrected, the guidance or the practice?). Both documents and domain experts will also
provide descriptions that rely on assumed knowledge, so more detail may need to be
teased out either through questioning of the domain expert or through reference to
other material, such as dictionaries that provide a full definition.
Lexically, nouns typically identify possible classes of interest and verbs possible
relationships (properties). Let us consider the following definition of a Duck Pond
found in Merea Maps' Guide for Field Surveyors: "A duck pond is a pond that
provides a habitat for ducks. Duck ponds may contain a duck house." Duck Pond, Pond,
Habitat, Ducks, and Duck House are therefore all candidates for classes within the
ontology and “is a,” “provides,” and “contain” are candidates for properties.
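This noun-to-class and verb-to-property mapping can be sketched as candidate triples (a toy sketch in plain Python; the property names `providesHabitatFor` and `mayContain` are hypothetical readings of the definition, not terms from Merea Maps' ontology):

```python
# Candidate ontology fragments read off the Duck Pond definition.
# "is a" -> subClassOf; "provides a habitat for" and "may contain"
# are rendered here as single object properties (one modelling choice
# among several; Habitat could instead be kept as a class of its own).

candidate_triples = [
    ("DuckPond", "subClassOf", "Pond"),          # "a duck pond is a pond"
    ("DuckPond", "providesHabitatFor", "Duck"),  # "provides a habitat for ducks"
    ("DuckPond", "mayContain", "DuckHouse"),     # "may contain a duck house"
]

candidate_classes = sorted({t[0] for t in candidate_triples}
                           | {t[2] for t in candidate_triples})
candidate_properties = sorted(t[1] for t in candidate_triples)

print(candidate_classes)     # ['Duck', 'DuckHouse', 'DuckPond', 'Pond']
print(candidate_properties)  # ['mayContain', 'providesHabitatFor', 'subClassOf']
```

Note that folding "habitat" into the property name is itself a design decision; whether Habitat deserves to be a class depends on whether the ontology needs to say anything further about habitats.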
This description itself can probably be used directly to help construct the glossary
entry for Duck Pond, as we can easily determine that Duck Pond is a core concept
(since the Guide for Field Surveyors defines the things that surveyors need to record).
Similarly, Pond is also a core concept, and we would expect to find this defined else-
where in the guide. Habitat and Duck, however, are not core concepts, so we would
not expect Merea Maps to provide a detailed description, and the glossary will merely