A major technical difference compared to the Protégé plugin is that the knowledge
base is accessed via SPARQL, since OntoWiki is a SPARQL-based web application. In
Protégé, the current state of the knowledge base is stored in memory in a Java object. As
a result, we cannot easily apply a reasoner on an OntoWiki knowledge base. To over-
come this problem, we use the DL-Learner fragment selection mechanism described in
[56,57,30]. Starting from a set of instances, the mechanism extracts a relevant fragment
from the underlying knowledge base up to some specified recursion depth. Figure 32
provides an overview of the fragment selection process. The fragment has the property
that learning results on it are similar to those on the complete knowledge base. For a
detailed description we refer the reader to the full article.
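To make the mechanism more concrete, the following is a minimal sketch of how such a fragment could be gathered over SPARQL using Apache Jena: starting from a set of seed instances, the outgoing triples of each resource are retrieved and the newly discovered resources become the seeds of the next level, up to a fixed recursion depth. The class name, endpoint handling, and traversal details are illustrative assumptions and do not reproduce DL-Learner's actual implementation.

import java.util.HashSet;
import java.util.Set;

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

// Sketch of SPARQL-based fragment extraction up to a given recursion depth.
public class FragmentExtractionSketch {

    public static Model extractFragment(String endpoint, Set<String> seeds, int depth) {
        Model fragment = ModelFactory.createDefaultModel();
        Set<String> frontier = new HashSet<>(seeds);

        for (int level = 0; level < depth && !frontier.isEmpty(); level++) {
            Set<String> next = new HashSet<>();
            for (String uri : frontier) {
                // Fetch all outgoing triples of the current resource.
                String query = "CONSTRUCT { <" + uri + "> ?p ?o } WHERE { <" + uri + "> ?p ?o }";
                try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, query)) {
                    Model part = qe.execConstruct();
                    fragment.add(part);
                    // Objects of the retrieved triples are expanded on the next level.
                    part.listObjects().forEachRemaining(node -> {
                        if (node.isURIResource()) {
                            next.add(node.asResource().getURI());
                        }
                    });
                }
            }
            frontier = next;
        }
        return fragment;
    }
}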
The fragment selection is only performed for medium to large-sized knowledge
bases. Small knowledge bases are retrieved completely and loaded into the reasoner.
While the fragment selection can cause a delay of several seconds before the learning
algorithm starts, it also offers flexibility and scalability. For instance, we can learn class
expressions in large knowledge bases such as DBpedia in OntoWiki.³²
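As a rough illustration of this policy, the snippet below first asks the endpoint for its triple count and only resorts to fragment extraction above some size threshold; the threshold value and method names are assumptions for the sketch, not the plugin's actual logic.

import java.util.Set;

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.rdf.model.Model;

// Illustrative dispatch between loading a knowledge base completely and
// extracting only a fragment; the threshold is an assumed value.
public class ReasonerInputSketch {

    private static final long FRAGMENT_THRESHOLD = 100_000;

    public static Model loadForReasoning(String endpoint, Set<String> seeds, int depth) {
        String count = "SELECT (COUNT(*) AS ?c) WHERE { ?s ?p ?o }";
        long size;
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, count)) {
            size = qe.execSelect().next().getLiteral("c").getLong();
        }
        if (size <= FRAGMENT_THRESHOLD) {
            // Small knowledge base: retrieve it completely and hand it to the reasoner.
            String all = "CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }";
            try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, all)) {
                return qe.execConstruct();
            }
        }
        // Medium to large knowledge base: extract only the relevant fragment.
        return FragmentExtractionSketch.extractFragment(endpoint, seeds, depth);
    }
}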
Fig. 33. Screenshot of the result table of the DL-Learner plugin in OntoWiki
Figure 33 shows a screenshot of the OntoWiki plugin applied to the SWORE [130]
ontology. Suggestions for learning the class “customer requirement” are shown in
Manchester OWL Syntax. Similar to the Protégé plugin, the user is presented with a table
of suggestions along with their accuracy values. Additional details about the instances
of “customer requirement” covered by a suggested class expression, as well as the
additionally contained instances, can be viewed via a toggle button. The modular design of OntoWiki
allows rich user interaction: Each resource, e.g. a class, property, or individual, can be
viewed and subsequently modified directly from the result table as shown for “design
³² OntoWiki is undergoing extensive development, aiming to support handling such large
knowledge bases. A release supporting this is expected for the first half of 2012.
 