Finally, being interested in data integration, we often treat correspondences
between elements from different models separately although in principle they
could be represented by ordinary DL axioms. In particular, we often use the
following translation of correspondences to weighted ground predicates of the
Markov logic network:

(e_1, e_2, R, c)  ⟼  (map_R(e_1, e_2), c)
where c is the a-priori confidence value of the correspondence.
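As a concrete illustration of this translation, the following Python sketch turns a correspondence tuple into a weighted ground atom map_R(e_1, e_2) with weight c. The class and function names and the textual output format are illustrative assumptions, not taken from the text or from any particular MLN engine.

```python
from typing import NamedTuple

class Correspondence(NamedTuple):
    e1: str        # entity from ontology O_1
    e2: str        # entity from ontology O_2
    relation: str  # semantic relation R, e.g. "equiv"
    conf: float    # a-priori confidence value c

def to_weighted_ground_atom(corr: Correspondence) -> str:
    """Render the correspondence as a weighted ground atom: 'weight  atom'."""
    atom = f"map_{corr.relation}({corr.e1}, {corr.e2})"
    return f"{corr.conf:.3f}  {atom}"

print(to_weighted_ground_atom(Correspondence("Person", "Human", "equiv", 0.87)))
# -> 0.870  map_equiv(Person, Human)
```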
5 Markov Logic and Ontology Matching
We provide a formalization of the ontology matching problem within the
probabilistic-logical framework. The presented approach has several advantages
over existing methods such as ease of experimentation, incoherence mitigation
during the alignment process, and the incorporation of a-priori confidence values.
We show empirically that the approach is efficient and more accurate than
existing matchers on an established ontology alignment benchmark dataset.
5.1 Problem Representation
Given two ontologies O_1 and O_2 and an initial a-priori similarity σ we apply the
following formalization. First, we introduce observable predicates O to model
the structure of O_1 and O_2 with respect to both concepts and properties. For
the sake of simplicity we use uppercase letters D, E, R to refer to individual
concepts and properties in the ontologies and lowercase letters d, e, r to refer
to the corresponding constants in C. In particular, we add ground atoms of
observable predicates to the set of hard formulas for i ∈ {1, 2} according to the
following rules:
O_i |= D ⊑ E          ⟼  sub_i(d, e)
O_i |= D ⊑ ¬E         ⟼  dis_i(d, e)
O_i |= ∃R.⊤ ⊑ D       ⟼  sub_i(r, d)
O_i |= ∃R⁻¹.⊤ ⊑ D     ⟼  sub_i(r, d)
O_i |= D ⊑ ∃R.⊤       ⟼  sup_i(r, d)
O_i |= D ⊑ ∃R⁻¹.⊤     ⟼  sup_i(r, d)
O_i |= ∃R.⊤ ⊑ ¬D      ⟼  dis_i(r, d)
O_i |= ∃R⁻¹.⊤ ⊑ ¬D    ⟼  dis_i(r, d)
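The following minimal Python sketch shows how these grounding rules could be applied in practice. The helper `entails` stands in for a call to a DL reasoner, and the schematic axiom strings are assumptions for illustration only; they are not part of the formalization above.

```python
def ground_observable_atoms(i, concepts, properties, entails):
    """Collect hard ground atoms of the observable predicates for ontology O_i.

    concepts   -- constants d, e, ... standing for concepts D, E, ...
    properties -- constants r, ...    standing for properties R, ...
    entails    -- callable: entails(axiom) is True iff O_i |= axiom (reasoner stub)
    """
    hard = set()
    # concept-concept structure: subsumption and disjointness
    for d in concepts:
        for e in concepts:
            if entails(f"{d} SubClassOf {e}"):
                hard.add(f"sub_{i}({d}, {e})")        # O_i |= D ⊑ E
            if entails(f"{d} SubClassOf not {e}"):
                hard.add(f"dis_{i}({d}, {e})")        # O_i |= D ⊑ ¬E
    # property-concept structure: domains via R, ranges via the inverse of R
    for r in properties:
        for d in concepts:
            for role in (r, f"inverse {r}"):
                if entails(f"{role} some Thing SubClassOf {d}"):
                    hard.add(f"sub_{i}({r}, {d})")    # ∃R.⊤ ⊑ D
                if entails(f"{d} SubClassOf {role} some Thing"):
                    hard.add(f"sup_{i}({r}, {d})")    # D ⊑ ∃R.⊤
                if entails(f"{role} some Thing SubClassOf not {d}"):
                    hard.add(f"dis_{i}({r}, {d})")    # ∃R.⊤ ⊑ ¬D
    return hard
```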
The knowledge encoded in the ontologies is assumed to be true. Hence, the
ground atoms of observable predicates are added to the set of hard formulas,
making them hold in every computed alignment. The hidden predicates map_c