    <gato rdf:about="#korat"/>
  </owl:distinctMembers>
</owl:AllDifferent>
</rdf:RDF>
With both files, we could execute the following SPARQL query:
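(A minimal sketch of such a query follows; the FROM addresses are hypothetical example URLs standing in for the two files, and the prefixes are the ones used later in this section.)

PREFIX rdfs99: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX vertebrados: <http://www.criado.info/owl/vertebrados_es.owl#>
SELECT ?s
# hypothetical URL of the first OWL file
FROM <http://example.org/siteA/gatos.owl>
# hypothetical URL of the second OWL file
FROM <http://example.org/siteB/gatos.owl>
WHERE { ?s rdfs99:type vertebrados:gato }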
This query obtains the breeds of cats from the two different websites, which have not established any explicit semantic data link between themselves, but whose data a third party (for example, us) has exploited, for instance with Twinkle (a SPARQL query tool), linking the ABoxes with the TBoxes. Rule 6 is thus fulfilled.
To generalise the former query, we obviously cannot write a FROM clause for every OWL file containing cats; "SPARQL endpoints" must be used instead. Assuming that at some URL we had a SPARQL endpoint storing semantic data on cats, fed by a tool such as LDSpider [5], we could issue the following query:
PREFIX rdfs99: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX vertebrados: <http://www.criado.info/owl/vertebrados_es.owl#>
SELECT ?s ?v
WHERE { ?s rdfs99:type vertebrados:gato . ?s rdfs99:type ?v }
This query obtains the list of all the cats found at the different URLs of the Semantic Web that LDSpider has been able to explore and incorporate into a database.
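If that endpoint were to be combined with other sources, a SPARQL 1.1 federated query could address it directly. The sketch below assumes a hypothetical endpoint address (http://example.org/sparql), since no concrete one is named here:

PREFIX rdfs99: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX vertebrados: <http://www.criado.info/owl/vertebrados_es.owl#>
SELECT ?s ?v
WHERE {
  # Hypothetical endpoint URL; SERVICE forwards the enclosed pattern to it
  SERVICE <http://example.org/sparql> {
    ?s rdfs99:type vertebrados:gato .
    ?s rdfs99:type ?v
  }
}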
4 Conclusions
In this article we have reviewed Linked Data and Semantic Web concepts and their problems. A Semantic Web based on the currently accepted four rules operates in deferred time, since it will always lag behind the current Web, which is its source of information. We only have to look at Linked Data platforms such as DBpedia to see that they are updated in successive versions and are therefore not in real time 15 . This view of a Semantic Web lagging behind the current Web appears to be a serious enough obstacle to prevent full implementation.
15 http://dbpedia.org/About
 