the argument would seem to have been won by Hayes. Still, there is reason to pause
to consider the possibility that Berners-Lee is correct. First, while his notion may
seem counter to 'common-sense' within formal logic, it should be remembered that
as far as practical results are concerned, the project of logic-based modelling of
common-sense knowledge in classical artificial intelligence, inaugurated by Hayes,
is commonly viewed as a failure by current researchers in AI and cognitive
science (Wheeler 2005). In contrast, despite the earlier and eerily similar arguments
that Berners-Lee had with original hypertext academic researchers about broken
links and with the IETF about the impossibility of a single naming scheme for the
entire Internet, the Web is without doubt an unparalleled success. While
Berners-Lee's intuitions may often seem wrong by academic lights, history has
repeatedly proven him right, so his pronouncements should be taken seriously.
The Identity Crisis is not a conflict between merely two differing individual
opinions, but between two entire disciplines: the nascent discipline of
'Web Science' as given by the principles of Web architecture, and that of knowledge
representation in AI and logic (Berners-Lee et al. 2006). Berners-Lee's background
is in Internet standardization bodies like the IETF, and it is primarily his
intuitions that lie behind Web architecture. Hayes, whose work in logic jumpstarted
the field of knowledge representation in artificial intelligence, should be taken
equally seriously. If two entire fields, which have joined common cause in the
Semantic Web, are at odds, then trouble at the level of theory is afoot.
Trouble at the level of theory invariably causes trouble in practice, so this
disagreement would not be nearly as worrisome were the Semantic Web itself not in
such a state of perpetual disrepair, making it practically unusable. In a manner disturbingly
similar to classical artificial intelligence, the Semantic Web is always thought of
as soon-to-be arriving, the 'next' big thing, but its actual uses are few and far
between. The reason given by Semantic Web advocates is that the Semantic Web is
suffering from simple engineering problems, such as a lack of some new standard,
some easily-accessible list of vocabularies, or a dearth of Semantic Web-enabled
programs. That the Semantic Web has not yet experienced the dizzying growth of
the original hypertext Web, even after an even longer period of gestation, points
to the fact that something is fundamentally awry. The root of the problem is the
dependence of the Semantic Web on using URIs as names for things that are not
accessible from the Web.
Far from being a mandarin metaphysical pursuit, this problem is the very first
practical issue one encounters as soon as one wants to actually use the Semantic
Web. If an agent receives a graph in RDF, the agent should be able to determine
an interpretation for it. The inference procedure itself may help with this problem,
but it may instead make it worse by simply producing more uninterpretable RDF statements. The
agent could employ the follow-your-nose algorithm, but what information, if any,
should be accessible at these Semantic Web-enabled URIs? If a user wants to add
some information to the Semantic Web, how many URIs should they create? One for
the representation, and another for the referent the representation is about? Should
the same URI for the Eiffel Tower itself be the one that is used to access a web-page
about the Eiffel Tower?
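To make the practical bite of these questions concrete, the following is a minimal sketch of the follow-your-nose step, written in Python with the rdflib library; the URIs and the data assumed to be served at them are illustrative inventions, not anything given in the text above. The sketch also illustrates the 'hash URI' convention, under which a URI with a fragment names the thing itself while the same URI without the fragment names the retrievable web-page about it.

```python
# Illustrative sketch only: the URIs are hypothetical and we assume the
# server publishes RDF at them via HTTP content negotiation.
from rdflib import Graph, URIRef

# Under the 'hash URI' convention the fragment names the thing itself...
THING = URIRef("http://example.org/doc/eiffel-tower#it")
# ...while the fragment-less URI names the retrievable document about it.

def follow_your_nose(thing: URIRef) -> Graph:
    """Dereference the document behind a hash URI and parse its RDF."""
    # An HTTP GET never transmits the fragment, so strip it to obtain
    # the URI of the web-page that can actually be accessed.
    doc = str(thing).split("#", 1)[0]
    g = Graph()
    g.parse(doc)  # rdflib fetches the location and parses the RDF served
    return g

g = follow_your_nose(THING)
# Triples mentioning THING are this URI owner's answer to the question of
# what information should be accessible at a Semantic Web-enabled URI.
for p, o in g.predicate_objects(subject=THING):
    print(p, o)
```

The main alternative to hash URIs, the W3C's 303 'See Other' redirect, answers the same two-URI question at the protocol level instead, at the cost of an extra HTTP round-trip per dereference.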