(1959). These efforts reached their apex in Hayes's “Naive Physics Manifesto,”
which called for parts of human understanding to be formalized as first-order logic.
Although actual physics was best understood using mathematical techniques such
as differential equations, Hayes conjectured that most of the human knowledge
of physics, such as “water must be in a container for it not to spill,” could be
conceptualized better in first-order logic (1979). Hayes took formalization as a
grand long-term challenge for the entire AI community to pursue: “we are never
going to get an adequate formalization of common sense by making short forays
into small areas, no matter how many of them we make” (Hayes 1979). While
many researchers took up Hayes's grand challenge in various domains, a number of
insidious problems soon emerged, chiefly the limited expressivity of first-order
logic and the undecidability of its inference. In particular, first-order
formalizations were viewed as not expressive enough, being unable to cope with
temporal reasoning as shown by the Frame Problem, and so had to be extended with
fluents and other techniques (McCarthy and Hayes 1969).
Since the goal of artificial intelligence was to create an autonomous human-level
intelligence, another central concern was that predicate calculus did not match very
well with how humans actually reasoned. For example, humans often use default
reasoning, and various amendments must be made for predicate calculus to support
this (McCarthy 1980). Further efforts were made to improve first-order logic with
temporal reasoning to overcome the Frame Problem, as well as the use of fuzzy
and probabilistic logic to overcome issues brought up by default reasoning and the
uncertain nature of some knowledge (Koller and Pfeffer 1998). Yet as predicted
by Hubert Dreyfus, it seemed none of these formal solutions could solve the
fundamental epistemological problem that all knowledge was in front of an immense
background of a world that itself seemed to resist formalization (Dreyfus 1979).
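The nonmonotonicity that troubled these formalisms can be seen in the classic "birds fly" example often used to motivate default reasoning. The sketch below, in Python, is purely illustrative and not drawn from any of the cited systems: in classical first-order logic, adding a new fact can never retract an earlier conclusion, whereas a default rule ("birds fly unless shown otherwise") allows exactly that.

```python
# A minimal sketch of default (nonmonotonic) reasoning.
# All names here are illustrative assumptions, not from the text.

def flies(animal, facts):
    """Default rule: a bird flies unless it is known to be abnormal."""
    return ("bird", animal) in facts and ("abnormal", animal) not in facts

facts = {("bird", "tweety")}
assert flies("tweety", facts)        # default conclusion: Tweety flies

facts.add(("abnormal", "tweety"))    # new fact: Tweety is a penguin
assert not flies("tweety", facts)    # earlier conclusion is retracted
```

The second assertion is what classical predicate calculus cannot express: its entailments only grow as axioms are added, which is why amendments such as McCarthy's circumscription were proposed.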
Under mounting criticism from its own former champions such as McDermott,
first-order predicate calculus was increasingly abandoned by those in the field
of knowledge representation (1987). McDermott pointed out that formalizing
knowledge in logic requires that all knowledge be formalized as a set of axioms and
that “it must be the case that a significant portion of the inferences we want...are
deductions, or it will simply be irrelevant how many theorems follow deductively
from a given axiom set” (1987). McDermott found that in practice neither can
all knowledge be formalized and that even given some fragment of formalized
knowledge, the inferences drawn are usually trivial or irrelevant (1987). Moving
away from first-order logic, the debate focused on what was the most appropriate
manner for AI to model human intelligence. Some researchers championed a
procedural view of intelligence that regarded the representation as itself irrelevant if
the program could successfully solve some task given some input and output. This
contrasted so sharply with earlier attempts to formalize human knowledge that
the controversy became known as the declarative versus procedural debate. A
champion of procedural semantics,
Terry Winograd stated that “the operations on symbol structures in a procedural
semantics need not correspond to valid logical inferences about the entities they
represent” since “the symbol manipulation processes themselves are primary, and
the rules of logic and mathematics are seen as an abstraction from a limited set of
them” (1976). While the procedural view of semantics first delivered impressive