because there is a relatively small number of ways of asking questions,
and therefore it is much easier to match the form of the user's input to
the most appropriate question (and answer) in a database.
In the mid-1960s research on NLP in America was dealt a severe
blow. Strangely enough, this appears to have been a direct result of the
publicity engendered in January 1954 by the demonstration in New York of
the Georgetown University/IBM translation program, 6 which led to the
spending of some $20 million of government funds on Machine Trans-
lation research in the U.S.A. With hindsight, the Georgetown system,
which was incredibly crude, never had a hope of translating any but the
most carefully chosen texts. But U.S. government expectations were so
high that when the much expected (or hoped-for) advanced Machine
Translation systems failed to materialize, the U.S. government's Auto-
matic Language Processing Advisory Committee (ALPAC) produced a
report on the results of government funding in this field, in which they
pointed out that there had been no machine translation of general scien-
tific text, “and none is in immediate prospect”. U.S. funding for Machine
Translation was promptly curtailed, with the knock-on effect of halting
most other NLP research, both in the U.S.A. and in other countries. For
several years NLP research stagnated somewhat, but not completely.
After the ELIZA era the next generation of NLP programs, in the
late 1960s, was based on a type of database, called a semantic memory,
of meanings and general facts relating to words. These systems could
retrieve from their memory text structures that contained specific words
or phrases, so when a user employed one of these words or phrases the
system would “know” something about it. But still the programs had
absolutely no genuine understanding of what was said to them or what
they were saying in reply.
In the early 1970s there was a move towards systems based on au-
tomatic parsing, breaking down a sentence into its component parts of
speech in order to identify the function of each part and how the various
parts are related to each other syntactically. Much of this research was in-
spired by Noam Chomsky's work on grammars, and buoyed by the hope
that parsing would reveal the structure of a sentence which, in turn,
would assist a program in getting at its meaning. The most successful of
these attempts was the SHRDLU program 7 developed by Terry Wino-
6 See the section “The Start of the Modern Age of Machine Translation” in Chapter 2.
7 The name SHRDLU comes from the seventh through twelfth most frequently occurring letters
in English text, the first six being ETAOIN.