the issue of what kind of knowledge structures and processes a human being
must have to understand the meaning of natural language. Since the meaning
of a sentence is not determinable in isolation, but requires relating the sen-
tence to sentences around it, to prior experience, and to some larger context,
the group's work quickly became focused on understanding narratives. In a
series of programs, they developed a theory of the knowledge structures nec-
essary to understand textual narratives. The story-understanding system SAM
(Cullingford 1981) used scripts to capture the notion of stereotyped situations
or contexts. The scripts captured the typical causal connections holding in a
stereotyped situation. The story-understanding system PAM (Wilensky 1981)
and the story-generation system Tale-Spin (Meehan 1977) both incorporated
a notion of the goals held by characters in a narrative and the various means
available to accomplish those goals. Other work in this group included a model
of ideologically biased understanding (Carbonell 1979), the use of themes to
capture aspects of stories more abstract than scripts, plans, and goals alone
can express (Dyer 1983), and a model of narrative memory and reminding
(Kolodner 1984).
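To make the notion of a script concrete, consider a minimal sketch, in modern
Python, of a stereotyped event sequence with role slots. This is purely
illustrative: SAM itself represented scripts in Conceptual Dependency
notation, and the names here (RESTAURANT_SCRIPT, understand, and so on) are
hypothetical. The sketch shows the key inferential use of a script: a story
that mentions only some of the expected events licenses the inference of the
unstated intervening ones.

    # Illustrative sketch of a Schank-style script: a stereotyped event
    # sequence with role slots. This is NOT SAM's actual representation
    # (SAM used Conceptual Dependency structures); names are hypothetical.

    RESTAURANT_SCRIPT = [
        "{customer} enters {restaurant}",
        "{customer} sits down",
        "{customer} reads the menu",
        "{customer} orders {food}",
        "{customer} eats {food}",
        "{customer} pays the bill",
        "{customer} leaves {restaurant}",
    ]

    def understand(story_events, roles, script=RESTAURANT_SCRIPT):
        """Match mentioned events against the script and infer the
        unstated events that the stereotyped sequence implies."""
        expected = [step.format(**roles) for step in script]
        first = min(expected.index(e) for e in story_events)
        last = max(expected.index(e) for e in story_events)
        # Everything between the first and last mentioned step is
        # assumed to have happened, even if the story omits it.
        return expected[first:last + 1]

    roles = {"customer": "John", "restaurant": "a diner", "food": "a burger"}
    story = ["John enters a diner", "John pays the bill"]
    for event in understand(story, roles):
        print(event)
    # Prints the full causal chain, filling in sitting, ordering, eating.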
Work in this area generated an impressive range of systems, particularly
given the comparatively primitive hardware technology to which these early re-
searchers were limited. A pleasant discovery for later researchers rereading
these early reports is a level of charm and wit in system design that is, un-
fortunately, often lacking in contemporary research. Nevertheless, these early narrative
systems fell out of favor, suffering the same fate that befell many 1970s AI
systems. They were intensely knowledge-based, which meant that they func-
tioned only in very limited domains and could be made more general only by
an intensive, and probably ultimately infeasible, knowledge engineering process.
But, perhaps more importantly, as funding for AI dried up during the AI
Winter, AI research became more focused on constrained problems with clear,
measurable results and immediate practical utility. Researchers tried to make
AI more like engineering than like a craft or an art. This required focusing on
problems with discrete, measurable outcomes, where it is possible to say with
certainty that a program achieves or does not achieve a given objective. Yet
such a research agenda precludes work on complex phenomena such as the
human use of narratives, precisely because the complexity of such phenomena
rules out complete, decisively testable models. Schank makes this clear in his
description of the research agenda at Yale
(Schank & Riesbeck 1981: 4):