variant to various transformations of the visual input. No detailed structural model needs to be constructed. Instead, the problem becomes one of a massively parallel many-to-few mapping of images to internal object representations that appears to be considerably more practical.
If we apply these ideas to sentence processing, the network can accumulate constraints from the incoming words and produce a distributed representation that best satisfies these constraints. We explored something very much like this in the semantic representation model from the previous section: the network produced a novel semantic representation that reflected the constraints from all the words being presented. This semantics model is also useful for showing that one can get pretty far without processing any syntactic information. Thus, it is possible that the raw collection of words present makes a large contribution to people's interpretations of sentences, with syntactic structure providing additional constraints. This view is consistent with the considerable empirical evidence that semantic properties of individual words can play an important role in sentence comprehension (MacDonald, Pearlmutter, & Seidenberg, 1994).

These kinds of ideas about a constraint-satisfaction approach to sentence comprehension have been developed by Elman (1990, 1991, 1993) and by St. John and McClelland (1990). Elman's models have emphasized purely syntactic processing, while the St. John and McClelland (1990) model combines both semantic and syntactic constraints into what they referred to as the "Sentence Gestalt" (SG) model. The term Gestalt here comes from the holistic notions of the Gestalt psychologists, aptly capturing the multiple constraint satisfaction notion. The Gestalt representation in the SG model is just the kind of powerful distributed representation envisioned above, capturing both semantic and syntactic constraints. We explore a slightly modified version of the SG model in this section (we have also provided a version of the Elman (1991) model as grammar.proj.gz in chapter_10 in case you want to explore that model on your own).

The temporally extended, sequential nature of sentence-level linguistic structure provides perhaps the greatest challenge to using neural network models. In chapter 6, we found that the context representation in a simple recurrent network (SRN) is sufficiently powerful to enable a network to learn a simple finite state grammar. The SG model uses this same mechanism for its sentence-level processing; indeed, the context representation is the sentence gestalt itself. Thus, if you have not yet read section 6.5 (and specifically section 6.6) in chapter 6, we recommend you do so now. Otherwise, you should still be able to follow the main points, but may not have as clear an understanding of the details.
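To make the copy-back mechanism concrete, here is a minimal sketch of an Elman-style SRN in Python/NumPy. The names, dimensions, and random weights are our own illustrative assumptions, not the actual SG model implementation from the simulation project; the point is only to show how the context layer accumulates a sequence of words into a single distributed state, which is the role the sentence gestalt plays.

```python
# Minimal Elman-style SRN sketch (illustrative; untrained random weights).
import numpy as np

rng = np.random.default_rng(0)

n_words, n_hidden = 40, 25                          # assumed sizes
W_in = rng.normal(0.0, 0.1, (n_hidden, n_words))    # word input -> hidden
W_ctx = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # context -> hidden

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sentence_gestalt(word_indices):
    """Present words one at a time; return the final context state.

    The context layer is simply a copy of the previous hidden state, so
    the state after the last word reflects constraints accumulated from
    the entire word sequence -- the role of the sentence gestalt.
    """
    context = np.zeros(n_hidden)
    for w in word_indices:
        x = np.zeros(n_words)
        x[w] = 1.0                       # one-hot (localist) word input
        hidden = sigmoid(W_in @ x + W_ctx @ context)
        context = hidden                 # copy-back step of the SRN
    return context

gestalt = sentence_gestalt([3, 17, 8])   # indices of a three-word input
```

In a trained network, the weights would be shaped by error-driven learning so that this final state supports answering questions about the event the sentence describes.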
10.7.1 Basic Properties of the Model
The SG model receives input about an environment via sentences. To define the nature of these sentences, we need to specify both the nature of the underlying environment (which provides the deep structure of the semantics) and the way in which this environment is encoded into the surface form of actual sentences (which is a function of both syntax and semantics). We will first discuss the nature of the environmental semantics, followed by a discussion of the syntactic features of the language input. Then, we will discuss the structure of the network and how it is trained.
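As a rough illustration of the deep-structure-to-surface-form distinction, the sketch below represents an underlying event as a role/filler frame and encodes it as a word sequence. The Event fields and the encoding rules here are hypothetical simplifications, not the actual sentence generator used to train the SG model.

```python
# Hypothetical deep structure (event frame) and surface encoding.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Event:
    agent: str                     # who performs the action
    action: str
    patient: Optional[str] = None  # optional roles may be omitted
    location: Optional[str] = None

def to_surface(event: Event) -> List[str]:
    """Encode one underlying event as a surface word sequence."""
    words = [event.agent, event.action]
    if event.patient is not None:
        words.append(event.patient)
    if event.location is not None:
        words += ["in", event.location]
    return words

# The same underlying event could also surface without the location, or
# in a different construction; syntax and semantics jointly fix the form.
print(to_surface(Event("teacher", "eat", "soup", "kitchen")))
# -> ['teacher', 'eat', 'soup', 'in', 'kitchen']
```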
Semantics
In contrast with the previous semantics model, where large amounts of actual text served as the semantic database for learning, the SG model is based on a simple "toy world" that captures something like a child's-eye (and stereotypically sex-typed) view of the world. This world is populated with four people (busdriver (adult male), teacher (adult female), schoolgirl, and a boy who is a baseball pitcher), who perform various actions (eat, drink, stir, spread, kiss, give, hit, throw, drive, and rise) in an environment containing various other objects (spot (the dog), steak, soup, ice cream, crackers, jelly, iced tea, kool aid, spoon, knife, finger, rose, bat (animal), bat (baseball), ball, ball (party), bus, pitcher, and fur) and locations (kitchen, living room, shed, and park).
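For concreteness, the toy-world vocabulary from the preceding paragraph can be written down directly as data. The items are exactly those listed above; the grouping into named categories is our own labeling. Note the deliberate ambiguities (bat, ball, and pitcher each name two different things), which the model must resolve from context.

```python
# Toy-world vocabulary of the SG model's environment (items from the
# text; the category names are our own labels).
PEOPLE = ["busdriver",    # adult male
          "teacher",      # adult female
          "schoolgirl",
          "pitcher"]      # a boy who is a baseball pitcher
ACTIONS = ["eat", "drink", "stir", "spread", "kiss",
           "give", "hit", "throw", "drive", "rise"]
OBJECTS = ["spot",        # the dog
           "steak", "soup", "ice cream", "crackers", "jelly",
           "iced tea", "kool aid", "spoon", "knife", "finger", "rose",
           "bat (animal)", "bat (baseball)",
           "ball", "ball (party)", "bus", "pitcher", "fur"]
LOCATIONS = ["kitchen", "living room", "shed", "park"]
```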
Eighty underlying events take place in this environment that define the interrelationships among the various entities. Each such event can give rise to a large number of different surface forms. For exam-