repository of word-level representations is referred to as the lexicon, and many traditional approaches have assumed that there is a centralized, canonical lexicon in the brain where each word is uniquely represented. In contrast, our basic principles of representation (chapter 7) suggest that word-level representations should be distributed across a number of different pathways specialized for processing different aspects of words.
This idea of a distributed lexicon has been championed by those who model language from the neural network perspective (e.g., Seidenberg & McClelland, 1989; Plaut, 1997). We begin this chapter with a model instantiating this idea, where orthographic (written word forms), phonological (spoken word forms), and semantic (word meaning) representations interact during basic language tasks such as reading for meaning, reading aloud, speaking, and so forth. The orthographic and phonological pathways constitute specialized perceptual and motor pathways, respectively, while the semantic representations likely reside in higher-level association areas. In this model, activation in any one of these areas can produce appropriate corresponding activation in the other areas. Furthermore, interesting dependencies develop among the pathways, as revealed by damage to one or more of the pathways. Specifically, by damaging different parts of this model, we simulate various forms of acquired dyslexia: disorders in reading that can result from brain damage.
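The core idea, that activating a word's pattern in one pathway can reinstate the corresponding pattern in another, can be sketched with a toy Hebbian association between orthographic and phonological patterns. This is only an illustrative sketch, not the chapter's actual model; the pool sizes and patterns below are invented.

```python
# Toy sketch (not the chapter's model): link orthographic and phonological
# word patterns with Hebbian (co-activity) weights, so presenting a word's
# spelling pattern reinstates its sound pattern. All patterns are invented.

def hebbian_weights(pre_patterns, post_patterns):
    """Sum outer products over word pattern pairs: w[i][j] += pre[i] * post[j]."""
    n_pre, n_post = len(pre_patterns[0]), len(post_patterns[0])
    w = [[0.0] * n_post for _ in range(n_pre)]
    for pre, post in zip(pre_patterns, post_patterns):
        for i in range(n_pre):
            for j in range(n_post):
                w[i][j] += pre[i] * post[j]
    return w

def recall(pattern, w):
    """Propagate activation through the weights; threshold at half the peak."""
    net = [sum(pattern[i] * w[i][j] for i in range(len(pattern)))
           for j in range(len(w[0]))]
    top = max(net)
    return [1 if top > 0 and x >= 0.5 * top else 0 for x in net]

# Two toy words: orthographic (4-unit) and phonological (3-unit) patterns.
orth = [[1, 0, 1, 0], [0, 1, 0, 1]]
phon = [[1, 1, 0], [0, 1, 1]]

w_op = hebbian_weights(orth, phon)
print(recall(orth[0], w_op))  # -> [1, 1, 0], the first word's sound pattern
print(recall(orth[1], w_op))  # -> [0, 1, 1], the second word's sound pattern
```

The same kind of associative weights could be trained in the reverse direction, or between either pathway and semantics, which is the sense in which activation anywhere in the distributed lexicon can drive the rest.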
Another theme of the chapter concerns the many regularities in human language, which can be conceived of as obeying rules. However, these rules are rarely absolute; there always seem to be exceptions. From a symbolic, computer-metaphor perspective, one would implement such a system as a set of rules augmented with a lookup table of what to do with the exceptions. In contrast, the dedicated, content-specific nature of neural network representations does not require a formal separation between the processing of regularities and exceptions: the network will automatically process a given input pattern according to its specific interactions with the learned weight patterns. If the input aligns with a regular mapping weight pattern, the appropriate mapping units will be automatically engaged; exception inputs will similarly automatically engage their appropriate mapping units (and many mapping units are shared between regulars and exceptions in a rich distributed representation). Thus, neural network models can allow more parsimonious accounts of the often complex web of regularities and exceptions in language by modeling them with a unified set of principles (Plaut et al., 1996; Seidenberg, 1997).
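One minimal way to see how a single set of weights can serve both regulars and an exception is a toy error-driven (delta-rule) unit. The words, input features, and pronunciation coding below are invented for illustration and are not taken from the chapter's model.

```python
# Toy sketch (invented features, not the chapter's model): one weight set
# learns regular words ("mint", "hint" -> short i) and an exception
# ("pint" -> long i) with no separate rule/exception machinery.

# Input features: onset letter (m, h, p) plus the shared "-int" body.
# Target: 0 = short-i pronunciation (regular), 1 = long-i (exception).
words = {
    "mint": ([1, 0, 0, 1], 0),
    "hint": ([0, 1, 0, 1], 0),
    "pint": ([0, 0, 1, 1], 1),
}

w = [0.0, 0.0, 0.0, 0.0]
bias = 0.0
lr = 0.5

def output(x):
    """Thresholded unit: fires if the weighted input sum is positive."""
    s = bias + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > 0 else 0

# Error-driven (delta rule) training over the whole small corpus.
for _ in range(20):
    for x, target in words.values():
        err = target - output(x)
        for i in range(len(w)):
            w[i] += lr * err * x[i]
        bias += lr * err

for word, (x, target) in words.items():
    print(word, output(x))  # the same weights handle regulars and the exception
```

Because the onset features distinguish "pint" from its regular neighbors, error-driven learning carves out the exception while the shared "-int" feature still participates in every mapping, a miniature version of the shared mapping units described above.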
Our basic distributed lexicon model is elaborated throughout the chapter using more detailed models that focus on subsets of pathways. One extension of our distributed lexicon model explores the mapping between orthography and phonology in greater detail. The visual word perception pathway appears to be located within the ventral object recognition pathway, and can be viewed as a specialized version of object recognition. Thus, we apply the basic principles of visual object recognition from chapter 8 to this model. We focus on the model's ability to generalize its knowledge of the orthography-phonology mapping to the pronunciation of nonwords (e.g., “nust,” “mave”), according to the regularities of the English language. These generalization tests reveal the model's ability to capture the complex nature of these regularities.
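The flavor of such generalization can be sketched with a simple statistical stand-in for what learned weights encode: tally letter-phoneme correspondences from a few word-pronunciation pairs, then recombine them to pronounce a novel nonword. The pronunciation codes below are invented, and the chapter's actual model is far richer than this per-letter scheme.

```python
from collections import Counter, defaultdict

# Toy sketch (invented phoneme codes, not the chapter's trained network):
# learn letter-phoneme correspondences from a few known words, then apply
# them componentially to a nonword never seen in training.
pairs = {"must": "mUst", "rust": "rUst", "nest": "nEst", "mint": "mInt"}

counts = defaultdict(Counter)
for spelling, sound in pairs.items():
    for letter, phoneme in zip(spelling, sound):
        counts[letter][phoneme] += 1

def pronounce(word):
    """Pronounce a (non)word using each letter's most frequent correspondence."""
    return "".join(counts[ch].most_common(1)[0][0] for ch in word)

print(pronounce("nust"))  # nonword "nust" gets the regular pronunciation "nUst"
```

The point of the sketch is only that regular structure extracted from known words transfers automatically to novel inputs; capturing context-sensitive regularities (e.g., how surrounding letters alter a vowel's pronunciation) is exactly what the distributed network model adds.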
Another extension of the distributed lexicon model explores the production of properly inflected verbs in the mapping from semantics to phonological speech output. These inflections alter words to explicitly indicate or mark specific grammatical contrasts such as singular/plural or present/past tense. We focus on the past-tense inflectional system, which has played a large role in the application of neural networks to language phenomena. Developmentally, children go through a period in which they sometimes overregularize, applying the regular past-tense inflection rule (i.e., adding the suffix -ed) to irregular verbs, for example producing goed instead of went. Overregularization has been interpreted as evidence for a rule-based system that overzealously applies its newfound rule. However, neural networks can simulate the detailed pattern of overregularization data, so a separate rule-based system is unnecessary. We will see that the correlational sensitivity of Hebbian learning, combined with error-driven learning, may be important for capturing the behavioral phenomena.
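One way to picture how overregularization could emerge and recede within a single learned system is as competition between an item-specific association (go -> went) and a shared regular mapping whose strength grows with the regular vocabulary, both living in the same network. All numbers below are invented for illustration and are not the book's simulation.

```python
# Toy sketch with invented integer strengths (not the book's simulation):
# the past tense of "go" is produced by whichever learned association is
# currently stronger, the item-specific one (-> "went") or the shared
# regular "-ed" mapping, which many regular verbs strengthen together.

item_strength = 10   # rote association go -> "went", learned early
rule_strength = 0    # shared regular "-ed" mapping

def past_of_go():
    return "went" if item_strength > rule_strength else "goed"

history = []
for epoch in range(15):
    # many regular verbs drive the shared mapping up quickly, to a ceiling
    rule_strength = min(rule_strength + 3, 20)
    # "went" keeps being practiced too, but it is only one word
    item_strength += 1
    history.append(past_of_go())

print(history)  # correct early, "goed" in the middle, correct again late
```

With these made-up growth rates the shared mapping transiently overtakes the item-specific association, producing a U-shaped pattern (correct, then overregularized, then correct again) without any symbolic rule module.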
A third extension of the distributed lexicon model explores the ultimate purpose of language, which is to convey meaning (semantics).
We assume that seman-