Barnes & Underwood, 1959), where A represents one
set of words that are associated with two different sets
of other words, B and C. For example, the word window
will be associated with the word reason in the AB list,
and then window will be associated with locomotive on
the AC list. After studying the AB list of associates,
subjects are tested by asking them to give the appropri-
ate B associate for each of the A words. Then, subjects
study the AC list (often over multiple iterations), and
are subsequently tested on both lists for recall of the
associates after each iteration of learning the AC list.
Although subjects do exhibit some level of interference
on the initially learned AB associations as a result of
learning the AC list, they still remember a reasonable
percentage (see figure 9.4 for representative data).
McCloskey and Cohen (1989) tried to get a standard
backpropagation network to perform this AB-AC list
learning task and found that the network suffered from
what they described as catastrophic interference. A
comparison of typical human data with the network's
performance is shown in figure 9.4. Whereas human
performance goes from 100 percent correct recall on
the AB list immediately after studying it to roughly 60
percent after learning the AC list, the network immedi-
ately drops to 0 percent recall well before the AC list is
learned. In the model we explore here, we start by repli-
cating this catastrophic interference effect in a standard
cortical network like that used to model long-term prim-
ing in the previous section. However, instead of just
concluding that neural networks are not good models of
human cognition (as McCloskey & Cohen, 1989, did),
we will explore how a few important parameters can af-
fect the level of interference. By understanding these
parameters and their consequences, we will gain further
insight into some of the tradeoffs involved in learning
and memory.
The original catastrophic interference finding has in-
spired a fair amount of subsequent research (e.g., Ko-
rtge, 1993; French, 1992; Sloman & Rumelhart, 1992;
McRae & Hetherington, 1993), much of which is con-
sistent with the basic idea that interference results from
the re-use of the same units (and weights) to learn dif-
ferent associations. After learning one association with
a given set of weights, the subsequent weight changes
made to learn a different association tend to undo the
previous learning.

[Figure 9.4 appears here: percent correct recall (0-100) of the AB and AC list items, plotted against learning trials on the AC list, for humans (panel a) and the model (panel b).]
Figure 9.4: Human and model data for AB-AC list learning.
a) Humans show some interference for the AB list items as
a function of new learning on the AC list items. b) Model
shows a catastrophic level of interference. (Data reproduced
from McCloskey & Cohen, 1989.)

This will happen any time shared
weights are sequentially used to learn different, incom-
patible associations (e.g., two different locations for
where one's car is parked, or the two different associates
(B and C) for the A words in the AB-AC task).
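The core of the problem can be reduced to a single shared parameter pulled toward incompatible targets (a deliberately minimal sketch; the target values 0.9 and 0.1 are arbitrary stand-ins for the B and C associations):

```python
def train_to(w, target, steps=100, lr=0.3):
    """Gradient descent on the squared error 0.5 * (w - target)**2."""
    for _ in range(steps):
        w -= lr * (w - target)
    return w

# Sequential learning: the second association undoes the first.
w = train_to(0.0, 0.9)       # learn the "B" target: w ~= 0.9
w = train_to(w, 0.1)         # learn the "C" target: w ~= 0.1, B is gone

# Interleaved learning: small alternating steps settle on a compromise.
v = 0.0
for _ in range(100):
    v = train_to(v, 0.9, steps=1)
    v = train_to(v, 0.1, steps=1)
# v ends up between the two targets rather than at either extreme
```

With only one shared weight, interleaving can at best find a compromise; a network with many weights can instead assign different units to the different associations, which is exactly the remedy discussed next.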
There are two different ways of avoiding this kind
of interference: (1) have different units represent the
different associations, or (2) perform slow interleaved
learning, allowing the network to shape the weights
over many repeated presentations of the different as-
sociations in such a way as to accommodate their dif-
ferences (e.g., as was done in the initial training of the