One way of conceptualizing the stem completion task
for modeling purposes is as a one-to-many mapping,
because one stem input can be completed with many
different possible words. A related priming paradigm
that makes this one-to-many mapping even more ex-
plicit is the homophone task used by Jacoby and With-
erspoon (1982). Homophones are two words that have
the same pronunciation but different spelling. Partic-
ipants in this task were primed with one homophone
(e.g., “Name a musical instrument that uses a reed”),
and were later asked (by spoken instruction) to spell
the critical word (e.g., “Spell the word /rēd/,” where
/rēd/ is the phonetic representation of the pronunci-
ation of both spellings). The input is the ambiguous
pronunciation, and the output is one of the two possible
spellings. The behavioral result is that participants pro-
duce the primed spelling more frequently than a control
group who have not been primed.
The model we will explore next simulates this one-
to-many mapping paradigm by learning to associate two
different output patterns with a given input pattern. For
simplicity, we use random distributed patterns for the
input and output patterns. An initial period of slow
training allows the network to acquire the appropriate
associations (i.e., to simulate the subject's prior lifelong
experience that results in the relevant knowledge about
homophones and their spellings). One can think of this
initial training as providing the semantic memory for
the network, so that on any given trial the network pro-
duces one of the two appropriate outputs in response to
the input pattern.
Training is followed by a testing phase in which the
network is presented with one particular association of
a given input, and then tested to see which word it will
produce for that input. We will see that a single trial
of learning, at exactly the same learning rate and us-
ing the same learning mechanisms that enabled the net-
work to acquire the semantic information initially, re-
sults in a strong bias toward producing the primed out-
put. Thus, this model simulates the observed behavioral
effects and shows that long-term priming can be viewed
as simply a natural consequence of the same slow learn-
ing processes that establish cortical representations in
the first place.
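To make this idea concrete, the following is a deliberately simplified sketch of weight-based priming, not the Leabra model in the simulator: a small backpropagation network (hypothetical layer sizes and learning rate) is slowly trained on both output patterns for a single input, then given one extra trial on output a at the same learning rate, and we check which target the output is now closer to.

# Minimal, hypothetical sketch of weight-based priming (not wt_priming.proj.gz):
# slow training on both associations, then a single priming trial on target a.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 20, 30, 20          # illustrative sizes
W1 = rng.normal(0, 0.1, (n_in, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_out))

def forward(x):
    h = np.tanh(x @ W1)
    y = 1.0 / (1.0 + np.exp(-(h @ W2)))   # sigmoid output
    return h, y

def train_step(x, target, lr):
    global W1, W2
    h, y = forward(x)
    delta_out = (y - target) * y * (1 - y)        # output-layer error signal
    dW2 = np.outer(h, delta_out)
    dh = delta_out @ W2.T
    dW1 = np.outer(x, dh * (1 - h ** 2))
    W2 -= lr * dW2
    W1 -= lr * dW1

# One random distributed input with two possible output patterns (a and b)
x = rng.integers(0, 2, n_in).astype(float)
out_a = rng.integers(0, 2, n_out).astype(float)
out_b = rng.integers(0, 2, n_out).astype(float)

lr = 0.05                         # same slow rate for training and priming
for epoch in range(500):          # "lifelong" training on both associations
    train_step(x, out_a, lr)
    train_step(x, out_b, lr)

def bias_toward_a():
    _, y = forward(x)
    return np.sum((y - out_b) ** 2) - np.sum((y - out_a) ** 2)  # > 0 favors a

print("bias toward a before priming:", bias_toward_a())
train_step(x, out_a, lr)          # a single priming trial on the 'a' output
print("bias toward a after priming: ", bias_toward_a())

The point of the sketch is simply that one ordinary learning trial, at the same slow rate used throughout training, measurably shifts the network's output toward the primed pattern.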
Figure 9.1: Network for exploring weight-based (long-term)
priming, with input, hidden, and output layers. Two different
output patterns are associated with each input pattern.
Exploring the Model
Open the project wt_priming.proj.gz in
chapter_9 to begin.
Notice that the network has a standard three-layer
structure, with the input presented at the bottom and the
output produced at the top (figure 9.1).
Press View, EVENTS in the wt_prime_ctrl control
panel.
You will see an environment view with 6 events
shown (figure 9.2). For each event, the bottom pattern
represents the input and the top represents the output.
As you should be able to tell, the first set of 3 events and
the second set of 3 events have the same set of 3 input
patterns, but different output patterns. The names of the
events reflect this, so that the first event has input pat-
tern number 0, with the first corresponding output pat-
tern (labeled a), so it is named 0_a. The fourth event
has this same input pattern, but the second correspond-
ing output pattern (labeled b), so it is named 0_b. The
environment actually contains a total of 13 different in-
put patterns, for a total of 26 input-output combinations
(events).
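A hypothetical reconstruction of this environment structure (pattern sizes are illustrative and need not match the actual layer sizes in the project) might look like the following: 13 distinct random distributed input patterns, each paired with two different random output patterns, giving 26 named events.

# Hypothetical sketch of the environment: 26 events named "0_a" ... "12_b"
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, n_inputs = 20, 20, 13        # illustrative sizes

events = {}
for i in range(n_inputs):
    inp = rng.integers(0, 2, n_in).astype(float)       # shared input pattern
    for label in ("a", "b"):
        out = rng.integers(0, 2, n_out).astype(float)   # distinct output pattern
        events[f"{i}_{label}"] = (inp, out)

print(len(events), "events:", list(events)[:4], "...")   # 26 events: 0_a, 0_b, 1_a, 1_b ...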
Now you can iconify this environment window.
First, we will train the network using the standard
combination of Hebbian and error-driven learning as
developed in chapters 4-6.
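As a reminder of what that combination involves, the following is a hedged sketch of the per-connection weight update: a weighted mix of a CPCA-style Hebbian term and a CHL-style error-driven term, in the spirit of the rules developed in earlier chapters (the parameter names hebb_mix and lrate here are illustrative and may not match the simulator's own names).

# Sketch of a combined Hebbian + error-driven weight change for one
# sending activation x and receiving activation y (minus/plus phases).
def combined_dwt(x_minus, y_minus, x_plus, y_plus, w, hebb_mix=0.01, lrate=0.01):
    hebb = y_plus * (x_plus - w)                     # CPCA-style Hebbian term
    err = (x_plus * y_plus) - (x_minus * y_minus)    # CHL-style error-driven term
    return lrate * (hebb_mix * hebb + (1.0 - hebb_mix) * err)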