groups representing the pathways, and a k value of 2 for the entire layer (the resulting inhibition is the maximum of these two).
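Though the simulator computes this internally, a minimal sketch of the max-of-group-and-layer inhibition might look like the following, assuming a Leabra-style kWTA that places the inhibitory threshold between the k-th and (k+1)-th most excited units, and assuming (for illustration) a k of 1 within each two-unit group; the function names and the q placement parameter are ours, not the simulator's API:

```python
import numpy as np

def kwta_inhib(excit, k, q=0.25):
    """Simplified kWTA: place inhibition between the k-th and (k+1)-th
    strongest excitatory drives (Leabra-style placement; q is illustrative)."""
    s = np.sort(excit)[::-1]               # strongest first
    return s[k] + q * (s[k - 1] - s[k])    # between the k-th and (k+1)-th values

def hidden_layer_inhib(excit, groups, k_group=1, k_layer=2):
    """Each unit's inhibition is the maximum of its group's kWTA and
    the whole layer's kWTA, as described in the text."""
    layer_i = kwta_inhib(excit, k_layer)
    inhib = np.empty_like(excit, dtype=float)
    for idx in groups:                     # idx: indices of one pathway's units
        inhib[idx] = max(kwta_inhib(excit[idx], k_group), layer_i)
    return inhib

# Example: 4 hidden units, two 2-unit groups (color vs. word pathways).
excit = np.array([0.9, 0.3, 0.6, 0.4])
print(hidden_layer_inhib(excit, groups=[[0, 1], [2, 3]]))
```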
The top-down prefrontal cortex (PFC) task units are each connected to the corresponding group of 2 hidden units (i.e., color naming PFC (cn) connects to g and r color hidden units, and word reading PFC (wr) connects to G and R word hidden units). This connectivity assumes that the PFC has a set of representations that differentially project to color naming versus word reading pathways; we discuss this representation issue in a subsequent section. We also simulate the robust maintenance of these PFC units by simply clamping them with external input; we explore the mechanisms of activating and maintaining these representations in the next simulation.
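Schematically, the top-down projections and the clamping can be summarized as follows (an illustrative sketch; the unit labels mirror figure 11.5, and clamp_task is a hypothetical helper, not part of the simulator):

```python
# Each PFC task unit projects only to its own pathway's hidden units.
pfc_to_hidden = {
    "cn": ["g", "r"],   # color naming PFC -> color hidden units
    "wr": ["G", "R"],   # word reading PFC -> word hidden units
}

def clamp_task(acts, task):
    """Simulate robust maintenance by fixing the task unit's activation
    for the whole trial instead of letting it settle."""
    acts[task] = 1.0    # held constant by external input
    return acts
```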
The prepotent nature of word reading is produced by training with a frequency ratio of 3:2 for word reading compared to color naming. Although this ratio likely underestimates the actual frequency difference for reading color words versus naming colors, the inhibitory competition and other parameters of the model cause it to be very sensitive to differences in pathway strength, so that this frequency difference is sufficient to simulate typical human behavior. If the actual frequencies were known, one could easily adjust the sensitivity of the model to use these frequencies. As it is, we opted for default network parameters and adjusted the frequencies.
Because of its simplified nature, training in this model does not actually shape the representations; it only adapts the weight strengths. The 2 word reading training events simply present a color word input (G or R), clamp the word reading PFC unit (wr), and train the network to produce the corresponding output (gr or rd). Similarly, the color naming training events have the color naming PFC unit (cn) active and train the output to the color inputs. Thus, the network never experiences either a conflict or a congruent condition in its training; its behavior in these conditions emerges from the frequency effects.
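Concretely, the four training events can be sketched as below, with the 3:2 ratio expressed as the per-event frequencies used in the project (.3 for each word reading event, .2 for each color naming event); the tuple layout and sampling helper are illustrative, not the environment's actual format:

```python
import random

# (input unit, clamped PFC task unit, target output, frequency)
train_events = [
    ("G", "wr", "gr", 0.3),   # read the word "green"
    ("R", "wr", "rd", 0.3),   # read the word "red"
    ("g", "cn", "gr", 0.2),   # name the color green
    ("r", "cn", "rd", 0.2),   # name the color red
]                             # frequencies sum to 1; 3:2 word vs. color

def sample_training_event():
    """Draw one event in proportion to its frequency."""
    weights = [f for (_, _, _, f) in train_events]
    return random.choices(train_events, weights=weights, k=1)[0]
```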
To record a “reaction time” from the model, we simply measure the number of cycles of settling, with the stopping criterion being that an output unit exceeds an activation value of .7. As is typical when the units in the model represent entire pathways of neurons (i.e., when using a highly scaled-down model), we lower the unit gain (to 50). Otherwise, default parameters are used.
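A minimal sketch of this reaction-time measure, assuming a generic settle-one-cycle interface (update_cycle and output_acts are stand-ins for the simulator's internals, not its real API):

```python
def reaction_time(net, threshold=0.7, max_cycles=250):
    """Settle cycle by cycle; the 'reaction time' is the number of cycles
    until any output unit's activation exceeds the threshold."""
    for cycle in range(1, max_cycles + 1):
        net.update_cycle()                     # one cycle of settling
        if max(net.output_acts()) > threshold:
            return cycle
    return max_cycles                          # no response within the limit
```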
11.3.2 Exploring the Model
Open the project stroop.proj.gz in chapter_11 to begin. You should see the network just as pictured in figure 11.5.
Begin by exploring the connectivity using r.wt.
You will notice that all of the units for red versus green are connected in the way you would expect, with the exception of the connections between the hidden and output units. Although we assume that people enter the Stroop task with more meaningful connections than the random ones we start with here (e.g., they are able to say “Red” and not “Green” when they represent red in the environment), we did not bother to preset these connections here because they become meaningful during the course of training on the task.
Next, let's look at the training environment patterns.
Press View, TRAIN_EVENTS on the stroop_ctrl control panel.
You will see 4 events, 2 for training the word reading
pathway, and 2 for color naming (one event for each
color). The frequency of these events is controlled by a
parameter associated with each event.
Locate the Evt Label menu in the upper left-hand
corner of the environment window, which controls what
is displayed as the label for each event in the window.
Change the selection from Event::name (the name of
the event) to FreqEvent::freq (the frequency of the
event).
Now you can see that the word reading events have a frequency of .3, while the color naming events are at .2 (note that the sum of all frequencies equals 1). This frequency difference causes word reading to be stronger than color naming. Note that by using training to establish the strength of the different pathways, the model very naturally accounts for the MacLeod and Dunbar (1988) training experiments.
Iconify the environment window.
Now, let's train the network.