Figure 9.23: The grid log display for the AB̄ task, showing the activations of units in the network over time. Shown are the B trials, where the network makes the AB̄ error.

Infants typically perform better (making fewer AB̄ errors) when there is a shorter delay between hiding and when they can reach.

To simulate this, set the delay field to 1 (the default delay has been 3). Keep the recurrent weights at .47. Then Run and ViewB. Then, try a delay of 5 and Run and ViewB (note that you will need to manually scroll the grid log to see the last parts of the trial).

Question 9.16 (a) What happens on the B trials with those two delays? (b) Explain these effects of delay on the network's behavior.

Finally, there is an interesting effect that can occur with very weak recurrent weights, which do not allow the network to maintain the representation of even the A location very well on A trials. Because the weight changes toward A depend on such maintained activity of the A units, these weight-based representations will be relatively weak, making the network perseverate less to A than it would with slightly stronger recurrent weights.

To see this effect, set delay back to 3, and then reduce the rec_wts to .15, Run, and look at the activations of the units in the B choice trial. Then compare this with the case with rec_wts of .3.

You should be able to see that there is a less strong A response with the weaker recurrent weights, meaning a less strong AB̄ error (and further analysis has confirmed that this is due to the amount of learning on the A trials).

To stop now, quit by selecting Object/Quit in the PDP++Root window.

9.6.3 Summary and Discussion

This model has brought together some of the threads of this chapter and shown how some of the complexities and subtleties of behavior can be explained in terms of interactions between different kinds of memory traces. Instead of postulating inert, canonical knowledge representations that sit like statements printed in a book, waiting to be read by some kind of homunculus, the view of knowledge that emerges from this kind of model is embedded, dynamic, graded, and emergent (Munakata et al., 1997). Within this framework, perseveration and inhibition arise through the basic dynamics of network processing, rather than through a specific inhibition system (in the same way that attention deficits arose from such dynamics rather than through a specific disengage mechanism in chapter 8). Explicit computational models play a central role in developing this perspective, by taking what would otherwise sound like vague handwaving and grounding it in the principles of neural networks.

An important virtue of the kind of neural network model we just explored is that it takes advantage of the task dependency of many different kinds of behavior. The fact that behavior can vary significantly with sometimes seemingly minor task variations can either be treated as a nuisance to be minimized and ignored, or as a valuable source of insight into the underlying processing dynamics.
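The trade-off between these two kinds of memory traces can be made concrete with a minimal toy sketch. This is a hypothetical simplification, not the PDP++/Leabra model itself: the linear activation update, decay rate, learning rate, and number of A trials are all invented for illustration, and only the recurrent weight values (.47, .15) and delays (1, 3, 5) come from the exercises above. The sketch shows activation-based memory (recurrent maintenance of the cued B location) competing with weight-based memory (Hebbian traces accumulated on prior A trials):

```python
def maintained_activity(rec_wt, delay, act=1.0, decay=0.5):
    """Activation of a location unit after `delay` time steps: each step
    it loses a fraction of its activity but partly re-excites itself
    through its recurrent weight (activation-based memory)."""
    for _ in range(delay):
        act = act * (1.0 - decay) + rec_wt * act
    return act

def ab_choice_margin(rec_wt, test_delay, train_delay=3,
                     n_a_trials=4, lrate=0.26):
    """Drive toward A minus drive toward B on a B choice trial.

    Weight-based memory: Hebbian weights toward A accumulate on each A
    trial in proportion to how well A's activity was maintained there.
    Activation-based memory: B's maintained activity after the test delay.
    A positive margin means the network reaches to A (the AB-bar error).
    """
    w_a = n_a_trials * lrate * maintained_activity(rec_wt, train_delay)
    act_b = maintained_activity(rec_wt, test_delay)
    return w_a - act_b

# Delay effect: a short delay lets B's activity survive to the choice,
# while a long delay lets the A-trained weights win.
print(ab_choice_margin(0.47, 1))   # negative: correct reach to B
print(ab_choice_margin(0.47, 3))   # positive: AB-bar error
print(ab_choice_margin(0.47, 5))   # larger margin: stronger error

# Weak recurrent weights also weaken learning on the A trials, so the
# error itself is weaker (smaller margin) than with stronger weights.
print(ab_choice_margin(0.15, 3))   # positive, but smaller than at 0.47
```

In this sketch the weight-based pull toward A is fixed at choice time, while B's activation-based trace decays with the test delay, reproducing the graded pattern explored above: correct reaching at short delays, a stronger AB̄ error at longer delays, and a weaker (but still present) error when weak recurrent weights limit how much is learned on the A trials.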