One outstanding problem for mathematical analysis
is caused by the nature of the inhibitory competition
between units within a layer. The two easily analyzed
extremes for this type of competition either produce
a single winner-take-all localist representation, or employ a noncompetitive constraint that enters into each unit's activation function or learning rule completely independently of the other units. In contrast, the kWTA
function produces complex competitive and cooperative
dynamics in the resulting sparse distributed representa-
tion, which we regard as essential aspects of cortical
cognition. However, these complex interactions among units render the algorithm analytically intractable, because of the combinatorial explosion involved in treating them (analogous to the n-body problem in physics).
Future research will hopefully make advances in de-
veloping useful approximations or other methods that
can enable such analyses to go forward, without sac-
rificing the unique and essential virtues of the kWTA
function.
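The contrast between a single winner-take-all unit and a sparse kWTA pattern can be illustrated with a small sketch. This is hypothetical code for illustration only, not the book's actual kWTA implementation, which computes inhibition from ongoing activation dynamics rather than an explicit sort:

```python
def kwta(net_inputs, k):
    """k-winners-take-all sketch (illustrative, not the book's algorithm).

    An inhibition threshold is placed between the k-th and (k+1)-th
    strongest net inputs, so exactly k units stay above threshold. The
    result is a sparse distributed pattern, unlike the single active
    unit of a pure winner-take-all scheme. Assumes 0 < k < len(net_inputs).
    """
    ranked = sorted(net_inputs, reverse=True)
    # threshold halfway between the k-th and (k+1)-th strongest inputs
    theta = 0.5 * (ranked[k - 1] + ranked[k])
    return [x - theta if x > theta else 0.0 for x in net_inputs]

# five units competing, with the two strongest surviving inhibition
acts = kwta([0.9, 0.2, 0.7, 0.4, 0.8], k=2)
```

Even in this simplified form, each unit's fate depends on the net inputs of all the others through the shared threshold, which hints at why the interactions resist term-by-term analysis.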
12.4.2
Error Signals

Chapter 5 presented biological mechanisms that could implement error-driven task learning, and showed how models could learn on the basis of such mechanisms. However, these models simply imposed the minus and plus phase structure required for learning, and provided target (outcome) patterns in the output layer. An open challenge is to demonstrate how expectation and outcome representations actually arise naturally in a simple perceptual-motor system operating within a simulated environment, particularly when the perception of the outcome happens through the same layers that represented the expectation. A second challenge is to address how the system knows when it is in the plus phase, so that it can perform learning then. Although the resolution of these challenges awaits further modeling and empirical work, there is evidence in the brain for the kind of phase-switching proposed to underlie error-driven learning, as well as for signals that might indicate when to learn.

The issue of when to learn is probably related to dopamine. We examined its role in driving learning based on differences between expected and obtained reward in chapters 6 and 11. Relating these findings back to basic error-driven learning requires that the dopamine signal occur for any difference in expected versus obtained outcome, not only reward. Electrophysiological recording studies should be able to test these ideas. Finally, a growing body of evidence suggests that the anterior cingulate cortex is involved in detecting "errors" (e.g., Gehring et al., 1993); it thus seems likely that this brain area plays an important role in error-driven learning, but its exact role remains to be specified (Carter et al., 1998).

12.4.3
Regularities and Generalization

An early and enduring criticism of neural networks is that they are just rote memorizers in the tradition of associationism or behaviorism, and are thus incapable of the kind of rule-like systematic behavior sometimes characteristic of human cognition (e.g., Pinker & Prince, 1988; Marcus, 1998).

Some of these critiques fail to appreciate the basic points addressed in chapter 7: networks generalize by systematically recombining existing representations (cf. Marcus, 1998), and by forming such representations at a level of abstraction that naturally accommodates subsequent novel instances. These processes allow neural networks to capture many aspects of human generalization. Particularly relevant examples include the demonstration by Hinton (1986) (see also chapter 6) that networks can form systematic internal re-representations that go beyond the surface structure of a problem. Two of the models in the language chapter (10) specifically demonstrate that a neural network can simulate human generalization performance in pronouncing nonwords, and in overregularizing irregular past-tense inflections.

Some of the generalization critiques stem from the present limitations of neural network models in dealing with higher-level cognitive function. The applicability of such critiques may be somewhat narrow: many cases of systematic, rule-like processing are not the result of higher-level, deliberate, explicit rules, but can instead be readily explained in terms of basic neural network principles, and so generalization abilities can