the cue or in the opposite region of space, and the subject must respond (e.g., by pressing a key on the computer) whenever they detect the onset of the target stimulus. The target is detected significantly faster in the cued location, and significantly slower in the noncued location, relative to a baseline of target detection without any cues at all. Thus, the processing of the cue competes with target detection when they are in different locations, and facilitates it when they are in the same location. All of this happens faster than one can move one's eyes, so there must be some kind of internal (“covert”) attention being deployed as a result of processing the cue stimulus. We will see in section 8.5 that these results, and several other related ones, can be accounted for by a simple model that has competition between neurons (as mediated by the inhibitory interneurons).
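To make the competition idea concrete, here is a minimal sketch, not the model developed in section 8.5; the two-unit setup, the Python framing, and all parameter values are assumptions made purely for illustration. Two location units share a pooled source of inhibition, and the residual activation left behind by the cue speeds a target at the cued location toward the detection threshold while slowing one at the noncued location.

    import numpy as np

    def detect_time(cue_loc=None, target_loc=0, thresh=0.8, dt=0.1, max_steps=500):
        act = np.zeros(2)                 # activations of the two location units
        if cue_loc is not None:
            act[cue_loc] = 0.3            # residual (covert) activation left by the cue
        inp = np.zeros(2)
        inp[target_loc] = 1.0             # target input arrives at one location
        for step in range(max_steps):
            inhib = 0.6 * act.sum()       # pooled inhibition (the "interneurons")
            act += dt * (inp - inhib - 0.2 * act)   # leaky integration with competition
            act = np.clip(act, 0.0, 1.0)
            if act[target_loc] >= thresh:
                return step               # "reaction time" in integration steps
        return max_steps

    baseline = detect_time(cue_loc=None)
    valid = detect_time(cue_loc=0)        # cue and target at the same location
    invalid = detect_time(cue_loc=1)      # cue on the opposite side from the target
    print(valid, baseline, invalid)       # expect valid < baseline < invalid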
1.6.5 Learning

The well-worn nature versus nurture debate on the development of human intelligence is inevitably decided in terms of both. Thus, both the genetic configuration of the brain and the results of learning make important contributions. However, this fact does nothing to advance our understanding of exactly how genetic configuration and learning interact to produce adult human cognition. Attaining this understanding is a major goal of computational cognitive neuroscience, which is in the unique position of being able to simulate the kinds of complex and subtle interdependencies that can exist between certain properties of the brain and the learning process.

In addition to the developmental learning process, learning occurs constantly in adult cognition. Thus, if it were possible to identify a relatively simple learning mechanism that could, with an appropriately instantiated initial architecture, organize the billions of neurons in the human brain to produce the whole range of cognitive functions we exhibit, this would obviously be the “holy grail” of cognitive neuroscience. For this reason, this text is dominated by a concern for the properties of such a learning mechanism, the biological and cognitive environment in which it operates, and the results it might produce. Of course, this focus does not diminish the importance of the genetic basis of cognition. Indeed, we feel that it is perhaps only in the context of such a learning mechanism that genetic parameters can be fully understood, much as the role of DNA itself in shaping the phenotype must be understood in the context of the emergent developmental process.
A consideration of what it takes to learn reveals an
important dependence on gradedness and other aspects
of the biological mechanisms discussed above. The
problem of learning can be considered as the problem of
change . When you learn, you change the way that infor-
mation is processed by the system. Thus, it is much eas-
ier to learn if the system responds to these changes in a
graded, proportional manner, instead of radically alter-
ing the way it behaves. These graded changes allow the
system to try out various new ideas (ways of process-
ing things), and get some kind of graded, proportional
indication of how these changes affect processing. By
exploring lots of little changes, the system can eval-
uate and strengthen those that improve performance,
while abandoning those that do not. Thus, learning
is very much like the bootstrapping phenomenon de-
scribed with respect to processing earlier: both depend
on using a number of weak, graded signals as “feelers”
for exploring possibly useful directions to proceed fur-
ther, and then building on those that look promising.
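As a loose illustration of this exploration of many small changes (a sketch of the general idea only, not the learning mechanism developed later in the text; the synthetic task and all parameters are made up for this example), consider a system that repeatedly tries a small random adjustment to its weights, checks a graded error measure, and keeps only the adjustments that help:

    import numpy as np

    rng = np.random.default_rng(0)

    def error(w, x, y):
        # graded, proportional indication of how badly the system is doing
        return np.mean((x @ w - y) ** 2)

    # a small synthetic task, assumed purely for illustration
    x = rng.normal(size=(20, 3))
    w_true = np.array([0.5, -1.0, 2.0])
    y = x @ w_true

    w = np.zeros(3)
    for step in range(3000):
        trial = w + 0.02 * rng.normal(size=3)    # a weak, graded "feeler"
        if error(trial, x, y) < error(w, x, y):  # did the little change help?
            w = trial                            # build on it; otherwise abandon it
    print(np.round(w, 2))                        # ends up close to w_true

The particular procedure matters less than the shape of the feedback: every small change produces a proportional change in the error, so the system always has some indication of which directions are promising.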
None of this kind of bootstrapping is possible in a discrete system like a standard serial computer, which often responds catastrophically to even small changes. Another way of putting this is that a computer program typically only works if everything is right: a program that is missing just one step typically provides little indication of how well it would perform if it were complete. The same thing is true of a system of logical relationships, which typically unravels into nonsense if even just one logical assertion is incorrect. Thus, discrete systems are typically too brittle to provide an effective substrate for learning.
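By way of contrast, here is a deliberately contrived illustration (my example, not one from the text): dropping a single step from an ordinary program rarely yields something that is proportionally worse; it usually yields output that is simply wrong, with no graded signal of how close the program is to being correct.

    def insertion_sort_missing_step(xs):
        out = list(xs)
        for i in range(1, len(out)):
            j = i
            while j > 0 and out[j - 1] > out[j]:
                # the swap that actually moves out[j] into place is missing
                j -= 1
        return out

    print(insertion_sort_missing_step([3, 1, 2]))  # [3, 1, 2]: not partially sorted, just wrong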
However, although we present a view of learning that is dominated by this bootstrapping of small changes idea, other kinds of learning are more discrete in nature. One of these is a “trial and error” kind of learning that is more familiar to our conscious experience. Here, there is a discrete “hypothesis” that governs behavior during a “trial,” the outcome of which (“error”) is used