These examples showed that the simplifications are reasonable approximations to the more detailed case, but it was also clear that there were differences. However, these simplifications made it possible to explore models that otherwise would have been impractical.

The language chapter (10) provided additional examples of overlapping models. There, we explored a number of phenomena using a simplified model with all three main representations involved in reading, and then used larger, more realistic models to explore detailed aspects of performance in specific pathways. The simplified model made it possible to explore certain broader aspects of behavior and effects of damage in a manageable and more comprehensible manner, and provided a general framework for situating the more detailed models. These examples illustrate the kinds of benefits that multiple overlapping levels of analysis can provide.

In the sections that follow, we discuss various areas where simplifications have been made and more detailed models might be revealing.
Details of Neurobiology

Reading through The Journal of Neuroscience, Brain Research, and other neuroscience journals, one can be overwhelmed with biological details. How is it that many of these biological properties can be largely ignored in our models? One general answer to this question is that we have used powerful simplifications that require lots of biological machinery to actually implement; our simplifications undoubtedly fail to capture all the subtlety of these mechanisms, but perhaps they capture enough of the main effect. Thus, generally speaking, it will be useful to relate the functional properties of these more detailed mechanisms to the simpler abstractions to find out exactly what the differences are and how much they matter. The following may be particularly relevant.

First, the kWTA function is a powerful simplification for activity regulation; a lot of biological machinery is likely necessary to keep neurons firing in the right zone of activation (not too much, not too little). In addition to the basic feedforward and feedback inhibition, the relevant biological machinery probably includes lots of channels along the lines of those discussed in the section on self-regulation in chapter 2, but also factors like cellular metabolism, gene expression, and glia.

Second, most neural network models make dramatic simplifications in the initial wiring of the networks, often simply starting with random connectivity (with or without additional topographic constraints) within a network already constrained to receive particular types of inputs. A huge and largely unsolved problem in biology, not to mention cognitive neuroscience, is to understand how biological structure emerges through a complex sequence of interactions between genetic switches, chemical gradients, surface protein markers, and so on. Probably a significant portion of the initial wiring of the brain derives from this kind of process. The development of the brain, to a much greater degree than that of other organs, is also subject to the influences of experience. We know specific examples in some detail: in the early visual system, for example, random noise coming from the retina plays an important role in configuring the wiring of neurons in V1 (e.g., Shatz, 1996; Miller et al., 1989). Thus, the line between when the setup of the initial configuration ends and learning begins is undoubtedly a fuzzy one, so understanding how the brain gets to its "initial configuration" may be critical to understanding later effects of learning.

Third, we have typically vastly simplified the control of processing and learning. Inputs are presented in a carefully controlled fashion, with the network's activations reset between each input and between phases of input (plus and minus), and learning is neatly constrained to operate on the appropriate information. The real system is obviously not that simple: boundaries between events are not predefined, and activation resetting (if it occurs at all) must be endogenously controlled. Although tests with various simulations show that activation resetting is important for rapid and successful learning with rate-code units, this needs to be explored in the context of spiking models, which generally exhibit less persistence of prior states (hysteresis) than rate-code models. The constant output of activation among interconnected rate-code units reinforces existing activation patterns more strongly than sporadic spiking. If activation resetting proves to be important even for discrete spiking models, then its biological reality and potential implementation should be explored.
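To make the first point concrete, the kWTA idea can be sketched as a toy function. This hard top-k version is illustrative only, much simpler than average-based kWTA implementations: it just silences all but the k most excited units.

```python
import numpy as np

def kwta(net_input, k):
    """Hard k-winners-take-all: the k units with the highest net
    input stay active (clipped to [0, 1]); all others are silenced.
    A toy stand-in for inhibition-mediated activity regulation."""
    act = np.zeros_like(net_input, dtype=float)
    winners = np.argsort(net_input)[-k:]   # indices of the top-k units
    act[winners] = np.clip(net_input[winners], 0.0, 1.0)
    return act

net = np.array([0.2, 0.9, 0.1, 0.7, 0.4])
act = kwta(net, k=2)   # only units 1 and 3 remain active
```

In the brain, something like this outcome would have to be produced by feedforward and feedback inhibition together with the slower regulatory factors listed above.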
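For the second point, the kind of simplified initial wiring used in models can be sketched as random connectivity under a topographic constraint. This toy version uses a 1-D sheet of units; the function name and parameters are illustrative, not from any particular simulator.

```python
import numpy as np

def topographic_random_wiring(n_units, radius, p=0.5, seed=0):
    """Random initial connectivity with a simple topographic
    constraint: unit i may connect to unit j only when the two lie
    within `radius` of each other on a 1-D sheet, and each such
    potential connection is made with probability p."""
    rng = np.random.default_rng(seed)
    conn = np.zeros((n_units, n_units), dtype=bool)
    for i in range(n_units):
        for j in range(n_units):
            if i != j and abs(i - j) <= radius:
                conn[i, j] = rng.random() < p
    return conn

conn = topographic_random_wiring(n_units=8, radius=2)
```

Real developmental wiring, as the text notes, emerges from genetic switches, chemical gradients, and activity-dependent processes rather than from a coin flip per connection.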
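For the third point, a minimal sketch of what activation resetting amounts to in a simulation loop. The dynamics here are a toy sigmoidal rate-code update, not any simulator's actual equations.

```python
import numpy as np

def settle(act, weights, inp, n_steps=20, dt=0.2):
    """Iteratively settle rate-code activations given an input
    pattern (toy sigmoidal dynamics)."""
    for _ in range(n_steps):
        net = weights @ act + inp
        act = act + dt * (1.0 / (1.0 + np.exp(-net)) - act)
    return act

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(4, 4))
events = [rng.normal(size=4) for _ in range(3)]

act = np.zeros(4)
for inp in events:
    act = np.zeros(4)   # the simplification: hard reset between events
    act = settle(act, weights, inp)
# without the reset line, each event's settling would start from the
# previous event's attractor state, i.e., hysteresis
```

The open question raised in the text is who issues that reset in the real system, and whether sparsely spiking units need it at all.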