Neuroscience research supports the idea that Hebbian-like mechanisms operate in neurons in most cognitively important areas of the brain (Bear, 1996; Brown, Kairiss, & Keenan, 1990; Collingridge & Bliss, 1987). However, Hebbian learning is generally fairly weak computationally (as we will see in chapter 5), and suffers from limitations similar to those of the 1960s generation of learning mechanisms. Thus, it has not been used as widely as backpropagation for cognitive modeling, because it often cannot learn the relevant tasks.
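To make the contrast with error-driven learning concrete, the core of a Hebbian update is just a correlation between sending and receiving activity. The following is a minimal sketch in Python; the variable names and learning rate are illustrative, not taken from any particular model in this book:

```python
import numpy as np

def hebbian_update(w, x, y, lrate=0.01):
    """Minimal Hebbian update: strengthen each weight in proportion to the
    product of presynaptic activity x and postsynaptic activity y.
    Sketch only -- real models add normalization or decay to keep weights
    bounded, and note there is no error signal anywhere in this rule."""
    return w + lrate * np.outer(y, x)   # w[j, i] connects input i to unit j

# Example: weights grow wherever input and output happen to be co-active,
# regardless of whether the output was actually correct.
w = np.zeros((2, 3))
x = np.array([1.0, 0.0, 1.0])   # presynaptic (sending) activities
y = np.array([0.5, 1.0])        # postsynaptic (receiving) activities
w = hebbian_update(w, x, y)
```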
In addition to the cognitive (connectionist) and biological branches of neural network research, considerable work has been done on the computational end. It has been apparent that the mathematical basis of neural networks has much in common with statistics, and computational advances have tended to push this connection further. Recently, the Bayesian framework for statistical inference has been applied to develop new learning algorithms (e.g., Dayan, Hinton, Neal, & Zemel, 1995; Saul, Jaakkola, & Jordan, 1996), and more generally to understand existing ones. However, none of these models has yet been developed to the point where it provides a framework for learning that works reliably on a wide range of cognitive tasks while simultaneously being implementable by a reasonable biological mechanism. Indeed, most (but not all) of the principal researchers on the computational end of the field are more concerned with theoretical, statistical, and machine-learning issues than with cognitive or biological ones.
In short, from the perspective of the computational cognitive neuroscience endeavor, the field is in a somewhat fragmented state: modelers in computational cognitive psychology are primarily focused on understanding human cognition without close contact with the underlying neurobiology; biological modelers are focused on information-theoretic constructs or computationally weak learning mechanisms without close contact with cognition; and learning theorists are focused on a more computational level of analysis involving statistical constructs, without close contact with biology or cognition. Nevertheless, we think that a strong set of cognitively relevant computational and biological principles has emerged over the years, and that the time is ripe for an attempt to consolidate and integrate these principles.
1.4 Overview of Our Approach
This brief historical overview provides a useful context for describing the basic characteristics of the approach we have taken in this book. Our core mechanistic principles include both backpropagation-based error-driven learning and Hebbian learning, the central principles behind the Hopfield network for interactive, constraint-satisfaction-style processing, distributed representations, and inhibitory competition. The neural units in our simulations use equations based directly on the ion channels that govern the behavior of real neurons (as described in chapter 2), and our neural networks incorporate a number of well-established anatomical and physiological properties of the neocortex (as described in chapter 3). Thus, we strive to establish detailed connections between biology and cognition, in a way that is consistent with many well-established computational principles.
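As a rough illustration of how error-driven and Hebbian learning can coexist in a single mechanism, one can blend a correlational (Hebbian) term with a delta-rule-style error term in the weight update. The mixing parameter and function below are our own illustrative choices, not the exact formulation developed later in the book:

```python
import numpy as np

def combined_update(w, x, y, y_target, lrate=0.01, hebb_mix=0.1):
    """Sketch of a weight update blending Hebbian and error-driven terms.
    hebb_mix sets the proportion of Hebbian (correlational) learning;
    the remainder is driven by the difference between target and actual
    output. Illustrative only; details differ in the actual algorithm."""
    hebb = np.outer(y, x)              # correlational (Hebbian) component
    err = np.outer(y_target - y, x)    # error-driven (delta-rule) component
    return w + lrate * (hebb_mix * hebb + (1.0 - hebb_mix) * err)
```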
Our approach can be seen as an integration of a number of different themes, trends, and developments (O'Reilly, 1998). Perhaps the most relevant such development was the integration of a coherent set of neural network principles into the GRAIN framework of McClelland (1993). GRAIN stands for graded, random, adaptive, interactive, (nonlinear) network. This framework was primarily motivated by (and applied to) issues surrounding the dynamics of activation flow through a neural network. The framework we adopt in this book incorporates and extends the GRAIN principles by emphasizing learning mechanisms and the architectural properties that support them.
For example, there has been a long-standing desire to understand how more biologically realistic mechanisms could give rise to error-driven learning (e.g., Hinton & McClelland, 1988; Mazzoni, Andersen, & Jordan, 1991). Recently, a number of different frameworks for achieving this goal have been shown to be variants of a common underlying error-propagation mechanism (O'Reilly, 1996a). The resulting algorithm, called GeneRec, is consistent with known biological mechanisms of learning, makes use of other biological properties of the brain (including interactivity), and allows realistic neural activation functions to be used.
Thus,
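To give a flavor of the algorithm: GeneRec compares activities across two phases of settling, a minus (expectation) phase in which the network produces its own answer and a plus (outcome) phase in which the correct answer is also presented, and changes weights based on the phase difference. The sketch below is a simplified rendering of that idea with illustrative names; see O'Reilly (1996a) for the actual derivation and its symmetric (midpoint) variant:

```python
import numpy as np

def generec_update(w, x_minus, y_minus, y_plus, lrate=0.01):
    """GeneRec-style weight change: presynaptic activity from the minus
    (expectation) phase times the difference between plus (outcome) and
    minus phase postsynaptic activity. Simplified sketch of the published
    rule; the full algorithm also specifies how the phases are computed
    by settling in an interactive (bidirectionally connected) network."""
    return w + lrate * np.outer(y_plus - y_minus, x_minus)
```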