Neural network models grew out of three early advances: (a) the McCulloch and Pitts (1943) model of neural processing in terms of basic logical operations; (b) Hebb's (1949) theory of Hebbian learning and the cell assembly, which holds that connections between coactive neurons should be strengthened, joining them together; and (c) Rosenblatt's (1958) work on the perceptron learning algorithm, which could learn from error signals. These computational approaches built on fundamental advances in neurobiology, where the idea that the neuron is the primary information processing unit of the brain became established (the "neuron doctrine"; Shepherd, 1992), and the basic principles of neural communication and processing (action potentials, synapses, neurotransmitters, ion channels, etc.) were being developed. The dominance of the computer metaphor approach in cognitive psychology was nevertheless sealed with the publication of the book Perceptrons (Minsky & Papert, 1969), which proved that some of these simple neuronlike models had significant computational limitations: they were unable to learn to solve a large class of basic problems.
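To make these two learning rules concrete, here is a minimal Python sketch (not from the original sources; the learning rate, epoch count, and function names are illustrative choices). The Hebbian rule strengthens weights between coactive units, while Rosenblatt's perceptron rule adjusts weights in proportion to an explicit error signal:

import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    # Hebb's principle: dw = lr * y * x strengthens coactive connections.
    return w + lr * y * x

def perceptron_train(X, t, lr=0.1, epochs=50):
    # Rosenblatt's rule: move weights by the error (t - y) on each example.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, ti in zip(X, t):
            y = 1.0 if xi @ w + b > 0 else 0.0  # binary threshold unit
            w += lr * (ti - y) * xi
            b += lr * (ti - y)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(perceptron_train(X, np.array([0., 0., 0., 1.])))  # AND: linearly separable, so it converges

The same procedure run on XOR targets ([0, 1, 1, 0]) never settles on a solution, which is exactly the class of limitation that Minsky and Papert formalized.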
While a few hardy researchers continued studying these neural network models through the '70s (e.g., Grossberg, Kohonen, Anderson, Amari, Arbib, Willshaw), it was not until the '80s that a few critical advances brought the field back into widespread popularity. In the early '80s, psychological (e.g., McClelland & Rumelhart, 1981) and computational (Hopfield, 1982, 1984) advances were made based on the activation dynamics of networks. Then the backpropagation learning algorithm was rediscovered by Rumelhart, Hinton, and Williams (1986b), having been independently discovered several times before (Bryson & Ho, 1969; Werbos, 1974; Parker, 1985), and the Parallel Distributed Processing (PDP) books (Rumelhart et al., 1986c; McClelland et al., 1986) were published, which firmly established the credibility of neural network models. Critically, the backpropagation algorithm eliminated the limitations of the earlier models, enabling essentially any function to be learned by a neural network. Another important advance represented in the PDP books was a strong appreciation for the importance of distributed representations (Hinton, McClelland, & Rumelhart, 1986), which have a number of computational advantages over symbolic or localist representations.
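The claim that backpropagation removes the earlier limitations can be illustrated with a small sketch (a hypothetical two-layer network; the hidden layer size, learning rate, and iteration count are arbitrary choices, not drawn from the text). With a layer of hidden units trained by backpropagated error, XOR, the canonical counterexample to the perceptron, becomes learnable:

import numpy as np

rng = np.random.default_rng(0)
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0.], [1.], [1.], [0.]])  # XOR targets

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)  # hidden -> output

lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)        # forward pass
    y = sigmoid(h @ W2 + b2)
    d2 = (y - t) * y * (1 - y)      # output-layer error signal
    d1 = (d2 @ W2.T) * h * (1 - h)  # error propagated back to the hidden layer
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)

print(np.round(y.ravel(), 2))  # typically close to [0, 1, 1, 0]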
Backpropagation led to a new wave of cognitive modeling (which often goes by the name connectionism). Although it represented a step forward computationally, backpropagation was viewed by many as a step backward from a biological perspective, because it was not at all clear how it could be implemented by biological mechanisms (Crick, 1989; Zipser & Andersen, 1988). Thus, backpropagation-based cognitive modeling carried on without a clear biological basis, leading many such researchers to justify their approach with the same kinds of arguments used by supporters of the computer metaphor (i.e., the "computational level" arguments discussed previously). Some would argue that this deemphasis of biological issues made the field essentially a reinvented computational cognitive psychology based on "neuronlike" processing principles, rather than a true computational cognitive neuroscience.
In parallel with the expanded influence of neural network models in understanding cognition, there was a rapid growth of more biologically oriented modeling. We can usefully identify several categories of this type of research. First, we can divide the biological models into those that emphasize learning and those that do not. The models that do not emphasize learning include detailed biophysical models of individual neurons (Traub & Miles, 1991; Bower, 1992), information-theoretic approaches to processing in neurons and networks of neurons (e.g., Abbott & LeMasson, 1993; Atick & Redlich, 1990; Amit, Gutfreund, & Sompolinsky, 1987; Amari & Maginu, 1988), and refinements and extensions of the original Hopfield (1982, 1984) models, which hold considerable appeal due to their underlying mathematical formulation in terms of concepts from statistical physics. Although this research has led to many important insights, it tends to make less direct contact with cognitively relevant issues (though the Hopfield network itself provides some centrally important principles, as we will see in chapter 3, and has been used as a framework for some kinds of learning).
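The statistical-physics appeal mentioned above comes from the fact that a symmetric-weight network has an energy function that its asynchronous update dynamics can only decrease. A minimal Python sketch (the stored pattern, network size, and step count are illustrative assumptions, not from the text):

import numpy as np

def hopfield_store(patterns):
    # Hebbian outer-product storage with no self-connections.
    P = np.array(patterns, dtype=float)  # rows of +1/-1 values
    W = P.T @ P / len(P)
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    # E = -1/2 s^T W s; each asynchronous update lowers (or preserves) it.
    return -0.5 * s @ W @ s

def settle(W, s, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))               # pick one unit at random
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0  # threshold update
    return s

pattern = np.array([1, -1, 1, -1, 1, -1], dtype=float)
W = hopfield_store([pattern])
noisy = pattern.copy(); noisy[0] = -1         # corrupt one element
recalled = settle(W, noisy)
print(energy(W, noisy), energy(W, recalled))  # energy drops as the network settles
print(recalled)                               # back at the stored pattern

Settling into such energy minima is the content-addressable memory behavior that made these models attractive, and it previews the attractor dynamics taken up in chapter 3.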
The biologically based learning models have tended to focus on learning in the early visual system, with an emphasis on Hebbian learning (Linsker, 1986; Miller, Keller, & Stryker, 1989; Miller, 1994; Kohonen, 1984; Hebb, 1949). Importantly, a large body of basic neu-