Hebbian Model Learning

Model learning is difficult because we receive a large amount of low-quality information from our senses. Appropriate a priori biases about what the world is like are therefore important for supplementing and organizing our experiences. A bias favoring simple or parsimonious models is particularly useful. The objective of representing correlations is appropriate because correlations reflect reliable, stable features of the world. A parsimonious representation of such correlations involves extracting their principal components (features, dimensions). A simple form of Hebbian learning will perform this principal components analysis (PCA), but it must be modified to be fully useful. Most importantly, it must be conditionalized (CPCA), so that individual units represent the principal components of only a subset of all input patterns. The basic CPCA algorithm can be augmented with renormalization and contrast enhancement, which improve the dynamic range of the weights and the selectivity of the units to the strongest correlations in the input. Self-organizing learning can be accomplished by the interaction of CPCA Hebbian learning with the network property of inhibitory competition, as described in the previous chapter, and results in distributed representations of statistically informative principal features of the input.
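The core CPCA idea can be stated compactly as a weight update of the form dw_ij = lrate * y_j * (x_i - w_ij): weights move toward the input pattern, but only when the receiving unit is active. The following is a minimal Python sketch of that update; the function names, the toy input distribution, and the particular sigmoid used for contrast enhancement are illustrative assumptions (renormalization is omitted for brevity), not the text's exact equations.

```python
import numpy as np

def cpca_update(w, x, y, lrate=0.01):
    """One CPCA Hebbian step: weights move toward the input pattern,
    but only in proportion to the receiving unit's activity y.
    Over many patterns, w approaches the conditional probability
    that each input is active given that the receiver is active."""
    return w + lrate * y * (x - w)

def contrast_enhance(w, gain=6.0):
    """Illustrative sigmoidal contrast enhancement on the weights:
    values above 0.5 are pushed toward 1 and values below toward 0,
    sharpening selectivity for the strongest correlations.
    (A stand-in for the offset/gain function discussed in the text.)"""
    return w**gain / (w**gain + (1.0 - w)**gain)

# Toy usage: one unit learning over random binary patterns in which
# the first four inputs are reliably co-active (a "principal feature").
rng = np.random.default_rng(0)
w = np.full(8, 0.5)                      # weights start at mid-range
for _ in range(500):
    x = (rng.random(8) < [0.9]*4 + [0.1]*4).astype(float)
    y = float(x[:4].mean() > 0.5)        # unit "wins" on the first feature
    w = cpca_update(w, x, y)
print(np.round(contrast_enhance(w), 2))  # strong weights for inputs 0-3
```

Note that when y = 0 the weights do not change at all; this is the "conditional" part of CPCA, ensuring each unit's weights reflect only the subset of patterns it actually responds to.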
Biological Mechanisms of Learning

The biological basis of learning is thought to be long-term potentiation (LTP) and long-term depression (LTD). These mechanisms are associative or Hebbian in nature, meaning that they depend on both presynaptic and postsynaptic activation. The associative nature of LTP/D can be understood in terms of two requirements for opening the NMDA receptor: presynaptic neural activity (i.e., the secretion of the excitatory neurotransmitter glutamate) and postsynaptic activity (i.e., a membrane potential sufficiently excited or depolarized to unblock the magnesium ions from the channel). The NMDA channel allows calcium ions (Ca++) to enter the synapse, which triggers a complex sequence of chemical events that ultimately results in the modification of synaptic efficacy (weight). The available data suggest that when both presynaptic and postsynaptic neurons are strongly active, the weight increases (LTP) due to a relatively high concentration of calcium, whereas weaker activity results in a weight decrease (LTD) due to an elevated but lower concentration of calcium.
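The following is a minimal Python sketch of this two-threshold calcium account. It assumes a scalar calcium proxy proportional to the product of pre- and postsynaptic activity; the threshold values, learning rate, and update form are illustrative assumptions, not measured biology.

```python
def ltp_ltd_update(w, pre, post, theta_d=0.2, theta_p=0.6, lrate=0.1):
    """Sketch of the calcium-threshold account of LTP/LTD.
    NMDA-mediated Ca++ influx requires both presynaptic activity
    (glutamate release) and postsynaptic depolarization, so the
    calcium proxy is the product of the two activities.
      ca >= theta_p            : strong coactivity -> potentiation (LTP)
      theta_d <= ca < theta_p  : modest coactivity -> depression (LTD)
      ca < theta_d             : too little calcium -> no change
    Thresholds are illustrative placeholders, not measured values."""
    ca = pre * post                    # stand-in for synaptic Ca++ level
    if ca >= theta_p:
        return w + lrate * (1.0 - w)   # LTP, bounded above by 1
    elif ca >= theta_d:
        return w - lrate * w           # LTD, bounded below by 0
    return w                           # sub-threshold: no plasticity

# Strong coactivity potentiates; weaker coactivity depresses.
w = 0.5
print(ltp_ltd_update(w, pre=0.9, post=0.9))  # -> 0.55 (LTP)
print(ltp_ltd_update(w, pre=0.9, post=0.4))  # -> 0.45 (LTD)
```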
4.11 Further Reading
The last few chapters of Hertz et al. (1991) on compet-
itive learning provide a clear and more mathematically
detailed introduction to unsupervised/self-organizing
learning.
Hinton and Sejnowski (1999) is a collection of influential papers on unsupervised learning (model learning).
Much of Kohonen's pioneering work in unsupervised
learning is covered in Kohonen (1984), though the
treatment is somewhat mathematically oriented and can
be difficult to understand.
The paper by Linsker (1988) is probably the most accessible work by this influential self-organizing learning researcher.
Probably the most influential biologically oriented
application of unsupervised model learning has been
in understanding the development of ocular dominance
columns, as pioneered by Miller et al. (1989).
The journal Neural Computation and the NIPS conference proceedings (Advances in Neural Information Processing Systems) always have a large number of high-quality articles on computational and biological approaches to learning.