A relatively small value of k_hebb compensates for these inequalities in the magnitudes of the error-driven and Hebbian components.
It is important to note that the k_hebb parameter is mostly useful as a means of measuring the relative importance of Hebbian versus error-driven learning in different simulations; it is possible that a single value of this parameter would suffice for modeling all the different areas of the cortex. On the other hand, different areas could differ in this parameter, which might be one way in which genetic biological biases can influence the development and specialization of different areas (chapter 4).
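To make these magnitude inequalities concrete, the following minimal Python sketch (with purely illustrative activation and weight values, not taken from any simulation) computes both weight-change components for a single connection:

```python
# Hypothetical values for one connection (illustrative only).
x_plus, y_plus = 0.9, 0.8      # plus-phase (outcome) activations
x_minus, y_minus = 0.85, 0.7   # minus-phase (expectation) activations
w = 0.3                        # current weight

# CPCA Hebbian component: moves the weight toward the sending activation.
dwt_hebb = y_plus * (x_plus - w)                # = 0.48

# CHL error-driven component: difference of two similar phase products.
dwt_err = x_plus * y_plus - x_minus * y_minus   # = 0.125

print(dwt_hebb, dwt_err)
```

Because the error-driven component is a difference between two similar quantities (and shrinks toward zero as expectations come to match outcomes), while the Hebbian component remains large whenever units are active, a small k_hebb (e.g., .01) keeps the two contributions in rough balance.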
Note that the Hebbian rule is computed using the plus-phase activation states, which makes sense both computationally (you want to move toward these activation states; there is not much sense in learning the statistical structure of mistakes), and biologically, given the evidence that learning happens in the plus phase (section 5.8.3).
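As a concrete sketch of this point, the CPCA Hebbian weight change can be written so that only plus-phase values ever enter it; the array layout (W[i, j] from sender i to receiver j) and the function name here are our own illustrative choices, not the simulator's:

```python
import numpy as np

def cpca_hebb_dwt(x_plus: np.ndarray, y_plus: np.ndarray, W: np.ndarray) -> np.ndarray:
    """CPCA Hebbian weight change, computed from plus-phase activations only.

    dW[i, j] = y_j^+ * (x_i^+ - W[i, j])

    Minus-phase (expectation) activations never appear: the rule learns the
    statistical structure of the corrected, plus-phase states, not of mistakes.
    """
    return y_plus[None, :] * (x_plus[:, None] - W)
```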
Box 6.1: Summary of Leabra Mechanisms
[Figure illustrating the six core principles of Leabra: (1) biological realism; (2) distributed representations; (3) inhibitory competition; (4) bidirectional activation propagation; (5) error-driven learning, annotated with the weight change $\Delta w = \delta_j a_i$; (6) Hebbian learning, annotated with the weight change $\Delta w = a_i^+ a_j^+$.]
This figure provides an illustration of the six core principles behind Leabra. Biological realism (1) is an overarching constraint. Distributed representations (2) have multiple units active, while inhibitory competition (3, implemented in principle via inhibitory connectivity) ensures that relatively few such units are active. Bidirectional activation propagation (4, implemented by bidirectional connectivity) enables both bottom-up and top-down constraints to simultaneously shape the internal representation, and allows error signals to be propagated in a biologically plausible fashion. Error-driven learning (5) shapes representations according to differences between expected outputs and actual ones (represented by the error term $\delta_j$). Hebbian learning (6) shapes representations according to the co-occurrence (correlation) statistics of items in the environment (represented by the product of the sending and receiving unit activations).
The activation function for Leabra was summarized in box 2.2 in chapter 2. The learning mechanism is a combination of Hebbian model learning via the CPCA Hebbian learning mechanism and error-driven task learning via the CHL version of GeneRec, as follows:

$$\Delta_{hebb} w_{ij} = y_j^+ \left( x_i^+ - w_{ij} \right)$$
$$\Delta_{err} w_{ij} = x_i^+ y_j^+ - x_i^- y_j^-$$
$$\Delta w_{ij} = \epsilon \left[ k_{hebb}\, \Delta_{hebb} w_{ij} + (1 - k_{hebb})\, \Delta_{err} w_{ij} \right]$$
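The following short numpy sketch (our illustration, not the actual simulator code; refinements such as soft weight bounding on the error-driven term are omitted) implements this combined update for a weight matrix W[i, j] from sender i to receiver j:

```python
import numpy as np

def leabra_dwt(x_plus, y_plus, x_minus, y_minus, W, lrate=0.01, k_hebb=0.01):
    """One combined weight update: k_hebb mixes the CPCA Hebbian and CHL
    error-driven components, as in the equation above."""
    dwt_hebb = y_plus[None, :] * (x_plus[:, None] - W)               # CPCA
    dwt_err = np.outer(x_plus, y_plus) - np.outer(x_minus, y_minus)  # CHL
    return lrate * (k_hebb * dwt_hebb + (1.0 - k_hebb) * dwt_err)

# Example: 3 sending units, 2 receiving units (illustrative values).
rng = np.random.default_rng(0)
W = rng.uniform(0.25, 0.75, size=(3, 2))
x_p, y_p = np.array([0.9, 0.0, 0.8]), np.array([0.7, 0.1])  # plus phase
x_m, y_m = np.array([0.8, 0.1, 0.7]), np.array([0.5, 0.3])  # minus phase
W += leabra_dwt(x_p, y_p, x_m, y_m, W)
```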
6.2.5 Summary
By combining error-driven and Hebbian learning with the unit and network properties discussed in previous chapters, we have developed a comprehensive set of principles for understanding how learning might work in the cortex, and explored both the biological and functional consequences of these principles. A summary illustration of these principles is provided in box 6.1, which captures the essential properties of the Leabra algorithm. These properties can be summarized according to six core principles (O'Reilly, 1998):
1. biological realism
2. distributed representations
3. inhibitory competition (kWTA)
4. bidirectional activation propagation
5. error-driven learning (GeneRec)
6. Hebbian learning (CPCA).
We have seen how these six principles have shaped our thinking about learning in the cortex, and we have also seen a glimpse of how these principles can have complex and interesting interactions. We have emphasized how error-driven and Hebbian learning can interact.