everyone is working together for the common good. Also, by stifling individual motivation, socialism can induce a similar kind of laziness. In contrast, Hebbian learning is like right-wing politics in that it encourages rapid and decisive progress as motivated (“greedy”) individuals do whatever they can in their local environment unfettered by government intervention, but with only some vague hope that things might eventually work out for the common good. All too often, local greed ends up impeding overall progress.
6.2.2 Advantages of Combining Hebbian and Error-Driven Learning

The second question posed earlier asks how Hebbian and error-driven learning might be used in the cortex. Does one part of the cortex perform Hebbian model learning, while another does error-driven task learning? Many practitioners in the field would probably assume that something like this is the case, with sensory processing proceeding largely, if not entirely, on the basis of model learning, while higher, more output-oriented areas use task learning. Although there may be some general appeal to this division, we favor a more centrist view, where learning throughout the cortex is driven by a balance between error-driven and Hebbian factors operating at each synapse.

From a purely computational perspective, it seems likely that the most effective learning will result from a combination of both error-driven and Hebbian learning, so that the advantages of one can counteract the disadvantages of the other. Specifically, the “motivated” local Hebbian learning can help to kickstart and shape the ongoing development of representations when the interdependencies of error-driven learning would otherwise lead to slow and lazy learning. At the same time, the power of error-driven learning can ensure that the weights are adjusted throughout the network to solve tasks.

In combining these two forms of learning, we have found it useful to remain somewhat “left of center,” in terms of the political metaphor: we consider error-driven task-based learning to be the primary form of learning, with Hebbian model learning playing an important but secondary role. This emphasis on task-based learning provides assurances that tasks will get learned, but the addition of model learning provides some important biases that should facilitate learning in many ways.
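To make this balance concrete, here is a rough sketch of how a Hebbian (model-learning) term and an error-driven (task-learning) term could be mixed in a single weight update, with a small mixing parameter keeping the error-driven term primary. The specific rules shown (a CPCA-style Hebbian term and a simple two-phase, CHL-style error-driven term) and the names hebb and lrate are illustrative assumptions made here, not equations quoted from this section.

    def combined_dwt(x_minus, y_minus, x_plus, y_plus, w, hebb=0.01):
        # Hebbian (model-learning) term: CPCA-style, moves the weight
        # toward the sending activation in proportion to the receiver's
        # outcome-phase activity (an assumed form, for illustration).
        dwt_hebb = y_plus * (x_plus - w)
        # Error-driven (task-learning) term: contrast between the
        # outcome (plus) and expectation (minus) phase coproducts.
        dwt_err = x_plus * y_plus - x_minus * y_minus
        # Weighted mixture; a small hebb value keeps error-driven
        # learning primary, with Hebbian learning as a secondary bias.
        return hebb * dwt_hebb + (1.0 - hebb) * dwt_err

    # Example: update one weight with a small learning rate.
    w = 0.5
    lrate = 0.01
    w = w + lrate * combined_dwt(0.2, 0.7, 0.9, 0.8, w)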
In the field of machine learning, the use of such biases is often discussed in terms of regularization, where an otherwise underconstrained type of learning can benefit from the additional constraints imposed by these biases. A commonly used form of regularization in neural networks is weight decay, where a small portion of each weight value is subtracted when the weights are updated. This encourages the network to use only those weights that are reliably contributing something to the solution, because otherwise they will just decay to zero.
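For comparison, weight decay as just described amounts to subtracting a small fraction of the current weight at every update, on top of whatever change the error signal calls for. The sketch below is a minimal illustration, with arbitrarily chosen learning rate and decay values.

    def weight_decay_update(w, dwt_err, lrate=0.01, decay=0.001):
        # Apply the error-driven change, then subtract a small portion
        # of the weight itself; weights that are not reliably supported
        # by the error signal drift back toward zero.
        return w + lrate * dwt_err - decay * w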
Hebbian learning can be a much better regularizer than weight decay because it actually makes a positive contribution to the development of representations, instead of just subtracting away excess degrees of freedom in the weights. Of course, the types of representations formed by Hebbian learning must be at least somewhat appropriate for the task at hand for this to be a benefit, but we have argued that the representational biases imposed by Hebbian learning should be generally useful given the structure of our world (chapter 4). We will see many examples throughout the remainder of the text where Hebbian model learning plays a critical role in biasing error-driven learning, and we will explore these issues further in two simple demonstration tasks in subsequent sections.

Finally, we remind the reader that another good reason for combining error-driven and Hebbian learning is that the biological mechanism for synaptic modification discussed in section 5.8.3 suggests that both Hebbian and error-driven contributions are present at the synapse.
6.2.3 Inhibitory Competition as a Model-Learning Constraint
Inhibitory competition (e.g., with one of the kWTA functions) represents an important additional constraint on the learning process that can substantially improve the performance of the network. We think of this inhibitory competition as another kind of model-learning constraint (in addition to Hebbian learning), because it