To speed up and simplify our simulations, we can
summarize the effects of inhibitory interneurons by
computing an inhibition function directly as a func-
tion of the amount of excitation in a layer, without the
need to explicitly simulate the inhibitory interneurons
themselves. The simplest and most effective inhibi-
tion functions are two forms of a k-winners-take-all
(kWTA) function, described later. These functions im-
pose a thermostat-like set point type of inhibition by
ensuring that only k (or fewer) out of n total units in a
layer are allowed to be strongly active.
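To make the kWTA idea concrete, here is a minimal sketch in Python (the function name kwta_inhibition and the placement parameter q are illustrative assumptions, not the simulator's actual implementation). It computes a single layer-wide inhibition level lying between the k-th and (k+1)-th most excited units, so that at most k units can end up above it:

import numpy as np

def kwta_inhibition(excitation, k, q=0.25):
    # Sort units by excitatory drive, strongest first.
    sorted_exc = np.sort(excitation)[::-1]
    # Put the layer-wide inhibition between the k-th and (k+1)-th
    # strongest units, so at most k units can exceed it.
    g_k, g_k1 = sorted_exc[k - 1], sorted_exc[k]
    return g_k1 + q * (g_k - g_k1)

# Example: with k=2, only the two most excited of five units
# end up above the computed inhibition level.
exc = np.array([0.9, 0.2, 0.7, 0.4, 0.1])
g_i = kwta_inhibition(exc, k=2)
active = exc > g_i   # [True, False, True, False, False]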
We next discuss some of the particularly useful func-
tional properties of inhibition, then explore the in-
hibitory dynamics of a cortical-like network with feed-
forward and feedback inhibition, and then introduce and
explore the kWTA inhibition functions.
Figure 3.19: Two basic types of inhibitory connectivity (ex-
citation is shown with the open triangular connections, and
inhibition with the filled circular ones). a) Shows feedfor-
ward inhibition driven by the input layer activity, which antic-
ipates and compensates for excitation coming into the layer.
b) Shows feedback inhibition driven by the same layer that
is being inhibited, which reacts to excitation within the layer.
Inhibitory interneurons typically inhibit themselves as well.
Feedforward inhibition occurs when the inhibitory
interneurons in a hidden layer are driven directly by the
inputs to that layer, and then send inhibition to the prin-
cipal (excitatory) hidden layer neurons (figure 3.19a).
Thus, these hidden layer neurons receive an amount of
inhibition that is a function of the level of activity in
the input layer (which also projects excitatory connec-
tions into the hidden layer). This form of inhibition
anticipates and counterbalances the excitation coming
into a given layer from other layers. The anticipation
effect is like having your thermostat take into account
the temperature outside in deciding how much AC to
provide inside. As a result, a hidden layer excitatory
neuron will receive roughly proportional and offsetting
amounts of excitation and inhibition. You might think
this would prevent the neurons from ever getting active
in the first place — instead, it acts as a kind of filter, be-
cause only those neurons that have particularly strong
excitatory weights for the current input pattern will be
able to overcome the feedforward inhibition.
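A minimal sketch of this arrangement in Python (the linear units, the weight matrix, and the ff_gain parameter are illustrative assumptions, not the book's point-neuron equations) subtracts an inhibition term driven by total input-layer activity from each hidden unit's excitation:

import numpy as np

def feedforward_net(inputs, weights, ff_gain=0.5):
    # Excitation each hidden unit receives from the input layer.
    excitation = weights @ inputs
    # Feedforward inhibition is driven by the same input activity,
    # so it rises and falls with the incoming excitation.
    inhibition = ff_gain * inputs.sum()
    # Only units with particularly strong weights for this input
    # pattern keep a net drive above zero.
    return excitation - inhibition

# Example: the first hidden unit is well tuned to the input pattern
# and survives the inhibition; the second is not and does not.
inputs = np.array([1.0, 1.0, 0.0])
weights = np.array([[0.9, 0.8, 0.1],
                    [0.2, 0.3, 0.9]])
net = feedforward_net(inputs, weights)   # [ 0.7, -0.5]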
Feedback inhibition occurs when the same layer that
is being inhibited excites the inhibitory neurons, pro-
ducing a negative feedback loop (figure 3.19b). Thus,
feedback inhibition reacts to the level of excitation
within the layer itself, and prevents the excitation from
exploding (spreading uncontrollably to all units) as was
observed in the previous section. This is like the usual
thermostat that samples the same indoor air it regulates.
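The negative feedback loop can be sketched as follows (again in Python, with simple linear threshold units and illustrative gain and step-size values rather than the book's point-neuron equations): hidden activity drives an inhibition signal that is subtracted back from the same units, so total activity settles instead of spreading to every unit.

import numpy as np

def feedback_settle(excitation, fb_gain=1.0, dt=0.1, steps=100):
    act = np.zeros_like(excitation)
    inhib = 0.0
    for _ in range(steps):
        # Units are driven by their excitation minus the shared inhibition.
        act = np.clip(excitation - inhib, 0.0, 1.0)
        # Inhibition gradually tracks total layer activity (negative feedback).
        inhib += dt * (fb_gain * act.sum() - inhib)
    return act, inhib

# Example: even though every unit gets some excitation, the loop
# settles with only the more strongly excited units still active.
exc = np.array([0.9, 0.8, 0.6, 0.5, 0.4])
act, inhib = feedback_settle(exc)   # act ~ [0.33, 0.23, 0.03, 0, 0]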
3.5.1 General Functional Benefits of Inhibition
There are several important general functional conse-
quences of inhibition. First, inhibition leads to a form
of competition between neurons. In the case of feedfor-
ward inhibition, only the most strongly activated neu-
rons are able to overcome the inhibition, and in the case
of feedback inhibition, these strongly active neurons are
better able to withstand the inhibitory feedback, and
their activity contributes to the inhibition of the other
neurons. This competition is a very healthy thing for
the network; it provides a mechanism for selection:
finding the most appropriate representations for the cur-
rent input pattern. This selection process is akin to nat-
ural selection, also based on competition (for natural
resources), which results in the evolution of life itself!
The selection process in a network occurs both on a
moment-by-moment on-line basis, and over longer time
periods through interaction with the learning mecha-
nisms described in the next chapter. It is in this learning
context that competition produces something akin to the
evolution of representations.
The value of competition has long been recognized
in artificial neural network models (Kohonen, 1984;
McClelland & Rumelhart, 1981; Rumelhart & Zipser,
1986; Grossberg, 1976). Also, McNaughton and Mor-
ris (1987) showed how feedforward inhibition can re-