sult in a form of feedforward pattern completion —we
will return to this in chapter 9. Furthermore, some have
tried to make a very strong mapping between evolution-
ary selection and that which takes place in a neural net-
work (e.g., Edelman, 1987), but we are inclined to rely
on more basic competition and learning mechanisms in
our understanding of this process.
Inhibition can lead to sparse distributed representations: distributed representations in which only a relatively small percentage of units (e.g., 10-25%) is active at a time. Such representations strike a balance between the benefits of distributed representations (section 3.3.2) and the benefits of inhibitory competition, which keeps the representations sparse.
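As a toy illustration of how competition can enforce sparseness (a minimal sketch, not the mechanism the simulator actually uses; the unit count and the 15 percent activity level are illustrative assumptions), a k-winners-take-all rule keeps only the most strongly driven units active:

```python
import numpy as np

def kwta(activations, pct_active=0.15):
    """Keep the top pct_active fraction of units; silence the rest.

    A crude stand-in for inhibitory competition: only the most
    excited units survive, yielding a sparse distributed pattern.
    """
    k = max(1, int(round(pct_active * activations.size)))
    # Threshold at the k-th largest activation value.
    thresh = np.sort(activations)[::-1][k - 1]
    return np.where(activations >= thresh, activations, 0.0)

rng = np.random.default_rng(0)
acts = rng.random(100)        # 100 hidden units with random drive
sparse = kwta(acts, 0.15)
print((sparse > 0).sum())     # about 15 of 100 units stay active
```

Whatever the input pattern, the output has roughly the same small fraction of active units, which is the signature property of a sparse distributed representation.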
Several theorists (e.g., Barlow, 1989; Field, 1994)
have argued that the use of sparse distributed repre-
sentations is particularly appropriate given the general
structure of the natural environment. For example, in
visual processing, a given object can be defined along
a set of feature dimensions (e.g., shape, size, color, tex-
ture), with a large number of different values along each
dimension (i.e., many different possible shapes, sizes,
colors, textures, etc.). Assuming that the individual
units in a distributed representation encode these feature
values, a representation of a given object will only acti-
vate a small subset of units (i.e., the representations will
be sparse). More generally, it seems as though the world
can be usefully represented in terms of a large number
of categories with a large number of exemplars per cate-
gory (animals, furniture, trees, etc.). If we again assume
that only a relatively few such exemplars are processed
at a given time, a bias favoring sparse representations is
appropriate.
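The arithmetic behind this argument can be made concrete (the dimension and value counts below are invented purely for illustration): if objects are encoded over 4 feature dimensions with 25 possible values per dimension, an object activates one value per dimension, so only 4 of the 100 units are active:

```python
# Hypothetical feature-based code: one active value per dimension.
n_dims = 4       # e.g., shape, size, color, texture
n_vals = 25      # possible values along each dimension
total_units = n_dims * n_vals   # 100 units in all
active_units = n_dims           # one winning value per dimension
sparsity = active_units / total_units
print(f"{active_units}/{total_units} units active = {sparsity:.0%}")
```

The more values per dimension, the sparser the resulting representation, which is why high-dimensional feature spaces favor a sparseness bias.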
Inhibition provides a built-in propensity or bias to
produce sparse distributed representations, which can
greatly benefit the development (learning) of useful rep-
resentations of a world like ours where such representa-
tions are appropriate (of course, one could imagine hy-
pothetical alternate universes where such a bias would
be inappropriate). We will pick up on this theme again
in chapter 4.
Figure 3.20: Network for exploring inhibition, with feedforward and feedback connections as in the previous figure.

Finally, another way of viewing inhibition and sparse distributed representations is in terms of a balance between competition and the cooperation that needs to take place in a distributed representation, where multiple units contribute to representing a given thing. The
extremes of complete competition (e.g., a localist rep-
resentation with only one unit active) or complete co-
operation (e.g., a fully distributed representation where
each unit participates in virtually every pattern) are gen-
erally not as good as having a balance between the two
(e.g., Dayan & Zemel, 1995; Hinton & Ghahramani,
1997).
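Before turning to the exploration, the basic interaction can be caricatured in a few lines (a toy rate-model sketch, not the equations the simulator uses; all parameter values are made up): inhibitory units are driven feedforward by the input and by feedback from the hidden layer, and their output is subtracted from the hidden units' excitation:

```python
import numpy as np

def step(inp, hid, gain_ff=0.4, gain_fb=0.6, dt=0.2):
    """One update of a toy hidden/inhibitory pair of pools.

    The inhibitory signal is driven feedforward (by the input) and
    feedback (by the hidden layer); hidden excitation is reduced by it.
    """
    inhib = gain_ff * inp.mean() + gain_fb * hid.mean()
    net = inp - inhib                              # subtractive inhibition
    hid = hid + dt * (np.clip(net, 0, 1) - hid)    # relax toward net input
    return hid, inhib

rng = np.random.default_rng(1)
inp = rng.random(100)           # e.g., a 10x10 input pattern, flattened
hid = np.zeros(100)
for _ in range(50):             # iterate until activity settles
    hid, inhib = step(inp, hid)
print(round(float(hid.mean()), 2))   # held below the raw input level
```

Because the feedback term grows with hidden-layer activity, total activation is self-limiting: stronger hidden activity recruits stronger inhibition, which is the set-point dynamic explored next.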
3.5.2 Exploration of Feedforward and Feedback Inhibition
Open the project inhib.proj.gz in chapter_3
to begin.
You will see the usual three windows, including an inhib_ctrl overall control panel. The network contains a 10x10 input layer, which projects both to the hidden layer of excitatory units and to a layer of 20 inhibitory neurons (figure 3.20). These inhibitory
neurons will regulate the activation level of the hidden
layer units, and should be thought of as the inhibitory
units for the hidden layer (even though they are in their
own layer for the purposes of this simulation). The
ratio of 20 inhibitory units to 120 total hidden units
(17 percent) is like that found in the cortex, which is
commonly cited as roughly 15 percent (White, 1989a;
Zilles, 1990). The inhibitory neurons are just like the
excitatory neurons, except that their outputs contribute