Noise is added to the membrane potentials of the V1 units during settling, which is important for facilitating the constraint-satisfaction settling process that must balance the effects of the lateral topography-inducing connections against the feedforward connections from the input patterns. Noise is useful here for the same reasons it was useful in the Necker cube example in chapter 3: it facilitates rapid settling when there are many roughly equally good states of the network. This is the case with the lateral connectivity because each unit has the same lateral weights, so every point in the hidden unit space is trying to create a little bump of activity there. Noise is needed to break all these ties and enable the “best” bump (or bumps) of activity to persist.
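To make the role of noise concrete, here is a minimal Python sketch (not the simulator's own code; the leaky update rule, noise standard deviation, and sigmoid parameters are all illustrative assumptions) in which zero-mean Gaussian noise is added to each unit's membrane potential on every settling cycle:

    import numpy as np

    def settle(net_input_fn, n_units, n_cycles=50, dt=0.2, noise_sd=0.01, seed=None):
        # Iteratively drive membrane potentials toward their net input,
        # adding a little Gaussian noise on each cycle so that ties among
        # equally good activity states get broken.
        rng = np.random.default_rng(seed)
        v_m = np.zeros(n_units)                  # membrane potentials
        act = np.zeros(n_units)                  # unit activations
        for _ in range(n_cycles):
            net = net_input_fn(act)              # feedforward + lateral net input
            noise = rng.normal(0.0, noise_sd, n_units)
            v_m += dt * (net - v_m) + noise      # leaky integration plus noise
            act = 1.0 / (1.0 + np.exp(-10.0 * (v_m - 0.5)))  # sigmoidal activation
        return act

In the actual model, inhibitory competition within the hidden layer (not shown here) then allows only the best bump or bumps of activity to remain active; the noise simply determines which of the otherwise equivalent candidates win.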
Finally, because the V1 receptive fields need to represent graded, coarse-coded tuning functions, it is not appropriate to use the default weight contrast settings that are useful for more binary kinds of representations (e.g., where the input/output patterns are all binary-valued). Thus, the results shown below are for a weight gain parameter of 1 (i.e., no weight contrast) instead of the default of 6. Because weight gain and the weight offset interact, with a higher offset needed with less gain, the offset is set to 2 from the default of 1.25, to encourage units to represent only the strongest correlations present in the input (see section 4.7 in chapter 4 for details).
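For reference, the weight contrast discussed in section 4.7 can be expressed as a sigmoidal function of the raw weight with a gain and an offset. The sketch below assumes the standard form in which the offset scales the (1-w)/w ratio; with gain 1 and offset 1 the effective weight equals the raw weight, which is why gain 1 amounts to no weight contrast (check section 4.7 for the exact expression used by the simulator):

    import numpy as np

    def contrast_enhance(w, gain=6.0, offset=1.25):
        # Sigmoidal weight-contrast function: larger gains push effective
        # weights toward 0 or 1; a larger offset requires stronger raw
        # weights before the effective weight becomes substantial.
        w = np.clip(w, 1e-6, 1.0 - 1e-6)   # keep raw weights strictly in (0, 1)
        return 1.0 / (1.0 + (offset * (1.0 - w) / w) ** gain)

    # With gain=1 (no contrast) and offset=2, as used in this exploration,
    # only relatively strong raw weights yield large effective weights:
    print(contrast_enhance(np.array([0.3, 0.5, 0.8]), gain=1.0, offset=2.0))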
8.3.2 Exploring the Model

[Note: this simulation requires a minimum of 64Mb of RAM to run.]

Open the project v1rf.proj.gz in chapter_8 to begin.

You will notice that the network (figure 8.7) has two input layers, each 12x12 in size, one representing a small patch of on-center LGN neurons (Input_pos), and the other representing a similar patch of off-center LGN neurons (Input_neg). Specific input patterns are produced by randomly sampling a 12x12 patch from a set of ten larger (512x512 pixel) images of natural scenes. The single hidden layer is 14x14 in size.

Let's examine the weights of the network by clicking on r.wt and then on a hidden unit.

You should observe that the unit is fully (and randomly) connected with the input layers, and that it has the circular neighborhood of lateral excitatory connectivity needed for inducing topographic representations.
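To illustrate what such a neighborhood looks like, the following sketch builds a circular excitatory connectivity mask over a 14x14 sheet of units. The radius of 2 grid units is an illustrative assumption, not necessarily the project's actual setting:

    import numpy as np

    def circular_lateral_mask(rows=14, cols=14, radius=2.0):
        # Unit i sends an excitatory lateral connection to unit j if j lies
        # within `radius` grid units of i (self-connections excluded).
        ys, xs = np.mgrid[0:rows, 0:cols]
        coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
        dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
        return (dist <= radius) & (dist > 0.0)   # shape (rows*cols, rows*cols)

    mask = circular_lateral_mask()
    print(mask.sum(axis=1).reshape(14, 14))      # lateral neighbors per unit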
Select act again in the network window. Locate
the v1rf_ctrl control panel, and press the LoadEnv
button.
This loads a single preprocessed 512x512 image (ten
such images were loaded in training the network, but
we load only one in the interest of saving memory and
time).
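Conceptually, each input event is just a small window cut out of one of these big images. A minimal sketch of that sampling step follows (the function name and the random array standing in for a preprocessed image are illustrative assumptions):

    import numpy as np

    def sample_patch(image, size=12, rng=None):
        # Randomly sample a size x size patch from a larger preprocessed image.
        rng = rng if rng is not None else np.random.default_rng()
        r = int(rng.integers(0, image.shape[0] - size + 1))
        c = int(rng.integers(0, image.shape[1] - size + 1))
        return image[r:r + size, c:c + size]

    image = np.random.default_rng(0).normal(size=(512, 512))  # stand-in for a filtered image
    print(sample_patch(image).shape)   # (12, 12)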
Now, do StepTrain in the control panel, and observe the activations as the network settles in response to a sampled input pattern.
The hidden units will initially have a somewhat random and sparse pattern of activity in response to the input images.
You should observe that the on- and off-center input patterns have complementary activity patterns. That is, where there is activity in one, there is no activity in the other, and vice versa. This complementarity reflects the fact that an on-center cell will be excited when the image is brighter in the middle than at the edges of its receptive field, and an off-center cell will be excited when the image is brighter at the edges than in the middle. Both cannot be true at once, so only one is active per image location.
Keep in mind that the off-center units are active (i.e.,
with positive activations) to the extent that the image
contains a relatively dark region in the location coded
by that unit. Thus, they do not actually have negative
activations to encode darkness.
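One way to see why the two patterns must be complementary: the preprocessing is a center-surround style filtering, and the on-center inputs carry the positive part of the filtered values while the off-center inputs carry the (sign-flipped) negative part, so at most one of the two can be nonzero at any location. The sketch below assumes a patch that has already been filtered in this way (the actual filtering is part of the image preprocessing, not shown here):

    import numpy as np

    def on_off_split(filtered_patch):
        # On units carry the positive part (center brighter than surround);
        # off units carry the negative part (surround brighter than center).
        on = np.clip(filtered_patch, 0.0, None)
        off = np.clip(-filtered_patch, 0.0, None)
        return on, off

    on, off = on_off_split(np.array([[0.6, -0.3], [0.0, 0.2]]))
    print(on)    # [[0.6 0. ]  [0.  0.2]]
    print(off)   # [[0.  0.3]  [0.  0. ]]
    assert np.all((on == 0) | (off == 0))   # complementary by construction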
Continue to do StepTrain for several more input
patterns.
Question 8.1 (a) What would you expect to see if the lateral, topography-inducing weights were playing a dominant role in determining the activities of the hidden units? (b) Are the effects of these lateral weights particularly evident in the hidden unit activity patterns? Now, increase the control panel parameter lat_wt_scale to .2 from the default of .04 and continue to StepTrain. This will increase the effective strength of the lateral (recurrent) weights within the hidden layer. (c) How does this change the hidden unit activation patterns? Why?
Set lat_wt_scale back to .04.
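Mechanistically, lat_wt_scale can be thought of as a multiplier on the lateral (recurrent) contribution to each hidden unit's net input, relative to the feedforward contribution from the two LGN input layers. The following sketch is only illustrative (the function and variable names are assumptions, not the simulator's actual net-input code):

    import numpy as np

    def hidden_net_input(ff_acts, ff_wts, lat_acts, lat_wts, lat_wt_scale=0.04):
        # Feedforward contribution plus lateral contribution scaled by
        # lat_wt_scale; raising lat_wt_scale (e.g., to 0.2) lets the
        # topography-inducing lateral weights dominate the activity pattern.
        return ff_wts @ ff_acts + lat_wt_scale * (lat_wts @ lat_acts)

    rng = np.random.default_rng(0)
    net = hidden_net_input(ff_acts=rng.random(288), ff_wts=rng.random((196, 288)),
                           lat_acts=rng.random(196), lat_wts=rng.random((196, 196)))
    print(net.shape)   # (196,) -- one net input per unit in the 14x14 hidden layer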