This is especially important for self-organizing learning, as we will see in the next section.
Now, let's use the wt_off parameter to encourage the network to pay attention to only the strongest correlations in the input.
Leaving wt_gain at 6, change wt_off to 1.25, and do PlotEffWt to see how this affects the effective weight function. You may have to go back and forth between 1 and 1.25 a couple of times to be able to see the difference.
With wt_off set to 1.25, Run the network.
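If you want to examine the effective weight function outside the simulator, the short Python sketch below computes it directly. It assumes the effective weight is given by the sigmoidal contrast-enhancement form w_eff = 1 / (1 + (wt_off * (1 - w) / w) ** wt_gain); if the simulator uses a different parameterization the qualitative picture is the same. The function name eff_wt and the sample weight values are illustrative, not part of PDP++.

def eff_wt(w, wt_gain=6.0, wt_off=1.0):
    """Contrast-enhanced (effective) value of a linear weight w in (0, 1)."""
    if w <= 0.0:
        return 0.0
    if w >= 1.0:
        return 1.0
    return 1.0 / (1.0 + (wt_off * (1.0 - w) / w) ** wt_gain)

# Compare wt_off = 1 with wt_off = 1.25 at wt_gain = 6.
for w in (0.3, 0.4, 0.5, 0.6, 0.7):
    print(f"w={w:.1f}  off=1.00: {eff_wt(w, 6, 1.0):.3f}  off=1.25: {eff_wt(w, 6, 1.25):.3f}")

Under this assumed form, raising wt_off to 1.25 means a linear weight must exceed wt_off / (1 + wt_off), about .56, before its effective value crosses .5, which is why only the strongest correlations survive the contrast enhancement.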
Question 4.4 (a) How does this change the results compared to the case where wt_off is 1? (b) Explain why this occurs. (c) Find a value of wt_off that makes the non-central (non-overlapping) units of the right lines (i.e., the 4 units in the lower left corner and the 4 units in the upper right corner) have weights around .1 or less. (d) Do the resulting weights accurately reflect the correlations present in any single input pattern? Explain your answer. (e) Can you imagine why this representation might be useful in some cases?

An alternative way to accomplish some of the effects of the wt_off parameter is to set the savg_cor parameter to a value of less than 1. As described above, this will make the units more selective because weak correlations will not be renormalized to as high a weight value.

Set wt_off back to 1, and set savg_cor to .7.

Question 4.5 (a) What effect does this have on the learned weight values? (b) How does this compare with the wt_off parameter you found in the previous question?

This last question shows that because the contrast enhancement from wt_gain magnifies differences around .5 (with wt_off = 1), the savg_cor parameter can have a big effect by changing the amount of correlated activity necessary to achieve this .5 value. A lower savg_cor will result in smaller weight values for more weakly correlated inputs; when the wt_gain parameter is large, these smaller values get pushed down toward zero, causing the unit to essentially ignore these inputs. Thus, these interactions between contrast enhancement and renormalization can play an important role in determining what the unit tends to detect.
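To see numerically why a lower savg_cor shrinks the weights of weakly correlated inputs, the sketch below combines the two correction factors. It assumes the renormalized equilibrium weight is w = m * P(x|y) with m = 0.5 / alpha_m and alpha_m = savg_cor * alpha + (1 - savg_cor) * 0.5, where alpha is the sending layer's expected activity level; the value alpha = 0.2, the function names, and the probabilities shown are illustrative assumptions, not values taken from the simulation.

def equilibrium_wt(p_cond, alpha=0.2, savg_cor=1.0):
    """Renormalized CPCA equilibrium weight for conditional probability p_cond."""
    alpha_m = savg_cor * alpha + (1.0 - savg_cor) * 0.5
    return min(1.0, (0.5 / alpha_m) * p_cond)

def eff_wt(w, wt_gain=6.0, wt_off=1.0):
    """Sigmoidal contrast enhancement, as in the previous sketch."""
    if w <= 0.0 or w >= 1.0:
        return max(0.0, min(1.0, w))
    return 1.0 / (1.0 + (wt_off * (1.0 - w) / w) ** wt_gain)

# An input whose conditional probability equals alpha renormalizes to exactly .5
# when savg_cor = 1, but only to about .35 when savg_cor = .7, and the wt_gain
# contrast enhancement then pushes that smaller value toward zero.
for p in (0.1, 0.2, 0.4):
    w1, w7 = equilibrium_wt(p, savg_cor=1.0), equilibrium_wt(p, savg_cor=0.7)
    print(f"P(x|y)={p:.1f}  cor=1: lin={w1:.2f} eff={eff_wt(w1):.2f}"
          f"   cor=.7: lin={w7:.2f} eff={eff_wt(w7):.2f}")

This is the interaction described above: savg_cor changes how much correlated activity is needed to reach the .5 crossover that wt_gain then magnifies.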
These simulations demonstrate how the correction factors of renormalization and contrast enhancement can increase the effectiveness of the CPCA algorithm. These correction factors represent quantitative adjustments to the CPCA algorithm to address its limitations of dynamic range and selectivity, while preserving the basic computation performed by the algorithm to stay true to its biological and computational motivations.

Go to the PDP++Root window. To continue on to the next simulation, close this project first by selecting .projects/Remove/Project_0. Or, if you wish to stop now, quit by selecting Object/Quit.

4.8 Self-Organizing Model Learning
Up to this point, we have focused on a single receiving unit with artificially specified activations to more clearly understand how the weights are adjusted by the CPCA Hebbian learning rule. However, this is obviously not a very realistic model of learning in the cortex. In this section, we move beyond these more limited demonstrations by exploring a network having multiple receiving units that compete with each other under the kWTA inhibitory function. The result is self-organizing model learning, where the interaction between activation dynamics (especially inhibitory competition) and Hebbian learning results in the development of representations that capture important aspects of the environmental structure.
In the context of the CPCA learning algorithm, self-organization amounts to the use of competition between a set of receiving units as a way of conditionalizing the responses of these units. Thus, a given unit will become active to the extent that it is more strongly activated by the current input pattern than other units are; this can only happen if the weights into this unit are sufficiently well tuned to that input pattern. Thus, because the CPCA learning algorithm causes tuning of the weights to those input units that are co-active with the receiving unit, there is effectively a positive feedback system here: any initial selectivity for a set of input patterns will become reinforced by the learning algorithm, producing even greater selectivity.
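A minimal sketch of this positive feedback loop: a toy layer of two receiving units competes through a hard winner-take-all rule (a stand-in for the kWTA function with k = 1), and the winning unit's weights are updated with the basic CPCA rule dw = lrate * (x - w). The layer sizes, patterns, random seed, and learning rate are made up for illustration; this is not the simulator's implementation.

import random

random.seed(0)
patterns = [            # two clusters of binary input patterns
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 0, 1, 1],
]
n_in, n_hid, lrate = 6, 2, 0.1
w = [[random.uniform(0.4, 0.6) for _ in range(n_in)] for _ in range(n_hid)]

for _ in range(50):
    for x in patterns:
        net = [sum(wj[i] * x[i] for i in range(n_in)) for wj in w]
        winner = max(range(n_hid), key=lambda j: net[j])   # inhibitory competition
        for i in range(n_in):                               # CPCA update with y = 1
            w[winner][i] += lrate * (x[i] - w[winner][i])

for j, wj in enumerate(w):
    print(f"unit {j}:", [round(v, 2) for v in wj])

Whichever unit happens to respond more strongly to a cluster tends to win it, its weights move toward those patterns, and it then wins that cluster even more reliably, which is the positive feedback at work.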