The parameter p_right in the control panel determines the frequencies of the events in the environment, with the Right event being set to p_right and Left to 1-p_right.
Set p_right to .7, and hit Apply; you will see the FreqEvent::freq values updated to .7 and .3. Then, go ahead and iconify the environment window before continuing.
Again, these absolute probabilities of presenting these lines actually correspond to conditional probabilities, because we are ignoring all the other possible cases where the receiving unit is inactive; we are implicitly conditioning the entire simulation on the receiving unit being active (so that it is indeed always active for every input pattern).
Keep in mind as we do these exercises that this single receiving unit will ordinarily be just one of multiple such receiving units looking at the same input patterns. Thus, we want this unit to specialize on representing one of the correlated features in the environment (i.e., one of the two lines in this case). We can manipulate this specialization by weighting the conditional probabilities more toward one event than the other.
Now, press the Run button in the control panel.
This will run the network through 20 sets (epochs) of 100 randomly ordered event presentations; with a p_right value of .7, 70 of these presentations will be the Right event and 30 the Left event. The CPCA Hebbian learning rule (equation 4.12) is applied after each event presentation, and the weights are updated accordingly. You will see the display of the weights in the network window being updated after each of these 20 epochs.
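To make these mechanics concrete, here is a minimal NumPy sketch of the same training loop outside the simulator. It assumes equation 4.12 has the standard CPCA form Δw_ij = ε y_j (x_i − w_ij); the 5x5 grid size, the index sets for the two diagonal lines, the initial weight value, and the run helper are all hypothetical choices for illustration, not the simulator's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 25                             # assumed 5x5 input grid, flattened
# Hypothetical index sets for the two events; index 12 is the central
# unit, which appears in both lines.
RIGHT_LINE = [4, 8, 12, 16, 20]    # right-leaning diagonal
LEFT_LINE = [0, 6, 12, 18, 24]     # left-leaning diagonal

def make_event(is_right):
    """Binary input pattern for the Right or Left event."""
    x = np.zeros(N)
    x[RIGHT_LINE if is_right else LEFT_LINE] = 1.0
    return x

def run(p_right=0.7, lrate=0.005, epochs=20, events_per_epoch=100):
    """Train a single always-active receiving unit with the CPCA rule."""
    w = np.full(N, 0.5)            # assumed initial weights
    y = 1.0                        # receiver is active on every trial
    n_right = round(p_right * events_per_epoch)
    for _ in range(epochs):
        # e.g., 70 Right and 30 Left presentations, randomly ordered
        events = np.repeat([True, False],
                           [n_right, events_per_epoch - n_right])
        rng.shuffle(events)
        for is_right in events:
            x = make_event(is_right)
            w += lrate * y * (x - w)   # CPCA update (equation 4.12)
    return w

w = run()
print(w[4], w[0], w[12])   # settles near .7, .3, and 1, respectively
```

Because the receiver is always active here, y_j never gates the update off; that is the implicit conditioning discussed above.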
Another way to look at the development of the weights over learning is to use a graph log.
Do View, GRAPH_LOG to pull up the graph log, then do Run again.
Figure 4.11: Plot of the weights from the rightward line (wt_right) and the leftward line (wt_left). [Figure not reproduced: the y-axis shows weight values from 0 to 1, the x-axis runs from 0 to 25, with wt_right tracking along the top of the plot and wt_left along the bottom.]
The graph log (figure 4.11) displays the value of one of the weights from a unit in the right-leaning diagonal line (wt_right, in red), and from a unit in the left-leaning diagonal line (wt_left, in orange). You should notice that as learning proceeds, the weights from the units active in the Right event will hover right around .7 (with the exception of the central unit, which is present in both events and will have a weight of around 1), while the weights for the Left event will hover around .3. Thus, as expected, the CPCA learning rule causes the weights to reflect the conditional probability that the input unit is active given that the receiver was active. Experiment with different values of p_right, and verify that this holds for all sorts of different probabilities.
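A one-line fixed-point argument shows why the weights settle at these values. Assuming the standard CPCA form of equation 4.12, Δw_ij = ε y_j (x_i − w_ij), and using the fact that the receiver has y_j = 1 on every trial here, the expected weight change vanishes exactly when the weight equals the conditional probability:

$$\langle \Delta w_{ij} \rangle \;=\; \epsilon\,\bigl(P(x_i = 1 \mid y_j = 1) - w_{ij}\bigr) \;=\; 0 \quad\Longrightarrow\quad w_{ij} \;=\; P(x_i = 1 \mid y_j = 1)$$

With p_right = .7, this gives .7 for units unique to the Right line, .3 for units unique to the Left line, and 1 for the shared central unit; the hovering in the graph log is the trial-to-trial fluctuation around this equilibrium.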
When you explored different values of p_right previously, you were effectively manipulating how selective the receiving unit was for one type of event over another. Thus, you were taking advantage of the conditional aspect of CPCA Hebbian learning by effectively conditionalizing its representation of the input environment. As we stated earlier, instead of manipulating the frequency with which the two events occurred in the environment, you should think of this as manipulating the frequency with which the receiving unit was co-active with these events, because the receiving unit is always active for these inputs.
The parameter lrate in the control panel, which corresponds to ε in the CPCA learning rule (equation 4.12), determines how rapidly the weights are updated after each event.
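If you want to mirror this manipulation in the NumPy sketch above, the same parameter appears there as the (hypothetical) lrate argument of the run helper:

```python
w_slow = run(lrate=0.005)   # the default used so far
w_fast = run(lrate=0.1)     # 20x larger step after each event
```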
Change lrate to .1 and Run.
Question 4.1 (a) How does this change in the learning rate affect the general character of the weight updates as displayed in the network window? (b) Explain why this happens. (c) Explain the relevance (if any) this might have for the importance of integrating over multiple experiences (events) in learning.
Set the lrate parameter back to .005.