of calcium influx, which should lead to LTD. It is
also possible that postsynaptic activity will activate
other voltage-gated calcium channels, which could
provide the weak concentrations of calcium neces-
sary to induce LTD without any presynaptic activity
at all.
3. When the receiving unit is not active, the likelihood
(and/or magnitude) of any weight change goes to
zero. This can be explained by the Mg²⁺ blocking
of the NMDA channels, and also by the lack of ac-
tivation of voltage-gated calcium channels, both of
which lead to no influx of postsynaptic calcium, and
thus no weight changes.
Finally, the effect of CPCA learning with different values of weights can be summarized as follows: when the weight is large (near 1), further increases will happen less frequently (as it becomes less likely that x_i is larger than the weight) and will be smaller in magnitude, while decreases will show the opposite pattern. Conversely, when the weight is small, increases become more likely and larger in magnitude, and the opposite holds for decreases. This general pattern is exactly what is observed empirically in LTP/LTD studies, and amounts to the observation that LTP and LTD saturate at upper and lower bounds, respectively. This can be thought of as a form of soft weight bounding, where the upper and lower bounds (1 and 0 in this case) are enforced in a "soft" manner by slowing the weight changes exponentially as the bounds are approached. We will return to this issue in the context of task learning in a later section.
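To make this soft-bounding property concrete, the following sketch applies the CPCA-style update dw = lrate * y * (x - w), the form of weight change used in this chapter, at a few different weight values. This is a minimal illustration written here in Python; the function name, learning rate, and printed values are ours, not code from the simulator.

    def cpca_dw(x, y, w, lrate=0.1):
        """CPCA-style weight change: dw = lrate * y * (x - w)."""
        return lrate * y * (x - w)

    # With the receiver active (y = 1), increases shrink and decreases grow
    # in magnitude as the weight approaches its upper bound of 1 (and the
    # reverse near 0) -- the soft weight bounding described above.
    for w in (0.1, 0.5, 0.9):
        inc = cpca_dw(x=1.0, y=1.0, w=w)   # sender active: LTP-like increase
        dec = cpca_dw(x=0.0, y=1.0, w=w)   # sender inactive: LTD-like decrease
        print(f"w={w:.1f}  increase={inc:+.3f}  decrease={dec:+.3f}")

At a weight of 0.1 the increase is nine times the size of the decrease, at 0.5 they are equal, and at 0.9 the pattern reverses, mirroring the LTP/LTD saturation just described.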
4.6 Exploration of Hebbian Model Learning

We now revisit the simulation we ran at the beginning of this chapter, and see how a single unit learns in response to different patterns of correlation between its activity and a set of input patterns. This exploration will illustrate how conditionalizing the activity of the receiving unit can shape the resulting weights to emphasize a feature present in only a subset of input patterns. However, we will find that we need to introduce some additional factors in the learning rule to make this emphasis really effective. These factors will be even more important for the self-organizing case that is explored in a subsequent section.
Open the project hebb_correl.proj.gz in
chapter_4 to begin (if it is still open from the previous
exercises, you will want to close and reopen it to start
with a clean slate).
As before, we will want to watch the weights of the
hidden unit as it learns.
Select r.wt as the variable to view in the network
window, and click on the hidden unit. Now, select View
and EVENTS in the hebb_correl_ctrl control panel.
You should see an environment window with two events, one containing a right-leaning diagonal line and the other a left-leaning one. These are the two sets of correlations that exist in this simple environment.
To keep things simple in this simulation, we will ma-
nipulate the percentage of time that the receiving unit is
active in conjunction with each of these events to alter
the conditional probabilities that drive learning in the
CPCA algorithm. Thus, we are only simulating those
events that happen when the receiver is active — when
the receiver is not active, no learning occurs, so we can
just ignore all these other events for the present pur-
poses. As a result, what we think of as conditional prob-
abilities actually appear in the simulation as just plain
unconditional probabilities — we are ignoring every-
thing outside the conditional (where the unit is inac-
tive). In later simulations, we will explore the more re-
alistic case of multiple receiving units that are activated
by different events, and we will see how more plausi-
ble ways of conditional probability learning can arise
through self-organizing learning.
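The logic of this manipulation can be sketched outside the simulator: sample only trials on which the receiver is active, pick one of the two diagonal-line events according to its frequency, and apply the CPCA update. The sketch below is an illustrative reconstruction in Python under assumed 5x5 diagonal patterns; it is not the project's actual code, and the patterns and names are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two illustrative 5x5 input events, one diagonal line in each direction
    # (the project's actual event patterns may differ in detail).
    right = np.fliplr(np.eye(5)).ravel()
    left = np.eye(5).ravel()

    # Probability of the "right" event on receiver-active trials, playing the
    # role of the frequency values manipulated in this exploration.
    p_right = 1.0      # try 0.7 or 0.5 to see graded weights emerge

    w = np.full(25, 0.5)              # initial weights
    lrate = 0.005
    for _ in range(5000):
        x = right if rng.random() < p_right else left
        w += lrate * 1.0 * (x - w)    # CPCA update on a receiver-active (y = 1) trial

    # Each weight approaches P(x_i = 1 | y = 1); with p_right = 1 the weights
    # for the "right" diagonal go toward 1 and the other inputs decay toward 0.
    print(np.round(w.reshape(5, 5), 2))

Because only receiver-active trials are sampled, the plain frequencies here play exactly the role of the conditional probabilities described above.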
We will next view the probabilities or frequencies as-
sociated with each event.
Locate the Evt Label parameter in the environment window (upper left-hand side), and select
FreqEvent::freq to be displayed as the label below
each event in the display on the right side of the window.
You should see frequency: 1 below the Right
event, and 0 below the Left one, indicating that the
receiving unit will be active all of the time in conjunc-
tion with the right-leaning diagonal line, and none of
the time with the left-leaning one (this was the default
for the initial exploration from before).
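In terms of the conditional probabilities that drive CPCA learning, these settings mean that P(x_i = 1 | y_j = 1) is 1 for the inputs lying along the right-leaning diagonal line and 0 for all the others, so we should expect the weights to be driven toward that same pattern when the network is trained (the exact values will depend on the simulation parameters).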