and Equation (11.33). For $\tau_m = 6$ ms, $\tau_+ = \tau_- = 20$ ms, and $A_-/A_+ = 1.05$, $\tau_c$ needs to be less than approximately 250 ms for the rule to be Hebbian. As discussed before, Miller and MacKay [61] noted that normalization of the postsynaptic weights is essential for Hebbian learning to function properly and suggested two general ways that normalization could be achieved: multiplicative and subtractive normalization. They defined subtractive normalization to be
$$\frac{dw_i}{dt} = \sum_j C_{ij} w_j - e(\mathbf{w}) \, . \qquad (11.34)$$
Therefore, STDP could be viewed as a form of subtractive constraint if we define
$e(\mathbf{w}) = (A_- - A_+)\,GH\sum_j w_j$ and drop the last two terms, which do not depend on input
correlations. If the inputs do not all have the same average rate, the normalization is not exact, and STDP seems to act to keep the postsynaptic neuron at a constant firing rate (see also [42]). However, there is an important difference between STDP and a rate-based Hebbian rule with a subtractive constraint. As shown in the previous section, under a subtractive constraint synapses with correlations above a threshold tend to go to maximal strength, and synapses with correlations below the threshold tend to go to zero strength. Individual synapses under STDP still tend to adopt maximal or zero strength; however, the strengths averaged over a small group of synapses can show gradation as the correlation is varied, as demonstrated in Figure 11.2D.
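To make this threshold behavior concrete, here is a minimal Python sketch of the rate-based Hebbian rule with a subtractive constraint (Equation 11.34), with each weight clipped to the interval [0, w_max]. The correlation matrix, the specific choice of e(w) as the mean Hebbian drive (which conserves the summed weight before clipping), and all numerical values are illustrative assumptions, not parameters taken from this chapter.

```python
import numpy as np

# Rate-based Hebbian rule with a subtractive constraint (Equation 11.34):
#   dw_i/dt = sum_j C_ij w_j - e(w),
# integrated with the Euler method and hard bounds on each weight.
# The correlation matrix and parameters below are illustrative only.

n, n_corr = 100, 50          # synapses; the first n_corr share extra correlation
c = 0.2                      # extra pairwise correlation within the correlated group
w_max, dt, steps = 1.0, 0.01, 5000

C = np.eye(n)                # baseline (auto)correlation
C[:n_corr, :n_corr] += c     # correlated group

w = np.full(n, 0.5 * w_max)  # start every weight at half of its maximum
for _ in range(steps):
    drive = C @ w                      # Hebbian term: sum_j C_ij w_j
    e = drive.mean()                   # subtractive constraint, same for every synapse
    w = np.clip(w + dt * (drive - e), 0.0, w_max)

print("correlated group mean weight  :", w[:n_corr].mean())   # close to w_max
print("uncorrelated group mean weight:", w[n_corr:].mean())   # close to 0
```

In this sketch the correlated synapses saturate and the uncorrelated ones vanish whenever the correlation lies above the threshold; under STDP, by contrast, the group-averaged strengths vary smoothly with the correlation, as in Figure 11.2D.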
11.4.4 Equilibrium synaptic strengths
To analytically calculate the resulting synaptic weight distribution from the STDP
rule and the neuron model, several authors have taken a Fokker-Planck approach
[16, 68, 89]. Here we adapt their results to fit the present formalism. Any single
synapse continuously undergoes weight changes according to the plasticity rule and
the timing relationships between the presynaptic and postsynaptic neurons. It is
therefore most appropriate to write down a probabilistic equation for its strength.
We denote its distribution with $P(w,t)$, where $t$ denotes the time and $w$ denotes its weight. If we assume that the changes in synaptic weight are small during each time
step, the weight of the synapse can be described as a biased random walk. We can
write the following partial differential equation in the synaptic weight distribution
$P(w,t)$, which basically counts the influx and outflux of weights for a given bin in the histogram of synaptic strengths:
$$\frac{\partial P(w,t)}{\partial t} = -\frac{\partial}{\partial w}\Big[A(w)\,P(w,t)\Big] + \frac{\partial^{2}}{\partial w^{2}}\Big[D(w)\,P(w,t)\Big] \, . \qquad (11.35)$$
The drift term A(w) indicates the net weight 'force field' experienced by an individ-
ual synapse, which is calculated in the previous section as $dw/dt$ (Equation 11.29).
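As a sketch of where this formalism leads, assume a stationary state with zero probability flux at the boundaries (an assumption, since the boundary conditions are not specified in this excerpt); setting the time derivative in Equation (11.35) to zero and integrating once over $w$ then gives

$$0 = -\frac{\partial}{\partial w}\Big[A(w)P_{\infty}(w)\Big] + \frac{\partial^{2}}{\partial w^{2}}\Big[D(w)P_{\infty}(w)\Big]
\;\;\Longrightarrow\;\;
A(w)P_{\infty}(w) = \frac{\partial}{\partial w}\Big[D(w)P_{\infty}(w)\Big],$$

so that

$$P_{\infty}(w) \;\propto\; \frac{1}{D(w)}\,\exp\!\left(\int^{w}\frac{A(u)}{D(u)}\,du\right).$$

In this reading, the drift $A(w)$ determines where the weights accumulate and the diffusion $D(w)$ sets the width of the equilibrium distribution around those points.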
 