grow arbitrarily to establish a good balance across these
different inputs. Thus, by automatically normalizing
this baseline difference away, the default is that all pro-
jections have roughly the same level of influence.
In most cases, we can then compute the overall excitatory conductance g_e(t), which we also refer to as the net input (net in the simulator) by analogy with the simpler ANN formalisms, as an average of the projection-level conductances together with the bias weight β (bias.wt in the simulator), with a time-averaging time constant dt_net (0 < dt_net < 1, dt_net in the simulator) for integrating g_e(t) over time:
$$ g_e(t) = (1 - dt_{net})\, g_e(t-1) + dt_{net} \left( \frac{1}{n_p} \sum_k \frac{1}{\alpha_k} \left\langle x_i w_{ij} \right\rangle_k + \frac{1}{N}\,\beta \right) \qquad (2.16) $$

where n_p is the number of projections. The default dt_net value is .7, making for relatively fast temporal integration. Note that because the bias input is treated essentially as just another projection, it would have a disproportionately large impact relative to the other synaptic inputs if it were not scaled appropriately. We achieve this scaling by dividing by the total number of input connections N, which gives the bias weight roughly the same impact as one normal synaptic input.
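To make this concrete, here is a minimal Python sketch of one net-input update under the assumptions above; the function name net_input and the data layout are illustrative, not the simulator's actual code:

```python
import numpy as np

def net_input(g_e_prev, projections, bias_wt, N, dt_net=0.7):
    """One time-averaged update of the excitatory conductance g_e(t).

    projections: list of (acts, wts, alpha) per projection, where acts
    holds the sending activations x_i, wts the weights w_ij into this
    unit, and alpha is the expected activity level of the sending layer.
    bias_wt: the bias weight beta, scaled by 1/N so it counts as
    roughly one ordinary synaptic input.
    """
    n_p = len(projections)
    # Average x_i * w_ij within each projection, normalized by alpha
    # so projections with different baseline activity levels balance.
    proj_conds = [np.mean(acts * wts) / alpha
                  for acts, wts, alpha in projections]
    # Instantaneous net input: average over projections plus scaled bias.
    net = sum(proj_conds) / n_p + bias_wt / N
    # Time-average with step size dt_net (0 < dt_net < 1; default .7).
    return (1.0 - dt_net) * g_e_prev + dt_net * net
```

With the default dt_net of .7, g_e moves 70 percent of the way toward the new instantaneous value on each update, which is why the integration is described as relatively fast.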
Figure 2.11: Computing the excitatory synaptic input. Individual weighted inputs at each synapse (x_i w_ij) coming from the same projection (A or B, as represented by the branches of the dendritic tree) are averaged together (⟨x_i w_ij⟩). This average is normalized by the expected sending activity level for the projection (α), and scaled by arbitrary constants (absolute scale s and relative scales a and b). The bias input β (shown as a property of the soma) is treated like another projection, and is scaled by one over the total number of inputs N, making it equivalent to one input value. All the projection values (including bias) are then added up to get the overall excitatory conductance g_e (with the time-averaging also factored in).
Differential Projection-Level Scaling
In some cases, we need to introduce scaling constants that alter the balance of influence among the different projections. In cortical neurons, for example, some projections may connect with the more distal (distant from the cell body) parts of the dendrites, and thus have a weaker overall impact on the neuron than more proximal (near to the cell body) inputs. We implement scaling constants by altering equation 2.15 as follows:
$$ g_e = \sum_k s_k \frac{r_k}{\sum_p r_p} \frac{1}{\alpha_k} \left\langle x_i w_{ij} \right\rangle_k + \frac{1}{N}\,\beta \qquad (2.17) $$

where s_k (wt_scale.abs in the simulator) provides an absolute scaling parameter for projection k, and r_k (wt_scale.rel in the simulator) provides a relative scaling parameter that is normalized relative to the scaling parameters for all the other projections. When these parameters are all set to 1, as is typically the case, the equation reduces to equation 2.15 (with every r_k = 1, the normalized relative scale r_k / Σ_p r_p is just 1/n_p, the simple average over projections). When scaling is needed, relative scaling is almost always used because it maintains the same overall level of input to the neuron. However, absolute scaling can be useful for temporarily "lesioning" a projection (by setting s_k = 0) without affecting the contributions from other projections. Figure 2.11 shows a schematic for computing the excitatory input with these scaling constants.
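As an illustration of these scaling constants in the same sketch style (again with hypothetical names, not the simulator's code), each projection carries its own s_k and r_k, and setting s_k = 0 lesions that projection without disturbing the rest:

```python
import numpy as np

def scaled_net_input(g_e_prev, projections, bias_wt, N, dt_net=0.7):
    """Net input with per-projection scaling, following equation 2.17.

    projections: list of (acts, wts, alpha, s, r) tuples, where s is the
    absolute scale (wt_scale.abs) and r the relative scale (wt_scale.rel).
    """
    # Relative scales are normalized against all projections' r values.
    r_sum = sum(r for _, _, _, _, r in projections)
    net = 0.0
    for acts, wts, alpha, s, r in projections:
        # s scales absolutely; r / r_sum scales relative to the others,
        # reducing to 1/n_p when every r is 1.
        net += s * (r / r_sum) * np.mean(acts * wts) / alpha
    net += bias_wt / N
    return (1.0 - dt_net) * g_e_prev + dt_net * net
```

Note that lesioning via s = 0 leaves r_sum, and hence the other projections' contributions, unchanged, which is exactly why absolute rather than relative scaling is used for that purpose.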
How Much of Dendritic Integration in Real Neurons Does Our Model Capture?

The way we compute the excitatory input to our simulated neurons incorporates some of the important properties of dendritic integration in real neurons in a way that is not usually done with simplified point neuron models.