potential V_m, which reflects a balance between the ag-
gregated excitatory and inhibitory inputs. There is then
a second step that produces an activation output as a
function of V_m. From a computational perspective, we
will see in the next chapter that by computing an exci-
tatory/inhibitory balance in this way, the point neuron
function makes it easier to implement inhibitory com-
petition dynamics among neurons efficiently.
In an actual neuron, we know that the output consists
of a spike produced when V_m exceeds the firing thresh-
old. We provide a simple implementation of this kind of
discrete spiking output function in Leabra, which sum-
marizes all of the biological machinery for producing a
spike with a single binary activation value (1 if V_m is
over threshold, and 0 otherwise).
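The two steps above can be sketched in a few lines. This is an illustrative sketch, not Leabra's actual implementation: the equilibrium form of the membrane potential update (a conductance-weighted average of reversal potentials) and all parameter values here are assumptions chosen for readability.

```python
def equilibrium_vm(g_e, g_i, g_l=0.1, e_e=1.0, e_i=0.15, e_l=0.15):
    """Equilibrium membrane potential as the conductance-weighted
    average of excitatory, inhibitory, and leak reversal potentials.
    Values are illustrative, in normalized 0-1 units."""
    return (g_e * e_e + g_i * e_i + g_l * e_l) / (g_e + g_i + g_l)

def spike_output(v_m, theta=0.25):
    """Discrete spiking output: 1 if V_m exceeds threshold, else 0."""
    return 1 if v_m > theta else 0
```

With no input (g_e = g_i = 0), V_m rests at the leak reversal potential and no spike is produced; strong excitation drives V_m toward e_e and over threshold.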
However, most of our models use a rate code ap-
proximation to discrete spiking. Here, the output of the
neuron is a continuous, real-valued number that reflects
the instantaneous rate at which an otherwise equivalent
spiking neuron would produce spikes. In the context
of the scaling issues discussed in the introduction (sec-
tion 1.2.4), we can think of this rate code output as rep-
resenting the average output of a population of similarly
configured spiking neurons (i.e., something like the pro-
portion of neurons that are spiking over some relatively
small time interval). Section 2.8 provides further justi-
fication for using this approximation.
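The population interpretation of the rate code can be made concrete: treat the rate as the proportion of a pool of similarly configured neurons that are over threshold at a given moment. This is a minimal sketch of that interpretation, with an assumed threshold value.

```python
def population_rate(v_ms, theta=0.25):
    """Rate code as a population average: the proportion of similarly
    configured neurons whose membrane potential exceeds threshold
    (illustrative of the interpretation described in the text)."""
    return sum(1 for v in v_ms if v > theta) / len(v_ms)
```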
The principal computational advantage of the rate
code output is that it smooths over the noise that is
otherwise present with discrete spiking — we will see
that the thresholded nature of the spike output makes the
detailed timing of spike outputs very sensitive to even
small fluctuations in membrane potential, and this sen-
sitivity is manifest as noise. This kind of noise would
tend to get averaged out with thousands of neurons, but
not in the smaller-scale models that we often use.
We will see below that a thresholded, sigmoidal func-
tion provides a good continuous-valued approximation
to the spiking rate produced by the discrete spiking ver-
sion of our model. Thus, we can see here the link be-
tween the continuous-valued sigmoidal activation func-
tion used in the ANN models and the discrete spiking
characteristic of biological neurons. Indeed, we will oc-
casionally use the simpler ANN equations for the pur-
poses of analysis, because they have been extensively
studied and have some important mathematical proper-
ties. Because one can switch between a discrete spiking
output and a continuous rate code output in Leabra, the
impact of the rate code approximation can be evaluated
empirically.
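A thresholded sigmoid of this general shape can be sketched as follows. This is a generic sigmoid centered on the threshold, given for intuition only; the gain and threshold values are assumptions, and the actual function used in the model has further details developed below.

```python
import math

def rate_code(v_m, theta=0.25, gain=30.0):
    """Thresholded sigmoidal rate code: a continuous approximation to
    the firing rate of an equivalent spiking neuron. Output is near 0
    below threshold and saturates toward 1 above it; gain controls the
    sharpness of the threshold. Parameter values are illustrative."""
    return 1.0 / (1.0 + math.exp(-gain * (v_m - theta)))
```

At threshold the output is exactly 0.5; well below threshold it is near 0, and well above it approaches 1, mimicking the saturating firing rate of a spiking neuron.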
In the following subsections, we describe in greater
detail the computation of the excitatory input (g_e(t)),
the parameters used for the point neuron membrane
potential update equation, the discrete spiking output
function, and the continuous rate code output function.
Then we will go on to explore the neural activation
function in action!
2.5.1 Computing Input Conductances
We showed in equation 2.13 that the excitatory input
conductance is essentially an average over the weighted
inputs. However, there are some practical and biologi-
cal details that require us to compute this average in a
somewhat more elaborated fashion.
In a typical cortical neuron, excitatory synaptic in-
puts come from synaptic channels located all over the
dendrites. A single neuron can receive ten thousand or
more synaptic inputs, each with many individual Na+
channels! Typically, a neuron receives inputs from a
number of different brain areas. These different groups
of inputs are
called projections . Inputs from different projections
are often grouped together on different parts of the den-
dritic tree. The way we compute the excitatory input
is sensitive to this projection-level structure: it allows
different projections to have different levels of overall
impact on the neuron, and it automatically compensates
for differences in expected activity level across projec-
tions (which are common in our models).
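The projection-level structure can be sketched as follows. This is an assumption-laden illustration, not Leabra's actual scaling scheme: each projection contributes the average of its weighted inputs, normalized by an assumed expected activity level and weighted by a relative scale factor; all field names here are hypothetical.

```python
def excitatory_input(projections):
    """Sketch of projection-structured excitatory input g_e.

    Each projection is a dict with sending activations ("acts"),
    weights ("wts"), an optional relative impact ("rel_scale"), and an
    optional expected activity level ("exp_act") used to normalize.
    Field names and the scaling scheme are illustrative only.
    """
    total, total_scale = 0.0, 0.0
    for proj in projections:
        acts, wts = proj["acts"], proj["wts"]
        # Average of weighted inputs within this projection.
        avg = sum(a * w for a, w in zip(acts, wts)) / len(acts)
        # Normalize by expected activity, weight by relative scale.
        scaled = proj.get("rel_scale", 1.0) * avg / proj.get("exp_act", 1.0)
        total += scaled
        total_scale += proj.get("rel_scale", 1.0)
    return total / total_scale if total_scale > 0 else 0.0
```

Dividing by the expected activity level means a sparse projection is not automatically drowned out by a dense one, and the relative scale factors let different projections carry different overall impact.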
Another important component of the excitatory input
in the model comes from the bias input, which summa-
rizes the baseline differences in excitability between
different neurons. It is likely that neurons have indi-
vidual differences in their leak current levels or other
differences (of which there are many candidates in the
biology) that could give rise to such differences or bi-
ases in overall level of excitability (e.g., Desai, Ruther-
ford, & Turrigiano, 1999). Thus, some neurons may re-