called the axon hillock) gets above a certain critical value, called the threshold. Thresholding means that neurons only communicate when they have detected something with some level of confidence, which is biologically and computationally efficient.
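As a toy illustration of thresholded detection (not Leabra code; the linear net input and the threshold value of 0.5 are assumptions made for this sketch), a detector that stays silent below threshold might look like:

```python
# Minimal sketch of a thresholded detector: it only "communicates"
# (produces a nonzero output) when its summed evidence crosses the
# firing threshold.  The linear net input and threshold value are
# illustrative assumptions, not the book's equations.
def detector_output(inputs, weights, threshold=0.5):
    """Return above-threshold activation, or 0.0 when below threshold."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return net - threshold if net > threshold else 0.0

# Weak evidence stays silent; strong evidence produces an output.
detector_output([0.2, 0.1], [1.0, 1.0])  # net = 0.3, below threshold -> 0.0
detector_output([0.6, 0.4], [1.0, 1.0])  # net = 1.0, above threshold -> 0.5
```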
Self-Regulation

Neurons have a number of voltage-gated and calcium-gated channels that affect the way the neuron responds based on its prior history of activity. Two such mechanisms are included in Leabra, though they are not essential aspects of the algorithm and are not used in most simulations. One is accommodation, which causes a neuron that has been active for a while to “get tired” and become less and less active for the same amount of excitatory input. The other is hysteresis, which causes a neuron that has been active to remain active for some period of time even when its excitatory input disappears. These two mechanisms are obviously in conflict with each other, but this is not a problem because hysteresis operates over a shorter time period, and accommodation over a longer one.
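The short- versus long-timescale contrast between the two mechanisms can be sketched as a pair of leaky traces of recent activity (an illustrative sketch only, not Leabra's actual channel equations; the function name, time constants, and gains are made-up values chosen to show the contrast):

```python
# Illustrative sketch: hysteresis modeled as a fast trace of recent
# activity that keeps the neuron going, accommodation as a slowly
# accumulating fatigue that opposes it.  All parameters are invented
# for illustration.
def self_regulation_trace(activity, tau_hyst=0.2, tau_accom=0.01,
                          g_hyst=0.5, g_accom=1.0):
    """Return the net self-regulation current at each time step."""
    hyst = accom = 0.0
    net = []
    for act in activity:
        hyst += tau_hyst * (act - hyst)     # fast: tracks recent activity
        accom += tau_accom * (act - accom)  # slow: builds up fatigue
        net.append(g_hyst * hyst - g_accom * accom)
    return net

# Early in a burst, the fast hysteresis trace dominates (net > 0);
# with sustained activity, accommodation builds up until the net
# current turns negative and the neuron "gets tired."
trace = self_regulation_trace([1.0] * 200)
```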
Computational Implementation of the Neural Activation Function
Leabra uses a simple point neuron activation function, based on shrinking the geometry of the neuron to a point while retaining some of the properties of the dendritic structure in the way the net input is computed. The resulting membrane potential update equation is taken straight from the biology, and produces the same equilibrium potential values as an equation derived from first principles based on the computational-level detector model of a neuron. To compute an output activation value as a function of the membrane potential, we can either use a very simple spiking mechanism, or a rate code function, which provides a real-valued number representing the instantaneous frequency (rate) with which the cell would produce spikes at a given membrane potential. The rate code also embodies a scaling assumption: individual units in the model represent a number of roughly similar neurons, such that the average impact of many such spiking neurons is approximated by the rate code value.
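This kind of point-neuron update and rate code can be sketched in code. Each channel contributes a current proportional to its conductance times its driving potential, and the output is a saturating x/(x+1)-style function of the above-threshold membrane potential. The conductances, reversal potentials, threshold, and gain below are illustrative values, not Leabra's actual parameter set:

```python
# Hedged sketch of a point-neuron membrane potential update: each
# channel c contributes a current g_c * g_bar_c * (E_c - v_m).
# All parameter values here are illustrative, not Leabra's defaults.
def update_vm(v_m, g_e, dt_vm=0.1,
              g_bar_e=1.0, e_rev_e=1.0,            # excitation
              g_l=0.1, g_bar_l=1.0, e_rev_l=0.15,  # leak (constant)
              g_i=0.0, g_bar_i=1.0, e_rev_i=0.15): # inhibition
    """One Euler integration step of the membrane potential."""
    i_net = (g_e * g_bar_e * (e_rev_e - v_m)
             + g_l * g_bar_l * (e_rev_l - v_m)
             + g_i * g_bar_i * (e_rev_i - v_m))
    return v_m + dt_vm * i_net

def rate_code(v_m, theta=0.25, gain=600.0):
    """Saturating rate-code output: zero below the threshold theta,
    approaching 1 for potentials well above it (x/(x+1) form)."""
    x = gain * max(v_m - theta, 0.0)
    return x / (x + 1.0)

# Drive the neuron with constant excitation until v_m settles at the
# conductance-weighted equilibrium, then read out the rate code.
v_m = 0.15
for _ in range(200):
    v_m = update_vm(v_m, g_e=0.4)
act = rate_code(v_m)  # a value strictly between 0 and 1
```

With these numbers the equilibrium potential is the conductance-weighted average of the reversal potentials, (0.4·1.0 + 0.1·0.15) / (0.4 + 0.1) = 0.83.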
The spike rate function has a saturating nonlinearity, which is biologically determined by a number of factors, including the refractory period just after a spike (during which it is essentially impossible to fire another) and the rate at which the synapses can release neurotransmitter. The nonlinearity of the activation function is important for the computational power of neural networks, and for producing stable activation states in interactive networks.
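The equilibrium potential values mentioned above can be written explicitly: setting the net current across channels to zero yields a conductance-weighted average of the channel reversal potentials (a sketch in generic point-neuron notation, not necessarily the book's exact symbols):

```latex
% Net current over channels c (excitation, leak, inhibition) at rest:
\sum_c g_c \, \bar{g}_c \, (E_c - V_m) = 0
\quad\Longrightarrow\quad
V_m^{\infty} \;=\; \frac{\sum_c g_c \, \bar{g}_c \, E_c}{\sum_c g_c \, \bar{g}_c}
```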
2.11 Further Reading
Johnston and Wu (1995) provides a very detailed and mathematically sophisticated treatment of neurophysiology, starting from basic physical principles and covering issues of dendritic morphology, cable properties, active channels, synaptic release, and the like.

Koch and Segev (1998) provides a recent update to a watershed collection of papers on biophysical models of the neuron.

Shepherd (1990) has a strong synaptic focus, and represents the “complex neuron” hypothesis, in contrast to the “simple neuron” hypothesis developed here.

The Neuron (Hines & Carnevale, 1997) and Genesis (Bower & Beeman, 1994) simulators are two of the most popular computational tools used to construct complex, biophysically realistic neuron-level models.