). Then, let's assume that all three channels have the same conductance level (i.e., $g_e \bar{g}_e = 1$, $g_i \bar{g}_i = 1$, and $g_l \bar{g}_l = 1$). So, the total current is 3, and excitation makes up only 1/3 of this total. Thus, the neuron will move 1/3 of the way toward maximal excitation (i.e., $V_m = .333\ldots$).
Perhaps the most important point to take away from
equation 2.10 is that the neuron's membrane potential
reflects a balance between excitation on the one hand,
and leak plus inhibition on the other. Thus, both leak
and inhibition provide counterweights to excitation. We
will see in the next chapter how important these coun-
terweights are for enabling a network of neurons to per-
form effectively.
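To make this balance concrete, here is a minimal sketch (ours, not from the text) that computes the equilibrium membrane potential as a conductance-weighted average of driving potentials, in the spirit of equation 2.10. The function name and the normalized driving potentials ($E_e = 1$, $E_i = E_l = 0$) are illustrative assumptions:

def equilibrium_vm(ge, gi, gl, Ee=1.0, Ei=0.0, El=0.0):
    # Each channel "votes" for its driving potential, weighted by its
    # share of the total conductance (cf. equation 2.10).
    total = ge + gi + gl
    return (ge * Ee + gi * Ei + gl * El) / total

# All three channels at the same conductance level (each g * g_bar = 1):
print(equilibrium_vm(1.0, 1.0, 1.0))  # 0.333... -- 1/3 of the way to Ee

With excitation at 1 and leak plus inhibition summing to 2, the potential settles one third of the way toward maximal excitation, as computed above; increasing the inhibitory or leak conductance pulls it further down, which is the counterweight effect at work.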
2.4.7 Summary

By elaborating and exploring the electrical principles that underlie neural information processing, we have been able to develop a relatively simple equation (equation 2.8) that approximates the response of a neuron to excitatory and inhibitory inputs from other neurons. In the next section, we will see that we can use this equation (together with some additional equations for the thresholded firing process) to describe the behavior of the basic unit in our computational models of cognition. The fact that cognitive function can be so directly related to the actions of charged atoms and to basic physical principles of electricity and diffusion is a very exciting and important aspect of our overall physical reductionist approach to cognition.

Of course, we will find that we cannot actually describe cognitive phenomena directly in terms of ions and channels; there are a number of important emergent phenomena that arise from the interactions of individual neurons, and it is these higher levels of analysis that provide a more useful language for describing cognition. Nevertheless, these higher levels of emergent phenomena can ultimately be traced back to the underlying ions, and our computer simulations of reading, word meaning, memory, and the like will all spend most of their time computing equation 2.8 and other associated equations!

2.5 Computational Implementation of the Neural Activation Function

In this section, we describe how the Leabra framework implements a simplified approximation to the biological mechanisms discussed in the previous sections. As mentioned previously, the major challenge we confront in developing this computational framework is striking an appropriate balance between the facts of biology on the one hand, and simplicity and efficiency on the other. We first highlight some of the main features of the framework, and then cover the details in subsequent subsections.

To put our neural activation function in its proper context, we first describe the most commonly used activation function for more abstract artificial neural networks (ANNs). This function is very simple, yet it shares some basic properties with our more biologically based function. Thus, we view our activation function as providing a bridge between the more abstract, computationally motivated artificial activation function and the way that neurons actually operate.

There are only two equations in the abstract ANN activation function. The first defines the net input to the unit, which is just a sum of the individual weighted inputs from other units:
$\eta_j = \sum_i x_i w_{ij}$  (2.11)
where $\eta_j$ is the net input for receiving unit $j$, $x_i$ is the activation value for sending unit $i$, and $w_{ij}$ is the weight value for that input into the receiving unit. The other equation transforms this net input value into an activation value, which is then sent on to other units:
$y_j = \frac{1}{1 + e^{-\eta_j}}$  (2.12)
where $y_j$ is the activation value of the receiving unit $j$, and the form of this equation is known as a sigmoid (more specifically, the logistic), because it is an S-shaped function, as shown in box 2.1.

Among the most important computational properties of the sigmoid function is the fact that it has a saturating nonlinearity, such that ever stronger excitatory net inputs produce ever smaller increases in activation as the unit approaches its maximal value.
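To make equations 2.11 and 2.12 concrete, here is a minimal sketch (the names and example values are ours, not from the text) that computes the net input as a weighted sum and squashes it through the logistic function:

import math

def net_input(x, w):
    # Equation 2.11: eta_j = sum_i x_i * w_ij
    return sum(xi * wi for xi, wi in zip(x, w))

def sigmoid(eta):
    # Equation 2.12: the logistic function, an S-shaped squashing
    # of the net input into the (0, 1) range.
    return 1.0 / (1.0 + math.exp(-eta))

x = [1.0, 0.5, 0.0]    # sending activations (illustrative)
w = [0.8, -0.2, 0.5]   # weights into receiving unit j (illustrative)
eta = net_input(x, w)  # 0.7
print(sigmoid(eta))    # ~0.668

The saturating nonlinearity is easy to see numerically: sigmoid(2.0) is about 0.88, while sigmoid(4.0) is only about 0.98, so each additional increment of net input produces a smaller and smaller increase in activation.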