in biological systems, although possible mechanisms for biologically realizable
back-propagation have been suggested (46,47). Most biological learning models
have instead been based on the proposal of Hebb (48) that "When an axon of
cell A ... excite[s] cell B and repeatedly or persistently takes part in firing it,
some growth process or metabolic change takes place in one or both cells so that
A's efficiency as one of the cells firing B is increased" (quoted from (3, p.
1020)). This is often paraphrased as "neurons that fire together wire together,"
leading to a mathematical formulation such as
 
$$\Delta c_{ij} = \lfloor s_i - R_i \rfloor \,\lfloor s_j - R_j \rfloor , \qquad [4]$$
where Δc_ij is the change in the strength, c, of the connection between neurons i
and j; ⌊x⌋ represents a negative cutoff function (⌊x⌋ = x if x > 0, otherwise ⌊x⌋ =
0); s_i and s_j are the activity levels (here rate-coded) of cells i and j, respectively;
and R_i and R_j are threshold activity levels for cells i and j, respectively. Although
Hebb based his proposal on behavioral data, the rule is now usually related to
the phenomenon of long-term potentiation (LTP), which can be observed in nu-
merous neural tissues and cell types (49-51). In practice, however, the Hebb rule as
initially stated is unworkable for several reasons.
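To make Eq. [4] concrete, the following is a minimal Python sketch of one application of the rule to a matrix of connection strengths; the learning-rate factor eta, the matrix layout, and the example values are illustrative assumptions rather than part of the original formulation.

```python
import numpy as np

def negative_cutoff(x):
    """Negative cutoff from Eq. [4]: pass positive values through, clip negatives to zero."""
    return np.maximum(x, 0.0)

def hebbian_step(c, s, R, eta=0.01):
    """Apply Eq. [4] to a full connection matrix.

    c   : (N, N) array of connection strengths c_ij
    s   : (N,) rate-coded activity levels s_i
    R   : (N,) threshold activity levels R_i
    eta : learning rate (an illustrative scaling factor, not part of Eq. [4])
    """
    drive = negative_cutoff(s - R)        # thresholded activity of each cell
    delta_c = np.outer(drive, drive)      # delta_c[i, j] = cutoff(s_i - R_i) * cutoff(s_j - R_j)
    return c + eta * delta_c

# Cells 0 and 1 fire above threshold together, so their mutual connections strengthen;
# cell 2 stays below threshold, so its connections are unchanged.
c = np.zeros((3, 3))
s = np.array([1.0, 0.8, 0.1])
R = np.full(3, 0.5)
c = hebbian_step(c, s, R)
```

Because the cutoff never yields a negative increment, repeated application of this step can only increase the weights, which is the instability problem taken up next.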
First, the Hebb rule, naively applied, leads to network instability—it pro-
vides for the strengthening, but not the weakening, of connections. (See Part II,
chapter 2 [by Socolar], of this volume for methods of analyzing stability, hys-
teresis, and oscillations in nonlinear dynamical systems.) The stability problem
is easily overcome either by normalizing the total strength of all the synapses
onto a single cell or network region to a fixed sum, or by replacing the negative
cutoff functions in Eq. [4] with any of a variety of specific rules that allow for
weakening connections. Such rules are often based on data for long-term depres-
sion (LTD), the apparent counterpart to LTP that weakens synapses under cer-
tain conditions. The details are complex and depend on cell type and the relative
timing of the pre- and postsynaptic signals, but cases have been reported (see
references in (52)) in which strong connections that are inactive or only weakly
active when the postsynaptic cell is activated are weakened, or, alternatively
(53), connections that are active in the absence of postsynaptic activation are
weakened. A refinement of these ideas that automatically assures network stabil-
ity is the Bienenstock-Cooper-Munro (BCM) rule (54), which postulates that
Δc_ij is zero when s_i is zero, negative for small values of s_i, and positive for val-
ues of s_i above a variable crossover threshold (akin to R_i in Eq. [4]). The thresh-
old is adjusted on a slow time scale in such a way that strengthening is favored
when average activity becomes low, while weakening is favored when average
activity is high.
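As a rough illustration of the two stabilization strategies described above, the sketch below pairs a synaptic-normalization helper with a BCM-style update. The quadratic form phi = s_post * (s_post - theta) and the slow drift of theta toward the square of recent postsynaptic activity are one common instantiation consistent with the behavior described here, not necessarily the exact formulation of (54); the function names, learning rate, and time constant are illustrative assumptions.

```python
import numpy as np

def normalize_incoming(c, total=1.0):
    """Stabilize by rescaling the synapses onto each cell so they sum to a fixed total.

    c : (N, N) matrix with c[i, j] = strength of the connection from cell j onto cell i.
    """
    sums = c.sum(axis=1, keepdims=True)
    sums[sums == 0] = 1.0                 # leave cells with no input untouched
    return c * (total / sums)

def bcm_step(c_i, s_pre, s_post, theta, eta=0.01, tau=100.0):
    """One BCM-style update for the synapses onto a single postsynaptic cell.

    The change is zero when s_post is zero, negative for small s_post, and positive
    once s_post exceeds the crossover threshold theta. The threshold itself drifts
    slowly toward the square of recent postsynaptic activity, so sustained high
    activity raises it (favoring weakening) and low activity lowers it (favoring
    strengthening).
    """
    phi = s_post * (s_post - theta)
    c_i = c_i + eta * phi * s_pre         # presynaptic activity gates which synapses change
    theta = theta + (s_post**2 - theta) / tau
    return c_i, theta
```

In this form the sliding threshold alone bounds growth: persistent high activity raises theta until further strengthening stops, which is why the BCM rule assures stability without requiring explicit normalization.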
More importantly, the Hebb rule does not take into account whether the
perception or action produced by the neural circuit where the activity occurs is
in fact of use to the organism as a whole. In order for learning to be adaptive, the