When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased. — Donald Hebb, 1949
In his famous book published a little more than half a century ago, The Organization of Behavior, Hebb proposed a hypothetical mechanism by which 'cell assemblies', groups of cells that could act as a form of 'short-term memory' and support self-sustaining reverberatory activity outlasting the input, could be constructed [37]. These suggestions were later extended into other areas and now serve
as the basis for a large body of thinking concerning activity-dependent processes
in development, learning, and memory [7, 13, 28, 35, 52, 74, 100]. What Hebb
proposed was an elegant way for correlated, i.e., interesting, features of an input
stimulus to become permanently imprinted in the architecture of neural circuits to
alter subsequent behavior, which is the hallmark of learning. It is similar in form to
classical conditioning in the psychology literature. Many models have subsequently
been constructed based on extensions of this simple rule, now commonly called the
Hebbian rule. These models have given reasonable accounts of many aspects of
development and learning [44, 48, 58, 59, 62, 70, 73, 75, 77, 90, 95, 97]. In this
chapter, we will not attempt to review the literature on Hebbian learning exhaustively. Instead, we will try to review some relevant facts from the Hebbian learning literature and discuss their connections to spike-timing-dependent plasticity (STDP) in light of recent experimental data. To discuss Hebbian learning and STDP
in a coherent mathematical framework, we need to introduce some formalism. Let us consider one neuron receiving many inputs labelled 1 to $N$ and denote the instantaneous firing rate of the $i$th input as $r^{\rm in}_i(t)$ and that of the output as $r^{\rm out}(t)$. The integration performed by the neuron could be written as
$$
\tau_m \frac{d r^{\rm out}(t)}{dt} = -\,r^{\rm out}(t) + G\left[\sum_i w_i\, r^{\rm in}_i(t) - \theta\right], \qquad (11.1)
$$
where $r^{\rm out}(t)$ is the instantaneous firing rate of the output neuron at time $t$, $G$ is a constant gain factor for the neuron, $w_i$ is the synaptic strength of the $i$th input, $r^{\rm in}_i(t)$ is the instantaneous firing rate of the $i$th input at time $t$, and $\tau_m$ is the membrane time constant. Solving the differential equation, we have
$$
r^{\rm out}(t) = G \int_0^t dt'\, K(t - t') \left[\sum_i w_i\, r^{\rm in}_i(t') - \theta\right], \qquad (11.2)
$$
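The step from Eq. (11.1) to Eq. (11.2) is the standard integrating-factor solution of a first-order linear equation. As a sketch, assuming the initial condition $r^{\rm out}(0) = 0$ (an assumption not stated in the text), multiplying Eq. (11.1) by $e^{t/\tau_m}/\tau_m$ and integrating from $0$ to $t$ gives
$$
r^{\rm out}(t) = \int_0^t dt'\, \frac{1}{\tau_m}\, e^{-(t-t')/\tau_m}\, G\left[\sum_i w_i\, r^{\rm in}_i(t') - \theta\right],
$$
which is Eq. (11.2)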
with
$$
K(t) = \frac{1}{\tau_m}\, e^{-t/\tau_m}. \qquad (11.3)
$$
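To make the model concrete, the following is a minimal numerical sketch in Python of Eqs. (11.1)-(11.3): it integrates Eq. (11.1) with a forward-Euler step and also evaluates the kernel form of Eq. (11.2) directly. The parameter values, random input rates, time step, and initial condition $r^{\rm out}(0)=0$ are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Minimal sketch of the leaky rate model of Eqs. (11.1)-(11.3).
# Parameter values, random inputs, time step, and the initial condition
# r_out(0) = 0 are illustrative assumptions, not taken from the text.

rng = np.random.default_rng(0)

N     = 10        # number of inputs
tau_m = 20e-3     # membrane time constant tau_m (s), assumed
G     = 1.0       # constant gain factor G
theta = 2.0       # threshold theta
dt    = 1e-4      # integration step (s)
T     = 0.5       # simulated duration (s)
steps = int(T / dt)

w    = rng.uniform(0.0, 1.0, size=N)            # synaptic strengths w_i
r_in = rng.uniform(0.0, 10.0, size=(steps, N))  # input rates r_i^in(t)

# Forward-Euler integration of Eq. (11.1):
#   tau_m * dr_out/dt = -r_out + G * (sum_i w_i r_i^in(t) - theta)
r_out = np.zeros(steps)
for k in range(1, steps):
    drive = G * (w @ r_in[k - 1] - theta)
    r_out[k] = r_out[k - 1] + (dt / tau_m) * (-r_out[k - 1] + drive)

# Eq. (11.2): the same solution written as a convolution of the weighted,
# thresholded drive with the kernel K(t) of Eq. (11.3); the two numerical
# routes agree up to discretization error.
t_axis = np.arange(steps) * dt
K      = (1.0 / tau_m) * np.exp(-t_axis / tau_m)   # Eq. (11.3)
r_conv = np.convolve(G * (r_in @ w - theta), K)[:steps] * dt

print("Euler       r_out(T) =", float(r_out[-1]))
print("Convolution r_out(T) =", float(r_conv[-1]))
```

Both routes compute the same exponentially filtered drive, which is the sense in which the output rate depends on recent inputs with exponentially more weight given to the most recent ones.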
$K(t)$ is a kernel function used to simulate the membrane integration performed by the neuron, and $\theta$ is the threshold. Therefore, the rate of a given neuron is linearly
dependent on the total amount of input into the neuron over the recent past, with
exponentially more emphasis on the most recent inputs. For the sake of simplicity,
we do not include the rectifying nonlinearity introduced by the threshold and only
consider the regime above threshold. If we assume that plasticity is on a slower time