16.2.4.1 The generic model of a continuous attractor network
The generic model of a continuous attractor is as follows. (The model is described
in the context of head direction cells, which represent the head direction of rats [50,
109] and macaques [72], and can be reset by visual inputs after gradual drift in
darkness.) The model is a recurrent attractor network with global inhibition. It is
different from a Hopfield attractor network [33] primarily in that there are no discrete
attractors formed by associative learning of discrete patterns. Instead there is a set
of neurons that are connected to each other by synaptic weights $w_{ij}$ that are a simple
function, for example Gaussian, of the distance between the states of the agent in
the physical world (e.g., head directions) represented by the neurons. Neurons that
represent similar states (locations in the state space) of the agent in the physical world
have strong synaptic connections, which can be set up by an associative learning rule,
as described in Section 16.2.4.2. The network updates its firing rates by the following
'leaky-integrator' dynamical equations. The continuously changing activation $h_i^{HD}$ of each head direction cell $i$ is governed by the equation

\[
\tau \frac{dh_i^{HD}(t)}{dt} = -h_i^{HD}(t) + \frac{\phi_0}{C^{HD}} \sum_j \left( w_{ij} - w^{inh} \right) r_j^{HD}(t) + I_i , \qquad (16.3)
\]
where $r_j^{HD}$ is the firing rate of head direction cell $j$, $w_{ij}$ is the excitatory (positive) synaptic weight from head direction cell $j$ to cell $i$, $w^{inh}$ is a global constant describing the effect of inhibitory interneurons, and $\tau$ is the time constant of the system. The term $-h_i^{HD}(t)$ indicates the amount by which the activation decays (in the leaky integrator neuron) at time $t$. (The network is updated in a typical simulation at much smaller timesteps than the time constant of the system, $\tau$.)
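As a concrete illustration of this update scheme, the following minimal Python/NumPy sketch performs one Euler integration step of Equation (16.3). The function name, network size and parameter values are illustrative assumptions for this sketch, not values given in the text.

```python
import numpy as np

def leaky_integrator_step(h, r, w, w_inh, I, tau=1.0, phi_0=1.0, dt=0.01):
    """One Euler step of Equation (16.3) for the activations h_i^HD of all cells.

    h     : vector of activations h_i^HD
    r     : vector of current firing rates r_j^HD
    w     : matrix of recurrent excitatory weights w_ij
    w_inh : global inhibition constant
    I     : vector of visual inputs I_i (zeros in memory mode)
    The timestep dt is chosen much smaller than the time constant tau.
    """
    C_HD = w.shape[1]                       # connections received per cell (fully connected here)
    recurrent = (phi_0 / C_HD) * ((w - w_inh) @ r)
    dh_dt = (-h + recurrent + I) / tau      # Equation (16.3) rearranged for dh/dt
    return h + dt * dh_dt
```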
The next term in Equation (16.3) is the input from other neurons in the network, $r_j^{HD}$, weighted by the recurrent collateral synaptic connections $w_{ij}$ (scaled by a constant $\phi_0$ and by $C^{HD}$, which is the number of synaptic connections received by each head direction cell from other head direction cells in the continuous attractor). The term $I_i$ represents a visual input to head direction cell $i$. Each term $I_i$ is set to have a Gaussian response profile in most continuous attractor networks, and this sets the firing of the cells in the continuous attractor to have Gaussian response profiles as a function of where the agent is located in the state space (see e.g., Figure 16.4), but the Gaussian assumption is not crucial. (It is known that the firing rates of head direction cells in both rats [50, 109] and macaques [72] are approximately Gaussian.) When the agent is operating without visual input, in memory mode, the term $I_i$ is set to zero. The firing rate $r_i^{HD}$ of cell $i$ is determined from the activation $h_i^{HD}$ and the sigmoid function
\[
r_i^{HD}(t) = \frac{1}{1 + e^{-2\beta \left( h_i^{HD}(t) - \alpha \right)}} , \qquad (16.4)
\]
where $\alpha$ and $\beta$ are the sigmoid threshold and slope, respectively.
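Putting the pieces together, the sketch below runs the model on a ring of head direction cells: Gaussian recurrent weights as a function of the angular distance between the head directions represented by the cells, the sigmoid of Equation (16.4), and a short run first with a Gaussian visual input and then in memory mode with $I_i = 0$. The cell number, $\sigma$ values, $\phi_0$, $w^{inh}$, $\alpha$, $\beta$ and the other constants are illustrative assumptions; the sketch shows the shape of the computation in Equations (16.3) and (16.4), not parameter values that guarantee a stable activity packet.

```python
import numpy as np

N = 100                                                  # number of head direction cells (assumed)
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)  # head direction represented by each cell

def angular_distance(a, b):
    """Smallest angle between two head directions on the circle."""
    d = np.abs(a - b)
    return np.minimum(d, 2 * np.pi - d)

# Gaussian recurrent weights: cells representing similar head directions
# (nearby locations in the state space) are strongly connected.
sigma_w = 0.5
dist = angular_distance(theta[:, None], theta[None, :])
w = np.exp(-dist ** 2 / (2 * sigma_w ** 2))

def sigmoid_rate(h, alpha=0.0, beta=1.0):
    """Equation (16.4): firing rate from activation, threshold alpha, slope beta."""
    return 1.0 / (1.0 + np.exp(-2 * beta * (h - alpha)))

def simulate(h, I, steps, w_inh=0.3, tau=1.0, phi_0=2.0, dt=0.01):
    """Iterate Equations (16.3) and (16.4) with a fixed external input I."""
    C_HD = N                                             # fully connected ring
    for _ in range(steps):
        r = sigmoid_rate(h)
        dh_dt = (-h + (phi_0 / C_HD) * ((w - w_inh) @ r) + I) / tau
        h = h + dt * dh_dt
    return h

# Gaussian visual input centred on the current head direction.
head_direction = np.pi
I_visual = 2.0 * np.exp(-angular_distance(theta, head_direction) ** 2 / (2 * 0.3 ** 2))

h = np.zeros(N)
h = simulate(h, I_visual, steps=2000)        # activity packet driven by the visual input
h = simulate(h, np.zeros(N), steps=2000)     # memory mode: I_i = 0
print("packet centre in memory mode:", theta[np.argmax(sigmoid_rate(h))])
```

Whether the packet of activity actually persists in memory mode depends on how the excitation, the global inhibition and the sigmoid parameters are balanced; with poorly chosen constants the activity may decay or saturate instead.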
Note that here I use $r$ rather than $y$ to refer to the firing rates of the neurons in the network, remembering that, because this is a recurrently connected network (see Figure 16.5), the output from a neuron $y_i$ might