Figure 16.5
The architecture of an attractor neural network.
16.2.4.2 Learning the synaptic strengths between the neurons that implement a continuous attractor network
So far we have said that the neurons in the continuous attractor network are connected to each other by synaptic weights w_ij that are a simple function, for example Gaussian, of the distance between the states of the agent in the physical world (e.g., head directions or spatial views) represented by the neurons. In many simulations, the weights are simply set by formula to these appropriate Gaussian values. However, [101] showed how the appropriate weights could be set up by learning.
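As a minimal sketch of the formula-based set-up, the following assumes a ring of head direction cells with evenly spaced preferred directions; the names, cell count, and tuning width are illustrative, not taken from the source:

```python
import numpy as np

# Hypothetical sketch: N head direction cells with preferred directions
# spaced evenly around the circle. w[i, j] is a Gaussian function of the
# circular distance between the cells' preferred directions.
N = 100
sigma = 20.0  # tuning width in degrees (assumed value)
prefs = np.arange(N) * 360.0 / N  # preferred head directions in degrees

# circular distance between every pair of preferred directions
diff = np.abs(prefs[:, None] - prefs[None, :])
dist = np.minimum(diff, 360.0 - diff)

# recurrent weights set by formula to Gaussian values
w = np.exp(-dist**2 / (2.0 * sigma**2))
```

The weight matrix is symmetric and largest on the diagonal, falling off smoothly as the distance between the represented states grows.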
They started from the observation that, because the neurons have broad tuning that may be Gaussian in shape, nearby neurons in the state space will have overlapping spatial fields, and will thus be co-active to a degree that depends on the distance between them. They postulated that the synaptic weights could therefore be set up by associative learning based on the co-activity of the neurons produced by external stimuli as the animal moved in the state space. For example, during learning head direction cells are forced to fire by visual cues in the environment that produce Gaussian firing as a function of the deviation from each cell's optimal head direction. The learning rule is simply that the weight w_ij from head direction cell j with firing rate r_j^HD to head direction cell i with firing rate r_i^HD is updated according to an associative (Hebbian) rule,

δw_ij = k · r_i^HD · r_j^HD,

where k is a learning-rate constant. The presynaptic firing rate r_j^HD would be the input x_j to another neuron.
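The associative set-up described above can be sketched as follows; this is a toy simulation under assumed parameters (cell count, tuning width, learning rate), not the authors' implementation. During training, visual cues clamp each cell's firing to a Gaussian function of the current head direction, and co-active cells strengthen their mutual weights:

```python
import numpy as np

# Hypothetical sketch: weights learned by the associative rule
# delta_w_ij = k * r_i * r_j while the animal sweeps through all
# head directions. All names and values are illustrative.
N = 100
sigma = 20.0   # tuning width in degrees (assumed)
k = 0.01       # learning rate (assumed)
prefs = np.arange(N) * 360.0 / N  # preferred head directions

def rates(theta):
    """Gaussian firing as a function of circular distance between the
    current head direction and each cell's optimal head direction."""
    d = np.abs(prefs - theta)
    d = np.minimum(d, 360.0 - d)
    return np.exp(-d**2 / (2.0 * sigma**2))

w = np.zeros((N, N))
for theta in np.arange(0.0, 360.0, 1.0):  # training sweep over directions
    r = rates(theta)
    w += k * np.outer(r, r)  # associative (Hebbian) update
```

Because nearby cells are co-active during the sweep, the learned w_ij falls off with the distance between the cells' preferred directions, approximating the Gaussian profile that is otherwise set by formula.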