Biomedical Engineering Reference
In-Depth Information
receptors by allowing the synapses to be modified only when strong postsynaptic firing
is present, then the synaptic strengths are still close to a Gaussian function of the
distance between the connected cells in head direction space (see Figure 16.6, left).
They showed that after training, the continuous attractor network can support stable
activity packets in the absence of visual inputs (see Figure 16.6, right) provided
that global inhibition is used to prevent all the neurons from becoming activated. (The
exact stability conditions for such networks have been analyzed by [3].) Thus [101]
demonstrated how the synaptic weights in a continuous attractor can be trained
with a biologically plausible local learning rule.
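The maintenance of a stable activity packet by Gaussian recurrent weights plus global inhibition can be illustrated with a minimal rate-model sketch. This is a hypothetical toy simulation, not the network of [101]: the cell count, the weight width `sigma`, the inhibition strength `w_inh`, and the clipped threshold-linear dynamics are all assumed for illustration.

```python
import numpy as np

# Toy ring attractor for head direction (hypothetical parameters).
# N cells with preferred directions evenly spaced around the ring.
N = 100
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# Recurrent weights: a Gaussian function of the (wrapped) angular distance
# between the connected cells, minus a uniform global-inhibition term.
sigma = 0.3    # width of the Gaussian weight profile (assumed)
w_inh = 0.3    # global inhibition strength (assumed)
dist = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
W = np.exp(-dist**2 / (2.0 * sigma**2)) - w_inh

def step(r, I_ext, dt=0.1, tau=1.0):
    """One Euler step of the rate dynamics; firing rates are clipped to [0, 1]."""
    drive = W @ r + I_ext
    return r + (dt / tau) * (-r + np.clip(drive, 0.0, 1.0))

# Cue a packet with a transient 'visual' input centred on 90 degrees ...
cue = np.pi / 2.0
I_vis = np.exp(-np.angle(np.exp(1j * (theta - cue)))**2 / (2.0 * sigma**2))
r = np.zeros(N)
for _ in range(100):
    r = step(r, I_vis)

# ... then remove the input: the recurrent excitation sustains the packet,
# while global inhibition stops activity spreading to the whole ring.
for _ in range(500):
    r = step(r, np.zeros(N))

packet_centre = np.angle(np.sum(r * np.exp(1j * theta)))  # circular mean
print(round(packet_centre, 2))  # remains near the cued direction, pi/2
```

With the inhibition term removed the excitation spreads until every cell is active, which is why the stability of such networks depends on the balance analyzed in [3].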
So far, we have considered how spatial representations could be stored in contin-
uous attractor networks, and how the activity can be maintained at any location in
the state space in a form of short-term memory when the external (e.g., visual) input
is removed. However, many networks with spatial representations in the brain can
be updated by internal self-motion (i.e., idiothetic) cues even when there is no
external (e.g., visual) input. Examples are head direction cells in the presubiculum of
rats and macaques, place cells in the rat hippocampus, and spatial view cells in the
primate hippocampus (see Section 16.2). A major question is how such idiothetic
inputs could drive the activity packet in a continuous attractor network and, in
particular, how such a system could be set up biologically by self-organizing
learning.
One approach to simulating the movement of an activity packet produced by id-
iothetic cues (which is a form of path integration whereby the current location is
calculated from recent movements) is to employ a look-up table that stores (taking
head direction cells as an example), for every possible head direction and head rota-
tional velocity input generated by the vestibular system, the corresponding new head
direction [95]. Another approach involves modulating the strengths of the recurrent
synaptic weights in the continuous attractor on one but not the other side of the
currently represented position. A stable packet position requires symmetric
connections in different directions from each node; when this symmetry is lost, the
packet moves in the direction of the temporarily increased weights. However, no
biological implementation was proposed of how the appropriate dynamic synaptic
weight changes might be achieved [119]. Another mechanism (for head
direction cells) [97] relies on a set of cells, termed (head) rotation cells, which are
co-activated by head direction cells and vestibular cells and drive the activity of the
attractor network by anatomically distinct connections for clockwise and counter-
clockwise rotation cells, in what is effectively a look-up table. However, no proposal
was made about how this could be achieved by a biologically plausible learning pro-
cess, and this has been the case until recently for most approaches to path integration
in continuous attractor networks, which rely heavily on rather artificial pre-set synap-
tic connectivities.
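The weight-modulation idea of [119] can be illustrated with the same kind of toy ring network used for the stability argument above: adding a small angular offset to the Gaussian recurrent kernel increases the weights on one side of the packet and decreases them on the other, so the packet drifts. All parameters here are assumed for illustration, and the offset is imposed by hand, which is precisely the step for which no biological implementation was proposed.

```python
import numpy as np

# Toy demonstration of packet movement by asymmetric recurrent weights,
# in the spirit of [119]; all parameters are assumed for illustration.
N = 100
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
sigma, w_inh = 0.3, 0.3

def ring_weights(offset):
    """Gaussian kernel of wrapped angular distance, shifted by `offset`
    radians, minus uniform global inhibition. offset=0 is symmetric."""
    d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :] - offset)))
    return np.exp(-d**2 / (2.0 * sigma**2)) - w_inh

def run(W, r, n_steps, dt=0.1, tau=1.0):
    for _ in range(n_steps):
        r = r + (dt / tau) * (-r + np.clip(W @ r, 0.0, 1.0))
    return r

def centre(r):
    return np.angle(np.sum(r * np.exp(1j * theta)))  # circular mean

# Form a stable packet at 90 degrees with symmetric weights.
r = np.exp(-np.angle(np.exp(1j * (theta - np.pi / 2.0)))**2
           / (2.0 * sigma**2))
r = run(ring_weights(0.0), r, 300)
c0 = centre(r)

# Offset the kernel: each cell is now driven most strongly by cells slightly
# behind it, so the packet's stable position is lost and it drifts forward.
r = run(ring_weights(0.03), r, 200)
c1 = centre(r)

drift = np.angle(np.exp(1j * (c1 - c0)))  # wrapped angular displacement
print(drift > 0.0)  # the packet has moved in the offset direction
```

In a look-up-table scheme such as [95] or [97], the job of this hand-set offset would instead be done by rotation-cell inputs indexed by the vestibular angular velocity signal.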
[101] introduced a more biologically plausible proposal for how the synaptic
connections from idiothetic inputs to a continuous attractor network could be
learned by a self-organizing process.
The essence of the hypothesis is described with Figure 16.7. The continuous
attractor synaptic weights w^{RC} are set up under the influence of the external
visual inputs I^{V}, as described in Section 16.2.4.2. At