the stimulus is no longer present. Even more than this, the dorsolateral prefrontal cortex networks to which the parietal networks project can maintain spatial representations active for many seconds or minutes during short-term memory tasks, after the stimulus has been removed (see below).
A class of network that can maintain the firing of its neurons to represent any location along a continuous physical dimension, such as spatial position, head direction, etc., is a 'Continuous Attractor' neural network (CANN). It uses excitatory recurrent collateral connections between the neurons to reflect the distance between the neurons in the state space of the animal (e.g., head direction space). These networks can maintain the bubble of neural activity constant for long periods wherever it is started, to represent the current state (head direction, position, etc.) of the animal, and are likely to be involved in many aspects of spatial processing and memory, including spatial vision. Global inhibition is used to keep the number of neurons in a bubble or packet of actively firing neurons relatively constant, and to help to ensure that there is only one activity packet. Continuous attractor networks can be thought of as very similar to autoassociation or discrete attractor networks (see [82]), and have the same architecture, as illustrated in Figure 16.5. The main difference is that the patterns stored in a CANN are continuous patterns, with each neuron having broadly tuned firing that decreases, for example as a Gaussian function, as the distance from the cell's optimal firing location increases, and with different neurons having tuning that overlaps throughout the space. Such tuning is illustrated in Figure 16.4.
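As a concrete form of such tuning (an assumed choice for illustration, not one specified in this section), the firing rate of neuron i with preferred location x_i could be written r_i(x) = r_max exp(-(x - x_i)^2 / (2σ^2)), where σ sets the tuning width and hence the degree to which the tuning of neighbouring cells overlaps.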
For comparison, autoassociation networks normally have discrete (separate) patterns (each pattern implemented by the firing of a particular subset of the neurons), with no continuous distribution of the patterns throughout the space (see Figure 16.4). A consequent difference is that the CANN can maintain its firing at any location in the trained continuous space, whereas a discrete attractor or autoassociation network moves its population of active neurons towards one of the previously learned attractor states, and thus implements the recall of a particular previously learned pattern from an incomplete or noisy (distorted) version of one of the previously learned patterns. The energy landscape of a discrete attractor network (see [82]) has separate energy minima, each of which corresponds to a learned pattern, whereas the energy landscape of a continuous attractor network is flat, so that the activity packet remains stable with continuous firing wherever it is started in the state space. (The state space refers to the set of possible spatial states of the animal in its environment, e.g., the set of possible head directions.) I next describe the operation and properties of continuous attractor networks, which have been studied by, for example, [3], [111], and [119], and then, following [101], address four key issues about the biological application of continuous attractor network models.
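To make these properties concrete, the following is a minimal sketch (in Python; all parameter values are illustrative assumptions rather than values taken from this chapter) of a one-dimensional ring of head-direction cells. The recurrent weights are a Gaussian of the wrapped angular distance between the cells' preferred directions, and a simple subtractive global inhibition with divisive normalization stands in for the inhibitory mechanisms described above.

    import numpy as np

    # A 1D ring continuous attractor for head direction (a sketch; the
    # parameter values below are illustrative assumptions).
    N = 100
    theta = np.linspace(0, 2 * np.pi, N, endpoint=False)  # preferred directions

    # Recurrent excitatory weights fall off as a Gaussian of the wrapped
    # angular distance between cells, so nearby cells support each other.
    sigma_w = 0.3
    d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
    W = np.exp(-d ** 2 / (2 * sigma_w ** 2))

    c = 0.5  # strength of the global inhibition

    def step(r, cue=None):
        """One update: recurrent excitation, global inhibition, rectification,
        and divisive normalization to keep the packet size roughly constant."""
        h = W @ r - c * r.sum()
        if cue is not None:
            h = h + cue
        r = np.maximum(h, 0.0)
        s = r.sum()
        return r / s if s > 0 else r

    # Start a packet with a brief cue at pi/2, then remove the stimulus.
    cue = np.exp(-np.angle(np.exp(1j * (theta - np.pi / 2))) ** 2 / (2 * 0.1 ** 2))
    r = step(np.zeros(N), cue)
    for _ in range(200):  # the stimulus is no longer present
        r = step(r)

    print("packet centred near (rad):", theta[np.argmax(r)])  # stays near pi/2

Because the weights depend only on the distance between preferred directions, the network is translation invariant: the packet is equally stable wherever the cue places it, which is the computational counterpart of the flat energy landscape described above.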
One key issue in such continuous attractor neural networks is how the synaptic
strengths between the neurons in the continuous attractor network could be learned
in biological systems (Section 16.2.4.2).
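As a hedged illustration of one standard answer (the treatment in Section 16.2.4.2 is the text's own), the sketch below applies a simple Hebb-like rule while the agent visits every head direction in turn, with each cell firing according to an assumed Gaussian tuning curve. Averaging the co-activity produces synaptic strengths that decrease with the distance between the cells' preferred directions, which is the weight profile the continuous attractor requires.

    import numpy as np

    # Sketch of associative (Hebb-like) learning of the CANN weights.
    # Assumption: during training the agent visits each head direction x,
    # and neuron i fires with a Gaussian tuning curve centred on theta[i].
    N, sigma = 100, 0.3
    theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

    def rates(x):
        d = np.angle(np.exp(1j * (theta - x)))  # wrapped angular distance
        return np.exp(-d ** 2 / (2 * sigma ** 2))

    W = np.zeros((N, N))
    for x in theta:              # training: visit every location once
        r = rates(x)
        W += np.outer(r, r)      # Hebbian co-activity of pre- and postsynaptic cells
    W /= N

    # The learned weight depends only on the distance between the two cells'
    # preferred directions (approximately a Gaussian of width sigma * sqrt(2)).
    print(W[0, 1] > W[0, 10] > W[0, 40])  # True: weights decay with distance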
A second key issue in such continuous attractor neural networks is how the bubble of neuronal firing representing one location in the continuous state space should
be updated based on non-visual cues to represent a new location in state space. This
is essentially the problem of path integration: how a system that represents a mem-