where $\eta$ is a parameter set in the interval $[0,1]$ which determines the contribution of the current firing and the previous trace, and the head rotation cell input has firing $r^{I}_k$; the learning rule can be written
$$
\delta w^{I}_{ijk} = k\, r^{HD}_i\, \overline{r}^{HD}_j\, r^{I}_k ,
\qquad (16.8)
$$
where $k$ is the learning rate associated with this type of synaptic connection. The head rotation cell firing ($r^{I}_k$) could be as simple as one set of cells that fire for clockwise head rotation (for which $k$ might be 1), and a second set of cells that fire for anticlockwise head rotation (for which $k$ might be 2).
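To make the form of the rule concrete, the following minimal NumPy sketch implements the trace update and one application of equation 16.8. The function names, array shapes, and the choice of two rotation cells are illustrative assumptions, not details from the text.

```python
import numpy as np

def update_trace(r_bar_hd, r_hd, eta):
    """Exponential trace of head direction cell firing: eta in [0, 1]
    weights the previous trace against the current firing."""
    return (1.0 - eta) * r_hd + eta * r_bar_hd

def sigma_pi_learning_step(w_id, r_hd, r_bar_hd, r_rot, k_rate):
    """One application of equation 16.8:
    delta w^I_ijk = k * r^HD_i * rbar^HD_j * r^I_k."""
    # Triple outer product over postsynaptic firing, presynaptic trace,
    # and rotation cell firing gives the (N, N, K) weight increment.
    return w_id + k_rate * np.einsum('i,j,k->ijk', r_hd, r_bar_hd, r_rot)
```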
After learning, the firing of the head direction cells would be updated in the dark (when $I_i = 0$) by idiothetic head rotation cell firing $r^{I}_k$ as follows:
$$
\tau \frac{dh^{HD}_i(t)}{dt} = -h^{HD}_i(t) + \frac{\phi_0}{C^{HD}} \sum_j \left( w_{ij} - w^{inh} \right) r^{HD}_j(t) + I_i + \frac{\phi_1}{C^{HD \times ID}} \sum_{j,k} w^{I}_{ijk}\, r^{HD}_j\, r^{I}_k .
\qquad (16.9)
$$
Equation 16.9 is similar to equation 16.3, except for the last term, which introduces the effects of the idiothetic synaptic weights $w^{I}_{ijk}$. These weights effectively specify that the current firing of head direction cell $i$, $r^{HD}_i$, must be updated by the previously learned combination of the particular head rotation now occurring, indicated by $r^{I}_k$, and the current head direction, indicated by the firings of the other head direction cells $r^{HD}_j$ indexed through $j$. This makes it clear that the idiothetic synapses operate using combinations of inputs, in this case of two inputs. Neurons that sum the effects of such local products are termed Sigma-Pi neurons. Although such synapses are more complicated than the two-term synapses used throughout the rest of this book, such three-term synapses appear to be useful for solving the computational problem of updating representations based on idiothetic inputs in the way described. Synapses that operate according to Sigma-Pi rules might be implemented in the brain by a number of mechanisms described by [38] (Section 21.1.1), [36], and [101], including having two inputs close together on a thin dendrite, so that local synaptic interactions would be emphasized.

The term $\phi_1 / C^{HD \times ID}$ in equation 16.9 is a scaling factor that reflects the number $C^{HD \times ID}$ of inputs to these synapses; it enables the overall magnitude of the idiothetic input to each head direction cell to remain approximately the same as the number of idiothetic connections received by each head direction cell is varied.
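A sketch of one Euler integration step of equation 16.9 in the dark might look as follows. The sigmoid activation function relating $h^{HD}$ to $r^{HD}$, and all names and parameter values, are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np

def dark_update_step(h, w_rec, w_inh, w_id, r_rot,
                     phi0, phi1, c_hd, c_hd_id, tau, dt, beta=1.0):
    """One Euler step of equation 16.9 with visual input I_i = 0.

    h     : (N,)      activations h^HD_i
    w_rec : (N, N)    recurrent weights w_ij
    w_inh : float     global inhibitory weight w^inh
    w_id  : (N, N, K) idiothetic Sigma-Pi weights w^I_ijk
    r_rot : (K,)      head rotation cell firing r^I_k
    """
    # Assumed sigmoid activation function relating h^HD to r^HD.
    r_hd = 1.0 / (1.0 + np.exp(-beta * h))
    # Recurrent attractor term: (phi_0 / C^HD) sum_j (w_ij - w^inh) r^HD_j.
    recurrent = (phi0 / c_hd) * ((w_rec - w_inh) @ r_hd)
    # Sigma-Pi term: each cell i sums local products of presynaptic head
    # direction firing r^HD_j and rotation cell firing r^I_k.
    idiothetic = (phi1 / c_hd_id) * np.einsum('ijk,j,k->i', w_id, r_hd, r_rot)
    dh = (-h + recurrent + idiothetic) / tau  # I_i = 0 in the dark
    return h + dt * dh
```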
Simulations demonstrating the operation of this self-organizing learning to produce movement of the location being represented in a continuous attractor network were described by [101], and one example of the operation is shown in Figure 16.8. They also showed that, after training with just one value of the head rotation cell firing, the network showed the desirable property of moving the head direction being represented in the continuous attractor by an amount that was proportional to the value of the head rotation cell firing. [101] also describe a related model of the idiothetic cell update of the location represented in a continuous attractor.
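As a rough indication of how such a simulation might be organized, the following hypothetical call sequence ties the sketches above together. The placeholder recurrent weights mean it only illustrates the calling pattern: a real simulation, as in [101], would first train $w_{ij}$ so that the network supports a stable activity packet.

```python
import numpy as np

# Placeholder setup; shapes and parameter values are illustrative only.
N, K = 100, 2
w_id = np.zeros((N, N, K))
w_rec, w_inh = np.zeros((N, N)), 0.05
r_hd = np.exp(-0.5 * ((np.arange(N) - N // 2) / 5.0) ** 2)  # Gaussian packet
r_bar_hd = r_hd.copy()
r_cw = np.array([1.0, 0.0])  # clockwise rotation cell active

# Training: pair the visually driven, rotating packet with clockwise firing.
for _ in range(100):
    r_bar_hd = update_trace(r_bar_hd, r_hd, eta=0.9)
    r_hd = np.roll(r_hd, 1)  # visual input moves the packet one position
    w_id = sigma_pi_learning_step(w_id, r_hd, r_bar_hd, r_cw, k_rate=1e-3)

# Recall in the dark: the learned Sigma-Pi weights now push the packet.
h = r_hd.copy()  # crude initialization of the activations h^HD
for _ in range(100):
    h = dark_update_step(h, w_rec, w_inh, w_id, r_cw, phi0=1.0, phi1=1.0,
                         c_hd=N, c_hd_id=N * K, tau=1.0, dt=0.1)
```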