2.2 Neural Organization of the Robot
As previously mentioned, the robot is controlled by the activity of a four-neuron
neural network. This fully interconnected network is composed of four biologically
plausible excitatory rate-code neurons. Only the connections between neurons have
modifiable synapses (weights). Recurrent synapses (connections from a neuron back
onto itself) are not modifiable; we arbitrarily assigned a value of 0.7 to the
weights of these synapses. The effect of the recurrent connections is that a
neuron's activation decays over time, as happens in cortical pyramidal neurons.
A sketch of these dynamics is given below.
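The following minimal sketch shows one plausible discrete-time realization of such a network. The update rule, the clipping nonlinearity, and the random initial weights are our assumptions; only the network size and the fixed recurrent weight of 0.7 come from the text.

```python
import numpy as np

N = 4
rng = np.random.default_rng(0)

# Modifiable weights between distinct neurons (assumed initialization);
# recurrent (self) weights are fixed at 0.7 and are never learned.
W = rng.uniform(0.0, 0.5, size=(N, N))
np.fill_diagonal(W, 0.7)

def step(a, ext):
    """One update of the rate-code activations (hypothetical dynamics).

    With no external input, the 0.7 recurrent term makes each neuron's
    activation decay geometrically, as in cortical pyramidal neurons.
    """
    return np.clip(W @ a + ext, 0.0, 1.0)

a = np.zeros(N)
a = step(a, np.array([1.0, 0.0, 0.0, 0.0]))  # drive neuron 0
for _ in range(3):
    a = step(a, np.zeros(N))                 # activation decays over time
```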
It is remarkable that Temple Fay envisioned that cybernetic principles were
important for understanding the central nervous system. In fact, these principles
are at the heart of neuronal functioning and are decisive in enabling learning
in our robotic structure. They act in the modulation of synaptic weights
(synaptic plasticity) and in the adjustment of the neurons' firing interval
(intrinsic plasticity).
Synaptic Plasticity. Synaptic plasticity refers to the modulation of the efficacy
of information transmission between neurons, and is related to the regulation of
the number of ionic channels in synapses.
The first model of synaptic plasticity was postulated by Hebb and is known
as the Hebb rule, which may be stated as follows: when two neurons fire together
they wire together or, in other words, the synaptic strength between neurons
with correlated firing tends to increase [6]. Mathematically, the change in the
synaptic strength (synaptic weight) between neurons $i$ and $j$ is calculated as the
product of the output of neuron $i$, $O_i$, and the input $I_j$ (which corresponds to
the output of neuron $j$), multiplied by a learning constant $\varepsilon$:

$$\Delta\omega_{ij} = \varepsilon\, O_i I_j \qquad (1)$$
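As an illustration, Eq. (1) translates directly into code; the function name and the value of the learning constant below are ours, not the authors':

```python
def hebb_update(w_ij, o_i, i_j, epsilon=0.1):
    """Hebb rule (Eq. 1): the weight grows by epsilon * O_i * I_j.

    Since all quantities are non-negative, the weight can only increase.
    """
    return w_ij + epsilon * o_i * i_j
```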
Despite its importance, this formulation has a limitation: because all variables
in the equation are positive, it leads synaptic weights to grow without bound.
To allow for synaptic depression, alternative equations that take more recent
biological studies into account have been formulated [5]. The equation we adopted
for our simulation of synaptic plasticity, owing to its biological plausibility,
is Grossberg's presynaptic learning rule:

$$\Delta\omega_{ij} = \varepsilon\, I_j (\alpha_i - \omega_{ij}) \qquad (2)$$
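Translated the same way (again a sketch, with our naming and our choice of $\varepsilon$):

```python
def presynaptic_update(w_ij, alpha_i, i_j, epsilon=0.1):
    """Grossberg's presynaptic rule (Eq. 2): the subtractive term -w_ij
    acts as a negative-feedback loop that drives w_ij toward alpha_i,
    bounding growth and allowing depression whenever alpha_i < w_ij.
    """
    return w_ij + epsilon * i_j * (alpha_i - w_ij)
```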
where $\alpha_i$ is the activation given by the sum of the synaptic contributions. In this
equation, the subtractive term is equivalent to the negative feedback of a cybernetic
loop. This negative feedback allows the artificial neuron to exhibit metaplasticity,
a very important characteristic of biological neurons [2][8]. Metaplasticity slows
down the process of weight increment or decrement, making it more difficult for the
neuron to become either saturated or inactive. In the case of the equation, it is
easy to see that the weight increment is smaller for large initial weights than for
small ones.
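For instance, assuming $\varepsilon = 0.1$ and $I_j = \alpha_i = 1$ (values chosen purely for illustration), a weight of 0.2 is incremented by $0.1 \times (1 - 0.2) = 0.08$, whereas a weight of 0.8 is incremented by only $0.1 \times (1 - 0.8) = 0.02$.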