vector of the winning model is updated. Specifically, if the j th model wins competition, the update
rule for the weight vector w j ( n ) of that model is given by
\[
\mathbf{w}_j(n+1) = \mathbf{w}_j(n) + \eta\,\frac{e_j(n)\,\mathbf{x}(n)}{\gamma + \lVert \mathbf{x}(n) \rVert^2},
\tag{3.58}
\]
where e_j(n) is the instantaneous error and x(n) is the current input vector, η is a learning rate, and γ is a small positive constant used for normalization. If soft competition is used, a Gaussian weighting function centered at the winning model is applied to all competing models. Every model is then updated in proportion to the weight assigned to it by this Gaussian weighting function, such that
\[
\mathbf{w}_i(n+1) = \mathbf{w}_i(n) + \eta\,\frac{\Lambda_{i,j}(n)\,e_i(n)\,\mathbf{x}(n)}{\gamma + \lVert \mathbf{x}(n) \rVert^2},
\qquad i = 1, \ldots, M,
\tag{3.59}
\]
where w_i is the weight vector of the i th model. Assuming the j th model wins the competition, Λ_{i,j}(n) is the weighting function defined by
\[
\Lambda_{i,j}(n) = \exp\!\left( -\frac{d_{ij}^{\,2}}{2\sigma^2(n)} \right),
\tag{3.60}
\]
where d_ij is the Euclidean distance between indices i and j, which is equal to |j − i|, η(n) is the annealed learning rate, and σ²(n) is the Gaussian kernel width, which decreases exponentially as n increases.
The learning rate also exponentially decreases with n .
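The exponential annealing of η(n) and σ²(n) can be sketched as a single decay schedule. The endpoint values and training horizon below are illustrative assumptions, not values from the text:

```python
import numpy as np

def annealed(v0, v_final, n, n_max):
    """Exponentially decay a parameter from v0 at n = 0 to v_final at n = n_max.

    Used here for both the learning rate eta(n) and the kernel width
    sigma^2(n); all numeric constants are assumptions for illustration.
    """
    tau = n_max / np.log(v0 / v_final)  # time constant so the decay reaches v_final at n_max
    return v0 * np.exp(-n / tau)

# Example schedules (illustrative constants)
eta_n = annealed(0.1, 0.001, n=500, n_max=1000)    # learning rate mid-training
sigma2_n = annealed(4.0, 0.25, n=500, n_max=1000)  # Gaussian kernel width mid-training
```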
Soft competition preserves the topology of the input space by updating the models neighboring the winner; thus, it is expected to yield smoother transitions between models specializing in topologically neighboring regions of the state space. However, an empirical comparison between the hard and soft competition update rules using BMI data shows no significant difference in model performance (possibly because of the nature of the data set). We therefore prefer the hard competition rule for its simplicity.
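Putting Eqs. (3.58)–(3.60) together, one competitive update step can be sketched as follows. The weight layout (one row per model) and the winner criterion (smallest absolute instantaneous error) are assumptions, since the text does not specify either:

```python
import numpy as np

def competitive_update(W, x, d, eta, gamma, sigma2, soft=False):
    """One competitive normalized-LMS step, in the spirit of Eqs. (3.58)-(3.60).

    W:  (M, D) float weight matrix, one row per competing linear model
        (an assumed layout).
    x:  (D,) current input vector; d: desired response for this sample.
    The winner is taken as the model with the smallest absolute
    instantaneous error -- an assumed criterion.
    """
    errors = d - W @ x                  # instantaneous error e_i(n) of every model
    j = int(np.argmin(np.abs(errors)))  # index of the winning model
    norm = gamma + x @ x                # gamma + ||x(n)||^2
    if soft:
        idx = np.arange(W.shape[0])
        lam = np.exp(-(idx - j) ** 2 / (2.0 * sigma2))  # Eq. (3.60), with d_ij = |j - i|
        W += eta * (lam * errors)[:, None] * x / norm   # Eq. (3.59): all models move
    else:
        W[j] += eta * errors[j] * x / norm              # Eq. (3.58): winner only
    return j
```

With `soft=False` only the winner's row changes; with `soft=True` every row moves by an amount set by the Gaussian weighting centered on the winner.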
With the competitive training procedure, each model can specialize in local regions of the joint space. Figure 3.18 demonstrates the specialization of 10 trained models by plotting their outputs (black dots) for the common input data (40 sec long) in the 3D hand trajectory space. Each model's outputs are simultaneously plotted on top of the actual hand trajectory (red lines) synchronized with the common input. The figure shows that the input-output mappings learned by each model display some degree of localization, although overlaps are still present. These overlaps may be consistent with the neuronal multiplexing effect described by Carmena et al. [26], which suggests that the same neurons modulate for more than one motor parameter (the x and y coordinates of hand position, velocity, and gripping force).