Biomedical Engineering Reference
In-Depth Information
immediately direct the choice of topology, it does not explain why the number is appropriate for a
given problem. We later found that the number of hidden PEs should be chosen to span the space
of the desired trajectory [34]. In the context of BMIs, this knowledge can help avoid the
computational and time costs of the brute-force approach.
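The guideline that the hidden PEs should span the space of the desired trajectory can be checked empirically before training. The sketch below is illustrative only: the synthetic circular trajectory and the 95% variance threshold are assumptions, not values from the text. It estimates the effective dimensionality of a recorded trajectory from the singular values of the centered data matrix, giving a lower bound for the hidden-layer size:

```python
import numpy as np

# Hypothetical 2-D hand trajectory sampled at 100 time steps (x, y per sample).
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)
trajectory = np.column_stack([np.cos(t), np.sin(t)]) + 0.01 * rng.standard_normal((100, 2))

# Singular values of the centered trajectory matrix reveal how many directions
# carry significant variance; the hidden layer should at least span this
# effective dimensionality.
centered = trajectory - trajectory.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
energy = np.cumsum(singular_values**2) / np.sum(singular_values**2)
effective_dim = int(np.searchsorted(energy, 0.95) + 1)   # smallest rank holding 95% variance
```

For this toy circular movement both coordinates carry comparable variance, so the estimate returns 2; a real reaching trajectory would be analyzed the same way from recorded kinematics.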
The RMLP presented here was trained with BPTT [33, 39] using the NeuroSolutions soft-
ware package [40]. Training was stopped using the method of CV (batch size of 1000 points)
(Figure 3.11) to maximize the generalization of the network [41]. The BPTT training procedure
involves unfolding the recurrent network into an equivalent feedforward topology over a fixed interval,
or trajectory. For our BMI applications, we are interested in learning the dynamics of the hand-
movement trajectories; therefore, for each task, the trajectory was chosen to match a complete move-
ment. For the reaching task, this length was, on average, 30 samples. In Table 3.2, this choice of
trajectory length was compared against a brute-force scan of the testing performance as a function
of the trajectory length (samples per exemplar). The BPTT algorithm also offers the option of
updating the weights in online (every trajectory), semibatch, or batch mode. Updating the
network weights in semibatch mode averages the gradients over several trajectories, which protects
against the noisy stochastic gradients that can destabilize the network. Table 3.2 also
provides the average testing correlation coefficient as a function of the update frequency. We can
see that a range of good choices [15-30 samples/exemplar (s/e) and 5-15 exemplars/update (e/u)]
exists for both the trajectory length and the update rule.
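The unfolding-plus-semibatch procedure can be sketched in a few lines of NumPy. This is a minimal illustration of the mechanics, not the NeuroSolutions implementation: the toy network sizes, the surrogate sinusoidal input and desired signals, and the learning rate are all assumptions chosen only to show a trajectory being unrolled forward, the error backpropagated through the unrolled graph, and the gradients averaged over several exemplars per update:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes (assumptions): 2 inputs, 4 hidden PEs, 2 outputs (x, y position).
n_in, n_hid, n_out = 2, 4, 2
W_in = 0.1 * rng.standard_normal((n_hid, n_in))
W_rec = 0.1 * rng.standard_normal((n_hid, n_hid))
W_out = 0.1 * rng.standard_normal((n_out, n_hid))

def bptt_gradients(X, D):
    """Unfold the recurrent net over one fixed-length trajectory (forward pass),
    then backpropagate the error through the unrolled graph (backward pass)."""
    T = X.shape[0]
    H = np.zeros((T + 1, n_hid))            # H[0] is the initial hidden state
    Y = np.zeros((T, n_out))
    for t in range(T):                      # forward pass over the trajectory
        H[t + 1] = np.tanh(W_in @ X[t] + W_rec @ H[t])
        Y[t] = W_out @ H[t + 1]
    gW_in, gW_rec, gW_out = map(np.zeros_like, (W_in, W_rec, W_out))
    e_h = np.zeros(n_hid)                   # error arriving from future steps
    for t in reversed(range(T)):            # backward pass through the unrolling
        e_y = Y[t] - D[t]
        gW_out += np.outer(e_y, H[t + 1])
        g = (W_out.T @ e_y + e_h) * (1.0 - H[t + 1] ** 2)  # through the tanh
        gW_in += np.outer(g, X[t])
        gW_rec += np.outer(g, H[t])
        e_h = W_rec.T @ g                   # pass error back to h_{t-1}
    return 0.5 * np.sum((Y - D) ** 2), gW_in, gW_rec, gW_out

# Semibatch mode: average gradients over several trajectories (exemplars)
# before each weight update, damping noisy stochastic gradients.
T_len, exemplars_per_update, lr = 30, 5, 0.1
losses = []
for update in range(300):
    gi, gr, go = map(np.zeros_like, (W_in, W_rec, W_out))
    total = 0.0
    for _ in range(exemplars_per_update):
        phase = rng.uniform(0, 2 * np.pi)
        t = np.linspace(0, 2 * np.pi, T_len) + phase
        X = np.column_stack([np.sin(t), np.cos(t)])              # surrogate inputs
        D = np.column_stack([np.sin(t + 0.2), np.cos(t + 0.2)])  # desired trajectory
        loss, a, b, c = bptt_gradients(X, D)
        gi += a; gr += b; go += c; total += loss
    scale = lr / (exemplars_per_update * T_len)                  # averaged update
    W_in -= scale * gi; W_rec -= scale * gr; W_out -= scale * go
    losses.append(total / (exemplars_per_update * T_len))
```

Setting `exemplars_per_update = 1` recovers online mode (an update after every trajectory), while accumulating over the whole training set before updating would be batch mode.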
FIGURE 3.11: RMLP learning curve. MSE (upper curve) and CV (lower curve).
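The learning curves in Figure 3.11 illustrate the stopping criterion: training halts once the cross-validation error stops improving, even while the training MSE keeps falling. A generic sketch of such an early-stopping loop follows; the `patience` parameter and the toy error sequence are assumptions for illustration, not values from the text:

```python
def train_with_early_stopping(train_step, val_error, max_epochs=100, patience=5):
    """Stop training when the validation (CV) error has not improved for
    `patience` consecutive epochs; return the best CV error and its history."""
    best, best_epoch, history = float("inf"), 0, []
    for epoch in range(max_epochs):
        train_step()                  # e.g., one semibatch BPTT weight update
        err = val_error()             # error on the held-out CV batch
        history.append(err)
        if err < best:
            best, best_epoch = err, epoch
        elif epoch - best_epoch >= patience:
            break                     # CV error has turned upward: stop
    return best, history

# Toy CV-error sequence that dips and then rises, like the lower curve in
# Figure 3.11 (values are illustrative, not measured).
errs = iter([1.0, 0.8, 0.7, 0.65, 0.66, 0.70, 0.75, 0.80, 0.90, 1.0])
best, history = train_with_early_stopping(lambda: None, lambda: next(errs),
                                          max_epochs=10, patience=3)
```

In practice the weights saved at the best-CV epoch, not the final ones, would be kept for testing.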
 