state. The feedback of the state allows for continuous representations on multiple timescales and
effectively implements a short-term memory mechanism for the state values. Here, f(·) is a sigmoid
nonlinearity (in this case, tanh), and the weight matrices W1, W2, and Wf, as well as the bias vectors
b1 and b2, are again trained using synchronized neural activity and hand position data. Again,
as in the TDNN, the output of each hidden PE can be thought of as a nonlinear adaptive basis,
partially created from the input space and used to project the high-dimensional input data. These
projections are then linearly combined to form the outputs of the RMLP that predict the desired
hand movements, as shown in (3.30). The elegance of feedback in decreasing the number of free
parameters compared with the input-based TDNN should now be apparent. Indeed, the state
feedback in the RMLP allows for an input layer whose weights are defined by the number of inputs,
while requiring only the K² weights of Wf, where K is the number of hidden PEs, for a total of
N + 2K² weights per output. One disadvantage of the RMLP compared with the TDNN is that it
cannot be trained with standard backpropagation; it requires either backpropagation through time
(BPTT) or real-time recurrent learning [33]. Either of these algorithms is much more time-consuming
than standard backpropagation, and on top of this, the dynamics of learning are harder to control
with the step size, requiring extra care and more experience for good results.
y1(t) = f (W1 x(t) + Wf y1(t − 1) + b1),        (3.29)

y2(t) = W2 y1(t) + b2,                          (3.30)
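Equations (3.29) and (3.30) can be sketched directly in code. The sizes N, K, and M and the random initialization below are illustrative stand-ins; in practice the weights would come from BPTT or real-time recurrent learning:

```python
import numpy as np

# Sketch of the RMLP forward pass in (3.29)-(3.30), assuming N neural
# input channels, K hidden PEs, and M output coordinates (hand position).
# The names W1, W2, Wf, b1, b2 follow the text; the random values are
# placeholders for trained weights.
rng = np.random.default_rng(0)
N, K, M = 104, 5, 2                      # example sizes, not from the experiments

W1 = 0.1 * rng.standard_normal((K, N))   # input-to-hidden weights
Wf = 0.1 * rng.standard_normal((K, K))   # K^2 state-feedback weights
b1 = np.zeros(K)
W2 = 0.1 * rng.standard_normal((M, K))   # hidden-to-output weights
b2 = np.zeros(M)

def rmlp_forward(x_seq):
    """Run the RMLP over a sequence of input vectors x(t)."""
    y1 = np.zeros(K)                     # hidden state y1(t-1), initialized to zero
    outputs = []
    for x in x_seq:
        # (3.29): hidden state with tanh nonlinearity and state feedback
        y1 = np.tanh(W1 @ x + Wf @ y1 + b1)
        # (3.30): linear output layer predicts hand position
        outputs.append(W2 @ y1 + b2)
    return np.array(outputs)

x_seq = rng.standard_normal((50, N))     # 50 time steps of binned firing rates
y_pred = rmlp_forward(x_seq)             # one 2-D prediction per time step
```

Note how the single vector y1 carries the short-term memory: each step mixes the current input with the previous hidden state through Wf before the linear readout.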
The performance of RMLP input-output models in BMI experiments is dependent upon
the choice of the network topology, learning rules, and initial conditions. Therefore, a heuristic
exploration of these settings is necessary to evaluate the performance.
The first consideration in any neural network implementation is the choice of the number
of PEs. Because the RMLP topology studied here always consisted of a single hidden layer of
tanh PEs, the design question becomes how many PEs are required to solve the neural-to-motor
mapping. The first approach to this problem was a brute-force scan of performance across a range
of PEs, as shown in Table 3.1. It can be seen that, for a reaching task, the topology with five PEs
produced the highest testing correlation coefficients. Although the brute-force approach can
TABLE 3.1: RMLP performance as a function of the number of hidden PEs

                        1 PE      5 PE      10 PE     20 PE
Average CC testing      0.7099    0.7510    0.7316    0.6310
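A scan like the one behind Table 3.1 can be sketched as follows. Here the correlation coefficient is assumed to be the Pearson CC between predicted and actual trajectories, averaged over output coordinates; the "predictions" are synthetic stand-ins for the outputs of RMLPs trained at each size, since the actual training data are not in the text:

```python
import numpy as np

# Sketch of the model-selection step: compute the average testing
# correlation coefficient (CC) for each candidate hidden-layer size and
# keep the best. The predictions below are synthetic stand-ins; in
# practice each would come from an RMLP trained with that many PEs.

def average_cc(y_pred, y_true):
    """Pearson CC per output coordinate, averaged over coordinates."""
    return float(np.mean([np.corrcoef(y_pred[:, i], y_true[:, i])[0, 1]
                          for i in range(y_true.shape[1])]))

rng = np.random.default_rng(1)
y_true = rng.standard_normal((200, 2))   # synthetic 2-D hand trajectory
candidates = [1, 5, 10, 20]              # hidden-layer sizes to scan

results = {}
for n_pe in candidates:
    # stand-in prediction: the true trajectory plus noise that is
    # smallest at 5 PEs, mimicking the trend in Table 3.1
    noise = 0.5 + 0.1 * abs(n_pe - 5)
    y_pred = y_true + noise * rng.standard_normal(y_true.shape)
    results[n_pe] = average_cc(y_pred, y_true)

best = max(results, key=results.get)     # size with the highest testing CC
```

The same loop structure applies to any brute-force topology scan: fix everything but one hyperparameter, retrain, and compare a single held-out figure of merit.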
 