fewer free parameters at the input layer. Finally, a test of generalization is crucial to the
successful production of useful models.
3.2.2 Recursive MLPs
Here, we continue the evaluation of nonlinear dynamical models for BMIs by introducing the recursive
MLP (RMLP). First, we discuss in detail why the RMLP is an appropriate choice for BMI
applications, as summarized in the list below.
- The RMLP topology has the desired biological plausibility needed for BMI design.
- The use of nonlinearity and dynamics gives the RMLP a powerful approximating capability.
Although the RMLP may at first appear to be an "off-the-shelf" black box model, its dynamic
hidden layer architecture can be compared with a mechanistic model of motor control proposed
by Todorov [18, 37], revealing that it has the desired biological plausibility needed for BMI
design. The formulation of the RMLP is similar to a general (possibly nonlinear) state space model
implementation that corresponds to the representation interpretation of Todorov's model for neural
control. The RMLP architecture in Figure 3.10 consists of an input layer with N neuronal input
channels, a fully connected hidden layer of nonlinear PEs (in this case tanh), and an output layer of
linear PEs, one for each output. The RMLP has been successfully applied to difficult control
applications [38], although here the more traditional backpropagation-through-time (BPTT)
training [39] was utilized.
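As a hedged illustration, BPTT for a state recurrent network of this general form can be sketched in NumPy. The forward recursion assumed here (hidden state equal to a tanh of a weighted combination of the current input and the one-step-delayed state, followed by a linear output layer) anticipates the description given below; the summed squared-error cost, the function names, and all dimensions are illustrative assumptions rather than details taken from the text.

```python
import numpy as np

def forward(x, W1, Wf, W2, b1, b2):
    """Unroll the state recurrent network over an input sequence x of shape (T, N)."""
    T, H = x.shape[0], W1.shape[0]
    s = np.zeros((T + 1, H))              # s[0] is the zero initial state
    y = np.zeros((T, W2.shape[0]))
    for t in range(T):
        s[t + 1] = np.tanh(W1 @ x[t] + Wf @ s[t] + b1)  # delayed-state feedback
        y[t] = W2 @ s[t + 1] + b2                       # linear output PEs
    return s, y

def bptt_grads(x, d, W1, Wf, W2, b1, b2):
    """Gradients of L = 0.5 * sum_t ||y[t] - d[t]||^2 via backpropagation through time."""
    s, y = forward(x, W1, Wf, W2, b1, b2)
    gW1, gWf, gW2 = np.zeros_like(W1), np.zeros_like(Wf), np.zeros_like(W2)
    gb1, gb2 = np.zeros_like(b1), np.zeros_like(b2)
    carry = np.zeros_like(b1)             # dL/ds[t+1] arriving from later time steps
    for t in reversed(range(x.shape[0])):
        e = y[t] - d[t]                   # output error at time t
        gW2 += np.outer(e, s[t + 1])
        gb2 += e
        ds = W2.T @ e + carry             # total gradient w.r.t. the state s[t+1]
        g = ds * (1.0 - s[t + 1] ** 2)    # back through the tanh nonlinearity
        gW1 += np.outer(g, x[t])
        gWf += np.outer(g, s[t])          # credit assigned to the delayed state
        gb1 += g
        carry = Wf.T @ g                  # propagate one step further into the past
    return gW1, gWf, gW2, gb1, gb2
```

A finite-difference check on any single weight confirms that the backward recursion accounts correctly for the delayed-state dependence, which is the step that distinguishes BPTT from static backpropagation.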
Each hidden layer PE is connected to every other hidden PE through a unit time delay. In the
input layer equation (3.29), the state produced at the output of the first hidden layer is a
nonlinear function of a weighted combination (including a bias) of the current input and the previous state.
[Figure shows input x, hidden PEs with feedback weights Wf, weight matrices W1 and W2, and outputs y1 and y2.]
FIGURE 3.10: Fully connected, state recurrent neural network.
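The state update just described can be sketched as a single time step in NumPy; the layer sizes and random weights below are hypothetical, chosen only to make the sketch runnable, and the weight names follow the labels in Figure 3.10.

```python
import numpy as np

def rmlp_step(x_t, s_prev, W1, Wf, W2, b1, b2):
    """One time step of the state recurrent network in Figure 3.10.

    The hidden state is a tanh of a weighted combination (with bias) of the
    current input and the one-step-delayed hidden state; the output layer
    is linear.
    """
    s_t = np.tanh(W1 @ x_t + Wf @ s_prev + b1)  # first-layer state update
    y_t = W2 @ s_t + b2                         # linear output PEs
    return s_t, y_t

# Tiny usage example with hypothetical sizes: N = 4 inputs, 3 hidden PEs, 2 outputs.
rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (3, 4))   # input weights
Wf = rng.normal(0, 0.1, (3, 3))   # hidden-to-hidden feedback through a unit delay
W2 = rng.normal(0, 0.1, (2, 3))   # output weights
b1, b2 = np.zeros(3), np.zeros(2)

s = np.zeros(3)                   # zero initial state
for x_t in rng.normal(size=(10, 4)):   # 10 steps of synthetic neuronal input
    s, y = rmlp_step(x_t, s, W1, Wf, W2, b1, b2)
```

Because the feedback passes through a unit delay, the state at time t depends on the entire input history, which is what gives the network its dynamic modeling capability.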