errors. On the other extreme, models with too many degrees of freedom tend to overfit the function to be estimated. In terms of BMIs, models tend to err toward the latter because of the large dimensionality of the input.
The first approach we presented for reducing the number of free parameters involved modifications to the model topology itself. More parsimonious models such as the recurrent multilayer perceptron (RMLP) [2-4] were studied. The topology of this model significantly reduced the number of free parameters by implementing feedback memory structures in hidden network layers instead of in the input, where the dimensionality is large.
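To make the savings concrete, the short sketch below compares rough parameter counts for a decoder that embeds tap-delay memory at the input with an RMLP that keeps the memory in a small recurrent hidden layer; the channel, tap, and layer sizes are illustrative assumptions, not figures taken from this chapter.

    # Illustrative parameter counts (assumed sizes, not taken from the chapter)
    n_inputs, n_taps, n_hidden, n_outputs = 100, 10, 5, 3

    # Memory at the input: every tap of every channel needs its own weight per hidden unit
    input_memory_params = (n_inputs * n_taps) * n_hidden + n_hidden * n_outputs     # 5015

    # RMLP: the memory lives in the hidden layer's recurrent weights instead
    rmlp_params = n_inputs * n_hidden + n_hidden * n_hidden + n_hidden * n_outputs  # 540

    print(input_memory_params, rmlp_params)

Even with these modest sizes, moving the memory out of the high-dimensional input cuts the weight count by roughly an order of magnitude.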
Besides modifying the model topology, a second, statistical approach can be taken during model training. Regularization techniques [5] attempt to drive the values of unimportant weights to zero, effectively pruning the size of the model topology. Regularized least squares (LS), subspace projections using partial least squares (PLS), or special memory structures can be used to reduce the number of free parameters [6]. These approaches are strictly statistical; they require large amounts of data and computation, are not trivial to use, and do not necessarily provide information about the importance of individual neurons. As an alternative, the number of inputs given to the models could be manually pruned using a neurophysiological analysis of how neuronal function correlates with behavior; however, it is difficult to know how such pruning will affect BMI model performance. To overcome this issue, neural selection has also been attempted using sensitivity analysis and variable selection procedures [4, 7].
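As a rough illustration of what such a procedure can look like, the sketch below ranks the input channels of an already trained decoder by a simple perturbation-based sensitivity; it is a generic example written under our own assumptions, not the specific procedure of [4, 7].

    import numpy as np

    def channel_sensitivity(model, X, eps=1e-3):
        # model: any callable mapping binned firing rates (samples x channels)
        # to predicted kinematics; X: the matrix of recorded neural inputs
        base = model(X)
        scores = np.empty(X.shape[1])
        for i in range(X.shape[1]):
            Xp = X.copy()
            Xp[:, i] += eps                            # nudge one channel
            scores[i] = np.mean(np.abs(model(Xp) - base)) / eps
        return np.argsort(scores)[::-1]                # most influential channels first

Channels with near-zero scores are candidates for removal, shrinking the input space before the full model is retrained.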
This chapter presents our efforts to deal with these problems, which are broadly divided between regularization of the optimal solution and channel selection. Moreover, once we have a well-tuned model, the model itself can help us learn about the neurophysiology of the motor cortex. We also show the quantification of neural responses that is possible by “looking inside” the trained models.
4.1 LEAST SQUARES AND REGULARIZATION THEORY
The nature of the neural systems used in BMI model architectures creates MIMO systems with a large input space. Even with the LS solution, an issue of well-posedness follows immediately because the algorithm is formulated in very high-dimensional spaces with only finite training data available. The concept of well-posedness was proposed by Hadamard [8]. Regularization as a remedy for ill-posedness became widely known through the work of Tikhonov [9], and also from Bayesian Learning for Neural Networks [10]. In solving LS problems, Tikhonov regularization is essentially a trade-off between fitting the training data and reducing the norm of the solution (written out below). Consequently, it reduces the sensitivity of the solution to small changes in the training data and imposes stability on the ill-posed problem. Moreover, the significance of well-posedness for generalization ability has also been revealed recently in statistical learning theory [11].
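In our own notation (assumed here, not reproduced from the chapter), for a linear model with input matrix X, desired output d, and weight vector w, the Tikhonov-regularized LS criterion and its closed-form minimizer are

    J(\mathbf{w}) = \|\mathbf{d} - \mathbf{X}\mathbf{w}\|^{2} + \lambda\,\|\mathbf{w}\|^{2},
    \qquad
    \mathbf{w}^{*} = (\mathbf{X}^{T}\mathbf{X} + \lambda\mathbf{I})^{-1}\mathbf{X}^{T}\mathbf{d},

where the regularization parameter λ ≥ 0 sets the trade-off: λ = 0 recovers ordinary LS, while larger values shrink the solution norm and stabilize the inversion of the input correlation matrix.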
 