Chapter 4
Regularization Techniques for BMI Models
Additional Contributor: Sung-Phil Kim
In Chapter 3, we demonstrated the design of linear and nonlinear filters that can be adapted for BMI applications. Despite the intrinsic sophistication of the BMI system, early tests with the simple linear filter (which merely combines the weighted bin count inputs) showed reasonable estimation of hand position (HP) from neuronal modulation. It therefore makes sense to fine-tune these linear models by means of advanced learning techniques. Three major issues arise in the design of BMIs: the large number of parameters, irrelevant inputs, and noise in the data due to spike sorting and nonstationarities. Each affects model performance in a different way. The first two can be handled with sophisticated signal processing techniques during offline analysis of the data. The third, spike-sorting noise, is usually handled at the time of data collection and is considered a preprocessing step. Here we assume that the data have been appropriately spike sorted, and we will focus on the signal processing techniques used in offline analysis.
When training BMI models, the first challenge one encounters is model overfitting when hundreds of neurons are used as model inputs. In the examples, we showed that introducing extra degrees of freedom not related to the mapping can result in poor generalization, especially in topologies where tap-delay memory structures are implemented in the neural input layer (i.e., the TDNN topology). Each additional memory delay element increases the number of free parameters by the number of input neurons. This explosion in the number of free parameters also places a computational burden on computing an optimal solution, especially when the goal is to implement the BMI in low-power, portable hardware. A table of the number of free parameters for the topologies is given in Chapter 3.
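To make the scaling concrete, the following is a minimal sketch of the free-parameter count for a linear tap-delay decoder. The sizes used in the example call (100 neurons, 10 delay taps, 3 output coordinates) are illustrative assumptions, not figures from the chapter's data sets:

```python
def linear_decoder_params(n_neurons, n_taps, n_outputs=3):
    """Free parameters in a linear tap-delay decoder:
    one weight per (neuron, delay tap, output) plus one bias per output."""
    return n_neurons * n_taps * n_outputs + n_outputs

# Hypothetical sizes: 100 sorted neurons, 10 bins of tap-delay memory,
# 3D hand position output -> 100 * 10 * 3 + 3 = 3003 free parameters.
print(linear_decoder_params(100, 10))  # 3003
```

Because the count is multiplicative in the number of taps and neurons, doubling either input dimension doubles the parameter count, which is why ensembles of hundreds of neurons quickly push models into the regime where regularization is needed.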
The generalization of the model can be explained in terms of the bias-variance dilemma of machine learning [1], which is related to the number of free parameters of a model. The MIMO structure of BMIs built for the neuronal data presented here can have from several hundred to several thousand free parameters. On one extreme, if the model does not contain enough parameters, there are too few degrees of freedom to fit the function to be estimated, which results in bias
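The dilemma can be illustrated with a small ridge-regression sketch in the regime typical of BMI decoding, where the number of inputs exceeds the number of training samples. The sizes, noise level, and regularization values below are illustrative assumptions; a very small ridge penalty leaves high variance, a very large one shrinks the weights toward zero and produces bias, and intermediate values trade the two off:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative underdetermined setup: more inputs than training samples.
n_train, n_inputs = 20, 50
X = rng.standard_normal((n_train, n_inputs))
w_true = np.zeros(n_inputs)
w_true[:5] = 1.0                              # only a few relevant inputs
y = X @ w_true + rng.standard_normal(n_train) # noisy training targets

def ridge(X, y, lam):
    """L2-regularized least squares: w = (X'X + lam*I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

X_test = rng.standard_normal((2000, n_inputs))
y_test = X_test @ w_true                      # noise-free test targets

mse = {lam: float(np.mean((X_test @ ridge(X, y, lam) - y_test) ** 2))
       for lam in (1e-6, 1e-2, 1.0, 1e2, 1e4)}
for lam, err in mse.items():
    print(f"lambda={lam:g}  test MSE={err:.2f}")
```

Sweeping the penalty this way is the standard recipe for locating the bias-variance sweet spot on held-out data; Chapter 4's regularization techniques automate that trade-off in more principled ways.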