In this model-based framework, can better BMIs be built using a subset of important
neurons?
It is well known that neurons vary in their involvement in a given task [25]. However, quantifying
neuronal involvement for BMI applications is still an ongoing area of research. This is where BMI
modeling is an asset, because once trained, the model implicitly contains information about how cells
contribute to the mapping. The difficulty is that the assessment is in principle dependent on the type
of model chosen to predict the kinematic variables and on its performance (i.e., it is model-dependent).
We will first compare the linear FIR model and the nonlinear RMLP. Our second question quantifies the change
in performance when only a small subset of cells is used to build the BMI. One might expect any
subset of cells to perform worse than the whole ensemble, but because large models generalize poorly,
performance on a test set may in fact improve with a reduced number of important cells. Of course,
this also makes BMIs more dependent on the stability of these cells over time, and in the long run we
have shown that performance can either worsen or improve. The ultimate goal is to improve understanding
of how cells encode kinematic parameters so that better “gray-box” models can be built using the
underlying information in neural recordings.
Assumptions for Ranking the Importance of a Neuron
We would like to obtain an automatic measure of each cell's contribution to encoding motor
parameters for a given task, which we call the cell importance. For this reason, a structured approach
is taken to ascertain the importance of neurons using the three methods described earlier. Our
methodological choices, however, are not free from assumptions. First, the methods assume stationarity
in the data: a snapshot of neural activity is taken and importance is ascertained without addressing
time variability in the recordings, which is a shortcoming. Second, despite the highly interconnected
nature of neural structures, importance is often computed independently for each individual neuron.
With this independence assumption, it is difficult to quantify the importance of pairs, triples, etc.,
of cells. In contrast, the model sensitivity analysis considers covariations in firing rate among groups
of cells in the neural ensemble, but depends on the type of model utilized. Third, some techniques
consider only the instantaneous neural activity, whereas others include memory structures (tap delay
lines). Finally, each technique for ascertaining importance focuses on different neuronal firing features.
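As a concrete, hypothetical illustration of these assumptions (not from the original text), the sketch below ranks each cell independently by how well a short tap-delay history of its own binned spike counts linearly predicts a single kinematic variable; the spike-count matrix, kinematic signal, and number of taps are placeholder inputs chosen for illustration.

import numpy as np

# Hypothetical per-neuron importance measure under the stationarity and
# independence assumptions discussed above: each cell is scored in isolation,
# using a short tap-delay memory of its binned spike counts.

def delay_embed(counts, taps):
    """Stack `taps` delayed copies of a 1-D spike-count series (rows = time bins)."""
    return np.column_stack([np.roll(counts, k) for k in range(taps)])[taps - 1:]

def neuron_importance(spike_counts, kinematics, taps=10):
    """spike_counts: (T, n_cells) binned counts; kinematics: (T,) e.g. hand velocity.
    Returns one score per cell: correlation of a per-cell linear fit with the kinematics."""
    T, n_cells = spike_counts.shape
    y = kinematics[taps - 1:]
    scores = np.zeros(n_cells)
    for c in range(n_cells):
        X = delay_embed(spike_counts[:, c], taps)      # (T - taps + 1, taps)
        w, *_ = np.linalg.lstsq(X, y, rcond=None)      # independent per-cell fit
        scores[c] = np.corrcoef(X @ w, y)[0, 1]        # agreement of fit with target
    return scores

# Example: ranking = np.argsort(-neuron_importance(counts, velocity))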
4.2.1 Sensitivity-Based Pruning
With the weights of the trained linear and nonlinear networks, we have a tool with which we can
identify the neurons that affect the output the most. A sensitivity analysis, using the Jacobian of
the output vector with respect to the input vector, indicates how each neuron's spike counts affect the
output, given the training data. Because the model topology can affect the interpretation of importance,
sensitivities are examined for both the linear (FIR) and nonlinear (RMLP) models.
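A minimal sketch of how such a Jacobian-based sensitivity could be computed is given below. The `model` function, the cell-major ordering of the input vector (all taps of a cell stored contiguously), and the finite-difference approximation of the Jacobian are assumptions made for illustration, not the implementation used here.

import numpy as np

# Sketch of Jacobian-based sensitivity for a trained model exposed as a function
# model(x): input vector of binned spike counts (all cells and taps) -> predicted
# kinematic output. Sensitivities are averaged over training inputs and summed
# over each cell's taps to yield one score per neuron.

def jacobian_fd(model, x, eps=1e-3):
    """Finite-difference Jacobian of the model output with respect to input x."""
    y0 = np.atleast_1d(model(x))
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (np.atleast_1d(model(xp)) - y0) / eps
    return J

def neuron_sensitivity(model, X_train, n_cells, taps):
    """X_train: (samples, n_cells * taps). Returns one sensitivity score per neuron."""
    S = np.zeros(n_cells * taps)
    for x in X_train:
        S += np.abs(jacobian_fd(model, x)).sum(axis=0)   # magnitude over outputs
    S /= len(X_train)                                    # average over training samples
    return S.reshape(n_cells, taps).sum(axis=1)          # collapse each cell's taps

# The least sensitive cells can then be pruned, e.g. via np.argsort of the scores.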
 