CHAPTER 5
Neural Decoding Using Generative BMI Models
Additional Contributors: Yiwen Wang and Shalom Darmanjian
This chapter will address generative models for BMIs, which are a more realistic modeling approach because they take into consideration some of the known features of motor cortex neurons. The chapter still uses neuronal firing rates as inputs to the model, but the mathematical foundations can also be applied, with appropriate adaptations, to spike trains, as will be discussed in the next chapter. Generative models are more in tune with the physical neural systems that produce the data and are therefore examples of “gray box” models. Three examples of “gray box” models can be found in the BMI literature. One of the most common is Georgopoulos' population vector algorithm (PVA) [1], which we briefly mentioned in Chapter 3. Using the observation that cortical neuronal firing rates depend on the direction of arm movement, a model was formulated as a weighted sum of the neuronal firing rates, with the weights determined from the neural and behavioral recordings.
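As a rough sketch of this decoding rule, the example below forms the population vector as the sum of each neuron's preferred-direction vector weighted by its firing-rate modulation. The function and variable names, the two-dimensional task, and the baseline-subtraction step are illustrative assumptions rather than details drawn from [1].

```python
import numpy as np

def population_vector(rates, baselines, preferred_dirs):
    """Hypothetical population-vector decoder for a 2-D task.
    rates: (N,) firing rates in the current bin
    baselines: (N,) mean firing rate of each neuron
    preferred_dirs: (N, 2) unit preferred-direction vectors"""
    weights = rates - baselines           # modulation around each neuron's baseline
    pv = weights @ preferred_dirs         # firing-rate-weighted sum of preferred directions
    norm = np.linalg.norm(pv)
    return pv / norm if norm > 0 else pv  # decoded movement direction (unit vector)
```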
A second example is given by Todorov, who extended the PVA after observing multiple correlations of M1 firing with movement position, velocity, acceleration, force exerted on an object, visual target position, movement preparation, and joint configuration [2-13]. With these observations, Todorov proposed a minimal linear model that relates the delayed firing in M1 to the sum of many mechanistic variables (position, velocity, acceleration, and force of the hand) [14]. Todorov's model is intrinsically a generative model [15, 16].
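A minimal sketch of fitting such a linear encoding model is given below, regressing delayed firing rates onto the stacked kinematic and kinetic variables by ordinary least squares. The data shapes, the single fixed lag, and the use of plain least squares are assumptions made for illustration, not the estimation procedure of [14].

```python
import numpy as np

def fit_linear_encoding(rates, kin, lag_bins=1):
    """Hypothetical fit of a Todorov-style linear encoding model:
    rates[t + lag] ~ kin[t] @ W + b.
    rates: (T, N) binned firing rates
    kin:   (T, 4) position, velocity, acceleration, force for one hand coordinate"""
    X = np.hstack([kin[:-lag_bins],                       # variables at time t
                   np.ones((len(kin) - lag_bins, 1))])    # bias column
    Y = rates[lag_bins:]                                  # delayed firing at t + lag
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)             # (5, N): one weight column per neuron
    return W
```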
Using knowledge about the relationship between arm kinematics and neural activity, the states of linear or nonlinear dynamical systems (preferably the feature space of Todorov) can be assigned. This methodology is supported by the well-known training procedure developed by Kalman for the linear case [17], and it has recently been extended to the nonlinear case with particle filters [18] and other graphical model or Bayesian network frameworks [19].
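The sketch below applies the standard Kalman recursion to this state-space view: the kinematic state is predicted from its previous value and then corrected by the observed firing rates. The matrix names, and the assumption that the system matrices A, W, H, and Q have already been fit from training data, are illustrative choices rather than a specific implementation from [17].

```python
import numpy as np

def kalman_decode(Z, A, W, H, Q, x0, P0):
    """Hypothetical Kalman-filter decoder.
    Z: (T, N) firing rates; A, W: state transition matrix and its noise covariance;
    H, Q: observation matrix and its noise covariance; x0, P0: initial state and covariance."""
    x, P, states = x0, P0, []
    for z in Z:
        # predict the kinematic state from the previous estimate
        x, P = A @ x, A @ P @ A.T + W
        # correct the prediction with the observed firing rates
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)   # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        states.append(x.copy())
    return np.array(states)
```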
Because the formulation of generative models is recursive in nature, they are well suited for learning about motor systems, whose states are all intrinsically related in time. The third modeling approach is the hidden Markov model (HMM), a graphical model in which the dependencies over time are contained in the present state.
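Because everything the past contributes is summarized in the present state, inference in an HMM is also recursive. The forward filter below, written for a generic discrete-observation HMM with hypothetical parameters pi, A, and B, is only a sketch of that recursion, not a BMI-specific formulation.

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Hypothetical HMM forward filter.
    pi: (S,) initial state probabilities; A: (S, S) state transition matrix;
    B: (S, M) observation likelihoods; obs: sequence of observation-symbol indices.
    Returns filtered posteriors p(state_t | obs_1..t)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    posteriors = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # only the current state distribution is carried forward
        alpha /= alpha.sum()
        posteriors.append(alpha)
    return np.array(posteriors)
```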
 