classified as moving or nonmoving based solely on neural recordings using two HMMs and a
winner-take-all strategy. The HMMs act like a switch and allow the neural data belonging to each
class to be fed into a feed-forward model to fit it to the respective arm trajectory. To train such a sys-
tem, we partition the neural recordings into two groups: the first group contains data where the arm
appears to be stationary, whereas the second group should contain data where the arm appears to
be moving. We use a simple threshold to achieve this grouping: if the instantaneous velocity of the
arm is below the noise threshold of the sensor (determined by inspecting the velocity data visually),
the corresponding neural data are classified as stationary; otherwise, the neural data are classified as moving [43]. A minimal sketch of this velocity-threshold labeling is given below, after which the classifier's two steps are described.
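As a rough sketch of this labeling step, the Python fragment below thresholds the instantaneous arm speed and splits the binned neural data into the two training groups. The file names, array shapes, and threshold value are placeholders for illustration, not values from the original experiment.

    import numpy as np

    # Placeholder inputs: binned spike counts (one row per time bin, 104 neurons)
    # and the arm velocity sampled at the same bin times.
    bin_counts = np.load("neural_bins.npy")     # shape (T, 104), hypothetical file
    velocity   = np.load("arm_velocity.npy")    # shape (T, 2),   hypothetical file

    # Noise threshold chosen by visually inspecting the velocity traces.
    noise_threshold = 0.5                       # assumed units, e.g., cm/s

    speed = np.linalg.norm(velocity, axis=1)    # instantaneous speed per bin
    is_moving = speed >= noise_threshold        # True -> moving, False -> stationary

    stationary_bins = bin_counts[~is_moving]    # training data for the stationary HMM
    moving_bins     = bin_counts[is_moving]     # training data for the moving HMM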
1) At time index t, the neural binned vector v_t of length 104 (equal to the number of neurons) is converted into a discrete symbol O_t in preparation for discrete-output HMM evaluation.
The multidimensional neural recording must be converted to a sequence of discrete symbols.
This process involves vector quantizing the input-space vectors to discrete symbols in order to use
discrete-output HMMs. We choose the well-known LBG VQ algorithm [ 42 ], which iteratively
generates vector codebooks of size L = 2^m, m ∈ {0, 1, . . .}, and can be stopped at an appropriate level
of discretization (represented by m ), as determined by the amount of available data. For our example
experiment, we varied L from 8 prototype vectors to 256. This range seemed to be a good trade-off given the 10,000 samples available for training and the 104-dimensional input vector (with each neural bin having about 20 possible bin counts, or 20^104 possible vectors). By optimizing the
vector codebook on the neural recordings, we seek to minimize the amount of distortion introduced
by the VQ process.
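A minimal sketch of this codebook construction and quantization is given below. It follows the general LBG pattern of splitting each prototype and refining with Lloyd iterations; the perturbation size, iteration count, and function names are our own assumptions rather than details taken from [42].

    import numpy as np

    def lbg_codebook(X, m, n_iter=20, eps=1e-3):
        """Grow a codebook of L = 2**m prototypes by repeated splitting (LBG-style)."""
        codebook = X.mean(axis=0, keepdims=True)           # start from a single prototype
        for _ in range(m):
            # split every prototype into a perturbed pair, doubling the codebook size
            codebook = np.vstack([codebook + eps, codebook - eps])
            for _ in range(n_iter):                        # Lloyd refinement
                d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)
                assign = d.argmin(axis=1)
                for k in range(len(codebook)):
                    members = X[assign == k]
                    if len(members):
                        codebook[k] = members.mean(axis=0)
        return codebook

    def quantize(X, codebook):
        """Map each input vector to the index of its nearest prototype (the symbol O_t)."""
        d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)
        return d.argmin(axis=1)

For example, quantize(bin_counts, lbg_codebook(bin_counts, m=5)) would turn the binned neural vectors from the earlier sketch into a symbol sequence drawn from L = 32 prototypes.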
2) Next, the conditional probabilities P(O | λ_s) and P(O | λ_m) are evaluated, where
O = [O_{t−N+1}, O_{t−N+2}, . . . , O_{t−1}, O_t],   N > 1,     (5.53)
and λ_s and λ_m denote HMMs that correspond to the two possible states of the arm (stationary vs. moving).
The vector-quantized input provides the discrete symbols fed to a left-to-right (or Bakis) HMM chain. Given that we expect the monkey's arm movement to depend not only on current neural firings, but also on a recent time history of firings, we train each HMM on observation sequences of length N. During run-time evaluation of P(O | λ_s) and P(O | λ_m), we use the same value of N as was used during training. Based on other experimental paradigms [44], we varied N from 5 to 10, corresponding to between half a second and one second of data. The HMM
was trained with the Baum-Welch method for five iterations on average, even though our convergence criterion of 0.000001 was typically met much earlier (because we set the minimum number of iterations to five). The number of hidden states in each HMM was varied from 2 to 8 so as not to exceed the observation sequence length.
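The run-time decision in step 2 can then be sketched as a log-space forward pass over the last N symbols for each trained model, followed by the winner-take-all comparison. The parameter layout (initial distribution, transition matrix, and emission matrix per model) and the function names below are assumptions; Baum-Welch estimation of these parameters is not shown and would normally be carried out with standard HMM training code.

    import numpy as np

    def log_forward(obs, log_pi, log_A, log_B):
        """Return log P(O | lambda) for a discrete-output HMM via the forward algorithm.

        obs    : 1-D integer array of VQ symbols (the window O of Eq. (5.53))
        log_pi : (K,)   log initial state probabilities
        log_A  : (K, K) log transition matrix, log_A[i, j] = log P(j at t+1 | i at t)
        log_B  : (K, M) log emission probabilities over the M codebook symbols
        """
        alpha = log_pi + log_B[:, obs[0]]
        for o in obs[1:]:
            alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
        return np.logaddexp.reduce(alpha)

    def classify(symbols, t, N, lambda_s, lambda_m):
        """Winner-take-all decision on the N symbols ending at time index t."""
        O = symbols[t - N + 1 : t + 1]              # observation window of Eq. (5.53)
        ll_s = log_forward(O, *lambda_s)            # log P(O | lambda_s)
        ll_m = log_forward(O, *lambda_m)            # log P(O | lambda_m)
        return "moving" if ll_m > ll_s else "stationary"

Here lambda_s and lambda_m are tuples (log_pi, log_A, log_B) estimated separately from the stationary and moving partitions of the training data; a left-to-right (Bakis) structure simply restricts the transition matrix so that transitions to lower-indexed states have zero probability.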
 