is explored by adopting a particle filter to estimate the posterior distribution recursively. When there are enough neural recordings, the particle filtering method can generate a more complete coverage of the hand movement space. The algorithm can be described as follows:
Initialization: draw $N$ state samples $x_0^i$, $i = 1, \ldots, N$.
For each time index $k = 1, \ldots, T$:
  Draw new samples from $x_k^i \sim p(x_k \mid x_{k-1}^i)$ by setting $x_k^i = f(x_{k-1}^i, v_{k-1}^i)$ with a process noise sample $v_{k-1}^i \sim p(v)$.
  Calculate the weights $w_k^i = p(z_k \mid x_k^i)$.
  Normalize the weights: $w_k^i = p(z_k \mid x_k^i) \big/ \sum_{j=1}^{N} p(z_k \mid x_k^j)$.
  Resample $[x_k^i, w_k^i]$ to reduce the degeneracy.
  Pick out $x_k^*$ as the estimated state that maximizes the posterior density $p(x_k \mid z_k)$.
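Below is a minimal sketch of this recursion in Python, assuming a one-dimensional random-walk state model $f(x, v) = x + v$ and a Gaussian observation likelihood for $p(z_k \mid x_k)$; the kinematic model, the likelihood, and every parameter value here are illustrative assumptions rather than the decoding model used in this chapter.

```python
# Bootstrap particle filter sketch. Assumptions (not from the chapter):
# state model f(x, v) = x + v (1-D random walk) and Gaussian p(z_k | x_k).
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(z, N=500, proc_std=0.1, obs_std=0.5):
    T = len(z)
    x = rng.normal(0.0, 1.0, size=N)  # initialization: N state samples x_0^i
    estimates = np.empty(T)
    for k in range(T):
        # Draw new samples x_k^i = f(x_{k-1}^i, v_{k-1}^i) with v ~ p(v)
        x = x + rng.normal(0.0, proc_std, size=N)
        # Calculate and normalize the weights w_k^i = p(z_k | x_k^i)
        w = np.exp(-0.5 * ((z[k] - x) / obs_std) ** 2) + 1e-300
        w /= w.sum()
        # Highest-weight particle as a proxy for the posterior maximizer x_k^*
        estimates[k] = x[np.argmax(w)]
        # Multinomial resampling of [x_k^i, w_k^i] to reduce degeneracy
        x = x[rng.choice(N, size=N, p=w)]
    return estimates

# Usage: track a noisy sinusoidal "hand position" trajectory
t = np.linspace(0.0, 2.0 * np.pi, 100)
z = np.sin(t) + rng.normal(0.0, 0.5, size=t.size)
print(particle_filter(z)[:5])
```

Multinomial resampling is the simplest choice here; systematic or stratified resampling would reduce the added Monte Carlo variance, and a kernel density estimate over the particles would give a smoother stand-in for the posterior maximizer.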
5.5 HIDDEN MARKOV MODELS
The final generative model discussed in this chapter is the HMM. HMMs are famous for capturing the statistical dependencies in time series under the mild assumption that the past of the data is fully described by the present state. For this reason, HMMs are the leading technology for speech recognition and have even been applied to modeling open-loop human actions and to analyzing similarity between human control strategies. In past efforts, HMMs were trained on neural data to determine the state transitions of animal behavior rather than by using the likelihoods of particular actions [37, 38]. By using HMMs to find likelihoods of simpler movement actions, or "movemes," from which complex motor actions are constructed, an unsupervised way to model neural recordings becomes possible. Here we demonstrate that HMMs are capable of recognizing neural patterns corresponding to two distinct kinematic states (arm at rest or moving) just by analyzing the spatiotemporal characteristics of neural ensemble modulation (i.e., without using a desired signal).
An HMM is a probabilistic model of the joint probability of a collection of random variables $[O_1, \ldots, O_T, X_1, \ldots, X_T]$ (we substitute $O$ for $Z$ to adhere to the notation in the HMM literature). The $O_i$ variables are either continuous or discrete observations, and the $X_i$ variables are hidden and discrete. Under an HMM, two conditional independence assumptions are made about these random variables that make the associated algorithms tractable. These independence assumptions are:
1. The $t$th hidden variable, given the $(t-1)$st hidden variable, is independent of previous variables:

$P(X_t \mid X_{t-1}, \ldots, X_1) = P(X_t \mid X_{t-1})$   (5.30)
2. The $t$th observation, given the $t$th hidden variable, is independent of other variables:

$P(O_t \mid X_T, O_T, X_{T-1}, O_{T-1}, \ldots, X_t, O_{t-1}, X_{t-1}, \ldots, X_1, O_1) = P(O_t \mid X_t)$   (5.31)
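Together with the chain rule, these two assumptions factor the joint probability as $P(O_{1:T}, X_{1:T}) = P(X_1) P(O_1 \mid X_1) \prod_{t=2}^{T} P(X_t \mid X_{t-1}) P(O_t \mid X_t)$. The sketch below evaluates this factorization for a two-hidden-state chain (arm at rest or moving, echoing the example above); the initial, transition, and emission probabilities are illustrative assumptions, not values fitted to neural recordings.

```python
# Joint probability of an HMM path under assumptions (5.30) and (5.31).
# The two hidden states ("rest", "moving") and all probabilities below are
# illustrative assumptions, not parameters estimated from the chapter's data.
import numpy as np

pi = np.array([0.8, 0.2])   # P(X_1): initial state distribution
A = np.array([[0.9, 0.1],   # A[i, j] = P(X_t = j | X_{t-1} = i)
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],   # B[i, o] = P(O_t = o | X_t = i);
              [0.1, 0.9]])  # rows: rest/moving, cols: low/high firing

def joint_prob(states, obs):
    """P(O_1..T, X_1..T) = P(X_1)P(O_1|X_1) * prod_t P(X_t|X_{t-1})P(O_t|X_t)."""
    p = pi[states[0]] * B[states[0], obs[0]]
    for t in range(1, len(states)):
        p *= A[states[t - 1], states[t]] * B[states[t], obs[t]]
    return p

# Usage: states rest, rest, moving with observations low, low, high
print(joint_prob([0, 0, 1], [0, 0, 1]))  # 0.8*0.7 * 0.9*0.7 * 0.1*0.9 = 0.031752
```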
 