5.2 SEQUENTIAL ESTIMATION
A second class of models that has been used for BMIs takes a generative, state-space approach
instead of the input-output modeling described in Chapters 3 and 4 [5, 16]. Generative models
utilize a Bayesian formulation, and they offer a general framework to estimate goal-directed behavior
encoded in multichannel spike trains. The movement kinematics are defined as state variables of a
neural dynamical system and are estimated from a sequence of noisy neuronal modulation observations.
In this approach, the data are analyzed in the neural space instead of using an input-output model
to map the neural inputs to the desired kinematics. The generative approach creates an observation
model that incorporates information from the measurements, using a recursive algorithm to construct
the posterior PDF of the kinematics given the observations at each time step. By estimating the
expectation of the posterior density (or by maximum likelihood estimation), the movement kinematics
can be recovered probabilistically from the multichannel neural recordings.
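To make the recursive construction of the posterior concrete, here is a minimal particle-filter sketch for a hypothetical one-dimensional kinematic state observed through a single, linearly tuned neural channel. The dynamics, tuning model, noise variances, and names such as `simulate` and `particle_filter` are illustrative assumptions, not the models used in the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical generative model (illustration only) ---
# State: 1-D hand velocity x_t with random-walk dynamics.
# Observation: single-channel firing rate z_t, linearly tuned to velocity.
A, Q = 1.0, 0.01   # state transition and process-noise variance (assumed)
H, R = 2.0, 0.25   # tuning gain and observation-noise variance (assumed)

def simulate(T=100):
    """Generate a synthetic kinematic trajectory and noisy neural observations."""
    x = np.zeros(T)
    z = np.zeros(T)
    for t in range(1, T):
        x[t] = A * x[t-1] + rng.normal(0, np.sqrt(Q))
        z[t] = H * x[t] + rng.normal(0, np.sqrt(R))
    return x, z

def particle_filter(z, n_particles=500):
    """Recursively build the posterior p(x_t | z_{1:t}) and return its mean."""
    particles = rng.normal(0, 1, n_particles)  # samples from the prior
    estimates = np.zeros(len(z))
    for t, obs in enumerate(z):
        # Prediction step: propagate particles through the state model.
        particles = A * particles + rng.normal(0, np.sqrt(Q), n_particles)
        # Update step: weight each particle by the observation likelihood.
        weights = np.exp(-0.5 * (obs - H * particles) ** 2 / R)
        weights /= weights.sum()
        # Posterior expectation = kinematic estimate at time t.
        estimates[t] = np.sum(weights * particles)
        # Resample to avoid weight degeneracy.
        particles = rng.choice(particles, n_particles, p=weights)
    return estimates

x_true, z_obs = simulate()
x_hat = particle_filter(z_obs)
print("mean squared tracking error:", np.mean((x_true - x_hat) ** 2))
```

At each step the filter alternates a prediction through the state model and an update against the observation likelihood, so the weighted particle set is a sampled version of the recursive posterior described above.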
Bayesian learning assumes that all forms of uncertainty should be expressed and measured by
probabilities. At the center of the Bayesian formulation is a simple and very rich expression known
as Bayes' rule. Given a data set $D = [u_{1:N}, z_{1:N}]$ and a set of possible descriptive data models
$M_k$, $k = 0, \ldots, K$, Bayes' rule becomes
$$p(M_i \mid D) \;=\; \frac{p(D \mid M_i)\, p(M_i)}{\sum_{k} p(D \mid M_k)\, p(M_k)} \qquad (5.4)$$
This rule simply states that the posterior distribution $p(M \mid D)$ can be computed as the ratio of the
likelihood $p(D \mid M)$ over the evidence $p(D) = \sum_k p(D \mid M_k)\, p(M_k)$, times the prior probability
$p(M)$. The resulting posterior distribution incorporates both a priori knowledge and the information
conveyed by the data.
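As a toy numeric illustration of Eq. (5.4), the snippet below computes posterior probabilities for three candidate models from assumed priors and likelihoods; all numbers are hypothetical.

```python
import numpy as np

# Hypothetical example: three candidate models M_0..M_2 with assumed
# priors p(M_k) and likelihoods p(D | M_k) for some observed data D.
prior = np.array([0.5, 0.3, 0.2])          # p(M_k), assumed
likelihood = np.array([0.02, 0.10, 0.05])  # p(D | M_k), assumed

evidence = np.sum(likelihood * prior)      # p(D) = sum_k p(D|M_k) p(M_k)
posterior = likelihood * prior / evidence  # Bayes' rule, Eq. (5.4)

print(posterior)        # [0.2 0.6 0.2]
print(posterior.sum())  # 1.0 -- a proper probability distribution
```

Note how the data shift belief toward $M_1$, whose likelihood is highest, even though it did not have the largest prior.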
This approach can be very useful in parametric model building $f(u, w)$, where $u$ is the input
and $w$ are the system parameters, because one can assign prior probabilities not only to the parameters
of the system but also to the model order $k$, as $p(w, k)$, and even to the noise. Once the data are passed
through the system, we can build the posterior distribution $p(w, k \mid u_{1:N}, z_{1:N})$ using Bayes' rule, where $z$ are the
system outputs. Because the posterior embodies all the statistical information about the parameters
and their number given the measurements and the prior, one can theoretically obtain all features
of interest (marginal densities, model selection, and parameter estimation) by standard probability
marginalization and transformation techniques. For instance, we can predict the value of the next
system output by (5.5):
$$\hat{z}(u_{N+1}) \;=\; E\!\left[z_{N+1} \mid u_{1:N+1},\, z_{1:N}\right] \;=\; \int f(k, w, u_{N+1})\; p(k, w \mid u_{1:N+1},\, z_{1:N})\; \mathrm{d}k\, \mathrm{d}w \qquad (5.5)$$
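In practice, the integral in (5.5) is rarely available in closed form. One common approximation is a Monte Carlo average over samples from the posterior $p(k, w \mid u_{1:N+1}, z_{1:N})$; the sketch below assumes such samples are already available (e.g., from a sampler over model order and parameters) and uses a hypothetical polynomial model family for $f$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: posterior samples over (k, w) are assumed to be
# available, e.g., from MCMC over model order and parameters.
# Each sample pairs a model order k with a parameter vector w of length k.
posterior_samples = [(k, rng.normal(0, 1, k)) for k in rng.integers(1, 4, 1000)]

def f(k, w, u):
    """Hypothetical model family: polynomial of order k-1 in the input u."""
    return sum(w[j] * u ** j for j in range(k))

def predict(u_next, samples):
    """Monte Carlo estimate of Eq. (5.5):
    E[z_{N+1}] ~ (1/S) * sum_s f(k_s, w_s, u_{N+1}),
    which approximates integrating f against p(k, w | u, z)."""
    return np.mean([f(k, w, u_next) for k, w in samples])

print(predict(0.5, posterior_samples))
```

Averaging over the joint samples of $(k, w)$ performs the marginalization over both model order and parameters in a single step, which is the practical appeal of the Bayesian formulation.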