Note that the predictions must be based on all possible values of the network parameters, weighted by their probability in view of the training data. Moreover, note that Bayesian learning is fundamentally based on integration over the unknown quantities. In (5.5), because we want to find the expected value of the output, which is a function of the model parameters and the model order, we have to integrate over these unknown quantities using all the available data. Because these integrals are normally multidimensional, they pose a serious computational problem, one that was mitigated only relatively recently by the development of efficient sampling-based evaluation of integrals known as Monte Carlo sampling [25]. From a Bayesian perspective, Monte Carlo methods allow one to approximate the full posterior probability distribution.
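As a concrete illustration of what such an integral looks like in practice, the sketch below (hypothetical model and variable names; it assumes posterior samples w ~ p(w | D) are already available from some sampler) approximates the predictive expectation E[y | x, D] = ∫ f(x; w) p(w | D) dw by averaging the network output over the posterior draws.

```python
import numpy as np

# Hypothetical toy "network" y = f(x; w): a single hidden layer whose weights
# are packed into one parameter vector w.  Purely illustrative.
def network_output(x, w):
    W1 = w[:10].reshape(5, 2)   # hidden-layer weights (5 units, 2 inputs)
    b1 = w[10:15]               # hidden-layer biases
    W2 = w[15:20]               # output weights
    b2 = w[20]                  # output bias
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Assume posterior_samples holds draws w ~ p(w | D) produced elsewhere
# (e.g., by an MCMC routine); they are faked here only so the sketch runs.
rng = np.random.default_rng(0)
posterior_samples = rng.normal(size=(5000, 21))

x_new = np.array([0.3, -1.2])

# Monte Carlo approximation of E[y | x_new, D]: the integral over the
# parameters is replaced by an average over the posterior draws.
outputs = np.array([network_output(x_new, w) for w in posterior_samples])
predictive_mean = outputs.mean()
predictive_std = outputs.std()   # spread induced by parameter uncertainty
print(predictive_mean, predictive_std)
```

Averaging over posterior draws is exactly the weighting "by their probability in view of the training data" described above, and the spread of the sampled outputs quantifies the uncertainty contributed by the parameters.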
In BMIs we are interested in problems where the noisy neuronal recordings can be thought of as the output of a system controlled by its state vector x, which could include kinematic variables such as position, velocity, and acceleration, as well as force. Moreover, we assume that the system state vector may change over time, as shown in Figure 5.2.
The state space representation of a discrete stochastic dynamical system is given by
x_{t+1} = x_t + v_t
z_t     = f(u_t, x_t) + n_t        (5.6)
where the noise term v_t is called the process noise and n_t the measurement noise. The first equation defines a first-order Markov process p(x_{t+1} | x_t) (also called the system model), whereas the second defines the likelihood of the observations p(z_t | x_t) (also called the measurement model).
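To make (5.6) concrete, the following sketch simulates one possible instance of such a system: a random-walk system model for the kinematic state and an arbitrary tuning-type nonlinearity standing in for f(u_t, x_t), with u_t treated as a known exogenous input. The noise levels, dimensions, and the particular choice of f are illustrative assumptions only, not the model used later in the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def system_model(x_t):
    """Sample x_{t+1} ~ p(x_{t+1} | x_t): random-walk transition plus process noise v_t."""
    v_t = rng.normal(scale=0.05, size=x_t.shape)
    return x_t + v_t

def measurement_model(u_t, x_t):
    """Sample z_t ~ p(z_t | x_t): illustrative nonlinearity plus measurement noise n_t."""
    f = np.tanh(u_t + 1.5 * x_t)    # stand-in for f(u_t, x_t)
    n_t = rng.normal(scale=0.1, size=x_t.shape)
    return f + n_t

# Simulate T steps of a 2-D kinematic state and the corresponding noisy observations.
T, dim = 200, 2
u = np.zeros((T, dim))              # known exogenous input (zeros here)
x = np.zeros((T, dim))              # state trajectory, e.g., position and velocity
z = np.zeros((T, dim))              # noisy "recordings"
x[0] = rng.normal(size=dim)
z[0] = measurement_model(u[0], x[0])
for t in range(T - 1):
    x[t + 1] = system_model(x[t])
    z[t + 1] = measurement_model(u[t + 1], x[t + 1])
```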
FIGURE 5.2: Block diagram of the state machine for BMIs: a time-series model links the state x to the observation z, and P(state | observation) is obtained through a prediction and update cycle.
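The prediction/update cycle of Figure 5.2 can be realized, for example, with a sequential Monte Carlo (particle) filter: each particle is propagated through the system model (prediction) and then reweighted by the likelihood of the new observation (update), so that the weighted particle set approximates P(state | observation), i.e., p(x_t | z_{1:t}). The sketch below is a minimal bootstrap filter that reuses the illustrative system and measurement models from the simulation above; it is not the specific decoder developed in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def propagate(particles, noise_scale=0.05):
    """Prediction step: sample from p(x_{t+1} | x_t) for every particle."""
    return particles + rng.normal(scale=noise_scale, size=particles.shape)

def likelihood(z_t, u_t, particles, noise_scale=0.1):
    """Update step: evaluate p(z_t | x_t) per particle (Gaussian measurement noise assumed)."""
    predicted = np.tanh(u_t + 1.5 * particles)          # same illustrative f(u_t, x_t) as above
    log_w = -0.5 * np.sum((z_t - predicted) ** 2, axis=1) / noise_scale ** 2
    return np.exp(log_w - log_w.max())                  # unnormalized, shifted to avoid underflow

def particle_filter(z, u, n_particles=1000, dim=2):
    """Recursive Bayesian estimation of the states from observations z and known inputs u."""
    particles = rng.normal(size=(n_particles, dim))     # initial belief about the state
    estimates = np.zeros((len(z), dim))
    for t in range(len(z)):
        particles = propagate(particles)                # prediction: approximate p(x_t | z_{1:t-1})
        w = likelihood(z[t], u[t], particles)           # update: weight by p(z_t | x_t)
        w /= w.sum()
        estimates[t] = w @ particles                    # posterior-mean estimate of x_t
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]                      # resample to avoid weight degeneracy
    return estimates

# Usage with the z and u generated in the previous sketch:
# x_hat = particle_filter(z, u)
```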
 