As expected, the Gaussian assumption allows an analytical solution for (6.5) and, therefore, a closed-form solution of (6.3), given as (6.7).
Although the above set of equations may seem daunting, they can be interpreted quite easily. First, (6.7a) establishes a prediction for the state based on the previous value. Then, (6.7b) and (6.7c) are used in (6.7d) to correct, or refine, the previous estimate, after which the recursive calculation is repeated.
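The prediction-then-correction flow of (6.7) can be sketched with a minimal example. The scalar model below (a random-walk state with additive Gaussian observation noise) is a hypothetical stand-in, since the actual equations are not reproduced in this excerpt; only the predict/correct structure mirrors (6.7a)-(6.7d).

```python
import numpy as np

def kalman_1d(observations, q=0.01, r=0.1):
    """Scalar predict/correct recursion in the spirit of (6.7).

    Hypothetical model (not from the text): random-walk state
    x_k = x_{k-1} + noise(var q), observation y_k = x_k + noise(var r).
    """
    x, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for y in observations:
        # Prediction for the state based on the previous value, cf. (6.7a)
        x_pred, p_pred = x, p + q
        # Gain and correction of the predicted estimate, cf. (6.7b)-(6.7d)
        gain = p_pred / (p_pred + r)
        x = x_pred + gain * (y - x_pred)
        p = (1.0 - gain) * p_pred
        estimates.append(x)
    return np.asarray(estimates)

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.1, 200))   # slowly drifting true state
obs = truth + rng.normal(0, 0.3, 200)        # noisy observations
est = kalman_1d(obs, q=0.01, r=0.09)
print(np.mean((est - truth) ** 2) < np.mean((obs - truth) ** 2))
```

The recursion needs only the previous estimate and the current observation, which is what makes the calculation cheap to repeat at every time step.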
6.2 MONTE CARLO SEQUENTIAL ESTIMATION FOR POINT PROCESSES
The Gaussian assumption applied to the posterior distribution in the algorithm just described may not hold in general. Therefore, for the discrete observations case, a nonparametric approach is developed here that poses no constraints on the form of the posterior density.
Suppose at time instant $k$ the previous system state is $x_{k-1}$. Recall that because the parameter $\theta$ was embedded in the state, all we need is the estimation of the state from the conditional intensity function, because the nonlinear relation $f(\cdot)$ is assumed known. Random state samples are generated using Monte Carlo simulations [12] in the neighborhood of the previous state according to (6.6). Then, weighted Parzen windowing [13] with a Gaussian kernel can be used to estimate the posterior density. Because the integral in the Chapman-Kolmogorov equation is linear and the posterior is approximated by a weighted sum of Gaussians centered at the samples, the integral can still be evaluated directly from the samples. The process is repeated recursively for each time instant, propagating the estimate of the posterior density, and the state itself, based on the discrete events over time. Notice that because of the recursive approach, the algorithm depends not only on the previous observation, but also on the whole path of the spike observation events.
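One recursion step of this scheme can be sketched as follows. The ingredients here are illustrative assumptions, not the text's actual model: a scalar state, a random-walk neighborhood for the state samples standing in for (6.6), and a hypothetical conditional intensity $\lambda(x)$ for the Poisson spike observations. The Parzen step forms the weighted Gaussian-kernel estimate of the posterior.

```python
import numpy as np

rng = np.random.default_rng(1)
N_S = 500                      # number of Monte Carlo samples per step

def lam(x):
    # Hypothetical conditional intensity: firing rate rises with the state.
    return np.exp(0.5 + 1.2 * x)

def step(samples, weights, dN, dt=0.01, sigma_q=0.05):
    # Random samples in the neighborhood of the previous state (stand-in for (6.6))
    samples = samples + rng.normal(0.0, sigma_q, size=samples.shape)
    # Reweight by the point-process likelihood of seeing dN spikes in the bin dt
    rate = lam(samples) * dt
    weights = weights * np.where(dN > 0, rate, 1.0 - rate)
    weights = weights / weights.sum()          # normalized weights
    return samples, weights

def posterior_density(x_grid, samples, weights, sigma=0.1):
    # Weighted Parzen estimate: Gaussian kernels centered at the samples
    diffs = x_grid[:, None] - samples[None, :]
    kern = np.exp(-0.5 * (diffs / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return kern @ weights

samples = rng.normal(0.0, 0.2, N_S)
weights = np.full(N_S, 1.0 / N_S)
for dN in [0, 1, 1, 0, 1]:                     # a short spike-count sequence
    samples, weights = step(samples, weights, dN)

x_grid = np.linspace(-1, 1, 201)
dens = posterior_density(x_grid, samples, weights)
x_hat = x_grid[np.argmax(dens)]                # MAP-style state estimate
print(round(float(x_hat), 3))
```

Because the estimate is a weighted sum of Gaussians, integrals against it (such as the Chapman-Kolmogorov propagation step) reduce to sums over the samples, which is the property exploited above.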
Let $\{x_{0:k}^{\,i}, w_k^{\,i}\}_{i=1}^{N_S}$ denote a random measure [14] in the posterior density $p(x_{0:k} \mid N_{1:k})$, where $\{x_{0:k}^{\,i}, i = 1, \ldots, N_S\}$ is the set of all state samples up to time $k$ with associated normalized weights $\{w_k^{\,i}, i = 1, \ldots, N_S\}$, and $N_S$ is the number of samples generated at each time index. Then, the posterior density at time $k$ can be approximated by a weighted convolution of the samples with a Gaussian kernel as
$$p(x_{0:k} \mid N_{1:k}) \approx \sum_{i=1}^{N_S} w_k^{\,i} \, k\big(x_{0:k} - x_{0:k}^{\,i}, \sigma\big) \qquad (6.8)$$
where $N_{1:k}$ is the set of spike observation events up to time $k$, modeled by an inhomogeneous Poisson process in the previous section, and $k(x, \sigma)$ is the Gaussian kernel in terms of $x$ with mean $x_{0:k}^{\,i}$ and covariance $\sigma$. By generating samples from a proposed density $q(x_{0:k} \mid N_{1:k})$ according to the prin-
 