In this chapter, the whole time series data $\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_K$ are collectively denoted $\mathbf{y}$, and the whole time series data $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_K$ are collectively denoted $\mathbf{x}$. We assume that the noise $\boldsymbol{\varepsilon}$ is Gaussian and is identically and independently distributed across time, i.e.,

$$\boldsymbol{\varepsilon} \sim \mathcal{N}(\boldsymbol{\varepsilon} \mid \mathbf{0}, \boldsymbol{\Lambda}^{-1}), \tag{B.13}$$
where we omit the notation of the time index $k$ from $\boldsymbol{\varepsilon}$. In Eq. (B.13), $\boldsymbol{\Lambda}$ is a diagonal precision matrix whose $j$th diagonal entry is equal to the noise precision of the $j$th observation data. Then, using Eqs. (B.13) and (C.3), the conditional probability $p(\mathbf{y}_k \mid \mathbf{x}_k)$ is obtained as
$$p(\mathbf{y}_k \mid \mathbf{x}_k) = \mathcal{N}(\mathbf{y}_k \mid \mathbf{H}\mathbf{x}_k, \boldsymbol{\Lambda}^{-1}). \tag{B.14}$$
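As a concrete illustration of Eq. (B.14), the sketch below draws an observation from the noise model and evaluates the Gaussian density in its precision form. All dimensions, the matrix $\mathbf{H}$, and the precision values are hypothetical, chosen only for this demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the text): 3 sensors, 2 sources.
H = rng.standard_normal((3, 2))   # mixing matrix H
Lam = np.diag([4.0, 2.0, 1.0])    # diagonal noise precision Lambda
x_k = rng.standard_normal(2)      # source vector x_k

def gaussian_logpdf(y, mean, precision):
    """log N(y | mean, precision^{-1}), evaluated in precision form."""
    d = y - mean
    _, logdet = np.linalg.slogdet(precision)
    return 0.5 * (logdet - len(d) * np.log(2 * np.pi) - d @ precision @ d)

# Generate y_k = H x_k + eps with eps ~ N(0, Lambda^{-1}), then evaluate (B.14).
eps = rng.multivariate_normal(np.zeros(3), np.linalg.inv(Lam))
y_k = H @ x_k + eps
print(gaussian_logpdf(y_k, H @ x_k, Lam))
```

Working with the precision matrix directly avoids inverting $\boldsymbol{\Lambda}$ inside the density, which is convenient when the precision is the quantity the model parameterizes.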
The conditional probability of the whole time series of $\mathbf{y}_k$ given the whole time series of $\mathbf{x}_k$ is given by
$$p(\mathbf{y} \mid \mathbf{x}) = p(\mathbf{y}_1, \ldots, \mathbf{y}_K \mid \mathbf{x}_1, \ldots, \mathbf{x}_K) = \prod_{k=1}^{K} p(\mathbf{y}_k \mid \mathbf{x}_k) = \prod_{k=1}^{K} \mathcal{N}(\mathbf{y}_k \mid \mathbf{H}\mathbf{x}_k, \boldsymbol{\Lambda}^{-1}). \tag{B.15}$$
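Because the noise is independent across time, the whole-series likelihood of Eq. (B.15) is simply the product of the per-time-point densities. The sketch below checks this numerically by comparing the sum of per-time-point log densities against a single stacked Gaussian whose precision is the block-diagonal matrix $I_K \otimes \boldsymbol{\Lambda}$; all dimensions and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes (not from the text): K = 5 time points, 3 sensors, 2 sources.
K, M, N = 5, 3, 2
H = rng.standard_normal((M, N))
Lam = np.diag([4.0, 2.0, 1.0])    # noise precision Lambda
X = rng.standard_normal((K, N))   # x_1, ..., x_K (one row per time point)
Y = X @ H.T + rng.multivariate_normal(np.zeros(M), np.linalg.inv(Lam), size=K)

def gaussian_logpdf(y, mean, precision):
    d = y - mean
    _, logdet = np.linalg.slogdet(precision)
    return 0.5 * (logdet - len(d) * np.log(2 * np.pi) - d @ precision @ d)

# Right-hand side of Eq. (B.15): sum of per-time-point log densities.
sum_logp = sum(gaussian_logpdf(Y[k], H @ X[k], Lam) for k in range(K))

# Left-hand side: one Gaussian over the stacked series, with block-diagonal
# precision I_K (x) Lambda -- the "independent across time" assumption.
big_prec = np.kron(np.eye(K), Lam)
joint_logp = gaussian_logpdf(Y.ravel(), (X @ H.T).ravel(), big_prec)

assert np.isclose(sum_logp, joint_logp)
```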
The prior distribution of $\mathbf{x}_k$ is assumed to be Gaussian and independent across time:
$$p(\mathbf{x}_k) = \mathcal{N}(\mathbf{x}_k \mid \mathbf{0}, \boldsymbol{\Phi}^{-1}). \tag{B.16}$$
The prior distribution for the whole time series of $\mathbf{x}_k$ is expressed as
$$p(\mathbf{x}) = p(\mathbf{x}_1, \ldots, \mathbf{x}_K) = \prod_{k=1}^{K} p(\mathbf{x}_k) = \prod_{k=1}^{K} \mathcal{N}(\mathbf{x}_k \mid \mathbf{0}, \boldsymbol{\Phi}^{-1}). \tag{B.17}$$
In this case, the posterior probability is independent across time and is given by
$$p(\mathbf{x} \mid \mathbf{y}) = p(\mathbf{x}_1, \ldots, \mathbf{x}_K \mid \mathbf{y}_1, \ldots, \mathbf{y}_K) = \prod_{k=1}^{K} p(\mathbf{x}_k \mid \mathbf{y}_k). \tag{B.18}$$
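The factorization in Eq. (B.18) can be seen from the precision of the stacked posterior: with the likelihood and prior both independent across time, the posterior precision of $(\mathbf{x}_1, \ldots, \mathbf{x}_K)$ is block diagonal, so the time points decouple. The sketch below verifies this block-diagonal structure numerically, using the standard conjugate-Gaussian posterior precision $\boldsymbol{\Phi} + \mathbf{H}^{T}\boldsymbol{\Lambda}\mathbf{H}$ (a standard result, not derived in this excerpt; dimensions and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sizes: K = 3 time points, 3 sensors, 2 sources.
K, M, N = 3, 3, 2
H = rng.standard_normal((M, N))
Lam = np.diag([4.0, 2.0, 1.0])   # noise precision Lambda
Phi = np.diag([1.0, 0.5])        # prior precision Phi

# For the stacked vector x = (x_1, ..., x_K), the prior precision and the
# likelihood contribution are both block diagonal, so the posterior precision
# is I_K (x) (Phi + H^T Lam H): block diagonal across time.
H_big = np.kron(np.eye(K), H)
Lam_big = np.kron(np.eye(K), Lam)
Phi_big = np.kron(np.eye(K), Phi)
post_prec = Phi_big + H_big.T @ Lam_big @ H_big

# Every cross-time block vanishes, so p(x|y) factorizes as in Eq. (B.18).
for i in range(K):
    for j in range(K):
        if i != j:
            assert np.allclose(post_prec[i*N:(i+1)*N, j*N:(j+1)*N], 0.0)
```

A block-diagonal precision means zero conditional dependence between time points, which is exactly the statement that the posterior is a product over $k$.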
The posterior probability $p(\mathbf{x}_k \mid \mathbf{y}_k)$ can be derived by substituting Eqs. (B.16) and (B.14) into Bayes' rule:
$$p(\mathbf{x}_k \mid \mathbf{y}_k) \propto p(\mathbf{y}_k \mid \mathbf{x}_k)\, p(\mathbf{x}_k). \tag{B.19}$$
Actual computation of $p(\mathbf{x}_k \mid \mathbf{y}_k)$ is performed in the following manner. Since we know that the posterior distribution is also Gaussian, the posterior distribution is assumed to be
$$p(\mathbf{x}_k \mid \mathbf{y}_k) = \mathcal{N}(\mathbf{x}_k \mid \bar{\mathbf{x}}_k, \boldsymbol{\Gamma}^{-1}), \tag{B.20}$$
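This excerpt ends before $\bar{\mathbf{x}}_k$ and $\boldsymbol{\Gamma}$ are specified. For a linear Gaussian model of this form, the standard conjugate result is $\boldsymbol{\Gamma} = \boldsymbol{\Phi} + \mathbf{H}^{T}\boldsymbol{\Lambda}\mathbf{H}$ and $\bar{\mathbf{x}}_k = \boldsymbol{\Gamma}^{-1}\mathbf{H}^{T}\boldsymbol{\Lambda}\mathbf{y}_k$; the sketch below computes these and checks the proportionality of Eq. (B.19) numerically, i.e., that the log of $p(\mathbf{y}_k \mid \mathbf{x})\,p(\mathbf{x}) / \mathcal{N}(\mathbf{x} \mid \bar{\mathbf{x}}_k, \boldsymbol{\Gamma}^{-1})$ does not depend on $\mathbf{x}$ (all dimensions and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sizes: 3 sensors, 2 sources.
M, N = 3, 2
H = rng.standard_normal((M, N))
Lam = np.diag([4.0, 2.0, 1.0])   # noise precision Lambda
Phi = np.diag([1.0, 0.5])        # prior precision Phi
y_k = rng.standard_normal(M)     # one observed time point

# Standard conjugate-Gaussian posterior (assumed, not from the excerpt):
Gamma = Phi + H.T @ Lam @ H                       # posterior precision
x_bar = np.linalg.solve(Gamma, H.T @ Lam @ y_k)   # posterior mean

def logpdf(z, mean, prec):
    d = z - mean
    _, logdet = np.linalg.slogdet(prec)
    return 0.5 * (logdet - len(d) * np.log(2 * np.pi) - d @ prec @ d)

# Eq. (B.19) check: log p(y_k|x) + log p(x) - log N(x | x_bar, Gamma^{-1})
# must be the same constant (namely log p(y_k)) for every x.
consts = []
for _ in range(4):
    x = rng.standard_normal(N)
    consts.append(logpdf(y_k, H @ x, Lam) + logpdf(x, np.zeros(N), Phi)
                  - logpdf(x, x_bar, Gamma))
assert np.allclose(consts, consts[0])
```

Using `np.linalg.solve` rather than forming $\boldsymbol{\Gamma}^{-1}$ explicitly is the usual numerically stable way to obtain the posterior mean.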