The column vector x is expressed as

\mathbf{x} = \left[ A_1(1), \ldots, A_q(1), \ldots, A_1(P), \ldots, A_q(P) \right]^T. \qquad (8.111)
The residual vector e is given by

\mathbf{e} = \left[ e(P+1), \ldots, e(K) \right]^T. \qquad (8.112)
Equation (8.108) is called the Yule-Walker equation. The least-squares estimate of x, \hat{\mathbf{x}}, is then obtained using

\hat{\mathbf{x}} = \left( \mathbf{G}^T \mathbf{G} \right)^{-1} \mathbf{G}^T \mathbf{y}. \qquad (8.113)
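As an illustration of Eqs. (8.111)-(8.113), the sketch below builds the regression matrix G from a q-channel time series of length K with model order P and solves the least-squares problem of Eq. (8.113) for each target channel in turn. The function name, the array shapes, the per-channel loop, and the use of NumPy's lstsq are assumptions of this sketch; only the column ordering of x and the estimator itself come from the text.

```python
import numpy as np

def mvar_least_squares(ts, P):
    """Least-squares estimate of the MVAR coefficients, Eq. (8.113).

    ts : array of shape (q, K), the q source time series of length K
    P  : MVAR model order
    Returns A_hat of shape (q, q, P), where A_hat[j, i, p - 1] = A_{j,i}(p).
    """
    q, K = ts.shape
    # Regression matrix G: one row per time point t = P+1, ..., K, with columns
    # holding the lagged samples ordered as in Eq. (8.111):
    # [y_1(t-1), ..., y_q(t-1), ..., y_1(t-P), ..., y_q(t-P)].
    G = np.column_stack(
        [ts[i, P - p:K - p] for p in range(1, P + 1) for i in range(q)]
    )
    A_hat = np.empty((q, q, P))
    for j in range(q):
        y = ts[j, P:]                     # y = [y_j(P+1), ..., y_j(K)]^T
        # Eq. (8.113): x_hat = (G^T G)^{-1} G^T y; lstsq solves the same
        # normal equations in a numerically stabler way.
        x_hat, *_ = np.linalg.lstsq(G, y, rcond=None)
        A_hat[j] = x_hat.reshape(P, q).T  # unpack into A_{j,i}(p)
    return A_hat
```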
8.7.2 Sparse Bayesian (Champagne) Algorithm
Since causality analysis is generally performed on source time series estimated from non-averaged data, the estimated time series inevitably contains a large amount of noise, which may cause errors in the MVAR-coefficient estimation. One approach to reducing such errors is to impose a sparsity constraint when estimating the MVAR coefficients. The key assumption here is that true brain interactions cause only a small number of MVAR coefficients to take non-zero values, while most of the MVAR coefficients remain zero. If this assumption holds, the sparsity constraint should prevent MVAR coefficients that should be zero from taking erroneous non-zero values.
We can apply a simpler version of the Champagne algorithm described in Chap. 4 to this MVAR-coefficient estimation. In the Champagne algorithm, the prior probability distribution of x, p(x), is assumed to be Gaussian:

p(\mathbf{x}) = \mathcal{N}(\mathbf{x} \mid \mathbf{0}, \Phi^{-1}), \qquad (8.114)
where \Phi is a diagonal precision matrix. The probability distribution of y given x, p(y | x), is also assumed to be Gaussian:

p(\mathbf{y} \mid \mathbf{x}) = \mathcal{N}(\mathbf{y} \mid \mathbf{G}\mathbf{x}, \Lambda^{-1}), \qquad (8.115)
where \Lambda is a diagonal noise precision matrix. Then, the posterior distribution of x, p(x | y), is shown to be Gaussian and is expressed as

p(\mathbf{x} \mid \mathbf{y}) = \mathcal{N}(\mathbf{x} \mid \bar{\mathbf{x}}, \Gamma^{-1}). \qquad (8.116)
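To make Eq. (8.116) concrete, the sketch below computes the posterior mean and precision under the standard Gaussian linear-model identities, \Gamma = \Phi + \mathbf{G}^T \Lambda \mathbf{G} and \bar{\mathbf{x}} = \Gamma^{-1} \mathbf{G}^T \Lambda \mathbf{y}. These closed forms are the generic results for the prior (8.114) and likelihood (8.115); they are stated here as an assumption, since the book's own expressions for \bar{\mathbf{x}} and \Gamma are not reproduced in this excerpt.

```python
import numpy as np

def gaussian_posterior(G, y, phi_diag, lambda_diag):
    """Posterior mean and precision for the model of Eqs. (8.114)-(8.116).

    Assumes the standard Gaussian linear-model results
        Gamma = Phi + G^T Lambda G,   x_bar = Gamma^{-1} G^T Lambda y,
    with Phi = diag(phi_diag) and Lambda = diag(lambda_diag).
    """
    # Posterior precision: prior precision plus the data term.
    Gamma = np.diag(phi_diag) + G.T @ (lambda_diag[:, None] * G)
    # Posterior mean: solve Gamma x_bar = G^T Lambda y.
    x_bar = np.linalg.solve(Gamma, G.T @ (lambda_diag * y))
    return x_bar, Gamma
```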
Unlike the Champagne algorithm in Chap. 4, the update equations for \Phi and \Lambda can be derived using the expectation-maximization (EM) algorithm. This is because,
 
Custom Search