Assuming that the approximation p(x|y) ≈ p(x|y, α̂) holds, the hyperparameter α̂ is obtained as the one that maximizes p(y|α). This p(y|α) is referred to as the data evidence or the marginal likelihood.
Let us summarize the procedure to estimate the source distribution x. First, we estimate the hyperparameter α by maximizing the marginal likelihood function,

$$
\hat{\boldsymbol{\alpha}} = \operatorname*{arg\,max}_{\boldsymbol{\alpha}} \; p(\mathbf{y} \mid \boldsymbol{\alpha}).
$$

Next, this α̂ is substituted into the posterior distribution p(x|y, α) to obtain p(x|y, α̂). When this posterior is the Gaussian distribution in Eq. (4.12), the precision and mean are obtained by substituting Φ = diag(α̂) into Eqs. (4.13) and (4.14). The voxel time courses are reconstructed by computing x̄_k in Eq. (4.14) for k = 1, ..., K.
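To make the two steps concrete, the following minimal sketch (Python with NumPy; all names are illustrative) assumes the standard linear-Gaussian results for the unit-noise-precision model of this chapter: the posterior precision is Φ + HᵀH and the posterior mean is x̄_k = (Φ + HᵀH)⁻¹Hᵀy_k (standing in for Eqs. (4.13) and (4.14), which are not repeated here), and each y_k is marginally Gaussian with zero mean and covariance I + HΦ⁻¹Hᵀ. A crude grid search stands in for the actual maximization over α, whose update rules are derived later.

```python
import numpy as np

def posterior_moments(H, Y, alpha):
    """Posterior precision and means for the model of Eqs. (4.17)-(4.18).

    Assumes the standard linear-Gaussian result with unit noise precision:
    precision Gamma = Phi + H^T H, mean x_bar_k = Gamma^{-1} H^T y_k
    (the roles played by Eqs. (4.13) and (4.14)).
    H: (M, N) lead field, Y: (M, K) sensor data, alpha: (N,) hyperparameters.
    """
    Phi = np.diag(alpha)                     # prior precision, Phi = diag(alpha)
    Gamma = Phi + H.T @ H                    # posterior precision
    X_bar = np.linalg.solve(Gamma, H.T @ Y)  # posterior means x_bar_k as columns
    return Gamma, X_bar

def log_marginal_likelihood(H, Y, alpha):
    """log p(y|alpha), assuming y_k ~ N(0, I + H Phi^{-1} H^T).

    This closed form is what the integral in Eq. (4.19) evaluates to for the
    unit-noise-precision model; the function name is illustrative.
    """
    M, K = Y.shape
    Sigma_y = np.eye(M) + H @ np.diag(1.0 / alpha) @ H.T
    _, logdet = np.linalg.slogdet(Sigma_y)
    quad = np.trace(Y.T @ np.linalg.solve(Sigma_y, Y))
    return -0.5 * (K * M * np.log(2.0 * np.pi) + K * logdet + quad)

# Step 1: estimate alpha by maximizing the marginal likelihood.  A crude grid
# search over a few candidate values stands in for the actual update rules.
rng = np.random.default_rng(0)
M, N, K = 8, 5, 50
H = rng.standard_normal((M, N))
Y = rng.standard_normal((M, K))
candidates = [np.full(N, a) for a in (0.1, 1.0, 10.0)]
alpha_hat = max(candidates, key=lambda a: log_marginal_likelihood(H, Y, a))

# Step 2: substitute alpha_hat into the posterior to obtain its precision and
# the posterior means x_bar_k, k = 1, ..., K (the reconstructed time courses).
Gamma, X_bar = posterior_moments(H, Y, alpha_hat)
```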
4.3 Cost Function for Marginal Likelihood Maximization
As described in the preceding section, the hyperparameter α is estimated by maximizing the marginal likelihood p(y|α). In this section, we describe the maximization of the marginal likelihood, and to do so, let us derive an explicit form of the log marginal likelihood, log p(y|α). Substituting³
$$
p(\mathbf{x} \mid \boldsymbol{\alpha}) = \prod_{k=1}^{K} p(\mathbf{x}_k \mid \boldsymbol{\alpha}) = \prod_{k=1}^{K} \frac{|\boldsymbol{\Phi}|^{1/2}}{(2\pi)^{N/2}} \exp\left( -\frac{1}{2} \mathbf{x}_k^T \boldsymbol{\Phi}\, \mathbf{x}_k \right), \tag{4.17}
$$

and

$$
p(\mathbf{y} \mid \mathbf{x}) = \prod_{k=1}^{K} p(\mathbf{y}_k \mid \mathbf{x}_k) = \frac{1}{(2\pi)^{MK/2}} \exp\left( -\frac{1}{2} \sum_{k=1}^{K} \left\| \mathbf{y}_k - \mathbf{H}\mathbf{x}_k \right\|^2 \right), \tag{4.18}
$$
into

$$
p(\mathbf{y} \mid \boldsymbol{\alpha}) = \int p(\mathbf{y}, \mathbf{x} \mid \boldsymbol{\alpha})\, d\mathbf{x} = \int p(\mathbf{y} \mid \mathbf{x})\, p(\mathbf{x} \mid \boldsymbol{\alpha})\, d\mathbf{x},
$$
we obtain
$$
p(\mathbf{y} \mid \boldsymbol{\alpha}) = \frac{1}{(2\pi)^{MK/2}} \, \frac{|\boldsymbol{\Phi}|^{K/2}}{(2\pi)^{NK/2}} \int \exp\left[ -\frac{1}{2} D \right] d\mathbf{x}, \tag{4.19}
$$
where

$$
D = \sum_{k=1}^{K} \left\| \mathbf{y}_k - \mathbf{H}\mathbf{x}_k \right\|^2 + \sum_{k=1}^{K} \mathbf{x}_k^T \boldsymbol{\Phi}\, \mathbf{x}_k. \tag{4.20}
$$
³ Note that M and N are respectively the sizes of y_k and x_k.
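As a quick numerical check of how Eqs. (4.17), (4.18), and (4.20) fit together, the sketch below (illustrative Python, random data) evaluates log p(x|α) and log p(y|x) directly and confirms that their sum equals the log of the constants in front of the integral in Eq. (4.19) minus D/2.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, K = 6, 4, 3                        # sizes of y_k and x_k (see footnote 3)
H = rng.standard_normal((M, N))          # lead-field matrix H
alpha = rng.uniform(0.5, 2.0, size=N)    # hyperparameters alpha
Phi = np.diag(alpha)                     # Phi = diag(alpha)
X = rng.standard_normal((N, K))          # source vectors x_k as columns
Y = H @ X + rng.standard_normal((M, K))  # sensor vectors y_k as columns

# log p(x|alpha), Eq. (4.17): product of K Gaussians with precision Phi
log_prior = (K / 2) * np.linalg.slogdet(Phi)[1] \
    - (K * N / 2) * np.log(2 * np.pi) \
    - 0.5 * np.sum(X * (Phi @ X))

# log p(y|x), Eq. (4.18): unit-precision Gaussian noise on each y_k
log_lik = -(M * K / 2) * np.log(2 * np.pi) \
    - 0.5 * np.sum((Y - H @ X) ** 2)

# D, Eq. (4.20): residual term plus prior quadratic term, summed over k
D = np.sum((Y - H @ X) ** 2) + np.sum(X * (Phi @ X))

# Log of the integrand in Eq. (4.19): log of the constants minus D/2
log_integrand = -(M * K / 2) * np.log(2 * np.pi) \
    + (K / 2) * np.linalg.slogdet(Phi)[1] \
    - (K * N / 2) * np.log(2 * np.pi) \
    - 0.5 * D

assert np.isclose(log_prior + log_lik, log_integrand)
```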
 
 