constraints. For example, in the low resolution brain electromagnetic tomography (LORETA) method (i.e., spatial Laplacian minimization), $\Sigma_J = (W^T D^T D W)^{-1}$, where $W = \operatorname{diag}(\|L_{:i}\|_2)$ is a lead-field normalization matrix and $D$ is the discrete spatial Laplacian operator [66].
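To make this prior concrete, here is a minimal numerical sketch, assuming a toy 1-D chain of sources and a random lead field (the sizes, the chain Laplacian, and all variable names are illustrative assumptions, not from the text):

```python
import numpy as np

n_sensors, n_sources = 32, 8

# Hypothetical lead field; in practice L comes from the forward head model.
rng = np.random.default_rng(0)
L = rng.standard_normal((n_sensors, n_sources))

# W = diag(||L_:i||_2): per-source lead-field (depth) normalization.
W = np.diag(np.linalg.norm(L, axis=0))

# Discrete spatial Laplacian on a 1-D chain (second differences);
# a real head model would use a 3-D neighborhood Laplacian instead.
D = -2.0 * np.eye(n_sources) + np.eye(n_sources, k=1) + np.eye(n_sources, k=-1)

# LORETA source prior covariance: Sigma_J = (W^T D^T D W)^{-1}.
Sigma_J = np.linalg.inv(W.T @ D.T @ D @ W)
```

Penalizing $\|D W J\|^2$ in this way favors spatially smooth current distributions, which is why LORETA estimates are low resolution but stable.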
To obtain more focal estimates, MAP estimation can be performed using super-Gaussian priors such as the Laplacian pdf, which is equivalent to obtaining minimum-$\ell_1$-norm solutions, often called minimum current estimates (MCE) [48, 94]. These are traditionally computed using linear programming, but can alternatively be obtained more efficiently using an expectation maximization (EM) algorithm by parameterizing the prior as a Gaussian scale mixture. This approach can be used to find MAP estimates with generalized Gaussian prior pdfs defined by an exponent $p \le 2$ (the Laplacian being the special case $p = 1$).
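As a worked restatement (the scale parameters $\lambda$ and $\lambda'$ are illustrative assumptions, not from the text), such a prior and the corresponding MAP problem can be written as

$$p(\mathbf{J}) \propto \exp\Bigl(-\lambda \sum_i \|J_{i:}\|^p\Bigr), \qquad \hat{\mathbf{J}} = \arg\min_{\mathbf{J}} \, \|B - L\mathbf{J}\|_F^2 + \lambda' \sum_i \|J_{i:}\|^p,$$

so $p = 1$ gives the $\ell_1$-type objective solved by MCE, while $p = 2$ recovers the Gaussian, minimum-$\ell_2$-norm case.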
These source priors can be formulated within a hierarchical Bayes framework, in which each $J_{i:}$ has a Gaussian prior, $p(J_{i:} \mid \alpha_i^{-1}) = \mathcal{N}(J_{i:} \mid 0, \alpha_i^{-1} I)$, with zero mean and covariance $\alpha_i^{-1} I$, and each $\alpha_i^{-1}$ has a hyperprior $p(\alpha_i^{-1} \mid \gamma)$ that controls the shape of the pdf. The variances are integrated out to obtain the prior

$$p(J_{i:} \mid \gamma) = \int p(J_{i:} \mid \alpha_i^{-1})\, p(\alpha_i^{-1} \mid \gamma)\, d\alpha_i^{-1}. \qquad (8.15)$$
Different priors can be obtained by assuming different hyperpriors. For example, the Laplacian prior is obtained with an exponential hyperprior $p(\alpha_i^{-1} \mid \gamma) = \frac{\gamma}{2}\exp\bigl(-\frac{\gamma}{2}\alpha_i^{-1}\bigr)$, and the Jeffreys prior $p(J_{i:}) \propto \|J_{i:}\|_F^{-1}$ is obtained with the noninformative Jeffreys hyperprior $p(\alpha_i^{-1}) = \alpha_i$, which has the advantage of being scale invariant and parameter free.
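To see how (8.15) produces the Laplacian, consider the scalar case with variance $v = \alpha_i^{-1}$ (a standard Gaussian scale-mixture identity; the reduction to one dimension is for illustration):

$$\int_0^\infty \mathcal{N}(x \mid 0, v)\,\frac{\gamma}{2}\exp\Bigl(-\frac{\gamma}{2}v\Bigr)\,dv = \frac{\sqrt{\gamma}}{2}\exp\bigl(-\sqrt{\gamma}\,|x|\bigr),$$

so averaging zero-mean Gaussians over exponentially distributed variances yields precisely the heavy-tailed Laplacian pdf.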
The EM algorithm minimizes the negative log posterior by alternating between two steps. In the E-step, the conditional expectation of the inverse source variances at the $k$th iteration, $A^{(k)} = \operatorname{diag}(\alpha^{(k)})$, given $B$, $J^{(k)}$, and $\sigma_\Upsilon^{2(k)}$, is computed

$$E\Bigl[\alpha_i^{(k)} \,\Big|\, J^{(k)}, \sigma_\Upsilon^{2(k)}\Bigr] = \Bigl(\tfrac{1}{d_v}\bigl\|J_{i:}^{(k)}\bigr\|_F^2\Bigr)^{\frac{p-2}{2}}. \qquad (8.16)$$
In the M-step, the noise variance and the current density estimates are computed

$$\hat{\sigma}_\Upsilon^{2(k+1)} = \frac{1}{d_b\, d_v}\bigl\|B - L J^{(k)}\bigr\|_F^2, \qquad (8.17)$$
$$J^{(k+1)} = \Sigma_J^{(k)} L^T \Bigl(L\, \Sigma_J^{(k)} L^T + \Sigma_\Upsilon^{(k+1)}\Bigr)^{-1} B, \qquad (8.18)$$
where $\Sigma_J^{(k)} = E\bigl[A^{(k)} \,\big|\, J^{(k)}, \sigma_\Upsilon^{2(k)}\bigr]^{-1}$ and $\Sigma_\Upsilon^{(k+1)} = \hat{\sigma}_\Upsilon^{2(k+1)} I$ are the source and noise covariance matrices. Note that the noise variance update rule implements MAP estimation with a non-Gaussian prior on $\hat{\sigma}_\Upsilon^2$. In practice, the discrepancy principle is often used based on some reasonable expected representation error to avoid under-regularizing.
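The loop below is a minimal NumPy sketch of one reading of these updates, equations (8.16)-(8.18); the initialization, the numerical floor eps, and the fixed iteration count are assumptions for illustration, with B the $d_b \times d_v$ data matrix and L the lead field:

```python
import numpy as np

def em_generalized_gaussian(B, L, p=1.0, n_iter=50, eps=1e-12):
    """Sketch of the EM updates (8.16)-(8.18) for MAP source estimation
    under a generalized Gaussian prior (p = 1 approximates MCE)."""
    d_b, d_v = B.shape
    J = np.linalg.pinv(L) @ B                  # initial current density
    sigma2 = 1.0                               # initial noise variance
    for _ in range(n_iter):
        # E-step (8.16): expected inverse variances from the row powers of J.
        row_power = np.sum(J ** 2, axis=1) / d_v
        alpha = (row_power + eps) ** ((p - 2.0) / 2.0)
        Sigma_J = np.diag(1.0 / alpha)         # source covariance Sigma_J^(k)

        # M-step (8.17): noise variance from the data residual.
        sigma2 = np.sum((B - L @ J) ** 2) / (d_b * d_v)

        # M-step (8.18): Wiener-style update of the current density.
        G = L @ Sigma_J @ L.T + sigma2 * np.eye(d_b)
        J = Sigma_J @ L.T @ np.linalg.solve(G, B)
    return J, sigma2
```

For $p = 1$ this behaves as an iteratively reweighted scheme: rows of $J$ with small power receive large $\alpha_i$ and are shrunk toward zero, which drives the estimates toward focal, MCE-like solutions.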
When $J^{(k+1)} = J^{(k)}$ and $\sigma_\Upsilon^{2(k+1)} = \sigma_\Upsilon^{2(k)}$, the algorithm has converged and the MAP inverse operator for this generalized Gaussian prior (e.g., $p = 1$) can