Biomedical Engineering Reference
In-Depth Information
In a general setting Eq. (6.22) can be a difficult optimization problem and, furthermore, the nature of the underlying cost function is not immediately transparent. Consequently, we advocate an indirect alternative utilizing the pseudo-source decomposition given by s described previously, which leads to an efficient EM implementation and a readily interpretable cost function. It also demonstrates that both FOCUSS and MCE can be viewed as EM algorithms that are readily generalized to handle more complex spatio-temporal constraints. Explicitly, we will minimize
\mathcal{L}(s) \triangleq -2 \log \int p(y \mid s)\, p(s \mid \gamma)\, p(\gamma)\, d\gamma
             = -2 \log p(y \mid s)\, p(s)
             \equiv \| y - L s \|^2_{\Sigma^{-1}} + \sum_{i} g_i\left( \| s_i \|_F^2 \right),        (6.23)
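As a concrete illustration, the cost in Eq. (6.23) can be evaluated numerically. The sketch below is a minimal example, assuming an identity noise covariance, a randomly generated matrix L, and the penalty choice g_i(z) = c z^{p/2} discussed later in this section; all dimensions, names, and constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: d_y sensors, d_s sources, d_t time points
d_y, d_s, d_t = 5, 20, 3
L = rng.standard_normal((d_y, d_s))          # forward / dictionary matrix
S = np.zeros((d_s, d_t))
S[[2, 7]] = rng.standard_normal((2, d_t))    # two active rows (sparse sources)
Y = L @ S + 0.01 * rng.standard_normal((d_y, d_t))

def cost(S, Y, L, p=1.0, c=1.0):
    """Evaluate Eq. (6.23) with g_i(z) = c * z**(p/2) and identity
    noise covariance (both simplifying assumptions)."""
    data_fit = np.sum((Y - L @ S) ** 2)             # ||Y - L S||_F^2
    row_norms_sq = np.sum(S ** 2, axis=1)           # ||s_i||_F^2 per row
    penalty = np.sum(c * row_norms_sq ** (p / 2))   # sum_i g_i(||s_i||_F^2)
    return data_fit + penalty
```

With p = 1 the penalty reduces to a sum of row norms (an MCE-like choice), while smaller p yields a FOCUSS-like, more aggressively sparsifying cost.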
where g_i(\cdot) is defined as

g_i\left( \| s_i \|_F^2 \right) \triangleq -2 \log \int p(s_i \mid \gamma_i)\, p(\gamma_i)\, d\gamma_i.        (6.24)
For many choices of the hyperprior, the associated g_i(\cdot) may not be available in closed form. Moreover, it is often more convenient and transparent to directly assume the form of g_i(\cdot) rather than infer its value from some postulated hyperprior. Virtually any non-decreasing, concave function g_i(\cdot) of interest can be generated by the proposed hierarchical model. In other words, there will always exist some p(\gamma_i), possibly improper, such that the stated Gaussian mixture representation will produce any desired concave g_i(\cdot). For example, a generalized version of MCE and FOCUSS can be produced from the selection g_i(z) = c_i z^{p/2}, which is concave and amenable to a Gaussian scale-mixture representation for any p \in (0, 2] and constant c_i > 0.
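These two properties are easy to sanity-check numerically. The following sketch (with an arbitrary constant c and grid, both hypothetical) verifies that g(z) = c z^{p/2} is non-decreasing and concave on z > 0 for several values of p in (0, 2]:

```python
import numpy as np

def g(z, p, c=1.0):
    """Penalty g(z) = c * z**(p/2); c and the test grid are arbitrary."""
    return c * z ** (p / 2.0)

z = np.linspace(0.1, 10.0, 200)
for p in (0.5, 1.0, 2.0):
    vals = g(z, p)
    # Non-decreasing: first differences are positive
    assert np.all(np.diff(vals) > 0)
    # Concave: second differences are non-positive (up to round-off)
    assert np.all(np.diff(vals, n=2) <= 1e-12)
```

At p = 2 the penalty is linear in z (the boundary case); for p < 2 it is strictly concave, which is what favors sparse solutions.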
Presumably, there are a variety of ways to optimize Eq. (6.23). One particularly
straightforward and convenient method exploits the hierarchical structure inherent
in the assumed Bayesian model. This leads to simple and efficient EM-based update
rules. It also demonstrates that the canonical FOCUSS iterations are equivalent to
principled EM updates. Likewise, regularized MCE solutions can also be obtained
in the same manner.
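To make this connection concrete, here is a minimal sketch of the resulting reweighted iteration for a single time point, assuming g_i(z) = c_i z^{p/2}, identity noise covariance, and a small Tikhonov regularizer lam; the function name and all constants are illustrative rather than taken from the text. Under these assumptions the E-step expectation acts (up to constants) like an effective variance \gamma_i \approx |s_i|^{2-p}, and the M-step is a weighted minimum-norm solve.

```python
import numpy as np

def em_focuss(y, L, p=1.0, lam=1e-2, n_iter=50, eps=1e-12):
    """Sketch of an EM-style FOCUSS/MCE iteration (single time point).

    E-step: with g_i(z) = c_i * z**(p/2), the required expectation
    reduces, up to constants, to an effective variance |s_i|**(2 - p).
    M-step: MAP estimate of s with gamma held fixed, i.e. a reweighted
    Tikhonov-regularized least-squares solve.
    """
    d_y, d_s = L.shape
    # Minimum-norm initialization
    s = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(d_y), y)
    for _ in range(n_iter):
        gamma = np.abs(s) ** (2.0 - p) + eps                 # E-step
        G = L * gamma                                        # L @ diag(gamma)
        s = gamma * (L.T @ np.linalg.solve(G @ L.T + lam * np.eye(d_y), y))  # M-step
    return s

# Hypothetical usage: recover a 2-sparse vector from 8 measurements
rng = np.random.default_rng(1)
d_y, d_s = 8, 16
L = rng.standard_normal((d_y, d_s))
s_true = np.zeros(d_s)
s_true[3], s_true[11] = 2.0, -3.0
s_hat = em_focuss(L @ s_true, L, p=1.0)
```

For p close to 0 the reweighting concentrates mass on few coefficients (FOCUSS-like behavior), while p = 1 yields an MCE-like solution; p = 2 removes the reweighting entirely and gives a standard minimum-norm estimate.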
Ultimately, we would like to estimate each s_i, which in turn gives us the true sources S. If we knew the values of the hyperparameters \gamma, this would be straightforward; however, these are of course unknown. Consequently, in the EM framework, \gamma is treated as hidden data whose distribution (or relevant expectation) is computed during the E-step. The M-step then computes the MAP estimate of s assuming that \gamma equals the appropriate expectation. For the (k+1)-th E-step, the expected value of each \gamma_i^{-1} under the distribution p(\gamma_i \mid y, s^{(k)}) is required (see the M-step below) and can be computed analytically assuming g_i(\cdot) is differentiable, regardless of the underlying form of p(\gamma). Assuming g_i(z) = c_i z^{p/2}, it can be shown that