sible, extend three broad classes of Bayesian inference methods: γ-MAP, which involves integrating out the unknown sources and optimizing the hyperparameters; s-MAP, which integrates out the hyperparameters and directly optimizes over the sources; and variational approximation methods (VB), which attempt to account for uncertainty in all unknowns. Together, these three classes encompass a surprisingly wide range of existing source reconstruction approaches, which makes general theoretical analyses and algorithmic extensions/improvements pertaining to them particularly relevant. Thus far, we have attempted to relate and extend three large classes of Bayesian inverse methods, all of which turn out to be performing covariance component estimation/pruning using different sparsity-promoting regularization procedures. We now provide some summary points related to connections to existing methods.
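As a concrete reference for these summary points, the following sketch (a simplified illustration with invented dimensions, not the implementation developed in this chapter) assembles the sensor covariance from a weighted sum of predetermined source covariance components and evaluates the Gaussian log-evidence log p(y) that the three method classes approximate or bound in different ways:

import numpy as np

def sensor_covariance(L, components, gammas, noise_cov):
    # Sigma_y = Sigma_eps + L (sum_i gamma_i C_i) L^T
    source_cov = sum(g * C for g, C in zip(gammas, components))
    return noise_cov + L @ source_cov @ L.T

def log_evidence(Y, Sigma_y):
    # Gaussian marginal likelihood of the sensor data Y (sensors x time):
    # log p(Y) = -0.5 * sum_t [ y_t' Sigma_y^{-1} y_t + log|Sigma_y| + n log(2*pi) ]
    n, T = Y.shape
    _, logdet = np.linalg.slogdet(Sigma_y)
    quad = np.trace(Y.T @ np.linalg.solve(Sigma_y, Y))
    return -0.5 * (quad + T * (logdet + n * np.log(2.0 * np.pi)))

Each weight gamma_i scales how much its component C_i contributes to the modeled covariance; driving a weight to zero removes that component entirely, which is the pruning behavior referred to in the points below.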
1. s-MAP, γ-MAP, and VB can be viewed as procedures for learning a source covariance model using a set of predetermined symmetric, positive semi-definite covariance components. The number of components in this set, each of which acts as a constraint on the source space, can be extremely large, potentially much larger than the number of sensors. However, a natural pruning mechanism effectively discards components that are unsupported by the data. This occurs because of an intrinsic sparsity preference in the Gaussian scale-mixture model, which is manifested in an explicit sparsity-inducing regularization term. Consequently, it is not crucial that the user/analyst manually determine an optimal set of components a priori; many components can be included initially, allowing the learning process to remove superfluous ones.
2. The wide variety of Bayesian source localization methods that fall under this framework can be differentiated by the following factors: (1) selection of the covariance component regularization term; (2) choice of the initial covariance component set C; (3) optimization method/update rules; and (4) approximation to log p(y); this last factor determines whether we are ultimately performing s-MAP, γ-MAP, or VB.
3. Covariance component possibilities include geodesic neural basis functions for estimating distributed sources [34], spatial smoothing factors [24], indicator matrices to couple dipole components or learn flexible orientations [36], fMRI-based factors [31], and temporal and spectral constraints [21]; two illustrative constructions are sketched after this list.
4. With large numbers of covariance components, s-MAP, γ-MAP, and VB provably remove or prune a certain number of components that are not necessary for representing the observed data.
5. In principle, the noise-plus-interference covariance can be jointly estimated as well, competing with all the other components to model the data. However, identifiability issues can be a concern here, and so we consider it wiser to estimate Σ via other means (e.g., using VBFA applied to prestimulus data as described in Chap. 5); a simple baseline estimate is sketched after this list.
6. The latent structure inherent to the Gaussian scale-mixture model leads to an efficient, principled family of update rules for s-MAP, γ-MAP, and VB. This facilitates the estimation of complex covariance structures modulated by very large numbers of hyperparameters (e.g., 10^5 or more) with relatively little difficulty; a schematic version of such an update, including the pruning of collapsed components, is sketched after this list.
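Two of the covariance component types mentioned in point 3 can be illustrated as follows; the Gaussian kernel width and the three-rows-per-dipole storage convention are assumptions made purely for this sketch, not the specific constructions of [24] or [36]:

import numpy as np

def smoothing_component(positions, width):
    # Spatial smoothing factor: a Gaussian kernel over source positions,
    # so that nearby sources are encouraged to co-vary.
    d2 = np.sum((positions[:, None, :] - positions[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def dipole_orientation_component(n_dipoles, dipole_index):
    # Indicator matrix coupling the x/y/z components of a single dipole,
    # assuming sources are stored as three consecutive rows per dipole.
    C = np.zeros((3 * n_dipoles, 3 * n_dipoles))
    i = 3 * dipole_index
    C[i:i + 3, i:i + 3] = np.eye(3)
    return C

Both matrices are symmetric and positive semi-definite, so each can be entered into the component set and weighted by its own hyperparameter.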
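For point 5, a simple stand-in for the VBFA estimator of Chap. 5 is the regularized sample covariance of prestimulus data; the diagonal-loading factor here is an arbitrary illustrative choice:

import numpy as np

def prestimulus_noise_cov(Y_pre, loading=1e-3):
    # Y_pre: sensors x time samples, recorded before stimulus onset.
    Yc = Y_pre - Y_pre.mean(axis=1, keepdims=True)
    C = (Yc @ Yc.T) / Yc.shape[1]
    # Diagonal loading keeps the estimate well conditioned.
    return C + loading * (np.trace(C) / C.shape[0]) * np.eye(C.shape[0])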
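Finally, the pruning behavior referred to in points 1, 4, and 6 can be illustrated in the simplest special case, where every covariance component is a single-source indicator and the hyperparameters follow a classic EM-style sparse Bayesian learning update; this is only a schematic stand-in for the γ-MAP, s-MAP, and VB rules derived in the text, and the iteration count and pruning tolerance are arbitrary:

import numpy as np

def em_update_with_pruning(Y, L, noise_cov, n_iter=100, tol=1e-8):
    # Y: sensors x time, L: sensors x sources; one variance hyperparameter per source.
    gamma = np.ones(L.shape[1])
    active = np.arange(L.shape[1])
    for _ in range(n_iter):
        La, ga = L[:, active], gamma[active]
        Sigma_y = noise_cov + (La * ga) @ La.T             # model sensor covariance
        B = np.linalg.solve(Sigma_y, La) * ga              # Sigma_y^{-1} L Gamma
        mu = B.T @ Y                                       # posterior source means
        post_var = ga - np.einsum('ij,ij->j', La * ga, B)  # posterior variances
        gamma[active] = np.mean(mu ** 2, axis=1) + post_var
        # Hyperparameters that collapse toward zero have their components pruned.
        active = active[gamma[active] > tol]
    return gamma, active

In practice many hyperparameters are driven to numerical zero within a modest number of iterations, which is the mechanism by which superfluous covariance components are discarded.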