interesting covariance component terms have been used to effect spatial smoothness,
depth bias compensation, and candidate locations of likely activity. With regard to
the latter, it has been suggested that prior information about a source location can be
coded by including a second term C_2, i.e., C = {C_1, C_2}, where C_2 is all zeros except a patch of
1's along the diagonal signifying a location of probable source activity, perhaps based
on fMRI data. For s-MAP, γ-MAP, and VB, we will obtain a source estimate representing
a trade-off (modulated by the relative values of the associated hyperparameters γ_1 and γ_2) between
honoring the prior information imposed by C_2 and the smoothness implied by C_1. The
limitation of this proposal is that we generally do not know, a priori, the regions where
activity is occurring with both high spatial and temporal resolution. Therefore, we
cannot reliably know how to choose an appropriate location-prior term C_2 in many
situations. A potential solution to this dilemma is to try out many different (or even all
possible) combinations of location priors. For example, if we assume the underlying
source currents are formed from a collection of dipolar point sources located at each
vertex of the leadfield grid, then we may choose C = {C_i = e_i e_i^T : i = 1, ..., d_S}, where each
e_i is a standard indexing vector of zeros with a "1" for the ith element (and so C_i =
e_i e_i^T encodes a prior preference for a dipolar source at location i). This specification
for the prior involves the counterintuitive addition of an unknown hyperparameter for
every candidate source location which, on casual analysis, may seem prone to severe
overfitting. As suggested previously, however, s-MAP, γ-MAP, and VB all possess an
intrinsic, sparsity-based regularization mechanism. This ameliorates the overfitting
problem substantially and effectively reduces the space of possible active source
locations by choosing a small relevant subset of active dipolar locations. In general,
the methodology is quite flexible, and other prior specifications, such as temporal and
spectral constraints, can be included as well.
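To make the sparsity mechanism concrete, the following is a minimal Python/NumPy sketch, not taken from the text, of an evidence-maximization update in the spirit of γ-MAP with one covariance component C_i = e_i e_i^T, and hence one hyperparameter γ_i, per candidate location, so that the source prior covariance is simply diag(γ). The leadfield, simulated data, dimensions, threshold, and variable names are all illustrative assumptions; the same structure accommodates the two-component case C = {C_1, C_2} by replacing diag(γ) with γ_1 C_1 + γ_2 C_2.

```python
# Minimal, illustrative gamma-MAP / ARD-style sketch: one hyperparameter per
# candidate dipolar location, i.e. C_i = e_i e_i^T and Sigma_s = diag(gamma).
# All quantities below (leadfield L, data Y, noise_var, threshold) are
# simulated assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources, n_times = 20, 100, 50

# Hypothetical leadfield and data generated by three active dipolar sources.
L = rng.standard_normal((n_sensors, n_sources))
S_true = np.zeros((n_sources, n_times))
S_true[[10, 40, 75], :] = rng.standard_normal((3, n_times))
noise_var = 0.01
Y = L @ S_true + np.sqrt(noise_var) * rng.standard_normal((n_sensors, n_times))

# One unknown hyperparameter for every candidate source location.
gamma = np.ones(n_sources)

for _ in range(200):
    # Model data covariance: sigma^2 I + L diag(gamma) L^T
    Sigma_y = noise_var * np.eye(n_sensors) + (L * gamma) @ L.T
    Sigma_y_inv = np.linalg.inv(Sigma_y)

    # Posterior moments of the sources given the current hyperparameters.
    W = (gamma[:, None] * L.T) @ Sigma_y_inv              # Sigma_s L^T Sigma_y^{-1}
    Mu = W @ Y                                            # posterior means
    post_var = gamma - np.sum(W * (gamma[:, None] * L.T), axis=1)  # diag of posterior covariance

    # EM-style update; hyperparameters of irrelevant locations shrink toward zero.
    gamma = np.mean(Mu ** 2, axis=1) + post_var

retained = np.flatnonzero(gamma > 1e-3 * gamma.max())
print("candidate locations retained after learning:", retained)
```

Even though every location begins with its own nonzero hyperparameter, most γ_i are driven toward zero as the updates proceed, and the surviving components constitute the small subset of candidate dipolar locations effectively selected by the data.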
In summary, there are two senses in which to understand the notion of covari-
ance component selection. First, there is the selection of which components to include
in the model before any estimation takes place, i.e., the choice of C . Second, there is
the selection that occurs within C as a natural byproduct of many hyperparameters
being driven to zero during the learning process. Such components are necessarily
pruned by the model; those that remain have therefore been 'selected' in some sense.
Here we have argued that in many cases the latter, data-driven selection can be used
to ease the burden of often ad hoc user-specified selections.
6.5 Discussion
The efficacy of modern Bayesian techniques in quantifying uncertainty and explicitly
accounting for prior assumptions makes them attractive candidates for source
localization. However, it is not always transparent how these methods relate, nor
how they can be extended to handle more challenging problems, nor which ones
should be expected to perform best in various situations relevant to MEG/EEG source
imaging. Starting from a hierarchical Bayesian model constructed using Gaussian
scale mixtures with flexible covariance components, we analyze and, where pos-
 