schemes that we proposed, and results obtained on both simulated and real
data.
The first step in using MCMC methods is to define a probability distribution over the inverse problem solutions. Assuming a variable number $n$ of localized dipolar sources and single-time measurements, the posterior probability of a solution $(q, n)$ given the measurements can be computed using Bayes' theorem:
$$p(q, n \mid b_{\mathrm{obs}}) \propto p(b_{\mathrm{obs}} \mid q, n) \cdot p(q \mid n) \cdot p(n), \qquad (3.85)$$
where $q$ is a vector of $n$ dipoles $q_1, \ldots, q_n$, with $q_i$ being defined by its position $(x_i, y_i, z_i)$, its direction $(\theta_i, \varphi_i)$, and its intensity $j_i$.
The first term is the likelihood of the observed measurements, given the current distribution $q$. We have assumed an $m$-dimensional Gaussian noise model, with mean 0 and covariance matrix $\Sigma$, so that the likelihood is given by
$$p(b_{\mathrm{obs}} \mid q, n) = \frac{1}{(2\pi)^{m/2}\,|\Sigma|^{1/2}} \times \exp\!\left(-\frac{1}{2}\,\Delta b^{T}\,\Sigma^{-1}\,\Delta b\right), \qquad (3.86)$$
with $\Delta b = b_{\mathrm{obs}} - b(q)$. For real data, the covariance matrix $\Sigma$ was estimated from pre-stimulus measurements. The theoretical measurements $b(q)$, obtained from the dipole distribution $q$, were computed using the Sarvas formula [50, 21].
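In log form, Eq. (3.86) is straightforward to evaluate numerically. A minimal sketch (function and variable names are ours; the forward model producing `b_model` is assumed to come from the Sarvas formula):

```python
import numpy as np

def log_likelihood(b_obs, b_model, Sigma):
    """Log of the Gaussian likelihood of Eq. (3.86).

    b_obs   : observed m-dimensional measurement vector
    b_model : theoretical measurements b(q) from the forward model
    Sigma   : (m, m) noise covariance, e.g. estimated from
              pre-stimulus measurements
    """
    m = b_obs.shape[0]
    delta_b = b_obs - b_model                         # residual: b_obs - b(q)
    _, logdet = np.linalg.slogdet(Sigma)              # log|Sigma|, stable
    quad = delta_b @ np.linalg.solve(Sigma, delta_b)  # delta_b^T Sigma^-1 delta_b
    return -0.5 * (m * np.log(2.0 * np.pi) + logdet + quad)
```

Working with the log-likelihood avoids underflow and lets Metropolis-type acceptance ratios be computed as differences.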
The second term represents the a priori knowledge about the number and characteristics of the neuromagnetic sources. The source positions were limited to the brain volume by setting a null probability for sources outside the brain. We favored sources in the cerebral cortex by setting a 1-to-100 probability ratio for sources in the cortex. Dipole directions were constrained to be tangential by assuming a normal law of mean 0 and standard deviation $\pi/10$ for the angle between the position and the direction vectors. The source intensity was assumed to follow a uniform law on a predefined interval. The number of sources was assumed to follow a Poisson law.
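These priors combine into a log-prior that can be sketched as follows. This is an illustrative reading, not the chapter's code: the `in_brain` and `in_cortex` predicates stand in for an unspecified anatomical model, the Poisson mean `n_mean` is an assumed parameter, and we interpret the "mean 0" normal law as penalizing the deviation of the position/direction angle from tangentiality ($\pi/2$):

```python
import math
import numpy as np

def log_prior(dipoles, in_brain, in_cortex, n_mean=2.0):
    """Sketch of log[p(q | n) * p(n)] under the priors described above.

    dipoles  : list of (position, direction, intensity) triples,
               position and direction as 3-vectors
    in_brain, in_cortex : caller-supplied anatomical predicates
    """
    n = len(dipoles)
    # Poisson law on the number of sources (mean n_mean, assumed)
    logp = n * math.log(n_mean) - n_mean - math.lgamma(n + 1)
    for pos, direction, intensity in dipoles:
        if not in_brain(pos):
            return -math.inf              # null probability outside the brain
        if in_cortex(pos):
            logp += math.log(100.0)       # 1-to-100 ratio favoring the cortex
        # Tangential constraint: Gaussian (sd = pi/10) on the deviation
        # of the position/direction angle from pi/2
        cosang = np.dot(pos, direction) / (
            np.linalg.norm(pos) * np.linalg.norm(direction))
        dev = math.acos(float(np.clip(cosang, -1.0, 1.0))) - math.pi / 2
        logp += -0.5 * (dev / (math.pi / 10)) ** 2
        # Uniform intensity law contributes only a constant (omitted)
    return logp
```

Together with the log-likelihood, this gives the (unnormalized) log-posterior of Eq. (3.85) up to additive constants.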
To sample the posterior distribution $p(q, n \mid b_{\mathrm{obs}})$, we used parallel tempering (PT) [51] to prevent being trapped in local modes and to speed up convergence. A number $k$ of Markov chains $C_i = \left(q_i^{(1)}, \ldots, q_i^{(n)}\right)$ are constructed, with each realization $q_i^{(j)}$ being a set of $n$ dipoles. The chains $C_i$ are constructed in an iterative process, with each realization $q_i^{(j+1)}$ being determined from the previous state of the chains with a probability distribution $p_i^{(j)}$.
The chain $C_1$ is called the principal chain; it is constructed so that the distribution $p_1^{(n)}$ converges to the posterior distribution:

$$\lim_{n \to \infty} p_1^{(n)} = p(q \mid b_{\mathrm{obs}}). \qquad (3.87)$$
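The mechanics of parallel tempering can be illustrated on a fixed-dimension toy problem. The sketch below runs a ladder of Metropolis chains at temperatures $T_i = 2^i$ and proposes state swaps between adjacent temperatures; it is deliberately simplified (a 1-D state, a geometric ladder we chose, no trans-dimensional moves), whereas the chapter's sampler also varies the number of dipoles $n$:

```python
import math
import random

def parallel_tempering(log_post, n_chains=8, n_iter=5000, step=0.5, seed=0):
    """Minimal PT sketch: chain 0 (temperature 1) is the principal chain."""
    rng = random.Random(seed)
    temps = [2.0 ** i for i in range(n_chains)]   # geometric temperature ladder
    states = [0.0] * n_chains
    samples = []                                  # draws from the principal chain
    for _ in range(n_iter):
        # Metropolis update within each tempered chain: target is post^(1/T)
        for i, T in enumerate(temps):
            prop = states[i] + rng.gauss(0.0, step)
            if math.log(rng.random()) < (log_post(prop) - log_post(states[i])) / T:
                states[i] = prop
        # Propose swapping states between each pair of adjacent temperatures
        for i in range(n_chains - 1):
            accept = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (
                log_post(states[i + 1]) - log_post(states[i]))
            if math.log(rng.random()) < accept:
                states[i], states[i + 1] = states[i + 1], states[i]
        samples.append(states[0])
    return samples
```

The hot chains move freely across modes that would trap a single Metropolis chain, and the swap moves let those mode jumps propagate down to the principal chain, which is what makes the limit in Eq. (3.87) attainable in practice.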