be computed by
\[
\Omega^{(\mathrm{map}\,L_p)} = \Sigma_J^{(k)} L^T \left( L \Sigma_J^{(k)} L^T + \Sigma_\Upsilon^{(k)} \right)^{-1}.
\tag{8.19}
\]
In practice, the iterations are usually carried out until a threshold of change is
reached (e.g., \( \| J^{(k+1)} - J^{(k)} \|_2 \le \varepsilon \)). Also, to accelerate convergence one usually
truncates source points from all equations for which the current is smaller than a
very small threshold, but this can have a negative effect on the minimization of the
cost function. If the cost is not minimized at one iteration due to this thresholding,
a smaller threshold value should be used. Also, to compensate for depth bias, the
lead field matrix should be weighted as explained earlier in the context of weighted
minimum- l 2 -norm algorithms, but in this case it should be weighted before the start
of MAP optimization, and the final solution can be unweighted after convergence
by multiplying with the original weight factors.
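The iteration scheme described above (repeat the MAP update, truncate sources whose current falls below a small threshold, stop when the change in J is small) can be sketched numerically. This is an illustrative implementation, not the chapter's code: the function name, the threshold defaults, and the choice \( \Sigma_J^{(k)} = \mathrm{diag}(|j_i^{(k)}|^{2-p}) \) for the generalized Gaussian prior are assumptions for the sketch.

```python
import numpy as np

def map_lp_inverse(L, b, noise_cov, p=1.0, eps=1e-6, trunc=1e-8, max_iter=50):
    """Sketch of the iterative MAP-L_p update of (8.19).

    L         : lead field matrix (m sensors x n sources)
    b         : measurement vector (m,)
    noise_cov : noise covariance (m x m)
    Stops when ||J(k+1) - J(k)||_2 <= eps; sources with |current| below
    `trunc` are removed from subsequent iterations to speed convergence.
    """
    m, n = L.shape
    active = np.arange(n)                        # retained source indices
    J = np.linalg.lstsq(L, b, rcond=None)[0]     # minimum-l2-norm start
    for _ in range(max_iter):
        keep = np.abs(J) > trunc                 # truncate tiny sources
        active, J = active[keep], J[keep]
        La = L[:, active]
        # Source covariance from previous estimate (generalized Gaussian prior)
        Sigma_J = np.diag(np.abs(J) ** (2.0 - p))
        Omega = Sigma_J @ La.T @ np.linalg.inv(La @ Sigma_J @ La.T + noise_cov)
        J_new = Omega @ b
        done = np.linalg.norm(J_new - J) <= eps
        J = J_new
        if done:
            break
    out = np.zeros(n)                            # re-embed truncated sources as zeros
    out[active] = J
    return out
```

With p = 2 and a single iteration, the same loop reduces to an ordinary (weighted) minimum-l2-norm estimate; smaller p progressively sparsifies the solution.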
These update equations are equivalent to a generalized form of the FOCal Unde-
termined System Solver (FOCUSS) algorithm, which was developed as a recursive
weighted minimum-norm algorithm for p → 0, but was later derived as a Bayesian
MAP algorithm using generalized Gaussian prior pdfs [26, 25, 75, 74, 14, 69]. For
the case of p = 0, truncation of the rows of J with smallest norms is usually imple-
mented so that the minimization involves the count of nonzero rows. When p = 2,
the magnetic field tomography (MFT) algorithm is recovered if the update rule is
based on the current modulus, there is only one iteration, and the a priori weight
matrix is a 3D Gaussian used for depth bias compensation [38, 76, 87]. If one is not
sure whether one should use a Gaussian or Laplacian prior, one can use MCMC
methods to learn which l p -norm is optimal for that particular data set [5].
To simultaneously identify the generators of a long data time series, the matrix
BB T can be decomposed efficiently using the SVD, and B in (8.17) and (8.18) can
be replaced with the matrix US 1 / 2 , where U and S are the left singular vectors and
singular values matrices, respectively [99].
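The replacement of B by U S^{1/2} can be checked in a few lines. This is a small numerical illustration (not the chapter's code); the matrix sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((8, 5000))   # sensors x time samples

# SVD of B: B = U diag(s) V^T, hence B B^T = U diag(s**2) U^T.
# Replacing B by U diag(s) -- i.e., U S^{1/2} with S = diag(s**2) holding
# the singular values of B B^T -- preserves B B^T exactly while shrinking
# the data matrix from 8 x 5000 down to 8 x 8.
U, s, _ = np.linalg.svd(B, full_matrices=False)
B_eff = U * s                        # columns of U scaled by s
```

Because only BB^T enters the update equations, the long time series can thus be processed at the cost of a single m x m matrix.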
8.6.2 Dynamic Statistical Parametric Mapping (dSPM)
Another approach directly related to the MNE is the noise normalized dynamic sta-
tistical parametric mapping (dSPM) technique, which normalizes the MNE by the
noise sensitivity at each location, thereby producing statistical activity maps [15,41].
This extra step helps compensate for depth bias. First, the linear inverse operator is
computed by (8.14). This operator is equivalent to that used in Wiener filtering or in
weighted minimum- l 2 -norm estimation assuming correlated noise. Then the noise-
normalized operator is computed, which in the case of fixed dipole orientations
yields:
\[
\Omega^{(\mathrm{dspm})} = \bigl( \mathrm{diag}(v) \bigr)^{-1/2} \, \Omega^{(\mathrm{map}\,L_2)},
\tag{8.20}
\]
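A minimal numerical sketch of this normalization follows. It assumes, as in standard dSPM, that v collects the per-source noise variances, \( v_i = (\Omega \Sigma_\Upsilon \Omega^T)_{ii} \); the lead field, noise covariance, and regularization constant below are illustrative placeholders, not values from the chapter:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 20
L = rng.standard_normal((m, n))   # lead field (placeholder)
C = np.eye(m)                     # noise covariance (placeholder)

# Regularized minimum-l2-norm (MAP-L2) inverse operator standing in for
# the operator of (8.14); the 0.1 regularization weight is illustrative.
Omega = L.T @ np.linalg.inv(L @ L.T + 0.1 * C)

# Per-source noise variance: v_i = (Omega C Omega^T)_{ii}.
v = np.einsum('ij,jk,ik->i', Omega, C, Omega)

# dSPM operator of (8.20): scale each row of Omega by v_i^{-1/2}.
Omega_dspm = Omega / np.sqrt(v)[:, None]
```

After this step every source location has unit noise variance, so deep and superficial sources are judged on a common statistical footing.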