Chapter 4
Sparse Bayesian (Champagne) Algorithm
4.1 Introduction
In this chapter, we provide a detailed description of an algorithm for electromagnetic
brain imaging, called the Champagne algorithm [1, 2]. The Champagne algorithm
is formulated based on an empirical Bayesian scheme and can provide a sparse
solution, since the sparsity constraint is embedded in the algorithm. The algorithm is
free from the problems that are unavoidable in other sparse-solution methods, such
as the $L_1$-regularized minimum-norm method; these problems include the difficulty
of reconstructing voxel time courses and of incorporating source-orientation
estimation.
In Sect. 2.10.2, we show that the $L_2$-regularized minimum-norm method is derived
using the Gaussian prior for the $j$th voxel value,¹

$$x_j \sim \mathcal{N}(x_j \mid 0, \alpha^{-1}), \tag{4.1}$$

where the precision $\alpha$ is common to all $x_j$. In this chapter, we use the Gaussian prior
whose precision $\alpha_j$ (the inverse variance) is specific to each $x_j$, i.e.,

$$x_j \sim \mathcal{N}(x_j \mid 0, \alpha_j^{-1}). \tag{4.2}$$
We show that this “slightly different” prior distribution gives a solution totally dif-
ferent from the $L_2$-norm solution. In fact, the prior distribution in Eq. (4.2) leads to
a sparse solution. The estimation method based on the prior in Eq. (4.2) is called
sparse Bayesian learning in the field of machine learning [3, 4], and the source recon-
struction algorithm derived using Eq. (4.2) is called the Champagne algorithm [1].
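To see intuitively why the per-source precisions $\alpha_j$ in Eq. (4.2) produce sparsity, the following sketch applies sparse Bayesian learning to a toy linear model $y = Lx + \text{noise}$. The problem sizes, the random lead-field-like matrix `L`, the known noise precision, and the MacKay fixed-point update are assumptions chosen for this illustration, not details taken from the text; the point is that during evidence maximization most precisions $\alpha_j$ diverge, forcing the corresponding posterior means to zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: 50 sensors, 200 candidate sources, only 3 active.
n_sensors, n_sources = 50, 200
L = rng.standard_normal((n_sensors, n_sources))   # lead-field-like matrix (assumed random)
x_true = np.zeros(n_sources)
x_true[[10, 80, 150]] = [2.0, -1.5, 1.0]
noise_prec = 100.0                                # noise precision, assumed known here
y = L @ x_true + rng.standard_normal(n_sensors) / np.sqrt(noise_prec)

alpha = np.ones(n_sources)                        # per-source precisions alpha_j
for _ in range(200):
    # Gaussian posterior of x under the prior N(x_j | 0, 1/alpha_j)
    Sigma = np.linalg.inv(noise_prec * L.T @ L + np.diag(alpha))
    mu = noise_prec * Sigma @ L.T @ y
    # MacKay fixed-point update: alpha_j <- gamma_j / mu_j^2
    gamma = 1.0 - alpha * np.diag(Sigma)
    alpha = gamma / (mu**2 + 1e-12)
    alpha = np.minimum(alpha, 1e12)               # huge alpha_j effectively prunes source j

active = np.flatnonzero(np.abs(mu) > 1e-3)
print(active)  # most sources are pruned; the true support should survive
```

The shared-precision prior of Eq. (4.1) has no such pruning mechanism: a single $\alpha$ shrinks all sources uniformly, which is why it yields the smooth $L_2$-norm solution rather than a sparse one.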
In this chapter, we formulate the source reconstruction problem as a spatiotem-
poral reconstruction, i.e., the voxel time series $x_1, x_2, \ldots, x_K$ is reconstructed using
the sensor time series $y_1, y_2, \ldots, y_K$, where $y(t_k)$ and $x(t_k)$ are denoted $y_k$ and $x_k$.
We use the collective expressions $x$ and $y$, indicating the whole voxel time series
and the whole sensor time series.
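Concretely, the collective expressions can be pictured as matrices whose $k$th columns are the snapshots $x_k$ and $y_k$. The sizes below (and the use of random data) are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

# K time points; y_k = y(t_k) is a sensor snapshot, x_k = x(t_k) a voxel snapshot.
K, n_sensors, n_voxels = 100, 50, 200
y_snapshots = [rng.standard_normal(n_sensors) for _ in range(K)]  # y_1, ..., y_K
x_snapshots = [rng.standard_normal(n_voxels) for _ in range(K)]   # x_1, ..., x_K

# Collective expressions: the k-th column holds the k-th snapshot.
Y = np.column_stack(y_snapshots)  # shape (n_sensors, K)
X = np.column_stack(x_snapshots)  # shape (n_voxels, K)
```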
¹ We use the notational convenience $\mathcal{N}(\text{variable} \mid \text{mean}, \text{covariance matrix})$ throughout this book.