could be computationally demanding for large-scale high-dimensional problems. In
Section 9.4.3, we will see that under appropriate assumptions, GMCA can be accel-
erated, yielding a simple and much faster algorithm that enables handling of very
large scale problems.
9.4.1.2 The Thresholding Strategy
Hard or Soft Thresholding?
In practice, it was observed that hard thresholding leads to better results (Bobin
et al. 2006, 2007a). Furthermore, if A is known and no noise contaminates the data,
GMCA with hard thresholding will enjoy the sparse recovery guarantees given in
Section 8.5.4, with the proviso that the morphological components are contrasted
and sparse in a sufficiently incoherent multichannel dictionary $A \otimes \boldsymbol{\Phi}$.
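To make the comparison concrete, here is a minimal Python/NumPy sketch (ours, not from the text) of the two elementwise thresholding operators discussed above; the function names are illustrative:

```python
import numpy as np

def hard_threshold(x, lam):
    """Keep entries whose magnitude exceeds lam, set the rest to zero."""
    return np.where(np.abs(x) > lam, x, 0.0)

def soft_threshold(x, lam):
    """Shrink entries toward zero by lam (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```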
Handling Additive Gaussian Noise
The GMCA algorithm is well suited to deal with data contaminated with additive Gaussian noise (see the next section for a Bayesian interpretation). For instance, assume that the noise $E$ in equation (9.2) is additive white Gaussian in each channel, that is, its covariance matrix $\boldsymbol{\Sigma}_E$ is diagonal, and let $\sigma_E$ be its standard deviation, supposed equal for all channels for simplicity. Then, Algorithm 34 can be applied as described earlier, with $\lambda_{\min} = \tau \sigma_E$, where $\tau$ is chosen as in denoising methods, typically taking its value in the range $[3, 4]$. This attribute of GMCA makes it a suitable choice for use in noisy BSS. GMCA not only manages to separate the sources, but also succeeds in removing additive noise as a by-product.
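As a small illustration of this rule, the following sketch (ours; the linearly decreasing schedule is only one common choice and is not taken from the text) builds a sequence of thresholds that ends at $\lambda_{\min} = \tau \sigma_E$:

```python
import numpy as np

def threshold_schedule(lam_max, sigma_e, n_iter, tau=3.0):
    """Decreasing thresholds ending at lambda_min = tau * sigma_e.

    A linear decrease is an assumption made here for illustration; the
    essential point is that the final threshold is tau * sigma_e with
    tau typically in [3, 4], as in classical denoising.
    """
    lam_min = tau * sigma_e
    return np.linspace(lam_max, lam_min, n_iter)
```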
9.4.2 The Bayesian Perspective
GMCA can be interpreted from a Bayesian standpoint. For instance, let us assume that the entries of the mixtures $(y_i)_{i=1,\ldots,N_c}$, the mixing matrix $A$, the sources $(s_i)_{i=1,\ldots,N_s}$, and the noise matrix $E$ are random processes. We assume that the noise $E$ is zero-mean Gaussian, where the noise vector $\varepsilon_i$ in each channel is white, but the noise between channels is possibly correlated with known covariance matrix $\boldsymbol{\Sigma}_E$.
This means that the log-likelihood function takes the form

$$
LL(Y \mid S, A, \boldsymbol{\Sigma}_E) = \frac{1}{2}\,\|Y - AS\|_{\boldsymbol{\Sigma}_E}^2,
\qquad \text{where} \quad
\|X\|_{\boldsymbol{\Sigma}_E}^2 = \operatorname{trace}\bigl(X^{\mathrm{T}} \boldsymbol{\Sigma}_E^{-1} X\bigr).
$$
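As a check of the notation, here is a minimal NumPy sketch (ours) that evaluates this weighted misfit, using a linear solve rather than an explicit inverse of $\boldsymbol{\Sigma}_E$:

```python
import numpy as np

def gaussian_misfit(Y, A, S, Sigma_E):
    """Compute 0.5 * ||Y - A S||^2_{Sigma_E},
    with ||X||^2_{Sigma_E} = trace(X^T Sigma_E^{-1} X)."""
    X = Y - A @ S
    return 0.5 * np.trace(X.T @ np.linalg.solve(Sigma_E, X))
```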
We further assume that a uniform prior is imposed on the entries of A. Other priors on A could be imposed, for example, a known fixed column. As far as the sources are concerned, they are known from equation (9.18) to be sparse in the dictionary $\boldsymbol{\Phi}$. Thus their coefficients $\alpha = [\alpha_1, \ldots, \alpha_{N_s}]^{\mathrm{T}}$ will be assumed to be drawn independently from a leptokurtic pdf with a heavy tail, such as the generalized Gaussian distribution form

$$
\mathrm{pdf}(\alpha_{1,1}, \ldots, \alpha_{N_s,K}) \propto
\exp\Bigl(-\sum_{i=1}^{N_s}\sum_{k=1}^{K} \lambda_{i,k}\,|\alpha_{i,k}|^{p_{i,k}}\Bigr),
\qquad 0 \le p_{i,k} < 2, \quad \forall\,(i,k)\in\{1,\ldots,N_s\}\times\{1,\ldots,K\}.
\tag{9.25}
$$
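For concreteness, a small sketch (ours) of the corresponding negative log-prior, that is, the penalty this distribution contributes, up to an additive constant, when the Bayesian estimate is formed:

```python
import numpy as np

def ggd_neg_log_prior(alpha, lam, p):
    """Negative log of the generalized Gaussian prior (9.25), up to an
    additive constant: sum_{i,k} lam[i,k] * |alpha[i,k]|**p[i,k].

    alpha, lam, and p are arrays of shape (N_s, K).
    """
    return np.sum(lam * np.abs(alpha) ** p)
```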