Putting together the log-likelihood function and the priors on A and α, the MAP estimator leads to the following optimization problem:

$$
\min_{A,\;\alpha_{1,1},\ldots,\alpha_{N_s,K}} \;\; \frac{1}{2}\,\bigl\| Y - A\alpha\boldsymbol{\Phi} \bigr\|_{\Sigma_E}^{2} \;+\; \sum_{i=1}^{N_s}\sum_{k=1}^{K} \lambda_{i,k}\,\bigl\|\alpha_{i,k}\bigr\|_{p_{i,k}}^{p_{i,k}} . \tag{9.26}
$$
This problem bears a strong similarity to that of equation (9.19). More precisely, if the noise is homoscedastic and decorrelated between channels (i.e., Σ_E = σ_E²I), if the shape parameters p_{i,k} of the generalized Gaussian distribution prior are all equal to p and the scale parameters are all taken as λ_{i,k} = λ/σ_E², and if the columns of A are assumed uniform on the unit sphere, then equation (9.26) is exactly equation (9.19). Note that in the preceding development, the independence assumption in equation (9.25) does not necessarily entail independence of the sources; rather, it means that there are no a priori assumptions that indicate any dependency between the sources.
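To see the equivalence, note that with Σ_E = σ_E²I the weighted data-fidelity term is just a rescaled Frobenius norm; a short check in the notation above:

$$
\frac{1}{2}\,\|Y - A\alpha\boldsymbol{\Phi}\|_{\Sigma_E}^{2}
= \frac{1}{2\sigma_E^{2}}\,\|Y - A\alpha\boldsymbol{\Phi}\|_F^{2} ,
$$

so multiplying the objective of (9.26) by σ_E² and setting p_{i,k} = p and λ_{i,k} = λ/σ_E² leaves

$$
\frac{1}{2}\,\|Y - A\alpha\boldsymbol{\Phi}\|_F^{2} + \lambda \sum_{i=1}^{N_s} \|\alpha_i\|_p^p ,
$$

which is the objective of (9.19).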
9.4.3 The Fast Generalized Morphological Component Analysis Algorithm
The goal here is to speed up the GMCA algorithm. As a warm-up, assume that the dictionary Φ is no longer redundant and reduces to a single orthobasis (i.e., K = 1). Let us denote by β = YΦ^T the matrix where each of its rows stores the coefficients of the corresponding channel y_i. The optimization problem (9.19) then becomes (we omit the constraint on A to lighten the notation)
$$
\min_{A,\,\alpha} \;\; \frac{1}{2}\,\bigl\| \beta - A\alpha \bigr\|_{F}^{2} \;+\; \lambda \sum_{i=1}^{N_s} \bigl\|\alpha_i\bigr\|_{p}^{p} , \tag{9.27}
$$
where p = 0 or p = 1. The GMCA algorithm no longer needs to apply the analysis and synthesis operators at each iteration, as only the channels Y have to be transformed once in Φ. Clearly, this case is computationally much cheaper.
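To make the saving concrete, here is a minimal sketch of an alternating scheme for problem (9.27), assuming β = YΦ^T has already been computed once. The function name, the fixed threshold, and the plain least-squares/thresholding updates are illustrative simplifications rather than the exact GMCA update rules (GMCA, in particular, decreases its threshold across iterations):

```python
import numpy as np

def fast_gmca(beta, n_sources, lam, n_iter=100, p=1):
    """Alternating minimization sketch for (9.27): beta = Y Phi^T is fixed,
    so no analysis/synthesis transform is applied inside the loop."""
    n_chan, _ = beta.shape
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n_chan, n_sources))
    A /= np.linalg.norm(A, axis=0)          # columns on the unit sphere
    for _ in range(n_iter):
        # alpha update: least squares, then thresholding in the
        # coefficient domain (soft for p = 1, hard for p = 0)
        alpha = np.linalg.pinv(A) @ beta
        if p == 1:
            alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - lam, 0.0)
        else:
            alpha = alpha * (np.abs(alpha) > lam)
        # A update: least squares on the coefficients, then renormalize
        A = beta @ np.linalg.pinv(alpha)
        A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)
    return A, alpha
```

The loop operates entirely in the coefficient domain; a redundant Φ would instead require applying the synthesis operator and its adjoint at every iteration.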
However, this is rigorously valid only for an orthobasis dictionary, and no orthonormal basis is able to sparsely represent a large variety of signals; yet we would like to use very sparse signal representations, which motivated the use of redundancy in the first place. Arguments supporting the substitution of equation (9.27) for equation (9.19) for a redundant dictionary Φ were given by Bobin et al. (2007a, 2008b). The idea is first to compute the sparsest representation of each channel y_i in the redundant dictionary Φ using an appropriate (nonlinear) decomposition algorithm (e.g., BP, MCA). Now, β denotes the matrix in which each row contains the sparse decomposition of the corresponding channel (see the sketch below).
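A minimal sketch of this preprocessing step, assuming a hypothetical sparse_decompose routine that stands in for a BP or MCA solver (it is not the authors' implementation):

```python
import numpy as np

def build_beta(Y, sparse_decompose):
    """Stack the sparse coefficients of each channel y_i as a row of beta.

    `sparse_decompose` is a hypothetical stand-in for a BP or MCA solver
    returning the sparsest coefficient vector of one channel in the
    redundant dictionary Phi.
    """
    return np.stack([sparse_decompose(y_i) for y_i in Y])

# beta = build_beta(Y, sparse_decompose)    # decompose each channel once
# A, alpha = fast_gmca(beta, n_sources=2, lam=0.1)   # coefficient-domain GMCA above
```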
Because the channels are linear mixtures of the sources via the mixing matrix A, the key argument developed by Bobin et al. (2007a) is that the sparse decomposition algorithm must preserve linear mixtures. Descriptively, the sparsest decomposition provided by the algorithm when applied to each channel must be equal to the linear combination of the sparsest decompositions of the sources. This statement is valid if the sources and channels are identifiable, meaning that they satisfy sufficient conditions guaranteeing that their unique sparsest representation can be recovered by the decomposition algorithm. For instance, if MCA is used, then, following Section 8.5.4, it is sufficient that the channels and sources be sparse enough in an incoherent dictionary Φ and that their