As equation (9.14) is a vectorized monochannel form of equation (9.12), what
we are trying to do is find the sparsest solution of a monochannel underdetermined
system of linear equations where the solution is sparse in an overcomplete tensor
product dictionary. Recovery properties of monochannel sparse decomposition by ℓ1 minimization were reviewed in Section 8.1.1. Therefore, if we are able to translate those identifiability criteria into the language of tensor product dictionaries, then we are done.
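The tensor product structure behind this translation can be checked numerically. The following numpy sketch (matrix names and sizes are illustrative, not taken from the text) verifies the Kronecker vectorization identity vec(A X Φ^T) = (Φ ⊗ A) vec(X), which is what turns the matrix-valued model into a monochannel system of linear equations:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))     # illustrative mixing matrix
Phi = rng.standard_normal((5, 7))   # illustrative (possibly redundant) dictionary
X = rng.standard_normal((3, 7))     # coefficient matrix

def vec(M):
    """Column-stacking vectorization."""
    return M.flatten(order="F")

# Vectorization identity: vec(A X Phi^T) = (Phi kron A) vec(X)
lhs = vec(A @ X @ Phi.T)
rhs = np.kron(Phi, A) @ vec(X)
print(np.allclose(lhs, rhs))  # True
```

The right-hand side is exactly a monochannel system whose dictionary is the Kronecker (tensor) product of the two factors.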
In particular, the coherence-based sparse recovery criterion (8.4) is trivial to adapt, owing to equation (9.15). Indeed, if Y is k-sparse in the multichannel dictionary A ⊗ Φ with k < C(1 + 1/μ)/2 for some C > 0 (typically, C = 1), and the dictionary is sufficiently incoherent (both spectrally and spatially), then the solution of equation (9.17) is unique, is a point of equivalence of equations (9.16) and (9.17), and the recovery is stable to bounded noise on Y.
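As a concrete illustration of how such a coherence-based criterion is evaluated in practice (this is not the book's code; the dictionary is a random example), the sketch below computes the mutual coherence μ of a unit-norm dictionary and the sparsity level guaranteed by the classical bound k < (1 + 1/μ)/2:

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative dictionary: 20 unit-norm atoms in dimension 10
D = rng.standard_normal((10, 20))
D /= np.linalg.norm(D, axis=0)

# Mutual coherence: largest absolute inner product between distinct atoms
G = np.abs(D.T @ D)
np.fill_diagonal(G, 0.0)
mu = G.max()

# Classical coherence bound: l1 recovery is guaranteed for k-sparse
# signals with k < (1 + 1/mu) / 2
k_max = int(np.floor((1 + 1 / mu) / 2))
print(mu, k_max)
```

The smaller the coherence, the larger the sparsity level k for which recovery is guaranteed.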
Earlier, we addressed the multichannel sparse decomposition problem without
assuming any constraint on the sparsity pattern of the different sources. It is worth
pointing out, however, that sparse recovery conditions from multichannel mea-
surements can be refined if some structured sparsity is hypothesized. For instance,
for structured multichannel representation (e.g., sources with disjoint supports),
Gribonval and Nielsen (2008) provided coherence-based sufficient recovery condi-
tions by solving equation (9.17). One should note that despite apparent similarities,
the multichannel sparse decomposition problem discussed here is conceptually
different from the one targeting simultaneous sparse recovery of multiple mea-
surements vectors (MMV) considered by several authors; see, for example,
Cotter et al. (2005), Malioutov et al. (2005), Tropp (2006), Chen and Huo (2006),
Argyriou et al. (2008), Bach (2008), Gribonval et al. (2008), Eldar and Mishali
(2009), Lounici et al. (2009), and Negahban and Wainwright (2009). The latter are
not aware of any mixing process via A, and their goal is to recover α from the MMV model Y = α Φ^T, in which the vectors α_i, that is, the rows of α, have a common sparsity pattern. However, the MMV model can also be written vect(Y^T) = (Φ ⊗ I) vect(α^T), as in equation (9.14). The most widely used approach to solve the simultaneous sparse recovery problem with joint sparsity is to minimize a mixed ℓ_p-ℓ_q norm of the form ∑_j ( ∑_i |α_i[j]|^q )^{p/q}, with p ≥ 1 and 0 ≤ q ≤ +∞.
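A small numpy implementation of such a mixed norm may help fix the indexing (the function name and the ℓ1-ℓ2 example are mine; that instance is the usual joint-sparsity penalty):

```python
import numpy as np

def mixed_norm(alpha, p, q):
    """Mixed l_p-l_q norm: l_q across the vectors alpha_i (rows) at each
    coefficient index j, then l_p across the indices j."""
    group_norms = np.linalg.norm(alpha, ord=q, axis=0)  # one l_q norm per index j
    return np.linalg.norm(group_norms, ord=p)

# Two coefficient vectors (rows) sharing the support {0, 2}
alpha = np.array([[3.0, 0.0, 1.0],
                  [4.0, 0.0, 0.0]])

# l_{1,2} penalty: sum over j of the l2 norm across vectors -> 5 + 0 + 1 = 6
print(mixed_norm(alpha, p=1, q=2))  # 6.0
```

With p = 1 and q = 2, the penalty sums the ℓ2 norms of the coefficient groups, which drives whole groups to zero and thus promotes a common sparsity pattern; p = q = 2 recovers the Frobenius norm, which promotes no sparsity at all.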
9.4 MORPHOLOGICAL DIVERSITY AND BLIND SOURCE SEPARATION
9.4.1 Generalized Morphological Component Analysis
We now turn to the BSS problem, and we highlight the role of sparsity and morpho-
logical diversity as a source of contrast to solve it. Toward this goal, we assume that
the sources are sparse in the spatial dictionary Φ, that is, the concatenation of K orthonormal bases (Φ_k)_{k=1,...,K}: Φ = [Φ_1, ..., Φ_K]. The restriction to orthonormal bases is only formal, and the algorithms to be presented later still work in practice, even with redundant subdictionaries Φ_k.
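As an illustration of such a concatenated dictionary (the choice of bases is mine, not the text's), the sketch below stacks two orthonormal bases of R^8, the Dirac basis and an orthonormal DCT-II basis, into a redundant dictionary Φ and synthesizes a signal that is 2-sparse in it:

```python
import numpy as np

n = 8
# Subdictionary 1: the Dirac (standard) basis
dirac = np.eye(n)

# Subdictionary 2: orthonormal DCT-II basis, C[k, j] = sqrt(2/n) cos(pi (2j+1) k / (2n))
j, k = np.meshgrid(np.arange(n), np.arange(n))
C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
C[0, :] /= np.sqrt(2.0)     # rescale the DC row so that C is orthonormal
dct = C.T                   # column i is the i-th cosine atom

# Concatenated dictionary: n x 2n, redundant by a factor of 2
Phi = np.hstack([dirac, dct])

# A source that is 2-sparse in Phi: one spike plus one cosine
alpha = np.zeros(2 * n)
alpha[2] = 1.0              # spike at sample 2
alpha[n + 3] = 0.5          # cosine atom of frequency index 3
s = Phi @ alpha

print(np.allclose(C @ C.T, np.eye(n)))  # True: the DCT basis is orthonormal
```

Neither basis alone represents this signal with two coefficients; the concatenation does, which is precisely the morphological diversity that GMCA exploits.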
The generalized morphological component analysis (GMCA) framework assumes a priori that each source is modeled as the linear combination of K morphological components.