morphological components, where each component is sparse in a specific basis:
$$
\forall i \in \{1,\ldots,N_s\}: \quad s_i \;=\; \sum_{k=1}^{K} x_{i,k} \;=\; \sum_{k=1}^{K} \Phi_k \alpha_{i,k},
\qquad (9.18)
$$

where $\alpha_i^{\mathrm{T}} = \left[\alpha_{i,1}^{\mathrm{T}}, \ldots, \alpha_{i,K}^{\mathrm{T}}\right]$.
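Equation (9.18) says that each source is a superposition of $K$ pieces, each of which is sparse in its own basis. As a minimal numerical sketch (not from the text; it assumes NumPy and SciPy, with $\Phi_1$ taken as the Dirac basis and $\Phi_2$ as the orthonormal DCT), a 1-D source with $K = 2$ morphological components can be built as:

```python
import numpy as np
from scipy.fft import dct, idct

N = 256
rng = np.random.default_rng(0)

# Component sparse in the Dirac (identity) basis: a few spikes.
alpha_spikes = np.zeros(N)
alpha_spikes[rng.choice(N, 5, replace=False)] = rng.standard_normal(5)
x_spikes = alpha_spikes  # Phi_1 = identity, so x_{1,1} = alpha directly

# Component sparse in the DCT basis: a few low-frequency cosines.
alpha_dct = np.zeros(N)
alpha_dct[[2, 7, 11]] = [3.0, -2.0, 1.5]
x_smooth = idct(alpha_dct, norm='ortho')  # synthesis with Phi_2 = inverse DCT

# The source is the sum of its K = 2 morphological components (eq. 9.18).
s = x_spikes + x_smooth
```

Each component is sparse in its own dictionary (5 and 3 nonzero coefficients, respectively) but dense in the other's, which is exactly the morphological diversity GMCA exploits.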
GMCA seeks an unmixing scheme, through the estimation of $A$, which leads to the sparsest sources $S$ in the dictionary $\Phi$. This is expressed by the following optimization problem, written in the augmented Lagrangian form:
$$
\min_{A,\;\alpha_{1,1},\ldots,\alpha_{N_s,K}} \;\frac{1}{2}\left\| Y - \sum_{i=1}^{N_s}\sum_{k=1}^{K} a^{i}\alpha_{i,k}^{\mathrm{T}}\Phi_k^{\mathrm{T}} \right\|_F^2 + \lambda \sum_{i=1}^{N_s}\sum_{k=1}^{K} \left\|\alpha_{i,k}\right\|_p^p
\quad \text{s.t.} \quad \left\|a^{i}\right\|_2 = 1 \;\;\forall i \in \{1,\ldots,N_s\},
\qquad (9.19)
$$
where, typically, $p = 0$ or its relaxed convex version with $p = 1$, and $\|X\|_F = \mathrm{trace}(X^{\mathrm{T}}X)^{1/2}$ is the Frobenius norm. The unit $\ell_2$ norm constraint on the columns of $A$ avoids the classical scale indeterminacy of the product $AS$ in equation (9.2).
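The scale indeterminacy can be checked numerically: rescaling the columns of $A$ while compensating in $S$ leaves the product $AS$ unchanged, so normalizing the columns removes this degree of freedom. A short NumPy illustration (a numerical check, not code from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2))    # mixing matrix: N_c = 4 channels, N_s = 2 sources
S = rng.standard_normal((2, 100))  # sources stored as rows

# Scale indeterminacy: A S == (A D^{-1}) (D S) for any invertible diagonal D.
norms = np.linalg.norm(A, axis=0)
A_unit = A / norms                 # columns a^i rescaled to unit l2 norm
S_comp = S * norms[:, None]        # sources absorb the compensating scales

# The data fit is unchanged, but the scaling is now pinned down.
same_product = np.allclose(A @ S, A_unit @ S_comp)
```

This is why the constraint $\|a^i\|_2 = 1$ in equation (9.19) costs nothing in terms of expressiveness.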
The reader may have noticed that the MCA problem (8.18) in Chapter 8 is a special case of the GMCA problem (9.19) when there is only one source ($N_s = 1$) and one channel ($N_c = 1$, no mixing). Thus GMCA is indeed a multichannel generalization of MCA.
The program (9.19) is a notoriously difficult nonconvex optimization problem, even for convex penalties when $p \geq 1$. More conveniently, following equation (9.3), the product $AS$ can be split into $N_s \cdot K$ multichannel morphological components: $AS = \sum_{i,k} a^{i} x_{i,k}^{\mathrm{T}} = \sum_{i,k} \left(a^{i}\alpha_{i,k}^{\mathrm{T}}\right)\Phi_k^{\mathrm{T}}$. On the basis of this decomposition, and inspired by block-coordinate relaxation, as for MCA, GMCA yields an alternating minimization algorithm to estimate iteratively one term at a time (Bobin et al. 2007a). We will show shortly that the estimation of each morphological component $x_{i,k} = \Phi_k \alpha_{i,k}$, assuming that $A$ and $x_{\{i',k'\} \neq \{i,k\}}$ are fixed, is obtained by simple hard or soft thresholding for $p = 0$ and $p = 1$.
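The alternating scheme can be made concrete in a toy setting. The sketch below is a hypothetical illustration, not the authors' implementation: it specializes to $K = 1$ and $\Phi = $ identity (sources sparse in the direct domain) and uses soft thresholding ($p = 1$), cycling over the sources one term at a time:

```python
import numpy as np

def soft(v, lam):
    """Entrywise soft thresholding, the prox of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def gmca_toy(Y, n_sources, lam=0.05, n_iter=50, seed=0):
    """Toy alternating minimization in the spirit of GMCA,
    specialized to K = 1 and Phi = identity."""
    rng = np.random.default_rng(seed)
    Nc, N = Y.shape
    A = rng.standard_normal((Nc, n_sources))
    A /= np.linalg.norm(A, axis=0)          # unit-norm columns (eq. 9.19)
    S = np.zeros((n_sources, N))
    for _ in range(n_iter):
        for i in range(n_sources):
            # Marginal residual: data unexplained by the other sources.
            R = Y - A @ S + np.outer(A[:, i], S[i])
            # Back-project onto a^i and soft-threshold (p = 1 update).
            S[i] = soft(A[:, i] @ R, lam)
            # Least-squares update of a^i, renormalized to unit l2 norm.
            if np.any(S[i]):
                col = R @ S[i]
                A[:, i] = col / np.linalg.norm(col)
    return A, S

# Synthetic noiseless mixture: two sparse sources with disjoint supports.
rng = np.random.default_rng(3)
A_true = rng.standard_normal((8, 2))
A_true /= np.linalg.norm(A_true, axis=0)
S_true = np.zeros((2, 200))
S_true[0, :12] = 5.0
S_true[1, 100:112] = -5.0
Y = A_true @ S_true
A_est, S_est = gmca_toy(Y, n_sources=2)
rel_err = np.linalg.norm(Y - A_est @ S_est) / np.linalg.norm(Y)
```

Note that only the reconstruction $\hat{A}\hat{S} \approx Y$ is checked here; like any blind separation method, the recovered sources are at best determined up to permutation and sign.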
Define the $(i,k)$th multichannel marginal residual by
$$
R_{i,k} \;=\; Y - \sum_{\{i',k'\} \neq \{i,k\}} a^{i'} x_{i',k'}^{\mathrm{T}}
\qquad (9.20)
$$
as the part of the data $Y$ unexplained by the multichannel morphological component $a^{i} x_{i,k}^{\mathrm{T}}$. Estimating $x_{i,k} = \Phi_k \alpha_{i,k}$, and assuming that $A$ and the other components $x_{(i',k') \neq (i,k)}$ are fixed, leads to the component-wise optimization problem:
$$
\min_{x_{i,k} = \Phi_k \alpha_{i,k} \in \mathbb{R}^{N}} \;\frac{1}{2}\left\| R_{i,k} - a^{i}\alpha_{i,k}^{\mathrm{T}}\Phi_k^{\mathrm{T}} \right\|_F^2 + \lambda \left\|\alpha_{i,k}\right\|_p^p.
\qquad (9.21)
$$
Because here $\Phi_k$ is an orthogonal matrix, with calculations similar to those of Sections 7.3.2.2 and 7.7.3,² it can be shown that the unique solution of equation (9.21) is obtained by a hard ($p = 0$) or soft ($p = 1$) thresholding. Hence the

² The reasoning holds for $0 \leq p \leq 1$ from Section 7.7.3.
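Under these assumptions ($\|a^i\|_2 = 1$ and $\Phi_k$ orthogonal), the $p = 1$ minimizer of equation (9.21) amounts to back-projecting the marginal residual onto $a^i$, analyzing it in $\Phi_k$, and soft-thresholding the coefficients. A sketch of this single-component update (hypothetical helper names, not code from the text):

```python
import numpy as np

def soft_threshold(v, lam):
    """Entrywise soft thresholding: prox of lam * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def update_component(R, a, Phi, lam):
    """Closed-form minimizer of (9.21) for p = 1, assuming ||a||_2 = 1
    and Phi orthonormal: analyze the back-projected marginal residual
    in Phi and soft-threshold the coefficients."""
    alpha = soft_threshold(Phi.T @ (R.T @ a), lam)
    return alpha, Phi @ alpha              # coefficients and component x_{i,k}

# Noiseless sanity check with Phi = identity basis.
rng = np.random.default_rng(2)
N, Nc = 64, 3
a = rng.standard_normal(Nc)
a /= np.linalg.norm(a)                     # unit-norm mixing column
alpha_true = np.zeros(N)
alpha_true[[3, 10]] = [5.0, -4.0]
Phi = np.eye(N)
R = np.outer(a, Phi @ alpha_true)          # R_{i,k} = a^i x_{i,k}^T exactly
alpha_hat, x_hat = update_component(R, a, Phi, lam=0.1)
print(np.nonzero(alpha_hat)[0])            # support of the estimate
```

As expected for soft thresholding, the true support is recovered and each surviving coefficient is shrunk toward zero by $\lambda$.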