morphological components be sufficiently contrasted; see Bobin et al. (2007a) and
Bobin et al. (2008b) for details.
Hence, under these circumstances, a fast GMCA algorithm can be designed to solve equation (9.27) by working in the transform domain after decomposing each observed channel $y_i$ in $\Phi$ using a sparse decomposition algorithm such as MCA.
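As a concrete illustration of this decomposition step, the following sketch maps each observed channel to its transform coefficients, using an orthonormal DCT as a stand-in for a full MCA decomposition in $\Phi$ (the function name, array shapes, and choice of transform are illustrative assumptions, not the book's code):

```python
import numpy as np
from scipy.fft import dct

def decompose_channels(Y):
    """Map each observed channel y_i to its coefficient vector.

    Y : (N_c, T) array, one row per observed channel.
    Returns beta : (N_c, T) array of transform-domain coefficients.
    """
    # An orthonormal DCT stands in for the dictionary Phi here, so this
    # mimics beta = Y Phi^T; a redundant Phi would instead require running
    # MCA on each row of Y, as described in the text.
    return dct(Y, norm="ortho", axis=1)
```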
There is an additional important simplification when substituting problem (9.27) for (9.19). Indeed, because $N_c \geq N_s$ (i.e., overdetermined BSS), it turns out that equation (9.27) is a multichannel overdetermined least squares fit with $\ell_0/\ell_1$ sparsity penalization. We again use an alternating minimization scheme to solve for $A$ and $\alpha$:
Update the coefficients: When $A$ is fixed, because the quadratic term is strictly convex ($A$ has full column rank), the marginal optimization problem can be solved by a general form of the forward-backward splitting iteration (Chen and Rockafellar 1997):
$$\alpha^{(t+1)} = \mathrm{Thresh}_{\mu\lambda}\left( \alpha^{(t)} + \mu A^{\mathrm{T}} \left( \beta - A \alpha^{(t)} \right) \right), \qquad (9.28)$$
where $\mu$ is a relaxation matrix such that the spectral radius of $(\mathbf{I} - \mu A^{\mathrm{T}} A)$ is bounded above by 1 (e.g., $\mu = \mu \mathbf{I}$ with step size $0 < \mu \leq 1/|||A^{\mathrm{T}} A|||$). Taking $\mu = (A^{\mathrm{T}} A)^{-1}$ ($A^{\mathrm{T}} A$ is nonsingular, and a kind of Newton's method ensues) yields the closed form
$$\tilde{\alpha} = \mathrm{Thresh}_{\lambda}\left( A^{+} \beta \right), \qquad (9.29)$$
where $\mathrm{Thresh}_{\lambda}$ is a thresholding operator (hard for $p = 0$ and soft for $p = 1$).
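A minimal NumPy sketch of this closed-form coefficient update, assuming $A$ has full column rank so that the pseudo-inverse solution $A^{+}\beta$ is well defined (function and variable names are illustrative, not the book's code):

```python
import numpy as np

def soft_thresh(x, lam):
    # Soft thresholding: the proximal operator of the l1 penalty (p = 1).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_thresh(x, lam):
    # Hard thresholding: keeps entries whose magnitude exceeds lam (p = 0).
    return x * (np.abs(x) > lam)

def update_coefficients(A, beta, lam, p=1):
    """Closed-form update alpha_tilde = Thresh_lam(A^+ beta) of (9.29)."""
    alpha_ls = np.linalg.pinv(A) @ beta  # least squares fit A^+ beta
    return soft_thresh(alpha_ls, lam) if p == 1 else hard_thresh(alpha_ls, lam)
```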
Update the mixing matrix: If $\alpha$ is fixed, and because $\alpha$ is full row rank, the mixing matrix $A$ is given by the least squares estimate
$$\tilde{A} = \beta \alpha^{\mathrm{T}} \left( \alpha \alpha^{\mathrm{T}} \right)^{-1} = \beta \alpha^{+}, \qquad (9.30)$$
and the columns of $\tilde{A}$ are then normalized.
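A matching sketch of the least squares mixing-matrix update followed by the column normalization, under the same illustrative naming:

```python
import numpy as np

def update_mixing_matrix(beta, alpha):
    """Least squares estimate A_tilde = beta alpha^+ of (9.30),
    with the columns normalized afterwards."""
    A = beta @ np.linalg.pinv(alpha)  # beta alpha^T (alpha alpha^T)^{-1}
    norms = np.linalg.norm(A, axis=0)
    norms[norms == 0] = 1.0           # guard against a null column
    return A / norms
```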
Note that the latter two-step estimation scheme has a flavor of the alternating sparse
coding/dictionary learning algorithm presented by Aharon et al. (2006) and Peyré et al. (2007) in a different framework.
This two-stage iterative process leads to the accelerated version of GMCA sum-
marized in Algorithm 35.
In the same vein as in Section 9.4.1, the coarse-to-fine process is also at the heart of this fast version of GMCA, with a threshold that decreases as the iteration count increases. This again brings robustness to noise and initialization.
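Putting the two updates together, one possible reading of the accelerated scheme is sketched below; it reuses update_coefficients and update_mixing_matrix from the sketches above, and the linear threshold schedule with end points lam_max and lam_min is an assumption made for illustration (Algorithm 35 may use a different decay):

```python
import numpy as np

def fast_gmca(beta, n_sources, n_iter=100, lam_max=None, lam_min=0.0, p=1):
    """beta : (N_c, T) transform coefficients of the channels.
    Returns the estimated mixing matrix A and source coefficients alpha."""
    rng = np.random.default_rng(0)
    A = rng.standard_normal((beta.shape[0], n_sources))
    A /= np.linalg.norm(A, axis=0)      # random, column-normalized init
    if lam_max is None:
        lam_max = np.abs(beta).max()    # assumed starting threshold
    for t in range(n_iter):
        # Coarse-to-fine: the threshold decreases with the iteration count.
        lam = lam_max + t * (lam_min - lam_max) / max(n_iter - 1, 1)
        alpha = update_coefficients(A, beta, lam, p=p)   # step (9.29)
        if np.any(alpha):                                # skip if everything
            A = update_mixing_matrix(beta, alpha)        # was thresholded away
    return A, alpha
```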
9.4.3.1 Complexity Analysis
When the assumptions discussed earlier for the redundant dictionary case are valid, the fast GMCA version requires only one application of MCA on each channel, which is faster than the first version of GMCA (see Section 9.4.1.1). Once MCA is applied to each channel, and assuming, as before, that $N_s \leq N_c \ll T$, it can be easily shown that the rest of the algorithm requires $O(N_{\mathrm{iter}} N_s T)$ operations. In the case in which only one orthogonal dictionary is used (e.g., Fourier orthogonal