Using the same arguments as in Section 7.4.2.3, it is sufficient to run Algorithm 20
or Algorithm 21 for $N_t \propto t^{\rho/\nu}$ iterations, $0 < \rho < 1$, at each iteration $t$ to ensure convergence.

Another approach to solve analysis prior problems is to operate the change
of variable $\alpha = \Phi^{\mathrm{T}} x$ (Elad et al. 2007) under the constraint that $\Phi^{\mathrm{T}}(\Phi^{\mathrm{T}})^{+}\alpha = \alpha$,
where $(\Phi^{\mathrm{T}})^{+} = (\Phi\Phi^{\mathrm{T}})^{-1}\Phi$ is the pseudo-inverse of $\Phi^{\mathrm{T}}$. For example, minimizing $(Q_\lambda)$ is
equivalent to

$$\min_{\alpha \in \mathbb{R}^{T}} \frac{1}{2}\left\|y - H(\Phi^{\mathrm{T}})^{+}\alpha\right\|_2^2 + \lambda\Psi(\alpha) \quad \text{s.t.} \quad \Phi^{\mathrm{T}}(\Phi^{\mathrm{T}})^{+}\alpha = \alpha. \qquad (7.58)$$
This has the flavor of the synthesis prior problem $(P_\lambda)$ but under an additional constraint.
This problem can be solved using FB splitting, and the crucial point is now
to compute the proximity operator of the indicator function $\imath_{\{\alpha \,:\, \Phi^{\mathrm{T}}(\Phi^{\mathrm{T}})^{+}\alpha = \alpha\}}$, which can be done efficiently
using an inner iteration based on the splitting framework developed here (see Section 7.6.2 for some guidelines).
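For concreteness, here is a minimal NumPy sketch of this proximity operator, which is nothing but the orthogonal projection onto the range of $\Phi^{\mathrm{T}}$. The function and variable names (`Phi`, `c`) are ours and $\Phi$ is assumed to be stored as an explicit $N \times T$ matrix; in large-scale problems the linear system below would itself be solved approximately by an inner iteration (e.g., a few conjugate gradient steps, or the splitting scheme mentioned in the text).

```python
import numpy as np

def project_onto_range_of_analysis(alpha, Phi):
    """Prox of the indicator of {alpha : Phi^T (Phi^T)^+ alpha = alpha},
    i.e., the orthogonal projection of alpha onto range(Phi^T).

    Phi is an (N, T) dictionary, so Phi.T is the (T, N) analysis operator
    and alpha lives in R^T.
    """
    # (Phi^T)^+ alpha = (Phi Phi^T)^{-1} Phi alpha: solve an N x N system
    x = np.linalg.solve(Phi @ Phi.T, Phi @ alpha)
    # multiply back by the analysis operator Phi^T
    return Phi.T @ x

def project_tight_frame(alpha, Phi, c):
    """Same projection when Phi is a tight frame (Phi @ Phi.T = c * Id)."""
    return Phi.T @ (Phi @ alpha) / c
```

In the tight frame case the system is diagonal and the projection reduces to $c^{-1}\Phi^{\mathrm{T}}\Phi\alpha$, as in the second function; this is the situation exploited below.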
In any case, because one has to subiterate, problems with the analysis
prior are more computationally demanding than their synthesis
prior counterparts.
Let us focus on $(Q_\lambda)$ when $\Phi$ is a tight frame with constant $c$, that is, $(\Phi^{\mathrm{T}})^{+} = c^{-1}\Phi$. If
the subiteration is stopped after one iteration, it can be shown that the FB iteration
aiming to solve $(Q_\lambda)$ simplifies to

$$x^{(t+1)} = \mathcal{S}_{c\mu_t}\!\left(x^{(t)} + \mu_t H^{\mathrm{T}}\!\left(y - H x^{(t)}\right)\right), \qquad \mu_t \in \left(0,\, 2/|||H|||^2\right), \qquad (7.59)$$

where the operator $\mathcal{S}_{c\mu_t}(x) = c^{-1}\Phi\,\mathrm{prox}_{c\mu_t\lambda\Psi}(\Phi^{\mathrm{T}}x)$. When $\Psi$ is the
$\ell_1$ norm, this operator reads $\mathcal{S}_{c\mu_t}(x) = c^{-1}\Phi\,\mathrm{SoftThresh}_{c\mu_t\lambda}(\Phi^{\mathrm{T}}x)$, which consists of computing
the coefficients of $x$ over the dictionary $\Phi$, soft thresholding the obtained coefficients,
and then reconstructing. This has an appealing IST-like form. Nonetheless,
even if iteration (7.59) works in practice, one must be aware that it does not solve
$(Q_\lambda)$ exactly.
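To illustrate the structure of iteration (7.59), here is a minimal NumPy sketch under the same tight-frame assumption. The names (`H`, `Phi`, `y`, `lam`, `n_iter`) are illustrative, the operators are taken as explicit matrices, and, as stressed above, the iterate is only an approximation of the solution of $(Q_\lambda)$.

```python
import numpy as np

def soft_thresh(a, thresh):
    """Component-wise soft thresholding SoftThresh_thresh(a)."""
    return np.sign(a) * np.maximum(np.abs(a) - thresh, 0.0)

def ist_analysis_tight_frame(y, H, Phi, c, lam, n_iter=100):
    """Approximate FB/IST iteration (7.59) for the analysis prior problem,
    assuming Phi is a tight frame with constant c (Phi @ Phi.T = c * Id)."""
    x = np.zeros(H.shape[1])
    mu = 1.0 / np.linalg.norm(H, 2) ** 2            # mu_t in (0, 2 / |||H|||^2)
    for _ in range(n_iter):
        grad = x + mu * (H.T @ (y - H @ x))         # forward (gradient) step
        coeffs = Phi.T @ grad                       # analysis coefficients over Phi
        coeffs = soft_thresh(coeffs, c * mu * lam)  # soft thresholding
        x = Phi @ coeffs / c                        # reconstruct with (Phi^T)^+ = Phi / c
    return x
```

Each pass performs exactly the three operations described above: analysis, soft thresholding, and reconstruction by the pseudo-inverse $c^{-1}\Phi$.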
To conclude on the choice of analysis versus synthesis in practice, our view is
that it is very data-dependent. If the solution is strictly $k$-sparse in $\Phi$, the synthesis
approach should be better because it really operates on the coefficients $\alpha$ to make
them as sparse as possible. On the contrary, if the solution is positive, it will hardly
be represented with very few atoms in dictionaries such as wavelets or curvelets because
all atoms have a zero mean (except the coarsest-scale atoms). In this case,
which may be closer to most real-life applications, analysis-based priors may be
better.
7.6 OTHER SPARSITY-REGULARIZED INVERSE PROBLEMS
Many other sparsity-regularized inverse problems can be tackled under the um-
brella of operator splitting that we developed throughout this chapter. We can cite,
for example, the synthesis and analysis
$\ell_1$ consistency problems:

$$\min_{\alpha \in \mathbb{R}^{T}} \|y - H\Phi\alpha\|_1 + \lambda\Psi(\alpha) \qquad (7.60)$$

$$\min_{x \in \mathbb{R}^{N}} \|y - Hx\|_1 + \lambda\Psi(\Phi^{\mathrm{T}}x). \qquad (7.61)$$
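Both problems can again be handled by the splitting machinery; a key new ingredient relative to the quadratic data fidelity used above is the proximity operator of the $\ell_1$ residual term $t \mapsto \|y - t\|_1$ (applied after introducing the residual as an auxiliary variable), which has a simple closed form. Below is a minimal NumPy sketch of this building block with names of our choosing; it is not a full solver for (7.60)-(7.61).

```python
import numpy as np

def soft_thresh(a, thresh):
    """Component-wise soft thresholding."""
    return np.sign(a) * np.maximum(np.abs(a) - thresh, 0.0)

def prox_l1_residual(z, y, gamma):
    """Proximity operator of t -> gamma * ||y - t||_1, evaluated at z.

    With the change of variable u = y - t, the problem reduces to the prox
    of the l1 norm, so prox(z) = y - SoftThresh_gamma(y - z).
    """
    return y - soft_thresh(y - z, gamma)
```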