algorithms amount to iterative thresholding. Furthermore, the computational burden of these algorithms is essentially invested in applications of fast implicit operators associated with the involved dictionary $\Phi$ and the linear operator $H$, and their respective adjoints.
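To make this concrete, here is a minimal sketch, in Python with NumPy/SciPy, of applying such a fast implicit operator and its adjoint without ever forming the matrix. The choice of a circular convolution for $H$ and all names in the code are our illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

# Illustrative example: an N-point circular convolution H applied
# implicitly via the FFT in O(N log N), never forming the N x N matrix.
N = 1024
h = np.zeros(N)
h[:5] = 1.0 / 5.0  # e.g., a simple blur kernel (our assumption)
h_hat = np.fft.fft(h)

H = LinearOperator(
    (N, N),
    matvec=lambda x: np.real(np.fft.ifft(h_hat * np.fft.fft(x))),
    rmatvec=lambda x: np.real(np.fft.ifft(np.conj(h_hat) * np.fft.fft(x))),
    dtype=float,
)

x = np.random.default_rng(0).standard_normal(N)
z = H.matvec(x)    # apply H in O(N log N)
w = H.rmatvec(z)   # apply its adjoint H^T at the same cost
```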
7.2 SPARSITY-REGULARIZED LINEAR INVERSE PROBLEMS
Starting from the degradation model expressed by equation (7.1), this chapter focuses on several optimization problems involved in linear inverse problems where the solution $x = \Phi\alpha$, $\Phi : \mathbb{R}^T \to \mathbb{R}^N$, is assumed to be sparsely represented in an overcomplete dictionary $\Phi$ of waveforms $(\varphi_i)_{i \in \mathcal{I}}$, $\mathcal{I} = \{1, \ldots, T\}$. $\Phi$ is generally a frame of $\mathbb{R}^N$.
When attempting to find sparse solutions to linear inverse problems, the goal in general is the generic minimization of functions of the form $F = F_1 + F_2$:
$$
(\mathrm{P}): \quad \min_{\alpha \in \mathbb{R}^T} F_1(\alpha) + F_2(\alpha), \tag{7.2}
$$
where $F_1$ and $F_2$ are closed convex functions that are not necessarily differentiable, are not infinite everywhere, and whose domains (see Section 7.3.1 for the definition) have nonempty intersection. $F_1$ is typically a convex sparsity-promoting penalty, and $F_2$ is a function measuring the consistency or fidelity to data. The preceding problem can be further extended by forcing the solution to satisfy additional constraints (e.g., positivity). Such a constraint can be absorbed into $F_1$ by adding the indicator function (see equation (7.6) for its definition) of some closed convex set reflecting the constraint, as the sketch below illustrates.
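For concreteness, here is a minimal sketch, assuming NumPy and the quadratic fidelity term of equation (7.4) below; the Python function names (`indicator_nonneg`, `F1`, `F2`, `objective`) are our illustrative choices.

```python
import numpy as np

def indicator_nonneg(alpha):
    """Indicator function of the closed convex set {alpha : alpha >= 0}:
    0 inside the set, +infinity outside."""
    return 0.0 if np.all(alpha >= 0) else np.inf

def F1(alpha, lam):
    """Sparsity-promoting penalty, with a positivity constraint absorbed
    into it via the indicator function."""
    return lam * np.sum(np.abs(alpha)) + indicator_nonneg(alpha)

def F2(alpha, F, y):
    """Data-fidelity term; here the quadratic choice of equation (7.4)."""
    return 0.5 * np.linalg.norm(y - F @ alpha) ** 2

def objective(alpha, F, y, lam):
    """The composite objective of problem (P): F1 + F2."""
    return F1(alpha, lam) + F2(alpha, F, y)
```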
Let us mention some special cases covered by problem $(\mathrm{P})$. For instance, the $\ell_1$-norm decoder known as basis pursuit (BP) (see Chen et al. (1999) and equation (8.2)),
$$
\min_{\alpha \in \mathbb{R}^T} \|\alpha\|_1 \quad \mathrm{s.t.} \quad y = H\Phi\alpha = F\alpha, \tag{7.3}
$$
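As an aside, BP can be recast as a linear program by splitting $\alpha$ into its positive and negative parts. The following is a minimal sketch using `scipy.optimize.linprog`, with a synthetic $F$ and $y$ of our choosing; it is an illustration, not the book's method.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, T = 15, 30                                # underdetermined system: T > N
F = rng.standard_normal((N, T))
alpha_true = np.zeros(T)
alpha_true[[3, 11, 25]] = [1.5, -2.0, 0.7]   # sparse ground truth
y = F @ alpha_true

# Split alpha = u - v with u, v >= 0, so ||alpha||_1 = sum(u) + sum(v).
c = np.ones(2 * T)
A_eq = np.hstack([F, -F])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
alpha_bp = res.x[:T] - res.x[T:]
print(np.round(alpha_bp, 3))                 # typically recovers alpha_true
```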
For BP, $F_1(\alpha) = \|\alpha\|_1$ and $F_2(\alpha)$ is the indicator function of the affine subspace $\{\alpha \in \mathbb{R}^T : y = F\alpha\}$. When the observation $y$ is contaminated by noise, the equality constraint must be relaxed to a noise-aware variant. Problem $(\mathrm{P})$ becomes, typically, basis pursuit denoising (BPDN) in its augmented Lagrangian form (Chen et al. 1999) when
$$
F_1(\alpha) = \lambda \|\alpha\|_1 \quad \text{and} \quad F_2(\alpha) = \frac{1}{2} \|y - F\alpha\|_2^2, \tag{7.4}
$$
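Several of the algorithms cited below solve (7.4) by iterative soft-thresholding (e.g., Daubechies et al. 2004; Beck and Teboulle 2009). Here is a minimal sketch, under the simplifying assumption that $F$ is available as an explicit matrix; the function names are ours.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding: the proximity operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(F, y, lam, n_iter=500):
    """Iterative soft-thresholding for equation (7.4):
    min_alpha  lam * ||alpha||_1 + 0.5 * ||y - F alpha||_2^2."""
    L = np.linalg.norm(F, 2) ** 2        # Lipschitz constant of grad F2
    alpha = np.zeros(F.shape[1])
    for _ in range(n_iter):
        grad = F.T @ (F @ alpha - y)     # gradient of the fidelity term F2
        alpha = soft_threshold(alpha - grad / L, lam / L)
    return alpha
```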
Problem (7.4) is also known as the Lasso in the statistics literature, after Tibshirani (1996). Note that this is a slight abuse of terminology, as the original Lasso corresponds to problem (7.4) in its $\ell_1$-constrained form. This augmented Lagrangian form has received and continues to receive considerable attention; see, for example, Chen et al. (1999), Osborne et al. (2000), Efron et al. (2004), Figueiredo and Nowak (2003), Daubechies et al. (2004), Combettes and Wajs (2005), Kim et al. (2007), Fornasier (2007), Teschke (2007), Bredies and Lorenz (2008), Bioucas-Dias and Figueiredo (2007), Figueiredo et al. (2007b), Wright et al. (2009), Yin et al. (2008), Cai et al. (2009), Beck and Teboulle (2009), and others, not to mention the