Theorem 2.3. [28] Suppose $\alpha \in \mathbb{R}^N$ is any $K$-sparse vector obeying $\delta_{2K} + \vartheta_{K,2K} < 1$. Choose $\lambda_N = \sqrt{2\log(N)}$ in (2.10). Then, with large probability, the solution $\hat{\alpha}$ to (2.10) obeys
$$\| \hat{\alpha} - \alpha \|_2^2 \leq C_1^2 \cdot (2\log(N)) \cdot K \cdot \sigma^2, \qquad (2.11)$$
with
$$C_1 = \frac{4}{1 - \delta_{2K} - \vartheta_{K,2K}},$$
where $\vartheta_{K,2K}$ is the $(K,2K)$-restricted orthogonality constant, defined as follows.
Definition 2.3. The $(K,K')$-restricted orthogonality constant $\vartheta_{K,K'}$ for $K + K' \leq N$ is defined to be the smallest quantity such that
$$| \langle A_T v, A_{T'} v' \rangle | \leq \vartheta_{K,K'} \, \| v \|_2 \, \| v' \|_2 \qquad (2.12)$$
holds for all disjoint sets $T, T' \subseteq \{1, \ldots, N\}$ of cardinality $|T| \leq K$ and $|T'| \leq K'$.
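The constant in Definition 2.3 is, in general, intractable to compute exactly, since it involves a search over all pairs of disjoint supports. It can, however, be lower-bounded numerically by random search. The Python sketch below is an illustrative assumption of mine, not from the text: it relies on the fact that the maximum of $|\langle A_T v, A_{T'} v' \rangle|$ over unit-norm $v, v'$ equals the largest singular value of $A_T^* A_{T'}$.

import numpy as np

def roc_lower_bound(A, K, Kp, trials=2000, seed=0):
    """Monte Carlo lower bound on the (K, K')-restricted orthogonality
    constant of A, obtained by sampling random disjoint supports T, T'."""
    rng = np.random.default_rng(seed)
    N = A.shape[1]
    best = 0.0
    for _ in range(trials):
        idx = rng.permutation(N)[:K + Kp]      # disjoint supports T and T'
        T, Tp = idx[:K], idx[K:]
        # max of |<A_T v, A_T' v'>| over unit v, v' is the largest
        # singular value of A_T^* A_T'
        s = np.linalg.svd(A[:, T].T @ A[:, Tp], compute_uv=False)
        best = max(best, float(s[0]))
    return best

# illustrative usage with a normalized Gaussian measurement matrix
rng = np.random.default_rng(1)
M, N, K = 64, 256, 4
A = rng.normal(size=(M, N)) / np.sqrt(M)
print(roc_lower_bound(A, K, 2 * K))            # lower bound on theta_{K,2K}

Because the search is random, this yields only a lower bound; the true constant could be larger.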
A similar result also exists for compressible signals (see [28] for more details).
2.3.2 CS Recovery Algorithms
The $\ell_1$ minimization problem (2.10) is a linear program [28], while (2.9) is a second-order cone program (SOCP) [38]. SOCPs can be solved using interior-point methods [74]; log-barrier and primal-dual methods can also be used to solve SOCPs [15], [3]. Note that the optimization problems (2.7), (2.9), and (2.10) minimize convex functionals, hence a global minimum is guaranteed.
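As a concrete illustration, assuming (2.9) has the standard basis pursuit denoising form $\min_\alpha \|\alpha\|_1$ subject to $\|y - A\alpha\|_2 \leq \varepsilon$ (consistent with its Lagrangian counterpart (2.13) below), the SOCP can be handed directly to a general-purpose conic solver. The following Python sketch uses the cvxpy modeling package; the problem sizes and tolerance are illustrative assumptions, not values from the text.

import cvxpy as cp
import numpy as np

# synthetic K-sparse recovery instance (all sizes are illustrative)
rng = np.random.default_rng(0)
M, N, K = 64, 256, 8
A = rng.normal(size=(M, N)) / np.sqrt(M)       # measurement matrix
alpha_true = np.zeros(N)
alpha_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)
sigma = 0.01
y = A @ alpha_true + sigma * rng.normal(size=M)

# assumed form of (2.9): min ||alpha||_1  s.t.  ||y - A alpha||_2 <= eps
eps = sigma * np.sqrt(M)                       # a common noise-level heuristic
alpha = cp.Variable(N)
problem = cp.Problem(cp.Minimize(cp.norm(alpha, 1)),
                     [cp.norm(y - A @ alpha, 2) <= eps])
problem.solve()                                # dispatched to a conic solver
print("recovery error:", np.linalg.norm(alpha.value - alpha_true))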
In what follows, we describe other CS-related reconstruction algorithms.
2.3.2.1 Iterative Thresholding Algorithms
A Lagrangian formulation of the problem (2.9) is the following:
$$\hat{\alpha} = \arg\min_{\alpha} \| y - A\alpha \|_2^2 + \lambda \| \alpha \|_1. \qquad (2.13)$$
There exists a mapping between $\lambda$ from (2.13) and $\varepsilon$ from (2.9) so that both problems (2.9) and (2.13) are equivalent. Several authors have proposed to solve (2.13) iteratively [12], [45], [11], [9]. This algorithm iteratively performs a soft-thresholding to decrease the $\ell_1$ norm of the coefficients $\alpha$ and a gradient descent to decrease the value of $\| y - A\alpha \|_2^2$. The following iteration is usually used:
$$\alpha^{n+1} = T_{\lambda}\!\left( \alpha^{n} + A^{*} ( y - A\alpha^{n} ) \right), \qquad (2.14)$$
where $T_\lambda$ is the element-wise soft-thresholding operator.
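To make the iteration concrete, here is a minimal Python sketch. It assumes the usual convention $T_\lambda(x)_i = \mathrm{sgn}(x_i)\max(|x_i| - \lambda, 0)$ for soft-thresholding (the exact threshold scaling depends on how (2.13) is normalized), and it assumes, as iteration (2.14) implicitly does, that $\|A\|_2 \leq 1$; all names and parameter values are illustrative.

import numpy as np

def soft_threshold(x, lam):
    # element-wise soft-thresholding operator T_lam
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam, n_iter=1000):
    # iteration (2.14): a gradient step on ||y - A alpha||_2^2 followed
    # by soft-thresholding; assumes ||A||_2 <= 1 for convergence
    alpha = np.zeros(A.shape[1])
    for _ in range(n_iter):
        alpha = soft_threshold(alpha + A.T @ (y - A @ alpha), lam)
    return alpha

# illustrative usage on synthetic data
rng = np.random.default_rng(0)
M, N, K = 64, 256, 8
A = rng.normal(size=(M, N))
A /= np.linalg.norm(A, 2)              # enforce ||A||_2 <= 1
alpha_true = np.zeros(N)
alpha_true[rng.choice(N, K, replace=False)] = 1.0
y = A @ alpha_true + 0.001 * rng.normal(size=M)
alpha_hat = ista(A, y, lam=1e-4)
print("largest entries:", np.argsort(-np.abs(alpha_hat))[:K])

Each pass costs two matrix-vector products, which is why this scheme scales to problems where forming and solving the SOCP directly would be too expensive.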