The problem (2.9) is often known as Basis Pursuit DeNoising (BPDN) [38]. In [22], Candès et al. showed that the solution to (2.9) recovers an unknown sparse signal with an error at most proportional to the noise level.
Theorem 2.1. [22] Let $A$ satisfy the RIP of order $4K$ with $\delta_{3K} + 3\delta_{4K} < 2$. Then, for any $K$-sparse signal $\alpha$ and any perturbation $\eta$ with $\|\eta\|_2 \le \varepsilon$, the solution $\hat{\alpha}$ to (2.9) obeys
$$\|\hat{\alpha} - \alpha\|_2 \le C_K\,\varepsilon$$
with a well-behaved constant $C_K$.
Note that for $K$ obeying the condition of the theorem, the reconstruction from noiseless data is exact. A similar result also holds for stable recovery of approximately sparse (i.e., compressible) signals from imperfect measurements.
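To make (2.9) concrete, here is a minimal numerical sketch of BPDN using the cvxpy modeling library. The dimensions, noise level, and solver defaults are illustrative assumptions, not values from the text:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
M, N, K = 80, 256, 8                           # illustrative sizes (assumed)

A = rng.standard_normal((M, N)) / np.sqrt(M)   # random Gaussian sensing matrix
alpha = np.zeros(N)
alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)  # K-sparse signal

noise = 1e-2 * rng.standard_normal(M)
y = A @ alpha + noise
eps = np.linalg.norm(noise)                    # in practice, an upper bound on ||eta||_2

# BPDN, i.e. problem (2.9): minimize ||x||_1 subject to ||y - A x||_2 <= eps
x = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm1(x)), [cp.norm2(y - A @ x) <= eps]).solve()

# Theorem 2.1 predicts ||alpha_hat - alpha||_2 <= C_K * eps
print(np.linalg.norm(x.value - alpha), eps)
```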
Theorem 2.2. [22] Let $A$ satisfy the RIP of order $4K$. Suppose that $\alpha$ is an arbitrary vector in $\mathbb{R}^N$ and let $\alpha_K$ be the truncated vector corresponding to the $K$ largest values of $\alpha$ in magnitude. Under the hypothesis of Theorem 2.1, the solution $\hat{\alpha}$ to (2.9) obeys
$$\|\hat{\alpha} - \alpha\|_2 \le C_{1,K}\,\varepsilon + C_{2,K}\,\frac{\|\alpha - \alpha_K\|_1}{\sqrt{K}}$$
with well-behaved constants $C_{1,K}$ and $C_{2,K}$.
If $\alpha$ obeys (2.3), then
$$\frac{\|\alpha - \alpha_K\|_1}{\sqrt{K}} \le C\,K^{-(s-1/2)},$$
so in this case
$$\|\hat{\alpha} - \alpha\|_2 \le C\,K^{-(s-1/2)},$$
and for signals obeying (2.3), there are fundamentally no better estimates available.
This, in turn, means that with only $M$ measurements, one can achieve an approximation error almost as good as the one obtained by knowing everything about the signal $\alpha$ and selecting its $K$ largest elements [22].
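As a rough numerical check of the $K^{-(s-1/2)}$ rate, the sketch below assumes (2.3) is a power-law decay model, $|\alpha|_{(n)} \le C\,n^{-s}$, which the exponent above suggests; the signal length and decay rate are illustrative:

```python
import numpy as np

# Hypothetical compressible signal: sorted coefficients decaying like n^{-s}
N, s = 100_000, 1.5
n = np.arange(1, N + 1)
alpha = n ** (-s)                      # already sorted by decreasing magnitude

for K in (10, 100, 1000):
    tail_l1 = alpha[K:].sum()          # ||alpha - alpha_K||_1, the best K-term tail
    # both columns shrink at essentially the same K^{-(s-1/2)} rate
    print(K, tail_l1 / np.sqrt(K), K ** -(s - 0.5))
```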
2.3.1.1 The Dantzig selector
In (2.8), if the noise is assumed to be Gaussian with mean zero and variance $\sigma^2$, i.e., $\eta \sim \mathcal{N}(0, \sigma^2)$, then stable recovery of the signal is also possible by solving a modified optimization problem
$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_1 \quad \text{s.t.} \quad \|A^T(y - A\alpha)\|_\infty \le \varepsilon \qquad (2.10)$$
where $\varepsilon = \lambda_N\sigma$ for some $\lambda_N > 0$, and $\|\cdot\|_\infty$ denotes the $\ell_\infty$ norm. For an $N$-dimensional vector $x$, it is defined as $\|x\|_\infty = \max(|x_1|, \ldots, |x_N|)$. The above program is known as the Dantzig Selector [28].
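A corresponding cvxpy sketch of (2.10), mirroring the BPDN example above; the choice $\lambda_N = \sqrt{2\log N}$ is a common one in the literature and is an assumption here, not a prescription from the text:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
M, N, K, sigma = 80, 256, 8, 1e-2              # illustrative values (assumed)

A = rng.standard_normal((M, N)) / np.sqrt(M)
alpha = np.zeros(N)
alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ alpha + sigma * rng.standard_normal(M)  # eta ~ N(0, sigma^2)

lam_N = np.sqrt(2 * np.log(N))                  # common choice of lambda_N (assumption)
eps = lam_N * sigma

# Dantzig selector, problem (2.10): min ||x||_1 s.t. ||A^T (y - A x)||_inf <= eps
x = cp.Variable(N)
cp.Problem(cp.Minimize(cp.norm1(x)),
           [cp.norm_inf(A.T @ (y - A @ x)) <= eps]).solve()

print(np.linalg.norm(x.value - alpha))
```

Note that, unlike (2.9), the constraint in (2.10) is a bound on the correlation of the residual with the columns of $A$, which makes the program a linear program.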