Candès et al. (2006c) have shown that if F satisfies the RIP with a RIC δ_2k < √2 − 1, then the solution α̃ to equation (7.5) obeys

‖α̃ − α‖_2 ≤ (C_1/√k) ‖α − α_k‖_1 + C_2 σ ,        (11.6)

where α_k is the restriction of α to its k largest entries, C_1 = 2(1 + (√2 − 1)δ_2k) / (1 − (√2 + 1)δ_2k) and C_2 = 4√(1 + δ_2k) / (1 − (√2 + 1)δ_2k). For instance, for δ_2k = 0.2, C_1 < 4.2 and C_2 < 8.5; see Candès et al. (2006c). The preceding property is also known as the ℓ_1 instance optimality after Cohen et al. (2009).
This result is worth some comment. First, it says that the ℓ_2 reconstruction error by ℓ_1 minimization is finite. Furthermore, the error is bounded by two terms. The first one is basically the approximation error that one would expect in the noiseless case if we knew exactly the locations of the nonzero coefficients (see Section 1.1.2). This error vanishes if the signal is strictly sparse. For compressible signals, the quality of the (nonadaptive) CS ℓ_1 decoder is within a factor of that of an oracle that knows the signal perfectly. The second error term is proportional to the noise size and predicts that the CS decoder will degrade reasonably as the noise increases. The preceding result is also fully deterministic, and the same sensing matrix will work for any k-sparse signal, with the proviso that it satisfies the RIP condition. In this sense, CS is a universal encoding strategy.
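As a quick sanity check on the constants quoted above, the closed-form expressions for C_1 and C_2 can be evaluated directly. The following is a minimal sketch in Python (the function name is ours):

```python
import math

def bpdn_constants(delta_2k):
    """Stability constants C1 and C2 in the l1-recovery error bound
    ||a~ - a||_2 <= (C1 / sqrt(k)) * ||a - a_k||_1 + C2 * sigma,
    valid when the RIC satisfies delta_2k < sqrt(2) - 1."""
    assert delta_2k < math.sqrt(2) - 1, "RIP condition delta_2k < sqrt(2)-1 violated"
    denom = 1.0 - (math.sqrt(2) + 1.0) * delta_2k
    c1 = 2.0 * (1.0 + (math.sqrt(2) - 1.0) * delta_2k) / denom
    c2 = 4.0 * math.sqrt(1.0 + delta_2k) / denom
    return c1, c2

c1, c2 = bpdn_constants(0.2)
print(c1, c2)  # C1 < 4.2 and C2 < 8.5, as quoted in the text
```

Note how both constants blow up as δ_2k approaches 1/(√2 + 1) = √2 − 1, which is exactly where the sufficient condition fails.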
11.5 DESIGNING GOOD MATRICES: RANDOM SENSING
If we are fortunate enough to be able to construct sensing matrices H such that F satisfies the RIP of order 2k with the appropriate RIC, then we are done. This is a matrix design problem that requires that equation (11.5) hold for all possible 2k-sparse vectors. This is where randomness and probability theory come into play.¹ Deterministic constructions of matrices satisfying the RIP are not as well developed as random ones, and only a few of them exist in the literature (DeVore 2007; Gurevich et al. 2008; Saligrama 2008).
It turns out that random constructions provide matrices obeying the RIP with overwhelming probability. We give some useful examples here, without delving into the technical details, which are far from obvious. The interested reader may refer to the review paper of Candès and Wakin (2008), and references therein, for further details.
Suppose that the entries of H are independently and identically distributed (iid) Gaussian with mean zero and variance 1/m. Then, with high probability, F = HΦ (recall that Φ is an orthobasis) satisfies the required RIP condition for equation (11.6) to hold, provided that

m ≥ C k log(N/k) .        (11.7)
The proof uses known concentration results about the singular values of Gaussian matrices. Sensing matrices whose columns are uniform on the unit sphere in ℝ^m, or whose entries are iid from a sub-Gaussian distribution (e.g., Bernoulli) with columns
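The Gaussian construction above is easy to probe numerically: draw H with iid N(0, 1/m) entries, with m chosen as in equation (11.7), and check that ‖Hx‖_2² stays close to ‖x‖_2² for random 2k-sparse vectors. This is only an empirical illustration of concentration, not a verification of the RIP (which quantifies over all 2k-sparse vectors); the constant C and the dimensions below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 256, 5
C = 2.0                             # illustrative constant, not the one in (11.7)
m = int(C * k * np.log(N / k))      # number of measurements suggested by (11.7)

# iid Gaussian sensing matrix with mean zero and variance 1/m, as in the text
H = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, N))

# Probe near-isometry on random 2k-sparse vectors: the energy ratio
# ||Hx||^2 / ||x||^2 should concentrate around 1 with high probability.
ratios = []
for _ in range(500):
    support = rng.choice(N, size=2 * k, replace=False)
    x = np.zeros(N)
    x[support] = rng.normal(size=2 * k)
    ratios.append(np.linalg.norm(H @ x) ** 2 / np.linalg.norm(x) ** 2)

print(min(ratios), max(ratios))  # both close to 1
```

For a fixed x, the ratio is a rescaled chi-square variable with m degrees of freedom, so its standard deviation shrinks like √(2/m); increasing m tightens the observed spread around 1, consistent with the logarithmic measurement bound.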
¹ The probabilistic framework has already been used when the result (11.4) was stated.
 