references therein]. We take f_min = 0 and f_max = 1 to set up our test problem; we sample X(t) at the Nyquist rate for complex signals, Δt = 1/f_max = 1, to obtain the sampled time series X_S of length N from t = 0 to t = N − 1, where N is the number of time samples. As in all applications of compressive sensing, we make a sparsity assumption, K ≪ N, and mix the input signal vector X_S plus sampled noise n down in dimension to the measured vector y of dimension M:

y = Φ(X_S + n), (5)
where Φ is an M × N mixing matrix and K < M ≪ N. Note that in eq. (5) n is added to X_S prior to mixing; in other models the noise is added to the mixed signal ΦX_S, or even to Φ itself. We generate the elements of Φ using the pseudo-random number functions in our software (Mathematica and Python), drawing them uniformly from the set of nine complex numbers {−1 − i, −1, −1 + i, −i, 0, i, 1 − i, 1, 1 + i}; equivalently, each element is a random integer drawn from {−1, 0, 1} plus i times an independent random integer drawn from {−1, 0, 1}. We use a complex mixing matrix because our signal model is complex.
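As a minimal sketch of this construction in Python with NumPy (the dimensions M and N below are illustrative choices, not values from the text):

import numpy as np

rng = np.random.default_rng(0)  # seeded pseudo-random generator

M, N = 64, 256  # illustrative dimensions with M << N

# Each element of Phi is a + i*b with a and b drawn independently and
# uniformly from {-1, 0, 1}, i.e., uniform over the nine complex values.
Phi = (rng.integers(-1, 2, size=(M, N))
       + 1j * rng.integers(-1, 2, size=(M, N)))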
The noise is assumed to be independent and identically distributed (i.i.d.) Gaussian with standard deviation σ/√2, added to the real and the imaginary part of each element of X_S, so that the covariance of n is σ²I, where I is the N × N identity matrix.
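Continuing the sketch above, the test signal, the noise, and the measurement of eq. (5) might be generated as follows (the sparsity K, noise level sigma, and frequency placement are illustrative assumptions):

K = 5        # assumed sparsity level (illustrative)
sigma = 0.1  # noise standard deviation (illustrative)

# K unit-amplitude complex sinusoids on the DFT frequency grid.
t = np.arange(N)
freqs = rng.choice(N, size=K, replace=False)
X_S = np.exp(2j * np.pi * np.outer(freqs, t) / N).sum(axis=0)

# i.i.d. Gaussian noise with std sigma/sqrt(2) on the real and the
# imaginary parts, so that Cov[n] = sigma**2 * I.
n = sigma / np.sqrt(2) * (rng.standard_normal(N)
                          + 1j * rng.standard_normal(N))

y = Phi @ (X_S + n)  # eq. (5): noise enters before mixing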
If the frequencies lie on the DFT frequency grid associated with the time series t = 0 to t = N − 1, eq. (5) can be solved for the frequencies by writing s = DFT X_S, substituting X_S = IDFT s (IDFT = inverse DFT) in eq. (5), and solving y = Φ(IDFT s + n) by minimizing the ℓ1 norm of s subject to the measurement constraint eq. (5) if n = 0, or by minimizing the ℓ1 norm penalized by an arbitrary fraction of the constraint (LASSO) in the presence of noise (Candès & Wakin, 2008; Loris, 2008).
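To make the on-grid recovery step concrete, here is a sketch of the noisy (LASSO) variant solved by a proximal-gradient (ISTA) iteration; the solver, the regularization weight lam, and the iteration count are our assumptions, since the text does not prescribe an algorithm:

# Matrix form of the IDFT, so that X_S = F_inv @ s.
F_inv = np.fft.ifft(np.eye(N), axis=0)
A = Phi @ F_inv  # combined operator mapping DFT coefficients s to y

def soft_threshold(z, tau):
    # Complex soft-thresholding: shrink magnitudes, preserve phases.
    mag = np.maximum(np.abs(z), 1e-12)
    return np.maximum(mag - tau, 0.0) / mag * z

lam = 0.1 * np.max(np.abs(A.conj().T @ y))  # heuristic weight (assumption)
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
s = np.zeros(N, dtype=complex)
for _ in range(500):                        # ISTA iterations
    s = soft_threshold(s - A.conj().T @ (A @ s - y) / L, lam / L)

The recovered spectral support is where |s| is large; the frequencies follow from the corresponding DFT grid indices.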
Although the noise is assumed to be i.i.d., the mixing matrix Φ colors the noise in the observation vector y. The covariance of y is given by

Cov[y] = σ²ΦΦ^H, (6)

and the standard maximum likelihood estimator requires definition of a weighting matrix W,

W = (ΦΦ^H)^−1, (7)

where the superscript H indicates the Hermitian conjugate (see Stoica et al., 2000, and Chan & So, 2004, for a discussion of weighted estimators in NLS). If the inverse in eq. (7) is ill-conditioned or does not exist, this indicates a poor choice of mixing matrix Φ, and another one should be chosen.
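A short sketch of eqs. (6) and (7) in the same setting (the conditioning threshold is an arbitrary choice, not from the text):

G = Phi @ Phi.conj().T        # M x M Gram matrix; Cov[y] = sigma**2 * G, eq. (6)
if np.linalg.cond(G) > 1e8:   # arbitrary ill-conditioning threshold
    raise ValueError("poor choice of mixing matrix Phi; draw another")
W = np.linalg.inv(G)          # weighting matrix, eq. (7)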
The maximum likelihood estimate of X_S is obtained by finding the vector Z that minimizes the weighted square of the residual,

C(Z) = (ΦZ − y)^H W (ΦZ − y). (8)
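In code, the cost of eq. (8) is a one-liner (a sketch reusing Phi, W, and y from above):

def C(Z):
    # Weighted squared residual of eq. (8); real-valued since W is Hermitian.
    r = Phi @ Z - y
    return (r.conj() @ W @ r).real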
Z is a vector taken from the linear subspace spanned by at most K complex sinusoids sampled over t = 0 to N − 1 (see the corresponding equation for determining the amplitude and frequency of a sum of complex sinusoids in a system that does not have compressive sensing, Stoica & Moses, 1997, eq. 4.3.6). CS recovery is equivalent to determining the spectral support (that is, the K unknown frequencies) of the input signal X_S, or equivalently determining the vector Z that minimizes eq. (8) (Duarte & Baraniuk, 2010). In the absence of noise, weighting with W is unnecessary because the solution is exact and both the weighted and unweighted residuals vanish at the same Z.
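As an illustration of this support-constrained minimization (a sketch reusing the names defined above; an exhaustive search over all K-subsets of frequencies would realize the definition exactly but is combinatorial, so only a single candidate support is evaluated here):

def cost_for_support(support):
    # Minimize eq. (8) over the subspace spanned by complex sinusoids at
    # the given DFT frequencies, via weighted least squares.
    B = np.exp(2j * np.pi * np.outer(t, np.asarray(support)) / N)  # N x K basis
    PB = Phi @ B
    a = np.linalg.solve(PB.conj().T @ W @ PB, PB.conj().T @ W @ y)
    return C(B @ a)

# For example, the cost at the true support used to synthesize X_S:
cost_true = cost_for_support(np.sort(freqs))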