Image Processing Reference
In-Depth Information
The ℓ1-norm code used in our studies (Loris, 2008) can be run with the frequency grid
subdivided by a factor of 8 or more, but the results are not sparse for the test case described
above: more frequencies are returned than are present in the input signal. Good
approximations (consistent with the OMP estimates) can be obtained by thresholding the
recovered vector s in eq. (2), but the appropriate threshold depends on both the
oversampling ratio and the random seed used to generate the frequencies.
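As a concrete sketch of this post-processing step, the snippet below hard-thresholds a recovered coefficient vector. The vector values and the threshold tau are hypothetical; as noted above, tau would have to be tuned to the oversampling ratio and to the particular realization.

```python
import numpy as np

def hard_threshold(s, tau):
    """Zero out coefficients whose magnitude falls below tau (hypothetical value)."""
    s_thresh = s.copy()
    s_thresh[np.abs(s_thresh) < tau] = 0.0
    return s_thresh

# Illustrative recovered vector: two true frequencies plus small spurious entries.
s = np.array([0.0, 0.9, 0.02, 0.0, -0.03, 1.1, 0.01])
print(hard_threshold(s, 0.1))  # keeps only the two large coefficients
```

The choice of hard (rather than soft) thresholding is itself an assumption; the point is only that a data-dependent cutoff separates the true support from the small spurious coefficients.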
7. Performance estimates
As discussed above, this study is based on experimental or empirical evaluation (i.e.,
numerical simulations) of a proposed technique for recovering compressively sensed
signals. The weakness of such a study is that calculations alone provide no performance
guarantees; its strength is that calculations can evaluate practical cases of the kind
encountered in real applications. Regardless, an algorithm is of little use unless we know
when and how it fails, and prior work on performance guarantees for CS, OMP, and NLS
can help us here.
Consider first the noise-free case in which the number of sinusoids is known. Here the
difference between success and failure is computationally obvious: if the recovery is
successful, the residual after extraction of the known number of sinusoids collapses to near
machine precision; if it fails, the residual remains at about the level of the initial
measurement vector y. In the presence of noise the situation is similar, except that the
collapse is to the system noise level. If the number of sinusoids is unknown, then recovery
proceeds until the system noise level is reached, and statistical testing must be used to
determine whether the residual at that threshold is due to noise or to incorrectly recovered
signal components.
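A minimal sketch of this residual-based stopping test, written as a generic greedy (matching-pursuit-style) loop. The function name, the least-squares coefficient update, and the noise_level parameter are our own illustrative assumptions, not code from the study:

```python
import numpy as np

def recover_with_residual_test(y, Phi, noise_level, max_iters=20):
    """Greedy sketch: extract columns of Phi until the residual collapses
    to the noise level (or near machine precision in the noise-free case)."""
    r = y.copy()
    support = []
    for _ in range(max_iters):
        if np.linalg.norm(r) <= noise_level:        # success: residual collapsed
            break
        k = int(np.argmax(np.abs(Phi.T @ r)))       # best-matching column
        support.append(k)
        A = Phi[:, support]
        x, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares fit on support
        r = y - A @ x                               # updated residual
    return support, r

# Noise-free demo with a trivially incoherent Phi: y is 2-sparse, so the
# residual collapses once both columns are found.
Phi = np.eye(8)
y = np.zeros(8)
y[2], y[5] = 1.0, -2.0
support, r = recover_with_residual_test(y, Phi, noise_level=1e-10)
print(sorted(support), np.linalg.norm(r))
```

If the loop exhausts max_iters with the residual still near the level of y, the run is declared a failure, mirroring the success/failure dichotomy described above.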
Practical use of the OMP/NLS algorithm requires, at a minimum, empirical knowledge of
where the algorithm fails and, ultimately, performance guarantees and complexity estimates
(operation counts). Since this algorithm couples two well-known algorithms, in principle we
can rely on previous work. The problem can be divided into three parts.

First, one must assess the compressive-sensing part of the problem. Does the mixing matrix
Φ satisfy the appropriate conditions? Is the value of M large enough to recover the K
unknowns? Is the signal really sparse in the chosen model, and is the model even applicable
to the signal of interest? Our empirical observations suggest that it is difficult for a random
number generator to pick a bad mixing matrix. They also suggest that the requirement on M
for recovery is of the same order as that derived for grid-based CS, M ~ K log(N/K).

Second, the sampling in the overcomplete dictionary must be fine enough that the first
frequency found by the argmax of G(f, r) in (7) is near a true solution. If this is not the case,
whether due to insufficient granularity, multiple frequencies too close together, or high
noise levels, the OMP cannot start. This issue is not restricted to our work but is common to
all matching pursuit algorithms. While we do not have performance guarantees here, we
have noted empirically that lack of convergence is very easy to detect for a known number
of sinusoids and a known noise floor.

Finally, the NLS must be able to converge. Here we can rely on the results given by (Stoica
et al., 2000; Li et al., 2000; Chan and So, 2004; Christensen and Jensen, 2006) that the NLS
achieves the Cramér-Rao bound. Empirically, we observe that the dictionary must be
sufficiently overcomplete that the NLS searches for each frequency solution within a single
local minimum.
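The grid-search step in the second part can be sketched as follows. The correlation functional used here is an assumed stand-in for G(f, r) in eq. (7), and the sample times, grid spacing, and test frequency are all hypothetical:

```python
import numpy as np

def coarse_frequency_pick(r, t, f_grid):
    """Sketch of the argmax step: correlate the residual r (sampled at times t)
    with complex sinusoid atoms on an overcomplete frequency grid and return
    the grid frequency with the largest correlation magnitude."""
    atoms = np.exp(-2j * np.pi * np.outer(f_grid, t))  # one atom per grid frequency
    G = np.abs(atoms @ r) / np.sqrt(len(t))            # correlation magnitude
    return f_grid[int(np.argmax(G))]

# A single tone at a hypothetical 12.3 Hz, sampled at random times, is located
# to within the main-lobe width, giving the NLS a starting point inside the
# local minimum that contains the true solution.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 64))
r = np.cos(2 * np.pi * 12.3 * t)
f_grid = np.linspace(0.0, 50.0, 2001)                  # 0.025 Hz grid spacing
print(coarse_frequency_pick(r, t, f_grid))
```

If the grid spacing were made much coarser than the main-lobe width, the argmax could land outside the correct basin of attraction, which is precisely the failure mode described above.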