the accurately extrapolated frequency band to a rather small domain outside
the known data window. While one might argue that further increasing the sampling rate would compensate for the lack of accuracy, this is not the case; the only way to extend the extrapolated region is to improve the quality of the data.
It was suggested and later verified that this instability can be interpreted as
being linked to the formation of super-oscillations. In other words, numerical
super-resolution algorithms construct super-oscillating signals, which opti-
mally resemble the object function. We note that this relationship between
super-resolution algorithms and optical super-resolution phenomena was
exploited from a practical point of view by constructing super-resolving filters
with the PDFT algorithm.
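The PDFT estimator referred to here can be sketched in a few lines of NumPy. In the following one-dimensional toy (the object, the measured band of seventeen DFT samples, and the indicator prior p(x) are all illustrative assumptions, not values from the text), the object is modelled as a prior-weighted trigonometric sum whose coefficients are fitted to the measured low-frequency data, and the result is compared with a plain low-pass DFT reconstruction of the same data:

```python
import numpy as np

N = 256
x = np.arange(N)

# Hypothetical test object: a smooth bump on a narrow support.
f = np.zeros(N)
f[110:130] = np.hanning(20)

# Measured data: low-frequency DFT samples only (indices taken mod N).
ks = np.arange(-8, 9)
data = np.fft.fft(f)[ks % N]

# Prior p(x): indicator of an assumed support region, wider than the truth.
p = np.zeros(N)
p[105:135] = 1.0
P = np.fft.fft(p)

# PDFT: model the object as p(x) * sum_n a_n exp(2j*pi*k_n*x/N); matching the
# measured DFT samples gives the linear system  sum_n P(k_m - k_n) a_n = F(k_m).
K = P[(ks[:, None] - ks[None, :]) % N]
a = np.linalg.lstsq(K, data, rcond=1e-6)[0]   # truncated (regularized) solve

est = (p * (np.exp(2j * np.pi * np.outer(x, ks) / N) @ a)).real

# Baseline: plain low-pass DFT reconstruction from the same seventeen samples.
lowpass = np.zeros(N, dtype=complex)
lowpass[ks % N] = data
dft_est = np.fft.ifft(lowpass).real

err_pdft = np.linalg.norm(est - f) / np.linalg.norm(f)
err_dft = np.linalg.norm(dft_est - f) / np.linalg.norm(f)
print(err_pdft, err_dft)
```

Because the prior confines the estimate to the assumed support, the PDFT reconstruction typically tracks the object more closely than the naive low-pass estimate; with a poorly chosen prior this advantage disappears.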
The strong dependence of the achievable bandwidth extrapolation on the
SNR of the data also points to a trade-off in the Lukosz sense. The space-
bandwidth product of the measured signal can be calculated straightforwardly as S = B_max D, and we attempt to extract information about more than S
independent signal features by trading for SNR. This is the same relationship
we found for super-resolving filters and super-oscillations. The exponential
growth of the required SNR as a function of bandwidth can be identified as
the main constraint for super-resolution imaging.
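This trade-off can be observed numerically with a standard Gerchberg–Papoulis extrapolation, a POCS iteration that alternates between restoring the measured frequency band and enforcing a known spatial support (the object, support, band, and noise level below are arbitrary illustrative choices, not parameters from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256
# Space-limited object: the support constraint is the prior that drives extrapolation.
support = np.zeros(N, dtype=bool)
support[112:144] = True
obj = np.zeros(N)
obj[support] = rng.standard_normal(support.sum())
spectrum = np.fft.fft(obj)

# Measured band: low frequencies only (bins -12 ... 12).
band = np.zeros(N, dtype=bool)
band[:13] = True
band[-12:] = True

def gerchberg_papoulis(band_data, n_iter=300):
    """Alternate between restoring the measured in-band spectrum and
    enforcing the known spatial support (a POCS iteration)."""
    est = np.zeros(N, dtype=complex)
    for _ in range(n_iter):
        est[band] = band_data          # data-consistency projection
        g = np.fft.ifft(est)
        g[~support] = 0.0              # support projection
        est = np.fft.fft(g)
    return est

def out_of_band_error(est):
    return (np.linalg.norm(est[~band] - spectrum[~band])
            / np.linalg.norm(spectrum[~band]))

clean = gerchberg_papoulis(spectrum[band])
noisy = gerchberg_papoulis(spectrum[band] + 0.2 * rng.standard_normal(band.sum()))
print(out_of_band_error(clean), out_of_band_error(noisy))
```

In the noiseless run the relative out-of-band error falls below its starting value of 1, but only partially, and perturbing the band data typically degrades the extrapolated components first: the region of reliable extrapolation shrinks as the SNR drops, in line with the exponential SNR requirement noted above.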
The interpretation of the PDFT algorithm as a “Lukosz trade-off” paints a pessimistic picture of the prospect of calculating super-resolved images.
However, many numerical super-resolution algorithms have been reported to achieve significant improvements over the classical diffraction limit. This apparent contradiction is resolved by distinguishing between what are called primary and secondary super-resolution. We interpret primary super-resolution as the gain in resolution obtained by exploiting the Lukosz trade-off between SNR and image resolution. This gain is essentially independent of the object signal and the number of samples, and it adheres to the
properties of the PDFT algorithm discussed so far. Secondary super-resolu-
tion is defined as any additional gain in image resolution not related to pri-
mary super-resolution. This gain may be related to other Lukosz trade-offs,
particularly in the context of image reconstruction from multiple encoded
image frames. For instance, imaging through atmospheric turbulence and
image reconstruction from multiple aliased images may be interpreted as
extracting the desired information simply from multiple separate measure-
ments. However, in each frame the different frequency bands overlap, and the information gain is transmitted in a manner similar to the degrees of freedom accessible through primary super-resolution. It is therefore not surprising that similar
algorithms are used to recover the image, and that in these cases the algorithm achieves superior image resolution simply because of the better SNR of the high-frequency components.
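The multi-frame idea can be made concrete with a deliberately simple one-dimensional sketch (the two-frame, half-rate sampling scheme below is an assumed toy setup, not a model of atmospheric turbulence): each frame is aliased on its own, yet together the frames carry the high-frequency information.

```python
import numpy as np

# Fine-grid "scene" containing a frequency above each frame's Nyquist limit.
N = 64
t = np.arange(N) / N
scene = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

# Two undersampled frames: each takes every second sample, with a relative
# shift of one fine-grid step between acquisitions.
frame_a = scene[0::2]   # 32 samples; per-frame Nyquist limit is 16 cycles
frame_b = scene[1::2]   # same rate, sub-sample shifted

# In either frame alone the 20-cycle component is aliased (it appears at
# 32 - 20 = 12 cycles); interleaving the frames restores the fine grid.
merged = np.empty(N)
merged[0::2] = frame_a
merged[1::2] = frame_b
print(np.allclose(merged, scene))
```

Real multi-frame reconstruction must also estimate the inter-frame shifts and separate the overlapping frequency bands, which is where the SNR of the high-frequency components enters.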
However, a second source of secondary super-resolution is the use of prior information, and the PDFT algorithm can be used to understand this
type of secondary super-resolution more intuitively as an interrelationship
between the estimator and the object function. In particular, we can charac-
terize the estimator in terms of the prior knowledge we inject into the recon-
struction process. Since we strive to estimate a continuous signal from a
finite set of data, we are solving an ill-posed problem and must always make assumptions to select the true object function from the infinite set of possible signal reconstructions. We interpret secondary super-resolution as the result