fused image, however, at the cost of computing the fusion weights, which can be computationally expensive, especially when done on a per-pixel basis. We develop an information-theoretic strategy for selecting only a few, but specific, hyperspectral bands that capture most of the information content in the hyperspectral data. We select a subset of hyperspectral bands that are mutually less correlated with each other in order to minimize the redundancy in the input hyperspectral data. A particular hyperspectral band is selected for fusion only when it contains a significant amount of additional information compared to the previously selected bands. The band selection scheme is independent of the fusion technique to be employed, and thus the subset of selected bands can be fused using any pixel-based fusion technique. The hyperspectral bands are typically ordered by their spectral wavelengths. We develop a model for the conditional entropy of the bands as a function of the spectral distance between them. We also discuss a special case of band selection for this spectrally ordered data. This scheme provides a computationally efficient and fast way to select the subset of bands. An appropriate combination of these bands provides a fused image with minimal loss in visual quality compared to the output image formed by fusing the entire hyperspectral data with the same fusion technique. We also provide theoretical bounds on the savings in computation as a function of the number of bands selected.
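The greedy, entropy-based selection described above can be sketched as follows. This is a minimal illustration for the spectrally ordered special case, where each candidate band is compared only against the most recently selected one; the 64-bin histograms and the conditional-entropy threshold of 0.5 bits are illustrative assumptions, not values from the text.

```python
import numpy as np

def band_entropy(band, bins=64):
    """Shannon entropy (bits) of a band's intensity histogram."""
    hist, _ = np.histogram(band, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def conditional_entropy(band, ref, bins=64):
    """H(band | ref) estimated from the joint histogram as H(band, ref) - H(ref)."""
    joint, _, _ = np.histogram2d(band.ravel(), ref.ravel(), bins=bins)
    pj = joint / joint.sum()
    pj = pj[pj > 0]
    h_joint = -np.sum(pj * np.log2(pj))
    return h_joint - band_entropy(ref, bins)

def select_bands(cube, threshold=0.5):
    """Greedy selection over a (rows, cols, bands) cube: keep a band only
    if it adds enough information beyond the last selected band
    (illustrative threshold, in bits)."""
    selected = [0]
    for k in range(1, cube.shape[2]):
        if conditional_entropy(cube[:, :, k], cube[:, :, selected[-1]]) > threshold:
            selected.append(k)
    return selected
```

Because each band is tested against a single reference rather than all previously selected bands, the cost grows linearly in the number of bands, which is what makes the ordered special case fast.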
An array of observations at a given spatial location is known as the spectral signature of the corresponding pixel. Since the reflectance response of the scene elements is sensitive to wavelength, some scene regions are captured with high intensity values in only a fraction of the total number of image bands. Therefore, not every pixel carries an equal amount of information for visualization-oriented fusion, and the pixels that provide visually important information should contribute more. We employ a model of image formation that relates the input hyperspectral data to the resultant fused image through a set of parameters, referred to as the sensor selectivity factor, which quantifies how well the scene has been captured by the corresponding sensor elements. We determine the visual quality of a pixel without any ground truth being available. We develop a strategy to compute these parameters of the image formation model from properties of the data that relate to visual quality. In this technique, we consider well-exposedness and sharpness as the quality measures, along with a constraint of intra-band spatial smoothness on the model parameters.
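The two quality measures can be sketched as per-pixel weights. This is a minimal, assumed formulation: the Gaussian well-exposedness curve (centered at 0.5 with width 0.2) and the gradient-magnitude sharpness proxy are common choices in exposure-fusion literature, not necessarily the exact measures used here, and the intra-band smoothness constraint on the parameters is omitted.

```python
import numpy as np

def well_exposedness(band, mu=0.5, sigma=0.2):
    """Gaussian weight favoring mid-range intensities in [0, 1]
    (illustrative parameters, not from the text)."""
    return np.exp(-((band - mu) ** 2) / (2 * sigma ** 2))

def sharpness(band):
    """Local gradient magnitude as a simple sharpness proxy."""
    gy, gx = np.gradient(band)
    return np.sqrt(gx ** 2 + gy ** 2)

def fusion_weights(cube, eps=1e-9):
    """Per-pixel weights over a (rows, cols, bands) cube, combining the two
    quality measures and normalized so the weights at each pixel sum to one."""
    w = np.stack(
        [well_exposedness(cube[:, :, k]) * (sharpness(cube[:, :, k]) + eps)
         for k in range(cube.shape[2])], axis=2)
    return w / w.sum(axis=2, keepdims=True)

def fuse(cube):
    """Weighted per-pixel combination of bands; a sketch of pixel-based
    fusion, not the book's full model-based estimation."""
    return np.sum(fusion_weights(cube) * cube, axis=2)
```

Since the weights form a convex combination at every pixel, the fused value always stays within the range of the input band values at that pixel.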
In order to estimate the fused image, i.e., the true scene, we employ a Bayesian framework. However, instead of using L2 norm-based priors, which tend to reduce the sharpness of the image, we incorporate a total variation (TV) norm-based prior. Based on the L1 norm of the image gradient, it brings smoothness to the resultant image while preserving sharp discontinuities, and thus the edges. The solution provides fused images that are sharp and visually appealing, and it offers a two-fold flexibility. First, one may choose from a variety of quality measures to efficiently capture the pixels important for fusion. Second, the problem of fusion is posed as a statistical estimation problem for
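The edge-preserving behavior of the TV prior can be illustrated with a small sketch. This is not the book's Bayesian formulation: it minimizes a simple quadratic data term plus a smoothed TV penalty by gradient descent, with an assumed weight `lam` and step size, purely to show how the L1 gradient penalty smooths flat regions while keeping discontinuities.

```python
import numpy as np

def tv_smooth(g, lam=0.1, step=0.2, iters=50, eps=1e-8):
    """Gradient descent on 0.5*||f - g||^2 + lam * TV(f), where TV uses a
    smoothed L1 norm of the forward-difference gradient (illustrative sketch)."""
    f = g.astype(float).copy()
    for _ in range(iters):
        # forward differences, zero at the far boundary
        gx = np.zeros_like(f); gx[:, :-1] = f[:, 1:] - f[:, :-1]
        gy = np.zeros_like(f); gy[:-1, :] = f[1:, :] - f[:-1, :]
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag
        # divergence of the normalized gradient field (backward differences)
        div = np.zeros_like(f)
        div[:, 0] += px[:, 0]; div[:, 1:] += px[:, 1:] - px[:, :-1]
        div[0, :] += py[0, :]; div[1:, :] += py[1:, :] - py[:-1, :]
        # descent step: data-fidelity gradient minus lam * div(p)
        f -= step * ((f - g) - lam * div)
    return f
```

Run on a noisy step image, the result has visibly lower variation in the flat regions while the step edge survives, which is the qualitative advantage over an L2 prior.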