Image fusion is a specific case of sensor data fusion in which all the sensors
are imaging devices. Image fusion is a tool used across multiple disciplines of
technology to maximize the information extracted from a set of input images for a
particular application. It is defined as the process of combining information from
two or more images of a scene into a single composite image that is more informative
and more suitable for visual perception or computer processing [68].
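As a minimal illustration of this definition, two co-registered grayscale images can be combined by a pixel-wise weighted average. This is a hypothetical sketch only; practical fusion schemes typically use more elaborate, often multi-resolution, combination rules.

```python
import numpy as np

def weighted_average_fusion(img_a, img_b, w=0.5):
    """Fuse two co-registered grayscale images by a pixel-wise
    weighted average; w is the weight given to img_a."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return w * a + (1.0 - w) * b

# Toy example: two 2x2 "images" capturing different scene aspects.
a = np.array([[0, 100], [200, 50]])
b = np.array([[100, 0], [100, 150]])
fused = weighted_average_fusion(a, b)  # equal weights for both inputs
```

The single output array plays the role of the composite image: each pixel blends the corresponding measurements from both inputs.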
1.2 Hyperspectral Image Visualization
Consider a hyperspectral data cube consisting of nearly 200 bands. For visualization
purposes, we want to represent most of the information in these bands using a single
(grayscale or color) image. Through a process known as image fusion, we want to
create a single image that preserves as many features as possible from the
constituent bands for visualization by a human observer. If a specific application
requires a certain set of features to be highlighted, one may accordingly select and
enhance only the subset of hyperspectral bands in which these features are clear and
prominent. However, such knowledge of the application or the imaging system limits
the scope of the fusion process and makes it completely application dependent; it
therefore cannot be used over a large class of data as a generalized process. We
do not assume any knowledge of the parameters of the hyperspectral sensor.
Thus, the fusion system is expected to be designed in a blind way, generating
the output from the hyperspectral data alone, for any general class of applications.
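One way to sketch such a blind scheme is a normalized weighted average over all bands, with each band weighted by its global variance as a crude proxy for information content. This is a hypothetical illustration using only the data cube itself, with no sensor parameters or application knowledge; actual blind fusion techniques are considerably more elaborate.

```python
import numpy as np

def blind_variance_fusion(cube):
    """Fuse a hyperspectral cube of shape (H, W, B) into a single
    grayscale image. Each band is weighted by its variance, so
    bands with more spatial detail contribute more to the output."""
    cube = cube.astype(np.float64)
    weights = cube.var(axis=(0, 1))              # one weight per band
    weights = weights / (weights.sum() + 1e-12)  # normalize to sum to 1
    # Convex combination of the B bands at every pixel location.
    return np.tensordot(cube, weights, axes=([2], [0]))

# Toy cube: a 4x4 scene with 200 bands of reflectance values in [0, 1].
rng = np.random.default_rng(0)
cube = rng.random((4, 4, 200))
img = blind_variance_fusion(cube)                # shape (4, 4)
```

Because the weights are non-negative and sum to one, the fused pixel values stay within the range of the input data, which is convenient for display.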
Fusion can be performed to accomplish various objectives. These objectives and
applications are primarily responsible for the choice of fusion methodology. Fusion
has often been used as a means for providing a quick visualization to humans [20].
Multiple images capture different aspects of the scene. To visualize this disparate
information, the observer has to view all the images independently, and then manu-
ally combine the different pieces of data across the same spatial location provided
by the constituent images. One can obtain a quick visualization of the scene using
the reduced dataset through fusion. An appropriate fusion scheme generates a sin-
gle (grayscale or color) image which preserves most of the features of the input
images that are useful for human visualization. Fusion is also sometimes
considered a preprocessing step prior to classification. The remote sensing
data is often used for mapping of resources such as minerals, vegetation, or land.
This mapping involves classification of the scene pixels into a set of pre-defined
classes. The multiple bands provide complementary characteristics of the data use-
ful for the identification of the underlying materials. We have already explained in
the previous section the usefulness of the hyperspectral images in terms of providing
a dense sampling of the spectral signature at each pixel. However, due to the very
large number of bands, the classification process becomes computationally
expensive. One would like to retain only the data that contribute towards a better
classification, and discard the redundant remainder. A classification-oriented
fusion provides an effective solution to this problem. Here the main objective of