analytical system, one would like to combine the multiple sources of information in such a way that the final representation contains more useful information than any single input source. The process of combining or merging the data in a selective manner is known as data fusion.
Multisensor fusion has been defined as the synergistic combination of different sources of sensory information into a single representational format [203]. The aim of multisensor fusion is to utilize the information obtained from a number of sensors to achieve a better representation of the situation than would have been possible using any of the sensors individually. Image fusion is the specific case of multisensor fusion in which all the sensors are imaging sensors. It can be considered a subset of the much wider field of data fusion, and it has received considerable attention over the past 25 years. The need for fusion of images arose from
the availability of multisensor data in various fields such as remote sensing, military
applications, medical imaging, and machine vision. The process of image fusion
refers to the combination of several images depicting the same object, but each of
the images enhancing some particular features of the object. The primary objective
of fusing images is to create an image that is more informative and more relevant
to the particular application. The field of image fusion being very broad, it has been
defined in several contexts. A generalized definition of image fusion can be stated
as—“ the process of combining information from two or more images of a scene into
a single composite image that is more informative and is more suitable for visual
perception or computer processing ” [68].
Objectives of fusion differ with the applications. A single imaging sensor is often unable to provide complete information about the scene. The process of fusion aims at integrating the complementary information provided by the different sensors for a better interpretation of the scene. Since the early days, fusion has been mainly considered a means for presenting images to humans [20]. An initial and most direct technique of fusion was to sum and average the constituent input images. However, as the authors of [20] have pointed out, this technique does not produce satisfactory results.
Consider certain features that are prominent in only one of the input images. When averaging-based fusion is applied, such features are rendered with reduced contrast, or they suffer from the superimposition of other features from the remaining input images.
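This contrast loss is easy to demonstrate. The following sketch, using synthetic (hypothetical) data, fuses two images by pixel-wise averaging and measures the contrast of a feature that appears in only one input:

```python
import numpy as np

# Two synthetic input images of the same scene (hypothetical data):
# a bright square feature is prominent only in the first input.
img_a = np.zeros((64, 64), dtype=np.float64)
img_a[16:48, 16:48] = 200.0          # feature visible only in input A
img_b = np.full((64, 64), 100.0)     # flat background in input B

# Averaging-based fusion: pixel-wise mean of the inputs.
fused = (img_a + img_b) / 2.0

# Contrast of the feature against its surround, before and after fusion.
contrast_a = img_a[32, 32] - img_a[0, 0]   # 200 - 0  = 200
contrast_f = fused[32, 32] - fused[0, 0]   # 150 - 50 = 100
print(contrast_a, contrast_f)
```

The feature's contrast is exactly halved by averaging with the flat second image, which is why selective fusion rules are preferred over a plain mean.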
images. A sharp image is much easier for humans to perceive. It is also desirable in
several machine vision applications. Thus, fusion has been considered as one of the
tools for sharpening of images [33]. The sharpening may also involve an increase
in the spatial resolution of the image. This is one of the widely pursued objectives
of image fusion, especially in the remote sensing community. A high resolution
pan-chromatic image is fused with the low resolution data typically in the form
of multispectral images. This process, known as the pan-sharpening, is a standard
technique employed for enhancing the spatial resolution of the multispectral data.
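One common pan-sharpening scheme (among several; the text does not commit to a specific one) is the Brovey transform, which scales each upsampled multispectral band by the ratio of the panchromatic intensity to the mean of the bands. A minimal sketch on synthetic (hypothetical) data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a 4x-downsampled 3-band multispectral image and a
# full-resolution panchromatic band of the same scene.
ms_lo = rng.uniform(0.1, 1.0, size=(16, 16, 3))   # low-res multispectral
pan = rng.uniform(0.1, 1.0, size=(64, 64))        # high-res panchromatic

# Step 1: upsample the multispectral bands to the pan grid
# (nearest-neighbour repetition keeps the sketch dependency-free).
ms_hi = ms_lo.repeat(4, axis=0).repeat(4, axis=1)

# Step 2: Brovey transform -- scale each band by the ratio of the pan
# intensity to the mean of the upsampled bands, injecting the spatial
# detail of the pan image into every band.
intensity = ms_hi.mean(axis=2, keepdims=True)
sharpened = ms_hi * (pan[..., None] / intensity)

print(sharpened.shape)  # full pan resolution, multispectral band structure
```

By construction, the per-pixel mean of the sharpened bands equals the pan image, so the fused result carries the pan image's spatial detail while retaining the relative band ratios of the multispectral input.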
The multispectral data itself might be available in the form of a color (RGB) and thermal image pair. Due to differences in the physics of the sensors, the visible and thermal images provide complementary views of the scene. A simple surveillance system incorporating an RGB camera along with a thermal camera can be used to
identify security threats in public places. Fusion of the data streams obtained from