
monograph is limited to remote sensing applications, more specifically hyperspectral images. However, we also provide a brief description of fusion methodologies pertaining to the other application areas.

Remote sensing has been one of the leading image fusion applications, with a large number of dedicated publications. Research in this area has been growing continuously as more precise and sophisticated imaging devices come into use. In 1999, Pohl and van Genderen presented an in-depth review of the existing work in multisensor image fusion up to that time [140]. This work covers a comprehensive range of fusion techniques together with their objectives: basic arithmetic techniques such as addition or ratio images, computationally demanding subspace techniques based on principal component analysis (PCA), and wavelet-based multi-resolution techniques. The article also introduces a number of applications of fusion, including topographic mapping, land use, flood monitoring, and geology. Furthermore, it reviews some pre-processing techniques and commonly used fusion schemes.

Since a number of groups are actively working in the area of image fusion, the meaning and taxonomy of the terms differ from one group to another. Establishing common terms of reference helps the scientific community express its ideas in the same words to industry and to other collaborating communities. Wald presented a report on the effort to establish such a lexicon for data fusion in remote sensing, carried out by the European Association of Remote Sensing Laboratories (EARSeL) and the French Society for Electricity and Electronics (SEE) [180].

According to this definition, data fusion is a framework containing means and tools for the alliance or combination of data originating from different sensors. While fusion aims at obtaining information of greater quality, the quality itself is associated with the application. The definition of the word fusion has been compared with those of integration, merging, and combination; it has been suggested that these terms are more general and have a much wider scope than fusion. It has also been argued that the term pixel does not have a correct interpretation, as the pixel is merely a support for the information or measurement; the suggested alternatives are the terms signal or measurement to describe the level of fusion. In this monograph, however, we use the term pixel-level for the same notion, as it has been adopted by a large community.

Now, let us define a generalized model for basic pixel-based image fusion. Let I1 and I2 be two images of dimensions (X × Y) pixels having the same spatial resolution. Then the resultant fused image F is given by Eq. (2.1).

F(x, y) = w1(x, y) I1(x, y) + w2(x, y) I2(x, y) + C.    (2.1)
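As a concrete illustration, Eq. (2.1) can be sketched in Python with NumPy. The function name, the uniform image values, and the particular choice of weights below are illustrative assumptions, not part of the original text; the sketch only demonstrates the per-pixel additive combination.

```python
import numpy as np

def fuse_images(I1, I2, w1, w2, C=0.0):
    """Pixel-level additive fusion following Eq. (2.1):
    F(x, y) = w1(x, y) I1(x, y) + w2(x, y) I2(x, y) + C.
    All arrays must share the same (X, Y) shape."""
    assert I1.shape == I2.shape == w1.shape == w2.shape
    return w1 * I1 + w2 * I2 + C

# Illustrative example: two constant 4x4 images.
I1 = np.full((4, 4), 100.0)
I2 = np.full((4, 4), 200.0)
w1 = np.full((4, 4), 0.5)   # non-negative fusion weights
w2 = 1.0 - w1               # normalized: w1 + w2 = 1 at every pixel
F = fuse_images(I1, I2, w1, w2)
print(F[0, 0])  # 150.0
```

With equal, normalized weights the fused pixel is simply the mean of the two input pixels; spatially varying weight maps allow the relative importance of each image to change from pixel to pixel, as discussed below.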

The quantities w1 and w2 indicate the relative importance assigned to the corresponding pixel at (x, y) of each image; these are known as the fusion weights, or simply weights. As we want the fused image to be composed of the constituent images I1 and I2, we are essentially looking for an additive combination of these images. Ideally, the fusion weights should also be non-negative. Additionally, the weights are normalized, i.e., the sum of all the weights at any given spatial location (x, y)