Image Processing Reference
be found for generalized data fusion as well. Let us take a closer look at the fusion
categories based on the level of representation:
1. Pixel (or signal) level,
2. Feature (or region) level, and
3. Decision (or symbolic) level.
Data in general can be analyzed and fused at the signal level, which is the most basic
and fundamental level of understanding and processing. Pixel-level (or pixel-based)
fusion, regarded as the counterpart of signal-level operations in the field of data
fusion, is the lowest level of image fusion. Images from multiple sensors capture
their observations in the form of pixels, which are then combined to produce a single
output image. Thus, pixel-level fusion algorithms operate directly on the raw data.
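To make the idea concrete, the following is a minimal sketch of a pixel-level fusion rule, namely pixel-wise (weighted) averaging of co-registered images. The function name and the tiny 2×2 "sensor" images are illustrative assumptions; practical pixel-level methods typically use multiresolution transforms or variational schemes rather than plain averaging.

```python
import numpy as np

def pixel_fusion_average(images, weights=None):
    """Fuse co-registered images by (weighted) pixel-wise averaging.

    A deliberately simple pixel-level rule: every output pixel is a
    convex combination of the corresponding input pixels.
    """
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    if weights is None:
        # Equal weights by default (plain averaging)
        weights = np.full(len(images), 1.0 / len(images))
    # Reshape weights so they broadcast over the spatial dimensions
    weights = np.asarray(weights, dtype=float).reshape(-1, *([1] * (stack.ndim - 1)))
    return (stack * weights).sum(axis=0)

# Two hypothetical 2x2 observations of the same scene
a = np.array([[0.0, 1.0], [2.0, 3.0]])
b = np.array([[4.0, 5.0], [6.0, 7.0]])
fused = pixel_fusion_average([a, b])  # pixel-wise mean of a and b
```

Because the rule operates on raw pixel values, it presupposes exact spatial alignment of the inputs, as discussed later in connection with image registration.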
Feature level fusion requires feature extraction from input images with the use
of advanced image processing operations such as region characterization, seg-
mentation, and morphological operations to locate the features of interest. The
choice of features plays an important role here, which is primarily decided by the
end application. The regions or features are represented using one or more sets of
descriptors. Multiple sets of such descriptors provide complementary information,
which is then combined into a composite set of features. These techniques
are less sensitive to pixel-level noise.
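The combination of descriptor sets described above can be sketched as follows. A minimal and common fusion rule is simple concatenation of per-region descriptors from different sources; the descriptor names (texture statistics, shape moments) are hypothetical placeholders for whatever features the end application dictates.

```python
import numpy as np

def fuse_feature_vectors(descriptor_sets):
    """Form a composite feature vector for one region by concatenating
    descriptor sets extracted from multiple sources.

    Concatenation is the simplest composite rule; weighting or
    dimensionality reduction may follow in practice.
    """
    return np.concatenate([np.asarray(d, dtype=float) for d in descriptor_sets])

# Hypothetical descriptors for one detected region:
# texture statistics from sensor A, shape moments from sensor B
texture = [0.4, 0.9, 0.1]
shape = [2.5, 0.3]
composite = fuse_feature_vectors([texture, shape])
```

The composite vector then feeds the downstream task (e.g., classification), which is where the complementary nature of the sources pays off.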
In decision-level fusion, the input images and/or feature vectors are fed to a
classification system that assigns each detected object to a particular class
(known as the decision) from a set of pre-defined classes. Decision fusion
combines the available information to maximize the probability of correctly
classifying the objects in the scene, typically using statistical tools such as
Bayesian inference.
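As a small illustration of Bayesian decision fusion, the sketch below combines per-classifier class posteriors for a single object with the naive-Bayes product rule under an assumed equal prior, then selects the class with the highest fused probability. The classifier scores are made-up numbers for illustration only.

```python
import numpy as np

def bayes_decision_fusion(posteriors):
    """Fuse class posteriors from several classifiers for one object.

    Uses the naive-Bayes product rule (classifiers assumed independent,
    equal class priors): multiply the posteriors class-wise, renormalize,
    and return the fused distribution and the winning class index.
    """
    p = np.prod(np.asarray(posteriors, dtype=float), axis=0)
    p /= p.sum()  # renormalize so the fused values form a distribution
    return p, int(np.argmax(p))

# Two hypothetical classifiers scoring three classes for one object
c1 = [0.6, 0.3, 0.1]
c2 = [0.5, 0.2, 0.3]
fused, label = bayes_decision_fusion([c1, c2])
```

The product rule rewards classes on which the individual classifiers agree, which is one way decision fusion raises the probability of correct classification.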
Figure 2.1 illustrates the relationship among all three levels of fusion according to
their processing hierarchy.
Regardless of the fusion categorization, most of these fusion techniques require the
set of input images to be spatially aligned, i.e., to represent exactly the same scene.
This process of alignment is referred to as image registration, which is quite a mature
research field in itself. A discussion of the different methods of image registration
is beyond the scope of this monograph. However, throughout this monograph, the
hyperspectral data is assumed to be co-georegistered, i.e., the spectral bands of the
data depict exactly the same area, which is quite a common assumption in image fusion.

2.1.2 Pixel-Level Fusion Techniques
This monograph discusses pixel-based fusion techniques where the final result of
fusion is intended for human observation. As stated before, pixel-level fusion tech-