Image Processing Reference
of times and with different integration times. They may also have different spatial
resolutions. All images within an individual set, however, must be geometrically
aligned and have the same spatial resolution, or pixel size (Wald 2002). Here the
term image comprises any information that is presented in a raster, or gridded,
format in two dimensions. The grid cell is called a pixel.
Methods and objectives of image fusion vary by application (Wald 2002).
A classical example is the classification process in environmental applications
(see Chapter 8). Several images of commensurate or non-commensurate
measurements, and possibly other information, are used as input to a classifier. In the
case of a supervised classification, a fusion algorithm is included in the
classification to produce an image of taxons and possibly another image of the related
accuracy (or plausibility, or probabilities, etc.). In the case of an unsupervised procedure,
the state vectors of the pixels are grouped based on the similarity of certain
properties. The unsupervised classification is usually an iterative fusion process
with successive refinements until a threshold is met. In either type of classification,
the original dimension of the information is reduced and, as such, the semantic level
of the fused product is typically higher than that of the original set of images.
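The unsupervised case described above can be sketched in code. The snippet below is a minimal illustration, not a prescribed algorithm from the text: it stacks several co-registered images into per-pixel state vectors, then groups those vectors by similarity with a simple k-means loop, refining the cluster centres iteratively until the change falls below a threshold. The function name, the choice of k-means, and all parameter names are illustrative assumptions.

```python
import numpy as np

def fuse_by_clustering(images, k=3, tol=1e-4, max_iter=100, seed=0):
    """Unsupervised fusion sketch: group pixel state vectors by similarity.

    `images` is a list of co-registered 2-D arrays on the same pixel grid.
    Each pixel's state vector stacks its values across all input images;
    cluster centres are refined until their total shift drops below `tol`.
    The result is a single label image of reduced dimension.
    """
    stack = np.stack(images, axis=-1)                 # (rows, cols, n_images)
    vectors = stack.reshape(-1, stack.shape[-1]).astype(float)
    rng = np.random.default_rng(seed)
    centres = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(max_iter):
        # assign each state vector to its nearest cluster centre
        d = np.linalg.norm(vectors[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # successive refinement of the centres
        new_centres = np.array([
            vectors[labels == j].mean(axis=0) if np.any(labels == j) else centres[j]
            for j in range(k)
        ])
        if np.linalg.norm(new_centres - centres) < tol:  # threshold met
            break
        centres = new_centres
    return labels.reshape(stack.shape[:2])
```

On two toy co-registered images whose pixels fall into two distinct populations, `fuse_by_clustering([a, b], k=2)` returns one label image in which each population receives its own class, illustrating how the fused product has a higher semantic level than the input radiances.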
Other approaches in image fusion exist. Some include the extraction of features from
each input image and then the fusion of features. For example, road maps can be
produced by fusing several sets of images, where the final product is a GIS layer of roads.
Other approaches utilize visual analysis and interpretation in the fusion with the aim of
creating a set of images of reduced dimension that contains all the information
of interest present in the original sets of images. Fusion may also be performed
to create new sets of images in various modalities with a better spatial resolution.
This chapter focuses on a few methods applied not only to images and imaging sensors
but also to gridded data and punctual measurements. These include: (1) encrustation of
images within another image; (2) synthesis of images based on the best spatial
resolution available in the original sets of images; and (3) fusion of images, gridded
data and punctual measurements. Each method is described in general below and
illustrated by an example. These methods call upon advanced mathematical tools
that are presented in the following section.
Different Forms of Data Fusion
Data fusion may be sub-divided into many domains. For example, the military
community uses the term “positional fusion” to denote aspects relevant to the
assessment of the state vector or “identity fusion” when establishing the identity
of the entities is at stake. If observations are provided by sensors and only by
sensors, one will use the term “sensor fusion”. “Image fusion” is a sub-class of
sensor fusion; here the observations are images. If the support of the information is
always a pixel, one may speak of “pixel fusion”. “Evidential fusion” means that the
underlying algorithms call upon evidence theory. Other terms commonly used are
“measurement fusion”, “signal fusion”, “feature fusion”, and “decision fusion”.