these two complementary sources enhance the semantic capability of the surveillance
systems.
The medical community has greatly benefited from the feature enhancement
characteristics of image fusion. Medical diagnosis can be improved through the
complementary information provided by multimodal images such as computed
tomography (CT), magnetic resonance imaging (MRI), and positron emission
tomography (PET). Fusion helps enhance features that cannot be detected from any
single image, and thus improves the reliability of decisions based on the
composite data [45].
Another objective of fusion is to reduce uncertainty through redundancy.
Multiple sensors provide redundant information about the scene, albeit with
different fidelity. The fusion of redundant information reduces the overall
uncertainty and leads to a compact representation. Thus, fusion can also be
exploited for decision validation.
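As a minimal illustration of this idea (the scenario and numbers below are assumptions for demonstration, not taken from the text), averaging redundant noisy observations of the same scene from several sensors of differing fidelity reduces the overall error:

```python
import numpy as np

rng = np.random.default_rng(0)

scene = np.full((64, 64), 100.0)          # true (unknown) scene intensity
n_sensors = 8

# Each sensor observes the scene with independent zero-mean noise
# of a different fidelity (standard deviation).
sigmas = np.linspace(5.0, 15.0, n_sensors)
observations = [scene + rng.normal(0.0, s, scene.shape) for s in sigmas]

# Simple redundancy-based fusion: average the observations.
fused = np.mean(observations, axis=0)

single_err = np.std(observations[0] - scene)   # error of the best single sensor
fused_err = np.std(fused - scene)              # error after fusion

print(f"single-sensor RMS error: {single_err:.2f}")
print(f"fused RMS error:         {fused_err:.2f}")
```

With independent noise, averaging N observations reduces the noise variance, so the fused error comes out well below that of any single sensor.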
Through image fusion, we seek to obtain an effective way of integrating the visual
information provided by different sensors. The redundant data should be removed
without any significant or visible loss to the useful spatial details. In a routine case,
the human observer has to go through the sequence of images, or needs to view all
the images simultaneously in order to fully comprehend the features of the scene, and
to understand their spatial correlation within the image and across the sequence of
images. Fusion systems enable one to consider only a single image that preserves most
of the spatial characteristics of the input. Having to consider only one displayed image
at any one time significantly reduces the workload of the operator. Also, a human
analyst cannot reliably combine the information from a large number of images
by seeing them independently. An enhanced visualization of the scene with more
accurate and reliable information is also one of the important objectives of image
fusion. Furthermore, fusion systems considerably reduce the computation time and
the storage volume for further processing.
2.1.1 Classification of Fusion Techniques
Since a large number of fusion techniques have already been developed, the classifi-
cation of techniques helps in understanding the concepts related to fusion in a better
manner. One may classify the fusion techniques in various ways. We discuss some
of the fusion categories here.
Based on domain: A fusion algorithm can operate on the spatial data, i.e.,
directly on the pixel intensities, to produce the final fusion result in the
spatial domain itself. Alternatively, using a transform such as the Fourier
transform, one may map the set of input images into the frequency domain. The
fusion algorithm then processes the frequency-domain data to produce the fusion
result in that domain. This result requires an inverse transformation, such as
the inverse Fourier transform, to obtain the fused image.
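The two routes can be sketched as follows. This is a minimal sketch using an assumed averaging rule (not a specific algorithm from the text): fuse two images either directly in the spatial domain, or by transforming to the frequency domain, combining coefficients, and inverting:

```python
import numpy as np

rng = np.random.default_rng(1)
img_a = rng.random((32, 32))
img_b = rng.random((32, 32))

# Spatial-domain fusion: combine pixel intensities directly.
fused_spatial = 0.5 * (img_a + img_b)

# Frequency-domain fusion: transform, combine coefficients, invert.
F_a = np.fft.fft2(img_a)
F_b = np.fft.fft2(img_b)
F_fused = 0.5 * (F_a + F_b)                  # same averaging rule, on spectra
fused_freq = np.real(np.fft.ifft2(F_fused))  # inverse transform -> fused image

print(np.allclose(fused_spatial, fused_freq))  # → True
```

Because the Fourier transform is linear, a linear fusion rule gives the same result in either domain; transform-domain methods become genuinely different when the combination rule is nonlinear, e.g. keeping the per-coefficient maximum.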