The use of the dual-tree complex wavelet transform (DT-CWT) to fuse images region by region has been discussed in [103].
The complex wavelet transform has also been used effectively for a medical application related to the fusion of CT and MRI images by Forster et al. [60]. De and Chanda have introduced morphological wavelets for the fusion of multi-focus images [47]. The images are first decomposed using a nonlinear wavelet constructed with morphological operations. The decomposition operators involve morphological dilation combined with downsampling, while the reconstruction operators include morphological erosion combined with the corresponding upsampling. However, such a decomposition is not invertible.
Later, these wavelet filters were shown to be useful for CT-MRI fusion in [195], where the fusion rule is based on the selection of the coefficient with the maximum absolute value.
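As a minimal illustration of this style of nonlinear decomposition and of the maximum-absolute-value selection rule, the following sketch builds one analysis level from grey-scale dilation and downsampling; the residual-based detail signal, the 3 × 3 structuring element, and the function names are illustrative assumptions rather than the exact operators of [47] or [195].

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morph_analysis(img, size=3):
    """One nonlinear analysis level: the approximation is a grey-scale dilation
    followed by downsampling; the detail signal is taken here as the residual
    with respect to an erosion-based reconstruction (illustrative choice)."""
    approx = grey_dilation(img, size=(size, size))[::2, ::2]
    # Upsample the approximation back to the input size before taking the residual.
    up = np.repeat(np.repeat(approx, 2, axis=0), 2, axis=1)[:img.shape[0], :img.shape[1]]
    detail = img - grey_erosion(up, size=(size, size))
    return approx, detail

def fuse_max_abs(c1, c2):
    """Per-position selection of the coefficient with the larger absolute value."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)
```

Applied to two registered source images, each image is decomposed with morph_analysis, the corresponding coefficient maps are combined with fuse_max_abs, and the fused image is synthesized by the (approximate) reconstruction operators.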
Multiwavelets are extensions of wavelets that have two or more scaling and wavelet functions. Wang et al. have proposed a discrete multiwavelet transform (DMWT)-based fusion technique in which the decomposed subbands are convolved with a feature-extractor matrix to select salient features of the input images [181].
Multi-resolution-based fusion has proved to be superior owing to its ability to capture information at different scales. During the last decade, several improvements have been proposed to capture features more efficiently. Scheunders has introduced the concept of the multi-scale fundamental form (MFF) representation, which provides a local measure of contrast for multivalued images [158]. In combination with a dyadic wavelet transform, it provides a measure of contrast in a multi-resolution framework for the description of edges. In [159], the authors have defined a fusion rule that selects the maximum of the MFF coefficients of the input images. Chen et al. have improved this technique by weighting the MFF structure to avoid the enlargement of the wavelet coefficients [35]. Mahmood and Scheunders have also applied an MFF-based strategy for the fusion of hyperspectral images [109].
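For reference, the construction behind the MFF can be summarized as follows; this is our own shorthand and may differ in detail from the notation of [158, 159]. For a multivalued image with bands $I_1, \ldots, I_n$, the (first) fundamental form at a pixel is the $2 \times 2$ matrix

\[
G(x,y) \;=\; \sum_{k=1}^{n} \nabla I_k(x,y)\, \nabla I_k(x,y)^{\mathsf{T}}
\;=\;
\begin{pmatrix} g_{11} & g_{12}\\ g_{12} & g_{22} \end{pmatrix},
\qquad
\lambda_{+} \;=\; \tfrac{1}{2}\!\left(g_{11}+g_{22}+\sqrt{(g_{11}-g_{22})^{2}+4g_{12}^{2}}\right),
\]

whose largest eigenvalue $\lambda_{+}$ acts as the local measure of contrast and whose dominant eigenvector gives the edge orientation. In the multi-scale version, the gradients are taken from the dyadic wavelet detail coefficients at each scale, so that $\lambda_{+}$ becomes a per-scale contrast map; a selection rule of the type used in [159] then keeps, at every location and scale, the coefficients of the source image with the larger $\lambda_{+}$.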
Petrovic and Xydeas have proposed a representation of the input images using gradient maps at each level of the decomposition pyramid, as these improve the reliability of feature selection [131, 132].
2-D separable transforms such as wavelets are not efficient in capturing the geometry of images, since the edges (or points of discontinuity) may lie along a smooth curve and thus may not be accurately captured by a piecewise linear approximation. Do and Vetterli developed the contourlet transform to capture the geometrical structure of images, such as edges [50]. The contourlet transform involves a multi-resolution, local, and directional image decomposition. Fusion techniques based on the multi-resolution contourlet transform have been proposed in [7, 114]. In [114], the authors first obtain directional image pyramids up to a certain number of levels (scales) using the contourlet decomposition. The low-frequency coefficients at the top of the image pyramids are fused using an average-based rule. At the remaining levels, the fusion rule selects the coefficients from the source image that has the higher energy in the local region.
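To make these two combination rules concrete, here is a minimal sketch that operates on generic subband arrays; the contourlet decomposition itself is assumed to be computed by some external implementation, and the 3 × 3 local window is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowpass(low_a, low_b):
    """Average-based rule for the low-frequency (coarsest) subband."""
    return 0.5 * (low_a + low_b)

def fuse_highpass(high_a, high_b, win=3):
    """Per-coefficient selection of the source whose local region has the
    higher energy (mean of squared coefficients over a win x win window)."""
    energy_a = uniform_filter(high_a ** 2, size=win)
    energy_b = uniform_filter(high_b ** 2, size=win)
    return np.where(energy_a >= energy_b, high_a, high_b)
```

The fused directional subbands and the fused low-frequency band are then passed to the inverse of the chosen decomposition to obtain the fused image.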
The fusion technique developed by Asmare et al. [7] also uses averaging for combining the low-frequency coefficients, while for the fusion of the high-frequency coefficients it applies a match-and-activity-based rule. The match and activity measures quantify the degree of similarity and of saliency in the input images, respectively. The output