The bilateral filter has been used in a variety of applications. To name a few, it
has been used for classical problems such as deblurring from blurred/noisy image
pairs [200], super-resolution [199], and optical flow estimation [101]. A 3-D mesh
denoising application of this filter has been described in [59]. A modified
version, the separable bilateral filter, has been proposed for pre-processing of video
data in [134]. Bennett and McMillan have also adopted the bilateral filter for video
enhancement [13]. Jiang et al. have discussed medical imaging applications of the bilateral
filter [81]. The bilateral filter has also received considerable attention in the emerging
field of computational photography. Durand and Dorsey [54] have used the bilateral filter
to split images into detail and base layers in order to manipulate their contrast for the display
of high dynamic range images. Raman and Chaudhuri [146] have proposed bilateral
filtering-based extraction of weak textures from images, which are in turn used
to define the fusion weights. In [133], combining information from flash/no-flash
photography using the bilateral filter has been discussed.
3.4 Bilateral Filtering-Based Image Fusion
Let us consider a set of hyperspectral bands. We want to fuse these bands to generate
a high-contrast resultant image for visualization. We now explain the process of
combining the bands with the help of an edge-preserving filter. Here we illustrate the
fusion technique using a bilateral filter.
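Before turning to the fusion itself, the edge-preserving behavior of the bilateral filter can be illustrated with a brute-force sketch. This is a generic textbook formulation, not the implementation used in the cited works; the function name and the parameters `sigma_s`, `sigma_r`, and `radius` are illustrative choices.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Brute-force bilateral filter for a 2-D grayscale image in [0, 1].

    Each output pixel is a normalized weighted average of its neighborhood,
    where the weight combines spatial closeness (controlled by sigma_s) and
    intensity similarity (controlled by sigma_r), so that pixels across a
    strong edge contribute little and the edge is preserved.
    """
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    # Spatial (domain) kernel, fixed for all pixels.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    padded = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: penalizes intensity difference from the center.
            range_w = np.exp(-(patch - img[i, j])**2 / (2.0 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

With a small `sigma_r`, a pixel just on the dark side of a step edge assigns near-zero weight to its bright neighbors, so the filter smooths flat regions while leaving the edge essentially intact.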
The primary aim of image fusion is to selectively merge the maximum possible
features from the source images to form a single image. Hyperspectral image bands
are the result of sampling a continuous spectrum at narrow wavelength intervals,
where the nominal bandwidth of a single band is 10 nm (e.g., AVIRIS). The spectral
response of the scene varies gradually over the spectrum, and thus, the successive
bands in the hyperspectral image have a significant correlation. Therefore, for an
efficient fusion, we should be able to extract the specific information contained in
a particular band. When compared with the image compositing process, our task is
analogous to obtaining the mattes, popularly known as α-mattes, for each of the source
images. The mattes define the regions in the image, and the proportion in which they
should be mixed. These mattes act as the fusion weights to generate the final image
having the desired features from the input. The final image F can be represented as
a linear combination of input images I_k, k = 1 to K, as shown in Eq. (3.4).
$$F(x, y) = \sum_{k=1}^{K} \alpha_k(x, y) \, I_k(x, y), \quad \forall (x, y), \tag{3.4}$$
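The per-pixel linear combination of Eq. (3.4) can be sketched in a few lines of NumPy. This is a minimal illustration, not the book's implementation: the function name is hypothetical, and the α-mattes are assumed nonnegative and are normalized here so that they sum to one at every pixel.

```python
import numpy as np

def fuse_bands(bands, mattes):
    """Fuse K bands as in Eq. (3.4): F(x, y) = sum_k alpha_k(x, y) * I_k(x, y).

    bands, mattes: array-likes of shape (K, H, W). The mattes are
    normalized per pixel so the K fusion weights sum to one.
    """
    bands = np.asarray(bands, dtype=float)
    mattes = np.asarray(mattes, dtype=float)
    # Per-pixel normalization of the alpha-mattes across the K bands.
    mattes = mattes / np.sum(mattes, axis=0, keepdims=True)
    # Weighted sum over the band axis gives the fused image F(x, y).
    return np.sum(mattes * bands, axis=0)
```

For two constant bands with values 0.2 and 0.8 and raw weights in the ratio 1:3, the normalized weights are 0.25 and 0.75, giving a fused value of 0.25 · 0.2 + 0.75 · 0.8 = 0.65 at every pixel.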
where α_k(x, y) is the α-matte for the pixel at location (x, y) in the k-th observation.
The techniques of automatic compositing employ a matte generating function as a
function of the input data itself. We have developed the fusion strategy by choosing the
weights in the spirit of such α-mattes. This methodology is similar to the compositing