We expect this factor to be high for pixels whose intensity is far from both extremes of the dynamic range. For example, if we are working with normalized data whose gray values range from 0 to 1, the values for pixel intensities close to 0.50 should be higher. To quantify this component, we assign weights depending upon the distance of the gray value of the pixel from the middle value of 0.50 for the normalized data. We evaluate this distance of the gray value of a pixel from 0.50 over a Gaussian curve having a fixed spread parameter σ_β. The weights thus decrease with the squared difference between the intensity of the pixel and the mid-intensity value (0.50 in the case of normalized data). A similar quality measure has been employed for the fusion of multi-exposure images in [89, 113]. Equation (5.6) provides the evaluation of the first quality measure Q_1 over an observation I_k(x, y).
Q_1(I_k(x, y)) \equiv \exp\!\left(-\frac{(I_k(x, y) - 0.50)^2}{2\sigma_\beta^2}\right).    (5.6)
The spread (or variance) parameter σ_β controls the width of the Gaussian, and thus the relative quality assigned to the pixel. This parameter can be selected depending upon the requirements of the result. For very small values of σ_β, only the pixels having gray values close to one half will contribute towards the fusion result. As σ_β increases, more pixels get incorporated into the fusion process.
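As a concrete illustration of Eq. (5.6), the following sketch computes Q_1 for a single normalized band. The function name quality_q1, the default spread sigma_beta = 0.2, and the random test band are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

def quality_q1(band, sigma_beta=0.2):
    """Quality measure Q1 of Eq. (5.6): a Gaussian weight centred at the
    mid-intensity 0.50 of a normalized band, with spread sigma_beta."""
    band = np.asarray(band, dtype=float)
    return np.exp(-((band - 0.50) ** 2) / (2.0 * sigma_beta ** 2))

# Effect of the spread parameter: a small sigma_beta rewards only
# near-mid-gray pixels, while a larger one lets more pixels contribute.
band = np.random.rand(128, 128)               # stand-in for one normalized band
q1_narrow = quality_q1(band, sigma_beta=0.1)
q1_wide = quality_q1(band, sigma_beta=0.5)
```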
A sharp image is not only visually appealing, but it also offers several practical
advantages. Sharp images facilitate easy interpretation by a human analyst. Also, a
large number of machine vision algorithms can produce better results when input
images have sharp features present. For example, remote sensing images contain an
agglomeration of a large number of small objects, such as residential areas, roads,
etc. For a better understanding and efficient further processing, these objects should
be clearly observable in the fused image. Sharp edges and boundaries enable quick
and accurate identification of such objects. The performance of several processing algorithms, especially those involving segmentation, improves significantly in the presence of strong, sharp features. As sharper regions carry more visual and useful content, we expect such regions to contribute more towards the fused image, which represents the true (ideal) scene. The sensor selectivity factor β for a sharp region in a particular band should therefore be higher. We apply a Laplacian filter to the pixels of the individual hyperspectral bands and measure the absolute value of the response, which provides the local sharpness. The Laplacian filter alone, however, yields zero output in uniform or homogeneous regions. To circumvent this problem, we define the second quality measure Q_2 by adding a small positive constant C to the output of the Laplacian filter, as follows:
Q_2(I_k(x, y)) \equiv \left|\nabla^2 (I_k(x, y))\right| + C,    (5.7)
where ∇² denotes the Laplacian operator and C is a small positive constant. When a pixel lies on a sharply defined object such as an edge, the first term in Eq. (5.7) is high, producing a high value of the corresponding quality measure Q_2. For
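To make Eq. (5.7) similarly concrete, here is a minimal sketch of the sharpness measure. The use of scipy.ndimage.laplace as the discrete Laplacian, the function name quality_q2, and the constant C = 1e-3 are illustrative assumptions rather than choices fixed by the text.

```python
import numpy as np
from scipy import ndimage

def quality_q2(band, C=1e-3):
    """Quality measure Q2 of Eq. (5.7): absolute response of the Laplacian
    filter plus a small positive constant C, so that homogeneous regions
    still receive a small non-zero quality value."""
    band = np.asarray(band, dtype=float)
    return np.abs(ndimage.laplace(band)) + C

band = np.random.rand(128, 128)   # stand-in for one normalized band
q2 = quality_q2(band)             # high along edges, close to C in flat regions
```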
 