is much smaller than the processing needed to compute the fusion weights. The computational savings in practice are, therefore, somewhat smaller.
The above discussed case of spectrally ordered data and the corresponding savings in computation assume a perfect modeling of the conditional entropy as a function of the spectral distance between the bands. In practice, there may be small deviations from the exponential model, which act as an additive noise. We now analyze this more interesting case of spectrally ordered hyperspectral data by augmenting the average conditional information with an additional term that corresponds to the perturbation by an additive noise having a uniform distribution. This is useful when the model defined by Eq. (4.5) is considered partly erroneous. Since the entropy involves an expectation operator, the corresponding quantity is a deterministic variable. We remove the expectation operator from H, and call the resulting quantity the average information. For a given realization of the image, H may now be treated as a random variable.
Theorem 4.2. If the average conditional information H(I_r | I_k) as defined in Theorem (4.1) includes a perturbation by a white additive noise uniformly distributed in [0, Δ], then the probability of selecting the band r after having selected the band k (with r > k) is given by

    min{1, 1 − (H(I_r)/Δ) (e^{−λ_R (r−k)} + κ − 1)}.
Proof: As defined, H(I_r | I_k) is given by,

    H(I_r | I_k) = H(I_r) (1 − e^{−λ_R (r−k)}) + z,    (4.8)

where z ∼ U[0, Δ], and typically Δ ≪ H(I_r). Substituting the band selection criterion, we get,

    H(I_r) (1 − e^{−λ_R (r−k)}) + z ≥ κ H(I_r)
    or, H(I_r) (1 − κ − e^{−λ_R (r−k)}) + z ≥ 0
    or, z ≥ H(I_r) (e^{−λ_R (r−k)} + κ − 1).    (4.9)

We denote υ = H(I_r) (e^{−λ_R (r−k)} + κ − 1). Then the probability of selecting the band is given by,

Prob.(band r is selected, given k is the reference band)

    = min{1, (1/Δ) ∫_υ^Δ dz} = min{1, (1/Δ) (Δ − υ)}
    = min{1, 1 − (H(I_r)/Δ) (e^{−λ_R (r−k)} + κ − 1)}.    (4.10)
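The closed-form probability of Theorem 4.2 can be sanity-checked against a direct Monte Carlo simulation of the noisy selection criterion H(I_r | I_k) ≥ κ H(I_r). The sketch below is only illustrative: the values of H(I_r), λ_R, κ, and Δ are hypothetical (not from the text), and the closed form is explicitly clamped at zero, which the min{1, ·} expression leaves implicit for υ > Δ.

```python
import math
import random

# Hypothetical parameter values, chosen only for illustration.
H_r = 8.0      # average information H(I_r) of band r
lam = 0.05     # decay constant lambda_R of the exponential model (Eq. 4.5)
kappa = 0.6    # band selection threshold
delta = 0.5    # noise support: z ~ U[0, delta], with delta << H(I_r)

def prob_select(r, k):
    """Closed-form selection probability from Eq. (4.10), clamped to [0, 1]."""
    upsilon = H_r * (math.exp(-lam * (r - k)) + kappa - 1.0)
    return min(1.0, max(0.0, 1.0 - upsilon / delta))

def prob_select_mc(r, k, trials=200_000):
    """Monte Carlo estimate: draw z and test H(I_r|I_k) >= kappa * H(I_r)."""
    hits = 0
    for _ in range(trials):
        z = random.uniform(0.0, delta)
        h_cond = H_r * (1.0 - math.exp(-lam * (r - k))) + z   # Eq. (4.8)
        if h_cond >= kappa * H_r:
            hits += 1
    return hits / trials

# Compare closed form and simulation across the transition region.
for r in (26, 27, 28):
    print(r, round(prob_select(r, 10), 3), round(prob_select_mc(r, 10), 3))
```

For spectral distances well below the deterministic threshold the probability is 0, well above it the probability is 1, and the uniform noise produces a linear ramp of width governed by Δ in between.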
The following corollaries may be deduced from Theorem (4.2).
Corollary 1: In the limit Δ → 0, Theorem (4.1) becomes a special case of Theorem (4.2).
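Corollary 1 can be illustrated numerically: as Δ shrinks, the probability ramp of Eq. (4.10) sharpens into the deterministic step implied by Theorem (4.1), located at the spectral distance where e^{−λ_R (r−k)} = 1 − κ. A minimal sketch, again with hypothetical parameter values:

```python
import math

H_r, lam, kappa = 8.0, 0.05, 0.6   # hypothetical illustration values

def prob_select(d, delta):
    """Selection probability of Eq. (4.10) as a function of distance d = r - k."""
    upsilon = H_r * (math.exp(-lam * d) + kappa - 1.0)
    return min(1.0, max(0.0, 1.0 - upsilon / delta))

# Deterministic threshold of Theorem (4.1): smallest d with e^{-lam*d} <= 1 - kappa.
d_star = math.log(1.0 / (1.0 - kappa)) / lam   # about 18.3 bands here

for delta in (2.0, 0.5, 0.01):
    ramp = [round(prob_select(d, delta), 2) for d in range(14, 23)]
    print(f"delta={delta}: {ramp}")
```

With a large Δ the transition from 0 to 1 is gradual; as Δ → 0 the curve collapses onto a unit step at d_star, recovering the deterministic band selection rule.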