of the underlying true scene [164]. The true scene is obtained using a MAP estimator.
Yang and Blum have proposed modifications to this model in a multi-scale decomposed framework [196]. Kumar [94] and Kumar and Dass [95] have also demonstrated the effectiveness of the Bayesian technique for generalized image fusion. For the fusion problem, the images to be fused act as observations, and the fused image is the quantity of interest. The two are related through an appropriate image formation model, which defines the corresponding likelihood function. We now explain the image formation model in detail.
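In symbols (our notation, not taken from [164]), with observations $I_1, \dots, I_K$ and fused image $F$, the MAP estimate maximizes the posterior formed from this likelihood and a prior on $F$:

$$\hat{F} \;=\; \arg\max_{F}\, p(F \mid I_1, \dots, I_K) \;=\; \arg\max_{F}\, p(I_1, \dots, I_K \mid F)\, p(F).$$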
5.3 Model of Image Formation
In the past decade, Sharma et al. introduced a statistical technique for generalized image fusion. Their technique consists of an image formation model, perturbation noise modeled with a Gaussian distribution, and a Bayesian procedure to solve the fusion problem [164]. This model of image formation is given by Eq. (5.4):
$$I_k(x, y) \;=\; \beta_k(x, y)\, F(x, y) \;+\; \eta_k(x, y), \qquad (5.4)$$
where $I_k(x, y)$ denotes the observation of a true scene pixel $F(x, y)$ captured by the sensor $k$. The term $\beta_k(x, y)$ is known as the sensor selectivity factor, which determines how well the given observation has captured the true scene, while $\eta_k(x, y)$ indicates the noise or disturbance component. The maximum value that $\beta_k(x, y)$ can achieve is unity, which indicates that the particular pixel is exactly the same as it should be in the fused image (in the absence of noise). At the other extreme, $\beta_k(x, y)$ can have a value of zero, when the pixel is essentially pure noise without any contribution towards the final result.
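As a concrete illustration, the following sketch simulates Eq. (5.4) for a few sensors. The true scene, the number of sensors, the noise level, and the uniform random choice of $\beta_k$ are all hypothetical placeholders used purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

F = rng.random((64, 64))          # hypothetical true scene, values in [0, 1]
K, sigma = 3, 0.05                # assumed number of sensors and noise std

observations = []
for k in range(K):
    beta_k = rng.uniform(0.0, 1.0, size=F.shape)  # per-pixel sensor selectivity
    eta_k = rng.normal(0.0, sigma, size=F.shape)  # Gaussian perturbation noise
    observations.append(beta_k * F + eta_k)       # Eq. (5.4)
```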
Yang and Blum have proposed a multiscale transform version of this fusion technique that can also handle non-Gaussian disturbances [196]. In their model the sensor selectivity factor $\beta_k(x, y)$ can assume values of 0 or $\pm 1$ only. The negative value indicates a particular case of polarity reversal for IR images [196]. This discrete set of values brings out only two possibilities: either the sensor can see the object, or it fails to see it. The $\beta$ factor needs to be calculated from the available data only. In [196], the value of $\beta$ has been calculated by minimizing a function related to the sensor noise.
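The sketch below illustrates the idea of a discrete selectivity factor in a simplified per-pixel form: given a reference estimate of the true scene (here, hypothetically, the mean of the observations), each pixel keeps the $\beta \in \{0, +1, -1\}$ that leaves the smallest residual. This is only a toy stand-in; the actual procedure in [196] works in the multiscale transform domain and minimizes a function of the sensor noise.

```python
import numpy as np

def discrete_beta(I_k, F_est):
    """Choose beta in {0, +1, -1} per pixel by least residual (toy sketch)."""
    # Candidate selectivity values: sensor misses the object (0), sees it
    # (+1), or sees it with polarity reversal (-1).
    candidates = np.array([0.0, 1.0, -1.0])
    # Residual magnitude for every candidate beta, shape (3, H, W).
    residuals = np.abs(I_k[None, ...] - candidates[:, None, None] * F_est[None, ...])
    # For each pixel, keep the candidate with the smallest residual.
    return candidates[np.argmin(residuals, axis=0)]

# Hypothetical usage, continuing the simulation above:
# F_est = np.mean(np.stack(observations), axis=0)
# beta_1 = discrete_beta(observations[0], F_est)
```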
Fusion using the statistical model of image formation has been enhanced in [94, 95] by allowing the sensor selectivity factor $\beta$ to take continuous values in the range $[0, 1]$. However, the value of $\beta$ has been assumed to be constant over smaller blocks of an image [95, 164]. These values of $\beta$ have been computed from the principal eigenvector of these smaller blocks. The values of $\beta$, therefore, are constant over image blocks, but can be totally discontinuous across adjacent blocks due to the data-dependent nature of eigenvectors. Xu et al. have modeled the sensor selectivity factor using a Markov random field (MRF); however, it can take values from the discrete set $\{0, 1\}$ only [192, 193]. They have also modeled the fused image as an MRF, which acts as the prior for the MAP formulation.
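A minimal sketch of the block-wise eigenvector idea follows. It assumes the same block, vectorized, from each of the $K$ sensors forms the rows of a $K \times B$ matrix; the use of the inter-sensor second-moment matrix and the final normalization are our assumptions for illustration, not necessarily the exact construction of [94, 95].

```python
import numpy as np

def block_betas(blocks):
    """Estimate per-sensor selectivity factors for one image block.

    blocks : (K, B) array -- the same block, vectorized, from each
             of the K observed images.
    Returns K non-negative weights taken from the principal eigenvector
    of the K x K inter-sensor second-moment matrix, normalized to sum to 1.
    """
    # K x K second-moment matrix over the block pixels.
    C = blocks @ blocks.T / blocks.shape[1]
    # eigh returns eigenvalues in ascending order for the symmetric C;
    # the last column of V is the principal eigenvector.
    _, V = np.linalg.eigh(C)
    v = np.abs(V[:, -1])              # resolve the sign ambiguity
    return v / v.sum()                # normalize the betas

# Because the betas are computed from per-block data alone, adjacent
# blocks can receive quite different weights, which is the block-boundary
# discontinuity noted in the text above.
```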