In fusion, we usually work with highly redundant sources to confirm an uncertain decision, and with complementary images to broaden the range of possible decisions. Complementary sources can lead to either conflicting or consensual decisions.
In image processing, the concept of entropy has been extended to characterize not only how spread out the measurements are in the measurement space, but also their spatial consistency, by taking into account the occurrence probabilities of certain pixel configurations, in the context of either classification [MAI 94, MAI 96] or Markov fields [TUP 00, VOL 95].
The concept of overall entropy is not always well suited to fusion problems, and entropies conditional to the classes to be recognized are often preferable: they achieve a finer analysis of the information that each source provides for each class, and are therefore better suited to problems in which a source is better for certain classes and worse for others. Although the formal definition of such concepts poses no particular difficulty, they are rarely used in fusion and would probably deserve to be investigated further.
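As an illustration, and with notation of our own rather than the chapter's, for a source described by a discrete feature $f_j$ the entropy conditional to class $C_i$ could be written

$$H(f_j \mid C_i) = -\sum_{v} p\big(f_j(x) = v \mid x \in C_i\big) \log p\big(f_j(x) = v \mid x \in C_i\big),$$

a quantity that is low when the source answers nearly unambiguously for class $C_i$ and high when it is of little use for that class, which is precisely the per-class discrimination discussed above.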
6.3. Modeling and estimation
The most commonly used theory in the literature is by far probability theory, associated with Bayesian decision theory [DUD 73]. It models information as a conditional probability, for example the probability that a pixel belongs to a particular class, given the available images. Thus, the measurement introduced in section 1.5 can be written as:
$M_i(x) = p(x \in C_i \mid I_j)$. [6.6]
This probability is computed from characteristics $f_j(x)$ of the information extracted from the sources. With images, for example, these can consist, in the simplest case, of the gray level of the pixel under consideration, or of more complex information requiring preliminary processing. Equation [6.6] then no longer depends on the entire source $I_j$ and is written more simply as:
$M_i(x) = p(x \in C_i \mid f_j(x))$. [6.7]
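To make the link with the estimation discussed next explicit, here is a minimal sketch of how the posterior in equation [6.7] unfolds under Bayes' rule, assuming (our assumption, not stated here) that the classes $C_1, \ldots, C_n$ are exhaustive and mutually exclusive:

$$p\big(x \in C_i \mid f_j(x)\big) = \frac{p\big(f_j(x) \mid x \in C_i\big)\, p(x \in C_i)}{\sum_{k=1}^{n} p\big(f_j(x) \mid x \in C_k\big)\, p(x \in C_k)}.$$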
In signal and image processing, in the absence of strong functional models for describing the observed phenomena, the probabilities $p(f_j(x) \mid x \in C_i)$, or more generally $p(I_j \mid x \in C_i)$ (which represents the probability, conditional to the class $C_i$, of the information provided by the source $I_j$), are learned from frequencies of occurrence on test areas (or by learning the parameters of a given law on these areas), which gives us the probabilities in equations [6.6] and [6.7] by applying Bayes' rule.
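As a concrete sketch of this estimation procedure, the following Python snippet (our own illustration; the gray-level feature, array names, and uniform priors are assumptions, not from the text) builds histogram-based estimates of $p(f_j(x) \mid x \in C_i)$ from labeled test areas and applies Bayes' rule to obtain the posterior of equation [6.7]:

import numpy as np

# Illustrative sketch only: the gray-level feature, array names and
# uniform priors are our assumptions, not taken from the text.

def estimate_likelihoods(train_values, train_labels, n_classes, n_bins=256):
    # Histogram estimate of p(f_j(x) | x in C_i): frequencies of
    # occurrence of each gray level over labeled test areas.
    likelihoods = np.zeros((n_classes, n_bins))
    for i in range(n_classes):
        values = train_values[train_labels == i]
        counts = np.bincount(values, minlength=n_bins).astype(float)
        likelihoods[i] = counts / counts.sum()
    return likelihoods

def posterior(value, likelihoods, priors):
    # Bayes' rule: p(x in C_i | f) is proportional to p(f | C_i) p(C_i).
    joint = likelihoods[:, value] * priors
    return joint / joint.sum()

# Usage with synthetic gray levels: a dark class and a bright class.
rng = np.random.default_rng(0)
values = np.concatenate([rng.integers(0, 100, 500),    # class 0, dark
                         rng.integers(80, 256, 500)])  # class 1, bright
labels = np.repeat([0, 1], 500)
L = estimate_likelihoods(values, labels, n_classes=2)
print(posterior(90, L, priors=np.array([0.5, 0.5])))   # overlapping gray level

A parametric alternative, also mentioned above, would fit a given law (for instance a Gaussian) to each class on the test areas instead of using the raw histogram.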