6.4. Combination in a Bayesian framework
In the Bayesian model, fusion can be achieved in equivalent ways on two levels:
- either at the modeling level, where we calculate probabilities of the form:
\[
p(x \in C_i \mid I_1, \ldots, I_l), \qquad [6.8]
\]
using Bayes' rule:
\[
p(x \in C_i \mid I_1, \ldots, I_l) = \frac{p(I_1, \ldots, I_l \mid x \in C_i)\, p(x \in C_i)}{p(I_1, \ldots, I_l)}, \qquad [6.9]
\]
where the different terms are estimated by learning;
- or from Bayes' rule itself, where the information provided by each sensor updates the estimate of x obtained from the previous sensors (this is the only usable form if the elements of information become available one after the other rather than simultaneously; a numerical sketch of this sequential update is given after the formula):
\[
p(x \in C_i \mid I_1, \ldots, I_l) = \frac{p(I_1 \mid x \in C_i)\, p(I_2 \mid x \in C_i, I_1) \cdots p(I_l \mid x \in C_i, I_1, \ldots, I_{l-1})\, p(x \in C_i)}{p(I_1)\, p(I_2 \mid I_1) \cdots p(I_l \mid I_1, \ldots, I_{l-1})}.
\]
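To make the sequential form concrete, here is a minimal sketch in Python. It assumes a discrete set of classes and, for simplicity, conditional independence of the sensors given the class (so that p(I_j | x ∈ C_i, I_1, ..., I_{j-1}) reduces to p(I_j | x ∈ C_i), anticipating the independence hypothesis discussed below); the function name and likelihood values are hypothetical, not from the text.

```python
# Sequential Bayesian fusion: each sensor's likelihoods update the
# posterior obtained from the previous sensors. Classes are discrete;
# all numerical values are hypothetical, for illustration only.

def sequential_update(prior, likelihoods_per_sensor):
    """prior: list of p(x in C_i) over the classes;
    likelihoods_per_sensor: for each sensor j, a list of
    p(I_j | x in C_i), assuming conditional independence given the
    class (the simplification leading to equation [6.10])."""
    posterior = list(prior)
    for likelihoods in likelihoods_per_sensor:
        # Numerator of Bayes' rule: likelihood times the current "prior".
        posterior = [l * p for l, p in zip(likelihoods, posterior)]
        # The denominator p(I_j | I_1, ..., I_{j-1}) is just the
        # normalizing constant, so we renormalize over the classes.
        z = sum(posterior)
        posterior = [p / z for p in posterior]
    return posterior

# Two classes, two sensors (hypothetical values).
prior = [0.5, 0.5]
sensor_likelihoods = [[0.8, 0.3],   # p(I_1 | x in C_1), p(I_1 | x in C_2)
                      [0.6, 0.4]]   # p(I_2 | x in C_1), p(I_2 | x in C_2)
print(sequential_update(prior, sensor_likelihoods))  # -> [0.8, 0.2]
```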
Very often, because of the complexity of learning using several sensors and the difficulty of gathering enough statistics, these equations are simplified under the independence hypothesis. Again, criteria have been suggested for verifying the validity of these hypotheses. The previous formulae then become:
\[
p(x \in C_i \mid I_1, \ldots, I_l) = \frac{\prod_{j=1}^{l} p(I_j \mid x \in C_i)\, p(x \in C_i)}{p(I_1, \ldots, I_l)}. \qquad [6.10]
\]
This equation clearly shows the combination of information as a product, hence a conjunctive fusion. It is worth noting that the a priori probability plays exactly the same role in the combination as each of the sources, since it too enters through a product.
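As a sketch of equation [6.10] with hypothetical likelihood values, the following shows that the prior vector can be treated as just one more factor in the product:

```python
from math import prod

# Conjunctive fusion under the independence hypothesis ([6.10]):
# the unnormalized posterior is a pure product in which the prior
# p(x in C_i) enters exactly like one more source. Values are
# hypothetical, for illustration only.

def product_fusion(prior, likelihoods_per_sensor):
    scores = [prod(l[i] for l in likelihoods_per_sensor) * prior[i]
              for i in range(len(prior))]
    z = sum(scores)                      # plays the role of p(I_1, ..., I_l)
    return [s / z for s in scores]

prior = [0.5, 0.5]
likelihoods = [[0.8, 0.3], [0.6, 0.4]]
print(product_fusion(prior, likelihoods))  # -> [0.8, 0.2]
```

Under the independence hypothesis, this batch product yields the same posterior as the sequential update sketched earlier, illustrating the equivalence of the two levels stated at the beginning of section 6.4.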
6.5. Combination as an estimation problem
Another way of viewing probabilistic fusion is to consider that each source yields a probability (of belonging to a class, for example) and that fusion combines these probabilities in order to find the overall probability of belonging to