can lead in practice to inconsistencies, because the user cannot in this case take into account the entire network of probabilistic dependencies (this is the case for logic loops [PEA 86a]). Learning probability laws requires, in addition to the hypotheses, a significant amount of data. Typically, non-parametric learning of a multi-dimensional law from images or regions of limited size is not always appropriate, and we often resort to parametric models, which, in turn, require hypotheses about the forms of the laws.
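To make the contrast concrete, here is a minimal sketch in Python (with numpy) of the two estimation routes on a small one-dimensional sample; the data, the bin count, and the Gaussian form are illustrative assumptions, not prescriptions from the text:

import numpy as np

# Illustrative sample: gray levels from a small image region (assumed data).
rng = np.random.default_rng(0)
sample = rng.normal(loc=120.0, scale=15.0, size=200)

# Non-parametric route: a normalized histogram. With few data per bin it is
# noisy, and in several dimensions most bins would be empty.
hist, edges = np.histogram(sample, bins=32, range=(0.0, 255.0), density=True)

# Parametric route: hypothesize a Gaussian form and estimate its two
# parameters. Far fewer data suffice, at the price of the hypothesis
# made on the form of the law.
mu, sigma = sample.mean(), sample.std(ddof=1)
def gaussian_pdf(x):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))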
The estimation of a priori probabilities is often difficult and is of major importance in cases where little information is available (very flat distributions of the conditional probabilities). While, in image processing, conditional probabilities can often be estimated by learning based on occurrence frequencies, this is usually not the case for a priori probabilities. Evaluating them goes beyond the framework of frequentist probabilities and often requires more subjective concepts. Furthermore, Bayesian combination is constrained, as is modeling, by the probability axioms, and its use in practice often requires simplifying hypotheses (such as independence) that are rarely verified. Probabilistic and Bayesian theory combines the elements of information in a conjunctive way, using products of conditional probabilities, which in practice often leads to a collapse of the probabilities of events obtained from a long chain of inference.
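A minimal numerical sketch of this collapse, assuming for illustration that every link of the chain carries the same conditional probability (the value 0.9 is an arbitrary assumption):

# Conjunctive combination multiplies conditional probabilities, so the
# belief carried by a chain of n inference steps decays geometrically.
link_probability = 0.9  # assumed confidence of each step in the chain
for n in (1, 5, 10, 20):
    print(n, link_probability ** n)
# prints: 1 -> 0.9, 5 -> ~0.59, 10 -> ~0.35, 20 -> ~0.12:
# the chained product collapses even with confident individual links.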
The additivity constraint may be too strong for certain problems. Let us consider the example given by Smets [SME 78] in the field of medical diagnosis. If a symptom s is always present in patients with a pathology A and we observe this symptom s, then the probability for the patient to have A increases. The additivity constraint then imposes that the probability for the patient not to have A must decrease, even though there is no reason for it if the symptom s can also be observed in other pathologies (Hempel's paradox)⁵.
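The footnote below details the computation; as a quick numerical sketch in Python (the prior p(A) = 0.1 and the marginal p(s) = 0.5 are invented for illustration):

# With p(s|A) = 1, Bayes' rule can only raise the probability of A, and the
# additivity constraint then forces the probability of not-A to drop.
p_A = 0.1          # assumed prior probability of pathology A
p_s_given_A = 1.0  # the symptom is always present with A (from the example)
p_s = 0.5          # assumed marginal probability of observing s

p_A_given_s = p_s_given_A * p_A / p_s  # = 0.2, which is >= p_A = 0.1
p_not_A_given_s = 1.0 - p_A_given_s    # = 0.8, which is <= 1 - p_A = 0.9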
Applying Bayesian methods often requires considerable knowledge of the problem, and achieving good operating conditions implies additional considerations for
5. Because of the additivity constraint and Bayes' rule (equation [A.8]), if p(s | A) = 1, then p(A | s) = p(A)/p(s) and therefore p(A | s) ≥ p(A). Conversely, p(¬A | s) = 1 − p(A | s) and therefore p(¬A | s) ≤ p(¬A). Smets' argument refuting this inequality is debatable, but it can also be interpreted as follows: the additive probability model may be too simplistic in this case. In particular, the idea of the probability of a pathology A can make sense, whereas it is not certain that the probability of [¬A | s] does, because ¬A does not correspond to a single pathology, but instead to an infinite, poorly known, imprecise set, and it is difficult to claim that [¬A | s] is a well-defined, binary proposition that accurately represents reality. Thus, any model that leads to the conclusion p(¬A | s) ≤ p(¬A) can easily be disputed. We hope that this interpretation does not misrepresent Smets' ideas.