1.3.2.2 Quantification
The probability measure was then developed in 1933 by Andrey Nikolaevich Kolmogorov, who used classical measure theory and assigned the measure 1 to the universal set. This is thought of as classical probability theory.
The classical probability theory has since become the dominant approach to
examine uncertainty and randomness. Extensive mathematical studies followed and
resulted in highly sophisticated theories. Its foundation rests on the definition of
probability space, which was Kolmogorov's big achievement. A probability space is a triplet (Ω, F, P). Here Ω is a countable event space containing all possible outcomes of a random event, and F is the so-called σ-algebra of Ω, representing all combinations of the outcomes from Ω. Its construction satisfies:
• It is not empty: ∅ ∈ F and Ω ∈ F.
• If a set A ∈ F, then its complement Aᶜ ∈ F.
• If sets A_1, A_2, … ∈ F, then ⋃_{i=1}^∞ A_i ∈ F and ⋂_{i=1}^∞ A_i ∈ F.
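For a finite collection of subsets, the three closure properties above can be checked mechanically; the sketch below is our own illustration (the helper name `is_sigma_algebra` and the example sets are not from the text), and it reduces countable unions and intersections to their pairwise versions, which suffices in the finite case.

```python
def is_sigma_algebra(omega, family):
    """Check the three closure axioms for a finite collection of subsets.

    `omega` is the outcome set; `family` is a set of frozensets.
    For finite families, closure under countable unions/intersections
    reduces to closure under pairwise unions/intersections.
    """
    omega = frozenset(omega)
    # Axiom 1: contains the empty set and the universal set.
    if frozenset() not in family or omega not in family:
        return False
    # Axiom 2: closed under complement.
    if any(omega - a not in family for a in family):
        return False
    # Axiom 3: closed under (finite) unions and intersections.
    for a in family:
        for b in family:
            if a | b not in family or a & b not in family:
                return False
    return True

omega = {1, 2, 3, 4}
f = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), frozenset(omega)}
print(is_sigma_algebra(omega, f))                               # True
print(is_sigma_algebra(omega, {frozenset(), frozenset({1})}))   # False: misses Ω and the complement of {1}
```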
P is the well-known probability measure, used to assign a real number, i.e., the probability, to the occurrence of any outcomes of the events (from Ω) and their potential combinations (from F). It satisfies the following important and well-known principles.
1. 0 ≤ P(A) ≤ 1, for any A ∈ F.
2. P(Ω) = 1. That is, the probabilities of all outcomes add up to one.
3. For A_1, A_2, ⋯ ∈ F with A_i ∩ A_j = ∅ for any i ≠ j,

   P(⋃_{i=1}^∞ A_i) = Σ_{i=1}^∞ P(A_i).
About 50 years later, the additivity requirement became a subject of controversy
in that it was too restrictive to capture the full scope of measurement. For example, it
works well under idealized, error-free measurements, but is not adequate when mea-
surement errors are unavoidable. In 1954, Gustave Choquet developed a (potentially infinite) family of non-additive measures (capacities); for each given capacity, there exists a dual "alternating capacity". An integral based on these measures is non-additive; it can be computed using Riemann or Lebesgue integration and is applied specifically to membership functions and capacities.
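On a finite set, such a non-additive integral (commonly known as the Choquet integral) can be computed by sorting the function values and weighting successive increments by the capacity of the corresponding upper level sets. The function name and the example capacity below are our own illustration, not from the text.

```python
def choquet_integral(values, capacity):
    """Discrete Choquet integral of a nonnegative function w.r.t. a capacity.

    `values` maps each element x to f(x) >= 0; `capacity` maps frozensets
    to ν(A), with ν(∅) = 0, ν(X) = 1 and ν monotone (not checked here).
    Formula: sum over sorted values of (f_(i) - f_(i-1)) * ν({x : f(x) >= f_(i)}).
    """
    items = sorted(values.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    for i, (_, v) in enumerate(items):
        upper = frozenset(x for x, _ in items[i:])  # {x : f(x) >= f_(i)}
        total += (v - prev) * capacity[upper]
        prev = v
    return total

# Example on X = {a, b} with a non-additive capacity:
# ν({a}) + ν({b}) = 0.6, yet ν({a, b}) = 1.
cap = {
    frozenset(): 0.0,
    frozenset({"a"}): 0.3,
    frozenset({"b"}): 0.3,
    frozenset({"a", "b"}): 1.0,
}
print(choquet_integral({"a": 1.0, "b": 2.0}, cap))
```

Note that for an additive capacity this reduces to the ordinary expectation; the non-additivity of ν is exactly what makes the integral non-additive.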
In 1967, Arthur P. Dempster introduced imprecise probabilities based on the
motivation that the precision required in classical probability is not realistic in many
applications. Imprecise probabilities deal with convex sets of probability measures
rather than single measures. For each given convex set of probability measures he
also introduced two types of non-additive measures: lower and upper probabilities, which are super- and sub-additive, respectively. This allows probabilities to be represented imprecisely by intervals of real numbers.
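A finitely generated convex set of measures makes these interval-valued probabilities concrete: since P(A) is linear in the measure, its minimum and maximum over the convex hull are attained at the generating (extreme) measures. The names and the two-outcome example below are our own sketch.

```python
# A convex set of probability measures, represented by its extreme points.
# Because P(A) is linear in the measure, the lower/upper probabilities over
# the whole convex hull are attained at these extreme points.
extreme_measures = [
    {"rain": 0.2, "sun": 0.8},
    {"rain": 0.5, "sun": 0.5},
]

def prob_under(measure, event):
    return sum(measure[w] for w in event)

def lower_prob(event):
    """Lower probability: the infimum of P(A) over the convex set."""
    return min(prob_under(m, event) for m in extreme_measures)

def upper_prob(event):
    """Upper probability: the supremum of P(A) over the convex set."""
    return max(prob_under(m, event) for m in extreme_measures)

# The probability of rain is only known to lie in an interval:
print(lower_prob({"rain"}), upper_prob({"rain"}))  # 0.2 0.5
# Duality: upper(A) = 1 - lower(complement of A).
assert upper_prob({"rain"}) == 1 - lower_prob({"sun"})
```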
In 1976, Glenn Shafer analyzed special types of lower and upper probabilities
and called them belief and plausibility measures. The theory based on these measures