3.2 Procedures for Classical and Sequential Decision-Making
3.2.1 Classical Neyman-Pearson Decision-Making Procedure
The classical decision-making procedure is based on the given volume, n, of the measurements. The value of n is determined by a priori information about the probability density function f_a(x_1, ..., x_n), where the random variables {x_i} are the observation data. The hypotheses H_0 and H_1 are that a_0 or a_1 is prescribed, respectively. The distinction between these hypotheses is based on the synthesis of the boundary for the optimal critical area, E_1, in the hyper-surface of the form:
    L_n = L_n(x_1, ..., x_n) = f_a1(x_1, ..., x_n) / f_a0(x_1, ..., x_n) = C     (3.1)
where

    f_a(x_1, ..., x_n) = ∏_{i=1}^{n} f_a(x_i),
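As a side note, this factorization of the joint density into a product of one-dimensional densities is what makes L_n computable in practice; for large n the product is best evaluated in the log domain to avoid floating-point underflow. A minimal sketch, assuming unit-variance Gaussian densities (an illustrative choice, not a model prescribed by the text):

```python
import math

def log_joint_density(sample, mean):
    """log f_a(x_1, ..., x_n) = sum_i log f_a(x_i) for i.i.d. Gaussian
    observations with the given mean and unit variance."""
    return sum(-0.5 * (x - mean) ** 2 - 0.5 * math.log(2.0 * math.pi)
               for x in sample)

sample = [0.1, -0.3, 0.2, 0.0]

# Direct product of the one-dimensional densities, for comparison.
direct = 1.0
for x in sample:
    direct *= math.exp(-0.5 * x ** 2) / math.sqrt(2.0 * math.pi)

print(math.exp(log_joint_density(sample, 0.0)), direct)
```

For a sample this small both routes agree to machine precision; the log-domain form only becomes essential when n is large.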
f_a(x) is the probability density function of the variable x with the unknown parameter a, and C is a constant determined under the condition that E_1 has a given level of the error of the first kind, α.

The ratio of conditional probabilities in Eq. (3.1), which is called the likelihood ratio, provides the final choice between the above hypotheses:
(1) if L_n ≤ C, then hypothesis H_0 is accepted;
(2) if L_n > C, then hypothesis H_1 is chosen.
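The two-branch rule above can be sketched directly. The unit-variance Gaussian densities with means a_0 = 0 and a_1 = 1, the sample values, and the threshold C = 1 are illustrative assumptions, not values from the text:

```python
import math

def likelihood_ratio(sample, pdf_a0, pdf_a1):
    """L_n = prod_i f_a1(x_i) / f_a0(x_i) for an i.i.d. sample."""
    ln = 1.0
    for x in sample:
        ln *= pdf_a1(x) / pdf_a0(x)
    return ln

def decide(sample, pdf_a0, pdf_a1, C):
    """Accept H_0 if L_n <= C, otherwise choose H_1."""
    return "H1" if likelihood_ratio(sample, pdf_a0, pdf_a1) > C else "H0"

def gauss_pdf(mean):
    """Unit-variance Gaussian density with the given mean."""
    return lambda x: math.exp(-0.5 * (x - mean) ** 2) / math.sqrt(2.0 * math.pi)

# Illustrative sample whose values cluster around a_1 = 1, so H_1 is chosen.
sample = [1.2, 0.8, 1.5, 0.9, 1.1]
print(decide(sample, gauss_pdf(0.0), gauss_pdf(1.0), C=1.0))  # → H1
```

In this Gaussian case log L_n reduces to Σ x_i − n/2, so the rule is equivalent to comparing the sample sum against a fixed cut-off.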
In general, there are many criteria, such as Bayesian minimum error, minimax, etc. Dalton and Dougherty (2011) derived a closed-form analytic representation of the Bayesian minimum mean-square error estimator for linear classification assuming Gaussian models. This is presented in a general framework permitting a structure on the covariance matrices and a very flexible class of prior parameter distributions with four free parameters. Closed-form solutions are provided for known, scaled identity, and arbitrary covariance matrices. Minimax theory is developed in the framework of game theory (Krapivin 1972; Krapivin and Klimov 1995, 1997; Myerson 1997).
The content of hypotheses H_0 and H_1 depends on the specific conditions of the task. In fact, there are two steps to use for these criteria. The first step is the synthesis of the empirical function of the probability distribution for the observed variable x. The second step is its transformation to f(x) using, for example, the Neyman-Pearson criterion (Fig. 3.1).
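One way to realize the Neyman-Pearson constraint in practice is to pick the constant C in Eq. (3.1) empirically: simulate samples under H_0, compute L_n for each, and take the (1 − α) quantile so that the first-kind error is approximately the prescribed α. A Monte Carlo sketch under the same hypothetical Gaussian setting (a_0 = 0, a_1 = 1, unit variance — assumptions for illustration only):

```python
import math
import random

def threshold_for_alpha(pdf_a0, pdf_a1, sampler_a0, n, alpha,
                        trials=5000, seed=1):
    """Monte Carlo estimate of C such that P(L_n > C | H_0) ≈ alpha."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(trials):
        ln = 1.0
        for _ in range(n):
            x = sampler_a0(rng)       # draw one observation under H_0
            ln *= pdf_a1(x) / pdf_a0(x)
        ratios.append(ln)
    ratios.sort()
    # C is the empirical (1 - alpha) quantile of L_n under H_0.
    return ratios[int((1.0 - alpha) * trials) - 1]

def gauss_pdf(mean):
    """Unit-variance Gaussian density with the given mean."""
    return lambda x: math.exp(-0.5 * (x - mean) ** 2) / math.sqrt(2.0 * math.pi)

C = threshold_for_alpha(gauss_pdf(0.0), gauss_pdf(1.0),
                        lambda rng: rng.gauss(0.0, 1.0), n=20, alpha=0.05)
print(C)
```

With 5000 trials the quantile estimate is rough; in this Gaussian case the threshold could instead be computed analytically from the normal distribution of log L_n, which the simulation merely approximates.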
Let us consider the case of a uniform data set where the selected values x_i (i = 1, ..., n) are independent realizations of the same random variable having the density f_a0(x) for the hypothesis H_0 and the density f_a1(x) when the hypothesis H_1 is