This is reasonable when the spatial arrangement of the barren material is predictable.
The conditional variance is independent of the conditioning event(s). This is an important consideration that will influence some of the geostatistical methods to be described later, and is written as:

Var{Y | X = x} = σ²_Y (1 − ρ²_XY)

2.5 Data Integration and Inference
The prediction of spatial variables requires consideration of multivariate distributions of values at different locations. Inference requires the combination of sample data to estimate at an unknown location. The calculation of conditional distributions is accomplished by application of Bayes' Law, one of the most important laws in statistical theory.

Bayes' Law provides the probability that a certain event will occur given that (or conditional to) a different event has already occurred. The mathematical expression for Bayes' Law can be written as:
P(E₁ | E₂) = P(E₁ and E₂) / P(E₂)

with E₁ and E₂ being the events, and P representing probabilities. If E₁ and E₂ are independent events, then knowing that E₁ occurred does not give additional information about whether E₂ will occur:

P(E₁ | E₂) = P(E₁)

P(E₁ and E₂) = P(E₁) · P(E₂)

Direct inference of multivariate variables is often difficult, which leads us to use the multivariate Gaussian model, mostly because it is straightforward to extend to higher dimensions. The bivariate Gaussian distribution is defined as:

(X, Y) ~ N(0, 1, ρ_XY)

f_XY(x, y) = [1 / (2π √(1 − ρ²_XY))] exp{−(x² − 2ρ_XY xy + y²) / [2(1 − ρ²_XY)]}

The relationship between the two variables is defined by a single parameter, the correlation coefficient, and in the XY cross-plot the probability contours are elliptical. The conditional expectation of Y given an event for X is a linear function of the conditioning event:

E{Y | X = x} = m_Y + ρ_XY (σ_Y / σ_X)(x − m_X)

The conditional expectation follows the equation of a line, y = mx + b, where the slope m is a function of the correlation coefficient and the intercept b is a function of the means.

For a standard bivariate Gaussian distribution (that is, both variables X and Y have a mean = 0 and variance = 1.0) the parameters are:

E{Y | X = x} = ρ_XY · x

Var{Y | X = x} = 1 − ρ²_XY

The extension to multivariate distributions is straightforward, and can be written as:

N(x; µ, Σ) = [1 / ((2π)^(d/2) |Σ|^(1/2))] exp{−(1/2)(x − µ)ᵀ Σ⁻¹ (x − µ)}

where d is the dimensionality of x. Note that µ, the mean of the distribution, is a (d × 1) vector, and Σ, the covariance matrix, is a (d × d) positive definite, symmetric variance-covariance matrix; |Σ| is the determinant of Σ. The i-th element of µ expresses the expected value of the i-th component of the random vector x; similarly, the (i, j) component of Σ expresses the expected value of x_i x_j minus µ_i µ_j. The diagonal elements of Σ are the variances of the corresponding components of x.

The multivariate (N-variate) Gaussian distribution possesses some extraordinary properties (Anderson 1958; Abramowitz and Stegun 1964):

1. All lower-order N−k marginal and conditional distributions are Gaussian.
2. All conditional expectations are linear functions of the conditioning data:

E{X_i | X_j = x_j, ∀ j ≠ i} = ∑_j λ_j x_j = φ(x_j, j ≠ i) = [x_i]_SK

where [x_i]_SK denotes the simple kriging estimate.

3. All conditional variances are homoscedastic (data-values-independent):

Var{X_i | X_j = x_j, ∀ j ≠ i} = E{[X_i − φ(x_j, j ≠ i)]² | X_j = x_j, ∀ j ≠ i} = E{[X_i − φ(x_j, j ≠ i)]²}
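The conditional moments of the standard bivariate Gaussian (E{Y | X = x} = ρ_XY · x and Var{Y | X = x} = 1 − ρ²_XY) can be checked with a small Monte Carlo sketch: simulate correlated standard normal pairs and condition on a narrow bin of X values. The correlation value, bin centre, and bin width below are arbitrary choices for illustration, not values from the text.

```python
import numpy as np

# Monte Carlo check of the standard bivariate Gaussian conditional moments:
# E{Y | X = x} = rho * x   and   Var{Y | X = x} = 1 - rho**2.
rng = np.random.default_rng(0)
rho = 0.7
n = 1_000_000

x = rng.standard_normal(n)
# Construct Y with the required correlation: Y = rho*X + sqrt(1 - rho^2)*Z
y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

# Condition on X falling in a narrow bin around x0
x0 = 1.0
mask = np.abs(x - x0) < 0.05

cond_mean = y[mask].mean()   # close to rho * x0 = 0.7
cond_var = y[mask].var()     # close to 1 - rho**2 = 0.51, regardless of x0
```

Repeating the experiment with a different bin centre x0 changes the conditional mean linearly but leaves the conditional variance essentially unchanged, which is the homoscedasticity property described above.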
 
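Properties 2 and 3 of the multivariate Gaussian can be sketched with the partitioned-covariance formulas: the conditional mean is a weighted linear combination of the conditioning data (the weights λ_j are the simple kriging weights), and the conditional variance is computed from the covariance matrix alone, never touching the data values. The 3 × 3 covariance matrix below is a made-up illustrative example, not from the text.

```python
import numpy as np

# Conditional moments of a multivariate Gaussian via partitioned covariance.
# Illustrative (invented) covariance matrix for a 3-component random vector.
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.5],
                  [0.3, 0.5, 1.0]])
mu = np.zeros(3)

# Condition component 0 on components 1 and 2
C00 = Sigma[0, 0]
C0r = Sigma[0, 1:]      # covariances between X_0 and the conditioning data
Crr = Sigma[1:, 1:]     # covariances among the conditioning data

lam = np.linalg.solve(Crr, C0r)   # simple kriging weights, lam ≈ [0.6, 0.0]

# Property 3: the conditional variance uses only covariances, not data values
cond_var = C00 - lam @ C0r        # ≈ 0.64

# Property 2: the conditional mean is linear in the conditioning data
x_data = np.array([1.2, -0.4])
cond_mean = mu[0] + lam @ (x_data - mu[1:])   # ≈ 0.6 * 1.2 = 0.72
```

Note that `cond_var` is computed before any data values are introduced: changing `x_data` shifts the conditional mean but cannot change the conditional variance, exactly as property 3 states.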