where [y(u_α)]_{n×1} is the column matrix of the n normal score conditioning data and [y^{(l)}(u_i)]_{N×1} is the column matrix of the N conditionally simulated y values.
Identification of the conditioning data is written as L_{11} ω_1 = [y(u_α)]; thus the matrix ω_1 is set to:

$$\omega_1 = L_{11}^{-1}\,[y(u_\alpha)]_{n\times 1}$$
The column vector ω_2^{(l)} = [ω_2^{(l)}]_{N×1} is a vector of N independent standard normal deviates.
Additional realizations, l = 1, …, L, are obtained at very little additional cost by drawing a new set of normal deviates ω_2^{(l)} and then applying the matrix multiplication. The major cost and memory requirement is in the upfront LU decomposition of the large matrix C and in the identification of the weight matrix ω_1.
The LU decomposition algorithm requires that all nodes and data locations be considered simultaneously in a single covariance matrix C. The current practical limit on the number (n + N) is no greater than a few hundred.
Implementation variants have been considered that attempt to relax this size limitation by considering overlapping neighborhoods of data locations. Unfortunately, artifact discontinuities appear if the correlation between all simulated nodes is not fully accounted for.
The LU decomposition algorithm is particularly appropriate when a large number of realizations is needed over a small volume or block (n + N is small). A typical application is the evaluation of block ccdfs. Any block V can be discretized into N points. The normal score values at these N points are simulated L times (l = 1, …, L) through the LU decomposition algorithm and back-transformed into simulated point z values: {z^{(l)}(u′_i), i = 1, …, N; u′_i in V}, l = 1, …, L. Each set of N simulated point values can then be averaged to yield a simulated block value. The distribution of the L simulated block values z_V^{(l)}, l = 1, …, L, provides a numerical approximation of the probability distribution (ccdf) of the block average, conditional to the data retained.
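As a concrete illustration of the conditioning steps above, the following sketch performs an LU (here Cholesky) conditional simulation in 1-D. The exponential correlation model, the locations, and the data values are all hypothetical choices for illustration, not prescribed by the text:

```python
import numpy as np

def rho(h, a=10.0):
    """Exponential correlation model (an illustrative choice, not prescribed)."""
    return np.exp(-3.0 * np.abs(h) / a)

rng = np.random.default_rng(7)

# n normal-score conditioning data and N nodes to simulate (1-D, hypothetical)
x_data = np.array([2.5, 14.1, 33.3])
y_data = np.array([-0.4, 1.1, 0.3])
x_sim = np.linspace(0.0, 40.0, 25)
n, N = len(x_data), len(x_sim)

# Single (n + N) x (n + N) covariance matrix C over data + nodes, then C = L L^T
x_all = np.concatenate([x_data, x_sim])
C = rho(x_all[:, None] - x_all[None, :])
L = np.linalg.cholesky(C + 1e-8 * np.eye(n + N))  # small jitter for stability
L11, L21, L22 = L[:n, :n], L[n:, :n], L[n:, n:]

# Identification of the conditioning data: solve L11 w1 = [y(u_alpha)]
w1 = np.linalg.solve(L11, y_data)

# Each additional realization costs only N new standard normal deviates w2
sims = np.array([L21 @ w1 + L22 @ rng.standard_normal(N) for _ in range(100)])
```

Because ω_1 is fixed by the data, each further realization only requires a new deviate vector ω_2 and one matrix multiplication; the upfront cost is the decomposition of C, which is why (n + N) must stay small.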
10.2.4 Direct Sequential Simulation

Direct Sequential Simulation (DSS, Soares 2001) is based on the idea that a non-Gaussian distribution can be used in the sequential path, as long as that distribution has the same mean and variance as the Gaussian distribution it replaces; it is therefore seen as an extension of the more established Gaussian simulation paradigm (Journel 1994). In essence, direct sequential simulation is the same as sequential Gaussian simulation, but without the normal score transform step.
The reasons to be interested in DSS include: (a) the reproduction of the variogram in original units; (b) dealing with variables that do not average linearly after the normal score transform, in which case Gaussian techniques are inappropriate for data at different scales; and (c) the maximum entropy characteristic of the Gaussian distribution.
Consider simple kriging at node u with N data values z(u_α), α = 1, …, N:

$$z^{*}(u) = \sum_{\alpha=1}^{N} \lambda_{\alpha}\, z(u_{\alpha})$$

The corresponding simple kriging variance is

$$\sigma^{2}_{SK}(u) = 1 - \sum_{\alpha=1}^{N} \lambda_{\alpha}\, \rho(u - u_{\alpha})$$

where the weights λ_α solve the normal equations

$$\sum_{\beta=1}^{N} \lambda_{\beta}\, \rho(u_{\alpha} - u_{\beta}) = \rho(u - u_{\alpha}), \quad \alpha = 1, \ldots, N$$
A random variable z_s(u) is drawn from the univariate probability distribution function (pdf) f(u, z | (N)):

$$Z_{S}(u) = Z^{*}(u) + R_{S}(u)$$

with the residual R_S(u) drawn from a pdf f(r) with mean 0 and variance σ²_SK(u). The critical point is the independence of Z*(u) and R_S(u), linked to the homoscedastic property of the Gaussian variance σ²_SK(u).
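Under a hypothetical 1-D exponential correlation model, the kriging system, the kriging variance, and the draw of Z_S(u) described above can be sketched as follows; the locations and data values are illustrative assumptions:

```python
import numpy as np

def rho(h, a=10.0):
    """Exponential correlation model (illustrative assumption)."""
    return np.exp(-3.0 * np.abs(h) / a)

rng = np.random.default_rng(42)

# N = 3 data values, assumed already standardized to mean 0 and variance 1
u_data = np.array([1.0, 7.0, 12.0])
z_data = np.array([0.8, -0.3, 0.5])
u = 5.0  # node to simulate

# Normal equations: sum_b lam_b rho(u_a - u_b) = rho(u - u_a), a = 1..N
K = rho(u_data[:, None] - u_data[None, :])
k = rho(u - u_data)
lam = np.linalg.solve(K, k)

z_star = lam @ z_data     # z*(u) = sum_a lam_a z(u_a)
sk_var = 1.0 - lam @ k    # sigma2_SK(u) = 1 - sum_a lam_a rho(u - u_a)

# R_S(u) ~ N(0, sigma2_SK), drawn independently of Z*(u)
z_s = z_star + np.sqrt(sk_var) * rng.standard_normal()
```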
Now consider the next node u′. Simple kriging using N + 1 data, including the previously simulated value z_s(u), is written as:
$$z^{*}(u') = \sum_{\alpha=1}^{N} \lambda_{\alpha}(u')\, z(u_{\alpha}) + \lambda_{N+1}(u')\, z_{s}(u)$$

$$\sigma^{2}_{SK}(u') = 1 - \sum_{\alpha=1}^{N} \lambda_{\alpha}(u')\, \rho(u' - u_{\alpha}) - \lambda_{N+1}(u')\, \rho(u' - u)$$

where the N + 1 weights solve the extended normal equations

$$\sum_{\beta=1}^{N} \lambda_{\beta}(u')\, \rho(u_{\alpha} - u_{\beta}) + \lambda_{N+1}(u')\, \rho(u_{\alpha} - u) = \rho(u' - u_{\alpha}), \quad \alpha = 1, \ldots, N$$

$$\sum_{\beta=1}^{N} \lambda_{\beta}(u')\, \rho(u - u_{\beta}) + \lambda_{N+1}(u') = \rho(u' - u)$$
A simulated value can then be drawn from this distribution:

$$Z_{S}(u') = Z^{*}(u') + R_{S}(u')$$
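Continuing the hypothetical 1-D example, the previously simulated pair (u, z_s) simply joins the conditioning set, so the second node is kriged from N + 1 data; the simulated value z_s and the node locations below are illustrative assumptions:

```python
import numpy as np

def rho(h, a=10.0):
    """Exponential correlation model (illustrative assumption)."""
    return np.exp(-3.0 * np.abs(h) / a)

# Hypothetical data, plus the value z_s previously simulated at node u
u_data = np.array([1.0, 7.0, 12.0])
z_data = np.array([0.8, -0.3, 0.5])
u, z_s = 5.0, 0.25   # illustrative simulated value at the first node
u2 = 9.0             # next node u' on the random path

# Extended (N + 1)-point system: the simulated node is treated as a datum
u_all = np.append(u_data, u)
z_all = np.append(z_data, z_s)
K = rho(u_all[:, None] - u_all[None, :])
k = rho(u2 - u_all)
lam = np.linalg.solve(K, k)   # lam[:N] = lam_a(u'), lam[N] = lam_{N+1}(u')

z_star2 = lam @ z_all         # z*(u') depends on the earlier random value z_s
sk_var2 = 1.0 - lam @ k       # sigma2_SK(u')
```

The weight lam[N] attached to z_s is what makes successive simulated values correlated, which is the point of the sequential construction.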
The two kriged values clearly depend on one another, and the kriged value at the second location depends on the first random value.
It can be shown that the covariance between the two values is correct. This is the well-established theory of sequential simulation: it is unbiased, the variance is correct, and the covariance between all simulated values is correct. However, there are concerns with DSS: (a) it simply cannot avoid the influence of Gaussianity; (b) the shape of the R-values distribution required to preserve the original histogram; (c) the proportional effect, or heteroscedasticity, of kriging variances.
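The claim that the covariance between sequentially simulated values is reproduced can be checked by Monte Carlo. This sketch uses two unconditional nodes and the same hypothetical exponential correlation model as the earlier examples:

```python
import numpy as np

def rho(h, a=10.0):
    """Exponential correlation model (illustrative assumption)."""
    return np.exp(-3.0 * np.abs(h) / a)

rng = np.random.default_rng(1)
u, u2 = 5.0, 9.0            # two nodes, simulated with no conditioning data

n_real = 200_000
z1 = rng.standard_normal(n_real)   # first node: draw from N(0, 1)

# Second node: single-datum simple kriging on z1, plus independent residual
lam = rho(u2 - u)                  # SK weight when the only datum is z1
sk_var = 1.0 - lam**2
z2 = lam * z1 + np.sqrt(sk_var) * rng.standard_normal(n_real)

cov_hat = np.mean(z1 * z2)         # sample covariance (both means are 0)
# cov_hat approximates the model covariance rho(u2 - u)
```

The sample covariance converges to ρ(u′ − u), confirming that the sequential draw reproduces the model covariance between the two simulated values.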