es, and (d) the multi-scale multivariate data. The shape of the resulting histogram is difficult to control, and the multivariate spatial features are typically very similar to those of SGS (Deutsch 2002).
While any distribution shape can be used and can be changed locally, the resulting histogram will be subject to three influences: the histogram of the original data, the chosen shape of the random distributions, and the Gaussian distribution that results from the averaging of random components, i.e., the Central Limit Theorem.
Correction schemes proposed to obtain the correct local histograms include post-processing of realizations (Journel and Xu 1994; Caers 2000), selective sampling, and establishing a consistent set of distributions (Oz et al. 2003). The latter builds the shape of the conditional distributions from the link between Gaussian and original data units, and is the recommended option to resolve DSS's issues with histogram reproduction: the link to the Gaussian model gives the ccdf its expected shape, yet the results are not Gaussian; no post-processing or ad hoc correction is required; and the block data are reproduced exactly.
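A minimal sketch of this consistent-distribution idea, assuming the normal-score transform table and the simple-kriged mean and variance in Gaussian units are available from the surrounding DSS implementation (the function and argument names below are illustrative, not from the original text):

```python
import numpy as np
from scipy.stats import norm

def local_ccdf_from_gaussian_link(y_mean, y_var, y_table, z_table, n_quantiles=200):
    """Build a local distribution in original units through the Gaussian link.

    y_mean, y_var : simple-kriging mean and variance in normal-score units
    y_table       : normal scores of the data, sorted in increasing order
    z_table       : corresponding data values in original units
    Returns a discretized set of quantiles of the local distribution in original
    units; drawing one of them uniformly at random amounts to drawing from it.
    """
    p = (np.arange(n_quantiles) + 0.5) / n_quantiles                 # regular probabilities
    y_q = norm.ppf(p, loc=y_mean, scale=np.sqrt(max(y_var, 1e-12)))  # Gaussian quantiles
    # Back-transform each Gaussian quantile through the normal-score table;
    # tails are clamped to the data limits in this sketch (no extrapolation)
    return np.interp(y_q, y_table, z_table)
```

Because every local distribution is derived from the same Gaussian model and the same transform table, the drawn values stay in original units while the intent is that the global histogram is reproduced without post-processing.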
Another significant issue is the proportional effect. For most variables, the local mean and the local variance are correlated. Simulations that use a transformation, such as indicator and Gaussian techniques, are insensitive to the proportional effect, because the transform effectively removes it, even though the data in the original space do show a proportional effect.
In DSS, the kriging variance provides the variance of the local ccdf. This variance depends only on the data configuration and is independent of the data values, which is incorrect most of the time when dealing with data in original units: the kriging variance is not a measure of local variability and only works well after a Gaussian transformation, yet the central idea of DSS is to avoid that transformation. The best approach when using DSS is to: (a) use a standardized variogram; (b) calculate the standardized kriging variance; and (c) rescale that variance to a local measure of variability. This requires two additional steps: fitting the proportional effect and calculating the local mean at each location.
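A minimal sketch of steps (a) through (c), assuming the variogram has been standardized to a unit sill and the proportional effect is modeled as a power-law relation between local mean and local variance (the function names, the power-law form, and the numbers are illustrative assumptions, not from the original text):

```python
import numpy as np

def fit_proportional_effect(window_means, window_variances):
    """Fit a power-law proportional effect, variance ~ a * mean**b, from
    moving-window statistics of the original data (regression in log space)."""
    b, log_a = np.polyfit(np.log(window_means), np.log(window_variances), 1)
    return np.exp(log_a), b

def local_ccdf_variance(std_kriging_variance, local_mean, pe_coeffs):
    """Rescale the standardized kriging variance (computed with a unit-sill
    variogram, hence between 0 and 1) to a local measure of variability."""
    a, b = pe_coeffs
    local_variance = a * local_mean ** b           # variance implied by the proportional effect
    return std_kriging_variance * local_variance   # variance of the local ccdf in original units

# Illustrative use: hypothetical moving-window statistics, a standardized
# kriging variance of 0.4, and a local mean of 2.5 at the simulated location
means = np.array([0.8, 1.5, 2.3, 3.1, 4.0])
variances = np.array([0.5, 1.4, 3.0, 5.2, 8.1])
var_z = local_ccdf_variance(0.4, 2.5, fit_proportional_effect(means, variances))
```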
10.2.5
Direct Block Simulation
Direct block simulation is another simulation option that attempts to simplify the simulation process by working directly at a support other than the original nodes or composites. Journel and Huijbregts (1978) originally proposed a direct block simulation based on separate simulation and conditioning steps. The method uses a global change of support (Chap. 7), based on a permanence-of-distribution technique, to correct the point support data to block support; the conditioning then happens at the block support level.
A different approach was proposed by Gómez-Hernández (1992) in the context of simulating hydraulic conductivity fields. The idea is that, if the block statistics are known and the point and block distributions are assumed jointly Gaussian, then a joint sequential Gaussian simulation provides block simulated values conditioned directly to the original point support data. The inference of the block covariances can be done with a global change of support using a permanence-of-distribution assumption or, as developed by Gómez-Hernández, from a training image and the assumption of a univariate lognormal distribution at both point and block support, that is, with the sequential Gaussian simulation performed on the log of the original data.
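For context, the point-block and block-block covariances required by such a joint point-block system are commonly obtained by averaging point covariances over the block discretization points. A minimal sketch, with a hypothetical isotropic unit-sill spherical covariance standing in for a fitted model:

```python
import numpy as np

def spherical_cov(p, q, a=100.0):
    """Hypothetical isotropic spherical covariance (unit sill, range a)."""
    h = np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    return 0.0 if h >= a else 1.0 - 1.5 * (h / a) + 0.5 * (h / a) ** 3

def point_block_cov(point, block_pts, cov=spherical_cov):
    """Average point-to-point covariances over the block discretization points."""
    return float(np.mean([cov(point, q) for q in block_pts]))

def block_block_cov(block_a, block_b, cov=spherical_cov):
    """Average covariances over all pairs of discretization points of two blocks."""
    return float(np.mean([cov(p, q) for p in block_a for q in block_b]))
```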
Marcotte (1993) provided another alternative, based on using Disjunctive Kriging (DK) to obtain the local block cumulative distribution functions. The simulated values are drawn from this local block cdf. The method has the substantial disadvantage of relying on DK, a cumbersome and theoretically difficult method to implement that also requires a strong prior assumption about the distribution of block grades; at the same time, it offers the flexibility of integrating data types with different supports, including drill hole data, bulk samples, and mined-out stopes or areas.
Godoy (2002) developed an alternative version, the Direct Block Simulation (DBSim) algorithm, an adaptation of the "classical" SGS method. The main difference between traditional SGS and DBSim is that the DBSim simulation is first performed on nodes and then immediately re-blocked to the specified block (SMU) size, and the block data (through their discretization points) are used to condition subsequent nodes and blocks along the sequential random path. DBSim works on block centroids, retaining in memory only the previously simulated block values, whereas SGS first obtains the full set of nodes on the random path; in SGS, re-blocking from node support to the SMU block size is a separate, independent step.
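A schematic of this workflow, not Godoy's implementation: the container names and the krige_and_draw helper below are placeholders for one step of sequential simulation conditioned to both the original point data and the previously simulated blocks.

```python
import numpy as np

def direct_block_simulation(blocks, random_path, point_data, krige_and_draw):
    """Schematic DBSim loop: simulate the nodes of each block, re-block at once,
    and keep only block-support values for conditioning.

    blocks        : dict mapping block id -> array of discretization node coordinates
    random_path   : sequence of block ids defining the visiting order
    point_data    : original point-support conditioning data
    krige_and_draw: callable(node, point_data, block_values) -> simulated node value
    """
    block_values = {}                                    # only block-support results are kept
    for bid in random_path:
        nodes = blocks[bid]
        node_values = np.array([krige_and_draw(n, point_data, block_values)
                                for n in nodes])         # simulate the block's nodes
        block_values[bid] = node_values.mean()           # immediate re-blocking to SMU support
        # node values are discarded; later blocks condition on block_values only
    return block_values
```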
The implementation of direct block simulation has, as with direct sequential simulation, pitfalls that must be avoided. But in cases where the size of the simulated model makes node-scale simulation onerous, or even impractical, it may be an alternative worth investigating.
10.2.6
Probability Field Simulation
The key idea of probability field (P-field) simulation is to perform the simulation in two separate steps (Froidevaux 1992). In the first step, the local distributions of uncertainty are constructed; this is done using only the original data, so it needs to be done only once rather than repeatedly for each realization. The second step is to draw from those distributions using spatially correlated probability values instead of the independent random numbers used in traditional Monte Carlo simulation.
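A minimal sketch of the two steps, assuming the local distributions have already been built (one quantile function per node) and that an unconditional standard Gaussian field with the required spatial correlation is available; the names are illustrative:

```python
import numpy as np
from scipy.stats import norm

def p_field_realization(local_quantile_functions, gaussian_field):
    """Draw one P-field realization.

    local_quantile_functions : list of callables; the i-th callable maps a
                               probability p in (0, 1) to the p-quantile of the
                               local distribution at node i (built once from
                               the original data)
    gaussian_field           : unconditional standard Gaussian simulation on
                               the same nodes, carrying the spatial correlation
    """
    probabilities = norm.cdf(gaussian_field)        # correlated probabilities in (0, 1)
    return np.array([q(p) for q, p in zip(local_quantile_functions, probabilities)])
```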