There should be established procedures for safekeeping and updating the database such that no errors are introduced. All of these procedures should be documented in a manual that is always available as part of the audit trail for future reference. In addition, the protocols and procedures that have been formally established should be periodically reviewed and audited to ensure that they are applied as intended. Record-keeping of these internal reviews provides a very useful log of the evolution of the stored information. Much like an owner's maintenance log for a car, it will increase the value of the project and the reliability of the resource model.
Backups of the electronic database should be preserved in different locations. The database itself should be relational, and preferably should not require specialists for its custody and maintenance.
Checks against original information should be done by comparing the original assay certificates, as issued by the laboratory and properly signed by a laboratory representative, against the values stored in the database. This check should also be done against the original geologic logs, the original down-the-hole survey information (certificates or photos, depending on the method used), and the original signed surveyor's report for drill hole collar locations. The commonly accepted error rate is 1 % or less of all records when comparing the original information with the computerized database. Although practitioners take different approaches, it is common to differentiate between consequential and non-consequential errors, with more tolerance applicable to errors that have little impact.
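As an illustration, a minimal sketch of such a record-by-record comparison is shown below. The file names, field names, and the keying of records by hole identifier and interval are hypothetical, and a production check would also handle numeric tolerances, units, and missing intervals.

```python
import csv

# Fields checked against the signed originals; names are hypothetical.
FIELDS = ["hole_id", "from_m", "to_m", "au_gpt"]

def load(path):
    """Load records keyed by (hole_id, from_m, to_m)."""
    with open(path, newline="") as f:
        return {(r["hole_id"], r["from_m"], r["to_m"]): r
                for r in csv.DictReader(f)}

def error_rate(certificates_csv, database_csv):
    """Fraction of database records that disagree with the originals."""
    originals = load(certificates_csv)
    stored = load(database_csv)
    errors = 0
    for key, rec in stored.items():
        orig = originals.get(key)
        # A missing original or any mismatched field counts as one error.
        if orig is None or any(rec[f] != orig[f] for f in FIELDS):
            errors += 1
    return errors / len(stored)

rate = error_rate("assay_certificates.csv", "project_database.csv")
print(f"Record error rate: {rate:.2%}")  # flag if above the 1 % tolerance
```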
Bulk density data are often forgotten in the validation process. Chapter 5 discusses the importance of density data in more detail, but in all cases there should be a sufficient number of measurements for each rock type or geologic domain; their locations should be well documented; and the measurements should be of in-situ density. Measurements on crushed material (such as those performed by metallurgical laboratories) are not adequate for resource estimation. Voids that may be present in the rock are one of the most common sources of error, and thus the measurement should be taken using a wax-coating method. For some types of deposits, such as massive sulfides or deposits in lateritic or tropical environments with high humidity, bulk density is a key variable that may be a significant source of error. Since tonnage is the product of volume and bulk density, any relative error in density carries through directly as the same relative error in tonnage and contained metal.
Details of suggested sample quality assurance and quality control programs were discussed in Chap. 5. The available information should be analyzed well in advance of the completion of the resource model, while drilling is still ongoing. This allows corrective measures, such as re-assaying, to be completed before the modeling process begins. This information should be stored as part of the overall project database.
11.3 Resampling
Cross-validation and jackknife techniques are sometimes used in an attempt to determine the “best” variogram model to use in the grade estimation process. Kriging plans are also sometimes optimized based on cross-validation exercises. There are several flavors of these methods, the most commonly used requiring that a sample be removed from the database and its value re-estimated using the remaining samples and the variogram models being tested. If multiple variogram models and estimation strategies are tested, the one that produces the smallest error statistics can be chosen. As tempting as it sounds, this cross-validation method should not be abused, as discussed below.
A more acceptable alternative, although little used in practice, is to discard a sub-group of data from the dataset and re-estimate or simulate it using the remaining information and the variogram models being tested. This method requires a well-established stationary domain with a large number of samples, such that about 50 % of them can be taken out while the variogram model and other statistical properties are still maintained.
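As an illustration of the premise behind this hold-out approach, the minimal numpy sketch below randomly discards half of a dataset and checks that the mean, variance, and experimental semivariogram of the retained half remain close to those of the full set. The synthetic coordinates, grades, lag distances, and tolerance are all assumptions for the example, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical stationary domain: 400 sample locations and grades.
coords = rng.uniform(0.0, 500.0, (400, 2))
grades = rng.lognormal(0.0, 0.4, 400)

def semivariogram(coords, values, lags, tol):
    """Experimental semivariogram at the given lag distances."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)   # count each pair once
    d, sq = d[iu], sq[iu]
    return [sq[np.abs(d - h) < tol].mean() for h in lags]

# Randomly discard 50 % of the samples, as the text suggests.
keep = rng.permutation(len(grades)) < len(grades) // 2
lags = [25.0, 50.0, 100.0, 150.0]

for label, c, g in [("full set ", coords, grades),
                    ("50 % kept", coords[keep], grades[keep])]:
    gamma = semivariogram(c, g, lags, tol=12.5)
    print(label, f"mean={g.mean():.3f} var={g.var():.3f}",
          "gamma=" + " ".join(f"{x:.3f}" for x in gamma))
```

If the two sets of statistics diverge materially, the domain is too small or too poorly behaved to support this kind of hold-out validation.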
11.3.1 Cross-Validation
This technique, sometimes also called jackknifing, has been used to validate alternative variogram models. The idea is to re-estimate each drill hole sample interval z(x_α), α = 1, …, n, ignoring the sample at that location and using the other (n − 1) samples in the re-estimation. After repeating this process for each sample throughout the domain of interest, a set of n errors [z*(x_α) − z(x_α)] is available, where z*(x_α) are the re-estimated values at each location for which the known assayed value z(x_α) is available. Statistics performed on these errors give an indication of the goodness of the variogram model and kriging plan used in the re-estimation. Typically this method is used to compare two or more alternative variogram models, alternative types of kriging (ordinary kriging, universal kriging, etc.), or different kriging plans.
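As a minimal sketch of this procedure, the following numpy code re-estimates each sample by ordinary kriging from the remaining (n − 1) samples and summarizes the errors for two candidate variogram models. The synthetic data and the spherical-model parameters are illustrative assumptions only, not values from the text.

```python
import numpy as np

def spherical(h, nugget, sill, a):
    """Spherical semivariogram with range a; gamma(0) = 0 by convention."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h == 0.0, 0.0, np.where(h >= a, sill, g))

def ok_estimate(coords, values, target, gamma):
    """Ordinary kriging estimate z* at one target location."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0                       # Lagrange multiplier block
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ values)

def cross_validate(coords, values, gamma):
    """Leave-one-out errors z*(x_a) - z(x_a) for every sample."""
    idx = np.arange(len(values))
    return np.array([
        ok_estimate(coords[idx != i], values[idx != i], coords[i], gamma)
        - values[i]
        for i in idx])

# Synthetic drill hole samples for illustration.
rng = np.random.default_rng(7)
coords = rng.uniform(0.0, 100.0, (40, 2))
values = rng.lognormal(0.0, 0.5, 40)

candidates = {
    "short range, low nugget": lambda h: spherical(h, 0.1, 1.0, 30.0),
    "long range, high nugget": lambda h: spherical(h, 0.3, 1.0, 60.0),
}
for name, gamma in candidates.items():
    e = cross_validate(coords, values, gamma)
    print(f"{name}: mean error {e.mean():+.4f}, "
          f"RMSE {np.sqrt(np.mean(e ** 2)):.4f}")
```

If the kriging variance is also retained at each location, the standardized errors should have a variance close to one, which is a common additional check on the variogram model.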
The validity and usefulness of this type of cross-validation technique have been rightly questioned, mainly because the method is not sensitive enough to detect minor advantages of, say, one variogram model over another (Clark 1986; Davis 1987). In addition, the analysis is performed on the set of samples, which does not allow for any definite conclusion about the final block estimates. A ranking of alternative variogram models according to their performance in re-estimating samples will not necessarily correspond to their ranking in the final estimation run.
Another potential issue when using this technique is
whether the closest samples to the one being re-estimated