The plot is a convenient backdrop for
arranging the components of a reservoir model,
and the frequency/amplitude axes can be alterna-
tively labelled for 'reservoir model scale' and
'content'. The reservoir itself exists on all scales
and is represented by the full rectangle, which is
only partially covered by seismic data. The
missing areas are completed by the framework
model at the low frequency end and by core and
log-scale detail at the high frequency end, the
latter potentially a source for probabilistic inver-
sion studies which aim to extend the influence of
the seismic data to the high end of the spectrum.
The only full-frequency data set is a good
outcrop analogue, as it is only in the field that
the reservoir can be accessed on all scales. Well-facilitated
excursions to outcrop analogues are
thereby conveniently justified.
Is all the detail necessary? Here we can refer
back to Flora's Rule and the model purpose, which
will inform us how much of the full spectrum is
required to be modelled in any particular case.
In terms of seismic conditioning, it is only in
the case where the portion required for modelling
exactly matches the blue area in Fig. 2.21 that we
can confidently apply hard conditioning using
geobodies in the reservoir model, and this is
rarely the case.
Good introductions to the use of statistics in
geological reservoir modelling can be found in
Yarus and Chambers (1994), Holden et al.
(1998), Dubrule and Damsleth (2001), Deutsch
(2002) and Caers (2011).
Very often the reservoir modeller is con-
founded by complex geostatistical termi-
nology which is difficult to translate into the
modelling process. Take for example this quota-
tion from the excellent but fairly theoretical
Srivastava (1989):

"in an ideal theoretical world the sill is either the
stationary infinite variance of the random function
or the dispersion variance of data volumes within
the volume of the study area"
The problem for many of us is that we don't
work in an ideal theoretical world and struggle
with the concepts and terminology that are used
in statistical theory. This section therefore aims
to extract just those statistical concepts which are
essential for an intuitive understanding of what
happens in the statistical engines of reservoir
modelling software.
With the above considered, there can be some
logic as to the way in which deterministic control
is applied to a model, and establishing this is part
of the model design process. The probabilistic
aspects of the model should be clear, to the point
where the modeller can state whether the design
is strongly deterministic or strongly probabilistic
and identify where the deterministic and proba-
bilistic components sit.
Both components are implicitly required in
any model and it is argued here that the road to
happiness lies with strong deterministic control.
The outcome from the probabilistic components
of the model should be largely predictable, and
should be a clear reflection of the input data
combined with the deterministic constraints
imposed on the algorithms.
Disappointment occurs if the modeller
expects the probabilistic aspects of the software
to take on the role of model determination.
2.6.1 Key Geostatistical Concepts
The key concept which must be understood is
that of variance. Variance, σ², is a measure of the
average difference between individual values
and the mean of the dataset they come from. It
is a measure of the spread of the dataset:
$$\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2$$

where N is the number of values in the data set, and
μ is the mean of that data set.
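This definition of variance can be checked with a short calculation. The following is a minimal Python sketch, computing the population variance directly from the mean squared deviation; the data values are invented purely for illustration:

```python
def variance(values):
    """Population variance: the mean squared deviation from the mean."""
    n = len(values)
    mu = sum(values) / n                      # mean of the data set
    return sum((x - mu) ** 2 for x in values) / n

# Invented porosity-like values, for illustration only
data = [0.12, 0.18, 0.15, 0.21, 0.14]
var = variance(data)                          # spread of the data set
```

Note that this divides by N (the population form used in the formula above); many software packages instead report the sample variance, which divides by N − 1, so it is worth checking which convention a given tool uses.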
Variance-related concepts underlie much of
reservoir modelling. Two such occurrences are
summarised below: the use of correlation
coefficients and the variogram.
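To make these two variance-related tools concrete, the following is a minimal Python sketch of a Pearson correlation coefficient and an experimental semivariogram for regularly spaced 1-D data; the function names and sample values are illustrative assumptions, not taken from the text:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

def semivariogram(values, lag):
    """Experimental semivariance at a given lag for regularly spaced
    1-D data: half the mean squared difference between all pairs of
    values separated by `lag` steps."""
    pairs = [(values[i], values[i + lag]) for i in range(len(values) - lag)]
    return sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs))
```

Both quantities are built from the same squared-deviation machinery as the variance itself: the correlation coefficient is a covariance normalised by two standard deviations, and the semivariance approaches the data variance (the sill) as the lag exceeds the correlation range.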