the horizontal/vertical trends described above,
or derived from a 3D data source, typically a
seismic volume.
Seismic conditioning directly in 3D raises
some issues:
1. The volume needs QC. It is generally easier to
check simpler data elements, so if the desired
trends are separately captured in 2D trend
surfaces and vertical proportion curves then
combination into a 3D trend volume is not
necessary.
2. If conditioning to a 3D seismic volume, the
resolution of the model framework needs to be
consistent with the intervals the seismic attribute
is derived from. For example, if the parameter
being conditioned is the sand content within a
25 m thick interval, it must be assumed that the
seismic attribute is likewise derived from data
within that same 25 m interval.
This is unlikely to be the case from a simple
amplitude extraction and a better approach is to
condition from inverted seismic data. The
questions to ask are therefore: what was the
seismic inversion inverting for (was it indeed
the sand content) and, crucially, was the earth
model used for the inversion the same one on
which the reservoir model is being built?
3. If the criteria for using 3D seismic data (2,
above) are met, can a probabilistic seismic
inversion be called upon? This is the ideal
input to condition to.
4. If the criteria in point 2, above, are not met, the
seismic can still be used for soft conditioning,
but the result will contain fewer artefacts and be
easier to QC if the seismic is applied as a 2D
trend. The noisier the data,
the softer the conditioning will need to be, i.e.
the lower the correlation coefficient.
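As a rough sketch of points 1 and 4 above, the fragment below combines a 2D areal trend surface with a vertical proportion curve into a 3D trend volume and then applies it as soft conditioning in Gaussian space, where the correlation coefficient sets how strongly the simulation follows the trend. The grid sizes, trend values and blending scheme are illustrative assumptions, not the method of any particular modelling package:

```python
import numpy as np

# Hypothetical grid dimensions: nx, ny areal cells; nz vertical layers.
nx, ny, nz = 50, 40, 20
rng = np.random.default_rng(0)

# 2D areal trend surface, e.g. sand fraction mapped from well averages
# (an illustrative smooth surface; in practice this comes from mapping).
x = np.linspace(0, 1, nx)[:, None]
y = np.linspace(0, 1, ny)[None, :]
areal_trend = 0.3 + 0.4 * x * y                  # shape (nx, ny)

# 1D vertical proportion curve: mean sand fraction per layer.
vpc = np.linspace(0.6, 0.2, nz)                  # shape (nz,)

# Combine into a 3D trend volume by rescaling the areal trend so each
# layer honours the vertical proportion curve.
trend_3d = areal_trend[:, :, None] * (vpc / areal_trend.mean())[None, None, :]
trend_3d = np.clip(trend_3d, 0.0, 1.0)

# Soft conditioning in Gaussian space: rho is the correlation
# coefficient; the lower rho, the softer the conditioning.
rho = 0.5
trend_gauss = (trend_3d - trend_3d.mean()) / trend_3d.std()
noise = rng.standard_normal(trend_3d.shape)      # independent residual field
conditioned = rho * trend_gauss + np.sqrt(1 - rho**2) * noise
```

Lowering `rho` toward zero weakens the link between the conditioned field and the trend, which is the sense in which noisy seismic should be applied "softly".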
To illustrate this, an example is given below,
to which alternative algorithms have been
applied. The case is taken from a fluvio-deltaic
reservoir - the Franken Field - based on a type
log with a well-defined conceptual geological
model (Fig. 2.42). The main reservoir is the
Shelley, which divides into a clearly fluvial
Lower Shelley characterised by sheetfloods, and
an Upper Shelley, the sedimentology for which is
less clear and can be viewed as either a lower
coastal plain or a river-dominated delta.
Rock model realisations have been built from
element distributions in 19 wells. Cross-sections
taken at the same location through the models
are illustrated in Figs. 2.43, 2.44 and 2.45 for a
2-, 4- and 7-interval correlation, respectively.
The examples within each layering scheme
explore object vs. pixel (SIS) modelling and the
default model criteria (stationarity maintained)
vs. the use of deterministic trends (stationarity
overwritten).
The models contrast greatly and the following
observations can be made:
1. The more heavily subdivided models are
naturally more 'stripey'. This is partly due to the
'binning' of element well picks into zones,
which starts to break down stationarity by
picking up any systematic vertical
organisation of the elements, irrespective of
the algorithm chosen and without separate
application of vertical trends.
2. The stripey architecture is further enhanced in
the 7-zone model because the layering is
based on a flooding surface model, the unit
boundaries for which are preferentially picked
on shales. The unit boundaries are therefore
shale-rich by definition and prone to
generating correlatable shales if the shale
dimension is big enough (for object
modelling) or shale variogram range is long
enough (for SIS).
3. Across all frameworks, the object-based
models are consistently more 'lumpy' and
the SIS-based models consistently more
'spotty', a consequence of the difference
between the algorithms described in the
sections above.
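Observation 1 can be illustrated with a toy calculation (the well data and vertical trend below are synthetic assumptions, not the Franken Field values): binning well picks into progressively more zones picks up the systematic vertical organisation of the elements as distinct per-zone proportions, which is how stationarity starts to break down irrespective of the algorithm chosen:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical well data: 19 wells, 70 cells each, coded 1 = sand, 0 = shale.
# Sand probability declines up-section to mimic a systematic vertical trend.
n_wells, n_cells = 19, 70
depth_trend = np.linspace(0.7, 0.25, n_cells)    # sand proportion per cell index
wells = (rng.random((n_wells, n_cells)) < depth_trend).astype(int)

def zone_proportions(wells, n_zones):
    """Bin well cells into vertical zones; return sand proportion per zone."""
    zones = np.array_split(np.arange(wells.shape[1]), n_zones)
    return np.array([wells[:, z].mean() for z in zones])

# Coarse zonation averages the trend away; finer zonation captures it,
# giving each zone its own target proportion (non-stationarity).
for n_zones in (2, 4, 7):
    print(n_zones, np.round(zone_proportions(wells, n_zones), 2))
```

With two zones the vertical trend is largely smeared into two averages; with seven zones each zone carries a clearly different proportion, so the resulting model becomes 'stripey' even before any explicit vertical trend is applied.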
2.7.5 Alternative Rock Modelling
Methods - A Comparison
So which algorithm is the one to use? It will be the
one that best reflects the starting concept - the
architectural sketch - and this may require the
application of more than one algorithm, and almost
certainly the application of deterministic trends.