Markov chains
Markov models are mathematical models of probabilistic processes, which generate random sequences of outcomes with certain probabilities. Detailed treatments can be found in Kemeny & Snell (1960), Roberts (1976) and Iosifescu (1980), and examples applied to sedimentology in Krumbein & Dacey (1969), Miall (1973), Doveton (1971, 1994), Carle & Fogg (1997), Davis (2002) and many others. The basic premise of a Markov process is that if the outcomes of all the first t − n events of a series of events are known, then the probabilities of the outcomes in the t-th experiment are also known. The t-th step transition probability is given by p_ij(t) = Pr[f_t = s_j | f_{t−1} = s_i], where p_ij is the transition probability (Pr) of an event i to j, in which the outcome function f takes the value s_j at time t depending only on the directly preceding outcome function f at time t − 1 having the value s_i.
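To make the notation concrete, the following minimal sketch simulates such a chain; the three facies names and all entries of the transition matrix P are invented for illustration and are not taken from the references above.

```python
import numpy as np

# Hypothetical facies states; names and probabilities are illustrative only.
states = ["channel", "floodplain", "lake"]

# P[i, j] = Pr[f_t = s_j | f_{t-1} = s_i]; each row sums to 1.
P = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.4, 0.5],
])

rng = np.random.default_rng(seed=1)

def simulate(P, start, n_steps):
    """Draw a random state sequence, each step depending only on
    the directly preceding state (the Markov premise)."""
    seq = [start]
    for _ in range(n_steps):
        seq.append(int(rng.choice(len(P), p=P[seq[-1]])))
    return seq

print([states[i] for i in simulate(P, start=0, n_steps=15)])
```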
If the set of all possible outcomes is finite, it is a finite stochastic process. Markov chains describe Markov processes if (Kemeny & Snell, 1960):
(1) There is a finite set of outcomes, and the outcome of the t-th trial depends only on the outcome of the trial directly before it, i.e. Pr[f_t = s_j | (f_{t−1} = s_i) ∧ p] = Pr[f_t = s_j | f_{t−1} = s_i], where p is any statement about the outcomes of trials earlier than t − 1. This condition is called the Markov property.
(2) The probability p_{t+1}(O) that outcome O will occur at trial t + 1 is known if we know what outcome occurred on trial t, i.e. p_ij(t) = Pr[f_t = s_j | f_{t−1} = s_i].
(3) The dependence of p_{t+1}(O) on the previous outcome is independent of t, i.e. it is the same for trial 2 as for trial 1000.
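In practice, condition (3) (time-homogeneity) is what allows a single transition matrix to be estimated from an observed succession by normalising transition counts. A minimal sketch, with an invented coded sequence:

```python
import numpy as np

def transition_matrix(sequence, n_states):
    """Maximum-likelihood estimate of p_ij = Pr[f_t = s_j | f_{t-1} = s_i]
    from transition counts, assuming time-homogeneity (condition 3)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(sequence[:-1], sequence[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no observed transitions are left as zeros.
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

seq = [0, 0, 1, 2, 1, 1, 0, 1, 2, 2, 1, 0]  # invented facies codes
print(transition_matrix(seq, n_states=3))
```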
There are several types of Markov chains (Kemeny & Snell, 1960; Roberts, 1976).

(1) Absorbing: all states (the facies in our studied system are each a 'state' in the Markov chain) move towards one final state, into which they are eventually absorbed and which they can no longer leave. The long-term behaviour of such a chain depends on its starting state. Absorbing chains consist of transient states (which the chain occupies only a few times before their occupation probabilities decline to zero) and absorbing states (which eventually 'absorb' the transient states, i.e. the chain remains exclusively within the absorbing states). They can be reduced to a canonical form, from which their fundamental matrix can be derived. The basic questions to ask of such a chain are: (1) the probability of entering an absorbing state, given that the chain did not start there; (2) the expected number of times that the chain will be in any non-absorbing state before absorption; (3) the expected number of steps until the chain enters an absorbing state (Roberts, 1976); see the numerical sketch after this list.

(2) Regular: all states can be reached from each other in the same number of steps (i.e. the chain's period d = 1), which also means that P^N has no zero entries. Regular chains have a fixed-point probability vector with all positive entries. The basic questions to ask of such a chain are: (1) the probability of being in state u_j after t steps if the chain starts in state u_i; (2) the expected number of steps before returning to a particular state.

(3) Ergodic: the underlying digraph (see below) is strongly connected, i.e. every state in the chain can be reached from any other state, but the chain's period d is other than 1. Many of the theorems developed for absorbing and regular chains are, with modification, applicable to ergodic chains. For example, every Markov chain with an ergodic set has a fixed-point probability vector with zeros in the transient states but positive entries in the ergodic set, and it also has a fundamental matrix. This allows questions similar to those asked of an absorbing chain and a regular chain.
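The three absorbing-chain questions above have standard answers via the fundamental matrix N = (I − Q)^{−1}, where Q is the transient-to-transient block of the canonical form (Kemeny & Snell, 1960; Roberts, 1976). The sketch below uses an invented matrix with states 0 and 1 transient and state 2 absorbing:

```python
import numpy as np

# Invented canonical-form chain: states 0, 1 transient; state 2 absorbing.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],   # absorbing: the chain cannot leave
])
Q = P[:2, :2]          # transient -> transient block
R = P[:2, 2:]          # transient -> absorbing block

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix

print("expected visits to each transient state:\n", N)       # question (2)
print("expected steps until absorption:", N @ np.ones(2))    # question (3)
print("absorption probabilities:\n", N @ R)                  # question (1)
```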
Embedded Markov chains (Krumbein & Dacey, 1969; Davis, 2002) are special models that do not allow self-to-self transitions, i.e. they have zero entries in the main diagonal. Markov processes can be one-dimensional or act in higher dimensions, and they can be conditioned on boundary conditions or not (Elfeki & Dekking, 2001, 2005). Furthermore, single-step or multistep processes can be evaluated. In this paper, the emphasis is on unconditioned, one-dimensional processes with single-step transitions, analogous to the expression of ecological dynamics (Horn, 1975; Logofet & Lesnaya, 2000), since the paper is interested in capturing the successions within the landscape in the simplest and most intuitive way. Quantification of the overall contribution of facies to the composition of the landscape and of the spatial relationships among facies is attempted. More complex Markov models can be applied to use the information gained by our approach to predict facies outside the known outcrop (Parks et al., 2000; Elfeki & Dekking, 2001, 2005).
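As a sketch of the embedded-chain idea (the succession below is invented), only transitions between different states are counted, which forces zeros onto the main diagonal:

```python
import numpy as np

def embedded_transition_matrix(succession, n_states):
    """Embedded-chain estimate: self-to-self transitions are skipped,
    so the main diagonal of the result is zero."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(succession[:-1], succession[1:]):
        if a != b:                     # exclude self-to-self transitions
            counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

succession = [0, 0, 1, 1, 2, 1, 0, 2, 2, 1]  # invented vertical succession
print(embedded_transition_matrix(succession, n_states=3))
```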