and the resulting Markov chain is highly correlated. Both of these cases result in so-called poorly mixing chains. Well-mixing chains have low correlation and hence resemble white noise (case σ_q = 2° in Figure 5.9). The sample statistics obtained from the latter case after discarding the initial 50 samples are μ̂″_μ = 26.09° and σ̂″_μ = 1.44°. These values are close to the true statistics μ″_μ = 26.08° and σ″_μ = 1.3°. It is noted that if the Markov chain is started close to the center of the target distribution, then burn-in is typically not required.
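The effect of the proposal scale on mixing can be illustrated with a short sketch. This is a generic random-walk Metropolis example on an assumed Normal target with mean 26 and standard deviation 1.4 (illustrative values, not the chapter's slope example); the function names and the lag-1 autocorrelation diagnostic are the author's own choices here, not notation from the text.

```python
import numpy as np

def metropolis(target_logpdf, x0, sigma_q, n, rng):
    """Random-walk Metropolis sampler with a normal proposal of scale sigma_q."""
    x = np.empty(n)
    x[0] = x0
    lp = target_logpdf(x0)
    for i in range(1, n):
        cand = x[i - 1] + sigma_q * rng.standard_normal()
        lp_cand = target_logpdf(cand)
        if np.log(rng.uniform()) < lp_cand - lp:   # accept with prob min(1, ratio)
            x[i], lp = cand, lp_cand
        else:
            x[i] = x[i - 1]
    return x

def lag1_autocorr(x):
    """Lag-1 sample autocorrelation: a simple quantitative mixing diagnostic."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(0)
logpdf = lambda v: -0.5 * ((v - 26.0) / 1.4) ** 2   # assumed Normal(26, 1.4^2) target
burn = 50                                            # discard initial samples, as in the text
small = metropolis(logpdf, 26.0, 0.1, 5000, rng)[burn:]  # tiny steps: poorly mixing chain
good = metropolis(logpdf, 26.0, 2.0, 5000, rng)[burn:]   # sigma_q = 2: well-mixing chain
print(lag1_autocorr(small), lag1_autocorr(good))
```

The poorly mixing chain shows lag-1 autocorrelation near 1, while the well-mixing chain decorrelates much faster and its samples resemble white noise.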
5.4.2 Sequential Monte Carlo
During the last decade, a number of methods have been developed in the statistical com-
munity for exploring posterior distributions that can be classified under the umbrella term
sequential Monte Carlo (Del Moral et al. 2006). These methods include annealed impor-
tance sampling (Neal 2001) and sequential particle filtering (Chopin 2002). In the engineer-
ing community, this approach is known as transitional MCMC (Ching and Chen 2007).
The basic idea behind all these approaches is to gradually translate samples from the prior
distribution to samples from the posterior distribution through a sequential reweighting
operation. Reweighting is based on importance sampling from a sequence of distributions
{f_i(x), i = 1, …, M} constructed such that they gradually approach the target posterior distribution. The intermediate distributions are chosen as

f_i(x) ∝ [L(x)]^{q_i} f′_X(x),   (5.42)
wherein 0 = q_0 < q_1 < ⋯ < q_M = 1. For i = 0, f_0(x) equals the prior distribution, and for i = M, f_M(x) is proportional to the posterior distribution. Initially, sequential Monte Carlo generates samples from the prior distribution f_0(x) = f′(x). At each subsequent step i, available samples from the distribution f_{i−1}(x) are transformed into weighted samples from the distribution f_i(x) by application of importance sampling with importance-sampling function equal to f_{i−1}(x). To obtain unweighted samples, a resampling scheme must be applied (Doucet et al. 2001). The derived unweighted samples are then moved by application of MCMC with invariant distribution f_i(x), to decrease the sample correlation. The sequential Monte Carlo algorithm for generating K samples from the posterior distribution f″_X(x) can be summarized as follows.
Sequential Monte Carlo algorithm
1. Generate K samples x_0^(k), k = 1, …, K, from the prior distribution f′(x).
2. Set i = 1.
3. Compute the weights w_i^(k), k = 1, …, K, by applying w_i^(k) = [L(x_{i−1}^(k))]^(q_i − q_{i−1}).
4. Resample: Generate samples x_i^(k), k = 1, …, K, by sampling from the discrete distribution {(x_{i−1}^(k), w_i^(k)), k = 1, …, K}.
5. Move: From each sample x_i^(k), k = 1, …, K, perform an MCMC transition with invariant distribution f_i(x) to obtain updated samples x_i^(k), k = 1, …, K.
6. Stop if i = M; otherwise, set i = i + 1 and go to step 3.
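The six steps above can be sketched in a few lines. The setting here is an assumed scalar conjugate example (prior N(26, 2²), Gaussian likelihood with standard deviation 1 around an assumed observation d = 27.5), chosen only because the exact posterior N(27.2, 0.894²) is then available as a check; the fixed tempering schedule q and all variable names are illustrative assumptions, not the chapter's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed conjugate toy model: prior N(26, 2^2), observation d = 27.5 with
# Gaussian likelihood sd 1, so the exact posterior is N(27.2, 0.894^2).
m0, s0, d, s = 26.0, 2.0, 27.5, 1.0
log_L = lambda x: -0.5 * ((d - x) / s) ** 2
log_prior = lambda x: -0.5 * ((x - m0) / s0) ** 2

K = 20000
q = [0.0, 0.2, 0.5, 1.0]                 # fixed tempering schedule (assumed, not adaptive)
x = m0 + s0 * rng.standard_normal(K)     # step 1: K samples from the prior

for i in range(1, len(q)):
    # Step 3: importance weights w_i^(k) = L(x_{i-1}^(k))^(q_i - q_{i-1}).
    logw = (q[i] - q[i - 1]) * log_L(x)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Step 4: resample from the discrete distribution {(x^(k), w^(k))}.
    x = x[rng.choice(K, size=K, p=w)]
    # Step 5: move each sample with one Metropolis transition that leaves
    # f_i ∝ L^{q_i} f' invariant, to reduce the sample correlation.
    log_fi = lambda v: q[i] * log_L(v) + log_prior(v)
    cand = x + rng.standard_normal(K)    # random-walk proposal, unit scale (assumed)
    acc = np.log(rng.uniform(size=K)) < log_fi(cand) - log_fi(x)
    x = np.where(acc, cand, x)

print(x.mean(), x.std())   # close to the exact posterior values 27.2 and 0.894
```

With the schedule reaching q_M = 1, the final particles are (up to Monte Carlo error) samples from the posterior, which the conjugate closed form confirms.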
The parameters {q_i, i = 1, …, M} should be chosen such that each pair of consecutive distributions does not vary considerably. One approach is to compute the q_i's adaptively such that the coefficient of variation (COV) of the weights {w_i^(k), k = 1, …, K} equals a prescribed value, for example, 100% (Ching and Chen 2007). The performance of the algorithm is sensitive to the choice of the proposal distribution of the MCMC transition step. Typically, statistics of the weighted samples can be used to obtain a proposal distribution that approximates the target distribution (Chopin 2002; Ching and Chen 2007).
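The adaptive rule can be sketched as a one-dimensional root search: since the weight COV grows monotonically with the exponent increment, the next q_i can be found by bisection. The 100% target follows Ching and Chen (2007), but the helper name next_q, the assumed N(26, 2²) prior samples, and the Gaussian log-likelihood values are illustrative assumptions.

```python
import numpy as np

def next_q(log_L_vals, q_prev, target_cov=1.0):
    """Pick the next tempering exponent q so that the COV of the weights
    L(x)^(q - q_prev) equals target_cov (1.0 = 100%), via bisection."""
    def weight_cov(q):
        logw = (q - q_prev) * log_L_vals
        w = np.exp(logw - logw.max())        # stable rescaling; COV is scale-free
        return w.std() / w.mean()
    if weight_cov(1.0) <= target_cov:        # can jump straight to the posterior
        return 1.0
    lo, hi = q_prev, 1.0                     # COV is monotone in q on [q_prev, 1]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if weight_cov(mid) < target_cov:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(2)
x = 26.0 + 2.0 * rng.standard_normal(5000)        # assumed prior samples, N(26, 2^2)
log_L_vals = -0.5 * ((27.5 - x) / 0.5) ** 2       # assumed Gaussian log-likelihood
q1 = next_q(log_L_vals, 0.0)
print(q1)
```

Because the first increment already produces weights with 100% COV, q_1 lands strictly between 0 and 1; repeating the search from each q_i yields the full adaptive schedule.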
 