Box 6.

$$
\pi(\theta_i \mid \varphi) \cong \frac{1}{n} \sum_{k=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left( -\frac{\left(\theta_i - \theta_i^k\right)^2}{2\sigma_i^2} \right); \qquad \sigma_i = 1.06\,\sigma_{s,i}\, n^{-1/5} \quad (30)
$$
discussed next. This approach exploits the information in samples of θ from the joint distribution π(θ|φ). Such samples may be obtained by any appropriate stochastic sampling algorithm, for example the accept-reject method (Robert & Casella, 2004). Furthermore, this task may be integrated within the stochastic analysis: each of the samples of θ used for the estimation in Equation 23 can be selected as a candidate sample θ_c in the context of the accept-reject algorithm. The required samples from π(θ|φ) are thus obtained at small additional computational effort over the risk assessment task, since they require no new simulations of the bridge model response.
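A minimal Python sketch of this accept-reject step may clarify the idea. It assumes the target π(θ|φ) is proportional to a computable, non-negative weighting g(θ) times the prior density p(θ); the names sample_prior, g, and M below are illustrative placeholders, not quantities defined in the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def accept_reject(sample_prior, g, M, n_samples):
    """Accept-reject sampler for a target density proportional to
    g(theta) * p(theta), with candidates drawn from the prior p.

    sample_prior -- draws one candidate theta_c from p(theta)
    g            -- non-negative weighting function (hypothetical
                    stand-in for the integrand of Equation 23)
    M            -- constant satisfying M >= max g(theta)
    """
    samples = []
    while len(samples) < n_samples:
        theta_c = sample_prior()             # candidate sample
        if rng.uniform() * M < g(theta_c):   # accept with probability g/M
            samples.append(theta_c)
    return np.array(samples)

# Hypothetical usage: standard-normal prior, Gaussian weighting (max g = 1)
# samples = accept_reject(lambda: rng.normal(), lambda t: np.exp(-t**2), 1.0, 1000)
```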
Projecting the samples from π(θ|φ) onto the space of each model parameter then provides samples from the marginal distributions π(θ_i|φ) for each parameter separately. Thus, using the same sample set, this approach simultaneously provides information for all model parameters. For scalar quantities, as in the case of interest here, the relative entropy may be efficiently calculated (Beirlant, Dudewicz, Gyorfi, & Van der Meulen, 1997; Mokkadem, 1989) by establishing an analytical approximation for π(θ_i|φ), based on the available samples, through kernel density estimation. This estimate will not necessarily have high accuracy, but it can still provide an adequate approximation for computing the information entropy integral. A Gaussian kernel density estimator may be used for this purpose (Martinez & Martinez, 2002). Using the n available samples for θ_i, with θ_i^k denoting the k-th sample and σ_{s,i} the standard deviation of these samples, the approximation for π(θ_i|φ) is given in Box 6.
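For concreteness, a sketch of the estimator in Equation 30 in plain NumPy follows; it assumes a one-dimensional array of samples for a single scalar parameter θ_i.

```python
import numpy as np

def kde_marginal(theta_samples):
    """Gaussian kernel density estimate of Equation 30 for one scalar
    parameter, with bandwidth sigma_i = 1.06 * sigma_si * n**(-1/5)."""
    theta_samples = np.asarray(theta_samples, dtype=float)
    n = theta_samples.size
    sigma = 1.06 * theta_samples.std(ddof=1) * n ** (-1 / 5)  # sample std

    def density(theta):
        theta = np.atleast_1d(theta)[:, None]                 # evaluation points
        z = (theta - theta_samples[None, :]) / sigma          # standardized offsets
        return np.exp(-0.5 * z**2).sum(axis=1) / (n * np.sqrt(2 * np.pi) * sigma)

    return density
```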
To establish better consistency in the relative information entropy calculation, p(θ_i) may also be approximated by Equation 30 based on samples, even when an analytical expression is available for it. This way, any error introduced by the kernel density estimation is similar for both of the densities compared. The approximation for the relative information entropy is then (Beirlant et al., 1997)
$$
D\!\left( \pi(\theta_i \mid \varphi) \,\middle\|\, p(\theta_i) \right) = \int_{b_{i,l}}^{b_{i,u}} \pi(\theta_i \mid \varphi) \log \frac{\pi(\theta_i \mid \varphi)}{p(\theta_i)} \, d\theta_i \quad (31)
$$
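A sketch of Equation 31 using the trapezoidal rule is shown below; the two densities are assumed to be callables such as those returned by the kernel estimator above, and the small eps guard against log(0) is an implementation detail rather than part of the formulation.

```python
import numpy as np

def relative_entropy(pi_cond, p_prior, b_l, b_u, n_grid=500):
    """Equation 31: D(pi(theta_i|phi) || p(theta_i)) over [b_l, b_u],
    evaluated with the trapezoidal rule on a uniform grid."""
    theta = np.linspace(b_l, b_u, n_grid)
    pi_vals = pi_cond(theta)
    p_vals = p_prior(theta)
    eps = 1e-300  # guards against log(0) where a density vanishes
    integrand = pi_vals * np.log((pi_vals + eps) / (p_vals + eps))
    # Trapezoidal rule: average adjacent ordinates times the grid spacing
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta)))
```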
where the last scalar integral can be numerically evaluated, for example using the trapezoidal rule, and [b_{i,l}, b_{i,u}] is the region over which samples of π(θ_i|φ) and p(θ_i) are available. This approach ultimately leads to an efficient sampling-based methodology for calculating the relative information entropy of the different parameters, which can be performed concurrently with the risk assessment, exploiting the readily available system model evaluations to minimize the computational burden. Comparing the values of this entropy across the various model parameters then directly identifies the importance of each of them in affecting risk: parameters with a higher value of the relative entropy have greater importance. Furthermore, direct comparison of samples from the distributions π(θ_i|φ) and p(θ_i) could provide additional insight into which specific values of each parameter contribute more to the risk (Taflanidis & Beck, 2009c).
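Tying these pieces together, a hypothetical driver for ranking parameters by relative entropy might look as follows; it builds on the two sketches above, and the dictionaries cond_samples and prior_samples (mapping parameter names to arrays of marginal samples) are assumed inputs.

```python
# Hypothetical driver: rank parameters by relative information entropy.
scores = {}
for name in cond_samples:                        # one scalar parameter at a time
    pi_cond = kde_marginal(cond_samples[name])   # Equation 30 for pi(theta_i|phi)
    p_prior = kde_marginal(prior_samples[name])  # Equation 30 for p(theta_i)
    b_l = max(cond_samples[name].min(), prior_samples[name].min())
    b_u = min(cond_samples[name].max(), prior_samples[name].max())
    scores[name] = relative_entropy(pi_cond, p_prior, b_l, b_u)

# Higher relative entropy indicates greater importance in affecting risk.
ranking = sorted(scores, key=scores.get, reverse=True)
```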