A finite difference scheme used to compute the Jacobi matrix of f(x) requires Nexp = O(n)
simulations, e.g. 2n for the central difference scheme plus one experiment at x_0,
giving Nexp = 2n + 1. The algorithm has computational complexity O(nm) and can be
implemented efficiently by reading data from Nexp simultaneously open data streams
and writing CL to a single output data stream. In this way the memory requirements
are minimized and parallelization is straightforward.
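The 2n+1-experiment scheme above can be sketched as follows (the helper name and the test function are ours, for illustration only; in the intended setting each call to f would be a full numerical simulation):

```python
import numpy as np

def jacobian_central(f, x0, h=1e-5):
    """Estimate the Jacobi matrix of f: R^n -> R^m by central differences.

    Uses 2n evaluations of f (plus one at x0 if f(x0) itself is needed),
    matching Nexp = 2n + 1 from the text. Column j is
    (f(x0 + h e_j) - f(x0 - h e_j)) / (2 h).
    """
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    cols = []
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        cols.append((f(x0 + e) - f(x0 - e)) / (2.0 * h))
    return np.stack(cols, axis=-1)  # shape (m, n)

# Illustrative f(x) = (x0*x1, x0 + x1**2); its exact Jacobian at (1, 2)
# is [[2, 1], [1, 4]].
f = lambda x: np.array([x[0] * x[1], x[0] + x[1] ** 2])
J = jacobian_central(f, [1.0, 2.0])
```

Since each column depends on only two of the Nexp simulations, the columns can be filled from the open data streams independently, which is what makes the parallelization straightforward.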
3.2 Second Order Reliability Method (SORM)
SORM is applicable for slightly non-linear mappings f(x), which can be approximated
by quadratic functions. The distributions ρ(x) are normal or can be cast to normal
ones by a suitable transformation of parameters. For quadratic approximations CL can
be computed explicitly [6] using the main curvatures in the space of normalized variables
z_i = (x − x_0)_i / σ_xi, i.e. the eigenvalues of the Hesse matrix H^i_jk = ∂²f_i / ∂z_j ∂z_k.
These eigenvalues can also be used to estimate the non-linearity of the mapping f(z),
by maximizing the 1st and 2nd Taylor terms over a ball of radius R:

max_{|z|≤R} |Jz| = |J| R,   max_{|z|≤R} |z^T H z / 2| = Hmax R² / 2,   (15)

so that the linear term prevails over the quadratic one in this ball iff |J| >> Hmax R / 2.
Here

J_ij = ∂f_i / ∂z_j,   |J| = (Σ_j J_ij²)^1/2,   (16)

and Hmax is the maximal absolute eigenvalue of H. Both this criterion and the estimation
of the main curvatures require the full Hesse matrix, i.e. Nexp = O(n²) simulations.
In practice the usability of SORM is limited, because strongly non-linear functions would
involve higher order terms and because the distributions of simulation parameters can
deviate strongly from normal ones.
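As a rough sketch of the criterion above (the function names are ours, and the illustrative f stands in for what would be an expensive simulation), the full Hesse matrix of a scalar component can be estimated with O(n²) central differences and its largest absolute eigenvalue compared against |J|:

```python
import numpy as np

def hessian_central(f, x0, h=1e-4):
    """Full Hesse matrix of a scalar f by central differences, O(n^2) evals."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    H = np.empty((n, n))
    for j in range(n):
        for k in range(n):
            ej = np.zeros(n); ej[j] = h
            ek = np.zeros(n); ek[k] = h
            H[j, k] = (f(x0 + ej + ek) - f(x0 + ej - ek)
                       - f(x0 - ej + ek) + f(x0 - ej - ek)) / (4 * h * h)
    return H

def linearity_ratio(J_row, H, R):
    """Ratio |J| / (Hmax * R / 2); values >> 1 mean the linear term
    dominates on the ball |z| <= R, per criterion (15)-(16)."""
    Jnorm = np.linalg.norm(J_row)
    Hmax = np.max(np.abs(np.linalg.eigvalsh(H)))  # H is symmetric
    return Jnorm / (Hmax * R / 2.0)

# Illustrative f(z) = 3 z0 + z1 + 0.1 z0^2: nearly linear on |z| <= 1,
# with J = (3, 1) at z = 0 and a single non-zero curvature 0.2.
f = lambda z: 3 * z[0] + z[1] + 0.1 * z[0] ** 2
H = hessian_central(f, np.zeros(2))
ratio = linearity_ratio(np.array([3.0, 1.0]), H, R=1.0)
```

Here `ratio` is about 31, so the linear term clearly dominates; a value near or below 1 would indicate that a quadratic (or higher-order) treatment is required.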
3.3 Confidence Limits Determination with Monte Carlo Method (CL-MC)
In the case of a non-linear mapping f(x) and an arbitrary distribution ρ(x), the general
Monte Carlo method is applicable. The method is based on estimation of the probability

P_N(y < CL) = num. of (y_n < CL) / N   (17)

for a finite sample {y_1, ..., y_N}. By the law of large numbers [7], F_N = P_N(y < CL) is
a consistent unbiased estimator for F = P(y < CL), i.e. F_N → F with probability 1 when
N → ∞, and <F_N> = F for all finite N. By the central limit theorem [7], the error of this
estimation, err_N = F_N − F, at large N is distributed normally with zero mean and standard
deviation σ ~ (F(1−F)/N)^1/2. Algorithmically the method consists of three phases:
(CL1) generation of N random points in parameter space according to the user-specified
distribution ρ(x),
(CL2) numerical simulations for the given parameter values,
(CL3) determination of confidence limits by one-pass reading of the simulation results,
sorting the m samples {y_1, ..., y_N} and selecting the k-th item in every sample, with
k = [(N−1)F + 1], as a representative for CL.
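The three phases can be sketched as follows, for one output quantity (the distribution ρ(x) and the model y = f(x) below are illustrative stand-ins; in the intended setting phase CL2 runs the actual simulations):

```python
import numpy as np

rng = np.random.default_rng(0)

# (CL1) Sample N points from the user-specified distribution rho(x);
# here we assume a standard normal in 2 dimensions for illustration.
N, F = 100_000, 0.95
x = rng.standard_normal((N, 2))

# (CL2) "Simulate": a cheap stand-in model y = f(x); the real f would
# be an expensive numerical simulation run per sampled point.
y = x[:, 0] ** 2 + 0.5 * x[:, 1]

# (CL3) Sort the sample and take the k-th item, k = [(N-1)F + 1]
# (1-based), as the representative for the confidence limit CL.
k = int((N - 1) * F + 1)
CL = np.sort(y)[k - 1]            # convert 1-based k to 0-based index

# Consistency check: the empirical probability P_N(y < CL) should be
# close to F, with standard error ~ (F(1-F)/N)**0.5 per the CLT estimate.
P_N = np.mean(y < CL)
stderr = (F * (1 - F) / N) ** 0.5
```

For m output quantities the same selection is applied to each of the m sorted samples, so phase CL3 still needs only one pass over the simulation results followed by m sorts.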