This rank corresponds to the number of input variables a polynomial Ψ_α depends on. For […], whereas Ψ_4, Ψ_7, and Ψ_8 are of rank 2. Blatman and Sudret (2010) propose to fix a priori the maximum rank r_max and define truncation sets as follows:
$$\mathcal{A}^{M,p,r_{\max}} = \{\alpha \in \mathbb{N}^M : \|\alpha\|_1 \le p,\; \|\alpha\|_0 \le r_{\max}\}. \qquad (6.46)$$

Here $\|\alpha\|_0$ denotes the number of nonzero components of α, that is, its rank.
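As an illustration (not from the original text; the function name is our own), the rank-truncated set of Equation 6.46 can be enumerated by brute force for small M and p:

```python
# Hypothetical sketch (not the authors' code): brute-force enumeration
# of the rank-truncated set A^{M, p, r_max} of Equation 6.46.
from itertools import product

def rank_truncated_set(M, p, r_max):
    """Multi-indices alpha in N^M with total degree ||alpha||_1 <= p
    and rank ||alpha||_0 <= r_max (number of nonzero components)."""
    return [alpha for alpha in product(range(p + 1), repeat=M)
            if sum(alpha) <= p and sum(a > 0 for a in alpha) <= r_max]

# Example: M = 3, p = 3, keeping main effects and 2-way interactions only.
# The full degree-3 set has 20 terms; r_max = 2 drops only (1, 1, 1).
A = rank_truncated_set(M=3, p=3, r_max=2)
print(len(A))  # -> 19
```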
Another approach that has proven to be more relevant in the context of adaptive algorithms has been introduced in Blatman and Sudret (2011a) and is referred to as the hyperbolic truncation scheme.
Let us define for any multi-index α and 0 < q ≤ 1 the q-norm:

$$\|\alpha\|_q \stackrel{\mathrm{def}}{=} \left(\sum_{i=1}^{M} \alpha_i^{\,q}\right)^{1/q}. \qquad (6.47)$$
The hyperbolic truncation scheme corresponds to selecting all multi-indices of q-norm less than or equal to p:

$$\mathcal{A}^{M,p,q} = \{\alpha \in \mathbb{N}^M : \|\alpha\|_q \le p\}. \qquad (6.48)$$
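A minimal sketch (helper names are our own, not from the text) of the q-norm of Equation 6.47 and the hyperbolic set of Equation 6.48:

```python
# Hypothetical sketch: q-norm (Eq. 6.47) and hyperbolic truncation
# set A^{M, p, q} (Eq. 6.48).  A small tolerance guards against
# floating-point round-off on the axes (e.g. ||(p, 0)||_q == p).
from itertools import product

def q_norm(alpha, q):
    """||alpha||_q = (sum_i alpha_i^q)^(1/q) for 0 < q <= 1."""
    return sum(a ** q for a in alpha) ** (1.0 / q)

def hyperbolic_set(M, p, q, tol=1e-9):
    return [alpha for alpha in product(range(p + 1), repeat=M)
            if q_norm(alpha, q) <= p + tol]

# For M = 2, p = 5: q = 1 recovers the standard set (21 terms), while
# q = 0.5 keeps the 11 axis terms plus the single interaction (1, 1).
standard = hyperbolic_set(2, 5, 1.0)
hyperbolic = hyperbolic_set(2, 5, 0.5)
```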
As shown in Figure 6.3, for a two-dimensional problem (M = 2), such a truncation set contains all univariate polynomials up to degree p (since tuples of the form (0, …, α_i ≠ 0, …, 0) belong to it as long as α_i ≤ p). The case q = 1 corresponds to the standard truncation scheme; for q < 1, the retained interaction terms (corresponding to the blue points that are not on the axes in Figure 6.3) are less numerous than in $\mathcal{A}^{M,p}$. The gain in the basis size is all the more important when q is small and M is
large. In the limit when q → 0⁺, only univariate polynomials (rank 1, no interaction terms) are retained, leading to an additive surrogate model, that is, a sum of univariate functions. As shown in Blatman and Sudret (2011a), the size of the truncation set in Equation 6.48 may be smaller by two to three orders of magnitude than that of the standard truncation scheme for large M and p ≥ 5.
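This gain can be checked numerically for small cases (an illustration of ours, using the binomial-coefficient formula for the standard-set size):

```python
# Hypothetical illustration: card(A^{M,p}) = C(M+p, p) versus the
# hyperbolic card(A^{M,p,q}) for p = 5, q = 0.5 and growing M.
from itertools import product
from math import comb

def hyperbolic_size(M, p, q, tol=1e-9):
    return sum(1 for alpha in product(range(p + 1), repeat=M)
               if sum(a ** q for a in alpha) ** (1.0 / q) <= p + tol)

for M in (2, 4, 6):
    full = comb(M + 5, 5)              # standard truncation A^{M, p}
    hyp = hyperbolic_size(M, 5, 0.5)   # hyperbolic truncation, q = 0.5
    print(M, full, hyp)
```

Even in this toy loop the ratio grows quickly with M (462 versus 46 terms at M = 6); the two-to-three-orders-of-magnitude gain quoted above refers to much larger M.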
6.3.7 Adaptive algorithms
The use of hyperbolic truncation schemes $\mathcal{A}^{M,p,q}$ as described above allows one to decrease a priori the number of coefficients to be computed in a truncated series expansion. This automatically reduces the computational cost, since the minimal size of the ED $\mathcal{X}$ shall be equal to $k \cdot \operatorname{card} \mathcal{A}^{M,p,q}$, where k = 2–3. However, this may remain too costly when high-dimensional, highly nonlinear problems are to be addressed.
Moreover, it is often observed a posteriori that the nonzero coefficients in the expansion form a sparse subset of $\mathcal{A}^{M,p,q}$. Hence the idea of building the suitable sparse basis on the fly, instead of computing terms of the expansion that eventually turn out to be negligible.
For this purpose, adaptive algorithms have been introduced in Blatman and Sudret (2008) and further improved in Blatman and Sudret (2010, 2011a). In the latter publication, the question of finding a suitable truncated basis is interpreted as a variable selection problem (Figure 6.4). The so-called least-angle regression (LAR) algorithm (Efron et al., 2004) has proven to be remarkably efficient in this respect (see also Hastie et al. (2007)).
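LAR itself is involved; as a rough stand-in (this is a greedy orthogonal-matching-pursuit-style selection, not the LAR algorithm of Efron et al. and not the authors' implementation), the following sketch conveys the idea of iteratively selecting the regressors most correlated with the current residual:

```python
# Hedged sketch: greedy selection of a sparse set of basis terms,
# in the spirit of (but simpler than) least-angle regression.
import numpy as np

def greedy_select(Psi, y, n_terms):
    """Pick n_terms columns of the design matrix Psi by repeatedly
    choosing the column most correlated with the current residual,
    then refitting by least squares on the active set."""
    active, residual = [], y.copy()
    coef = np.array([])
    for _ in range(n_terms):
        corr = np.abs(Psi.T @ residual)
        corr[active] = -np.inf            # skip already-selected columns
        active.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(Psi[:, active], y, rcond=None)
        residual = y - Psi[:, active] @ coef
    return active, coef

# Toy usage: the response depends on columns 0 and 2 only.
rng = np.random.default_rng(0)
Psi = rng.standard_normal((100, 6))
y = 3.0 * Psi[:, 0] - 2.0 * Psi[:, 2]
active, coef = greedy_select(Psi, y, n_terms=2)
```

The same variable-selection step could also be carried out with an off-the-shelf LAR solver; the point here is only the sparse-selection principle.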