The inequalities in the definition of the set $U_{ad}$ of admissible parameters are to be understood component-wise. The functional $J$ may additionally include a regularization term for the parameters. Numerical experiments showed, however, that such a term is not necessary for a well-performing optimization process.
Additional constraints on the state variable might be necessary, e.g., to ensure non-negativity of the temperature or of the concentrations of biogeochemical quantities. In our example model, however, non-negativity of the state variable $y$ can be ensured by using appropriate parameter bounds $b_l$ and $b_u$. This was already observed and used in [16].
4 Surrogate-Based Optimization
For many nonlinear optimization problems, the high computational cost of evaluating the objective function and its sensitivity, and, in some cases, the lack of sensitivity information, are major bottlenecks. Decreasing the computational cost of the optimization process is especially important when handling complex three-dimensional models.
Surrogate-based optimization [1,6,9,15] is a methodology that addresses these issues by replacing the original high-fidelity or fine model $y$ by a surrogate, in the following denoted by $s$.
Surrogates can be created by approximating sampled fine model data (functional surrogates). Popular techniques include polynomial regression, kriging, artificial neural networks and support vector regression [15,18,19]. Another possibility, exploited in this work, is to construct the surrogate model through appropriate correction/alignment of a low-fidelity or coarse model (physics-based surrogates) [20].
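As a toy illustration of the functional-surrogate idea (not the physics-based approach pursued in this work), a low-order polynomial can be fitted by least squares to a handful of sampled fine model values; the fine model below is hypothetical and stands in for an expensive simulation:

```python
import numpy as np

# Hypothetical fine model, assumed expensive to evaluate in practice.
def fine_model(u):
    return np.sin(u) + 0.1 * u ** 2

# Sample the fine model at a few points of the admissible interval.
u_samples = np.linspace(0.0, 4.0, 6)
y_samples = fine_model(u_samples)

# Functional surrogate: cubic polynomial fitted by least squares.
coeffs = np.polyfit(u_samples, y_samples, deg=3)
surrogate = np.poly1d(coeffs)

# The surrogate is cheap to evaluate anywhere in the sampled range.
u_test = 2.5
err = abs(surrogate(u_test) - fine_model(u_test))
```

Kriging or neural-network surrogates would replace the polynomial fit here, but the workflow (sample the fine model, fit, then evaluate the cheap approximation) is the same.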
Physics-based surrogates inherit the physical characteristics of the original fine model, so that only a small amount of fine model data is necessary to ensure their good alignment with the fine model. Moreover, the generalization capability of physics-based models is typically much better than that of functional ones. As a result, SBO schemes working with this type of surrogate normally require only a small number of fine model evaluations to yield a satisfactory solution. On the other hand, their transfer to other applications is less straightforward, since the underlying coarse model and the chosen correction approach are rather problem-specific. The specific correction technique exploited in this work is recalled in Section 4.1 (see also [14]).
The surrogate model is updated at each iteration $k$ of the optimization algorithm, typically using available fine model data from the current and/or previous iterates. The next iterate, $u_{k+1}$, is obtained by optimizing the surrogate $s_k$, a computationally cheap and yet reasonably accurate representation of $y$, i.e.,

$$u_{k+1} = \arg\min_{u \in U_{ad}} J(s_k(u)), \qquad (7)$$
where, again, $U_{ad}$ denotes the set of admissible parameters. The updated surrogate $s_{k+1}$ is determined by re-aligning the coarse model at $u_{k+1}$ and is optimized again as in (7).
The process of aligning the coarse model to obtain the surrogate and subsequent opti-
mization of this surrogate is repeated until a user-defined termination condition is satis-
fied, which can be based on certain convergence criteria, assumed level of cost function
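The iterative align-then-optimize scheme described above can be sketched in a one-dimensional toy setting. All model functions below are hypothetical stand-ins (the fine model would be an expensive simulation), and a simple additive response correction is used as the alignment step; the correction technique actually used in this work is the one recalled in Section 4.1:

```python
# Toy 1-D sketch of surrogate-based optimization (SBO).
def y(u):
    """Hypothetical fine model, assumed expensive to evaluate."""
    return (u - 3.0) ** 2 + 1.0

def c(u):
    """Coarse model: cheap, right shape, but biased."""
    return 0.8 * (u - 3.0) ** 2

def align(u_k):
    """Build surrogate s_k by additive response correction at u_k,
    enforcing zeroth-order consistency s_k(u_k) = y(u_k)."""
    offset = y(u_k) - c(u_k)   # one fine model evaluation per iteration
    return lambda u: c(u) + offset

def argmin_on_grid(f, lo, hi, n=2001):
    """Cheap minimizer over the admissible set U_ad = [lo, hi]."""
    grid = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(grid, key=f)

def sbo(u0, lo, hi, max_iter=10, tol=1e-8):
    u_k = u0
    for _ in range(max_iter):
        s_k = align(u_k)                      # re-align coarse model at u_k
        u_next = argmin_on_grid(s_k, lo, hi)  # u_{k+1} = argmin J(s_k(u)), cf. (7)
        converged = abs(u_next - u_k) < tol   # user-defined termination condition
        u_k = u_next
        if converged:
            break
    return u_k

u_opt = sbo(u0=0.0, lo=0.0, hi=6.0)
```

Each iteration spends only one fine model evaluation (inside `align`), while all optimization work is done on the cheap surrogate, which is the point of the SBO methodology.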
 