of the model; (4) model design and optimization method; and (5) computational
capabilities and complexities of the code.
2.5.5 Sensitivity of a Model
Sensitivity of a model is a major factor in deciding its reliability in real situations. It often refers to the amount of change in model output resulting from a change in model input. As this topic deals with data-based training models, the variation in modeling results is assessed by applying a certain percentage change to each input data series. In effect, this process tests the robustness of model results under uncertain inputs. For physical models in general, however, the sensitivity of a model refers to the changes in its individual parameters. The overall sensitivity would be a cumulative result of the effects of all parameters in the system model. The general hypothesis is that sensitivity increases with increasing complexity because of the presence of more parameters or links. Since the variation of sensitivity depends on many factors, however, this hypothesis is a broad generalization. To resolve uncertainty-sensitivity issues, different kinds of optimization algorithms have been developed, namely the variance-based Sobol' method [69, 70] and the GLUE procedure [14]. Sensitivity analyses are valuable tools for identifying important model parameters and for testing model conceptualization and model structure.
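As a rough illustration of the percentage-change test described above, the following Python sketch perturbs each input series by a fixed fraction and records the relative change in model output. The model function, input names, and perturbation size are hypothetical placeholders, not part of the original text, and a full variance-based analysis (e.g., Sobol') would require a dedicated sampling scheme.

import numpy as np

def one_at_a_time_sensitivity(model, inputs, perturbation=0.10):
    """Apply a fixed percentage change to each input series in turn and
    record the relative change in model output."""
    baseline = model(inputs)                               # reference output
    sensitivities = {}
    for name, series in inputs.items():
        perturbed = dict(inputs)
        perturbed[name] = series * (1.0 + perturbation)    # e.g. +10 % change
        sensitivities[name] = (model(perturbed) - baseline) / (baseline * perturbation)
    return sensitivities

# Hypothetical usage with a toy data-driven model
rng = np.random.default_rng(0)
data = {
    "rainfall": rng.gamma(2.0, 5.0, 365),        # synthetic daily rainfall
    "temperature": rng.normal(15.0, 8.0, 365),   # synthetic daily temperature
}

def toy_model(x):
    # Placeholder model: a single "runoff" statistic derived from the two inputs
    return float(np.mean(0.6 * x["rainfall"]) * np.exp(-0.01 * np.mean(x["temperature"])))

print(one_at_a_time_sensitivity(toy_model, data))

Inputs with larger (absolute) values in the resulting dictionary would be flagged as the more influential ones, which is the information a modeler needs when testing model conceptualization and structure.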
2.5.6 Predictive Error of a Model
Prediction error is a generalized indicator of the performance of a model. The true predictive error is the sum of the training error and the training optimism. It is often taken as a measure of output quality and of how model performance should be interpreted and assessed. Training optimism is a measure of how much worse a model can perform on unseen data than on the training data. The greater the optimism, the smaller the training error appears relative to the true error, and the worse the training error serves as an approximation of the true error. The hypothesis is that highly complex models simulate real systems closely and therefore give the least prediction error, whereas the inadequacies of simple models in most cases arise from their simplifying assumptions.
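To make the decomposition concrete, the following Python sketch (using a hypothetical synthetic dataset and scikit-learn, neither of which appears in the original text) computes the training error, uses held-out test error as a proxy for the true error, and reports their difference as the training optimism.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Hypothetical data: a noisy nonlinear signal fitted with a deliberately simple model
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1)

model = LinearRegression().fit(X_train, y_train)

training_error = mean_squared_error(y_train, model.predict(X_train))
test_error = mean_squared_error(y_test, model.predict(X_test))   # proxy for the true error
optimism = test_error - training_error                            # training optimism

print(f"training error: {training_error:.3f}")
print(f"estimated true (test) error: {test_error:.3f}")
print(f"training optimism: {optimism:.3f}")

Here the simplifying assumption of linearity keeps the training error from approximating the true error well, which is the point the section makes about simple models.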
2.5.7 Identifiability of a Model
Identifiability is a measure of how well the system is defined by the model, which is not directly assessable. This quantity tells the modeler whether the model 'over-defines' the system, which normally happens when the degree of freedom of the model is