Head 1: Perhaps the scientific community should focus on simply accepting GP solely for its
ability to fit good curves to our hard-earned datasets. This approach can deliver promising results
but, unfortunately, offers no explanation of the underlying physical processes involved or, indeed,
of the process by which any such discoveries are arrived at. It is a directed search. Nothing more.
Nothing less. Thus, how can any related findings or discoveries be trusted, and how can one be
assured that GP is not producing an apparently right answer for potentially the wrong reasons? Indeed,
distinguishing between serious scientific investigation and coincidence-hunting activities is essential.
Head 2: Perhaps the scientific community should focus on better exploiting the transparency of
GP. Because each solution is completely transparent, other people can easily test it for logic
and physical rationality… perhaps exposing nonsense in your work! Clearly, you should have tested
it yourself; indeed, in at least some studies, input and/or output parameter sensitivity analysis is
reported, allowing readers to get to grips with the effect of input and output model parameterisation.
The flip side of the coin is that such explorations take much longer to plan and execute.
This chapter refers frequently to the transparency of GP solutions and to how this feature should be
exploited to show how evolved solutions might relate to the natural systems being modelled. A suggested
tool for this is a simple one-at-a-time response function sensitivity analysis, sketched below. This point
is considered important because GP produces equations that describe environmental, social and economic
data, which, if modelled properly, have the potential to improve our management of natural resources.
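To make the suggestion concrete, a minimal sketch of such an analysis is given below in Python. The evolved equation, the input ranges and the baseline values are entirely hypothetical stand-ins for whatever a particular GP or GEP run might return; the idea is simply to vary one input at a time across a plausible range while holding the others at a reference value, and to inspect the resulting response curves.

    import numpy as np

    # Hypothetical stand-in for an equation evolved by a GP/GEP run.
    def evolved_model(x1, x2, x3):
        return 2.1 * np.sqrt(x1) + 0.4 * x2 * x3 - 1.3 * np.log(x3 + 1.0)

    # Assumed plausible input ranges (e.g. taken from the training data) and a
    # mid-range baseline at which the non-varied inputs are held constant.
    input_ranges = {"x1": (0.0, 10.0), "x2": (0.0, 5.0), "x3": (0.1, 8.0)}
    baseline = {"x1": 5.0, "x2": 2.5, "x3": 4.0}

    def oat_response(model, varied, ranges, base, n=50):
        """Vary one input across its range, holding the others at the baseline."""
        lo, hi = ranges[varied]
        grid = np.linspace(lo, hi, n)
        args = dict(base)
        responses = []
        for value in grid:
            args[varied] = value
            responses.append(model(**args))
        return grid, np.array(responses)

    for name in input_ranges:
        grid, resp = oat_response(evolved_model, name, input_ranges, baseline)
        print(f"{name}: output ranges from {resp.min():.2f} to {resp.max():.2f}")

Plotting each response curve, rather than merely printing its range, lets a reader judge whether the direction and magnitude of the modelled response to each input is physically rational.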
There is an implicit assumption by many authors that GP serves as a tool for experimentation and
hypothesis falsification rather than as a standalone tool for pattern identification and discovery. We
need to start to envisage GP as a computer-based laboratory where in silico techniques complement
more traditional approaches such as field work and associated observation. This would mean making
methods, results and model evaluation considerably more repeatable, understandable and transparent
than they are currently. Conversely, GP is a data-driven modelling tool, which unlike mechanistic
models does not necessarily require a priori assumptions to be made about the problem or the form
that a solution should take, except, as mentioned earlier, for user-defined decisions and/or restrictions
regarding model inputs, mathematical operators and software settings. This approach carries with it
the risk that researchers bury their heads in the sand and avoid addressing questions that are difficult
to answer. A significant number of peer-reviewed papers fail to go beyond using goodness-of-fit statis-
tics and offer simple intermodel competition as justification for model acceptance and approval. If we
do not act, this could become the norm, adversely affecting the wider acceptance of GP as a tool that
can be used for knowledge discovery, which when used appropriately, could ably assist policy makers,
practical decision makers and natural resource planners. The following sections pose three simple
questions that should be considered before embarking on a GP modelling challenge. If nothing else,
they serve to manage operator as well as end-user expectations, and help prevent modellers from
taking routes through the GP maze that could lead to the production and/or winning of a poisoned
chalice, meaning something nasty arising from poor or unfinished scientific scholarship. In most cases,
it simply requires one to steer clear of undertaking fruitless and unnecessary modelling expeditions.
8.5.1 Is It Important How Things Are Modelled?
We have seen how GEP places importance on the way in which things are modelled by including
Sub-ETs in modelling operations and outputs. However, exactly what role these sub-models play is not
certain. In the absence of solid research, we need to take a step back and ask some higher-level ques-
tions. For example, which is better: a model that shows RMSE of 2.50 or one with a value of 2.00? If
your answer is that 2.00 is of course better, then perhaps you do not believe that it matters how things
are modelled. Alternatively, if you asked what the model looks like and whether it makes any physical
sense, then you probably support a different philosophical standpoint. But should such factors be so
clearly separated? Indeed, surely accuracy, sensitivity, rationality and common sense can and should
all be used to help elucidate model functionality, leading to an informed decision on model selection
(Beriro et al., 2013).
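For reference, the RMSE statistic behind such comparisons is simple to compute. The short sketch below uses invented observations and predictions for two hypothetical candidate models (the data and model labels are illustrative only); it identifies the closer fit, but says nothing about whether the winning equation is physically rational.

    import numpy as np

    def rmse(observed, predicted):
        """Root-mean-square error between observations and predictions."""
        observed = np.asarray(observed, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        return float(np.sqrt(np.mean((observed - predicted) ** 2)))

    # Invented hold-out observations and predictions from two candidate GP models.
    obs = [3.1, 4.8, 6.0, 7.2, 9.5]
    model_a = [3.4, 5.1, 5.6, 7.9, 9.0]
    model_b = [3.2, 4.9, 6.1, 7.1, 9.4]

    print("Model A RMSE:", round(rmse(obs, model_a), 2))  # higher error
    print("Model B RMSE:", round(rmse(obs, model_b), 2))  # lower error, but is the equation sensible?

On this evidence alone the lower-error model wins, yet without inspecting its structure and its sensitivity to each input, that choice rests on nothing more than a goodness-of-fit score.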