will not actually produce a prediction, which might be important in a forecasting situation. If off-line
simulation is treated as a learning process, however, the identification of such periods as a source of
new types of information on the catchment response might be valuable. A good introduction to fuzzy
methods in rainfall-runoff modelling is provided by Bardossy (2005).
4.6.5 Flexible Explicit Soil Moisture Accounting Models
In Section 1.3, in discussing the modelling process, it was suggested that all hydrological models are
conceptual approximations to the complexities of the perceptual model of how catchments work. The
phrase “conceptual model” is also used in the literature to indicate a model based on a collection of
conceptual storage elements. This type of model dates back to the very first hydrological models imple-
mented as programs on digital computers, such as the Stanford Watershed Model (Crawford and Linsley,
1966). The Stanford Watershed Model established the practice of giving the various storage elements
process-related names (overland flow routing store, soil water store, groundwater store, etc.) that led
O'Connell (1991) to call these models “explicit soil moisture accounting” (ESMA) models. While I was
seeking opinions about improvements to this book for the second edition, one or two people suggested
that there should be a chapter on this type of conceptual model, since they are still very widely used in
hydrological practice. I have largely resisted this suggestion, having suggested in the Preface to the first
edition that they represent the past. (However, the careful reader will find this type of conceptual storage
element structure underlying some of the models that are discussed elsewhere in this book, such as the
Xinanjiang/Arno/VIC model of Box 2.2 and in many of the semi-distributed models in Chapter 6.)
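To make the storage-element idea concrete, the sketch below chains two conceptual stores: a soil moisture store that spills saturation excess as overland flow and drains to a groundwater store treated as a linear reservoir. It is only an illustration of the ESMA style; the structure, the function and parameter names (esma_step, s_max, k_drain, k_gw) and the numbers are hypothetical and are not taken from the Stanford Watershed Model or any other published model.

def esma_step(rain, pet, soil, gw, s_max=100.0, k_drain=0.05, k_gw=0.01):
    """One time step of a hypothetical two-store conceptual model (depths in mm)."""
    # Soil moisture accounting: add rainfall, spill any excess as overland flow
    soil += rain
    overland_flow = max(0.0, soil - s_max)
    soil = min(soil, s_max)
    # Actual evapotranspiration limited by relative soil water content
    aet = min(pet * soil / s_max, soil)
    soil -= aet
    # Drainage from the soil store recharges the groundwater store
    recharge = k_drain * soil
    soil -= recharge
    gw += recharge
    # Groundwater store as a linear reservoir producing baseflow
    baseflow = k_gw * gw
    gw -= baseflow
    return overland_flow + baseflow, soil, gw

# Example: run the stores through a short synthetic rainfall sequence
soil, gw = 50.0, 20.0
for rain, pet in [(10.0, 2.0), (0.0, 3.0), (25.0, 1.5)]:
    q, soil, gw = esma_step(rain, pet, soil, gw)
    print(f"Q = {q:.2f} mm per time step")

Even this toy structure involves three parameters to be calibrated and several arbitrary choices about how the stores are connected and drained, and almost unlimited variations on the theme are possible.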
My resistance derives from my very early experience as a debutant research student in the early 1970s
when I tried to make a list of the hydrological models at that time. I stopped counting when the list
went over 100. Forty years later, there are many, many more, albeit that there has also been some
concentration on a smaller number of such models (particularly those that are applied on the basis of
distributed hydrological response units because of the availability of supporting, GIS-linked, software
systems). The sheer variety of models of this type, however, means that it is very difficult to decide
which model structure (if any) might be appropriate for a particular application. Most might be able to
demonstrate some success in representing catchment response, if some observational data are available
with which to calibrate the parameters of the model (although see the discussion of calibration methods
and issues in Chapter 7). While this might be sufficient as a basis for making predictions of catchment
response for a particular purpose, it is very difficult to assess whether such models are getting the right
results (in the sense of reproducing calibration data with reasonable accuracy) for the right reasons (see
Klemeš, 1986; Kirchner, 2003; Beven, 2010).
This variety has, however, now been used as a form of inductive modelling approach, particularly in the
Framework for Understanding Structural Errors (FUSE; Clark et al., 2008). There have been a number of
modelling systems in the past that have allowed different types of model components to be put together
with a view to improving the representation of particular catchments (e.g. Leavesley et al., 2002) but
FUSE has extended this to the evaluation of tens of model structures within a calibration framework for
a number of catchments in the Model Parameter Estimation Experiment (MOPEX; Duan et al., 2006).
Alternative approaches to evaluating multiple model structures have been taken by Fenicia et al. (2007b,
2008a, 2008b) and Krueger et al. (2010). In a combination of empirical approaches, Xiong et al. (2001)
and Fenicia et al. (2007a) have reported on using fuzzy inference to combine the outputs of multiple
model structures. These studies have shown that a combination of different models might give better
predictions than any single model.
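As an illustration of what such a combination can look like in practice, the sketch below merges the simulated discharges of several model structures using simple inverse-error weights derived from a calibration period. This is only a stand-in for the fuzzy inference schemes of Xiong et al. (2001) and Fenicia et al. (2007a), and the discharge values are synthetic.

import numpy as np

# Observed discharge and simulations from three hypothetical model structures
obs = np.array([1.2, 3.4, 2.8, 1.9, 1.1])
sims = np.array([[1.0, 3.0, 2.5, 2.0, 1.2],   # structure A
                 [1.5, 3.8, 3.1, 1.7, 0.9],   # structure B
                 [1.1, 3.3, 2.6, 2.2, 1.3]])  # structure C

# Weight each structure by the inverse of its mean squared error over the
# calibration period, then combine the simulations as a weighted average
mse = ((sims - obs) ** 2).mean(axis=1)
weights = (1.0 / mse) / (1.0 / mse).sum()
combined = weights @ sims

print("weights:", np.round(weights, 3))
print("combined simulation:", np.round(combined, 2))

Any such weighting scheme, of course, inherits the limitations of the performance measure and calibration data used to derive the weights, which leads directly to the questions that follow.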
Some of the issues with all of these approaches were raised by Clark et al. (2008). How far can different
model structures be differentiated given the types of performance measures and limitations of calibration
data available? How far is the relative performance of particular model structures consistent between
different applications? How can model structures be designed to maximise the information content in