There is one important limitation of transfer function models in flood forecasting applications. Transfer functions are designed to make predictions of output variables for which historical data are available for calibration; they are an empirical modelling strategy in this sense. Thus, although they can provide predictions of river stage at specific sites in the catchment that are easily updated in real time, they cannot predict the extent of flooding in the catchment, except insofar as this can be inferred from the flood stage at the measurement sites. Transfer function models can also be used to emulate the outputs of hydraulic flood routing models at sites that are not gauged. They can do so very accurately, including taking account of the hysteresis in the flood wave celerity (Beven et al., 2008b). However, that does not mean that the predictions of the original hydraulic model will necessarily provide accurate forecasts of inundation everywhere in an area at risk of flooding (Pappenberger et al., 2007a) and, clearly, it is not possible to update such local forecasts if no local level information is available.
8.4.3 The Bayesian Forecasting System (BFS) and Quantile Regression
A significant research programme to develop a flood forecasting system that took account of the un-
certainties in the forecasting processes has been led by Roman Krzysztofowicz (1999, 2001a, 2001b,
2002; Krzysztofowicz and Kelly, 2000; Herr and Krzysztofowicz, 2010). The Bayesian Forecasting
System (BFS) is based on combining a representation of the rainfall input uncertainties (Kelly and
Krzysztofowicz, 2000) with a representation of the hydrological modelling uncertainties (Krzysztofowicz
and Herr, 2001) within a Bayesian framework. Thus a good estimate of the uncertainty in the rainfalls
can be propagated through the hydrological model to provide a posterior distribution for the forecasts
(Krzysztofowicz, 2001b). The uncertainty in the hydrological model is derived from simulations of
past data for which the rainfalls are assumed known. There is an expectation, however, that the errors will be complex in nature, and the BFS uses mixed distributions for under- and over-predictions that are transformed into a multivariate Gaussian space using a normal quantile transform, otherwise known as a meta-Gaussian transform (Kelly and Krzysztofowicz, 1997). As new data become available, the uncertainty associated with the predictions can be updated before calculating new forecast uncertainties for river levels. Another application of the BFS, to forecasting on the River Rhine, is reported by Reggiani and Weerts (2008).
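The normal quantile transform itself is straightforward: each value in a sample is replaced by the standard normal deviate corresponding to its empirical non-exceedance probability. The following minimal sketch (with entirely hypothetical residual values, and a Weibull plotting-position convention assumed for the empirical probabilities) illustrates the idea; it is not the full BFS machinery.

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_quantile_transform(x):
    """Map a sample to standard Gaussian space via its empirical CDF."""
    n = len(x)
    p = rankdata(x) / (n + 1)   # Weibull plotting positions, avoids p = 0 or 1
    return norm.ppf(p)

# hypothetical residuals with different behaviour for over- and underprediction
rng = np.random.default_rng(42)
residuals = np.concatenate([rng.gamma(2.0, 0.5, 300),    # overprediction errors
                            -rng.gamma(1.5, 0.3, 200)])  # underprediction errors
z = normal_quantile_transform(residuals)
# z is approximately N(0, 1); dependence between variables or lead times can
# then be modelled with a multivariate Gaussian and back-transformed afterwards
print(z.mean(), z.std())
```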
In propagating the forecast uncertainties to longer and longer lead times, the BFS makes use of prior
distributions of forecast errors based on historical performance. This technique can, in fact, be used to
provide a simple off-line estimate of uncertainty in the forecasts for any deterministic forecasting model.
In effect, past performance is summarised in terms of the residual quantiles for different forecast lead
times over a number of past events. Regressions are then calculated, for each required quantile, that
smooth the relationship between quantile and lead time (see, for example, Weerts et al., 2011). The
expectation is that the occurrences of residuals for new events in the future will (more or less) behave in
a similar way to the quantiles for past events. The method is simple and cheap to update after each new
event. It relies on new events being statistically similar to past events in their residual behaviour (see also
the error correction method of Montanari and Grossi, 2008). This is not necessarily the case, but the approach at least gives a guide to the magnitude of potential errors. Where it is possible to use data assimilation during an event, however, there is some advantage in being able to deal with the particularities of that event.
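As an illustration of this off-line approach, the following sketch computes residual quantiles by lead time from a synthetic archive of past forecast errors and smooths each quantile against lead time with a low-order polynomial regression. The archive, lead times, and fitted form are all hypothetical; Weerts et al. (2011) use a more formal quantile regression, so this is only a schematic of the idea.

```python
import numpy as np

# hypothetical archive: residuals[event, lead_time] from past forecast events,
# with error spread growing as lead time increases
rng = np.random.default_rng(7)
lead_times = np.arange(1, 25)                    # hours ahead
residuals = rng.normal(0.0, 0.05 * lead_times,   # spread grows with lead time
                       size=(200, lead_times.size))

quantile_models = {}
for q in (0.05, 0.25, 0.50, 0.75, 0.95):
    rq = np.quantile(residuals, q, axis=0)       # empirical quantile per lead time
    quantile_models[q] = np.polyfit(lead_times, rq, deg=2)  # smooth the relation

def residual_quantile(lead_time, q):
    """Off-line uncertainty bound to add to a deterministic forecast."""
    return np.polyval(quantile_models[q], lead_time)

# e.g. an approximate 90% band around the 12-hour-ahead forecast
print(residual_quantile(12, 0.05), residual_quantile(12, 0.95))
```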
There are other dynamic error correction and Bayesian data assimilation schemes (such as the Ensemble
Kalman Filter and forms of particle filter) that can be used to make more complex models adaptive
(see Beven (2009) for more details). However, in complex models, with many different interacting
variables or components that could be adjusted, it might be difficult to decide what to adjust, especially
when the information content of a new observation residual at a gauging site is certainly going to
be limited (and potentially reduced by correlation with the same observation residual at the previous
time step).
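For concreteness, the following is a minimal sketch of a single Ensemble Kalman Filter analysis step for one scalar gauge observation, with the state dimension, observation operator, and error variances all chosen hypothetically. Even in this simple form, the update adjusts every state variable through the ensemble covariances, which is exactly where the difficulty of deciding what should be adjusted arises in more complex models.

```python
import numpy as np

def enkf_update(X, y_obs, obs_var, H, rng):
    """One EnKF analysis step for a scalar observation (minimal sketch).
    X: (n_state, n_ens) ensemble of model states
    y_obs: observed value (e.g. stage at a gauging site)
    H: (n_state,) linear observation operator mapping state to the gauged level
    """
    n_ens = X.shape[1]
    Y = H @ X                                # ensemble mapped to observation space
    Xa = X - X.mean(axis=1, keepdims=True)   # state anomalies
    Ya = Y - Y.mean()                        # observation-space anomalies
    # Kalman gain from ensemble covariances: K = Cov(x, Hx) / (Var(Hx) + R)
    K = (Xa @ Ya) / ((Ya @ Ya) + (n_ens - 1) * obs_var)
    # perturb the observation for each member to maintain the analysis spread
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)
    return X + np.outer(K, y_pert - Y)

rng = np.random.default_rng(1)
X = rng.normal([[1.0], [0.5]], 0.2, size=(2, 50))  # 2 states, 50 members
H = np.array([1.0, 0.0])                           # only the first state is gauged
X = enkf_update(X, y_obs=1.3, obs_var=0.01, H=H, rng=rng)
```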