represented and towards distributed models in which it
is. Advances in computing power and GIS technologies
have enabled the development of complex spatial mod-
els based on the discretization of landscapes into vector
polygons, triangular irregular networks, objects of com-
plex form or simple raster grids. Despite recent advances in remote sensing, there are still many parameters that cannot be measured using electromagnetic radiation and therefore cannot be obtained by remote sensing. The sophistication of spatial
models has rapidly outgrown our ability to parameterize
them spatially and they thus remain conceptually lumped
(Beven, 1992). The appropriate scale of distribution and the optimum configuration of measurements for model parameterization or calibration are the subject of much debate. For example, Musters and Bouten (2000) used
their model of root-water uptake to determine the opti-
mal sampling strategy for the soil-moisture probes used
to parameterize it. Fieldwork is an expensive, labour-
intensive, time-consuming and sometimes uncomfort-
able or even hazardous activity. Traditional random or
structured sampling procedures usually require that a
very large number of samples be collected in order to ful-
fil the assumptions of statistical inference. In order to
reduce the sampling effort, prior knowledge about the
system under study may be used to guide convenience
or non-random sampling which is still statistically viable,
with the appropriate method depending on the type of
prior knowledge available (Mode et al ., 2002). Ranked set
sampling (Mode et al ., 1999) reduces the cost of sampling
by using 'rough but cheap' quantitative or qualitative information to guide the selection of units for the real, more expensive measurements. Chao and Thompson
(2001) and others indicate the value of optimal adaptive
sampling strategies in which the spatial or temporal sam-
pling evolves over time according to the values of sites
or times already sampled. A number of authors indicate
how optimal sampling can be achieved by algorithmic
approaches that maximize entropy in the results obtained
(e.g. Bueso et al ., 1998; Schaetzen et al ., 2000). The lux-
ury of optimizing your sampling scheme in this way is,
however, not always available to the modeller, especially
within the context of policy models that are applied using existing datasets generated by government agencies. In such cases 'you get what you are given': the data may not have been collected with uniform or standard protocols (e.g. as outlined for soils data in Spain by Barahona and Iriarte, 2001), or the protocol may have evolved over time, undermining the legitimacy of any time-series analysis. Usually the
spatial sampling scheme chosen is a compromise between
that which best represents the system under investigation
and the computational resources and data available. This
compromise is most clearly seen in the extensive discus-
sions on the problem of grid size and subgrid variability
in general circulation models (GCMs). May and Roeckner (2001), amongst others, indicate the importance of grid resolution for the results of GCMs. Smaller grid sizes produce more realistic results, especially in highly mountainous areas, but smaller grids also carry substantially higher computational and data costs.
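The cost side of this trade-off can be sketched with a back-of-the-envelope calculation: halving the horizontal grid spacing roughly quadruples the number of cells, and an explicit dynamical core must also shorten its timestep in proportion to the spacing (a CFL-type constraint), so cost grows roughly with the cube of the refinement factor. The reference spacing, cell count and scaling exponents below are illustrative assumptions, not figures from May and Roeckner (2001).

# Back-of-the-envelope scaling of GCM cost with horizontal resolution.
# Assumptions: cell count scales with 1/dx^2 and an explicit core's timestep
# with dx, so relative cost scales with (base_dx / dx)^3. Numbers are notional.
base_dx_km = 200.0     # reference horizontal grid spacing (illustrative)
base_cells = 13_000    # notional global cell count at the reference spacing

for dx_km in (200.0, 100.0, 50.0, 25.0):
    refinement = base_dx_km / dx_km
    cells = base_cells * refinement ** 2   # finer grid in both horizontal directions
    cost = refinement ** 3                 # more cells multiplied by shorter timestep
    print(f"dx = {dx_km:5.1f} km   cells ~ {cells:10,.0f}   relative cost ~ {cost:6.1f}x")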
Wainwright et al. (1999a) indicated the importance of the temporal detail of climate data for accurate hydrological modelling. They calculated evapotranspiration with the Penman-Monteith formula using hourly data, then using the same data aggregated to a single value for each day, and then aggregated separately for each day and night. The day-night aggregation produces results much closer to those from the original hourly data than the daily aggregation does, because net radiation changes domain from daylight hours, when it is positive, to night-time hours, when it is negative. The error induced by aggregation to a daily timestep is of the order of 100% and also varies with the month of the year. This indicates that one must pay attention to the natural scales and boundaries of the processes being modelled when devising the time (or space) scale for sampling.
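A rough sketch of this aggregation effect is given below: the FAO-56 hourly form of the Penman-Monteith equation is applied to synthetic diurnal forcing at hourly, day/night and whole-day levels of aggregation. The forcing values, the use of the hourly formula for every aggregation window, the neglect of soil heat flux and the truncation of negative values to zero are assumptions made for illustration; this is not the calculation of Wainwright et al. (1999a), and the magnitudes depend on the forcing chosen, but the day/night estimate stays much closer to the hourly total than the whole-day estimate does.

# Minimal sketch: Penman-Monteith ET from hourly forcing versus the same
# formula applied to (a) a single daily mean and (b) separate day/night means.
import numpy as np

GAMMA = 0.066  # psychrometric constant, kPa/degC (approx., sea level)

def sat_vp(t_c):
    """Saturation vapour pressure (kPa) at air temperature t_c (degC)."""
    return 0.6108 * np.exp(17.27 * t_c / (t_c + 237.3))

def penman_monteith_hourly(rn_mj, t_c, u2, ea):
    """FAO-56 hourly reference ET (mm/h); rn_mj in MJ m-2 h-1, soil heat flux neglected."""
    delta = 4098.0 * sat_vp(t_c) / (t_c + 237.3) ** 2        # slope of vp curve, kPa/degC
    vpd = np.maximum(sat_vp(t_c) - ea, 0.0)                  # vapour pressure deficit, kPa
    num = 0.408 * delta * rn_mj + GAMMA * (37.0 / (t_c + 273.0)) * u2 * vpd
    return num / (delta + GAMMA * (1.0 + 0.34 * u2))

# Synthetic diurnal forcing: net radiation positive by day, negative at night.
hours = np.arange(24)
day = (hours >= 6) & (hours < 18)
rn = np.where(day, 1.8 * np.sin(np.pi * (hours - 6) / 12.0), -0.2)  # MJ m-2 h-1
t_air = 15.0 + 8.0 * np.cos(np.pi * (hours - 15) / 12.0)            # degC, peak mid-afternoon
u2, ea = 2.0, 1.2                                                   # wind (m/s), vapour pressure (kPa)

# (1) Sum of hourly estimates (negative values treated as zero ET).
et_hourly = np.maximum(penman_monteith_hourly(rn, t_air, u2, ea), 0.0).sum()

# (2) One daily aggregate: mean forcing applied over 24 hours.
et_daily = 24 * max(penman_monteith_hourly(rn.mean(), t_air.mean(), u2, ea), 0.0)

# (3) Separate day and night aggregates, preserving the sign change in net radiation.
et_daynight = sum(
    mask.sum() * max(penman_monteith_hourly(rn[mask].mean(), t_air[mask].mean(), u2, ea), 0.0)
    for mask in (day, ~day))

print(f"hourly {et_hourly:.2f}  day/night {et_daynight:.2f}  daily {et_daily:.2f} mm")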
Similarly, Mulligan (1998) demonstrated the importance
of high temporal resolution rainfall intensity data for
understanding the partitioning between infiltration and
overland flow. Where soil infiltration rates fall within the
range of measured instantaneous rainfall intensities (as
they often do), it is important to understand the distribu-
tion function of instantaneous intensities. The greater the
timescale over which these intensities are aggregated, the
lower the measured intensity will be. Such aggregation can have major effects on the predicted levels of Hortonian or infiltration-excess overland flow production, which is, after all, a threshold process (see Wainwright and Parsons, 2002, for spatial implications), as the sketch below illustrates. Hansen et al. (1996) assessed the importance of data quality for streamflow prediction with the lumped IHACRES rainfall-runoff model, concluding that rain-gauge density and the sampling interval of rainfall are the most important factors across a range of catchments. Understanding these sensitivities is critical to designing an appropriate sampling scheme.
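As a simple illustration of the rainfall-aggregation effect described above, the sketch below passes the same synthetic storm through a constant-capacity infiltration-excess calculation at several timesteps; averaging intensities over longer windows conserves total rainfall but smooths out the intense bursts that actually generate runoff. The storm statistics, the 25 mm/h infiltration capacity and the runoff rule are illustrative assumptions only and do not reproduce Mulligan's (1998) analysis.

# Minimal sketch: infiltration-excess (Hortonian) runoff predicted from the
# same storm at different temporal resolutions.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 1-minute rainfall intensities (mm/h) for a 2-hour convective storm:
# mostly low rates with occasional intense bursts.
minutes = 120
intensity_1min = rng.gamma(shape=0.6, scale=20.0, size=minutes)  # mm/h

infiltration_capacity = 25.0  # mm/h, assumed constant for simplicity

def hortonian_runoff(intensity, step_minutes):
    """Runoff depth (mm) from intensities (mm/h) at a given timestep."""
    excess = np.maximum(intensity - infiltration_capacity, 0.0)  # mm/h above capacity
    return float(np.sum(excess) * step_minutes / 60.0)           # convert to depth in mm

for window in (1, 5, 15, 60):  # aggregation window in minutes
    agg = intensity_1min.reshape(-1, window).mean(axis=1)        # same total rainfall depth
    print(f"{window:>2}-min data: runoff = {hortonian_runoff(agg, window):5.2f} mm")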
2.2.2 What happens when the parameters don't work?
It is frequently the case that initial parameter estimates
will produce model outputs that are incompatible with