are decision-oriented and model-based like IAM tools (Finlay and Wilson 1991;
Mosqueira-Rey and Moret-Bonillo 2000; Brunner and Starkl 2004; Jakeman et al.
2006; Sojda 2007). Evaluation of DSS can be separated into verification ('building the
system correctly') and validation ('building the right system for a given purpose')
(Boehm 1981, cited by Mosqueira-Rey and Moret-Bonillo 2000). The verification
step ensures that the DSS is internally complete, coherent, and logical from a
modelling and programming perspective (Sojda 2007). Validation is less concerned
with internal operation of the software and more concerned with its output and its
usefulness to the user. It analyzes whether the decision support system addresses the
user's problem (e.g., making better decisions, avoiding bad ones, or helping the user
to take these decisions more quickly or with less data, information, and knowledge).
This attention to suitability for addressing the user's problem sets DSS and IAM
tools apart from the classical evaluation of numerical models, such as ecological and
agronomic models (Parker et al. 2002).
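The verification/validation distinction can be made concrete with a minimal sketch. The toy yield model, the check functions, and the observation data below are all invented for illustration; they are not part of any DSS discussed here.

```python
# Illustrative sketch: verification checks internal logic ('built correctly'),
# validation compares output with observations ('the right system').
# The yield model and the data are hypothetical.

def yield_response(nitrogen_kg_ha: float) -> float:
    """Toy crop-yield model with diminishing returns to nitrogen."""
    max_yield = 10.0        # t/ha, hypothetical plateau
    half_saturation = 50.0  # kg N/ha giving half the plateau yield
    return max_yield * nitrogen_kg_ha / (half_saturation + nitrogen_kg_ha)

def verify(model) -> bool:
    """Verification: internal completeness and coherence, e.g.
    non-negative output and a monotone response to input."""
    inputs = [0.0, 10.0, 50.0, 200.0]
    outputs = [model(n) for n in inputs]
    non_negative = all(y >= 0 for y in outputs)
    monotone = all(a <= b for a, b in zip(outputs, outputs[1:]))
    return non_negative and monotone

def validate(model, observations) -> float:
    """Validation: predictive accuracy against (invented) field data,
    reported as mean absolute error in t/ha."""
    errors = [abs(model(n) - y_obs) for n, y_obs in observations]
    return sum(errors) / len(errors)

observed = [(20.0, 3.1), (60.0, 5.2), (120.0, 7.4)]  # invented data
print(verify(yield_response))             # do the internal checks pass?
print(validate(yield_response, observed))  # how accurate is the output?
```

A verified system can still fail validation: the checks above may all pass while the predictions remain too inaccurate, or too slow to obtain, to support the user's actual decisions.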
For DSS development, iterative development-evaluation processes, such as the spiral
methodology (Boehm 1988, cited by Mosqueira-Rey and Moret-Bonillo 2000), are
advocated. These methods allow the incremental development and fast prototyping that
are fundamental to building an intelligent system. In the spiral methodology,
the final stage of each development cycle serves as an evaluation step assessing the
quality of the developed product.
Successful implementation of DSS or IAM for decision-making relies on the use
of three evaluation dimensions (Adelman 1992, cited by Sojda 2007):
- Examining the logical consistency of the system's algorithms (verification);
- Empirically testing the predictive accuracy of the system (validation); and
- Documenting users' satisfaction (validation).
Finlay and Wilson (1991) make a further distinction within the testing of the
predictive accuracy of the system. They use 'analytical validation' for checking
each part of the modelling system, and 'synoptic validation' for checking that an
acceptable output from the whole modelling system is achieved for each set of inputs.
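This analytical/synoptic distinction can be sketched with two invented model components chained into a tiny modelling system; the component equations and the acceptance bounds are assumptions made purely for illustration.

```python
# Sketch of analytical vs. synoptic validation over a hypothetical
# two-component modelling chain (all numbers invented).

def biophysical_component(rainfall_mm: float) -> float:
    """Invented component: rainfall -> crop yield (t/ha), capped at 8."""
    return min(8.0, 0.02 * rainfall_mm)

def economic_component(yield_t_ha: float) -> float:
    """Invented component: yield -> gross margin (EUR/ha)."""
    price, cost = 150.0, 300.0
    return price * yield_t_ha - cost

def modelling_system(rainfall_mm: float) -> float:
    """The whole chain from input to final output."""
    return economic_component(biophysical_component(rainfall_mm))

# Analytical validation: check each part of the system on its own.
assert 0.0 <= biophysical_component(250.0) <= 8.0
assert economic_component(0.0) == -300.0  # no yield -> costs only

# Synoptic validation: check that the whole system yields acceptable
# output for each set of inputs in the intended domain.
for rainfall in (0.0, 100.0, 400.0, 1000.0):
    margin = modelling_system(rainfall)
    assert -300.0 <= margin <= 150.0 * 8.0 - 300.0
```

The point of the separation is diagnostic: analytical checks localize a defect to one component, while synoptic checks catch errors that only emerge from the interaction of components across the full input domain.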
Many authors add to this validation one or several additional phases - commonly
grouped together under the term evaluation - that assess aspects of the tool beyond
the validity of its final solutions. As a result, evaluation becomes an endeavour to
analyse aspects such as utility, robustness, rapidity, efficiency, extensibility,
ease of use, credibility, etc. (Mosqueira-Rey and Moret-Bonillo 2000).
Three Phases to Evaluate an Integrated Framework
In this chapter we focus on the case of the integrated framework (IF), SEAMLESS-IF.
SEAMLESS-IF has been developed for ex-ante assessment of the impacts of alternative
agricultural, economic and environmental policy options and technical innovations
on the sustainability of agricultural systems at European, national or
regional levels (van Ittersum et al. 2008). This framework is a model-based tool to