(Lopez 2003). Not only do the individual components of the system need to be tested,
which can be done in accordance with the evaluation procedures of the discipline from
which each component originates; the system as a whole also needs to be evaluated,
assessing whether the interactions between components originating from different
disciplines perform properly. Evaluation is then likely to be less dependent on
conventional peer review and history matching and more dependent on protocols and
tests yet to be developed (Parker et al. 2002). In practice, the development of an
operational IA methodology requires extensive resources, leaving limited time and
funds for evaluation.
This chapter reflects on the evaluation of IA tools, looking at different types of
components of an IA methodology as well as their interactions. The aim of this
chapter is to derive general lessons and ideas for evaluating IA tools from the practical
experiences gained in the SEAMLESS project. We first present the evaluation
approach used in SEAMLESS, which consists of three steps (conceptual evaluation,
technical evaluation and system evaluation), each of which is discussed in more detail
in a subsequent section. In each of these sections we discuss the evaluation of three
types of components of an IA methodology: procedures, quantitative tools and
graphical user interfaces. The different character of these components gives rise to
different evaluation approaches. The last section concludes by summarizing the
lessons learned from SEAMLESS that may be useful for other projects aiming to
evaluate IA tools.
Deriving a General Approach to Evaluate IA Tools
Many methods and tools have been developed to deal with IA objectives and
constraints (van Ittersum et al. 2008). Broadly speaking, they can be split into two
groups: analytical (embracing models, scenarios and risk analysis) and participatory
(including dialogue methods, policy exercises and mutual learning methods)
(Rotmans 1998). Among these methods, Integrated Assessment and Modelling (IAM)
includes a variety of quantitative models as well as scenario-based approaches
(Sharma and Norton 2005). Such tools aim to support managers in managing uncertainty
when making decisions about future options. When making policy decisions, scenarios
enable policy makers to anticipate by exploring possible futures and to assess
alternatives according to their potential consequences (van Notten et al. 2003;
Börjeson et al. 2006). Scenario-based approaches tell 'highly detailed, logically
consistent stories about the future' (Sharma and Norton 2005). Model-based
approaches aim to describe, as accurately and quantitatively as possible, the causal
relationships and interactions between the various components of the system under
study in response to external constraints and endogenous changes. Scenario-based
approaches can use simulation results from quantitative models.
Since IAM tools aim to address policy questions, their evaluation needs to look
beyond the classical evaluation processes of quantitative models (Oreskes et al. 1994;
Rykiel 1996; Sinclair and Seligman 2000). Most of the literature on evaluation
deals with the verification and validation of Decision Support Systems (DSS), which