early stage can interfere with proper testing, risks deterring potential users, and leads to pressure for quick delivery of the integrated framework by fuelling users' expectations. A balance thus needs to be found to ensure that the integrated framework is relevant while not raising users' expectations beyond feasible levels.
Independent Testers
Following Mosqueira-Rey and Moret-Bonillo (2000) and Sojda (2007), the testing of SEAMLESS-IF started with evaluators independent of the tool developers. This allowed a more objective evaluation of the tools and of their suitability for the integrated framework, and it ensured that an interdisciplinary group was identified in the project with the primary objective of regularly testing prototypes in realistic applications, to keep the development of the framework on track (Fig. 10.3). During
the project, however, it proved difficult to enrol scientists in evaluation work that is hard to publish in scientific journals, while it requires a high level of involvement to understand and assess prototypes with limited features and a high degree of integration of computerized tools from different disciplines. In addition to this push effect, the evaluators were also pulled into other parts of the project, for example contributing to the development of links between models, owing to their extensive knowledge of the framework and its components.
These push and pull factors led to a blurring of the distinction between evaluators and developers by the end of the project. Given these factors, it seems advisable that evaluators start as external reviewers who are involved in the conceptualisation of the whole framework on the basis of potential applications, but not in tool development, and that they become involved in integrated tool development once individual components become available for linking.
Multidisciplinarity
Evaluating a complex multidisciplinary tool like SEAMLESS-IF is a daunting task. As noted by Langvad and Noe (2006), multidisciplinary involvement is a precondition for developing relevant decision support systems. At the same time, the diversity of disciplinary backgrounds, compounded by a wide diversity of cultures and practices in an international project like SEAMLESS, poses a major challenge to both the development and evaluation of the framework. As an illustration, in the
evaluation of the integrated assessment procedure, the aim was to allow a diversity of ways to frame problems, to account for the diversity in backgrounds of both the policy experts and the modellers involved. Accounting for this diversity implied testing in different countries. Because of language differences, it was impossible to nominate a single person to record observations (spontaneous comments from policy experts about presentations and the way they interact with integrative modellers for problem