vulnerability to nitrate pollution, Refsgaard et al. (2006) report how six groups of
engineering consultants developed six different accounts of groundwater vulnerabil-
ity. Each consultant worked from a common database, with major differences
related to choice of method and to the assumptions made in assessing vulnerability.
Similarly, in a review of multi-criteria decision models, Mysiak (2006) found that
model selection was often based on prejudiced views about the strengths and weak-
nesses of the candidate methods, rather than a careful analysis of the decision
problem. Unsurprisingly, most scientific studies show a strong partiality for whichever
method conforms best to the worldview of the policy advisor. Indeed, when
consensus is lacking, other factors often influence the selection of methodology, such
as institutional arrangements (Fisher et al., 2002) or historical precedent (Shackley
and Wynne, 1995).
Despite the success of mathematical approaches, there are still many situations
in which a technical assessment of uncertainty cannot establish the reliability of
data and models, or may itself lack credibility. For example, probabilities of extreme
events may be highly unreliable, as extreme events are rare by definition. Also, their
probabilities will vary with the trajectory of the system (e.g., with climate change),
and their process controls may be qualitatively different from those operating during
smaller events (e.g., Powell et al., 2003). In order to evaluate these probabilities,
observations must be pooled into groups of similarly behaving or 'stationary'
samples, yet the concept of stationarity may be difficult to justify for extreme
occurrences. In principle, therefore, the types and levels of uncertainty should be reflected
in the methodologies chosen to assess and propagate them. In practice, however,
this link between types and levels of uncertainty and methods of assessment is fre-
quently missed (Brown, 2004), leading to spurious notions of precision, unreliable
uncertainties, or the omission of key sources of uncertainty, such as those associated
with social and political processes.
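To make the point about stationarity concrete, the following sketch (not from the source; the record length, distribution choice, and return period are illustrative assumptions) fits an extreme-value distribution to a short synthetic record of annual maxima and extrapolates to a 1-in-100-year event, showing how uncertain such an estimate is even when stationarity is taken for granted.

# Illustrative sketch only: synthetic data, Gumbel (EV Type I) fit, hypothetical return period.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
annual_maxima = rng.gumbel(loc=100.0, scale=25.0, size=30)  # 30 years of synthetic annual maxima

# Fitting a single distribution presumes the record is stationary.
loc, scale = stats.gumbel_r.fit(annual_maxima)

# The 1-in-100-year quantile extrapolates far beyond a 30-year record,
# so it is highly sensitive to the fitted parameters.
q100 = stats.gumbel_r.ppf(1 - 1.0 / 100.0, loc=loc, scale=scale)

# Bootstrap the fit to expose how unreliable the extrapolated quantile is.
boot = [
    stats.gumbel_r.ppf(
        1 - 1.0 / 100.0,
        *stats.gumbel_r.fit(rng.choice(annual_maxima, size=annual_maxima.size, replace=True))
    )
    for _ in range(1000)
]
print(f"100-year estimate: {q100:.1f}; bootstrap 5th-95th percentile: "
      f"{np.percentile(boot, 5):.1f}-{np.percentile(boot, 95):.1f}")

With only a few decades of record the bootstrap interval for the 100-year quantile is typically very wide, and any non-stationarity, such as a trend introduced by climate change, would undermine the fitted distribution altogether.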
Early approaches to assessing risk also focused on quantifying, minimising, and
controlling uncertainty. They typically distinguish between expert and lay under-
standings or 'real' versus 'perceived' risk (e.g., Irwin and Wynne, 1996a,b; Wynne,
1992a,b), with most research devoted to expert understandings of 'real' risk (Owens
et al., 2004). These views can be seen in successive reports on risk published in the
1980s and early 1990s. For example, the Royal Society (1983) clearly distinguishes
between objective risks, identified by science, and subjective perceptions of those
risks, which are considered poor approximations of the former. A later report
(Royal Society, 1985) lamented the public 'misunderstanding' of risk and called for
wider education on its scientific basis, while Royal Society (1992) proposed a series
of remedial approaches to better inform ignorant publics of the 'real' risks they
faced (Owens, 2000).
These ideas, often referred to as the information deficit model (IDM), are based
on a number of contentious assumptions about the primacy of scientific knowledge.
First, they view the environment as a physical phenomenon, separate from society,
and measurable through objective, scientific procedures. Many commentators
(e.g., Wynne, 1996) have argued that this distinction is artificial because scientific
practices are necessarily complicated by social and political processes.
Secondly, technical approaches assume that risk can be measured objectively. The
modern image of science and technology has been that '...given enough information
and powerful enough computers, it could predict with certainty in a quantitative
form, which would make it possible to control natural systems' (Tognetti, 1999,