One of the paradoxes of contemporary hazards research is that the damage caused by hazards continues to rise despite massive expenditure on preventing losses from floods, earthquakes, and other environmental hazards to which 'less advanced' hunter-gatherer and agricultural societies were apparently much better adapted (Burton et al., 1993; Tobin and Montz, 1997).
Therein lies the key conceptual concern of this chapter: How has research on
environmental extremes accounted for the qualitatively different experience of envi-
ronmental extremes under modernity? This chapter will explore this question by
tracing the major conceptual developments in the field of hazards research.
Unpacking the Terminology of Environmental Hazards
Social scientific research on environmental hazards has understood them in terms of four variables - risk, exposure, vulnerability, and response - though, as we will see, precisely how those variables are defined and said to inter-relate is somewhat contested in the literature. Mitchell (1990), for example, conceptualises hazards as
contested in the literature. Mitchell (1990), for example, conceptualises hazards as
a multiplicative function:
Hazards = f (risk × exposure × vulnerability × response)
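Read as a conceptual statement, the multiplicative form implies that the four terms compound one another: shrinking any one of them shrinks the overall hazard, and a value of zero for any term eliminates it altogether. The short Python sketch below is offered only as an illustration of that compounding logic; the 0-to-1 scales, the choice of f as the identity function, and the direction in which 'response' is scored are assumptions made here for demonstration, not part of Mitchell's (1990) formulation.

```python
def hazard_index(risk, exposure, vulnerability, response):
    """Toy index after the multiplicative formulation above.

    Assumptions for illustration only: each input is normalised to the
    0-1 range, f is taken to be the identity function, and 'response'
    is scored so that higher values mean a weaker (less effective)
    response, so that a better response lowers the index.
    """
    return risk * exposure * vulnerability * response

# Compounding: halving vulnerability halves the index, regardless of
# what is happening to the other three terms.
baseline  = hazard_index(risk=0.10, exposure=0.80, vulnerability=0.50, response=0.60)
mitigated = hazard_index(risk=0.10, exposure=0.80, vulnerability=0.25, response=0.60)
print(baseline, mitigated)  # 0.024  0.012
```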
More commonly, however, risk is understood simply as the probability of particular extreme events occurring in a particular place. This more prevalent conceptualisation of risk ignores the consequences of a given event, which are captured by the more nuanced definition of risk as a multiplicative function of probability × consequences (Cutter, 1993). Nevertheless, the simple, and some would argue misleading, probabilistic formulation of hazard risk is particularly prevalent among engineering and actuarial practitioners in hazards management (Cardona, 2004). For instance, flooding risk
is typically discussed in terms of return periods, which express the probability that a flood of a particular magnitude will occur in any given year, such as the 100-year flood that is the default standard for many flood defence systems. Saying that the great Mississippi flood of 1993 was a 500-year flood is a way of stating that a flood of that magnitude has a 0.2 percent chance of occurring in any particular year (Pitlick, 1997). Many physical scientists, engineers, and, regrettably, policymakers tend to over-emphasise the importance of controlling the frequency and intensity of events. Many engineering interventions are geared towards controlling this narrowly defined aspect of hazard frequency, to the relative neglect of the remaining variables captured in Mitchell's formulation above.
The building of levees along riverbanks to control floods is one such example of engineering-based risk mitigation. Although such physical interventions to reduce the frequency of extreme events can be a useful part of an overall disaster reduction and hazard mitigation strategy, they are not by themselves sufficient, and can at times be wasteful, if they encourage behaviour that increases exposure or vulnerability to a given hazard.
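Because return periods are so often misread, it may help to make the arithmetic explicit. The Python sketch below converts a return period into the annual exceedance probability discussed above, shows how that probability accumulates over a multi-year horizon, and contrasts the purely probabilistic reading of risk with the probability × consequences definition cited from Cutter (1993). The loss figure is a hypothetical number chosen only for illustration.

```python
def annual_exceedance_probability(return_period_years: float) -> float:
    """Annual chance of an event of at least this magnitude (e.g. 1/500 = 0.2%)."""
    return 1.0 / return_period_years

def prob_at_least_one(return_period_years: float, horizon_years: int) -> float:
    """Chance of one or more exceedances over a horizon, assuming
    independent years with a constant annual probability."""
    p = annual_exceedance_probability(return_period_years)
    return 1.0 - (1.0 - p) ** horizon_years

def expected_annual_loss(return_period_years: float, consequence: float) -> float:
    """Risk as probability x consequences (after Cutter, 1993) for a single
    event magnitude; 'consequence' is a hypothetical loss figure."""
    return annual_exceedance_probability(return_period_years) * consequence

print(annual_exceedance_probability(500))      # 0.002: the 0.2 percent annual chance cited above
print(prob_at_least_one(100, 30))              # ~0.26: a '100-year' flood is far from rare over 30 years
print(expected_annual_loss(100, 250_000_000))  # 2,500,000 per year for a hypothetical 250m loss
```

The middle result is the point most often lost when the 100-year flood is treated as a design guarantee: over a 30-year horizon, the chance of at least one such flood is roughly one in four.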
Exposure in the hazards context means the number of people and the value of property or other social goods physically exposed to a given hazard. Both the northern coast of Australia and the state of Florida in the United States face a comparable risk of hurricanes making landfall in any given year. Yet the potential damage suffered in the United States from such an event is much higher because the density of coastal development is much greater in Florida than in northern Australia. In other words, the exposure is higher in the United States than in northern Australia. The relation-