been perceived (rational bad behaviour). A society that collapses is arguably
an extreme case of lack of resilience, yet it is probably no coincidence that we
find the positive version of exactly the same characteristics in the general
descriptions of what a system - or even an individual - needs to remain in
control. A resilient system must have the ability to anticipate, perceive, and
respond. Resilience engineering must therefore address the principles and
methods by which these qualities can be brought about.
It is a depressing fact that examples of system failures are never hard to
find. One such case, which fortunately left no one harmed, occurred during the
editing of this book. As everybody remembers, a magnitude 9.3 earthquake
occurred on the morning of December 26, 2004, about 240 kilometers south of
Sumatra. This earthquake triggered a tsunami that swept across the Indian
Ocean killing at least 280,000 people. One predictable consequence of this
most tragic disaster was that coastal regions around the world became acutely
aware of the tsunami risk and therefore of the need to implement well-
functioning early warning systems. In these cases there is little doubt about
what to expect, what to look for, and what to do. So when a magnitude 7.2
earthquake occurred on June 14, 2005, about 140 kilometers off the town of
Eureka in California, the tsunami warning system was ready and went into
action.
It is a universal experience that things sooner or later will go wrong, and
fields such as risk analysis and human reliability assessment have developed a
plethora of methods to help us predict when and how it may happen [6]. From
the point of view of resilience engineering it is, however, at least as important
to understand why things go wrong. One expression of this is found in the
several accident theories that have been proposed over the years, not least the
many theories of 'human error' and organizational failure. Most such efforts
have focused on the problems found in technical or sociotechnical
systems. It is almost trivial to say that we need a model, or a frame of
reference, to be able to understand issues such as safety and resilience and to
think about how safety can be ensured, maintained, and improved. A model
helps us to determine which information to look for and brings some kind of
order into chaos by providing the means by which relationships can be
explained. This obviously applies not only to industrial safety, but to every
human endeavor and industry.
To do so, the model must in practice fulfill two requirements. The first is
often expressed in terms of Murphy's law [6], the common version of which is
that 'everything that can go wrong, will'. A much earlier version is Spode's
law, which says that 'if something can go wrong, it will.' It is named after the