One of the main concerns is the way in which indicators are often developed through 'ad hoc' processes, without a structured framework or consensus on what urban sustainability is (Alberti, 1996; Mitchell, 1999; Bossel, 1998; Lundin & Morrisson, 2002; Lombardi & Cooper, 2009). A further concern is that detailed indicator systems 'are often difficult to operationalize…as precise empirical evidence is not always available or accessible' (Finco & Nijkamp, 2001: 296).
According to Du Plessis (2009), a further problem with aggregate indicator systems is that they break up the problem of urban sustainability into smaller, simpler sub-problems that can then be reduced to specific ratios, for example, energy use per square metre, people per hectare, or number of parking spaces per tenant. This reductionist approach was criticised as early as the 1960s by Jane Jacobs for attempting to turn a problem of disorganised complexity into 'problems of simplicity' that can then be resolved in isolation (Jacobs, 1992[1961]: 438).
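To make this reduction concrete, the short Python sketch below computes exactly the kind of isolated ratio indicators mentioned above; the function name and all figures are hypothetical and are not taken from the cited sources or from any real assessment scheme.

# Illustrative only: the building data and figures are hypothetical.
def ratio_indicators(energy_kwh, floor_area_m2, residents, site_ha,
                     parking_spaces, tenants):
    """Reduce a building to three isolated ratio indicators."""
    return {
        'energy_per_m2': energy_kwh / floor_area_m2,     # kWh per square metre
        'people_per_ha': residents / site_ha,            # persons per hectare
        'parking_per_tenant': parking_spaces / tenants,  # spaces per tenant
    }

# Two hypothetical buildings with quite different systemic behaviour
# (e.g. on-site generation, shared mobility) can still yield near-identical
# ratios, because each indicator is computed in isolation.
print(ratio_indicators(420_000, 6_000, 180, 0.90, 45, 30))
print(ratio_indicators(350_000, 5_000, 150, 0.75, 38, 25))

Each ratio is derived from a single pair of quantities, so interactions between energy use, density and transport drop out of view, which is precisely the 'problem of simplicity' Jacobs objects to.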
As discussed by Bossel (1998), Brugmann (1999), Meadows (1999) and, more recently, by Birkeland (2005), many larger-scale applications of indicator systems, including current indicator-based building assessment systems, prioritise retrospective analysis over future-orientated design; their use encourages measurable and therefore mechanistic approaches at the expense of more innovative systems that defy simplistic measurement; analysis that aggregates measurements obscures total resource flows and systemic interactions and discourages solutions that build on synergies and symbiosis; and data-driven processes come at the expense of mapping system dynamics (Du Plessis, 2009). This view is supported by Schendler & Udall (2005), who, in their review of the Leadership in Energy and Environmental Design (LEED) rating system, conclude that an indicator-based rating system rewards point-mongering but not integrated design or innovation.
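As a purely illustrative sketch (not the actual LEED credit structure), the following Python fragment shows how an additive, indicator-based score can assign the same total to a design that chases easy points across categories and to one built around a single integrated strategy; the categories and points are invented for the example.

# Hypothetical additive scoring scheme; categories and points are invented
# and deliberately simplified, not a reproduction of LEED or any real system.
CATEGORIES = ['energy', 'water', 'materials', 'transport', 'innovation']

def aggregate_score(points):
    """Sum category points into a single rating, as additive schemes do."""
    return sum(points.get(c, 0) for c in CATEGORIES)

# Design A collects modest points in every category ('point-mongering');
# Design B relies on one integrated energy strategy that the checklist
# rewards only partially. The aggregate score cannot tell them apart.
design_a = {'energy': 4, 'water': 6, 'materials': 6, 'transport': 5, 'innovation': 1}
design_b = {'energy': 10, 'water': 3, 'materials': 3, 'transport': 4, 'innovation': 2}

print(aggregate_score(design_a))  # 22
print(aggregate_score(design_b))  # 22

Because only the sum is reported, the aggregate obscures the very different resource flows and design strategies behind the two identical totals.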
Finally, and perhaps most critically, many of the indicators reflect the specific interests of their authors and are, to say the least, blunt instruments (Bossel, 1998; Sveiby, 2004; Adams, 2006). Even much-used statistics rely on assumptions that are often hidden when we draw our conclusions. In other words, the decision often determines which indicators are chosen. As such, the development of indicators is 'a dialectic process that goes hand in hand with the development of policies' (Foxon et al., 1999: 146), and not necessarily the product of an empirically derived understanding of what would constitute sustainability in the particular domain in which the indicator is to be used for assessment.
Literature in the field has highlighted the importance of user involvement in indicator design and acceptance (Lombardi & Cooper, 2007b; Alwaer et al., 2008a,b; Alwaer & Clements-Croome, 2009). Stakeholders may have local knowledge that can contribute to more effective indicators. Participation also ensures relevance to the decision-making