well as in measuring patterns of sprawl in real-world contexts
(Torrens, 2008).
Fuzzy pattern recognition directly addresses the weaknesses of pixel-matching techniques by introducing additional functionality. First, maps or images being registered for agreement are processed to yield "soft" (fuzzy, or transitional) boundaries between pixels, rather than the "hard," crisp, discrete boundaries used in basic matching methodologies (Heikkila, Shen and Kaizhong, 2003). Second, "fuzzy logic" (Kosko, 1993) is used to determine the agreement between scenes, using AI-inspired pattern recognition rather than brute-force matching. In addition to determining the overall agreement between scenes, steps are added to the procedure to determine localized agreement (Liu and Phinn, 2003), using, for example, linguistic membership functions. Power and colleagues (2000) used fuzzy validation procedures in this context to assess the performance of automata models developed by Engelen and colleagues (1995), enabling disagreement between observed and expected scenes to be broken down into state-specific (land-use type) detail. This has obvious advantages beyond remedying the weaknesses of brute-force pixel-matching; it could, for example, highlight whether particular state variables were subject to systematic agreement errors (which could point to data problems, or could indicate areas where amendments to a model's rules might be appropriate).
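To make this concrete, the following is a minimal sketch of distance-decay fuzzy map comparison. It illustrates the general approach rather than the specific procedure used by Power and colleagues; the exponential membership function, the halving_distance parameter, and the toy land-use maps are illustrative assumptions. A simulated cell earns partial credit when its class occurs nearby in the observed map, instead of requiring an exact pixel match, and the per-cell scores can be conditioned on land-use class to give the state-specific breakdown described above.

```python
import numpy as np

def fuzzy_agreement(observed, simulated, halving_distance=2.0, radius=4):
    """Per-cell fuzzy agreement between two categorical maps.

    A simulated cell earns partial credit if its class occurs nearby in
    the observed map, weighted by a distance-decay membership function
    (membership = 2 ** (-d / halving_distance)); an exact match scores 1.
    """
    rows, cols = observed.shape
    agreement = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            best = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and observed[ni, nj] == simulated[i, j]):
                        d = float(np.hypot(di, dj))
                        best = max(best, 2.0 ** (-d / halving_distance))
            agreement[i, j] = best
    return agreement

# Toy land-use maps: 0 = vacant, 1 = residential, 2 = commercial.
observed = np.array([[1, 1, 0], [0, 2, 0], [0, 0, 2]])
simulated = np.array([[1, 0, 0], [0, 0, 2], [0, 2, 2]])
scores = fuzzy_agreement(observed, simulated)
print("global fuzzy agreement:", scores.mean())  # 1.0 = perfect crisp match
# A state-specific breakdown conditions the average on land-use class:
for k in (0, 1, 2):
    print(f"class {k}: {scores[simulated == k].mean():.2f}")
```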
23.3.3 Running models exhaustively
Sweeping the parameter space of a model involves exploring the complete range of outcomes possible with a particular model specification, or looking at its "space of possibilities" (Couclelis, 1997). One approach is to map those possibilities using graphs. Finite state transition graphs can be used to examine the global evolution - step-by-step or transition-by-transition - of an automata simulation, relying on graphs (networks) to plot the "state space" of an automaton (Wolfram, 1994). Essentially, the complete trajectory of a model can be visualized. Early inroads are being made toward such a scheme, beginning with data-mining procedures for automata models (Hu and Xie, 2006).
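The state-space idea is easiest to see at toy scale, where the graph can be enumerated by brute force. The sketch below is an illustrative toy, not a procedure from the chapter: it builds the complete state transition graph of a one-dimensional elementary CA, in the Wolfram tradition, on a small ring of cells. Each of the 2^n configurations maps to exactly one successor, so any trajectory of the model can be traced through the graph; the function names and the choice of rule 110 are assumptions for illustration.

```python
def step(state, rule=110):
    """One synchronous update of an elementary CA on a ring of cells."""
    n = len(state)
    table = [(rule >> k) & 1 for k in range(8)]  # Wolfram rule lookup table
    return tuple(
        table[(state[(i - 1) % n] << 2) | (state[i] << 1) | state[(i + 1) % n]]
        for i in range(n)
    )

def transition_graph(n=8, rule=110):
    """Map each of the 2**n configurations to its unique successor."""
    graph = {}
    for code in range(2 ** n):
        state = tuple((code >> i) & 1 for i in range(n))
        graph[state] = step(state, rule)
    return graph

g = transition_graph(n=8, rule=110)
reachable = set(g.values())
# "Garden of Eden" configurations have no predecessor in the graph.
print(f"{len(g)} states, {len(g) - len(reachable)} with no predecessor")
```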
Using stochastic (probabilistic) constraint parameters creates an interesting problem: different results can be produced from identical parameterizations; there are often a near-infinite number of micro-states that might determine macro-conditions, even for a small set of model parameterizations (Wilson, 1970; Oreskes, Shrader-Frechette and Belitz, 1994). One way to "smooth out" this sort of variation, and to narrow the candidate configurations to a more manageable set size, is to employ Monte Carlo averaging. Simulations can be run from identical conditions or using the same parameter values (variation then comes from the different random number draws used in simulation); they may also be run repeatedly, using different combinations of parameter values (Li and Yeh, 2000), or using variable start conditions. Monte Carlo averaging is also useful for generating probability maps for use in prediction. Simulations with the SLEUTH model, for example, have been run using Monte Carlo averaging in an application to the Santa Barbara region, enabling the selection of locations based on a cut-off rate of 90% success (Goldstein, Candau and Clarke, 2004).
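As a concrete illustration of Monte Carlo averaging - a toy sketch with an invented spread rule, grid size, and parameter values, not the SLEUTH procedure itself - the following runs a simple stochastic growth automaton many times from an identical seed, averages the binary outcomes into a per-cell development probability map, and applies a 90% cut-off in the spirit of the Santa Barbara application:

```python
import numpy as np

rng = np.random.default_rng(0)

def grow(seed_map, p_spread=0.2, steps=10):
    """One stochastic run of a toy growth CA: every developed cell gives
    each undeveloped 4-neighbor an independent chance of converting."""
    grid = seed_map.copy()
    for _ in range(steps):
        nbrs = np.zeros_like(grid)
        nbrs[1:, :] += grid[:-1, :]   # developed neighbor above
        nbrs[:-1, :] += grid[1:, :]   # below
        nbrs[:, 1:] += grid[:, :-1]   # left
        nbrs[:, :-1] += grid[:, 1:]   # right
        # combine independent conversion chances from each developed neighbor
        p_convert = 1.0 - (1.0 - p_spread) ** nbrs
        grid |= (rng.random(grid.shape) < p_convert).astype(int)
    return grid

seed = np.zeros((50, 50), dtype=int)
seed[25, 25] = 1  # a single urban seed cell

# Monte Carlo averaging: identical start conditions and parameter values;
# variation comes only from the random number draws.
runs = np.stack([grow(seed) for _ in range(200)])
prob_map = runs.mean(axis=0)       # per-cell development probability
predicted = prob_map >= 0.90       # retain cells developed in >= 90% of runs
print("cells developed in at least 90% of runs:", int(predicted.sum()))
```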
Conclusions
This chapter has presented a review and discussion of the state of the art in calibrating and validating automata models in urbanization applications, in the context of complex and dynamic city-systems. A variety of techniques have been introduced and
assessed. It is important to realize, however, that many of the
models described in this chapter are in very early stages of
development. Also, in the context of calibration and validation,
it is noteworthy that the intentions of many of the modeling
projects and exercises discussed here differ from those that
characterize work in land-use and transport modeling traditions
common in support of municipal planning. Automata modeling represents something of a paradigm shift in urban simulation
(Albrecht, 2005), away from thinking of models as diagnostic or
prescriptive tools, toward a conceptualization of urban models as
artificial laboratories for experimenting with ideas about urban
dynamics. Consequently, many models may not be validated at all; they may be developed as pedagogic instruments, or as "tools to think with." Accordingly, we might consider a broad spectrum
of models (Torrens and O'Sullivan, 2001), ranging from very
simply parameterized models in the tradition of Wolfram's CA, designed to test universality (Wolfram, 1984), to "fuller" planning support systems designed to assist planning, management, and policy exercises (Torrens, 2002; Engelen, White and Uljee, 2002).
Depending on their position on this spectrum, models may have
different calibration and validation requirements.
Nevertheless, there are some important issues that relate
to calibration and validation across that spectrum, including
simplicity in model specification, data issues, the generality of
models, and the relationship between models and urban theory.
Simplicity is one of the most commonly advertised advantages of
automata models. This stems from their association with the idea
of generative emergence - the concept that simple rules can gen-
erate surprising and intricate complexity and that, unlike chaos,
the path from simplicity to complexity can be traced through a
causal relationship (Batty and Torrens, 2005). Those ideas are
often taken at face value when automata models are developed:
simple parameters are often introduced to models - their choice may be a function of what data are to hand - and simulations are run as "blue skies" experiments to see, essentially, what will come out. A problem with using automata in this regard is that, as simulation runs evolve beyond initial conditions, automata (and particularly urban cellular automata) have a strong tendency to "go exponential" and must often be tightly constrained in order to produce patterns that resemble real cities. The resulting "soup" is sometimes confused with emergent phenomena (Faith, 1998; Epstein, 1999).
Another issue relates to the role of automata modeling in the
experimental process. Urban automata models are often bor-
rowed from the physical sciences; while automata methodologies
are usually similar regardless of application (an automaton is
an automaton after all), urban automata experiments often bear
little resemblance to those in fields such as computer science,
physics, and chemistry (Oreskes, Shrader-Frechette and Belitz,
1994), where experiments deal with systems that are relatively
well-understood in comparison to urban systems. The idea of simple models generating surprising complexity is a powerful one, but it is predicated on the notion that an (often small) set of rules can adequately capture the processes at work in the system being modeled.