such as choosing locations for any given production activity
like urban growth in a simulated landscape according to land
suitability (Manson, 2005). The decision making is commonly
formulated as a form of multicriteria evaluation (Collins, Steiner
and Rushman, 2001; Xiao, Bennett and Armstrong, 2007). How-
ever, a key challenge in designing software agents that represent
real decision-makers is to determine how each agent behaves over
the simulated landscape. In other words, the task is to computerize the behavior of each agent in the context of solving a multicriteria evaluation problem.
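Such a multicriteria evaluation can be sketched as a simple weighted sum over candidate locations; the criterion names, weights, and scores below are hypothetical, not drawn from the cited studies:

```python
# A minimal weighted-sum multicriteria evaluation. The criteria
# ("slope", "accessibility", "land_price") and all values are illustrative.

def evaluate_site(criteria, weights):
    """Aggregate score for one candidate location: sum of weight * criterion."""
    return sum(criteria[name] * weights[name] for name in weights)

def choose_location(candidates, weights):
    """The agent picks the candidate site with the highest aggregate score."""
    return max(candidates, key=lambda c: evaluate_site(c["criteria"], weights))

weights = {"slope": 0.2, "accessibility": 0.5, "land_price": 0.3}
candidates = [
    {"id": "A", "criteria": {"slope": 0.9, "accessibility": 0.4, "land_price": 0.7}},
    {"id": "B", "criteria": {"slope": 0.6, "accessibility": 0.8, "land_price": 0.5}},
]
best = choose_location(candidates, weights)
```

In practice the weights themselves are often the quantities being calibrated or evolved, which is where the optimization formulations discussed below come in.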
There are several approaches employed in current literature
to enable agents to make intelligent decisions. From the perspective of dynamic systems, evolutionary programming
is employed to provide agents with intelligence to solve this
multi-criteria evaluation problem as an optimization problem
(Bennett and Tang, 2006; Manson, 2006; Xiao, Bennett and
Armstrong, 2007). Agents are also empowered with primitive
intelligence like swarms in natural environments (Parunak et al.,
2006; Alexandridis and Pijanowski, 2007) or random walkers in
built environments (Batty, 2001). In a spatially-explicit context,
agent-based models can be considered as generalized forms of
cellular automata where agents are not restricted to the cells
of a raster environment (Goodchild, 2005). Cellular automata
can be integrated as a physical infrastructure, over which agents
move according to specified rules to generate spatial patterns of
urban land uses (Li and Liu, 2008; Liu, Li and Liu, 2008; Torrens
and Benenson, 2005; Xie and Batty, 2005). Local neighborhood-
based analyses are frequently treated as important inputs for
agents to explore or examine rules of decision-making (Deadman
and Robinson, 2004; Torrens and Benenson, 2005; Alexandridis
and Pijanowski, 2007). In the context of urban planning, probabilities derived from logistic regression models are
reformulated as quantitative measurements from which agents
make decisions (Waddell, 2002). Other statistical and quantita-
tive methods, such as regression models (Xie, Batty and Zhao, 2007) and discrete-choice models (Parker and Meretsky, 2004;
Jepsen, Leisz and Rasmussen, 2006), are deployed as the founda-
tions on which agents make decisions among choices of various
land developments, or compare model outputs with observed
land-use patterns.
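For example, the logistic-regression approach, in which a fitted model yields a development probability that drives each agent's choice, can be sketched as follows; the intercept, coefficients, and attribute values are hypothetical placeholders, not estimates from the cited studies:

```python
import math
import random

# A development probability from a (pretend) fitted logistic regression,
# used by an agent as the chance of developing a parcel. All numbers are
# illustrative assumptions.

def development_probability(intercept, coefs, site_attrs):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = intercept + sum(coefs[name] * site_attrs[name] for name in coefs)
    return 1.0 / (1.0 + math.exp(-z))

def agent_decides(p, rng):
    """Stochastic choice: develop the parcel with probability p."""
    return rng.random() < p

coefs = {"distance_to_road": -0.8, "slope": -0.5, "neighbors_developed": 1.2}
attrs = {"distance_to_road": 0.3, "slope": 0.1, "neighbors_developed": 0.6}
p = development_probability(-0.2, coefs, attrs)
develop = agent_decides(p, random.Random(42))  # seeded for reproducibility
```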
Model construction is an implementation process that codes the model design, by using existing tools or by programming agent models from scratch. There are many
tools available for constructing ABMs and they are generally
grouped into four categories (The Center for the Study of
Complex Systems, 2009). The first group is open-source,
among which Repast (Recursive Porous Agent Simulation
Toolkit [Online] Available at: http://repast.sourceforge.net/
[accessed 19 November 2010]) and Swarm ([Online] Available
at: http://www.swarm.org/index.php/Main_Page [accessed 19
November 2010]) are the best known. The second category is freeware. NetLogo ([Online] Available at: http://ccl.
northwestern.edu/netlogo/ [accessed 19 November 2010]) and
StarLogo ([Online] Available at: http://education.mit.edu/
starlogo/ [accessed 19 November 2010]) are the representatives
of this group. Proprietary tools, such as AgentSheets and iGEN,
belong to the third group. The fourth group comprises numerous toolboxes developed by individual researchers; many of these are prototypes that are difficult to share or reuse for generic purposes (Karssenberg, De Jong and Van Der Kwast, 2007).
Model calibration and validation deal with the initialization and the correctness of a model. In other words, calibration means finding appropriate values for the parameters contained in a model, while validation aims to demonstrate that the model is built correctly. Hence, calibration is a process of initializing a model such
that the model parameters are consistent with the data used to
create the model and they are the best fit with the real-world data
(Verburg et al., 2006). A rigorous and comprehensive illustration
is provided in a paper dealing with the calibration of a SLEUTH
model (Dietzel and Clarke, 2007). However, validation is a chal-
lenging task in the domain of urban studies. In reality, urban
land use changes are spatially specific and temporally dependent.
Spatial variations and temporal changes are also impacted by
macro-scale policies and local socioeconomic conditions, which
are hard to capture in agent models. Moreover, not all variables
used in agent models are numerical. Texts and if-then rules are
often deployed to describe agent behaviors. Therefore, validation
is a critical but challenging task of agent-based modeling.
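Calibration as described above, finding the parameter values most consistent with the observed data, can be sketched as a simple parameter search; the toy growth model and the "observed" values below are illustrative assumptions, not real measurements:

```python
# Calibration as parameter search: sweep a growth-rate parameter and keep
# the value whose simulated series best fits observed data (sum of squared
# errors). The model form and data are illustrative only.

def simulate(growth_rate, start, steps):
    """Toy land-use model: developed area grows by a constant rate each step."""
    series, area = [], start
    for _ in range(steps):
        area *= 1 + growth_rate
        series.append(area)
    return series

def sse(simulated, observed):
    """Sum of squared errors between simulated and observed series."""
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

observed = [105.0, 110.25, 115.76, 121.55]  # pretend real-world data
candidate_rates = [r / 100 for r in range(1, 11)]  # 0.01 .. 0.10
best_rate = min(candidate_rates, key=lambda r: sse(simulate(r, 100.0, 4), observed))
```

Real calibrations, such as the SLEUTH study cited above, search much larger parameter spaces with more sophisticated fit measures, but the structure is the same.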
There are several published papers that provide a comprehen-
sive review of validation techniques designed for spatial models
(Turner, Costanza and Sklar, 1989; Kocabas and Dragicevic,
2009). Based on the summary provided by Kocabas and
Dragicevic (2009), there exist three positions on validation with regard to complex systems models applied to geographical contexts. First, spatial models cannot be validated in a rigorous way
(Oreskes, Shrader-Frechette and Belitz, 1994), or cannot be used
for prediction (Batty, 2005). Second, outcomes of ABMs are
sensitive to the initial conditions of the model (Parker, Manson
and Janssen, 2003). Different initial conditions will lead to varied
evolution pathways and result in different structural patterns.
Therefore, path dependence analysis (Brown, Page and Riolo,
2005) and structural validation have to be performed to verify
an ABM. Third, it is possible to carry out ABM validation, but
specific procedures have to be followed.
The heart of these specific procedures relies on comparison
of simulation results with real-world observations. It involves
running an ABM with a variety of input parameters and observing
the program's outputs (Bratley, Fox and Schrage, 1987). The
values of the output variable derived from the model runs are
compared with the corresponding values of the independent
variable measured in observation (Gilbert and Troitzsch, 1999).
If the output from the model and the data collected from the
real-world observations are sufficiently similar, this provides good evidence in support of the validity of the model (Gilbert
and Terna, 2000).
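This comparison procedure can be sketched as follows; the toy model, the data, and the error tolerance are illustrative assumptions:

```python
import math

# Run a (toy) model under several parameter settings and accept those whose
# output is sufficiently similar to real-world observations, here measured
# by root-mean-square error against an arbitrary tolerance.

def rmse(predicted, observed):
    """Root-mean-square error between two equal-length series."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed))

def validate(model, parameter_sets, observed, tolerance):
    """Keep the parameter settings whose simulated output stays within tolerance."""
    return [p for p in parameter_sets if rmse(model(p), observed) <= tolerance]

def toy_model(rate):
    """Toy growth model: developed area from a base of 100 over three steps."""
    return [100 * (1 + rate) ** t for t in (1, 2, 3)]

observed = [105.0, 110.3, 115.8]
accepted = validate(toy_model, [0.03, 0.05, 0.08], observed, tolerance=1.0)
```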
The most common validation procedure is map comparison, which compares the modeled output against a map representing reality. The relative operating characteristic
(ROC) method is used for assessing model validity in the context
of land-use-cover changes (Pontius and Schneider, 2001).
Chi-square and kappa statistics are often used to quantify the
raster-by-raster map comparisons, though there are obvious
limitations in the use of the kappa index and coincidence
matrix (Barredo, Kasanko and McCormick, 2003; Straatman,
White and Engelen, 2004; Hargrove, Hoffman and Hessburg,
2006). There are many other methods used in model validation.
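The raster-by-raster kappa comparison mentioned above can be sketched on toy data as follows; the two small maps are illustrative only:

```python
# Cohen's kappa between a simulated and an observed land-use map, each
# flattened to a list of category codes (0 = undeveloped, 1 = developed).
# Kappa corrects the raw cell-by-cell agreement for chance agreement.

def kappa(map_a, map_b):
    """Cohen's kappa for two equal-length category rasters."""
    n = len(map_a)
    categories = set(map_a) | set(map_b)
    observed_agreement = sum(a == b for a, b in zip(map_a, map_b)) / n
    expected_agreement = sum(
        (map_a.count(c) / n) * (map_b.count(c) / n) for c in categories
    )
    return (observed_agreement - expected_agreement) / (1 - expected_agreement)

simulated = [0, 1, 1, 0, 1, 0, 0, 1]
reference = [0, 1, 0, 0, 1, 0, 1, 1]
k = kappa(simulated, reference)  # 1.0 would mean perfect cell-by-cell agreement
```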
For instance, model validation on the basis of multiscale approaches was proposed by Kok and colleagues (2001), and validation by pixel-to-pixel comparison was tested by Boots and Csillag (2006). Map comparisons have also been based on landscape metrics (Lei et al., 2005) and on vector polygons using goodness-of-fit measures at various spatial configurations (Xie and Ye, 2007). A hierarchical