drive consumption or migration, such as proximity and the attractiveness of a particular location.
Models can also be used to predict behaviours under a changed set of conditions or to project a future state, for example, the future migration patterns of individuals, or the climate conditions in 2050 and the implications these hold for the environment and the world's economy. Yet there are clearly limits to modelling that need to be recognised.
In the first edition of GeoComputation (GC), Kirkby (2000) wrote about limits to modelling in
the earth and environmental sciences. In contrast, this chapter considers limits to modelling in the
broader field of GC and covers both physical and human systems since both suffer from the same
types of limitations, albeit to differing degrees. Moreover, integrated modelling, which takes both physical and social systems into account, is becoming an increasingly prevalent paradigm for
tackling problems of a regional or global nature (Kabat 2012). Back in 2000, the main issues raised
by Kirkby included inherent predictability and chaos, non-stationarity, hysteresis and the need to
simplify models. He also presented two areas where additional computational power should be funnelled: investigating more complex models and improving the process of calibration, validation and estimation of uncertainty. This latter recommendation has been realised with
increasing frequency over the last 15 years but has also led to the emergence of new potential limitations. Many of the arguments presented by Kirkby (2000) still apply today, and despite changes in computing technology and access to spatially referenced big data, some limits may have been raised while others still remain fundamental barriers to modelling.
Within the more physical side of geography and related disciplines, numerical weather prediction (Lynch 2008) and advances in physical hydrological models (Abbott et al. 1986) were already benefitting from improved computational systems during the 1980s, progress which continues to the present day. The advent of GC was driven by the need to develop models of greater complexity, particularly within human geography, where Openshaw (1995a) called for human systems modelling
to become a new grand challenge area in science. By coupling geographical information systems
(GIS), parallel processing and artificial intelligence (AI), Openshaw (2000) argued that progress
could be made towards tackling this grand challenge. Since then, human systems have increasingly
been recognised as complex systems characterised by self-organisation, emergence, path dependence and criticality (Manson 2001) that require modelling using principles from complexity science in order to be understood. More recently, 11 fundamental challenges that the geographical sciences have the potential to address have been published, forming a pressing research agenda for the early twenty-first century (Committee on Strategic Directions for the Geographical Sciences in the Next Decade; National Research Council 2010). GC has a crucial role to play in helping to answer many
of these questions. However, there are clearly limits to what we can model and understand. This
chapter provides a reflection on what these limitations are and, when possible, what we can do to
overcome them.
18.2 LIMITS OF COMPUTATIONAL POWER
One of the most obvious limits for GC is the computational power required to carry out the research.
The relentless hunger for computing power was a key driver in the origins of GC and moved early
proponents in this field like Stan Openshaw into a parallel computing world (Openshaw 2000). Back
then, the Cray T3D computers that were used to solve problems like spatial interaction modelling
and retail network optimisation were capable of gigaflop computing, or billions of floating point operations per second (Turton and Openshaw 1998). The latest Cray resources at the University of Edinburgh are now capable of more than 800 teraflops (University of Edinburgh 2009), while China's Tianhe-2 is the world's fastest supercomputer as of June 2013 with a speed of 33.86 petaflops (Meuer et al. 2013).
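To make the scale of such problems concrete, the following minimal sketch (not taken from the chapter; the variable names, zone counts and the decay parameter are illustrative assumptions) shows an unconstrained spatial interaction, or gravity, model of the kind mentioned earlier. Every origin-destination pair contributes one entry to the flow matrix, so memory and run time grow with the product of the number of origin and destination zones, which is why national-scale spatial interaction runs were natural candidates for parallel machines such as the Cray T3D.

import numpy as np

def gravity_flows(origins, destinations, costs, beta=0.1):
    # Flow estimate T_ij = O_i * D_j * exp(-beta * c_ij), rescaled so that
    # total predicted flow equals the total outflow from all origins.
    deterrence = np.exp(-beta * costs)               # distance-decay term
    raw = np.outer(origins, destinations) * deterrence
    return raw * (origins.sum() / raw.sum())

# Toy example: 3 origin zones and 4 destination zones (hypothetical values).
O = np.array([100.0, 250.0, 75.0])                   # trips produced per origin zone
D = np.array([80.0, 120.0, 60.0, 200.0])             # attractiveness of each destination
C = np.random.default_rng(0).uniform(1.0, 20.0, size=(3, 4))  # travel costs
print(gravity_flows(O, D, C).round(1))

With thousands of zones the flow matrix holds millions of entries, and calibrating the decay parameter or imposing origin and destination constraints means re-evaluating it many times over, which is the kind of workload that early GC researchers distributed across parallel processors.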
Speed, however, was not the only issue to contend with when GC first emerged. Computer mem-
ory and storage were much smaller and considerably more expensive than they are now. So the
question for current GC research is whether these limits in computer speed, memory and storage