thus far found only limited acceptance within the discipline'. However, this reflects her confusion as
to what GC is all about and her mistaken belief that GC is not new but has been practised for at least
a decade as a form of Dobson's automated geography; see Dobson (1983, 1993). Indeed one of the
reasons for the author's insistence in this topic on GeoComputation, with a capital G and a capital C in the middle of the term, rather than geocomputation, all in lower case, is to try to emphasise the newness and to capture some of the excitement of what we understand GC to be about.
However, in writing about the nature of GC, there is clearly a danger in associating it too closely
with this or that exemplar technique. For example, Longley (1998a) and Macmillan (1998) both
make several comments about the use of a highly automated form of exploratory geographical anal-
ysis in GC, and occasionally, they appear to think that GC is sometimes believed to be little more
than this. The Geographical Analysis Machine (GAM) of Openshaw and associates (Openshaw,
1987; Openshaw and Craft, 1991) is the subject of this criticism, but GAM was only really ever used
as an illustration of one form or style of GC. Longley writes, 'GeoComputation has been caricatured
as uninformed pattern-seeking empiricism in the absence of clear theoretical guidance' (Longley,
1998a, p. 8). Maybe it should be added: by the misinformed! It was always intended to be more than
this; indeed, this is an extremely biased, prejudiced and blinkered view. Again Longley writes,
'A central assumption of much of this work is that machine “intelligence” can be of greater import
than a priori reasoning, by virtue of the brute force of permutation and combination - “might
makes right” in this view of GeoComputation' (Longley, 1998a, p. 12). This is a gross misunder-
standing of the origins of GAM and also a reflection of a faith in theory and hypothesis that is quite
unreal! It might help to know that GAM was developed for two main reasons: (1) knowledge of
the data precluded proper a priori hypothesis testing and (2) pre-existing hypotheses which could
be legitimately tested reflected knowledge and theories that may well be wrong. For instance, one
might speculate that disease rates will be higher within exactly 5.23 km of a specific point location.
Suppose this general hypothesis is correct except that the critical distance is 1.732 km! The hypothesis would be rejected and you would never be any wiser about the form of the correct hypothesis! How silly! So why not use a GAM that would indicate the location of patterns, treating all locations and distances equally? Of course, you would then have the problem of understanding and explaining the
results but at least you would have found something if there was anything there that was sufficiently
simple that GAM could find it. This does not reduce human thinking; it merely increases its utility.
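To make this contrast concrete, the following is a minimal sketch, in Python, of the kind of circular scan that GAM performs; the function name, the parameters and the simple Poisson screening rule used here are illustrative assumptions only, not a reproduction of Openshaw's actual implementation, which relied on Monte Carlo significance testing of each circle.

    import numpy as np
    from scipy.stats import poisson

    def gam_scan(points, cases, population, radii, grid_step, alpha=0.002):
        # points: (n, 2) array of area centroid coordinates
        # cases: (n,) observed case counts; population: (n,) population at risk
        # radii: circle radii to test; grid_step: spacing of the search grid
        overall_rate = cases.sum() / population.sum()
        xs = np.arange(points[:, 0].min(), points[:, 0].max() + grid_step, grid_step)
        ys = np.arange(points[:, 1].min(), points[:, 1].max() + grid_step, grid_step)
        flagged = []
        for x in xs:
            for y in ys:
                d = np.hypot(points[:, 0] - x, points[:, 1] - y)
                for r in radii:
                    inside = d <= r
                    obs = cases[inside].sum()
                    exp = population[inside].sum() * overall_rate
                    # Flag circles whose observed count is improbably high under
                    # a Poisson model with the study-wide rate (illustrative test).
                    if exp > 0 and poisson.sf(obs - 1, exp) < alpha:
                        flagged.append((x, y, r, int(obs), float(exp)))
        return flagged

The point of the sketch is simply that every grid location and every radius is examined on equal terms, so no single a priori distance such as 5.23 km ever has to be guessed in advance.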
There is nothing wrong with building pattern hunting machines that are able to be more suc-
cessful at this task than we are, particularly in complex or multidimensional search spaces. Nor
does it necessarily imply that pattern detection is sterile because there is no understanding. Any
researcher with more than a modicum of intelligence or scientific curiosity will want to know why a
pattern exists here and not there. Pattern detection or the discovery of empirical regularities that are
unusual or unexpected can be an important first step in scientific understanding. It does not have to
be an end in itself! We should not be so ready to neglect inductive approaches based on data mining
technologies. No one is insisting that GC has to be exclusively inductive, only that this is a useful
technology in relevant circumstances. What is so wrong about building machines dedicated to the
inductive search for new theories or new models or new ideas? We would be daft to neglect any new
opportunities to augment human reasoning, thinking and deductive powers and processes by the use
of machine-based technologies. No one is suggesting that we relegate all thinking to machines;
not yet anyway! However, it is hard for some to accept or appreciate what the possible benefits may
be. Longley writes, 'Technology empowers us with tools, yet conventional wisdom asserts that we
need consciously and actively to use them in developing science without surrendering control to the
machine' (Longley, 1998a, p. 5). Yes, of course, but when this comment is applied to GAM, it shows an amazing naivety.
The meaning of GC is therefore no great mystery. It is essentially a computationally intensive
science-based paradigm used to study a wide range of physical and human geographical systems.
It is neither a grab-bag set of tools nor, of necessity, only rampant empiricism; it need not be purely inductive, nor need it be without theory or philosophy! The distinctive features relate to its central