GC may appear to some to be technique-dominated; however, as previously discussed, the driving force is, and has to be, the geo part, since GC is not intended to become an end in itself. Nevertheless, GC is unashamedly a problem-solving approach, and one ultimate goal is an applied technology. Like GIS, it is essentially applied in character, but this emphasis should in no way diminish the need for solutions that rest on a better theoretical understanding of how geographical systems work and of the processes that are involved. This focus on scientific understanding and theoretical knowledge provides a strong contrast with GIS. The challenge now is to create new tools that are able to suggest or discover new knowledge and new theories from the increasingly data-rich spatial world, generated by the success of GIS, in which we now live. In this quest for theory and understanding, GC using HPC is a highly relevant technology.
There is an argument that GC would have developed sooner had HPC technology been more advanced. Indeed, until as recently as the early 1990s, neither the power nor the memory capacity of the leading HPC machines was sufficient to handle many of the problems of immediate geographical interest. However, HPC is a relative concept. Most of the mathematical models developed by geographers made use of classical HPC hardware capable of only a few thousand arithmetic operations per second, whereas today's HPC hardware is many millions of times faster. It is still called HPC, but it is like comparing the speed of a lame slug with that of a rocket!
One way of explaining what these changes in HPC hardware mean is to ask how you would do your research if the PC on your desk were suddenly 10,000 times faster and more powerful. It is likely that some researchers would not know what to do with it, and some would not want it, but others would spot major new possibilities for using the computer power to do geography (and geo-related science) differently. It is this type of researcher who will switch to GC and be well placed to benefit from the next two or three generations of HPC. However, merely identifying applications that are by their nature potentially suitable for parallel hardware is not sufficient justification for investing in the necessary parallel programming effort. The applications also have to present a formidable computational challenge. What point is there in converting serial code that runs on a single-CPU workstation in 30 min so that it runs on a parallel supercomputer with 512 CPUs in 10 s? Certainly there is a software challenge, but the computational intensity of the task simply may not justify the effort involved. An additional criterion is that the parallel application should offer some significant extra benefit that could not be realised without it: there should be some evidence of either new or better science, or of new or improved results. The parallelisation task is not an end in itself; in fact, it is totally irrelevant in the longer term. The biggest gains will come from those applications that were previously impossible but can now be solved and, as a result, offer something worth knowing or being able to do.
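The arithmetic behind the 30 min example above illustrates the point. Assuming the timings given and that all 512 CPUs are used (the figures are illustrative, not from the source), the speedup and parallel efficiency would be roughly

$$\text{speedup} = \frac{1800\ \text{s}}{10\ \text{s}} = 180, \qquad \text{efficiency} = \frac{180}{512} \approx 0.35.$$

A 180-fold speedup at about 35% efficiency is technically respectable, yet saving half an hour of workstation time will seldom repay weeks of parallel programming effort.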
What has changed dramatically during the 1990s is the maturity of parallel supercomputing, the continued speed-up of microprocessors and the availability (after 20 years or so) of compilers that bring parallel computing within the existing skill domain of computationally minded geographers. The standardisation of a highly parallel Fortran compiler and of the message passing interface (MPI) eases the task of using parallel supercomputers in many areas of geographic application, as well as producing reasonably future-proof, portable codes (Openshaw and Turton, 1999). When viewed from a broader GC perspective, a major revolution in how geography and other spatial sciences may be performed is well underway; it is just that many researchers in these disciplines have either not yet realised that it is happening or have not understood its possible implications for their interests.
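To give a flavour of the portability MPI offers, the sketch below shows (in C, for which standard MPI bindings exist alongside Fortran) how a raster-style geographical calculation might be divided among processors. It is a minimal illustration under assumed conditions, not code from Openshaw and Turton (1999); the grid size and the per-row work are hypothetical placeholders.

```c
#include <mpi.h>
#include <stdio.h>

#define NROWS 4096   /* hypothetical number of grid rows in a spatial data set */

int main(int argc, char **argv)
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this processor's id */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);  /* total number of processors */

    /* Simple block decomposition: each processor takes a contiguous band of
       rows, a common way of parallelising a raster or grid-based model. */
    int rows_per_proc = NROWS / nprocs;
    int first = rank * rows_per_proc;
    int last  = (rank == nprocs - 1) ? NROWS : first + rows_per_proc;

    double local_sum = 0.0;
    for (int i = first; i < last; i++) {
        local_sum += 1.0;   /* stand-in for the real work done on row i */
    }

    /* Combine the partial results on processor 0. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("processed %.0f rows on %d processors\n", global_sum, nprocs);

    MPI_Finalize();
    return 0;
}
```

In principle the same source compiles and runs unchanged on anything from a small workstation cluster to a 512-CPU supercomputer, which is precisely the future-proofing argument made above.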
The opportunities are essentially fourfold:
1. To speed up existing computer-bound activities so that more extensive experimentation can be performed
2. To improve the quality of results by using computationally intensive methods to reduce the number of assumptions and remove the shortcuts and simplifications forced by computational constraints that are no longer relevant