With the end of the Cold War, the supercomputer market was no longer large or rich enough to support the design of new custom chips, and the remaining manufacturers moved to highly parallel arrays of cheap consumer chips with fast interconnections between them, an approach known as massively parallel processing (MPP). These chips were essentially the same as those used in workstations and personal computers. Moreover, physical limitations will eventually prevent chip designers from making individual chips any faster. The consequence is the descent of parallel computing from the elevated heights of supercomputing to the desks of users.
More recently, a variety of parallel computing systems have been developed and brought into
applications. These include parallel computers (supercomputers), computer clusters (CPU and GPU),
computational grids and multi-node CPU computers as outlined in the previous section. The appli-
cation of parallel computing is not just limited to the field of computer science; rather, it is applied
in many more fields. Cosnard and Trystram (1995) set out how parallel computing has been adopted
in many fields, including weather and climate forecasting, genetics, nuclear engineering, mineral
and geological exploration and astrophysical modelling. This diversification has been accompanied
by other innovations, specifically the use of the GPU of a graphics card to process computationally
intensive tasks or programs. Graphics cards were initially devised to render images faster on a computer; however, the processing cores on a graphics card can also be used to run general-purpose algorithms in parallel.
3.4 PARALLEL COMPUTING AND GEOGRAPHY
The uptake of parallel computing has had a short and patchy history in geography and the social
sciences. In most cases, parallel computing has been used to speed up the application of an existing algorithm, essentially offering data parallelism; it has not been used to develop fresh approaches for resolving complex geospatial problems. Speeding up existing algorithms is by far the easier of the two options, and such failings have led some to argue that faster processing is not important. Does it really matter if you
have to wait a little longer for an answer? Perhaps program runtimes are not important because all
applications will eventually get faster as chip speeds increase. Such simplistic arguments, however,
completely miss the point. It is the nature of what can or cannot be achieved, in a timely, viable and
worthwhile manner, which has changed with the arrival of parallel computing. The overall concept encompasses a synergistic combination of powerful hardware and software, exemplified by the early work of Openshaw and Turton (1996), who used a Cray T3D to classify small
area UK census data. Another early example is ZDES (Zone DESign) (Openshaw and Rao 1995;
Openshaw and Schmidt 1996), which is a system that allows users to design new zoning systems
from smaller regions and is highly computationally intensive. By breaking ZDES out of the GIS framework into a loosely coupled system, it became possible to use available parallel resources to speed up the problem.
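The data parallelism described above can be sketched in a few lines: the same existing algorithm is applied independently to disjoint parts of a data set, with the results gathered back in order. The `classify` function and its threshold below are hypothetical stand-ins for a real serial algorithm, such as a census-area classifier; they are not taken from the text.

```python
# A minimal sketch of data parallelism, assuming a per-record
# computation that needs no communication between records.
from concurrent.futures import ThreadPoolExecutor

def classify(value):
    # Toy per-record computation standing in for real number crunching.
    return "urban" if value > 0.5 else "rural"

def classify_serial(data):
    # The existing sequential algorithm.
    return [classify(v) for v in data]

def classify_parallel(data, workers=4):
    # Each worker handles a share of the records; map() preserves order.
    # Threads are used here only for portability of the sketch; a real
    # application would use processes, MPI ranks or GPU threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(classify, data))
```

Because each record is processed independently, the parallel run returns exactly the same result as the serial one; only the wall-clock time changes.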
Areas in which speed-up would be of the utmost importance include cases in which number crunching is replaced with model crunching, as in meta-modelling (Sreekanth and Datta 2011); cases where bootstrapping is used to generate statistical population distributions and confidence limits (e.g. Nossent et al. 2011); and cases where extended runs are performed on large data sets for long periods of time, commensurate with our need to understand the surface impact of global warming. The implementation of such products in the form of real-time data-processing applications would also be of particular significance to geography and GC.
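The bootstrapping mentioned above is a good example of why such workloads benefit from parallelism: each resample is computed independently, so they can be farmed out across processors. A minimal sketch of percentile confidence limits for a sample mean follows; the function name, the 95% level and the sample values are illustrative assumptions, not from the text.

```python
# A minimal bootstrap sketch: resample with replacement many times,
# then read confidence limits off the sorted resampled means.
import random

def bootstrap_ci(sample, n_resamples=2000, alpha=0.05, seed=42):
    rng = random.Random(seed)  # seeded for reproducibility
    n = len(sample)
    # Each resample draws n values with replacement and is independent
    # of the others, so this loop is embarrassingly parallel.
    means = sorted(
        sum(rng.choice(sample) for _ in range(n)) / n
        for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

With thousands of resamples over a large data set, distributing the resampling loop is what turns an overnight run into an interactive one.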
Fogarty (1994) pointed out that the US list of grand challenges for HPC (Office of Science
and Technology 1987) lacked any reference to the social sciences and in response argued for the
inclusion of more research on the use of supercomputing in GIS. This continued the trend of the
preceding years, when the only interest geographers had shown in parallel computing was to perform basic GIS operations more quickly (Costanza and Maxwell 1991; Faust et al. 1991; Kriegel
et al. 1991). In this, however, Fogarty was mistaken as to what a grand challenge was; while building a faster GIS is important as a tool for geography, it lacks the novelty and scale of many of the complex problems faced by geographers. Openshaw (1995) subsequently proposed that human systems