deep into the development process of academia and industrial companies.
The introduction of standard hardware com-
ponents was accompanied by a similar trend in
software. With Linux there is a standard operating
system available today. It is also able to span the
wide range from desktop systems to supercomput-
ers. Although we still see different architectural
approaches using standard hardware components,
and although Linux has to be adapted to these
various architectural variations, supercomputing
today is dominated by an unprecedented stan-
dardization process.
Standardization of supercomputer components
is mainly a side effect of an accelerated standard-
ization process in information technology. As a
consequence of this standardization process we
have seen a closer integration of IT components
over the last years at every level. In supercom-
puting, the Grid concept (Foster and Kesselman,
1998) best reflects this trend. First experiments in coupling supercomputers were reported fairly early by Smarr and Catlett (1992), at a time when the approach was still called metacomputing. DeFanti et al. (1996) showed further impressive metacomputing results in the I-WAY project. Excellent results were achieved in experiments by the Japan Atomic Energy Agency (Imamura et al., 2000). Resch
et al. (1999) carried out the first transatlantic
metacomputing experiments. After initial efforts
to standardize the Grid concept, it was finally
formalized by Foster et al. (2001).
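To give a flavour of what coupling supercomputers means in practice, the following minimal sketch shows a plain MPI program of the kind used in such metacomputing experiments. It is illustrative only: grid-enabled MPI libraries such as PACX-MPI or MPICH-G2 were designed so that standard MPI code of this form could run across two coupled machines without modification; the setup implied by the comments is an assumption, not a description of the experiments cited above.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    double local, sum = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process contributes one value; when the job is started through a
       grid-enabled MPI library spanning two machines, this reduction crosses
       the wide-area link transparently. */
    local = (double)rank;
    MPI_Reduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("coupled run with %d processes, sum = %.1f\n", size, sum);

    MPI_Finalize();
    return 0;
}

The point of the example is not the computation but the programming model: it stays unchanged, and the wide-area coupling is handled entirely below the MPI interface.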
The promise of the Grid was twofold. On the one hand, Grids allow the coupling of computational and other IT resources, making any resource and any level of performance available to any user worldwide at any time. On the other hand, the Grid allows easy access to and use of supercomputers and thus reduces the cost of supercomputing simulations.
DEFINITIONS

When we talk about supercomputing, we typically consider it to be defined by the TOP500 list (TOP500, 2008). This list, however, mainly summarizes the fastest systems in terms of predefined benchmarks. A clear definition of a supercomputer is not given. For this article we define the purpose of supercomputing as follows:
We want to use the fastest system available
to get insight that we could not get with
slower systems. The emphasis is on getting
insight rather than on achieving a certain
level of speed.
Any system (hardware and software combined)
that helps to achieve this goal and fulfils the criteria
given is considered to be a supercomputer. The definition itself implies that supercomputing and simulation form a third pillar of scientific research and development, complementing the empirical and theoretical approaches.
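The TOP500 criterion mentioned above can be made concrete with a small sketch. The list ranks systems by the maximal LINPACK performance (Rmax) they achieve, usually reported alongside the theoretical peak (Rpeak). The code below computes both for a hypothetical machine; all figures in it are invented for illustration and do not describe any real system.

#include <stdio.h>

int main(void)
{
    /* Hypothetical system parameters, chosen only for illustration. */
    double cores           = 100000.0; /* total processor cores               */
    double clock_ghz       = 2.5;      /* clock frequency in GHz              */
    double flops_per_cycle = 4.0;      /* floating-point ops per core and cycle */

    /* Theoretical peak performance in Tflop/s. */
    double rpeak = cores * clock_ghz * flops_per_cycle / 1000.0;

    /* Assumed LINPACK result; real systems stay below their peak. */
    double rmax = 0.75 * rpeak;

    printf("Rpeak = %.1f Tflop/s, Rmax = %.1f Tflop/s (%.0f%% efficiency)\n",
           rpeak, rmax, 100.0 * rmax / rpeak);
    return 0;
}

The sketch also illustrates the limitation noted above: such figures capture speed under one benchmark, not the insight gained, which is exactly what the definition given here emphasizes instead.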
Often, simulation complements experiments. Increasingly, however, supercomputing has reached a point where it can provide insight that cannot be achieved even with experimental facilities. Among the fields where this happens are climate research, particle physics, and astrophysics. In these fields, supercomputing becomes a key technology, if not the only possible one, for achieving further breakthroughs.
There is also no official scientific definition of the Grid, as the focus of the concept has changed over the years. Initially, supercomputing was the main target of the concept. Foster and Kesselman (1998) write:
A computational grid is a hardware and software
infrastructure that provides dependable, consis-
tent, pervasive, and inexpensive access to high-end
computational capabilities.