Applications Adapted to the DEISA Grid Infrastructure

In the following, we describe examples of application profiles and use cases that are well suited to the DEISA supercomputing grid, and that can benefit from the computational resources made available by the DECI Extreme Computing Initiative.

International collaboration involving scientific teams that access the nodes of the AIX super-cluster in different countries. Such teams can benefit from a common data repository and a unique, integrated programming and production environment (via common global file systems). Imagine, for example, that team A in France and team B in Germany have allocated resources at IDRIS in Paris and FZJ in Juelich, respectively. They can share a directory in the distributed super-cluster, and for all practical purposes it looks as if they were accessing a single supercomputer.

Extreme computing demands of a challenging project requiring a dominant fraction of a single supercomputer. Rather than spreading a huge, tightly coupled parallel application over two or more supercomputers, DEISA can organize the management of its distributed resource pool so that a substantial fraction of a single supercomputer can be allocated to such a project, which is obviously more efficient than splitting the application and distributing it over several supercomputers.

Workflow applications involving at least two different HPC platforms. Workflow applications are simulations in which several independent codes act successively on a stream of data, the output of one code being the input of the next one in the chain. Often, this chain of computations is more efficient if each code runs on the best-suited HPC platform (e.g. scalar, vector, or parallel supercomputers), where it delivers its best performance. Support of these applications via UNICORE (2008), which allows the whole simulation chain to be treated as a single job, is one of the strengths of the DEISA Grid.

Coupled applications involving more than one platform. In some cases, it does make sense to spread a complex application over several computing platforms. This is the case for multi-physics, multi-scale application codes involving several computing modules, each dealing with one particular physical phenomenon, which only need to exchange a moderate amount of data in real time.
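The coupled multi-physics profile can be sketched in miniature: two modules advance independently and only swap a small interface value each step. The module names (`fluid_step`, `solid_step`) and the relaxation rules are invented here purely for illustration; a real coupled code would exchange boundary data between solvers running on different platforms.

```python
# Minimal sketch of two coupled "physics" modules that exchange only a
# small amount of data (one scalar) per step -- the situation in which
# spreading one application over several platforms pays off.
# All names and update rules are hypothetical, not from DEISA.

def fluid_step(t_fluid, t_wall):
    """Stand-in fluid module: relax towards the wall temperature."""
    return t_fluid + 0.5 * (t_wall - t_fluid)

def solid_step(t_solid, t_fluid):
    """Stand-in solid module: relax towards the fluid temperature."""
    return t_solid + 0.25 * (t_fluid - t_solid)

def couple(t_fluid, t_solid, steps):
    """Each step, the two modules swap one interface value and advance."""
    for _ in range(steps):
        # Both updates use the values from the previous step.
        t_fluid, t_solid = (fluid_step(t_fluid, t_solid),
                            solid_step(t_solid, t_fluid))
    return t_fluid, t_solid

f, s = couple(100.0, 0.0, 50)
# The two fields converge to a common interface temperature.
```

The point of the sketch is that the per-step traffic between the two modules is tiny compared to their internal work, which is exactly what makes cross-platform coupling tolerable despite inter-site latency.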
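The workflow profile above can likewise be sketched in ordinary Python rather than in a UNICORE job description: each stage consumes the previous stage's output. The stage names (`preprocess`, `solve`, `postprocess`) are illustrative only; in DEISA the chain would be submitted as a single UNICORE job, with each stage routed to the best-suited platform.

```python
# Hypothetical three-stage simulation chain: the output of each code is
# the input of the next one, as in a DEISA workflow application.

def preprocess(raw):
    """Stand-in for a setup/mesh code (e.g. run on a scalar machine)."""
    return [x * 2 for x in raw]

def solve(mesh):
    """Stand-in for the main parallel solver."""
    return sum(mesh)

def postprocess(result):
    """Stand-in for an analysis/visualisation code."""
    return f"result={result}"

def run_chain(raw, stages):
    """Feed each stage's output into the next, like a workflow engine."""
    data = raw
    for stage in stages:
        data = stage(data)
    return data

print(run_chain([1, 2, 3], [preprocess, solve, postprocess]))  # prints result=12
```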
HPC APPLICATIONS IN THE CLOUD
For several years, driven by increasing demand for higher performance, efficiency, productivity, agility, and lower cost, Information and Communication Technologies (ICT) have been changing dramatically from static silos with manually managed resources and applications towards dynamic virtual environments with automated and shared services, i.e. from silo-oriented to service-oriented architectures.
With sciences and businesses turning global
and competitive, applications, products and
services becoming more complex, and research
and development teams being distributed, ICT
is in transition again. Global challenges require
global approaches: on the horizon, so-called vir-
tual organizations and partner Grids will provide
the necessary communication and collaboration
platform, with Grid portals for secure access to
resources, applications, data, and collaboratories.
One component that will certainly foster this next-generation scenario is Cloud Computing, as recently offered by companies like Amazon (2007 and 2010) with its Elastic Compute Cloud (EC2), IBM (2008), Google App Engine (Google, 2008; Google Group, 2010), SGI (Cyclone, 2010), and many more. Clouds will become important dynamic components of research and enterprise