Hypervisors such as Xen 2 have managed to solve the performance issue by reducing
virtualization management overhead: they enable near-native performance in the virtual
machines (VMs) and allow the VMs direct access to the network.
High-speed networking is another important factor in HPC, which requires fast communication
between clusters of servers and storage (Shainer et al., 2010). As more cloud vendors
provide faster network connections, this issue too has become less of a problem for HPC.
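To make the idea of direct network access concrete, the following sketch uses the libvirt Python bindings to hand a physical network adapter straight to a guest via PCI passthrough, one common way hypervisors let VMs bypass the host's virtual network stack. It is a minimal illustration only: the connection URI, the guest name hpc-node-01 and the PCI address are placeholder assumptions, not details from the text.

    import libvirt  # Python bindings for the libvirt virtualization API

    # Hostdev XML for PCI passthrough: it gives the guest direct control of
    # a physical NIC, bypassing the host's virtual network stack. The PCI
    # address is a placeholder; the real one can be found with lspci.
    HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    dom = conn.lookupByName("hpc-node-01")  # hypothetical running guest
    dom.attachDevice(HOSTDEV_XML)           # guest now drives the NIC directly
    conn.close()

With the device attached, network traffic no longer passes through the hypervisor's emulation layer, which is what allows near-native throughput and latency.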
Hence, a number of scientific and research communities have begun to look at the
possibility of using cloud computing in order to take advantage of its economic and efficiency
benefits. The experience of the Medical College of Wisconsin Biotechnology and
Bioengineering Center in Milwaukee in the USA is one example. Researchers at this institute
are making protein research (a very expensive undertaking) more accessible to scientists
worldwide, largely by renting the massive processing power available on Amazon's
cloud-based servers.
One of the major challenges for many laboratories setting up proteomics programs has
been the need to obtain and maintain the computational infrastructure required to analyze
the vast flow of proteomics data generated by mass spectrometry instruments, which are
used to determine the elemental composition and chemical structure of molecules. Cloud
computing provided that capability at a very competitive cost. This meant that many more
users could set up and customize their own systems, and investigators could analyze their
data in greater depth than was previously attainable, making it possible for them to learn
more about the systems they are studying (La Susa, 2009; Halligan et al., 2008).
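As a rough sketch of what renting such processing power looks like in practice, the snippet below uses boto3, today's AWS SDK for Python, to start a compute instance for an analysis run and release it afterwards. The region, image ID and instance type are hypothetical placeholders; the original work predates this SDK, so the code illustrates only the pay-per-use pattern, not the institute's actual setup.

    import boto3  # AWS SDK for Python; assumes credentials are configured

    # Connect to EC2 (the region and all IDs below are illustrative placeholders).
    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # Rent a compute-optimized server for one proteomics analysis run.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical analysis-pipeline image
        InstanceType="c5.4xlarge",        # billed only while the server runs
        MinCount=1,
        MaxCount=1,
    )
    instance = instances[0]
    instance.wait_until_running()  # block until the server is available
    print("Analysis node ready:", instance.id)

    # ... run the mass spectrometry searches, copy off the results, then:
    instance.terminate()  # stop paying as soon as the job is done

The point of the pattern is that the laboratory owns no hardware: capacity exists, and is billed, only for the duration of the analysis.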
Major cloud computing providers such as IBM and Google are also actively promoting
cloud computing as a tool for aiding research. In 2007, Google and IBM announced a cloud
computing university initiative designed to improve computer science students' knowledge of
highly parallel computing practices in order to address the emerging paradigm of large-scale
distributed computing. In 2009, the National Science Foundation (NSF) awarded nearly US$
5 million in grants to fourteen universities through its Cluster Exploratory (CLuE) program to
help facilitate their participation in the IBM/Google initiative. The initiative's goal was to
provide the computing infrastructure for leading-edge research projects that could help
promote a better understanding of our planet, our bodies and many other subjects.
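The highly parallel practices in question were exemplified by the map/reduce programming model popularized by Google. As a hedged illustration of the idiom (not the initiative's actual curriculum or code), the toy word count below splits the map phase across local processes and folds the partial results together in a reduce phase:

    from functools import reduce
    from multiprocessing import Pool

    def map_words(line):
        # Map step: emit a (word, 1) pair for every word in one line.
        return [(word.lower(), 1) for word in line.split()]

    def reduce_counts(acc, pairs):
        # Reduce step: fold partial (word, count) pairs into one dictionary.
        for word, n in pairs:
            acc[word] = acc.get(word, 0) + n
        return acc

    if __name__ == "__main__":
        lines = ["cloud computing for science", "science in the cloud"]
        with Pool() as pool:
            mapped = pool.map(map_words, lines)    # map phase runs in parallel
        counts = reduce(reduce_counts, mapped, {}) # reduce phase merges results
        print(counts)  # e.g. {'cloud': 2, 'computing': 1, ...}

At cluster scale the same two-phase structure lets thousands of machines process a dataset with no shared state, which is what makes the paradigm suitable for large-scale distributed computing.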
A number of other US government departments are also beginning to explore the merits
of cloud computing for scientific purposes. Two notable examples are the Department of
Energy (DOE) and NASA, the American space agency. The DOE has earmarked US$ 32
million for a cloud infrastructure project aimed at exploring the ability of cloud computing to
provide a cost-effective and energy-efficient computing service for scientists to accelerate
discoveries in a variety of disciplines, including analysis of scientific data sets in biology,
climate change and physics. The DOE's centers at the Argonne Leadership Computing
Facility (ALCF) in Illinois and the National Energy Research Scientific Computing Center
(NERSC) in California are hoping to be able to determine how much of DOE's mid-range
computing needs could and should run in a cloud environment and what hardware and
software features are needed for science clouds. Due to the exploratory nature of this project,
it was named 'Magellan', in honor of the Portuguese explorer who led the first expedition to
sail around the world.
2 A hypervisor is a virtual machine monitor (VMM) that allows multiple operating systems to run concurrently on a
host computer.