unused computing capacity, widely distributed among a tribal arrangement
of PCs, midrange platforms, mainframes, and supercomputers. For exam-
ple, if a company has 5000 PCs, at an average computing power of 333 MIPS,
this equates to an aggregate 1.5 tera (10¹²) floating-point operations per
second (TFLOPS) of potential computing power. As another example, in the
United States, there are an estimated 300 million computers. At an average
computing power of 333 MIPS, this equates to a raw computing power of
100,000 TFLOPS. Mainframes are generally idle 40% of the time; Unix servers
do useful work less than 10% of the time; and most PCs do nothing for 95%
of a typical day. This is an inefficient situation for customers. The
TFLOPS speeds that are possible with grid computing enable scientists
to address some of the most computationally intensive scientific tasks, from
problems in protein analysis that will form the basis for new drug designs
to climate modeling and deducing the content and behavior of the cosmos
from astronomical data.
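The back-of-the-envelope arithmetic above can be reproduced directly; the input figures (5,000 PCs at 333 MIPS, and an estimated 300 million machines nationwide) are taken from the text, and the aggregates follow by simple multiplication:

```python
# Aggregate potential computing capacity, using the figures from the text.
PC_MIPS = 333e6  # 333 MIPS per average machine, in instructions per second

# A single company's fleet of 5,000 PCs.
company = 5_000 * PC_MIPS        # ~1.7e12 ops/s, i.e. roughly 1.5 tera-ops/s

# All ~300 million computers in the United States.
us_wide = 300_000_000 * PC_MIPS  # ~1e17 ops/s, i.e. ~100,000 tera-ops/s

print(f"Company fleet: {company / 1e12:.2f} tera-ops/s")
print(f"US-wide fleet: {us_wide / 1e12:,.0f} tera-ops/s")
```

The company figure comes out at about 1.7 tera-operations per second, which the text rounds to 1.5 TFLOPS; the nationwide figure is almost exactly 100,000 TFLOPS.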
Prior to the deployment of grid computing, a typical business applica-
tion had a dedicated platform of servers and an anchored storage device
assigned to each individual server. Applications developed for such plat-
forms were not able to share resources, and, from an individual server's
perspective, it was not possible, in general, to predict, even statistically,
what the processing load would be at different times. Consequently, each
instance of an application needed to have its own excess capacity to handle
peak usage loads. This predicament typically resulted in higher overall
costs than necessary. To address these lacunae,
grid computing aims at exploiting the opportunities afforded by the syn-
ergies, the economies of scale, and the load smoothing that result from the
ability to share and aggregate distributed computational capabilities and
deliver these hardware-based capabilities as a transparent service to the
end user.
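The load-smoothing argument can be made concrete with a toy calculation (the hourly loads below are invented purely for illustration): dedicated servers must each be provisioned for their own application's peak, whereas a shared pool need only be provisioned for the peak of the combined load, which is smaller because the individual peaks rarely coincide:

```python
import random

random.seed(0)

# Hypothetical hourly loads (arbitrary units) for three applications whose
# usage peaks fall at different hours of the day.
hours = range(24)
apps = [
    [50 + (40 if h == peak else 0) + random.randint(0, 10) for h in hours]
    for peak in (3, 11, 19)
]

# Dedicated platforms: each application carries excess capacity for its own peak.
dedicated = sum(max(load) for load in apps)

# Shared grid pool: capacity is provisioned for the peak of the combined load.
pooled = max(sum(loads) for loads in zip(*apps))

print(f"dedicated capacity: {dedicated}")
print(f"pooled capacity:    {pooled}")
```

Because at most one application peaks in any given hour, the pooled requirement is always below the sum of the individual peaks; this statistical multiplexing is the "load smoothing" the text refers to.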
At the core of grid computing, therefore, are virtualization and virtual
centralization as well as availability of heterogeneous and distributed
resources based on collaboration among and sharing of existing infra-
structures from different organizational domains that together build
the computing grid. The key concept is the ability to negotiate resource-
sharing arrangements among a set of participating parties (providers and
consumers) and then to use the resulting resource pool for some purpose.
The sharing that we are concerned with is not primarily file exchange
but rather direct access to computers, software, data, and other resources,
as is required by a range of collaborative problem-solving and resource-
brokering strategies emerging in industry, science, and engineering. This
sharing is, necessarily, highly controlled, with resource providers and con-
sumers defining clearly and carefully just what is shared, who is allowed to
share, and the conditions under which sharing occurs. A set of individuals
and/or institutions defined by such sharing rules form what we call a vir-
tual organization (VO).
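The controlled-sharing rules that define a VO can be sketched in code. The names and structure below are illustrative only (not a real grid middleware API): each rule states what is shared, who may share it, and the condition under which sharing occurs.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical representation of one VO sharing rule: what is shared,
# who is allowed to share it, and under what conditions.
@dataclass
class SharingRule:
    resource: str                        # e.g. "cpu", "storage", "dataset"
    allowed_members: set                 # participants granted access
    condition: Callable[[dict], bool]    # e.g. off-peak hours only

def may_access(rules, member, resource, context):
    """Grant access only if some rule covers this member, resource, and context."""
    return any(
        r.resource == resource
        and member in r.allowed_members
        and r.condition(context)
        for r in rules
    )

# Example VO: lab-a and lab-b may borrow CPU, but only before 08:00.
vo_rules = [
    SharingRule("cpu", {"lab-a", "lab-b"}, lambda ctx: ctx["hour"] < 8),
]

print(may_access(vo_rules, "lab-a", "cpu", {"hour": 3}))   # True
print(may_access(vo_rules, "lab-c", "cpu", {"hour": 3}))   # False: not a member
print(may_access(vo_rules, "lab-a", "cpu", {"hour": 12}))  # False: outside window
```

The point of the sketch is that sharing is deny-by-default: access is granted only when a provider's explicitly stated rule covers the member, the resource, and the current conditions, exactly the "highly controlled" sharing the text describes.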