Cloud computing, some say, is simply a return to the old model of sharing
time on a central, powerful mainframe. This sentiment is accurate only in superficial ways.
Certainly the concept of using the network to access data and then processing it some-
where else is the same, but dig a bit deeper and you'll notice that there is something
much more profound happening.
Like Big Data, the term cloud computing is a buzzword that is often used to refer to
any number of concepts, from Web applications to Internet services to virtualized
servers. The real difference between what is known in the mainframe world as time-
sharing and cloud computing is that, rather than being tied to a single computer across
a network, users are increasingly served by a dynamic computing fabric. No longer is
user data tied to a single, powerful machine; data is sharded, replicated across many
data centers, and made redundant to protect against catastrophic failure.
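To make the idea concrete, the following minimal Python sketch shows one way a
key can be mapped to a shard and copied to several data centers. The shard_for
and placements functions, the data-center names, and the three-replica policy
are all hypothetical illustrations; production systems use far more
sophisticated placement schemes, such as consistent hashing.

import hashlib

# Hypothetical data centers; a real computing fabric would discover these dynamically.
DATA_CENTERS = ["us-east", "us-west", "eu-central", "ap-south"]
REPLICAS = 3  # keep three copies so the loss of one site is not catastrophic

def shard_for(key, num_shards=64):
    """Map a key to a shard with a stable hash (Python's built-in hash()
    varies between runs, so a digest is used instead)."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

def placements(key):
    """Pick REPLICAS distinct data centers to hold copies of a key's shard."""
    start = shard_for(key) % len(DATA_CENTERS)
    return [DATA_CENTERS[(start + i) % len(DATA_CENTERS)]
            for i in range(REPLICAS)]

# A user's profile lives in three places at once:
print(placements("user:alice"))  # e.g. ['eu-central', 'ap-south', 'us-east']

The particular hash is beside the point; what matters is that no single machine
holds the only copy of a user's data.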
Security maven Bruce Schneier once wrote, “The old time-sharing model arose
because computers were expensive and hard to maintain. Modern computers and net-
works are drastically cheaper, but they're still hard to maintain.”2 Indeed, managing
machines can be challenging. Individual machines break, and maintaining hardware
can be expensive and time-consuming. Network access to an account on a single
mainframe was once the only way to obtain powerful computing resources because
the mainframe itself was very expensive. Now the common administrative challenges
are being abstracted, and specialists in massive data centers focus on security, connec-
tivity, and data integrity. Computing as a utility is possible because economies of scale
and access to the network have made compute cycles very cheap.
A decade into the 21st century, tech pundits began to claim that the PC is dead.
The reality is a bit more complicated. In an era of device independence, a
user, whether on a laptop, on a phone, in the car, or at an airport check-in kiosk,
can access the same data connected to their identity. The average consumer gains a
great deal of convenience from utility computing, such as easy sharing of documents
and freedom from worrying about the integrity of data on local hardware.
User data is already moving off of local machines and into utility computing envi-
ronments. As the data goes, so goes the need to process it locally. Until networking
speeds become faster (and they may never be fast enough), it's most efficient to process
data where it lives. In many cases, data is already living in the data center thanks to
the growth of Web and mobile applications.
Moreover, many Web applications require huge compute clouds to function at all.
Search engines like Google use massive data-processing algorithms to rank every pub-
lic page on the Internet. The social graphs that power LinkedIn and Facebook could
not be built without the huge amounts of user data being generated every day. The
cloud is hungry, and it eats data. As a result, there is a great deal of inter-
est in Web services that live completely in the cloud. Data processing as a service is a
trend we will see more and more of as the field matures.
2. www.schneier.com/blog/archives/2009/06/cloud_computing.html
 