may need hundreds of web servers to handle the traffic. Being able to acquire so many
servers in hours is invaluable. Most companies would need months, if not a year, to set up
that many servers.
An advertising agency previously would not have had the ability to do so. Now, without even
the knowledge of how to build a datacenter, an ad agency can have all the systems it needs.
Possibly more important is that at the end of the campaign, the servers can be “given back”
to the cloud provider. Doing that the old way, with physical hardware, would be impractical.
Another non-cost advantage for many companies is that cloud computing has enabled other
departments to make an end-run around IT departments that have become recalcitrant or
difficult to deal with. The ability to get the computing resources they need by clicking a
mouse, instead of spending months arguing with an uncooperative and underfunded IT
department, is appealing. We are ashamed to admit that this is true, but it is often cited as a
reason people adopt cloud computing services.
Scaling and High Availability
Meeting the new requirements of scaling and high availability in the cloud computing era
requires new paradigms. Lower latency is achieved primarily through faster storage tech-
nology and faster ways to move information around.
In this era SSDs have replaced disks. SSDs are faster because there are no moving parts:
there is no wait for a read head to move to the right part of a disk platter, no wait for
the platter to rotate to the right position. SSDs are more expensive per gigabyte, but the
total cost of ownership is lower. Suppose you require 10 database server replicas to provide
enough horsepower to deliver a service at the required latency. Using SSDs would cost more
per machine, but the same latency can be provided with fewer machines, often just two
or three in total. The SSDs are more expensive, it is true, but not as expensive as the seven
additional machines they replace.
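To make the trade-off concrete, here is a minimal back-of-the-envelope sketch. All of the prices and fleet sizes below are hypothetical placeholders, not figures from the text; actual numbers vary widely by vendor and workload.

```python
# Back-of-the-envelope comparison of HDD vs. SSD fleets that deliver the same
# service latency. All prices below are hypothetical placeholders.

HDD_REPLICAS = 10          # replicas needed to hit the latency target on disks
SSD_REPLICAS = 3           # replicas needed to hit the same target on SSDs
SERVER_BASE_COST = 5_000   # hypothetical cost of a server minus storage, in dollars
STORAGE_GB = 2_000         # usable storage per replica
HDD_COST_PER_GB = 0.03     # hypothetical dollars per gigabyte
SSD_COST_PER_GB = 0.10     # hypothetical dollars per gigabyte (higher per GB)

def fleet_cost(replicas: int, cost_per_gb: float) -> float:
    """Total hardware cost of a fleet: servers plus their storage."""
    return replicas * (SERVER_BASE_COST + STORAGE_GB * cost_per_gb)

hdd_total = fleet_cost(HDD_REPLICAS, HDD_COST_PER_GB)
ssd_total = fleet_cost(SSD_REPLICAS, SSD_COST_PER_GB)

print(f"HDD fleet: {HDD_REPLICAS} replicas, ${hdd_total:,.0f}")
print(f"SSD fleet: {SSD_REPLICAS} replicas, ${ssd_total:,.0f}")
# With these placeholder numbers, the SSD fleet is cheaper overall even though
# each SSD gigabyte costs more: the seven avoided machines dominate the total.
```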
Service latency is also reduced by reducing the latency of internal communication. In
the past, information sent between two machines went through many layers of technology.
The information was converted to a “wire format,” which meant making a copy ready for
transmission and putting it in a packet. The packet then went through the operating
system's TCP/IP layer and device layer, through the network, and then reached the other
machine, where the process was reversed. Each of these steps added latency. Most or all of
this latency has now been removed through technologies that permit direct memory access
between machines. Sometimes these technologies even bypass the CPU of the source or
destination machine. The result is the ability to pass information between machines nearly
as fast as reading local RAM. The latency is so low that it has caused underlying RPC
mechanisms to be redesigned from scratch to take full advantage of the new capabilities.
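As a rough illustration of the traditional path described above, the sketch below serializes a record into a wire format and sends it over an ordinary TCP socket; the serialization copy, the length-prefixed packet, the kernel's TCP/IP and device layers, and the mirror-image steps on the receiver are each a copy and a source of latency. The message format and helper names are invented for the example.

```python
import json
import socket
import struct

def send_record(sock: socket.socket, record: dict) -> None:
    """Traditional path: serialize, copy into a length-prefixed packet, hand to TCP."""
    wire = json.dumps(record).encode("utf-8")      # copy: convert to a wire format
    packet = struct.pack("!I", len(wire)) + wire   # copy: build the packet with a length prefix
    sock.sendall(packet)                           # kernel TCP/IP and device layers add more copies

def recv_record(sock: socket.socket) -> dict:
    """Receiver reverses each step, paying the same costs in the other direction."""
    header = _read_exact(sock, 4)
    (length,) = struct.unpack("!I", header)
    wire = _read_exact(sock, length)
    return json.loads(wire.decode("utf-8"))

def _read_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket, handling short reads."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf
```

A direct-memory-access path removes these intermediate copies by writing into memory the remote machine has registered for that purpose, which is why RPC frameworks built on such technologies end up looking very different from this sketch.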