abstraction of their service. Google App Engine and Microsoft Azure raise the level of abstraction to managed runtimes and offer automatic scaling services, which are a better match for some customers, but not as good a match to the material in this topic as AWS.
Amazon Web Services
Utility computing goes back to commercial timesharing systems and even batch processing
systems of the 1960s and 1970s, where companies only paid for a terminal and a phone line
and then were billed based on how much computing they used. Many efforts since the end of
the timesharing era have tried to offer such pay-as-you-go services, but they were often met with failure.
When Amazon started offering utility computing via the Amazon Simple Storage Service
(Amazon S3) and then Amazon Elastic Compute Cloud (Amazon EC2) in 2006, it made some
novel technical and business decisions:
Virtual Machines. Building the WSC using x86 commodity computers running the Linux operating system and the Xen virtual machine solved several problems. First, it allowed Amazon to protect users from each other. Second, it simplified software distribution within a WSC, in that customers need only install an image and then AWS automatically distributes it to all the instances being used. Third, the ability to kill a virtual machine reliably makes it easy for Amazon and customers to control resource usage. Fourth, since Virtual Machines can limit the rate at which they use the physical processors, disks, and network, as well as the amount of main memory, AWS could offer multiple price points: the lowest price option by packing multiple virtual cores on a single server, the highest price option of exclusive access to all the machine resources, as well as several intermediate points. Fifth, Virtual Machines hide the identity of older hardware, allowing AWS to continue to sell time on older machines that might otherwise be unattractive to customers if they knew their age. Finally, Virtual Machines allow AWS to introduce new and faster hardware by either packing even more virtual cores per server or simply by offering instances that have higher performance per virtual core; virtualization means that offered performance need not be an integer multiple of the performance of the hardware.
Very low cost. When AWS announced a rate of $0.10 per hour per instance in 2006, it was a startlingly low amount. An instance is one Virtual Machine, and at $0.10 per hour AWS allocated two instances per core on a multicore server. Hence, one EC2 compute unit is equivalent to a 1.0 to 1.2 GHz AMD Opteron or Intel Xeon of that era.
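As a quick back-of-the-envelope illustration of what that rate means in practice, the short Python sketch below converts $0.10 per instance-hour into monthly and yearly costs; the 1000-instance cluster is an assumed example, not a figure from the text.

    # Back-of-the-envelope arithmetic for the 2006 EC2 rate quoted above.
    RATE_PER_INSTANCE_HOUR = 0.10       # USD per instance-hour (2006)

    HOURS_PER_MONTH = 24 * 30           # rough 30-day month
    HOURS_PER_YEAR = 24 * 365

    one_instance_month = RATE_PER_INSTANCE_HOUR * HOURS_PER_MONTH
    one_instance_year = RATE_PER_INSTANCE_HOUR * HOURS_PER_YEAR
    cluster_hour = 1000 * RATE_PER_INSTANCE_HOUR   # hypothetical 1000-instance burst

    print(f"One instance for a month:  ${one_instance_month:.2f}")   # $72.00
    print(f"One instance for a year:   ${one_instance_year:.2f}")    # $876.00
    print(f"1000 instances for 1 hour: ${cluster_hour:.2f}")         # $100.00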
(Initial) reliance on open source software. The availability of good-quality software that had no licensing problems or costs associated with running on hundreds or thousands of servers made utility computing much more economical for both Amazon and its customers. More recently, AWS started offering instances including commercial third-party software at higher prices.
No (initial) guarantee of service. Amazon originally promised only best effort. The low cost was so attractive that many could live without a service guarantee. Today, AWS provides availability SLAs of up to 99.95% on services such as Amazon EC2 and Amazon S3. Additionally, Amazon S3 was designed for 99.999999999% durability by saving multiple replicas of each object across multiple locations. That is, the chances of permanently losing an object are one in 100 billion. AWS also provides a Service Health Dashboard that shows the current operational status of each of the AWS services in real time, so that AWS uptime and performance are fully transparent.
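To put these service levels in perspective, the following short Python sketch converts the percentages quoted above into allowed downtime and expected object losses; the 10-million-object population is an assumed illustration, and the durability figure is treated as a per-year rate, which is how AWS states it.

    # Converting the availability and durability figures quoted above into
    # more tangible terms. The 10-million-object population is illustrative.

    availability = 0.9995           # EC2/S3 availability SLA from the text
    durability = 0.99999999999      # S3 durability design target (11 nines)

    MINUTES_PER_YEAR = 365.25 * 24 * 60

    # Downtime permitted by a 99.95% availability SLA.
    downtime_min_per_year = (1 - availability) * MINUTES_PER_YEAR
    print(f"Allowed downtime: about {downtime_min_per_year:.0f} minutes/year "
          f"(~{downtime_min_per_year / 12:.0f} minutes/month)")

    # Expected annual losses at 11 nines of durability.
    objects = 10_000_000
    expected_losses = (1 - durability) * objects
    print(f"Expected losses among {objects:,} objects: {expected_losses:.4f}/year, "
          f"i.e., roughly one object every {1 / expected_losses:,.0f} years")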