Traditional data centers housed a multitude of applications built on just as many platforms, each with its own requirements, such as the operating system and the type of processor used. That is why these data centers sometimes have different areas where hardware is grouped together for particular applications or operating systems. With cloud computing and standardization, a cloud data center can use a single, homogeneous hardware architecture, which makes it easy to connect all the components of the data center together and also makes it cheaper, because suppliers often give discounts when you buy in bulk. It also reduces maintenance costs: spare parts are easy to source, and personnel need to be trained on only a single hardware architecture, so there is no need to hire experts in different hardware platforms. It certainly makes things simpler from a technical standpoint.
Moreover, compared to traditional data centers, which were meant to cater to the differing needs of departments within an organization, cloud computing data centers are meant to serve many customers using one or a handful of applications. In essence, cloud data centers do only a few things, but they do them simultaneously and in bulk, which means processors and hard drives are doing the same things over and over. The workload is repetitive and simple, which allows operators and manufacturers to tune the hardware for a specific type of processing, making it much more efficient.
Higher Efficiency Requirement
As data centers grow larger and larger, so, by simple addition, does their power requirement. More power means more heat, and more heat means more cooling. More of everything means more capital expenditure and more maintenance expenditure.
So because of cloud computing, data center hardware has been forced to evolve. It is becoming more efficient. For example, new server CPUs are clocked at speeds comparable to those of previous generations, but they contain more cores and better pipelining, so they can process more threads faster while producing less heat and consuming less power. To put that in perspective, the 2006-released Intel Xeon 7150N “Tulsa” CPU, built on a 65 nm process, based on the old NetBurst architecture, and running at 3.5 GHz, was rated at 150 watts thermal design power (TDP) and cost $2,622 at the time, while the September 2013 Xeon E5-2697 v2 “Ivy Bridge-EP,” with 12 cores on a 22 nm process running at 2.7 GHz, is rated at only 130 watts and costs about the same at around $2,614. It is simply amazing what a mere seven years can do in terms of hardware technology advancement.
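The efficiency gap is easy to quantify with a back-of-the-envelope calculation from the figures above. The sketch below uses cores × clock speed as a very crude throughput proxy (an assumption for illustration; it ignores per-core IPC improvements, which make the real gap even larger), and the two-core count for the Tulsa part is also an assumption not stated in the text:

```python
# Illustrative comparison using the TDP and price figures quoted above.
# cores x GHz is only a rough proxy; real throughput depends on the
# microarchitecture, not just core count and clock speed.

tulsa = {"cores": 2, "ghz": 3.5, "tdp_w": 150, "price_usd": 2622}   # Xeon 7150N, 2006 (core count assumed)
ivy   = {"cores": 12, "ghz": 2.7, "tdp_w": 130, "price_usd": 2614}  # Xeon E5-2697 v2, 2013

def naive_throughput(cpu):
    """Very rough proxy for parallel throughput: cores x clock (GHz)."""
    return cpu["cores"] * cpu["ghz"]

for name, cpu in (("Tulsa", tulsa), ("Ivy Bridge-EP", ivy)):
    t = naive_throughput(cpu)
    print(f"{name}: {t:.1f} core-GHz, {t / cpu['tdp_w']:.3f} core-GHz per watt")
```

Even by this crude measure, the newer part delivers several times the work per watt at roughly the same purchase price.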
With efficiency in mind, servers are now designed to be more energy and load aware, drawing minimal power when workloads are light, especially at idle. Older generations of servers still drew above 60 percent of their rated power when idle, while newer servers are designed to draw only about 25 percent to 50 percent of their total rated power.
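Those idle percentages translate directly into operating cost. A minimal sketch, using the 60 percent and 25 percent figures from the text; the 500-watt rated draw and $0.10/kWh electricity price are assumed example values, not from the text:

```python
# Rough annual idle-energy cost for a single server, comparing the
# old (~60% of rated power at idle) and new (~25%) generations.

rated_power_w = 500          # assumed rated draw of one server (example value)
price_per_kwh = 0.10         # assumed electricity price in USD (example value)
hours_per_year = 24 * 365

def idle_cost(idle_fraction):
    """Yearly electricity cost if the server sat idle all year
    while drawing this fraction of its rated power."""
    kwh = rated_power_w * idle_fraction * hours_per_year / 1000
    return kwh * price_per_kwh

old = idle_cost(0.60)   # older generation
new = idle_cost(0.25)   # newer, load-aware design
print(f"old: ${old:.0f}/yr, new: ${new:.0f}/yr, saved: ${old - new:.0f}/yr per server")
```

Multiplied across the tens of thousands of servers in a large cloud data center, the per-server savings add up quickly, before even counting the reduced cooling load.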
When considering equipment for efficiency, it is often a good idea to find out how efficient the individual parts are and how they actually perform in real-world applications. Manufacturers sometimes measure factors like efficiency using methods that may not correspond to the kinds of workloads you run.