Proprietary vs. Open Source
Earlier in this chapter we provided an overview of Type 1 (bare-metal) and Type 2 (hosted)
hypervisors. We also discussed how hypervisor technology became an enabler for large-scale
data centers, which originated in large enterprises, financial institutions, and similar
organizations and were then centralized and commoditized into public and private cloud offerings.
In the pre-cloud machine stack, we had silicon (bare metal: CPU, memory controllers,
memory, and so on), the BIOS (Basic Input/Output System), and the operating system sitting on
top of the machine as the interface between the bare metal and the end user.
Moore's Law, Increasing Performance, and
Decreasing Enterprise Usage
Since the early '80s, Moore's law has dictated the trend of increasing processor performance.
As chip companies raced to raise clock speeds and pack more transistors into each new
generation of processors, raw processor performance has increased manifold and continues
to climb. In addition to CPUs, we now have graphics processing units (GPUs) and
field-programmable gate arrays (FPGAs) serving as coprocessors and accelerators for cloud
applications ranging from scientific simulations to high-speed options pricing and
high-frequency trading (HFT) in finance. GPUs are massively parallel processors that have
scaled performance along a different path: instead of increasing the clock of the processor,
they use large numbers of lightweight processing cores, making them ideal for parallel
workloads. This massive increase in performance still needs to be matched against the
software most commonly used in enterprises, and this is where we see the importance of
hypervisors in increasing the density of users on a single server node.
Most enterprises have their own private data centers, deployed in-house, that host the
software applications used across the enterprise. Intel supplies a large share of the
processors that go into the servers deployed in these enterprise data centers. For the sake
of this real-world example, we will use the popular Intel Xeon series processor as a
benchmark for the performance available per CPU in a data center server. A third-generation
Xeon supports at least four physical cores, which Hyper-Threading extends to at least eight
logical cores. This provides plenty of raw computing performance for applications to
consume. However, most of the applications used within any given enterprise cannot consume
100 percent, or even 80 percent, of the available raw performance. This does not mean that
all enterprise applications are "compute shy." There are compute-heavy workloads, especially
those that clean up, process, and visualize the massive data streams an enterprise
generates, but such workloads are the exception rather than the norm.
The compute load of most applications will consume, at best, a quarter of the available raw
compute power on a physical server. Combine this with the fact that not every enterprise
worker is connected to the data center at the same time; there will always be periods of
peak usage and periods of low usage. This is where hypervisor-based virtualization comes
into play, allowing several such workloads to share a single physical server.
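As a rough sketch of this density argument, the following Python snippet estimates how many
such application instances a hypervisor could pack onto one server. The utilization and
concurrency figures are illustrative assumptions only, not measurements taken from the Xeon
example above.

# Back-of-the-envelope server consolidation estimate.
# All figures below are illustrative assumptions, not vendor data.

def consolidation_estimate(per_app_peak_utilization: float,
                           peak_concurrency: float,
                           target_utilization: float = 0.8) -> int:
    """Estimate how many application instances one physical server can host.

    per_app_peak_utilization: fraction of the server's raw compute one
        instance needs at its busiest (e.g., 0.25 = a quarter of the box).
    peak_concurrency: fraction of instances that are actually busy at the
        same time during peak hours.
    target_utilization: how full we are willing to run the server.
    """
    effective_demand = per_app_peak_utilization * peak_concurrency
    return int(target_utilization / effective_demand)

if __name__ == "__main__":
    # An app that peaks at a quarter of the server, with roughly half of the
    # instances busy at the same time, against an 80 percent utilization ceiling.
    instances = consolidation_estimate(per_app_peak_utilization=0.25,
                                       peak_concurrency=0.5)
    print(f"Approximate instances per server: {instances}")  # prints 6

Under these assumed numbers, a server that would otherwise sit three-quarters idle can host
roughly half a dozen instances, which is the density gain the hypervisor makes possible.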