However, as you can probably already see, if the virtual server workloads in this example were
correctly sized and managed, then a significant amount of data center space, power, cooling,
server hardware, CPU, and memory could be saved by deploying one physical server rather
than four.
This “deploy only what you actually need” approach provided by virtualization explains why the
technology moved so quickly from the development lab to enterprise data centers. In fact, other
than smartphone technology, it's hard to find another technological innovation in recent years
that has been adopted as widely and rapidly as virtualization.
This rapid adoption is highly justifiable; virtualization brought IT departments an efficient data
center with levels of flexibility, manageability, and cost reduction that they desperately needed,
especially during the server boom of the mid-2000s and then the recession of the late 2000s.
Moreover, once virtualization is deployed and the benefits of replacing old servers with fewer new
servers are realized, the technology then goes on to deliver more infrastructure functionality, and
interestingly, functionality that wasn't available with traditional physical servers.
Indeed, it's rare to find a SQL Server environment now that doesn't use virtualization technologies
in some way. In larger environments, companies might only be deploying it on developer workstations
or in the pre-production environment; but increasingly I am finding small, mid-size, and even large
infrastructures that host their entire production environment in a virtualized manner.
History of Virtualization
The concepts behind the virtualization technology people are deploying today are nothing new, and
you can actually trace them back to IBM's mainframe hardware from the 1960s! At the time, mainframe
hardware was very expensive, and customers wanted every piece of hardware they bought to be
working at its highest capacity all of the time in order to justify its huge cost. The architecture
IBM used partitioned a physical mainframe into several smaller logical mainframes that could each
run an application seemingly concurrently. The cost saving came from each logical mainframe only
ever needing a portion of the mainframe's total capacity. While hardware costs did not decrease,
utilization increased, and therefore so did value, pleasing the finance director.
During the 1980s and 1990s, PC-based systems gained in popularity, and as they were considerably
cheaper than mainframes and minicomputers, the use of virtualization disappeared from the
technology stack for a while. However, in the late 1990s, VMware, a virtualization software vendor,
developed an x86-based virtualization solution that enabled a single PC to run several operating
system environments concurrently. I remember the first time I saw this running and was completely
baffled! A backup engineer had a laptop running both Windows and Linux; from within Windows you
could watch the virtual server boot with its own BIOS and then start up another operating system.
At the time, very few people knew much about the Linux operating system, least of all me, so the
idea of running it on a Windows laptop looked even more surreal!
This example was a typical use of VMware's original software in the late 1990s and early 2000s, and
for a few years this was how its small but growing customer base used the technology. Only a few
years later, a version of the virtualization software hosted on its own Linux-based operating
system was released, and server-based virtualization solutions began appearing in data centers.
Fundamentally, this server-based virtualization software is the basis of the platform virtualization
solutions we use today, in the biggest and smallest server environments alike.
 