Rack-mounted servers have been the predominant form of server and probably still are. The typical
rack-mounted server is measured in unit height; the width is predetermined by the rack you are mounting the
server in. The industry currently uses mainly 19-inch-wide and occasionally 23-inch-wide racks. One rack unit corresponds
to 1.75 inches, and a typical 19-inch rack has room for 42 units. With these basic parameters set, the IT department
is free to choose whatever hardware is 19 inches wide and otherwise fits inside the rack. The benefit of rack-mounted
servers is that you can mix and match hardware from different vendors in the same cage. Rack-mounted servers
can usually take more memory and processors than their blade counterparts. Recent benchmark results available
from the TPC website refer to high-end x86-64 servers of five to seven rack units in height, taking a massive two to four
terabytes of memory. The next generation of Xeon processors, to be released in 2013/2014, will push that limit even
further.
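If you want a feel for the arithmetic involved, the following is a minimal sketch in Python; the server heights used are hypothetical examples, not recommendations. It simply converts rack units to inches and checks how many servers of a given height fit into a standard 42U rack.

# Minimal sketch: rack-unit arithmetic for capacity planning.
# The server heights below are hypothetical examples, not recommendations.

RACK_UNIT_INCHES = 1.75   # one rack unit (1U) is 1.75 inches high
RACK_CAPACITY_U = 42      # a typical 19-inch rack offers 42U

def servers_per_rack(server_height_u: int, rack_units: int = RACK_CAPACITY_U) -> int:
    """How many servers of the given height fit into one rack."""
    return rack_units // server_height_u

def rack_height_inches(rack_units: int = RACK_CAPACITY_U) -> float:
    """Usable mounting height of the rack in inches."""
    return rack_units * RACK_UNIT_INCHES

if __name__ == "__main__":
    for height_u in (1, 2, 5, 7):   # from 1U pizza boxes to 7U high-end servers
        print(f"{height_u}U servers per 42U rack: {servers_per_rack(height_u)}")
    print(f"42U equals {rack_height_inches():.1f} inches of mounting space")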
Blades seem well suited for clustered applications, especially if individual blades can boot from the SAN and
generally have little static configuration on internal storage or the blade itself. Some systems allow the administrator
to define a personality for a blade: the network cards and their associated hardware addresses, the LUN(s)
on which the operating system is stored, and other metadata defining the blade's role. Should a particular blade
in a chassis fail, its metadata can be transferred to another blade, which can be powered up to resume the
failed blade's role. Total outage time can therefore be reduced, and a technician has a little more time to replace the
failed unit.
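To make the idea of a blade personality more concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the field names and the fail_over helper are illustrative only and do not correspond to any vendor's management interface.

# Minimal sketch of a blade "personality" as portable metadata.
# All names are hypothetical; real blade chassis managers expose this
# through their own, vendor-specific interfaces.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class BladePersonality:
    role: str                          # e.g. "cluster node 3" (hypothetical)
    mac_addresses: List[str]           # NIC hardware addresses presented to the blade
    boot_lun_wwns: List[str]           # SAN LUN(s) holding the operating system
    extra_metadata: Dict[str, str] = field(default_factory=dict)

@dataclass
class Blade:
    slot: int
    personality: Optional[BladePersonality] = None
    powered_on: bool = False

def fail_over(failed: Blade, spare: Blade) -> None:
    """Move the failed blade's personality to a spare and power the spare up."""
    spare.personality = failed.personality
    spare.powered_on = True
    failed.personality = None
    failed.powered_on = False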
Rack-mounted servers are very useful when it comes to consolidating older, more power-hungry hardware on
the same platform. They also generally allow for better extension in the form of available PCIe slots compared to a blade.
Harnessing the full power of a 5U or even 7U server requires advanced features from the operating system, such as
support for the Non-Uniform Memory Access (NUMA) architecture of modern hardware. You can read more about making
the best use of your new hardware in Chapter 4.
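As a quick way to see whether the operating system recognizes a NUMA layout at all, the following is a minimal sketch, assuming a Linux host: it counts the NUMA nodes the kernel exposes under /sys/devices/system/node.

# Minimal sketch: count the NUMA nodes a Linux kernel exposes via sysfs.
# Assumes Linux; on a non-NUMA (or non-Linux) system the directory may
# be missing or contain only a single node.

import os
import re

NODE_DIR = "/sys/devices/system/node"

def numa_node_count() -> int:
    """Return the number of NUMA nodes visible to the operating system."""
    if not os.path.isdir(NODE_DIR):
        return 1  # no sysfs NUMA information; treat as a single node
    nodes = [d for d in os.listdir(NODE_DIR) if re.fullmatch(r"node\d+", d)]
    return max(len(nodes), 1)

if __name__ == "__main__":
    print(f"NUMA nodes visible to the OS: {numa_node_count()}")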
Regardless of which solution you decide to invest in for your future consolidation platform, you should consider
answering the following questions about your data center:
•	How much does the data center management charge you for space?
•	How well can the existing air conditioning system cope with the additional heat?
•	Is the raised floor strong enough to withstand the weight of another fully populated rack?
•	Is there enough power to deal with peaks in demand? (A rough power-to-heat estimate is sketched after this list.)
•	Can your new hardware be efficiently cooled within the rack?
•	Is your supporting infrastructure, especially networking and Fibre Channel switches, capable of connecting the new systems in the best possible way? You definitely do not want to end up in a situation where you bought 10Gbps Ethernet adapters, for example, and your switches cannot support more than 1Gbps.
•	Does your network infrastructure allow for a sufficiently large pool of IP addresses to connect the system to its users?
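To put rough numbers behind the power and cooling questions, here is a minimal sketch. The per-server wattage and server count are hypothetical placeholders to be replaced with figures from your vendor's planning guide; the only fixed fact is the conversion of electrical load to heat, roughly 3.412 BTU/hr per watt.

# Minimal sketch: rough power and cooling load for one rack.
# The per-server wattage and server count are hypothetical placeholders;
# use the figures from your vendor's planning guide instead.

WATTS_TO_BTU_PER_HOUR = 3.412   # 1 W of electrical load = ~3.412 BTU/hr of heat

def rack_power_watts(servers: int, watts_per_server: float) -> float:
    """Total electrical load of the rack in watts."""
    return servers * watts_per_server

def rack_heat_btu_per_hour(power_watts: float) -> float:
    """Heat the air conditioning must remove, in BTU/hr."""
    return power_watts * WATTS_TO_BTU_PER_HOUR

if __name__ == "__main__":
    # Example: eight hypothetical 5U servers drawing 1,200 W each at peak.
    power = rack_power_watts(servers=8, watts_per_server=1200.0)
    print(f"Peak rack load: {power / 1000:.1f} kW")
    print(f"Cooling required: {rack_heat_btu_per_hour(power):,.0f} BTU/hr")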
There are many more questions to be asked, and the physical deployment of hardware is an art in its own right.
All vendors provide planning and deployment guides, and surely you can get vendor technicians and consultants
to advise you on the future deployment of their hardware. You might even get the luxury of a site survey wherein a
vendor technician inspects corridors, elevators, raised flooring, and power, among other things, to ensure that the
new hardware fits physically when it is shipped.
Let's not forget at this stage that the overall goal of the consolidation efforts is to reduce cost. If the evaluation of
hardware is successful, it should be possible to benefit from economies of scale by limiting yourself to one hardware
platform, possibly in a few different configurations to cater to the different demands of applications. The more
standardized the environment, the easier it is to deliver new applications with a quick turnaround.
With the basic building block in sight, the next questions concern the peripheral hardware: Which
options do you have in terms of CPU, memory, and expansion cards? Which storage option should you use? The
following sections introduce some of the changes that have happened in the hardware world over the last few years.