Let us assume a virtual machine whose guest operating system supports up to 4 virtual
CPUs or sockets. It can be configured in the following ways:
4 virtual CPUs/sockets with 1 core per CPU/socket
2 virtual CPUs with 2 cores per CPU
1 virtual CPU with 4 cores
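The combinations above are simply the ways 4 vCPUs can be factored into sockets and cores per socket. The short Python sketch below is purely illustrative of that arithmetic; it is not tied to any particular hypervisor's configuration API, and real platforms may restrict which topologies the guest OS or its license will accept.

```python
# Minimal sketch: enumerate socket/core topologies that yield a target vCPU count.
# Illustrative only; hypervisors expose this as settings such as the number of
# virtual sockets and cores per socket.

def vcpu_topologies(total_vcpus: int):
    """Return every (sockets, cores_per_socket) pair whose product equals total_vcpus."""
    return [(sockets, total_vcpus // sockets)
            for sockets in range(1, total_vcpus + 1)
            if total_vcpus % sockets == 0]

if __name__ == "__main__":
    for sockets, cores in vcpu_topologies(4):
        print(f"{sockets} virtual socket(s) x {cores} core(s) per socket = 4 vCPUs")
```

Running it prints the same three options listed above (1 x 4, 2 x 2, and 4 x 1).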
NIC Quantity, Speeds, and Configurations
A host in a virtual environment needs multiple channels for data traffic, and the
only way to provide them is via network interface cards. NICs are also required for con-
necting each virtual machine to the network, because a VM's vNIC needs to be bound to
a physical NIC in some networking configurations. Theoretically, you could make do with a
single NIC on the host and share it among all the virtual machines residing on that host.
However, saying that such a configuration is not recommended would be a gross understatement:
it would work, but network performance would be severely limited.
In network virtualization, we can typically assign a single NIC to a vSwitch or vRouter
to act as the gateway between the virtual network and the physical network through that one
NIC. But once performance, security, and failure precautions are taken into account, we need
multiple NICs in a single bare-metal host. To facilitate proper networking, we need a
high-bandwidth path between the host and the network core. That means NIC teaming and
bandwidth aggregation, which already requires at least two physical NICs. Long story short,
we need a lot of NICs in our host, but how many really depends on the performance require-
ments, the workflow, and the technical and physical limitations of the host hardware as well
as the hypervisor that needs to run it all.
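To make the sizing question concrete, here is a back-of-the-envelope sketch in Python. All of the figures in the example call (VM count, per-VM bandwidth, link speed, spare links) are hypothetical planning inputs chosen for illustration, not recommendations from the text.

```python
# Back-of-the-envelope sizing of a NIC team on a virtualization host.
import math

def nics_needed(vm_count: int, avg_mbps_per_vm: float,
                nic_speed_mbps: float, redundancy: int = 1) -> int:
    """Minimum physical NICs to carry the aggregate VM traffic,
    plus spare links for failover (redundancy)."""
    aggregate_mbps = vm_count * avg_mbps_per_vm
    for_bandwidth = math.ceil(aggregate_mbps / nic_speed_mbps)
    return for_bandwidth + redundancy

# Hypothetical example: 40 VMs averaging 100 Mbps each on 1 GbE links,
# with one spare link for failover.
print(nics_needed(vm_count=40, avg_mbps_per_vm=100,
                  nic_speed_mbps=1000, redundancy=1))   # -> 5
```

Even this simple estimate shows why a single shared NIC quickly becomes a bottleneck and why teaming starts at two physical NICs.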
So when the host's hardware is planned, all the performance requirements have to be
established and the design has to fit the budget. This includes the number of physical pro-
cessor sockets, the memory slots and memory capacity, and of course the number of slots
available for NICs on the motherboard and its expansion options. The hypervisor maximums
also have to be taken into consideration, though these maximums are often larger than what
the hardware configuration can accommodate. For example, the VMware ESX hypervisor can
support as many as 32 e1000 1 Gigabit Ethernet ports (Intel PCI-X), but most high-end server
motherboards may only be able to support up to eight physical NICs aside from the built-
in internal NICs because of PCI slot limitations. Reaching the hypervisor's limit would
therefore require additional hardware beyond a single standard server.
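In other words, the effective NIC ceiling is whichever limit is hit first: the hypervisor maximum or the physical slots available. The sketch below expresses that comparison; the 32, 8, and 2 in the example mirror the illustrative figures above and an assumed pair of onboard NICs, and should be swapped for your own platform's numbers.

```python
# Sketch: usable NIC count is bounded by the hardware first, then by the hypervisor.

def effective_nic_limit(hypervisor_max: int, pci_slots: int,
                        onboard_nics: int = 0) -> int:
    """Return the smaller of the hypervisor maximum and the physically possible count."""
    physically_possible = pci_slots + onboard_nics
    return min(hypervisor_max, physically_possible)

# Hypothetical: hypervisor allows 32 ports, but the board has 8 slots plus 2 onboard NICs.
print(effective_nic_limit(hypervisor_max=32, pci_slots=8, onboard_nics=2))  # -> 10
```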
Internal Hardware Compatibility
When you're building or choosing the host hardware, the mix-and-match method is not
going to cut it. Extensive research must be done on full hardware compatibility
when you're building a host from scratch. For example, every part of the host server (such
as the processor or NIC) has to be server grade rather than consumer grade. There are often
server versions of consumer-grade computer parts, which are optimized for workflows typically