All nodes should have two similarly connected and configured network adapters: one for
the production (or public) network and one for the heartbeat (or private) network. (A quick
verification sketch follows.)
All nodes should have Microsoft Cluster Services installed for the version of Windows that
you are using.
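If you want to confirm the two-adapter layout from the vSphere side, the following is a minimal sketch using the open source pyVmomi Python library to list a node VM's network adapters and the port groups they attach to. The vCenter hostname, credentials, and the VM name ClusterNode1 are placeholder assumptions, not values from this chapter:

    # Minimal sketch: list a cluster node VM's network adapters via pyVmomi.
    # Hostname, credentials, and VM name below are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()  # lab use only; validate certs in production
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next((v for v in view.view if v.name == "ClusterNode1"), None)
        if vm is None:
            raise SystemExit("VM not found")
        nics = [d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualEthernetCard)]
        for nic in nics:
            # deviceName holds the port group name for standard vSwitch backing
            print(nic.deviceInfo.label, "->", nic.backing.deviceName)
        if len(nics) != 2:
            print("Warning: expected two adapters (public and heartbeat)")
    finally:
        Disconnect(si)

A node that passes this check should show one adapter on the production port group and one on the heartbeat port group.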
Earlier versions of Microsoft Exchange aligned to the shared-storage cluster model that
we've just explained. Exchange 2010, however, introduced a new concept, the database
availability group (DAG). While Exchange can still be installed in an application-based
cluster configuration, it has departed from the common requirement of shared storage
and uses local storage on each node instead. Because of the I/O profile that Exchange can
require, local storage is seen as a better fit for this application. Before we can provide you
with the details on how to build a server cluster running Microsoft Windows Server 2008 on
vSphere, we first need to discuss the different scenarios of how server clusters can be built.
Reviewing VM Clustering Configurations
Building a server cluster with Windows Server 2008 VMs requires one of three different con-
figurations, as follows:
Cluster in a Box The clustering of VMs on the same ESXi host is also known as a cluster in
a box. This is the easiest of the three configurations to set up, and minimal configuration is
needed to make it work.
Cluster across Boxes The clustering of VMs that are running on different ESXi hosts is
known as a cluster across boxes. In earlier versions, VMware placed restrictions on this
configuration: the cluster node's C: drive had to be stored on the host's local storage or a
local VMFS datastore, the cluster shared storage had to reside on external Fibre Channel
disks, and you had to use raw device mappings (RDMs) for the shared storage. In vSphere 4
and vSphere 5, this was relaxed to allow .vmdk files on the SAN and to allow the cluster VM
boot drive (C: drive) on the SAN, but vMotion and vSphere Distributed Resource Scheduler
(DRS) are not supported with Microsoft-clustered VMs.
Physical-to-Virtual Clustering The clustering of a physical server and a VM together is
often referred to as a physical-to-virtual cluster. This configuration of physical and virtual
servers together gives you the best of both worlds, and the only added restriction is that you
cannot use virtual compatibility mode with the RDMs.
We'll examine all three configurations in more detail in the sections that follow; the sketch
below shows one way to tell the configurations apart programmatically.
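The key technical difference among these configurations is the bus-sharing mode on the SCSI controller that hosts the shared disks: a cluster in a box uses virtual bus sharing, while a cluster across boxes or a physical-to-virtual cluster uses physical bus sharing. As a minimal sketch (again using pyVmomi, with the same placeholder assumptions as before), the following function prints each SCSI controller's sharedBus setting for a VM:

    # Minimal sketch: report SCSI controller bus sharing for one VM.
    # Pass any vim.VirtualMachine object, such as the one located in the
    # earlier sketch. "virtualSharing" indicates a cluster in a box;
    # "physicalSharing" indicates a cluster across boxes or a
    # physical-to-virtual cluster; "noSharing" means no shared-disk
    # controller is present.
    from pyVmomi import vim

    def report_bus_sharing(vm):
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualSCSIController):
                print(f"{vm.name}: {dev.deviceInfo.label} "
                      f"sharedBus={dev.sharedBus}")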
Building Windows-based server clusters has long been considered an advanced technol-
ogy practiced only by those with high technical skills in implementing and managing high-
availability environments. Although this might be more rumor than truth, it is certainly a more
complex solution to set up and maintain, and running on top of a hypervisor can increase this
complexity.
Although you might succeed in setting up clustered VMs, you may not receive support for your
clustered solution if you violate any of the clustering restrictions put forth by VMware. The
following list summarizes the dos and don'ts of clustering VMs as published by VMware:
32-bit and 64-bit VMs can be configured as nodes in a server cluster.
Majority node set clusters with application-level replication (for example, Microsoft
Exchange 2007 cluster continuous replication) are now supported.
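Because vMotion and DRS are not supported with Microsoft-clustered VMs, a common practice is to set a per-VM DRS override so the cluster nodes stay put. The following is a pyVmomi sketch under the same placeholder assumptions as earlier (the cluster name Prod-Cluster is hypothetical); it lists any per-VM DRS overrides so you can confirm the clustered nodes are excluded from automated moves:

    # Minimal sketch: list per-VM DRS overrides on a vSphere cluster.
    # Hostname, credentials, and cluster name are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        cluster = next(c for c in view.view if c.name == "Prod-Cluster")
        # drsVmConfig holds per-VM DRS overrides; an entry with
        # enabled=False (or a manual behavior) keeps DRS from migrating
        # that node automatically.
        for cfg in cluster.configurationEx.drsVmConfig:
            print(cfg.key.name, "enabled:", cfg.enabled,
                  "behavior:", cfg.behavior)
    finally:
        Disconnect(si)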