The maximum SQL Server data file size is 16TB, and the maximum log file size is 2TB. For further maximums,
see http://technet.microsoft.com/en-us/library/ms143432.aspx .
Although having lots of 62TB virtual disks is unrealistic, having a few virtual disks > 2TB is possible and potentially desirable for large SQL Servers. You could use a single virtual disk for your transaction logs (maximum 2TB per transaction log file), and you could use a single virtual disk for your backup drive. Both transaction logs and backups are sequential in nature and can benefit from the capacity of a > 2TB VMDK without the performance drawbacks that would be likely for data files. Your underlying storage platform would need to support a VMFS datastore on a LUN large enough to hold all of these large VMDKs. You should also consider your restore times when using large VMDKs: if you can't restore a large VMDK within your SLAs, it is not a good choice. Just because you can use Jumbo VMDKs doesn't mean you always should.
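As a quick illustration of the restore-time point, the following sketch estimates how long a full restore of a large VMDK might take. The 500MB/s throughput figure is purely an assumption; substitute your own measured backup/restore rate.

def restore_hours(vmdk_size_tb, throughput_mb_per_s=500.0):
    # Convert TB to MB, divide by throughput (MB/s), then convert seconds to hours.
    size_mb = vmdk_size_tb * 1024 * 1024
    return size_mb / throughput_mb_per_s / 3600

for size_tb in (2, 8, 16, 62):
    print(f"{size_tb:>3}TB VMDK: ~{restore_hours(size_tb):.1f} hours to restore")

If the resulting time exceeds your SLA, smaller VMDKs (or a different backup strategy) are the better choice.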
Caution
You can't extend virtual disks > 2TB online. You must shut down the virtual machine first and extend the virtual disk offline through the vSphere Web Client. This is because the disk needs to be in the GPT format. Once a virtual disk has been extended to > 2TB, each time you need to extend it further you must shut down the VM. Alternatively, you can hot-add a new virtual disk to the VM online at any time, and that new virtual disk can be > 2TB. Jumbo VMDKs can only be managed through the vSphere Web Client, because the traditional VI Client (VMware C# Client) only supports VMware vSphere 5.0 features; all newer features are only available through the Web Client. We recommend you create all SQL data file, Temp DB file, transaction log, and backup drives using the GPT format.
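If you script the hot-add path rather than using the Web Client, the following minimal sketch shows one way to do it with the pyVmomi SDK. It assumes pyVmomi is installed, that vm is a vim.VirtualMachine object you have already looked up, and that the chosen SCSI controller key (1000) and unit number (2) are free in your environment; adjust all of these to suit.

from pyVmomi import vim

def hot_add_disk(vm, size_tb=4, controller_key=1000, unit_number=2):
    # New virtual disk sized in KB (4TB > 2TB, so the guest must use GPT).
    disk = vim.vm.device.VirtualDisk()
    disk.capacityInKB = size_tb * 1024 * 1024 * 1024
    disk.controllerKey = controller_key
    disk.unitNumber = unit_number

    # Thin-provisioned flat backing; vSphere generates the VMDK file name.
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    backing.thinProvisioned = True
    disk.backing = backing

    # Device change: add the device and create its backing file.
    disk_spec = vim.vm.device.VirtualDeviceSpec()
    disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    disk_spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    disk_spec.device = disk

    # Reconfigure the running VM; hot-adding a new disk does not require a shutdown.
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_spec]))

Inside the guest, the new disk still needs to be initialized with the GPT partition format before Windows can use capacity beyond 2TB, in line with the recommendation above.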
VMFS Heap Size Considerations with Monster VMs and Jumbo VMDKs
ESXi 4.x and 5.x prior to 5.5 used a VMFS heap value to control how much memory was consumed to manage the VMFS file system and the open or active VMDK capacity on a single ESXi host. This limit was not documented in the vSphere Configuration Maximums document, and by default, with a 1MB block size on ESXi 5.0 GA, it limited a host to 8TB of total open VMDKs before errors could occur. The maximum on ESXi 5.0 GA was 25TB with a 1MB block size, which required adjusting the advanced parameter VMFS3.MaxHeapSizeMB. This was later increased to 60TB by default on ESXi 5.0 by applying the latest patches, and in ESXi 5.1 Update 1. The only downside was that 640MB of RAM was consumed for the VMFS heap.
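To put that memory cost in perspective, the quick calculation below uses only the figures quoted above (640MB of heap supporting 60TB of open VMDK capacity); the 256GB of host RAM is an assumed example, not a figure from the text.

heap_mb = 640            # VMFS heap consumed after the patches / 5.1 Update 1
open_capacity_tb = 60    # open VMDK capacity that heap supports (1MB block size)
host_ram_gb = 256        # assumed example host size

print(f"~{heap_mb / open_capacity_tb:.1f}MB of heap per TB of open VMDK capacity")
print(f"{heap_mb}MB of heap is ~{heap_mb / (host_ram_gb * 1024) * 100:.2f}% of a {host_ram_gb}GB host's RAM")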
 