placed on the same VMFS5 datastore on top of a Fusion-io ioDrive2 1.2TB PCIe flash
card. IOMeter was used to drive the IO load and measure the results.
Figure 6.12 Virtual storage adapter performance.
As you can see from the graph in Figure 6.12 (published at
http://longwhiteclouds.com/2014/01/13/vmware-vsphere-5-5-virtual-storage-adapter-performance/),
neither SATA nor LSI Logic SAS shows a significant performance gain when going from
32 outstanding IO operations (OIO) to 64, because each has a maximum device queue
depth of 32. PVSCSI, however, sees a 15% improvement in IOPS between 32 OIOs and
64, even with a single Fusion-io ioDrive2 card as the underlying storage. A storage
array with multiple backend devices will potentially show a much greater improvement
when the queue depth is increased, provided the HBAs and storage processors are
configured to accept the higher queue depth and are not overloaded.
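
If you want to verify this behavior on your own systems, the queue depth limits are
straightforward to inspect and, for PVSCSI, to raise. The commands below are a
minimal sketch, assuming a Linux guest whose PVSCSI-attached disk appears as
/dev/sdb and a Windows guest running the default pvscsi driver; device names and
values will differ in your environment:

    # Linux guest: report the current per-device queue depth
    cat /sys/block/sdb/device/queue_depth

    # Linux guest: raise the PVSCSI queue depth via kernel boot parameters
    # (see VMware KB 2053145)
    vmw_pvscsi.cmd_per_lun=254 vmw_pvscsi.ring_pages=32

    # Windows guest: the equivalent registry setting, followed by a reboot
    REG ADD HKLM\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device /v DriverParameter /t REG_SZ /d "RequestRingPages=32,MaxQueueDepth=254"

    # ESXi host: list devices along with their maximum queue depth
    esxcli storage core device list

Note that raising the guest-side queue depth only helps if the host HBAs and the
array can absorb the deeper queue, as cautioned above.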
Table 6.5 displays the IOMeter performance results for each virtual storage adapter,
including throughput and CPU utilization, for the 8KB IO size. The IO pattern used
was 100% random read, with a single worker thread and a single virtual disk in the
test virtual machine. As the results show, PVSCSI delivers significantly higher IO
performance at lower latency and lower CPU utilization than the other adapter
types.
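
For readers who want to rerun a comparable test on Linux, where IOMeter is less
common, the fio invocation below approximates the same profile. It is a sketch
only: the target device /dev/sdb and the 60-second runtime are assumptions, not
parameters from the original test:

    # 8KB, 100% random read, one worker, fixed number of outstanding IOs
    fio --name=randread-8k --filename=/dev/sdb --ioengine=libaio \
        --direct=1 --rw=randread --bs=8k --iodepth=32 --numjobs=1 \
        --runtime=60 --time_based --group_reporting

Rerunning with --iodepth=64 against each adapter type should reproduce the scaling
behavior in Figure 6.12: flat for SATA and LSI Logic SAS, and roughly 15% more IOPS
for PVSCSI.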
 
 