needed more compute power, faster disks, and so on, they tended to get what they
wanted. They were the recipients of some nice, powerful Tier 1 equipment.
So, put those two items together: vSphere typically coming up and running on Tier 2
equipment, and database servers accustomed to Tier 1 equipment. If someone migrates
databases into this environment without doing the basic engineering and architecture
work, such as determining the number and speed of the disks supporting the database,
that person could be in trouble. Trust us, we see this all the time. One of the first things
we do when customers say, “It ran better in the physical world than in the virtual
world,” is ask for a side-by-side comparison of the supporting subsystems of each
environment. We ask them to detail the disk type, disk speed, RAID level, paths,
directories, and so on. Although some of this may seem “obvious,” we cannot tell you
how many calls we get concerning SQL performance being “slow” (we love those
ambiguous troubleshooting calls), only to find that storage is sized incorrectly.
Obtain Storage-Specific Metrics
The first storage consideration for virtualizing SQL Server is to do your best to obtain
the I/O and throughput requirements for the databases you are going to virtualize.
Remember to account for the sum of all databases sharing a host or LUN, not just what
one database requires. Although this data is necessary, remember that you must also
factor in the physical hardware's limitations. In addition, I/O (IOPS) and throughput
(MB/s) are two different metrics, and both must be accounted for in your sizing. For
existing databases, this is easier because we can use monitoring tools to gather the data
from the actual system. For net-new applications, well, this can be tough: trying to get
an I/O profile from an application vendor is often akin to pulling teeth.
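To make the aggregation point concrete, here is a minimal sketch, in Python, of rolling
up per-database peak IOPS and throughput (as reported by your monitoring tools) into a
per-LUN requirement. The database names, LUN names, and numbers are hypothetical
placeholders, not figures from any real system.

    # Hypothetical per-database peak metrics gathered from monitoring tools.
    # Each entry: (database name, peak IOPS, peak throughput in MB/s, LUN it lives on)
    db_metrics = [
        ("SalesDB",   3500, 180, "LUN01"),
        ("HRDB",       800,  40, "LUN01"),
        ("ReportsDB", 2200, 310, "LUN02"),
    ]

    # Sum the requirements per LUN -- size for the combined load, not one database.
    lun_requirements = {}
    for name, iops, mbps, lun in db_metrics:
        total = lun_requirements.setdefault(lun, {"iops": 0, "mbps": 0})
        total["iops"] += iops
        total["mbps"] += mbps

    for lun, req in sorted(lun_requirements.items()):
        print(f"{lun}: {req['iops']} IOPS, {req['mbps']} MB/s required")

The per-LUN totals are what the underlying LUN, and the physical spindles or flash
behind it, must be able to deliver, which is why the hardware's limitations have to be
part of the same conversation.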
Along with what the application will drive, you need to understand the workload pattern
of the database. Is the workload OLTP, batch, or DSS? These have different I/O
patterns in terms of read/write ratios and should be taken into consideration when sizing
the subsystem.
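To illustrate why the read/write mix matters, here is a minimal sketch of the common
RAID write-penalty rule of thumb, translating a front-end workload into the back-end
disk IOPS the subsystem must supply. The ratios, RAID levels, and IOPS values are
illustrative assumptions, not recommendations.

    # Rule-of-thumb RAID write penalties (back-end I/Os per front-end write).
    RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

    def backend_iops(frontend_iops, read_ratio, raid_level):
        """Translate front-end IOPS into back-end disk IOPS for a RAID level."""
        reads = frontend_iops * read_ratio
        writes = frontend_iops * (1 - read_ratio)
        return reads + writes * RAID_WRITE_PENALTY[raid_level]

    # Hypothetical mixes: OLTP is write-heavier than a DSS/reporting workload.
    print(backend_iops(10000, read_ratio=0.70, raid_level="RAID5"))  # ~19,000
    print(backend_iops(10000, read_ratio=0.95, raid_level="RAID5"))  # ~11,500

The same 10,000 front-end IOPS becomes roughly 19,000 back-end IOPS for a write-heavy
OLTP mix on RAID 5, but only about 11,500 for a read-heavy DSS mix, which is exactly
why the workload pattern belongs in the sizing exercise.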
Next, size for performance, not capacity. This is where tight integration and teamwork
among the DBAs, vSphere administrators, and SAN administrators are paramount. After
the workload profile has been established and the sizing has been determined, it is key
that all teams work together to put the proper infrastructure in place to support the
workload.
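A quick, hypothetical back-of-the-envelope comparison shows why performance, not
capacity, should drive the design. All figures below are assumptions for illustration
only.

    # Hypothetical sizing: capacity-driven vs. performance-driven spindle counts.
    import math

    db_size_gb = 2000              # database size
    required_backend_iops = 8000   # back-end IOPS the workload needs
    disk_size_gb = 600             # usable capacity per disk (illustrative)
    disk_iops = 175                # rough rule of thumb for a 15K spindle

    disks_for_capacity = math.ceil(db_size_gb / disk_size_gb)               # 4
    disks_for_performance = math.ceil(required_backend_iops / disk_iops)    # 46

    print(disks_for_capacity, disks_for_performance)

Four disks would satisfy the capacity requirement but fall far short of the IOPS
requirement; the performance number is the one that determines how much, and what
kind of, storage must sit behind the database.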
Think this is “common sense”? Well, we once worked with a customer who was looking
to virtualize a database that could sustain 20,000 IOPS for their VoIP recording
program. We had all the best-practice sessions, reviewed all the detail, and told the SAN
team what was coming down the pipe and what they were going to have to architect the
SAN to handle. “No problem,” they told us; they had just gotten a big, bad, shiny array
that could eat I/O for breakfast and spit it out. So we left the customer to build, install,
 