The report contains a variety of metadata at the top, including how often the query
executes, its average concurrency, and the byte offset where the worst-performing
instance of the query was found in the log file. There is a tabular printout of the
numeric metadata, including statistics such as the standard deviation.9
This is followed by a histogram of the response times. Interestingly, you can see that
this query has a double-peak histogram, under Query_time distribution. It usually
executes in the hundreds of milliseconds, but there's also a significant spike of queries
that execute about three orders of magnitude faster. If this log were from Percona
Server, we'd have a richer set of attributes in the query log, so we'd be able to slice and
dice the queries to determine why that happens. Perhaps those are queries against
specific values that are disproportionately common, so a different index is used, or
perhaps they're query cache hits, for example. This sort of double-peak histogram
shape is not unusual in real systems, especially for simple queries, which will usually
have only a few alternative execution paths.
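If you wanted to chase down a spike like this yourself, a couple of quick checks can
help. The following is only a sketch, with hypothetical table and column names: compare
the plan the optimizer chooses for a very common value versus a rare one, and check
whether the query cache is absorbing some of the executions:

    -- Does the optimizer pick a different index for a disproportionately
    -- common value than for a rare one? (hypothetical table and column)
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42\G
    EXPLAIN SELECT * FROM orders WHERE customer_id = 99999\G

    -- Are some executions query cache hits? Watch these counters while the
    -- workload runs (the query cache must be enabled for this to matter)
    SHOW GLOBAL STATUS LIKE 'Qcache_hits';
    SHOW GLOBAL STATUS LIKE 'Com_select';

A plan that switches indexes depending on the value, or a Qcache_hits counter that
climbs along with the workload, would explain the fast peak in the histogram.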
Finally, the report detail section ends with little helper snippets to make it easy for you
to copy and paste commands into a terminal and examine the schema and status of the
tables mentioned, and an EXPLAIN-ready sample query. The sample contains all of the
literals and isn't “fingerprinted,” so it's a real query. It's actually the instance of this
query that had the worst execution time in our example.
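For instance, with a hypothetical table named orders, the kind of commands the report
lets you paste straight into the mysql client look roughly like this:

    SHOW TABLE STATUS LIKE 'orders'\G
    SHOW CREATE TABLE `orders`\G
    -- The EXPLAIN-ready sample keeps the real literals from the worst instance
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND status = 'shipped'\G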
After you choose the queries you want to optimize, you can use this report to examine
the query execution very quickly. We use this tool constantly, and we've spent a lot of
time tweaking it to be as efficient and helpful as possible. We definitely recommend
that you get comfortable with it. MySQL might gain more sophisticated built-in
instrumentation and profiling in the future, but at the time of writing, logging queries
with the slow query log or tcpdump and running the resulting log through
pt-query-digest is about as good as you can get.
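To make that workflow concrete, here is a rough sketch of both capture paths; the log
location and packet count are just placeholder values:

    # Digest an existing slow query log
    pt-query-digest /var/lib/mysql/slow.log > slow-report.txt

    # Or, when you can't use the slow query log, capture traffic with tcpdump
    # and digest the capture instead
    tcpdump -s 65535 -x -nn -q -tttt -i any -c 1000 port 3306 > mysql.tcp.txt
    pt-query-digest --type tcpdump mysql.tcp.txt > tcp-report.txt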
Profiling a Single Query
After you've identified a single query to optimize, you can drill into it and determine
why it takes as much time as it does, and how to optimize it. The actual techniques for
optimizing queries are covered in later chapters of this book, along with the background
necessary to support those techniques. Our purpose here is simply to show you how
to measure what the query does and how long each part of that takes. Knowing this
helps you decide which optimization techniques to use.
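To make this concrete, one built-in facility that can produce a per-step breakdown is
SHOW PROFILE. A minimal sketch of using it follows; the query being profiled is just a
placeholder:

    SET profiling = 1;                        -- enable profiling for this session
    SELECT COUNT(*) FROM sakila.film_actor;   -- placeholder query to profile
    SHOW PROFILES;                            -- list profiled statements and their query IDs
    SHOW PROFILE FOR QUERY 1;                 -- per-step timing for the first profiled query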
Unfortunately, most of the instrumentation in MySQL isn't very helpful for profiling
queries. This is changing, but at the time of writing, most production servers don't have
the newest profiling features. So for practical purposes, we're pretty much limited to
9. We're keeping it simple here for clarity, but Percona Server's query log will produce a much more detailed
report, which could help you understand why the query is apparently spending 144 ms to examine a
single row—that's a lot!
 