Let's break down this query a little. First, I'm creating a common table expression (CTE) called xEvents. I'm doing
that just because it makes the code a little easier to read. It doesn't fundamentally change any behavior. I prefer it
when I have to both read from a file and convert the data type. Then my XML queries in the following statement make
a little more sense. Note that I'm using a wildcard when reading from the file, QueryPerformanceMetrics*.xel. This
makes it possible for me to read in all the rollover files created by the Extended Events session (for more details,
see Chapter 6).
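As a reminder of the shape being described, here's a minimal sketch of that pattern. The file path is an assumption; point it at wherever your own session writes its files, and shred whichever event fields you need:

WITH xEvents
AS (SELECT xes.object_name AS xEventName,
           -- event_data arrives as text; convert it once here so the
           -- XML queries below stay readable
           CAST(xes.event_data AS XML) AS xEventData
    FROM sys.fn_xe_file_target_read_file(
         'D:\Sessions\QueryPerformanceMetrics*.xel', NULL, NULL, NULL) AS xes)
SELECT xe.xEventName,
       -- pull an individual field out of the event XML
       xe.xEventData.value('(/event/data[@name=''duration'']/value)[1]',
                           'bigint') AS Duration
FROM xEvents AS xe;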
Depending on the amount of data collected and the size of your files, running queries directly against the
files you've collected from Extended Events may be excessively slow. In that case, use the same basic function,
sys.fn_xe_file_target_read_file, to load the data into a table instead of querying it directly. Once that's done, you can
apply indexing to the table in order to speed up the queries. I used the previous script to put the data into a table and
then queried that table for my output. This will work fine for testing, but for a more permanent solution you'd want
a database dedicated to storing this type of data, with tables that have the appropriate structures, rather than a
shortcut like INTO as I did here.
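Here's a rough sketch of that shortcut. The file path and table name are assumptions, and the field names assume a batch-completed-style event; adjust both to match your own session:

WITH xEvents
AS (SELECT xes.object_name AS xEventName,
           CAST(xes.event_data AS XML) AS xEventData
    FROM sys.fn_xe_file_target_read_file(
         'D:\Sessions\QueryPerformanceMetrics*.xel', NULL, NULL, NULL) AS xes)
SELECT xe.xEventName,
       xe.xEventData.value('(/event/data[@name=''batch_text'']/value)[1]',
                           'nvarchar(max)') AS SQLText,
       xe.xEventData.value('(/event/data[@name=''duration'']/value)[1]',
                           'bigint') AS Duration,
       xe.xEventData.value('(/event/data[@name=''cpu_time'']/value)[1]',
                           'bigint') AS CpuTime,
       xe.xEventData.value('(/event/data[@name=''logical_reads'']/value)[1]',
                           'bigint') AS LogicalReads,
       xe.xEventData.value('(/event/data[@name=''physical_reads'']/value)[1]',
                           'bigint') AS PhysicalReads
INTO Session_Table  -- the INTO shortcut; a permanent solution would use a predefined table
FROM xEvents AS xe;

-- index the columns you plan to sort and aggregate on
CREATE INDEX ix_SessionTable_CpuTime ON Session_Table (CpuTime DESC);
CREATE INDEX ix_SessionTable_LogicalReads ON Session_Table (LogicalReads DESC);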
In some cases, you may have identified significant stress on the CPU in the System Monitor output. The pressure
on the CPU may be because of a large number of CPU-intensive operations, such as stored procedure recompilations,
aggregate functions, data sorting, hash joins, and so on. In such cases, you should sort the session output on the
cpu_time field to identify the queries consuming the largest number of processor cycles.
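Assuming the data has been loaded into the Session_Table sketched earlier, that sort might look like this:

SELECT st.xEventName,
       st.SQLText,
       st.CpuTime
FROM Session_Table AS st
ORDER BY st.CpuTime DESC;  -- most processor-intensive statements first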
Costly Queries with Multiple Executions
As I mentioned earlier, sometimes a query may not be costly by itself, but the cumulative effect of multiple executions
of the same query might put pressure on the system resources. In this situation, sorting on the logical_reads field
won't help you identify this type of costly query. You instead want to know the total number of reads, the total CPU
time, or the accumulated duration across all executions of the query.
There are two ways to get at that information. You can query the session output and group on the values you're
interested in. Alternatively, you can access the sys.dm_exec_query_stats DMO to retrieve the information from the
production server; this assumes that you're dealing with an issue that is either recent or not dependent on a known
history, because this data reflects only what is currently in the procedure cache.
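The DMO route might look something like this minimal sketch; the TOP value and column choices are arbitrary, while sys.dm_exec_query_stats and sys.dm_exec_sql_text are the documented objects:

SELECT TOP (10)
       qs.execution_count,
       qs.total_logical_reads,
       qs.total_worker_time,   -- cumulative CPU time, in microseconds
       qs.total_elapsed_time,
       qt.text AS QueryText
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY qs.total_logical_reads DESC;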
But if you're looking for an accurate historical view of the data, you can go to the metrics you've collected with
Extended Events. Once the session data is imported into a database table, execute a SELECT statement like the
following to find the total number of reads performed by multiple executions of the same query:
SELECT COUNT(*) AS TotalExecutions,
       st.xEventName,
       st.SQLText,
       SUM(st.Duration) AS DurationTotal,
       SUM(st.CpuTime) AS CpuTotal,
       SUM(st.LogicalReads) AS LogicalReadTotal,
       SUM(st.PhysicalReads) AS PhysicalReadTotal
FROM Session_Table AS st
GROUP BY st.xEventName, st.SQLText
ORDER BY LogicalReadTotal DESC;
The TotalExecutions column in the preceding script indicates the number of times a query was executed.
The LogicalReadTotal column indicates the total number of logical reads performed by the multiple executions
of the query.
 