When monitoring your SQL Server instance for performance, the most convenient place to start is the query plan cache, which you can analyze using DMVs. The plan cache contains the query plans used on the instance since the last restart. A key benefit of this method is that no explicit trace (using SQL Profiler or, preferably, Extended Events) is needed. Be aware, however, that the amount of query plan information available depends on the uptime of the SQL instance.
In this article I will show different methods to investigate the query plan cache. The examples use the AdventureWorks2014 database, but the queries are compatible with SQL Server 2005 and up.
Some technical background about the DMVs
The following DMVs can be used to get information from the plan cache.
- sys.dm_exec_query_stats (contains cumulative information of the query plan cache)
- sys.dm_exec_sql_text (DMF to retrieve SQL text based on the sql_handle)
- sys.dm_exec_query_plan (DMF to retrieve the query plan based on the plan_handle)
The following columns from these DMV’s contain the most interesting information:
- execution_count (number of queries executed with this plan)
- total_logical_reads and last_logical_reads (amount of logical page reads)
- total_worker_time and last_worker_time (amount of CPU time)
- total_elapsed_time and last_elapsed_time (duration statistics)
- text (SQL statement text. To be used in conjunction with statement_start_offset and statement_end_offset)
- query_plan (XML query plan)
Cumulative statistics columns: total_* and last_*
Be aware that the dm_exec_query_stats DMV contains mainly cumulative statistics. For SQL Server, a query plan that can be reused is the most efficient, because it avoids compiling the query again and storing another plan in memory. The execution_count column shows whether a query plan is reused. An execution_count of 1 can indicate unparameterized ad hoc queries, which can bloat the plan cache and consume a lot of memory. You can rewrite and parameterize such queries, or use the “Optimize for ad hoc workloads” option to mitigate this issue.
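If you decide to enable this option, it can be switched on with sp_configure; the change takes effect immediately and requires no restart:

```sql
-- Enable "Optimize for ad hoc workloads" so that only a small plan stub
-- is cached on the first execution of an ad hoc query; the full plan is
-- cached only when the query is executed a second time.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```

Note that this is an instance-wide setting, so weigh it against workloads that legitimately benefit from caching single-use plans.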
The columns starting with total simply accumulate each time the query plan is reused: SQL Server tracks the execution statistics for every execution that uses the plan and adds them to the totals. Conclusions based on these columns should therefore be drawn with care. A small query executed many times will typically show large total values, but that does not make it a heavy query. It is more useful to compute an average, for example total_logical_reads / execution_count AS avg_logical_reads.
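A minimal sketch of ranking plans by such averages (the column selection here is my own illustration, not a prescribed set):

```sql
-- Rank cached plans by average logical reads per execution instead of
-- raw totals, so frequently executed small queries do not dominate.
SELECT TOP 10
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    qs.total_worker_time   / qs.execution_count AS avg_worker_time,
    qs.total_elapsed_time  / qs.execution_count AS avg_elapsed_time
FROM sys.dm_exec_query_stats qs
ORDER BY avg_logical_reads DESC;
```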
The columns starting with last contain information about the most recent execution only. These values can be compared to the averages to check whether a query suffers from “parameter sniffing”: in that case the cached plan can be suboptimal for specific parameter values.
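One possible way to surface such deviations is to compare last_logical_reads against the average; the factor of 2 below is an arbitrary threshold of my own, not a fixed rule:

```sql
-- Plans whose last execution read noticeably more pages than the average:
-- a possible hint of parameter sniffing worth investigating further.
SELECT qs.sql_handle,
       qs.execution_count,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       qs.last_logical_reads
FROM sys.dm_exec_query_stats qs
WHERE qs.execution_count > 1
  AND qs.last_logical_reads > 2 * (qs.total_logical_reads / qs.execution_count)
ORDER BY qs.last_logical_reads DESC;
```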
DMV Query example on AdventureWorks2014
The following query is a sample that returns the TOP 10 resource-consuming query plans. You can modify this script to your own needs. It is important to interpret the data correctly: check execution_count to see whether the query executes frequently, and check whether the last execution is representative of the averaged cumulative statistics of the plan. The ORDER BY is based on the last_logical_reads column, but it can be interesting to execute the query multiple times, ordering on the total statistics as well.
/* Returns aggregate performance statistics for cached query plans in SQL Server.
   Lists the top 10 most expensive queries. */
SELECT TOP 10
    SUBSTRING(qt.TEXT, (qs.statement_start_offset/2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(qt.TEXT)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset)/2) + 1) AS query_text,
    qs.execution_count,
    qs.total_logical_reads,
    qs.last_logical_reads,
    qs.total_logical_writes,
    qs.last_logical_writes,
    qs.total_worker_time,
    qs.last_worker_time,
    qs.total_elapsed_time/1000000 AS total_elapsed_time_in_S,
    qs.last_elapsed_time/1000000 AS last_elapsed_time_in_S,
    qs.last_execution_time,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.last_logical_reads DESC -- logical reads
-- ORDER BY qs.last_logical_writes DESC -- logical writes
-- ORDER BY qs.last_worker_time DESC -- CPU time
Here is an example of an expensive query on AdventureWorks2014
SELECT [SalesOrderID]
      ,[SalesOrderDetailID]
      ,[CarrierTrackingNumber]
      ,[OrderQty]
      ,[ProductID]
      ,[SpecialOfferID]
      ,[UnitPrice]
      ,[UnitPriceDiscount]
      ,[LineTotal]
      ,[rowguid]
      ,[ModifiedDate]
FROM [AdventureWorks2014].[Sales].[SalesOrderDetail]
WHERE [UnitPrice] > 3000
This query returns 1,947 of 121,317 rows and could be considered selective. Due to a missing index, however, the query plan is less efficient.
IO statistics: Table 'SalesOrderDetail'. Scan count 5, logical reads 1371, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
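Output like this can be captured by enabling IO statistics for the session before running the query (column list shortened here for brevity):

```sql
-- Report logical/physical reads per table in the Messages tab.
SET STATISTICS IO ON;

SELECT [SalesOrderID], [UnitPrice]
FROM [AdventureWorks2014].[Sales].[SalesOrderDetail]
WHERE [UnitPrice] > 3000;

SET STATISTICS IO OFF;
```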
You can find the statistics of this plan by investigating the query stats. Based on the TOP 10 DMV query above, I can easily identify this query by its query text, or by matching on the last_logical_reads value of 1371.
The advantage of using the first DMV query is the presence of the qp.query_plan column.
You can click on the XML link to open the query plan, which shows the same plan with the missing index hint and the Clustered Index Scan. It is also possible to search these XML fragments for such hints; see, for example, the article by Dennes Torres on Simple-Talk.
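A sketch of such a search, assuming the cached plans use the standard showplan XML namespace; the XQuery exist() method filters for plans that contain a MissingIndexes element:

```sql
-- Find cached plans that carry a missing index hint.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT cp.usecounts,
       qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE qp.query_plan.exist('//MissingIndexes') = 1;
```

Note that querying the plan cache XML this way can be expensive on instances with a large cache, so run it outside peak hours.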
Reports in Management Studio
SQL Server Management Studio provides reports that surface the same type of information. Right-click on the instance, choose Reports, and click one of the Performance reports. In this case I chose to order by Average IO.
You can see the query is here too, but the plan is missing. It is possible to drill down by clicking on the [+] icon in the Query Text column. However, this shows only some of the performance statistics columns, not the actual query plan.
The most powerful way to do historical performance monitoring is to query the DMVs with your own ordering and filtering. The GUI functionality in SSMS is a quick way to reveal some of the information, but it misses important details such as the query plan.