Testing and comparing SQL Server query performance

A while back I talked to a developer who also played the role of involuntary DBA at a company.
As we were discussing the poor performance of one of their servers, I realized that something wasn’t adding up, so I decided to ask the blunt question:
“How do you measure the performance of the queries you write? What tools do you use?”

He looked at me and said: “Well, after I write a query I open the Task Manager, I execute the query and I look to see if there is a spike in the CPU.”

Then I realized that it was essential for the future well-being of the system (and the company) that we take a step back and learn about the tools of the trade in performance tuning.

The developer didn’t mind, and I must say that within about 30 minutes he was happily performance-testing his queries.

Recently I wrote a post on the performance tuning wheel, which mentions many of the performance tuning tools, but in this blog post I will concentrate on a few essential methods for gathering, comparing and benchmarking query performance.

Why is the task manager not enough?

Let’s start at the root: a query can affect the hardware of SQL Server in many ways. It can affect the CPU, the memory, the disk system, or a combination of the above; the query can also affect other queries, which in turn can affect the hardware.

All of this also depends on volume, scale, workload patterns and so on.

So, why is the Task Manager not enough? Because it gives very little information about all of the above.

Ok, what else?

A good way to start looking at query performance is to look at the IO and CPU stats in SSMS.

[Screenshot: SET STATISTICS IO – set the query options in SSMS]
… or just write the query like this:
USE AdventureWorks;
GO
-- Report per-table IO counts and CPU/elapsed times for the statements below
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT p.Name, pr.ProductReviewID
FROM Production.Product p
JOIN Production.ProductReview pr
    ON p.ProductID = pr.ProductID;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

This statement will execute and will also return information like this:

(4 row(s) affected)
Table ‘Product’. Scan count 0, logical reads 8, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table ‘ProductReview’. Scan count 1, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 0 ms,  elapsed time = 15 ms.

SQL Server Execution Times:
CPU time = 0 ms,  elapsed time = 0 ms.
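A small side note from me (this is not something the STATISTICS output does for you): if you want the physical reads to reflect a cold cache when comparing two queries, you can flush the buffer pool between runs – on a test server only, never in production:

-- Test server only: flush cached data pages so the next run reads from disk
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
-- Optionally also clear cached plans so compile cost shows up again:
-- DBCC FREEPROCCACHE;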

 

Another way to look at the query is to use the Client Statistics (in SSMS: Query > Include Client Statistics).

 

[Screenshot: Client Statistics – use the client statistics to measure and compare queries]

Another, more comprehensive, way to look at a query is to use SQL Server Profiler – it gives information both about the queries and about concurrent events on the server.

 

To get an even better picture, we can capture Perfmon counters and a Profiler event trace and correlate them. This method shows queries, concurrency and system information together.

 

The execution plan – yes, it does give performance information, but it is more of an ‘ad hoc’ snapshot of what happened during the execution of the query than a full picture. The execution plan is not stable, in the sense that when the Query Optimizer (QO) generates the plan it considers a whole set of factors, hence the execution plan for a query may vary between executions (for example, in the case of parallelism there can be two different plans cached for one query – one using parallelism, and the other not using it).

Furthermore, the execution plan contains relative values, i.e. the cost of each operator is relative to the rest of them. This gives a great opportunity to compare queries executed in the same batch and to see how their execution plans differ and what their total costs are.

Finally, what the execution plan does not give (and this may seem funny) is the cost of generating the execution plan itself. It is true that the essence of the QO is to generate ‘good enough’ plans at a lower cost than the execution of the query itself, but at the same time problems may occur simply because of the high number of generated plans. Keep in mind that generating a query plan is a fairly expensive operation.
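As a sketch of the same-batch comparison mentioned above (reusing the AdventureWorks tables from the earlier example; the second, APPLY-based rewrite is just an illustration of mine), include the actual execution plan (Ctrl+M in SSMS, or SET STATISTICS XML ON) and run both candidates together – the plan pane then shows each query’s cost relative to the batch:

SET STATISTICS XML ON;   -- or press Ctrl+M to include the actual execution plan

-- Candidate 1: plain join
SELECT p.Name, pr.ProductReviewID
FROM Production.Product p
JOIN Production.ProductReview pr
    ON p.ProductID = pr.ProductID;

-- Candidate 2: the same result written with CROSS APPLY
SELECT p.Name, ca.ProductReviewID
FROM Production.Product p
CROSS APPLY (SELECT pr.ProductReviewID
             FROM Production.ProductReview pr
             WHERE pr.ProductID = p.ProductID) AS ca;

SET STATISTICS XML OFF;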

 

[Screenshot: The Query Execution Plan]

Another way to look at query performance is through the DMVs (Dynamic Management Views).

DMVs give stats about execution, but they also have lousy placement in time. What I mean by lousy placement in time is that DMVs are cumulative blobs, and even if they give information about some particular event or query, there is no good way to get information about other dependent events happening at the same time (unless you really manage to query the right DMVs at the right second!).
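As a sketch of what I mean (the column choices here are mine), this is the classic pattern of joining sys.dm_exec_query_stats to sys.dm_exec_sql_text to find the most CPU-hungry statements in the plan cache:

-- Top 10 cached statements by total CPU time
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time AS total_cpu_time_microseconds,
       qs.total_logical_reads,
       SUBSTRING(st.text,
                 (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;

Remember that these numbers are cumulative since the plan entered the cache, which is exactly the ‘placement in time’ problem described above.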

Well, you can write your own tool to gather more detailed information and correlate events, but this is becoming obsolete, since Extended Events are slowly coming to replace the DMVs and the other performance tuning tools.
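To give a feel for it, here is a minimal Extended Events sketch (the session name is made up by me): it captures completed statements together with their text into a ring buffer target:

-- Create and start a session that captures completed SQL statements
CREATE EVENT SESSION QueryPerfTest ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text, sqlserver.database_id))
ADD TARGET package0.ring_buffer;
GO

ALTER EVENT SESSION QueryPerfTest ON SERVER STATE = START;
-- ... run the workload you want to measure, then stop and drop the session:
-- ALTER EVENT SESSION QueryPerfTest ON SERVER STATE = STOP;
-- DROP EVENT SESSION QueryPerfTest ON SERVER;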

 

The bottom line is: know which tools you want to use, what they are good for, and what information they do not give.

 
