Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedures, the most common performance mistake SQL Server developers make.

Posted by sqlworkshops on SQL Blogcasts, Wed, 16 Feb 2011.

SQL Server estimates the memory requirement for a query at compile time. When a stored procedure or another plan caching mechanism such as sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common cause of spills to tempdb and hence of poor performance. Common memory-consuming queries are those that perform a Sort or a Hash Match operation such as a Hash Join, Hash Aggregation, or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I, which covers an introduction and query memory for Sort, before this article. In most cases it is cheaper to pay the compilation cost of a dynamic query than the huge cost of a spill to tempdb, unless the memory requirement for the query does not change significantly with its predicates.
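The same behavior applies to the other caching mechanisms mentioned above. As an illustration, here is a minimal sketch (not part of the original scripts) of running the CustomersByState query defined later in this article through sp_executesql; the plan, and hence the memory estimate, is cached based on the first parameter value:

--sp_executesql caches one plan per statement text, so the first execution's
--parameter value ('WA' here) drives the memory estimate for later executions
exec sp_executesql
    N'declare @CustomerID int
      select @CustomerID = e.CustomerID from Customers e
      inner join CustomersState es on (e.CustomerID = es.CustomerID)
      where es.State = @State option (maxdop 1)',
    N'@State char(2)', @State = 'WA'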

 

This article covers underestimation and overestimation of memory for Hash Match operations; Plan Caching and Query Memory Part I covers the same for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation of memory reduces the memory available to other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

 

To read additional articles I wrote, click here.

 

The best way to learn is to practice. To create the tables below and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

 

Let’s create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe Hash Warnings, enable the 'Hash Warning' event in SQL Profiler under the 'Errors and Warnings' event category.
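If you don’t have the Customers table from Part I handy, a minimal hypothetical stand-in is sketched below; the column names other than CustomerID are assumptions (the original script is available through the mailing list), and the 800,000-row count is chosen so that 1% of the rows is 8000 and 99% is 792000, matching the numbers discussed later.

--Hypothetical stand-in for the Part I Customers table
create table Customers (CustomerID int primary key, FirstName char(50), LastName char(50))
go
--Generate CustomerIDs 1 through 800000 by cross joining a catalog view with itself
insert into Customers (CustomerID, FirstName, LastName)
select top (800000) row_number() over (order by (select null)), 'FirstName', 'LastName'
from sys.all_columns c1 cross join sys.all_columns c2
go
update statistics Customers with fullscan
go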

--Example provided by www.sqlworkshops.com

drop table CustomersState

go

create table CustomersState (CustomerID int primary key, Address char(200), State char(2))

go

insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers

update CustomersState set State = 'NY' where CustomerID % 100 != 1

update CustomersState set State = 'WA' where CustomerID % 100 = 1

go

update statistics CustomersState with fullscan

go
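As a quick sanity check (not part of the original script), let’s confirm the skew before proceeding:

--Optional check: expect roughly 792000 NY rows and 8000 WA rows
select State, count(*) as CustomerCount from CustomersState group by State
go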

 

Let’s create a stored procedure that joins the Customers table with the CustomersState table, with a predicate on State.

--Example provided by www.sqlworkshops.com

create proc CustomersByState @State char(2) as

begin

declare @CustomerID int

select @CustomerID = e.CustomerID from Customers e

inner join CustomersState es on (e.CustomerID = es.CustomerID)

where es.State = @State

option (maxdop 1)

end

go

 

Let’s execute the stored procedure first with parameter value ‘WA’ – which will select 1% of the data.

set statistics time on

go

--Example provided by www.sqlworkshops.com

exec CustomersByState 'WA'

go

The stored procedure took 294 ms to complete.

 

The stored procedure was granted 6704 KB based on 8000 rows being estimated.

 

The estimated number of rows, 8000, matches the actual number of rows, 8000, and hence the memory estimation is accurate.
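The grant figures quoted in this article can be observed in the actual execution plan; they can also be watched live, from a second session, through the sys.dm_exec_query_memory_grants DMV. A minimal sketch:

--Run from a separate session while CustomersByState executes; granted_memory_kb
--shows the grant and max_used_memory_kb how much of it was actually needed
select session_id, requested_memory_kb, granted_memory_kb, max_used_memory_kb
from sys.dm_exec_query_memory_grants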

 

There was no Hash Warning event in SQL Profiler.
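As an alternative to Profiler, on SQL Server 2008 an Extended Events session can capture the same information; a minimal sketch (the session name is arbitrary, and the ring buffer target keeps the captured events in memory):

--Minimal Extended Events alternative to the Profiler 'Hash Warning' event
create event session HashWarnings on server
add event sqlserver.hash_warning
add target package0.ring_buffer
go
alter event session HashWarnings on server state = start
go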

 

Now let’s execute the stored procedure with parameter value ‘NY’ – which will select 99% of the data.

--Example provided by www.sqlworkshops.com

exec CustomersByState 'NY'

go

 

The stored procedure took 2922 ms to complete.

 

The stored procedure was granted 6704 KB based on 8000 rows being estimated.

 

 

The estimated number of rows, 8000, is far from the actual number of rows, 792000, because the estimation is based on the first set of parameter values supplied to the stored procedure, which was ‘WA’ in our case. This underestimation leads to a spill to tempdb, resulting in poor performance.
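One can confirm which parameter value the plan was compiled with by pulling the cached plan XML and inspecting its ParameterCompiledValue attribute (‘WA’ here); a minimal sketch, assuming the procedure lives in the current database:

--Retrieve the cached plan for CustomersByState; the ParameterCompiledValue
--attribute inside the ParameterList element shows the sniffed value
select qp.query_plan
from sys.dm_exec_cached_plans cp
cross apply sys.dm_exec_sql_text(cp.plan_handle) st
cross apply sys.dm_exec_query_plan(cp.plan_handle) qp
where st.text like '%CustomersByState%' and cp.objtype = 'Proc'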

 

There was a Hash Warning (Recursion) event in SQL Profiler.

 

Let’s recompile the stored procedure and then execute it first with parameter value ‘NY’.

 

In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), because of the locking involved with sp_recompile. Refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.
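A minimal sketch of that targeted eviction; note that sys.dm_exec_procedure_stats and DBCC FREEPROCCACHE with a plan_handle argument both require SQL Server 2008. For simplicity, the demonstration below still uses sp_recompile.

--Find the plan_handle of CustomersByState and evict only that plan
declare @plan_handle varbinary(64)
select @plan_handle = plan_handle
from sys.dm_exec_procedure_stats
where database_id = db_id() and object_id = object_id('CustomersByState')
if @plan_handle is not null
    dbcc freeproccache (@plan_handle)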

 

exec sp_recompile CustomersByState

go

--Example provided by www.sqlworkshops.com

exec CustomersByState 'NY'

go

 

Now the stored procedure took only 1046 ms instead of 2922 ms.

 

The stored procedure was granted 146752 KB of memory.

The estimated number of rows, 792000, matches the actual number of rows, 792000. The better performance of this execution is due to the accurate memory estimation, which avoided a spill to tempdb.

 

There was no Hash Warning in SQL Profiler.

 

Now let’s execute the stored procedure with parameter value ‘WA’.

--Example provided by www.sqlworkshops.com

exec CustomersByState 'WA'

go

 

The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms.

 

 

This stored procedure was granted more memory (146752 KB) than necessary (6704 KB) because the estimation was based on parameter value ‘NY’ (792000 rows) instead of parameter value ‘WA’ (8000 rows); the estimation is based on the first set of parameter values supplied to the stored procedure, which was ‘NY’ in this case. This overestimation led to poor performance of the Hash Match operation, and it might also affect the performance of other concurrently executing queries requiring memory, so overestimation is not recommended.

 

 

The estimated number of rows, 792000, is much higher than the actual number of rows, 8000.

 

Intermediate Summary: This issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint, or the optimize for hint to allocate memory for a predefined data range. A sketch of the first option follows.
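A hypothetical sketch of the first option: execute the statement as a dynamic string with the literal value embedded, so each distinct State value compiles with its own estimates, at the price of one compilation per distinct statement text:

--Embed the State value as a literal; 'WA' and 'NY' each get their own plan
declare @State char(2)
set @State = 'NY'
declare @sql nvarchar(max)
set @sql = N'declare @CustomerID int
select @CustomerID = e.CustomerID from Customers e
inner join CustomersState es on (e.CustomerID = es.CustomerID)
where es.State = ''' + @State + ''' option (maxdop 1)'
exec (@sql)

Now let’s recreate the stored procedure with the recompile hint.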

--Example provided by www.sqlworkshops.com

drop proc CustomersByState

go

create proc CustomersByState @State char(2) as

begin

declare @CustomerID int

select @CustomerID = e.CustomerID from Customers e

inner join CustomersState es on (e.CustomerID = es.CustomerID)

where es.State = @State

option (maxdop 1, recompile)

end

go

 

Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’.

--Example provided by www.sqlworkshops.com

exec CustomersByState 'WA'

go

exec CustomersByState 'NY'

go

 

The stored procedure took 297 ms and 1102 ms respectively, in line with the previous optimal execution times.

 

The stored procedure with parameter value ‘WA’ has a good estimation, like before.

 

The estimated number of rows, 8000, matches the actual number of rows, 8000.

 

The stored procedure with parameter value ‘NY’ also has a good estimation and memory grant, like before, because the stored procedure was recompiled with the current set of parameter values.

 

The estimated number of rows, 792000, matches the actual number of rows, 792000.

 

 

The compilation time and compilation CPU of 1 ms are inexpensive in this case compared to the performance benefit.

 

There was no Hash Warning in SQL Profiler.

 

Let’s recreate the stored procedure with optimize for hint of ‘NY’.

--Example provided by www.sqlworkshops.com

drop proc CustomersByState

go

create proc CustomersByState @State char(2) as

begin

declare @CustomerID int

select @CustomerID = e.CustomerID from Customers e

inner join CustomersState es on (e.CustomerID = es.CustomerID)

where es.State = @State

option (maxdop 1, optimize for (@State = 'NY'))

end

go

 

Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’.

--Example provided by www.sqlworkshops.com

exec CustomersByState 'WA'

go

exec CustomersByState 'NY'

go

 

The stored procedure took 353 ms with parameter value ‘WA’; this is much slower than the optimal execution time of 294 ms we observed previously, because of the overestimation of memory. With parameter value ‘NY’ the execution time was optimal, as before.

 

The stored procedure with parameter value ‘WA’ overestimates the rows because of the optimize for hint value ‘NY’.

 

Unlike before, more memory than necessary was granted to this execution, based on the optimize for hint value ‘NY’.

 

 

The stored procedure with parameter value ‘NY’ has a good estimation because of the optimize for hint value ‘NY’. The estimated number of rows, 792000, matches the actual number of rows, 792000.

 

The optimal amount of memory was granted to this execution, based on the optimize for hint value ‘NY’.

 

There was no Hash Warning in SQL Profiler.

 


 

Summary: A cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this with the recompile hint, which introduces compilation overhead; in most cases, however, it is better to pay for compilation than to spill to tempdb, which can be very expensive compared to the compilation cost. Another possibility is the optimize for hint, but if a query processes more data than the hint suggests, it will still spill; on the other side, overestimation can starve other concurrently executing queries of memory and, with Hash Match operations, can itself lead to poor performance. If the values used in the optimize for hint are later archived out of the database, the estimation will be wrong, leading to worse performance; so one has to exercise caution before using the optimize for hint, and the recompile hint is the safer choice in such cases.

 

I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend watching them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

 

Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011: click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

 

Disclaimer and copyright information:
This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners.

Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work.

This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

 

R Meyyappan [email protected]

LinkedIn: http://at.linkedin.com/in/rmeyyappan
