How to Reduce the CPU Overhead of Dynamic SQL


Dynamic SQL is a good choice for catch-all type queries, but you have to be careful how you execute the dynamic string.

For frequently executed queries, “EXEC sp_executesql @sql” is a good choice, but it comes with the risk of parameter sniffing issues.

For queries that aren’t executed very often, “EXEC (@sql)” can be used, but you could end up with a bloated plan cache with lots of single-use execution plans.

Let’s compare the performance difference between “EXEC (@sql)” and “EXEC sp_executesql @sql” for a frequently executed query.

The query is very fast: it takes less than a millisecond to compile and execute.

I usually use SQLQueryStress to load test a query, but it couldn’t deliver what I needed here: it maxes out at 200 threads for a single instance. You can run multiple instances of it to get past 200 threads, but each instance was using a lot of CPU for this load test. Erik pointed me to Microsoft’s ostress utility instead. He blogged about ostress last year.

EXEC (@sql)

Let’s look at the most common way to execute dynamic queries first: EXEC (@sql). The fact that it’s the most common doesn’t mean it’s the right way or the wrong way.

You may wonder why I’m calculating @d inside the proc and not just sending in a value to an input parameter. It was just to make the calls to ostress easy. I could have used a script file via the -i parameter rather than executing a single batch via the -Q parameter. Besides, this is my blog post. I’ll do what I want.
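The procedure itself isn’t shown above, so here is a minimal sketch of what a Test1-style procedure could look like. The table and column (dbo.Users, CreationDate) and the procedure name are placeholders, not necessarily the post’s actual query:

```sql
-- Hypothetical sketch: concatenates the value into the string,
-- so every execution produces a unique query text and a unique plan.
CREATE OR ALTER PROCEDURE dbo.Test1
AS
BEGIN
    DECLARE @sql nvarchar(1000);
    DECLARE @d datetime2(7) = SYSDATETIME(); -- calculated inside the proc to keep the ostress call simple

    SET @sql = N'SELECT COUNT(*) FROM dbo.Users WHERE CreationDate < '''
             + CONVERT(nvarchar(30), @d, 121) + N''';';

    EXEC (@sql); -- the embedded literal is what bloats the plan cache
END;
```

Because SYSDATETIME() changes on every call, no two generated strings match, so SQL Server compiles and caches a fresh adhoc plan each time.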

Before starting the test, I cleared out the plan cache so that when I show you the plan cache bloat you won’t wonder what was already in there.
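Clearing the cache can be done instance-wide; a sketch (don’t run this on a production server):

```sql
-- Flushes every cached plan on the instance.
DBCC FREEPROCCACHE;
```

On newer versions you can limit the blast radius to one database with ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE.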

Time to run the load test. I decided on 50 threads with 15000 iterations after running a bunch of tests. I wanted enough threads to cause high CPU utilization and enough iterations to run sp_BlitzFirst with a 30-second sample.
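The exact command line isn’t shown in the post; with those settings, an ostress invocation would look something like this (server, database, and procedure names are assumptions):

```shell
# -n = threads, -r = iterations per thread, -E = trusted connection, -q = quiet
ostress -Slocalhost -E -dStackOverflow -Q"EXEC dbo.Test1;" -n50 -r15000 -q
```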

After a minute, I ran sp_BlitzFirst to see what was going on in SQL Server.

Though the wait time isn’t horrible, SOS_SCHEDULER_YIELD is the primary wait here.

CPU was pegged at 100% during this load test.

After the test completed, I examined the plan cache using Kimberly’s handy plan cache query.
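Kimberly’s full query is on her site; a minimal version that surfaces the same numbers (plan counts, total size, and single-use counts per object type) might look like:

```sql
-- Summarize the plan cache by object type.
SELECT objtype,
       COUNT(*)                                          AS total_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024  AS total_mb,
       SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END)    AS single_use_plans
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY total_mb DESC;
```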

The plan cache totaled 2.3GB, which isn’t that big. But it’s big when you consider that each of the adhoc plans is tiny. Most of the 36,000 adhoc plans were used only once. Why bother storing them in memory if they aren’t going to be reused?

EXEC sp_executesql @sql

Now let’s look at the other way to execute dynamic queries: EXEC sp_executesql @sql.
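Again, a hedged sketch rather than the post’s exact code (same placeholder table and column as before). The key difference is that @d is passed as a real parameter, so the query text stays identical across executions and one cached plan gets reused:

```sql
-- Hypothetical sketch: the value travels as a parameter, not a literal.
CREATE OR ALTER PROCEDURE dbo.Test2
AS
BEGIN
    DECLARE @sql nvarchar(1000);
    DECLARE @d datetime2(7) = SYSDATETIME();

    SET @sql = N'SELECT COUNT(*) FROM dbo.Users WHERE CreationDate < @d;';

    -- One plan is compiled once and reused for every call.
    EXEC sp_executesql @sql, N'@d datetime2(7)', @d = @d;
END;
```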

I again cleared out the plan cache, but this time so that we can see the difference between the two load tests.

I then ran the load test again but calling Test2 this time.

SQL Server was pretty bored with this workload even though it was running more queries per second than the previous test – not much waiting time on anything really.

CPU averaged around 45% during this load test.

Look at that tiny plan cache!

The roundup

ostress settings: threads=50, iterations=15000

EXEC (@sql): CPU pegged at 100%, SOS_SCHEDULER_YIELD as the top wait, and a 2.3GB plan cache holding roughly 36,000 mostly single-use adhoc plans.

EXEC sp_executesql @sql: CPU around 45%, not much waiting on anything, a tiny plan cache, and more queries per second than the first test.

Should you just use sp_executesql always?

If you have high CPU utilization and run frequently executed dynamic queries with “EXEC (@sql)”, consider using “EXEC sp_executesql @sql” instead. You’ll need to be aware of parameter sniffing issues, but you should see a drop in both CPU utilization and plan cache size.

Parameter sniffing issues are no joke. I have lost hundreds of hours troubleshooting parameter sniffing issues over the past 15 years. Know how to identify if parameter sniffing is the problem and how to troubleshoot it.
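As one illustration (not the post’s code), adding OPTION (RECOMPILE) to the parameterized statement is a common mitigation: you pay compile CPU on every call, but each plan is built for the current value instead of a sniffed one:

```sql
-- Hypothetical sketch, same placeholder table as above.
DECLARE @d datetime2(7) = SYSDATETIME();

EXEC sp_executesql
    N'SELECT COUNT(*) FROM dbo.Users WHERE CreationDate < @d OPTION (RECOMPILE);',
    N'@d datetime2(7)',
    @d = @d;
```

It’s a trade-off, not a cure: for very hot queries the recompiles can hand back some of the CPU you just saved.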

Don’t default to using sp_executesql though. If a query is not executed frequently, “EXEC (@sql)” can work just fine. Just be sure to monitor the adhoc plans in the plan cache.
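A quick way to keep an eye on those adhoc plans:

```sql
-- Count single-use adhoc plans and the memory they occupy.
SELECT COUNT(*)                                          AS single_use_adhoc_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024  AS total_mb
FROM sys.dm_exec_cached_plans
WHERE objtype = N'Adhoc'
  AND usecounts = 1;
```

If single-use plans dominate, the “optimize for ad hoc workloads” server setting is also worth a look: it caches a small stub instead of the full plan until a query is executed a second time.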
