How Trace Flag 2335 Affects Memory Grants

This trace flag is documented in KB #2413549, which says, “Using large amounts of memory can result in an inefficient plan in SQL Server.” The details are a little light, so let’s run a quick experiment with:

  • SQL Server 2017 CU 8 (14.0.3029.16)
  • VM with 4 cores, 32GB RAM, max memory set to 28GB
  • Stack Overflow database (circa March 2016, 90GB)

We’ll use a simple query that wants a memory grant (but doesn’t actually use it):
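The exact demo query isn't reproduced here, so treat this as a sketch: any query that scans dbo.Users and sorts it on a column without a supporting index will ask for sort memory. The column choices below are my assumption, not necessarily the original demo query.

    /* Sketch: scan dbo.Users and sort on an unindexed column so SQL Server
       has to request a memory grant for the sort. Column choices are illustrative. */
    SELECT Id, DisplayName, Location, WebsiteUrl
    FROM dbo.Users
    ORDER BY Location;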

The Users table is less than 1GB, but because DisplayName, Location, and WebsiteUrl use relatively large datatypes, SQL Server somehow estimates that 22GB will come out of the clustered index scan and flow into the sort, as shown in the actual plan.


This affects the query’s memory grant.

Default memory grant

When I right-click on the select icon and go into properties to look at the memory grant, Desired Memory is 29GB! SQL Server wanted 29GB to run this query.

However, because my server isn’t that large, the query was “only” granted 5GB. It used less than 1GB because of course there’s just not that much data in the table.

If my server were larger, the query could get up to 29GB every time it runs. Run a bunch of those at once, and hello, RESOURCE_SEMAPHORE poison waits.
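If you want to see whether that's already happening on your server, here's a quick sanity check (just a sketch against the sys.dm_exec_query_memory_grants DMV, which lists the grants for currently running queries):

    /* Memory grants for queries running right now.
       Rows where grant_time IS NULL are still waiting on RESOURCE_SEMAPHORE. */
    SELECT session_id, ideal_memory_kb, requested_memory_kb,
           granted_memory_kb, used_memory_kb, grant_time
    FROM sys.dm_exec_query_memory_grants
    ORDER BY requested_memory_kb DESC;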

That’s where it sounds like trace flag 2335 would come in. The KB article says:

One of the factors that impacts the execution plan generated for a query is the amount of memory that is available for SQL Server. In most cases SQL Server generates the most optimal plan based on this value, but occasionally it may generate an inefficient plan for a specific query when you configure a large value for max server memory, thus resulting in a slow-running query.

This is one of the trace flags that can be enabled with QUERYTRACEON, so let’s give it a shot:
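Using the same sketch query from above, that's just a hint tacked onto the end (keep in mind QUERYTRACEON normally requires sysadmin permissions, so this is a testing tool, not something to sprinkle through application code):

    /* Same sketch query, recompiled with trace flag 2335 so the optimizer
       builds the plan as if less memory were available. */
    SELECT Id, DisplayName, Location, WebsiteUrl
    FROM dbo.Users
    ORDER BY Location
    OPTION (QUERYTRACEON 2335);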

The new query plan looks and performs the same – but what about the memory grants? Are those more accurate?

Memory grant with 2335 enabled

No: the memory grant is still aiming for 29GB.

That’s because this trace flag isn’t about directly shrinking the memory grant for the same plan. It’s about indirectly reducing your memory grants by changing the way SQL Server decides to build the plan in the first place.

Keep looking down, and check out the Optimizer Hardware Dependent Properties. These are some of the numbers SQL Server used when building the execution plan – assumptions it made about the hardware it was running on.

Estimated Available Memory Grant:

  • Default = 716MB
  • With 2335 = 26MB

Estimated Pages Cached:

  • Default = 179,200 (which is pretty odd, given that the table only has 80,026 pages)
  • With 2335 = 1,638 pages
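If you'd rather fish these Optimizer Hardware Dependent Properties out of the plan cache across lots of queries instead of opening plans one at a time, here's a rough sketch (it assumes the cached showplan XML exposes the OptimizerHardwareDependentProperties element, which recent versions do):

    /* Sketch: pull the optimizer's hardware assumptions out of cached plans. */
    WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT TOP (20)
           qp.query_plan.value('(//OptimizerHardwareDependentProperties/@EstimatedAvailableMemoryGrant)[1]', 'bigint') AS EstimatedAvailableMemoryGrant,
           qp.query_plan.value('(//OptimizerHardwareDependentProperties/@EstimatedPagesCached)[1]', 'bigint') AS EstimatedPagesCached,
           qs.execution_count,
           qs.total_worker_time
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    ORDER BY qs.total_worker_time DESC;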

If you think 2335 is right for you, look for plans where 2335 changes the entire shape of the plan, getting you a query plan that aims to use less memory overall.

One place to start looking for those plans is:
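That would be sp_BlitzCache from the open source First Responder Kit, sorted by memory grant (the exact parameter values below are my sketch):

    /* Top plans in cache ranked by how much memory they ask for. */
    EXEC sp_BlitzCache @SortOrder = 'memory grant', @Top = 50;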

Scroll across to the right and check out the memory grants columns:

sp_BlitzCache memory grants

You’re not just looking for queries with unused grants – 2335 could (in theory) help your queries with large USED memory grants by changing the shape of the plan. A successful deployment of 2335 would mean a differently shaped plan that still performs fast, but desires (and uses) way less memory.

Of course, this is a last resort: if you have enough access to change the query and inject this trace flag, then you should probably start by tuning the query instead.

Oh, and you might be wondering – do I actually use this trace flag? Hell no – I just ran across it in the wild, found the documentation to be pretty lacking, and figured I’d document my research here for the next person who finds it turned on somewhere.


5 Comments

  • Jacob Horwath
    August 16, 2018 1:50 pm

    Brent, I wanted to thank you and company for this post. It gave me some ideas on how to troubleshoot memory grant problems that I had been trying to identify for a couple of weeks. We’re pretty sure we have discovered a pretty large bug with a vendor application, which I can attribute almost wholly to using sp_blitzcache sorting by top 50 memory grants. Next time you are in Boston, drinks are on me. Thanks again.

    • Woohoo, that’s awesome to hear! The magic of sorting by grant is all Erik’s wizardry though, and he’s got family in Boston, so he might take you up on that, heh.

  • In sp_BlitzCache, what does Serial Desired Memory represent? I have a query from hell asking for 30454320. And what does Serial Required Memory represent?

  • Manikandan GS
    July 12, 2020 9:58 pm

    Awesome! Really like the way that you’re finding the pitfalls!

