Using Trace Flag 2453 to Improve Table Variable Performance

I recently saw a server with trace flag 2453 configured. I hadn’t come across this trace flag before, so I did a little research. Microsoft says it allows “a table variable to trigger recompile when enough number of rows are changed”, which can lead to a more efficient execution plan. Trace flag 2453 is available starting with SP2 for SQL Server 2012, CU3 for SQL Server 2014, and RTM for SQL Server 2016.

I was curious how a query using a table variable performed compared to the “same” query using:

  • trace flag 2453
  • OPTION (RECOMPILE)
  • a temporary table

DEMO SETUP
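
A hypothetical sketch of the setup, since the original script isn’t reproduced here: per the comments below, the demo ran against the StackOverflow database on a SQL Server 2014 instance at compatibility level 120, with STATISTICS IO enabled to compare reads.

```sql
-- Hypothetical setup sketch: the exact objects and filters from the
-- original demo are unknown. Assumes the StackOverflow demo database.
USE StackOverflow;
GO
SET STATISTICS IO, TIME ON;  -- capture logical reads and timings per test
```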

BASELINE

Table variable details:
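
The screenshots aren’t reproduced here, so below is a hypothetical sketch of the baseline pattern. The Users and Posts tables come from the StackOverflow database; the filter and join columns are assumptions. The point is that the plan is compiled before the table variable is populated, so the optimizer assumes 1 row.

```sql
-- Hypothetical sketch of the baseline (exact query unknown).
DECLARE @Users TABLE (UserId INT PRIMARY KEY);

INSERT INTO @Users (UserId)
SELECT Id
FROM dbo.Users
WHERE Reputation > 1;  -- assumed filter

-- Compiled with an estimate of 1 row for @Users, even though it may
-- actually contain millions of rows, hence the clustered index scan.
SELECT u.UserId, p.Title
FROM @Users AS u
JOIN dbo.Posts AS p
    ON p.OwnerUserId = u.UserId;
```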

TRACE FLAG 2453

Table variable details:
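
To test the trace flag, it can be enabled globally with DBCC TRACEON and the baseline query rerun unchanged. A sketch:

```sql
-- Enable trace flag 2453 globally (all sessions, until restart or TRACEOFF):
DBCC TRACEON (2453, -1);

-- ...rerun the same table variable query as the baseline...

-- Turn it back off when done testing:
DBCC TRACEOFF (2453, -1);
```

It may also be available at the query level via OPTION (QUERYTRACEON 2453); check the documented list of QUERYTRACEON-supported trace flags for your build.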

OPTION(RECOMPILE)

Table variable details:
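
Same assumed query shape as the baseline sketch, with the recompile hint added so the optimizer sees the table variable’s actual row count at execution time:

```sql
-- Hypothetical sketch: the baseline query plus OPTION (RECOMPILE).
SELECT u.UserId, p.Title
FROM @Users AS u
JOIN dbo.Posts AS p
    ON p.OwnerUserId = u.UserId
OPTION (RECOMPILE);  -- statement recompiled after @Users is populated
```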

TEMPORARY TABLE

Temporary table details:
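
A hypothetical sketch of the temporary table version. Unlike a table variable, a #temp table gets column statistics and can trigger recompiles on its own, so no trace flag or hint is needed:

```sql
-- Hypothetical sketch: same assumed workload with a temporary table.
CREATE TABLE #Users (UserId INT PRIMARY KEY);

INSERT INTO #Users (UserId)
SELECT Id
FROM dbo.Users
WHERE Reputation > 1;  -- assumed filter

SELECT u.UserId, p.Title
FROM #Users AS u
JOIN dbo.Posts AS p
    ON p.OwnerUserId = u.UserId;

DROP TABLE #Users;
```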

THE RESULTS

The baseline query has a clustered index scan on the table variable since almost 3.5 million rows are returned from it. The other three queries have the same execution plan and STATISTICS IO output, with a clustered index seek on the table variable or temporary table since only 963 rows are returned from it.

WHAT WOULD TARA DO?

Should you always use trace flag 2453 or OPTION (RECOMPILE) if you are using table variables? Maybe. But I’m leaning towards “it depends”. Microsoft says the “trace flag must be used with caution because it can increase number of query recompiles which could cost more than savings from better query optimization”.

Some people say to use table variables if the table will contain a small data set. I’ve been burned by that before. I’ve experienced poor performance when the table variable always only had one row in it. Switching it to a temporary table improved the performance dramatically. Your mileage may vary.

I’ll just stick with temporary tables.


15 Comments

  • Hi Tara,

    Someone is going to have to make a Home Depot run for all these nails getting driven into the table variable coffin.

    I have a few questions:

    1. What cardinality estimator are you using for this demo?
    2. Given that temp tables can have statistics, why does the optimizer still assume just 1 row is coming out of it? Is this related to the CE version?
    3. After attending Wayne Sheffield’s session at PASS last year, it was my understanding that using recompile on table variables would force the optimizer to make its estimate after the table had been populated, thus leading to a more sensible/accurate row estimate. This may be part of the new CE as well, I don’t remember.

    Nice meeting you at PASS, btw 🙂

    • Hi Brandon, nice meeting you too!
      1. The demo was on the new CE. I was using a 2014 instance with compatibility level 120 for the StackOverflow database.
      2. The same behavior occurs on the old compatibility level.
      3. Not sure on that one, but something I’ll play around with when I get a chance.

  • Hi.
    If I have a stored procedure that gets a TVP (table-valued parameter) as input,
    is it better to move the data into a temp table inside the stored procedure or stick with the TVP?
    Thanks

  • I recently read a blog post that suggested using memory-optimized table variables. Unfortunately, I’m not able to find the post with a quick Google search. Regardless, the post showed a potential solution to the poor performance of table variables. Would you expect that memory-optimized table variables would have the same problems as “traditional” table variables? I would expect the answer to be yes. However, given the performance advantage that memory-optimized table variables have over “traditional” ones, would these problems be more tolerable or even trivial?

    • The solution for table variables is simply to never use them except in a few edge cases (SERIOUSLY high call rates where recompilation locks are a demonstrable problem, or you need to hold contents after a rollback). Otherwise, temporary tables always. I can demo bad plans with table variables using a single-row, single-column entity where a temporary table gets the optimal plan.

      I also have it from an individual I trust that in-memory temp objects at least in their case were causing a myriad of problems.

      In pursuing in-memory table variables you completely miss the root cause of the problems. It is not that THEY perform slowly, it is that THEY create horrifically bad plans.

      • If you’re implying that table variables are purely in memory they are not. That’s a myth. Table variables are persisted the same as #temp tables.
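
          One way to see this is to check tempdb page allocations while a table variable is in use; a sketch (the row source and sizes here are arbitrary):

          ```sql
          -- Sketch: a table variable allocating pages in tempdb.
          DECLARE @big TABLE (filler CHAR(8000) NOT NULL);

          INSERT INTO @big (filler)
          SELECT TOP (1000) REPLICATE('x', 8000)
          FROM sys.all_objects;  -- any row source with enough rows

          -- A nonzero count here indicates tempdb pages allocated for the
          -- table variable, just as for a #temp table.
          SELECT user_objects_alloc_page_count
          FROM sys.dm_db_task_space_usage
          WHERE session_id = @@SPID;
          ```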

        • Sorry, meant to also say I agree that #temp tables generally do generate better execution plans. Just wanted to clarify my point on how they are persisted.

  • Vitalijus Jersovas
    May 18, 2017 7:04 am

    Hi
    Thanks for the great article. Are you sure that “Estimated number of rows” is correct in “Table variable details”, “Temporary table details” (the yellow screenshots) and the results table? I expected the value 963 for the last three cases.
    Also, it’s unclear why “Actual rows count” is not equal across all cases. Aren’t all the queries equal (except for the options)?

  • Kalen Delaney (@sqlqueen)
    March 1, 2018 8:07 pm

    I am very confused here. Why is the first example called the base case if it returns so many more rows than the other queries? I expected the only difference to be the performance, not the number of rows returned. Also, why are you showing the exact code if we can’t run it? Do you have any code to create the table you’re using? And what’s with &lt ? Thanks!

    • Hi Kalen!

      Each case returns the same number of rows. The “actual number of rows” difference that you are seeing is on the seek/scan of the table variable/temporary table, not on the SELECT operator. The table variable baseline version does a full clustered index scan. The other options do a seek with 963 rows. My screenshots should have been clearer about this. But rest assured that all cases do return the same number of rows.

      Unfortunately the less than and greater than signs get converted to “&lt;” and “&gt;” respectively when I copy code into the WordPress code option. You have to change those back when copying the code out. Not sure if there’s a fix for that, but it’s definitely something I’ve noticed and been annoyed by.

      We use the Stackoverflow database for our demos. You can download the database if you’d like to play along: https://www.brentozar.com/archive/2015/10/how-to-download-the-stack-overflow-database-via-bittorrent/

      • Ack, and when I put the lt/gt thing in that comment, it converted them to the proper signs. Why won’t it do that in the code then? Weird!

  • Thank you for your great article.
    I just noticed a side effect of Trace Flag 2453.
    One of our applications uses cursors on table variables. This normally works fine, but as soon as this trace flag is enabled we get “Msg 16943, Level 16, State 4”: “Could not complete cursor operation because the table schema changed after the cursor was declared.”
    Now we plan to switch to “real” temporary tables.

