Temp Tables vs Table Variables vs Memory Optimized Table Variables [Video]

SQL Server, TempDB

Should you use temp tables or table variables in your code? Join Microsoft Certified Master Kendra Little to learn the pros and cons of each structure, and take a sneak peek at new Memory Optimized Table Variables in SQL Server 2014.

This video is a recording of a live webcast, so please excuse any audio and video imperfections. Want the scripts from the webcast? Scroll on down.

Restore AdventureWorks2012

We’ll be changing some things in the database, so I like to restore it to a database named “Showdown” for easy cleanup, but you don’t have to do it that way.
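The restore script itself didn't survive into this page. A minimal sketch of the idea, assuming the standard AdventureWorks2012 backup file and its default logical file names (the disk paths here are placeholders you'd adjust for your server):

```sql
-- Restore the AdventureWorks2012 backup under the name "Showdown"
-- so it can be dropped and re-restored for easy cleanup.
-- Paths and logical file names are assumptions; check yours with
-- RESTORE FILELISTONLY FROM DISK = N'...\AdventureWorks2012.bak';
RESTORE DATABASE Showdown
    FROM DISK = N'S:\Backup\AdventureWorks2012.bak'
    WITH
        MOVE 'AdventureWorks2012_Data' TO N'S:\Data\Showdown.mdf',
        MOVE 'AdventureWorks2012_Log'  TO N'S:\Logs\Showdown_log.ldf',
        REPLACE;
GO
```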

Test Temp Tables: Old School

In this example we create a nonclustered index after the temp table is created. By running it repeatedly you can see that the temp object is recreated each time.
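The webcast script isn't reproduced here, but the pattern looks roughly like this. The table and column names are illustrative assumptions, not the exact demo script:

```sql
USE Showdown;
GO

-- Old school: create the temp table first, then add the
-- nonclustered index in a separate statement.
CREATE TABLE #TestTemp (
    FirstName VARCHAR(256),
    CharCount INT
);

CREATE NONCLUSTERED INDEX ix_TestTemp_CharCount
    ON #TestTemp (CharCount);

INSERT #TestTemp (FirstName, CharCount)
SELECT FirstName, LEN(FirstName)
FROM Person.Person;

SELECT COUNT(*)
FROM #TestTemp
WHERE CharCount > 5;

-- Run the whole batch repeatedly: the temp object is dropped
-- and recreated each time.
DROP TABLE #TestTemp;
GO
```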

Test Table Variables

Here we use the new SQL Server 2014 inline index creation syntax to create a nonclustered index on the table variable. (We couldn’t do that before!) But looking at the execution plan, things get a little weird.
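Again the demo script isn't shown on this page; a sketch of the 2014 inline index syntax, with assumed names matching the temp table example above:

```sql
-- SQL Server 2014+: declare the nonclustered index inline,
-- inside the table variable definition.
DECLARE @TestTV TABLE (
    FirstName VARCHAR(256),
    CharCount INT,
    INDEX ix_TestTV_CharCount NONCLUSTERED (CharCount)
);

INSERT @TestTV (FirstName, CharCount)
SELECT FirstName, LEN(FirstName)
FROM Person.Person;

-- The "weird" part: table variables carry no statistics, so the
-- optimizer typically estimates very low row counts for @TestTV
-- even when it holds thousands of rows.
SELECT COUNT(*)
FROM @TestTV
WHERE CharCount > 5;
GO
```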

Memory Optimized Table Variables

Oh Brave New World. First, enable things:
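The enablement script is missing from this page. The prerequisite in SQL Server 2014 is a memory-optimized filegroup on the database; the file path below is a placeholder:

```sql
-- A database needs a filegroup flagged CONTAINS MEMORY_OPTIMIZED_DATA
-- before you can create memory-optimized table types in it.
ALTER DATABASE Showdown
    ADD FILEGROUP Showdown_MOD CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE Showdown
    ADD FILE (
        NAME = 'Showdown_MOD',
        FILENAME = 'S:\Data\Showdown_MOD'   -- placeholder container path
    )
    TO FILEGROUP Showdown_MOD;
GO
```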

Then create the table type:
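The type definition isn't reproduced here either. A sketch with assumed names, following the same shape as the earlier examples (note the SQL Server 2014 quirks: the type needs at least one index declared inline, the index key column must be NOT NULL, and indexed character columns would need a BIN2 collation):

```sql
-- Memory-optimized table type: schema and index are fixed at
-- CREATE TYPE time.
CREATE TYPE dbo.TestMOTVType AS TABLE (
    FirstName VARCHAR(256),
    CharCount INT NOT NULL,
    INDEX ix_CharCount NONCLUSTERED (CharCount)
)
WITH (MEMORY_OPTIMIZED = ON);
GO
```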

Now, let’s test!
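The test script is also missing; the idea is to declare a variable of the new type and use it like any table variable, with the data living in memory instead of tempdb:

```sql
-- Declare a variable of the memory-optimized table type and
-- run the same workload as the earlier tests.
DECLARE @TestMOTV dbo.TestMOTVType;

INSERT @TestMOTV (FirstName, CharCount)
SELECT FirstName, LEN(FirstName)
FROM Person.Person;

SELECT COUNT(*)
FROM @TestMOTV
WHERE CharCount > 5;
GO
```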


7 Comments

  • Dayan Gonzalez Katz
    July 17, 2014 10:42 am

    Great post!!! I’m really considering changing lots of temp tables and table variables to the new memory optimized table variable in SQL 2014. I guess there won’t be that much memory consumption in a 1.5GB database with 32 GB ram dedicated sql server right?

I expect to get a good performance gain with this, but please correct me if you think I might be wrong.

    Thanks a lot for this post!
    Dayan Katz.

    Reply
    • Kendra Little
      July 17, 2014 10:55 am

      I think you might be wrong. 🙂

      “I guess there won’t be that much memory consumption in a 1.5GB database with 32 GB ram dedicated sql server right?” Well, you have to guess– and you can’t really monitor or measure. That’s where I start to get worried, because on the chance that things do get slow, it’s very tricky for you to document what the state of things was. You might have plenty of free RAM on the server and have SQL Server run into some sort of internal memory allocation bottleneck that’s difficult to diagnose.

      Don’t get me wrong, I love new features and I’m really interested in this one. But if you’re talking about a mission critical system, start with just using temp tables and taking advantage of the IO improvements in tempdb in 2014 as a safer and easier bet, and easing into the memory optimized table variables much more gradually (https://www.brentozar.com/archive/2014/04/memory-ssd-tempdb-temp-table-sql-2014/).

      If you’ve got load balancing and/or can tolerate the risk for other reasons, though, I’d love to hear how things go!

      Reply
  • Dayan Gonzalez Katz
    July 21, 2014 10:26 am

Hi Kendra, thanks a lot for your advice! I thought it wouldn't be a problem, since most of the time this server has almost half of its RAM not being used at all: it stays somewhere between 12 GB and 16 GB of consumption in Task Manager (even though I have set SQL Server's max memory to 30 GB). So I thought I could get a nice performance boost by putting the other 16 GB to work with some memory-optimized table variables.

    I get your point that it's impossible to know what the memory consumption will be after changing many stored procedures, but I thought it wouldn't be that hard if I changed one procedure per day; there are around 5 procedures using regular temp tables and table variables. Then I'd monitor the RAM every day and see if something looks bad, and if it starts looking bad, I guess I'd go back to the old version of the stored procedure with the regular temp/variable tables.

    There are no performance issues on this server anyway; I just thought I could squeeze more performance out of it and make some website pages display a little faster, since these procedures use a LOCAL FAST_FORWARD cursor to process a temp table row by row with some heavy logic. (I know changing this to set-based queries is the best way, but believe me, this is very heavy logic and very long scripting inside the cursor. I have turned cursors into set-based queries before, but never anything this big, hehe.)

    I saw a post that benchmarked the new memory-optimized table with a hundred thousand rows inside a regular SQL WHILE loop (which I have seen can be slower than a LOCAL FAST_FORWARD cursor), and it processed the rows one by one in 0 seconds! This caught my attention, since our cursor can take from 1 second up to 5 seconds sometimes, and there are always fewer than 2 thousand rows. Of course we use heavier logic than the benchmark I saw, but maybe it would boost performance a little more.

    Reply
  • Pratish Pillay
    July 30, 2014 3:52 pm

Great post, and I wish I had known all of this before we wrote our stored procedures. One of the most annoying things about Microsoft is their reluctance to upgrade the debugger so we can actually watch temp table and table variable values while debugging. Enough ranting, though. If you know of a method to do this, please please let us know. I'm currently using a script to query temp tables and watch the data as I debug, but I'm still lost on how to watch table variables while debugging. Thanks much!

    Reply
  • My Dear,

    I started using SQL Server 2014 for testing purposes a couple of days ago, and I have watched many videos about memory-optimized tables.

    My main objective is to speed up the payroll calculation process as much as possible, but I have noticed the following:
    1- Disadvantages of memory-optimized tables
    There are too many restrictions on memory-optimized tables: you can't use OR, and you can't use an outer join either! And above all of this, the videos from Microsoft just simulate a simple SELECT statement that doesn't really need a stored procedure. Worst of all, memory-optimized tables don't support relationships (you can't put any relation on these tables), which means my database design would just be destroyed if I migrated some tables there. Many functions don't work either.
    When it comes to the real world, that is never practical. I have more than 300 stored procedures in a very sophisticated product; 80% of them use temp tables, and all of them use a temp table called #Temp, so I can't even afford to keep the temp table structure in tempdb if I use the new inline nonclustered index syntax.
    2- The most dangerous part is this:
    I don't want to speed up stored procedures. What I really need to speed up is the functions (table-valued, with some recursive behavior sometimes) that are called more often than procs in the code, and inside SELECT statements, and nothing has been released to do such a thing. Calculating employee salaries is a very complex scenario that varies between my clients, and many rules have to be checked from different tables each and every time a report is called, as you don't know what has changed until you recalculate.

    Look at what I have tested instead of Microsoft's great ideas for memory-optimized tables:
    1- I relocated the entire database onto a RAM disk (actually one filegroup, with a normal table created in this filegroup, that really is resident in memory). This gives me the full flexibility of memory speed with all the features of a normal table; I just had to plan carefully for hardware crashes and recovery.
    2- What we really want Microsoft to do is relocate tempdb entirely into memory, that is all. If they consider such a realistic solution, things will be way better. They keep tempdb as a physical file on the HDD because SQL Server was designed when memory was expensive, but now they should redesign the tempdb location, and all the old technology would stay operational without big changes.

    Thanks a lot for the great video you have shared.
    Tarek@easyhrsoft.com

    Reply
  • Great post as always. Just a layman's question (as I haven't found an answer elsewhere): I want to replace my temp tables with MOTs, but without declaring them first. Something like this: “SELECT column1, column2 INTO MyMOT FROM MySourceTable”.
    MyMOT is the memory optimized table I want to create by selecting the data from MySourceTable into it.
    Is that even possible?

    Reply
  • Interesting. I'm wondering if you could pass a memory table (code) to a Visual Studio query to pump data out, instead of using a basic select query.

    Reply
