How to Implement In-Memory OLTP Quickly and Easily

When you first hear about it, SQL Server’s In-Memory OLTP sounds like a new feature that’s hard to implement, requires a lot of schema changes, and takes planning and downtime.

I’m here to tell you it’s simply not true, and you can do it in just 3 steps. Here’s how:

  1. Read Microsoft’s guidance on how much memory you’ll need
  2. Give your SQL Server that much
  3. Set your max server memory setting appropriately, leaving 4GB or 10% free for the OS, whichever is higher
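
If you’d rather script step 3 than click through the GUI, here’s a minimal sketch – the 115000 MB figure assumes a hypothetical server with 128 GB of RAM, so plug in your own number:

    -- Leave 4GB or 10% free for the OS, whichever is higher.
    -- Example: a 128 GB server, 10% = ~13 GB reserved, so give SQL Server roughly 115 GB.
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure N'max server memory (MB)', 115000;
    RECONFIGURE;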

That’s it. That’s all there is.

Forget the feature named “In-Memory OLTP” and just put the hardware in that the feature would require anyway. You’re going to have to do it sooner or later, so just start there. Before you go writing the necessary conflict detection code, trying to rip out your unsupported cross-database queries, messing around with hash index troubleshooting, or stressing out about their statistics problems, just buy the damn RAM.

You’d be stunned at how often that simply makes the performance issues go away.

32 Comments

  • Thomas Franz
    March 11, 2019 9:27 am

    Thank you very much – it was the first time I read “leaving 4GB or 10% free for the OS, whichever is higher”. Usually you only find the 10% rule, which doesn’t make much sense when you have a server with 1 TB of RAM (or even 128 GB), since Windows will not suddenly start using 13 or 100 GB just because you have that much RAM.

    Reply
    • Noah Engelberth
      March 11, 2019 12:00 pm

      As I understand it, the primary reason for “4GB or 10%, whichever is higher” (which would imply reserving 100GB on that 1TB RAM system) is not to reserve it just for Windows. There are some database engine processes that can use RAM above the Max Memory setting, and if you have any other roles (SSIS, SSAS, or SSRS being the common “big ones” that get co-located and can consume significant memory) on the server that need RAM as well, they’ll be competing against the database engine for those resources.
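
      A rough way to see that gap for yourself is to compare what the process is actually using against the target (a sketch using sys.dm_os_process_memory, sys.dm_os_sys_info, and sys.configurations):

          -- Physical memory the SQL Server process is using vs. the max server memory setting.
          SELECT opm.physical_memory_in_use_kb / 1024 AS process_memory_in_use_mb,
                 osi.committed_target_kb / 1024       AS committed_target_mb,
                 c.value_in_use                       AS max_server_memory_mb
          FROM sys.dm_os_process_memory AS opm
          CROSS JOIN sys.dm_os_sys_info AS osi
          CROSS JOIN sys.configurations AS c
          WHERE c.name = N'max server memory (MB)';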

      Reply
      • It’s also for other user processes. I’ve been amazed at how often I’ve seen admins say, “Oh it’s okay, you can install ____ on the database server – it has plenty of memory.” In the ___, put in SSAS, SSIS, SSRS, Polybase, ML, R, Python, you name it.

        Reply
  • Emanuele Meazzo
    March 11, 2019 9:40 am

    I agree – In-Memory OLTP is really useful only when dealing with huge lock/latch contention on a table.

    Reply
  • Please warn people that once you create a memory-optimized filegroup, you can only remove it by dropping the database. Also, this feature may cause issues with disk-based snapshot backups; you may not be able to restore the database from the snapshot.
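
    For reference, here’s roughly what adding that filegroup looks like – the database and path names here are made up, and once it’s in, it’s in for good (a sketch):

        -- WARNING: this is a one-way operation; the filegroup can't be removed
        -- without dropping the database.
        ALTER DATABASE MyDatabase
            ADD FILEGROUP MyDatabase_InMemory CONTAINS MEMORY_OPTIMIZED_DATA;

        ALTER DATABASE MyDatabase
            ADD FILE (NAME = N'MyDatabase_InMemory',
                      FILENAME = N'D:\Data\MyDatabase_InMemory')
            TO FILEGROUP MyDatabase_InMemory;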

    Reply
  • Roman Brauchle
    March 11, 2019 9:57 am

    Just for my understanding: is In-Memory OLTP anything other than letting the admin decide whether a table should stay in memory all the time? Normally SQL Server should know which tables are queried very often and keep them in memory. Now I’m telling the server:
    Hey, I know better than you. Keep this table in memory all the time.

    Reply
    • Emanuele Meazzo
      March 11, 2019 10:26 am

      Nope, that’s lock pages in memory, we’ve had it since forever.
      In-Memory OLTP is a new table structure that lives totally in memory – no more pages and extents, but a whole different beast, with different index types and optimistic concurrency behavior.
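
      To make that concrete, a memory-optimized table looks roughly like this (a sketch – the table name and bucket count are made up):

          -- No pages or extents: rows live in memory, the primary key is a hash index,
          -- and readers/writers don't block each other (optimistic concurrency).
          CREATE TABLE dbo.ShoppingCart
          (
              CartId      INT       NOT NULL
                  PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
              UserId      INT       NOT NULL,
              CreatedDate DATETIME2 NOT NULL
          )
          WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);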

      Reply
      • Heh, no, Roman’s referring to DBCC PINTABLE, which was only a feature briefly during the SQL 2000 era. Since then, we haven’t had the ability for an admin to decide if a table should stay in memory all the time – even with Lock Pages in Memory, that doesn’t let you define what objects stay in RAM (since after all, you might not have enough memory for a particular table.)

        Reply
  • You can hurt MS business with articles like this. First “There’s a bottleneck in Azure SQL DB storage throughput”, then this one 🙂

    Reply
    • Here’s my take on that… Can’t speak much to the other post except Brent posted the truth and it may actually spur more on-premise sales, which aren’t cheap either. For this post, I think you have it backwards. A lot of people may think that SQL Server can’t be used for “big stuff” unless they resort to such trickery as “in-Memory” along with all the complexities and headaches that go along with it. Brent just stated that you don’t actually need to do that and that could spur additional sales for those put off by “In-Memory”.

      Reply
  • Noted, thanks Brent. Getting RAM isn’t the problem (it’s relatively cheap, and with hot-add in VMware it’s simple to implement) – it’s dealing with the constant “why do your database servers need so much RAM!” each and every day from the sysadmins and the hardware guys that becomes grating after a while 🙂

    Reply
    • So true. I just tell them upfront now that I want all of their RAM and all of their storage, so let’s not keep arguing about it :-).

      Reply
  • Yup-a-duppa-ding-dong. As I like to say, “More RAM covers a multitude of sins”.
    When the in-memory stuff came out with the nickname “Hekaton” for 100x faster I thought “Hey now!”.
    But compared to just having enough RAM to keep your standard tables in buffers, I’ve never seen anything better than about 2x faster. And the headaches to get there, OMG, by the time that’s factored in you probably net no advantage at all and a hideous maintenance load going forward.

    Reply
    • Koen Verbeeck
      March 12, 2019 12:53 am

      The 100x faster doesn’t just come from keeping data in memory. It also comes from having less locking, and because some stored procs are natively compiled, which can give an extra boost (at least in the demos 😉).
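
      For the curious, a natively compiled proc looks roughly like this (a sketch against a hypothetical memory-optimized dbo.ShoppingCart table):

          CREATE PROCEDURE dbo.usp_GetCart @CartId INT
          WITH NATIVE_COMPILATION, SCHEMABINDING
          AS
          BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
              -- Compiled to machine code (a DLL) instead of being interpreted per execution.
              SELECT CartId, UserId, CreatedDate
              FROM dbo.ShoppingCart
              WHERE CartId = @CartId;
          END;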

      Reply
      • I typically get between 20X and 1000X improvements in performance and reductions in resource usage just by fixing “performance challenged code” (a politically correct term for what I normally call it 😀 ) One of my favorite mantras is “Performance is in the code”.

        Reply
        • Koen Verbeeck
          March 12, 2019 7:54 am

          I think the point is that if you have excellent code that can in no way run faster on your current system (which already has enough RAM) because of locking or whatever, you can resort to this trickery to make it run even faster. Like using nitro to make your car even faster 🙂

          Reply
          • Heh… I’ve got 16 cores hyperthreaded to 32 and 512 GB of RAM. My largest DB is only 2.3 TB on that box. From the sound of other posts in this thread, that’s not going to be nearly enough RAM. Seems awfully expensive just to put a hood-scoop on a shopping cart.

      • @Koen, there must be some larger benefits there for transactional code from the no-block architecture, and there must be some benefits from compiled code, but the 100x stuff was in comparison to mostly disk-resident data – and indeed hard-disk-resident data, not even SSD-resident data.
        Every time I’ve tried to use the in-memory stuff, it turned out I eventually needed to store the data in a normal table as well, for various ad-hoc uses that won’t run against memory tables, and again, by the time you’ve also done that – even if you do it lazily (after the fact) – the modest benefits get lost.

        Reply
    • Brian Boodman
      March 12, 2019 6:32 am

      You can get even better performance if you disable data durability on your in-memory tables, as this bypasses transaction logging and checkpoints. It would be nice if these benefits were automatically available on TempDB (which doesn’t even have schema durability), but we’re not quite there yet.
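
      Concretely, that’s the DURABILITY = SCHEMA_ONLY option – the table structure survives a restart, the data doesn’t (a sketch; the table name is made up):

          -- SCHEMA_ONLY: no transaction logging and no checkpoint I/O for this table,
          -- so it comes back empty after a restart. Staging/session data only.
          CREATE TABLE dbo.ImportStaging
          (
              RowId   INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
              Payload NVARCHAR(4000) NULL
          )
          WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);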

      Reply
      • Disable Data Durability for the sake of performance… to me, that seems contrary to all that is holy unless the data is totally temporary (for example, in a staging table used for imports from files) and can be quickly replaced if something happens to it. I find that fixing “performance challenged code” instead of resorting to such trickery is much more effective not to mention being less dangerous if the data has irreplaceable meaning. If it were available for TempDB, then I might use it but for just about anything else, no way.

        Reply
  • Pittsburgh DBA
    March 12, 2019 6:43 am

    I agree with this 10,000%. When I tried to use this In-Memory OLTP feature, even on a 32-core box with 256 GB RAM, it sucked. The vast number of limitations, coupled with the shockingly-slow I/O was a non-starter.

    Reply
    • With 256 GB of RAM and 32 cores, you probably don’t have anywhere near enough memory unless you’re running queries with absolutely ridiculous CPU costs – that’s only 8 GB per core.

      Reply
  • Alex Friedman
    March 12, 2019 8:48 am

    Hahahaha I love it!

    Reply
  • Carlos Garcia
    March 13, 2019 8:10 am

    Oh, please read “Unsupported SQL Server Features for In-Memory OLTP” before implementing… this really is an awesome feature, but also a tricky one to troubleshoot: https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/unsupported-sql-server-features-for-in-memory-oltp?view=sql-server-2017

    Have fun 😀

    Reply
  • Darko Martinovic
    March 14, 2019 12:55 am

    The sentence “forget In-Memory OLTP” sounds a little bit cruel at first. But if you check this link https://www.red-gate.com/simple-talk/sql/t-sql-programming/converting-database-memory-oltp/ and this one https://github.com/Darko-Martinovic/MOConverter, you have to agree with Brent.

    Reply
  • Raphael Ferreira
    February 26, 2021 9:39 pm

    ROFL…!!! You crack me up…! Loved it. THANKS! I needed the laugh at the end of a Friday afternoon… Best, Raphael.

    Reply
  • Great article and interesting comments!
    Thank you for all your efforts!
    We are currently migrating to in-memory tables on SQL Server 2019 Standard Edition. The disk-based table is 55 GB of data + 54 GB of indexes (71M records). RAM is 900 GB. But during the data migration (an INSERT statement) we get an error message: “Msg 41823, Level 16, State 109, Line 150 – Could not perform the operation because the database has reached its quota for in-memory tables. This error may be transient. Please retry the operation.”
    The in-memory file is set to “unlimited”, so it looks strange, since SQL Server 2019 should not have any size restrictions for in-memory tables.
    Do you have any ideas about what might be causing this?
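
    One way to see how much memory those tables are actually consuming (and how close the database is to any per-database quota) is sys.dm_db_xtp_table_memory_stats – a sketch, run in the affected database:

        -- Memory allocated and used by each memory-optimized table in this database.
        SELECT OBJECT_NAME(object_id)                   AS table_name,
               memory_allocated_for_table_kb / 1024     AS table_allocated_mb,
               memory_used_by_table_kb / 1024           AS table_used_mb,
               memory_allocated_for_indexes_kb / 1024   AS index_allocated_mb
        FROM sys.dm_db_xtp_table_memory_stats
        ORDER BY memory_allocated_for_table_kb DESC;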

    Reply
  • Rejected this as well two years ago. We have 10 TB of data. The argument is trivial: it was possible to achieve the same or better performance without In-Memory OLTP just by using proper architecture and good code design. And those frustrating restrictions really limit your ability to write elegant code – In-Memory OLTP can easily lead to spaghetti. Same as with cursors: any cursor can be rewritten as a faster, simpler, and far more readable loop. And one more thing: it’s vendor lock-in.

    Reply
  • George Walkey
    October 25, 2022 4:14 pm

    SCCM (System Center) patching can ITSELF take over 4GB of RAM.

    Reply
