Slow Startup, Failover, and Restore Times with In-Memory OLTP

When In-Memory OLTP came out, I worked with it briefly, and I remember coming away thinking, “Who in their right mind would actually use this?” I was so horrified that I wrote a presentation about it and gave it at a couple of conferences.

Many laughs were had, but obviously that didn’t make me any friends at Microsoft, ha ha, ho ho. I figured I was done with Hekaton, and I wouldn’t ever have to see it again because y’all all saw the video, and you’d never be crazy enough to implement that feature.

<sigh>

Well, here it is in 2023, and recently I’ve talked to a couple of architects who wish they could go back in time and watch that video. In both cases, they suffered from the same issue.

The short story is that the more data you put into durable In-Memory OLTP tables – and even just 5GB of data can hit this issue – the more your startups, failovers, and restores turn into long stories, to the point where other databases on your SQL Server are practically unusable.

Setting up the 5GB Votes table

To demonstrate the problem, we’ll start with a large copy of the Stack Overflow database. I’ll drop my nonclustered indexes just to make the next screenshot more clear, and then I’ll list the tables in the database to find a good candidate for the demo.
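If you're playing along at home, a query like this one finds candidates – just a sketch against the standard DMVs, not necessarily the exact query from the screenshot:

/* List the largest tables in the current database to pick a demo
   candidate. Counts heap/clustered index rows only (index_id 0 or 1),
   so nonclustered index space is excluded. */
SELECT t.name AS table_name,
       SUM(ps.row_count) AS row_count,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.tables AS t
JOIN sys.dm_db_partition_stats AS ps
    ON ps.object_id = t.object_id
WHERE ps.index_id IN (0, 1)
GROUP BY t.name
ORDER BY reserved_mb DESC;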

The highlighted table, dbo.Votes, has about 150 million rows and takes up about 5GB of space. That’s not big data by any means, and we can easily fit it in memory on our 8-core, 60GB RAM server. Let’s migrate it into In-Memory OLTP in order to make performance go faster:
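The migration looks roughly like this – a minimal sketch, assuming the database doesn't already have a memory-optimized filegroup. The file path and the Votes_InMemory name are hypothetical; the column list matches the public Stack Overflow export:

/* Every database using In-Memory OLTP needs a memory-optimized
   filegroup with at least one container (the FILENAME here is a
   directory, not a file). */
ALTER DATABASE StackOverflow
    ADD FILEGROUP StackOverflow_InMemory CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE StackOverflow
    ADD FILE (NAME = N'StackOverflow_InMemory',
              FILENAME = N'D:\Data\StackOverflow_InMemory')
    TO FILEGROUP StackOverflow_InMemory;
GO

/* A durable (SCHEMA_AND_DATA) memory-optimized copy of dbo.Votes.
   Memory-optimized tables require at least one index, so we use a
   nonclustered primary key on Id. */
CREATE TABLE dbo.Votes_InMemory
(
    Id           INT NOT NULL PRIMARY KEY NONCLUSTERED,
    PostId       INT NOT NULL,
    UserId       INT NULL,
    BountyAmount INT NULL,
    VoteTypeId   INT NOT NULL,
    CreationDate DATETIME NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

/* One big insert for brevity; in real life you'd batch 150M rows. */
INSERT INTO dbo.Votes_InMemory
    (Id, PostId, UserId, BountyAmount, VoteTypeId, CreationDate)
SELECT Id, PostId, UserId, BountyAmount, VoteTypeId, CreationDate
FROM dbo.Votes;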

After migrating that 5GB table to In-Memory OLTP, it’s taking up 5GB RAM, right? Well, no:

Try 28GB RAM. That’s why we have a sp_Blitz check to warn about high memory usage for In-Memory OLTP:
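If you want to verify that number yourself, the In-Memory OLTP DMVs break down per-table memory use. A quick sketch:

/* Memory allocated to each memory-optimized table and its indexes. */
SELECT OBJECT_NAME(tms.object_id) AS table_name,
       tms.memory_allocated_for_table_kb / 1024 AS table_mb,
       tms.memory_allocated_for_indexes_kb / 1024 AS indexes_mb
FROM sys.dm_db_xtp_table_memory_stats AS tms;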

As to why an uncompressed 5GB rowstore table takes up 28GB RAM, that’s a topic for another blog post. (Sometimes, I imagine rowstore tables being introduced today – people would think they’re the most amazing thing ever.)

What happens when SQL Server restarts?

For most of us, when we restart SQL Server, our biggest concerns are things like dropping connections, losing performance metrics, and starting over again with a fresh buffer pool. In-Memory OLTP users have a bigger problem: when the SQL Server restarts, CPU goes straight to 100%, and stays there:

Ouch. Why is CPU so high? Because with In-Memory OLTP, SQL Server starts up the database by reading the In-Memory OLTP tables up into something we call “memory.” It’s using less memory than before, thank goodness – it’s “only” 13GB – but 13GB of data still takes a long time to pull from disk and reconstruct in memory. (I’m not going to go into technical terms like crash recovery phases here because it’s not necessary to convey the overall problem.)

You can see exactly how long it takes in the SQL Server logs. In this case, the StackOverflow database recovery took 50 seconds – and because our server’s got really fast storage, the storage was able to deliver data so quickly that it pegged CPU at 100% for all 50 seconds.
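To find those recovery timings yourself, search the error log for the recovery messages – the search strings here are just examples:

/* Search the current error log (0) of type SQL Server (1) for lines
   mentioning both 'Recovery' and the database name. */
EXEC sys.xp_readerrorlog 0, 1, N'Recovery', N'StackOverflow';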

During startup, wait stats show a lot of wait times on XTP_PREEMPTIVE_TASK and SLEEP_DB_STARTUP:
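Here's one way to check, querying the cumulative wait stats right after startup (at that point, they're dominated by recovery):

SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms / 1000.0 AS wait_time_seconds
FROM sys.dm_os_wait_stats
WHERE wait_type IN (N'XTP_PREEMPTIVE_TASK', N'SLEEP_DB_STARTUP')
ORDER BY wait_time_ms DESC;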

There’s also blocking, which is kinda funny – anything that needs to query sys.databases gets blocked, like IntelliSense queries running in the background:
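You can catch the blocking with a quick look at the running requests – sessions stuck behind database startup show a non-zero blocking_session_id:

SELECT session_id,
       blocking_session_id,
       wait_type,
       wait_time AS wait_time_ms
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;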

This happens during failovers, restores, and attaches, too.

Startups, failover clustered instance failovers, and restores all exhibit the same problem: they need to start up the database, and doing that means reconstructing the In-Memory OLTP data from scratch.

This is particularly problematic for shops that use Always On Availability Groups. Say you want to add a new database into an existing cluster, and the new database happens to use In-Memory OLTP. You restore it on the primary, and … boom, the server hits 100% CPU usage for an extended period of time, which affects all the running queries on existing databases on that server.

Here’s what CPU looks like at the end of a database restore:

CPU was relatively low while the restore was processing, but once SQL Server was done writing the files, CPU went straight to 100%. Why? Because it needed to bring the database online, which meant reading the In-Memory OLTP data from disk and reconstructing the table.

Even if you just attach an In-Memory OLTP database – a process that is normally near-instantaneous – you bring SQL Server to its knees while it reads through that data and populates memory.

In my example, I don’t have indexes on the In-Memory OLTP table, but if I did, the situation would be even worse. Indexes on these tables are only kept in memory, not on disk, so they’re reconstructed from scratch at startup, failover, restore, and attach time.

Bottom line: the more In-Memory OLTP durable data you have, and the more databases you have that use it, the worse this gets.

Trace flags 3408 and 3459 don’t help, at least not on SQL Server 2022.

Among other sources, Konstantin Taranov’s excellent trace flag list reports that trace flag 3408 forces all databases to use just one thread when starting up. I don’t think I’ve ever needed to use that trace flag, but it doesn’t appear to help here. In my lab, I set up 3408 in the startup config, restarted the SQL Server, and CPU still went straight to 100%:
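(If you're testing this yourself, remember that 3408 has to go in the startup parameters – e.g. -T3408 – rather than DBCC TRACEON, since it affects database startup. You can confirm it took effect after the restart:)

DBCC TRACESTATUS(3408);
/* Status = 1 and Global = 1 means the flag is active server-wide. */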

Furthermore, 3408 doesn’t appear to have the desired effect on SQL Server 2022 (and I didn’t bother checking other versions, since it wouldn’t have helped my client, who had a lot of databases with In-Memory OLTP). Databases are still starting up with multiple threads, which drives CPU to 100%:

Same problem with trace flag 3459, which is supposed to disable parallel redo for AGs, but that’s unrelated to database startup, as we can see by the flames:

And just because someone’s going to ask, no, setting MAXDOP to 1 has no effect on system processes like bringing databases online.

So how do we get faster startups, failovers, and restores?

In theory, you could add CPUs. In practice, that’s a really expensive way to solve this problem, and it doesn’t work when you have several (or, heaven forbid, dozens of) databases that use In-Memory OLTP. SQL Server doesn’t just start up one database at a time – it starts them in groups, which means you can saturate lots of CPUs on restart. If you need to dive into the internals of this, here are a couple of resources:

In theory, you could use slower storage. Bear with me for a second: if your primary concern is that other databases on the SQL Server are unusable while In-Memory OLTP databases come online, you could actually put the In-Memory OLTP filegroup on slower storage. In this 5GB Votes table example, that does actually lower CPU usage during most of the process, only driving it to 100% near the end:

(Yes, I actually tested that for one particular client, who was curious.) However, that also makes the In-Memory OLTP databases take even longer to come online! In my example with a 5GB table, the database took ~80 seconds to come online instead of ~50 – making your RTO goals tougher to meet.

In practice, minimize what you keep in In-Memory OLTP durable tables. A few ways to do that, with a quick sketch after the list:

  • Make them schema-only instead, and don’t keep the data around. Yes, you lose all data when the SQL Server goes down, but if you’ve been using this feature for temporary data anyway, like session state, reconsider whether you need to keep it.
  • Sweep older data out to conventional tables. One of Microsoft’s design patterns for this feature is to use In-Memory OLTP tables only for ingestion, but then after the data’s been absorbed into the database, archive it.
  • Use regular tables instead, not In-Memory OLTP. Conventional tables don’t interrupt startup.
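To make the first two options concrete, here's a minimal sketch – the table names, columns, and the one-hour sweep window are all hypothetical, not a recommendation:

/* Option 1: a schema-only table. The table and its indexes survive a
   restart; the rows don't, so there's nothing durable to reconstruct.
   (LOB columns in memory-optimized tables require SQL Server 2016+.) */
CREATE TABLE dbo.SessionState
(
    SessionId   UNIQUEIDENTIFIER NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload     VARBINARY(MAX) NULL,
    LastUpdated DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
GO

/* Option 2: sweep older rows out of the memory-optimized ingestion
   table into a conventional rowstore table, then delete them. */
INSERT INTO dbo.VotesArchive
    (Id, PostId, UserId, BountyAmount, VoteTypeId, CreationDate)
SELECT Id, PostId, UserId, BountyAmount, VoteTypeId, CreationDate
FROM dbo.Votes_InMemory
WHERE CreationDate < DATEADD(HOUR, -1, GETDATE());

DELETE FROM dbo.Votes_InMemory
WHERE CreationDate < DATEADD(HOUR, -1, GETDATE());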

14 Comments

  • Grzegorz Lyp
    March 16, 2023 2:44 pm

    I created an issue on this with Microsoft some years ago. They said I was using In-Memory OLTP the wrong way. Just try creating 100 empty schema-only tables: it takes about a minute, and taking the database offline and online gets longer by the same amount of time.

  • Thanks Brent, well detailed. Comparing In-Memory OLTP vs. an external caching mechanism (Redis/Elastic): is there any data-size threshold that should guide the choice if going with this MSFT feature?

    MSFT also warns about CDC/CT-enabled databases having similar latency during AG failover – have you captured this too?

    • Glad you liked the post, but both of those questions are outside of the scope of this blog post. For personal advice on production system designs, click Consulting at the top of the site. Hope that’s fair!

  • I didn’t start working with SQL Server until 2016, so when I saw DBCC PINTABLE in the video, my brain came up with a new word: pint-able, as in, if it’s a good beer, it’s pintable. 🙂

  • Allen Shepard
    March 16, 2023 5:46 pm

    Tom Kyte of Oracle gave some similar answers. Yes, Oracle reduced some internal code and locking since the table was in memory, so dirty page processing was simplified. That is the only part that looked good – reduced overhead.
    His first reaction was, “The WHOLE database has to load into memory first – then some other processing starts.” So startup, shutdown, and recovery take much longer. That was back (8i, 12g?) when we ran Windows servers for months without rebooting.
    Now reboots are done bi-monthly for updates and patches at my shop (plus the unscheduled reboots a few times a year).

  • RunsWithEagles
    March 16, 2023 7:00 pm

    I was experiencing very long startup times (in two cases, it took days) after patching due to In-Memory OLTP. We opened a ticket with MS and they recommended trace flag 9944. It has removed that one pain point. It’s worth testing if you have this problem.
    https://support.microsoft.com/en-us/topic/kb4090789-fix-in-memory-oltp-database-takes-a-long-time-to-recover-in-sql-server-2017-90db41aa-7fd7-da50-83fd-f96dc1a87cdd

    • Thanks, and I should have mentioned that in the post – we tested that one, and it had no effect on SQL Server 2022 for our particular scenario, but I’m glad it helped yours!

  • This is great, Brent. Thank you!

  • Trace 3459 (disable parallel redo) works fine. We use it after a reboot on secondaries when parallel redo threads get allocated to the wrong databases.

    • Can you clarify that?
      * What version of SQL Server are you working with?
      * Roughly how many databases have In-Memory OLTP objects, and roughly what kind of size?
      * Roughly how many CPU cores do you have?
      * How long does it take for the databases to come online after the restart?
      * What does CPU look like when the SQL Server service starts?

      Just making sure we’re comparing apples to apples. Thanks!

  • Another reason to be very wary with In-Memory stuff

  • Thanks Brent. I am with you on this one. Those who set up the flags probably had a database or two, but when you have many databases with memory-optimized tables, it does not work. I have also tried both flags on SQL 2019. I found the threshold to be that an 8-core CPU handles 5-8 databases; anything more than that and you will need to add more resources.

