What should you do about memory dumps?

A SQL Server monitoring tool is warning you that you’re getting memory dumps, and you’re wondering what that means.

This isn’t normal: you’re encountering a situation where SQL Server is so freaked out that it’s unable to proceed normally with a query. In order to help you (and really, Microsoft) debug the problem, it’s calling a time-out, writing the contents of some of its memory to disk, and then proceeding.

Think of it like going to a gas station, getting a questionable hot dog, and then realizing that you need to make an urgent trip to the bathroom to … undo that decision. So reading a dump file is kinda like…actually, this analogy is more apt than I’d originally thought!

Start by querying sys.dm_server_memory_dumps.
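
A minimal query against it – think of this as a sketch – just pulls each dump file's name, size, and creation time, newest first:

SELECT [filename], creation_time, size_in_bytes
FROM sys.dm_server_memory_dumps
ORDER BY creation_time DESC;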

That’ll tell you how many times it’s happened, and where the dump files live.


If you’re comfortable with a debugger, read the post Intro to Debugging a Memory Dump by Microsoft’s Adam Saxton. Adam writes:

Having some development experience is also helpful. While you may not need to look at Code directly in a dump, you are looking at the results of code. The concept of Pointers is sometimes hard for someone to grasp that doesn’t necessarily have a programming background, but when dealing with a memory dump, or debugging, the understanding of how Pointers work can be very helpful. Even when we are debugging a Managed (.NET) application instead of a Native (C++ or non-managed) Application. Another good book is Windows via C/C++ by Jeffrey Richter and Christophe Nasarre.

Don’t feel bad if that sounds terrifying – it’s terrifying to me too. I don’t read dump files either.

If you’re not comfortable with that, but you’re ambitious, check out SQLDumpAnalysis.com, a free online resource from Microsoft employee and technical genius Sean Gallardy.

If you’re not comfortable with that, patch your SQL Server, then call Microsoft. I dunno about you, but I’ve never been particularly good with a debugger, and I certainly don’t wanna learn when I’m facing a production SQL Server that’s freaking out so badly that it’s dumping memory contents to disk.

Hit up SQLServerUpdates.com to grab the latest patch for your SQL Server and apply that. You’d be pleasantly surprised at how often Microsoft fixes bugs that cause memory dumps, and the bug you’re hitting has probably already been fixed. (I’d hate to have you waste money on a support call only to hear that the bug is already fixed.)
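
Not sure which build you’re on right now? A quick check looks something like this (ProductUpdateLevel only populates on newer versions; older builds just return NULL for it):

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,
       SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel,
       SERVERPROPERTY('Edition') AS Edition;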

After applying the patch, if the memory dumps continue, open a support case with Microsoft. I know: you may have had rough support experiences in the past where you were told to defrag your turn signal fluid and reboot the firewall, but repeated memory dumps are different. Memory dumps are much easier for them to analyze because they see ’em more often, and they have more specialized tools at their disposal. (For a brief moment in time, you could upload your memory dumps for free in SSMS, but Microsoft took that down due to GDPR security concerns since the dumps can contain personally identifiable information like queries and data.)

Don’t dilly-dally: memory dumps are not a small deal. You wouldn’t wanna be making repeated trips to the bathroom after multiple gas station hot dogs – that’s no way to live your life. Your SQL Server deserves better.


25 Comments

  • Daniel Thompson
    February 22, 2019 7:10 am

    Hey Brent,

    Thanks so much for posting this! You provide amazing value, and running the simple query in this post shows that since I patched one of our prod servers, we’re no longer suffering from memory dumps – result. 🙂

    Dan

  • Great post as always — thank you.

  • I know my age is showing, but I remember when we did dumps as a regular thing. In fact, we had green bar paper (young people, look it up on Google or Wikipedia) and special transparent rulers that went all the way across the green bar, with a yellow stripe surrounded by blue stripes. This let you look at one output row at a time.

    To make myself look even older, I used to keep a Chinese abacus (it has two beads above the bar and five beads below the bar; again, Google it). The beads in heaven stood for five, and the beads below the bar stood for one unit. Each rod on the abacus could do a hexadecimal number! And that’s what we did to read our core dumps. You have no idea how embarrassing it was to be caught by a non-computer boss using an abacus while looking over a computer printout. Later, various pocket calculators had the option of octal and hexadecimal calculations. It was way cooler to be found using a pocket calculator.

  • Danielle Paquette-Harvey
    February 22, 2019 9:55 am

    Thank you so much for your post! It just so happens I had a memory dump two days ago on my server, so it’s a great coincidence. I asked IT to update the server to the latest update and was planning on contacting Microsoft if it happens again. I tried to debug the dump with the WinDbg tool, only to find the name of the DLL causing the problem and nothing more – it’s so complicated that I’ve stopped trying 😉 I’m gonna rely on Microsoft.

  • I might be straying from the main concern here, but my suggestion to Microsoft would be to play this game a new way: debug issues starting from the same errors and severities we commonly see in the sysmessages table, and drill down from that point on (with more logging, of course – using trace flags to increase the logging level, for example). Instead of focusing on the final situation, “memory dumps,” there are always tons of end-user error messages or system states that show the symptoms and then point to disk, memory, and/or CPU problems.

  • “Think of it like going to a gas station, getting a questionable hot dog, and then realizing that you need to make an urgent trip to the bathroom to … undo that decision.”

    Someone definitely has a second career as a comedian! 🙂

  • My system dumped itself in June last year and Blitz tells me about it every week. I haven’t updated to the latest Blitz yet (thank you – Blitz suggested that today), but is there ever a point in time when Dr. Blitz doesn’t have to remind me I had diarrhea 9 months ago? 😀

    • Chris – one of the cool things about T-SQL scripts like sp_Blitz is that you can read the code directly and see what it’s doing. Now’s a great time to roll up your sleeves and give it a shot. It can be a fun hobby when you’re sitting on the toilet. Enjoy!

  • Alan Jordan
    June 19, 2019 2:02 pm

    I had the proverbial diarrhea 9 months ago too, and the Pepto-update did the trick. Is there any reason for me to retain the memory dumps?

  • The Intro to Debugging a Memory Dump URL does not work. Have another one?

    • MGausen – sorry about that! Microsoft has a nasty habit of moving their blogs around constantly. I fixed the link.

      • “nasty habit of moving their blogs around constantly”
        Thank you for saying that out loud! I think this all the time and it’s comforting to know I’m not just bellyaching to myself. It’s not just blogs either, it’s all of their stuff. I hate going to a support forum somewhere and seeing a post like “Try this” with a link to MS that’s 5 years old. There’s about an 80% chance that link isn’t going to work. Grrrr!

  • How do you clear the sp_Blitz health check from sending old “memory dumps have occurred” messages? I run the HC weekly and the finding is still being listed even though the events are 2+ months old. I find this happens after findings are fixed, but the Blitz CSV file still contains the events with priority < 50…
    Awesome site and scripts! Keep up the great work!

  • How do you delete records from sys.dm_server_memory_dumps?

  • Thanks, Brent!

  • sp_Blitz showed me memory dumps occurred on our production server last week. I was able to extract the problem query from the dump file, and reproduce the dump on our test server with a copy of the production DB: “Internal query processor Error: The Query Processor could not produce a query plan.” Narrowed it down to just a simple 3-table inner join with minimal conditions in the WHERE clause. Because it was so reproducible, I tried various things: CHECKDB (clean), changing compatibility level (no help), updating stats on the tables (no help). Finally, rebuilding all the indexes on one of the tables cleared the error.

    • Tom Wickerath
      April 20, 2023 10:16 pm

      Hi Steve,
      Your observation of rebuilding an index to clear the problem reminds me of an incident that happened one morning several years ago at my former employer, The Boeing Company (I’m retired now). I had developed a read-only data reporting tool, based on MS Access, that used pass-through queries to query an Oracle database. One day, one of my customers reported a time-out error when attempting to query data for a particular airplane. Sure enough, I was able to repeat it on my workstation. Querying any other airplane produced nearly instantaneous results (e.g., < 1 second). I thought, what the hell!

      I fired up TOAD for Oracle and ran the same query that was causing the time-out error. Again, it was very slow, taking over 3 minutes the first time, 7 minutes the second time, and about 4 minutes the third time. This test showed me that Microsoft Access was not to blame. I contacted our Oracle DBA and provided him with the information I had collected. He was very sharp, and had the problem fixed in under 5 minutes. My Access read-only tool was back to working great with this particular airplane. It turns out that the index was corrupted in the Oracle database. MS Access has a default timeout of 60 seconds for P/T queries, so that’s why the time-out errors were observed by my customer. They only received a weird ODBC error (I forget the number, but it was not very descriptive of the actual problem).

  • Botulism Bill
    May 17, 2023 6:55 pm

    In our case, it was a questionable botulism burger.

    Great post.

