When Shrinking Tempdb Just Won’t Shrink


I am not a proponent of shrinking databases, but sometimes you have to because your momma said to. Sometimes that database is tempdb.

It used to be that we were warned against shrinking tempdb because it could cause corruption, so your only recourse was to restart the SQL Server service. Paul Randal let us know that this is no longer a problem.

LET’S SAY YOU HAVE TO SHRINK TEMPDB

Like your life depended on it. Or perhaps you needed the alerts to stop.

Let's say a user ran an atrocious ad hoc query that made tempdb grow until your disk space alert fired, and:

  • you needed that alert to stop
  • the storage team is not going to give you more space
  • the user promised to never do that again

So you try to shrink tempdb, but it just won’t shrink.

Try clearing the plan cache:
DBCC FREEPROCCACHE

And then try shrinking tempdb again.
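
A minimal sketch of that sequence (the file name 'tempdev' and the 10GB target below are placeholders; substitute your own files and sizes):

USE tempdb;
GO
DBCC FREEPROCCACHE;
GO
DBCC SHRINKFILE (N'tempdev', 10240);  -- target size in MB; repeat for each data file
GO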

I came across this solution recently when I had to shrink tempdb. I tried shrinking each of the 8 data files plus CHECKPOINTs, repeatedly. It would not budge. I almost threw in the towel and emailed out that the space issue would be fixed during our next patching window, but then I found David Levy’s reply. DBCC FREEPROCCACHE worked like a charm.

Brent says: [not-mad-amazed reaction GIF]


71 Comments

  • I have found that more often than not DBCC FREESYSTEMCACHE ('ALL') (also mentioned in David's comment you reference) is more effective than FREEPROCCACHE (although FREEPROCCACHE does often work). I have never dug into why one is more effective than the other (for me anyway).
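
    A minimal sketch of that variant (the file name and MB target are placeholders):

    DBCC FREESYSTEMCACHE ('ALL');
    GO
    DBCC SHRINKFILE (N'tempdev', 10240);
    GO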

    • Sadly, I’ve used this link more than I care to admit. But, when you can’t take down an instance, it works. I’d say it’s worked 90% of the time from my personal experience.

  • Thanks: This got rid of the last 6 extra tempdb files (out of an unnecessary 31) that a third-party vendor dumped all over my instance! I can now sleep at night.

  • A handy article… for less well-versed DBAs, it might be worth mentioning the downsides of running DBCC FREEPROCCACHE before some people go and run this command on their production or client-facing pre-production instances.

    • I can add a disclaimer, but the point I was trying to convey was that there is an option to shrink tempdb without restarting SQL when shrinking isn't reducing the file size. I'm not suggesting anyone run this command without a good reason.

    • Stephen Evans
      March 29, 2017 9:22 am

      @David Warner … What are the downsides?

  • Kris Gruttemeyer
    February 4, 2016 8:40 am

    It’s a cool trick, sure, but all magic has a price.

    In this case, you'll blow away each and every plan in your plan cache and have to recompile them all, incurring a CPU hit. It might kill your performance for a bit, or it might not; it depends on the number of plans and how complex they are.

    Can we have a disclaimer about running this in production added to the article? At the very least, can you add an explanation of what exactly this command will do?

    • Indeed! This is a situation where you are trying to fix a problem (disk space alerts) and not have to restart SQL. Clearing the procedure cache is a much better option than the downtime incurred from restarting SQL. But I hear you on the possible negative side effects of this. Blowing out good plans and possibly getting bad plans is a risk, plus all the overhead of the compiles. Been there, done that.

      • I mean… restarting SQL also clears all the cache, so…

        • Not an option in PaaS

          • Marco – can you be more specific about what you think isn’t an option in PaaS, and which PaaS vendor/solution you’re using? You might be surprised.

          • We’re using Azure SQL Managed Instance so don’t have the ability to restart SQL ourselves

          • Marco – in Azure Managed Instances, you shouldn’t be shrinking TempDB, so this article is irrelevant.

          • We're in a situation where tempdb has grown to over 200GB after some one-off processes. We're on Business Critical 8 vCore, which means we're only allowed a maximum of 1TB unless we spend an extra 10k per month to upscale to 24 vCore for 2TB.

          • Marco – gotcha. It’s fairly normal to expect that TempDB will be around 20-25% of the size of the user databases. People are gonna dump stuff into TempDB when you’re not looking.

            If you have around 800GB of data, 200GB of TempDB wouldn't be unusual at all. I'd leave it there. (If you're trying to cram every single last byte into the 1TB, then you're probably better off stopping to review tables you're not using, indexes, etc.)

          • Just a quick follow up – looks like our TempDB shrunk from 200GB to 10GB by itself! Probably as part of Microsoft’s behind-the-scenes routine maintenance?

  • Yavor Vrachev
    February 4, 2016 8:43 am

    I thought that was well known, along with FREESYSTEMCACHE ('ALL'), FREESESSIONCACHE and DROPCLEANBUFFERS.

    Nonetheless, cool tip!

  • I ran into this problem yesterday. Using the SSMS task to shrink files never accomplished anything. Then I re-read the documentation on Shrink File and realized I needed to try this:

    DBCC SHRINKFILE (N'tempdev', NOTRUNCATE);      -- move allocated pages from the end of the file to the front
    DBCC SHRINKFILE (N'tempdev', 0, TRUNCATEONLY); -- drop unallocated pages from the end of the file

    I did this for all 8 of the tempdev files and slowly, over time, it finally recovered about 80% of the space. I was truly surprised by this as nothing else had worked. I had to run a job every 15 minutes through the day but it eventually recovered the space.

  • Thanks for the article. I think Microsoft should make it clearer how and when to use this command, or should include more options in the GUI. For me, it's still not totally clear when to use the shrink commands and what the advantage of them is 🙂

    • The best practice is to not shrink your databases. Sometimes you have to for storage reasons. Sometimes you do it because of a large one-time delete, and you know you aren’t going to need that size for a long time.

      Never shrink on a schedule. Only do it manually. Understand why the files grew and the ramifications of shrinking them.

      • Matthew Holloway
        February 4, 2016 3:32 pm

        More often than not a look through the plan cache will tell you who thinks your prod server is a dev server.

  • Marcy Ashley-Selleck
    February 4, 2016 1:51 pm

    Would this really be necessary? FREEPROCCACHE will reset all execution plans. That command clears the cache for the entire SQL instance, all databases. To clear the cache for just one database, use the following:
    DECLARE @intDBID INTEGER;
    SET @intDBID = (SELECT dbid FROM master.dbo.sysdatabases WHERE name = 'TempDB');
    -- Flush the stored procedure/plan cache for the specific database
    DBCC FLUSHPROCINDB (@intDBID);
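
    For what it's worth, SQL Server 2016 and later also have a documented per-database way to clear the plan cache (run from the database in question); whether tempdb accepts it is worth verifying on a test instance first:

    ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;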

  • I had a related issue today. Although tempdb was fine, the log was auto-growing like crazy. Once I got the insanity to stop, I couldn’t get that log to shrink (log_reuse_wait_desc was ‘ACTIVE_TRANSACTION’). Executing CHECKPOINT freed it up so I could shrink it back to its normal size.
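
    The sequence was roughly this (a sketch; the 1GB target is a placeholder):

    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'tempdb';  -- confirm what the log is waiting on

    USE tempdb;
    GO
    CHECKPOINT;
    GO
    DBCC SHRINKFILE (N'templog', 1024);  -- target size in MB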

  • Matthew Holloway
    February 4, 2016 3:29 pm

    We have a couple of instances where we can't restart the instance without involving the 3rd-party vendor. Less than ideal; we are working on that.
    For now, we have added a separate drive for tempdb and pre-grown the tempdb files, much like https://www.brentozar.com/archive/2016/01/cheat-sheet-how-to-configure-tempdb-for-microsoft-sql-server/, several months ago.

    Before that, when I just couldn't wait (services were starting to time out, running DBCC FREEPROCCACHE had not helped, and alerts were going off everywhere, i.e. a Hail Mary situation), I would run the script below, modified to suit the number and naming convention of the tempdb files.

    Performance would be demonstrably quicker in a minute or so.
    How long the whole script takes to run depends on a lot of factors.

    Disclaimer: we are not talking financial transactions or other end-of-the-world type stuff where the transactions cannot be resubmitted. Test in your lab on your apps before you need it, if you can.

    It’s dirty, but it works.

    -- Report existing file sizes
    USE tempdb;
    GO
    SELECT name, size
    FROM sys.master_files
    WHERE database_id = DB_ID(N'tempdb');
    GO
    DBCC FREEPROCCACHE;           -- clear the plan cache
    DBCC DROPCLEANBUFFERS;        -- clear clean buffers
    DBCC FREESYSTEMCACHE ('ALL'); -- clear the system cache
    DBCC FREESESSIONCACHE;        -- clear the session cache
    DBCC SHRINKDATABASE (tempdb, 10);  -- shrink tempdb
    DBCC SHRINKFILE ('tempdev');  -- shrink the default data file
    DBCC SHRINKFILE ('tempdev2'); -- shrink data file tempdev2
    DBCC SHRINKFILE ('tempdev3'); -- shrink data file tempdev3
    DBCC SHRINKFILE ('tempdev4'); -- shrink data file tempdev4
    DBCC SHRINKFILE ('templog');  -- shrink the log file
    GO

    -- Report the new file sizes
    SELECT name, size
    FROM sys.master_files
    WHERE database_id = DB_ID(N'tempdb');
    GO

    • I wonder how it resolved a performance problem. I mean, I get how it can resolve a performance problem for a user database when you can’t figure out which stored proc to recompile. But I wonder how it is resolving a performance problem when you run that script for tempdb. I’m wondering if it’s because of the size of the procedure cache.

      • Matthew Holloway
        February 4, 2016 5:34 pm

        Primarily in waits: the waits caused by disk thrashing dropped like a stone.
        In the specific instance we had issues with, the previous admin had left tempdb on C:.
        Since I was able to get all the ducks in a row for a full restart and reconfigured tempdb on its own drive, pre-inflated to max size, etc., the issue has not reared its ugly head.
        The sort of query (recursive, one that caused tempdb to grow by 30GB and keep growing) that killed it still runs from time to time, but doesn't hold up the rest.

  • So restarting SQL to shrink tempdb would do this:
    FREEPROCCACHE
    DROPCLEANBUFFERS
    FREESYSTEMCACHE

    By executing those commands instead of restarting SQL you are saving yourself an outage. You have to pay the price somehow, but I would rather that than a restart scenario.

    You may find that none of those commands actually work, as many of you have. I expect this is because some queries have started in the meantime and hit tempdb again… doh! Try running the shrink and the DBCC between a BEGIN and END statement; that may help.

    I have found the command that also works is DBCC SHRINKFILE (N'tempdev', EMPTYFILE).

    Yes, you have to expand it again. This time you are going to give the tempdb files a fixed size, set to what you believe is correct based on your knowledge of the system, then deal with the users complaining that their (crazy) reporting transactions are dying.
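
    A sketch of that sequence (the file name and sizes are placeholders):

    DBCC SHRINKFILE (N'tempdev', EMPTYFILE);  -- move allocations off this file
    DBCC SHRINKFILE (N'tempdev', 10240);      -- then shrink it to a deliberate target (MB)
    -- Optionally pin the size by disabling autogrowth:
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = N'tempdev', FILEGROWTH = 0);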

    • Saving us from an outage was my number one goal. Number two was to stop the disk space alert. And if there were performance problems that arose because I cleared out the procedure cache, I was prepared to deal with that. Always monitor after making production changes!

      In my case, I had to free the procedure cache several times. We had 8 tempdb data files, so I probably ran it 1-2 times per file before we were back to the size before the user ran the large query.

  • It is a handy trick when you find yourself with a disk space issue, but really be sure to set your tempdb max file sizes so you don’t.
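
    For example (a sketch; the file name and limit are placeholders):

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = N'tempdev', MAXSIZE = 100GB);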

    • Jon – that would seem to produce a different problem, right? Your TempDB runs out of internal space, and then what?

      (Just making sure you don’t think this permanently solves the problem either.)

      • …and then as an operations DBA you don't get paged at 3 AM to sort out a 0% free disk space issue. 😉

        You can then shame the developers during normal business hours.

    • I'm not a fan of setting max file sizes. I'd rather queries not get rolled back, and I'd handle the disk space risk by having proper alerting in place. Of course a user query could still fill the entire disk, and fill it so quickly that any alerts fire too late, but that should not be a frequent problem if the disk is sized appropriately for the workload. The disk in question here was over 500GB and dedicated to tempdb. The query that filled it was a massive, atypical user query. The user did get the data, since the query was able to complete. That would not have been the case if we had capped the files at 80% of the total disk size.
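
      For reference, the kind of free-space check that alerting can be built on (sys.dm_os_volume_stats is available from SQL Server 2008 R2 SP1 onward):

      SELECT DISTINCT vs.volume_mount_point,
             vs.available_bytes / 1048576 AS free_mb
      FROM sys.master_files AS mf
      CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;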

      • Old thread, but I couldn't help responding. Identifying the root cause and how to avoid it is step one. The advice here is great, but the real problem is that the words "user" and "query" are in the same sentence, so the DBA is scrambling to resolve an issue with application design (or lack thereof) or SQL best practices. Ad hoc user queries, in my experience, are generally a bad idea. Queries that could take a server offline should be prevented by putting them into an application or a reporting module like SSRS with sane limits wrapped around them; if something that large is needed, the DBA gets involved. I would rather the query fail against programmatic or SQL file size limits, have them submit a ticket asking why it failed, and then try to explain why they were running a query like that in the first place.

        • You wrote:

          Adhoc user queries in my experience are generally a bad idea

          Errr, okay, back here on Planet Earth, we don’t usually get the ability to say, “Users aren’t allowed to run queries.”

          But I sure like the sounds of your planet. Which one is it, by the way?

  • None of the workarounds given above are working in my case. I have a SQL 2005 SP3 environment. Every time I try to shrink, I get the message below.

    DBCC SHRINKFILE: Page 1:2325704 could not be moved because it is a work table page.

    I have tried the above workarounds multiple times, but they did not work. 🙁

  • Jonathan Shields
    March 14, 2016 10:42 am

    Thanks, this worked for me. sp_spaceused reported 11GB free out of a 12GB tempdb data file. Tempdb is on C:, with only 2GB free on the drive, and the server guys won't give me any more space until the next scheduled outage (weeks away).

    This solved the problem.

  • Renato Calonico
    June 20, 2016 5:35 pm

    Thanks a lot. I had the same problem, and this worked very well…. 🙂

  • I like the approach Marcy posted. I myself like to dig into the procedure cache individually for the production environment; usually it's some rogue developer causing havoc. However, I have seen some procs kill tempdb quickly, like this one… which just returns analysis info…

    SELECT TOP 50
        SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(qt.TEXT)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset)/2)+1),
        qs.execution_count,
        qs.total_logical_reads, qs.last_logical_reads,
        qs.total_logical_writes, qs.last_logical_writes,
        qs.total_worker_time,
        qs.last_worker_time,
        qs.total_elapsed_time/1000000 AS total_elapsed_time_in_S,
        qs.last_elapsed_time/1000000 AS last_elapsed_time_in_S,
        qs.last_execution_time,
        qp.query_plan
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
    ORDER BY qs.total_logical_reads DESC  -- logical reads
    -- ORDER BY qs.total_logical_writes DESC  -- logical writes
    -- ORDER BY qs.total_worker_time DESC  -- CPU time

  • *Quickly bookmarks this post*
    Thanks for the tip, I’ve been in this situation quite a few times, and I’ve sadly had to restart SQL to make tempdb go back to normal 🙁

  • Lakshmi Kukkala
    March 29, 2017 8:04 am

    This worked the way it is supposed to. Thank you for your tip. I had been fighting this issue for the last 3 days and finally found this solution. Thank you again.

  • Use DBCC OPENTRAN and look for any open transactions before you try to shrink tempdb.
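
    For example:

    USE tempdb;
    GO
    DBCC OPENTRAN;  -- reports the oldest active transaction in the current database, if any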

  • Sometimes you have to restart it in single-user mode and, in my case, also undo the max connections setting of 1.
    These commands work:

    NET START "SQL Server (SQLEXPRESS)" /m"SQLCMD"
    sqlcmd -S .\SQLEXPRESS

    EXECUTE sys.sp_configure
        @configname = 'show advanced options',
        @configvalue = 1;

    RECONFIGURE;

    EXECUTE sys.sp_configure
        @configname = 'user connections',
        @configvalue = 0;

    RECONFIGURE;

    USE tempdb;
    GO

    DBCC SHRINKFILE (tempdev, 5);  -- shrinks the primary data file
    GO

    DBCC SHRINKFILE (templog, 5);  -- shrinks the log file
    GO

    SELECT SUM(size)*1.0/128 AS [size in MB]
    FROM tempdb.sys.database_files;

    • I’d rather not shrink tempdb if I have to take a downtime like that. Do you have any links for when single-user mode is necessary? I’ve never had to do that. I’ve always been able to shrink even if I had to do all sorts of steps.

  • Somehow I’ve had this issue multiple times but never come across or considered this as a solution. Until today, when I found this article, and it worked. Useful bit of info.

  • It worked like a charm!
    Thanks Brent!

  • This worked for me!

    But like others have mentioned, after you run this, you will have to look for a root cause, as it may just go right back to growing the database again.

      • I used sp_WhoIsActive to get to the root cause of this. It was really clear once I filtered down to the timeframe in question and sorted by the tempdb allocation column. I then sent an email to the business user who wrote and was running the query.

        • In my case, sp_WhoIsActive did not help, as it was not a query per se; it was the fact that tempdb was not configured to the recommended best practice, which Brent has mentioned before (128MB data files, etc.). The files were set to initialize at 20GB each, and as soon as the user tried running their job, it was maxing out tempdb. I ran the DBCC statement, released the space, shrank the data files down to where they needed to be, bounced the services (with the user's blessing), and all was good.

  • […] to Tara Kizer whose post gave me the idea of clearing the plan […]

  • TechnoCaveman
    February 8, 2019 1:17 pm

    When a tempdb file will not shrink, check its initial allocation.
    Yea, tempdb files #1, #3, and #4 were configured at 20GB, but tempdb file #2 was configured with 56GB as its initial file size.
    This might make a great test question.
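
    A sketch for checking that: compare the configured (startup) size recorded in master with what tempdb is currently using. Since tempdb is recreated at startup from the sizes stored in master, sys.master_files shows the size each file returns to after a restart:

    SELECT mf.name,
           mf.size / 128 AS configured_size_mb,
           df.size / 128 AS current_size_mb
    FROM master.sys.master_files AS mf
    JOIN tempdb.sys.database_files AS df ON mf.file_id = df.file_id
    WHERE mf.database_id = DB_ID(N'tempdb');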

    • There are no good test questions. (If you've ever attended Office Hours when we've been asked about certifications, you'll know I'm pretty against them.) Plus, I wouldn't be a good DBA if my tempdb files weren't all the same size with the same autogrowth.

      • Techno Caveman
        February 8, 2019 4:01 pm

        No good test questions? Are there any good answers? If so, how do we find them?
        I agree about tempdb files. I ran into this one today on a DB I did not build but was asked to fix. It will take three weeks to fix the file size (sigh); life by multiple committee.
        One other repair: tempdb uses a different collation than the database.

  • Anyone else notice that the size reported by master.sys.master_files can be quite different from the size reported by tempdb.sys.database_files?
    Any ideas why that is?

    This is across versions 2008R2, 2012, 2014, and 2016, and it only appears to affect tempdb; none of the other databases show this discrepancy.

  • Excellent. I spent 45 minutes trying to shrink tempdb, but it wouldn't work. It had worked before, and that is what led me in the wrong direction. Now everything is good.

  • Sean Fernandez
    October 8, 2019 6:57 pm

    Great tips Tara and from the comments above.

    I had two tempdb data files that had blown out to 208GB. After trying your steps I managed to shrink one to 10GB, hurray… outage averted, but the other pesky file still wasn't playing nice.

    I tried it multiple times (the definition of lunacy) and then found that increasing the file by a few megabytes and then running DBCC SHRINKFILE allowed me to shrink both files to manageable numbers.
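
    Roughly, that trick looks like this (a sketch; the file name and sizes are placeholders, and the new SIZE must be larger than the file's current size for the grow step to succeed):

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = N'tempdev2', SIZE = 10250MB);  -- nudge the file up a little
    DBCC SHRINKFILE (N'tempdev2', 10240);              -- then shrink to the real target (MB)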

  • Davis Henely
    July 27, 2020 11:08 am

    Thanks for the tip. Worked like magic.

    What I am wondering, though, is why tempdb is tied to the procedure cache. I'm trying to figure out why this works.

    Thanks again

    • When it comes to “why” questions, you would need to ask Microsoft, right? I can’t really tell you why Microsoft does what they do. Hope that’s fair.

