Cheat Sheet: How to Configure TempDB for Microsoft SQL Server

SQL Server, TempDB

The short version: configure one volume/drive for TempDB. Divide the total space by 9, and that’s your size number. Create 8 equally sized data files and one log file, each that size. Presto, the drive is full and your TempDB is configured for easy performance.

The long version is a little more complicated.

If you’re on SQL Server 2012, get on SP1 Cumulative Update 10 or newer. Microsoft made performance improvements to SQL Server 2014 so that it writes to TempDB less frequently. Because you were so good this year, they even backported it to SQL Server 2012, too. Get on this level – and by this level, I mean the latest updates available for your version.

Create a volume/drive/array for TempDB. You want this to be a separate volume from your user database & log files – not just for performance reasons, but also for logistical management reasons. Any user can dump a ton of stuff into TempDB without your control, and they can even run TempDB out of space. If TempDB lives on the same volume as your user data and log files, and the drive runs out of space, you can have a tough time restarting SQL Server. However, if TempDB is on its own volume that runs out of space, no big deal – just restart the instance and you’re back in business.

If you’re using a SAN-backed cluster, use local SSDs for TempDB. Since SQL Server 2012, Microsoft has fully supported using local solid state drives even in a cluster. This has three benefits: the local SSD is often cheaper, it’s often faster, and it keeps your SAN path traffic freed up for your valuable user data file & log file reads and writes.

Create 8 equally sized data files. This helps you avoid the GAM/SGAM/PFS page contention issues described in Microsoft KB 2154845. In your journey through the blogosphere, you may have read that at one point you were supposed to create one data file per core. That’s no longer the case – just start with 8, and only add more when you’re facing the page contention issues described in that KB article.

TempDB files need to be sized evenly to avoid page contention.
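If you want to check whether you’re actually hitting that allocation-page contention, one common approach (a sketch, not the only way) is to watch for PAGELATCH waits on tempdb’s allocation pages while the server is under load:

```sql
-- Sketch: look for allocation-page contention in tempdb while the server is busy.
-- In resource_description (dbid:fileid:pageid), database id 2 is tempdb;
-- page 1 is a PFS page, page 2 is the GAM, page 3 is the SGAM.
SELECT
    ws.session_id,
    ws.wait_type,              -- PAGELATCH_UP / PAGELATCH_EX suggest contention
    ws.wait_duration_ms,
    ws.resource_description    -- e.g. 2:1:1 = tempdb, file 1, PFS page
FROM sys.dm_os_waiting_tasks AS ws
WHERE ws.wait_type LIKE 'PAGELATCH%'
  AND ws.resource_description LIKE '2:%';
```

If this query keeps returning rows during your busy periods, that’s the signal the KB article describes for adding more data files.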

Size them (and the log file) to fill up the drive. If the drive’s dedicated to TempDB anyway, why leave free space? You don’t want SQL Server pausing user activity while it grows the log file out, you don’t want to hassle with possibly uneven data file growth sizes, and you haven’t got time for the pain. Just set it up and walk away with confidence.

Make it easier with this script. Mike Petri created a gist that takes parameters for the size of the drive, number of files you want, and the folder path, and generates the T-SQL for you.
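If you just want to see the shape of the T-SQL such a script produces, here’s a minimal sketch. It assumes a 900GB drive mounted at T:\ (so 900 / 9 = 100GB per file), the default tempdev/templog logical file names, and hypothetical names (temp2 through temp8) for the added data files – adjust sizes, names, and paths for your environment:

```sql
-- Sketch: nine equally sized files filling a hypothetical dedicated 900GB T: drive.
-- 900GB / 9 files = 100GB (102400MB) each; growth is disabled since the drive is full.
USE master;
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 102400MB, FILEGROWTH = 0);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 102400MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE
    (NAME = temp2, FILENAME = 'T:\tempdb2.ndf', SIZE = 102400MB, FILEGROWTH = 0);
-- ...repeat the ADD FILE statement for temp3 through temp8...
GO
```

The new sizes for existing files take effect immediately; no restart is needed to grow tempdb files, only to shrink the drive footprint reliably.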

And while you’re setting up the server, check out our free SQL Server Setup Checklist in our First Responder Kit.

Want to learn more? Take my class.

My Fundamentals of TempDB class is for curious folks who want to learn:

  • What uses TempDB, and how that affects performance: not just temp tables and table variables, but also triggers, cursors, sorting indexes, workspace spills, the version store, and more
  • How to host TempDB depending on how you use it: understanding what goes in the log file, the data files, why the number of data files matter, and whether your particular workload should be on SAN, local storage, or ephemeral storage, and why
  • How to monitor it: after you understand what uses it and how to host it, you need to keep tabs on how its performance changes over time using DMVs, Perfmon counters, and the First Responder Kit scripts

Learn more and register now.


105 Comments.

  • Douglas Schmitt
    January 14, 2016 9:10 am

    Hi Brent –

    Great advice as always, but this is in conflict with the latest Setup Checklist, specifically for the location of the TEMPDB log file. Do you generally advise having the TEMPDB log with the other log files (L:) or with the TEMPDB data files (T:)?

    • Douglas – I’ve been kinda lucky, but I haven’t seen an environment where a separate location of the TempDB log file made a difference. Generally speaking, as long as both the data files and log files are on fast storage (which is typically local solid state), you’re good.

      • On AWS instances, it makes sense to put .ldf on a separate magnetic volume which has very low write latency but a pretty high read latency. If there are AGs, then it might not be ideal.

        I place mdf and .ndf files on an SSD volume, ideally on the local host to avoid network bottlenecks.

  • > If the drive’s dedicated to TempDB anyway, why leave free space?

    One might have more than one SQL Server instance on the physical box?

    • WQW – right, but don’t you want to size both of the instances’ TempDBs to begin with? Grow ’em out and cap ’em. Otherwise, one instance could grow, dominate the drive, and while growing, force the other instance out of space. Ouch.

      • Better yet, put TempDB for each instance on separate volumes… understandably more difficult with a physical machine, but in a virtual world, easy as pie

  • Also, turn on TraceFlag 1118. This again helps with SGAM contention.

    If you don’t want to completely fill the drive (for monitoring, or to give yourself a chance to look before it all goes to pot), look into turning on TraceFlag 1117 (affects all databases, not just TempDB). This will make it so that when your TempDB does have to grow, it grows all the files at once, so you still have good parallelism.

    BTW, the effects of these TraceFlags are automatically on in SQL 2016 at this point since MS figured out it would be wise to set them to on and have people purposely set them off if needed instead of the other way around.

    • Hey Josh – you mention trace flags 1118 and 1117 all in the same breath – did you really mean 1117? I don’t see that mentioned anywhere when I follow the MSFT TechNet links.
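For what it’s worth, both flags in the exchange above are real: 1118 makes SQL Server allocate full (uniform) extents instead of mixed ones, which eases SGAM contention, and 1117 grows all files in a filegroup at the same time. A sketch of enabling them globally – with the caveat Josh notes, that both behaviors become the default for tempdb in SQL Server 2016:

```sql
-- Sketch: enable both flags globally on the running instance.
-- DBCC TRACEON does not survive a restart; add -T1117 and -T1118 as
-- startup parameters to make them permanent (see the comment below).
DBCC TRACEON (1117, -1);  -- grow all files in a filegroup at the same time
DBCC TRACEON (1118, -1);  -- allocate uniform extents only, easing SGAM contention
DBCC TRACESTATUS (-1);    -- verify which trace flags are active globally
```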

  • In your linked Check List you recommend leaving the tempdb logfile on the same drive as your user database logs.

    In this article you recommend adding it to the same drive as the rest of tempdb.

    Has your recommendation changed?

  • “If TempDB lives on the same volume as your user data and log files, and the drive runs out of space, you can have a tough time restarting SQL Server. However, if TempDB is on its own volume that runs out of space, no big deal – just restart the instance and you’re back in business.”

    Can you expand on this? Why would a full user database data or log file drive be more difficult to restart SQL Server when the solution is to restart the instance which would reset tempdb back to its original size?

    • Jeremy – because if a user database runs out of space, recovery isn’t as elegant as TempDB. See, user databases have transactions that I need to worry about rolling forward or rolling back. TempDB, not so much.

  • what about when the SQL server is running as a VM machine and all the drives are on a SAN?

  • Am I right to assume that there is no golden rule how big the drive / the tempdb should be (e.g. = 10 percent of total db sizes)?

    How much space should I give to the tempdb.log? The remaining 1/9 of the drive (so it is 100 % full -> no room to grow = instant fail if something is filling up the tempdb)?

    • Thomas – yeah, sadly, it’s really hard to guesstimate how big TempDB needs to be if you’re dealing with an app you’ve never seen before. You can use the size of your largest object + 50% if you’re going to rebuild that object, for example, but that doesn’t take into account things like the version store or multiple active databases.

  • What If there are hardware errors on the physical drive and some sectors get marked not to be used. Would you need to have some spare space left on the drive for this scenario?

    • Dennis – your capacity doesn’t decrease in that scenario. SSD vendors ship drives with higher capacity than what’s on the label so that they can address this live as it happens without exposing a space decrease to the OS.

  • Brent, are you talking about just SQL 2012 onwards, or is the 9-way split worthwhile on older versions? I’ve a whole range of versions (2000 SP4 up to 2014 SP1) so have generally used the 1 data file per core “rule” up till now and not had any issues so far.

  • Would this still apply if we have everything on a flash storage array?

  • This may be obvious, but – specifically, the recommendation is:

    The 8 files consist of (1 MDF and 7 NDF’s)
    1 LDF

    For a total of 9 files – ALL of equal size

    All on a dedicated disk drive!

  • Does DBCC TRACEON (flag, -1) – i.e., globally on – survive a SQL Server instance restart?

    • GT – why not give it a try on your development server and find out?

      • Right. The answer is NO! Tnx for the nudge 🙂

        • For the sake of completeness: you have to use the SQL Configuration Manager and define a startup parameter -T to turn it on permanently, e.g.:
          “-T 3226;-T 1222;-T 1117;-T 1118;-d…”

          There must be no spaces between the flag number and the ; (otherwise the SQL Server will not start). It’s easier in SQL 2014 and later, because there you can specify each parameter separately and do not have to enter the whole parameter string.

  • I opened a connect item because I’m seeing too much unallocated space in memory that is tied to having large TempDB data files. It’s something I’ve seen on multiple servers and multiple versions of SQL. The workaround of having a smaller TempDB has made a difference for me in how much memory is being used.

    Because of this, I’m recommending for people to monitor TempDB usage then set the data files to a total of 125% of the peak observed size being used. Still avoiding autogrowth, but also avoiding what I believe to be a bug.

    If the only bug is my logic, please let me know.
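If you want to follow the monitor-then-size-to-125%-of-peak approach described in the comment above, one hedged way to sample tempdb’s actual usage is the sys.dm_db_file_space_usage DMV – for example, logged periodically from a scheduled Agent job so you can find the peak over time:

```sql
-- Sketch: how much of tempdb is actually in use right now, by category.
-- Page counts are 8KB pages; * 8 / 1024 converts to MB.
SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
    SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
    SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;
```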

  • Hello Brent,

    Are there any circumstances in a production environment where having a single TempDB data file is a good idea? Broad question, I know.

  • Andrew mitchell
    January 18, 2016 4:08 am

    Don’t forget whilst allocating space for tempDB that it is advisable to check the limits that any monitoring software has in place for alerts.
    I usually size tempDB to 80% of the storage of the volume on which it resides to stop the monitoring software giving false positives.
    I also use mount points to add extra flexibility in terms of adding storage for files, file groups etc

    • Andrew – I’m gonna ask a really dumb question, and I swear I’m not being sarcastic.

      Is it really that hard to change the thresholds on your monitoring software?

      I mean, to the point where you’re willing to leave 20% of your storage unallocated just to avoid alerts? And does management know about this? This seems like one of those so-insane-it-might-be-true stories about corporate IT.

  • Eddie Maloney
    January 20, 2016 6:11 am

    I do have to disagree on one point about allocating all the free space – it’s your last point in the article.
    Even if you determined your needs and allocated over that amount, you will always get that rogue query or transaction that fills up tempdb. Some user somewhere will decide to select data from the beginning of time with a terrible sort. When tempdb does fill up, one of two events will happen: (1) the query/transaction will roll back… big deal (being sarcastic because they deserve it) or (2) SQL will crash (reboot). Yep. Had it happen and it’s not a fun event.

    I really think it’s necessary in a production, 24/7 environment, to have a ‘Buffer’ with an alert on space growth so someone can get in there and kill that spid before it can cause damage. I agree with you in the sense we should pre-allocate up to that pre-buffer amount. I don’t see the buffer as wasted space, I see it as insurance.

    Great post otherwise.

    • Eddie – you’re assuming that someone can get to the keyboard and kill the spid before it causes damage.

      In today’s modern servers with fast solid state storage used for TempDB, you can write 200+GB in a matter of minutes – before anyone can get in to stop the runaway query.

      If your high availability plan relies on human beings to do manual tasks 24/7, it ain’t gonna work.

      Thanks though!

      • Kind of ironic – what Eddie described happened to me just yesterday on the way home from work (hour drive). I personally like to leave some space so i can be alerted on drive capacity – something that others on my team can consume as well (my boss receives the same “high” alerts). Someone ran a rogue query in a 3rd party tool against our production AX database, TempDB expanded, drive threshold alert fired/alert sent, I got a text from my boss “TempDB in prod” – when I got home I found the SPID and killed it. I then went into my monitoring tool and did my AAR – found the culprit and addressed it with the end user. I see the value in both approaches – “it depends” does apply here, as there are environment variables that might play in to the approach.

        • Brent, as usual, another great article, thank you. I set tempdb files to fill the drive, both data and log, with virtually no space left over. But I use alerts to notify the team if tempdb fills to 80% of capacity. Relatively easy to implement, too. And, I can grab a snapshot of current workload in the job that is executed when the trigger fires. This seems to meet our needs.

          One question – if I have a 50GB SSD allocated for tempdb on a development server, it seems that creating 8 data files of 5+GB each, and a tempdb log file of 5+GB may not be optimal. Shouldn’t the trans log be larger?

          • Steve – thanks! When you ask about the size, I’ll turn around and follow up with a question: why do you think it should be larger?

          • Well, my question doesn’t really have a straightforward answer besides “It depends.” Sizing tempdb on a new server isn’t a simple task. Without databases and a workload to analyze, I think the initial tempdb configuration is just as likely to change after implementation as to remain as it is. Fortunately it’s not difficult to reconfigure. And I do agree that setting all the data files to their full size at start is a smart move.

            It’s tough when one has a “sense” that some setting isn’t right, with only limited data to prove or disprove. I feel like a tempdb log file that is larger than the individual data files is proper but cannot prove that assumption. Even after reviewing all of my production metrics, I haven’t come up with a model that suits most situations. For the time being I plan to create up to 8 (depending on cores) same-size data files, and a log file that is some multiple larger. Eight 5GB files and a log file of 10GB, perhaps, for example. Then I’ll monitor and adjust as needed.


  • Hi Brent,

    You said to create 8 equally sized data files regardless of the number of cores and provided a link to a KB article that reads:
    As a general rule, if the number of logical processors is less than or equal to 8, use the same number of data files as logical processors. If the number of logical processors is greater than 8, use 8 data files and then if contention continues, increase the number of data files by multiples of 4 (up to the number of logical processors) until the contention is reduced to acceptable levels or make changes to the workload/code.

    Am I am reading/understanding it correctly?

    Thanks for the great post as always.


    • Ivan – if you’re running a physical SQL Server with less than 8 cores, I can lend you my laptop for a while if you want. You’re a nice guy. I trust you. 😉

      • Cool, ship it away: Palacio de los Jugos, Hialeah, Florida.

        But seriously, how about if you are running in a VM/cloud environment and your virtual server has only 4 cores? Does the initial 8-file rule still apply?

        You also are a very nice guy by the way.

        • Ivan – hahaha, thanks. Yeah, if you’ve only got 4 cores, you can stick with 4 files.

          • In that virtual environment, if the drives are really just sitting on the same spindle, does adding extra tempdb files in this fashion really matter?

          • Tempdb data files — yes!

          • >>Tempdb data files — yes!
            Can you elaborate? You’re saying yes. Even though you don’t have extra channels for parallel data writes because they’re on the same spindle? Where is the extra performance coming from? Trying to understand, Thanks

          • Gary, TempDB data files don’t work in parallel all at the same time as I believe you might be thinking – TempDB data files work in a round robin fashion, so imagine if one data file had to keep track of all the activity, or if they shared the burden in a round robin fashion. That’s where the advantage comes in. Caveat: they need to all be the exact same size – make sure one isn’t larger or the concept of round robin and equal sharing doesn’t apply.

  • Hi,

    Very useful! The Server Setup Checklist (also very useful) suggests putting tempdb log file on the same drive as user log files, but here you suggest putting it on the same drive as your tempdb data files?

    Is there a preference/advantage with one location vs the other? Both our log (E:) and tempdb (D:) drives are local SSD’s?

    • Adrian – no, generally I haven’t seen an issue where tempdb’s log file was the biggest bottleneck holding up a system, so I’d stick with it on the data drive. Thanks!

  • OK, got my SSD on my active/passive cluster, followed all the steps, and the cluster works great. But with my 16-core server I was going to add files for my tempdb, and when I went to the GUI for this, the local SSD drive does not show up as an option to add files. Did I miss a step?

  • Hi Brent,

    Thanks for this info! Can you clarify something for me? I am sizing tempDB for a new VM with 2 cores. Should I just create 2 data files or do I create 8 data files?

    In your initial post you say “In your journey through the blogosphere, you may have read that at one point you were supposed to create one data file per core. That’s no longer the case – just start with 8, and only add more when you’re facing the page contention issues described in that KB article.”

    But then later in the comments when someone asked a similar question as myself you say “Ivan – hahaha, thanks. Yeah, if you’ve only got 4 cores, you can stick with 4 files.”

    So while I believe that based on this answer I should just create 2 data files, I just want to make sure since this post is now about 1 year old.


  • I set up tempdb on local SSD storage within a Windows Failover Cluster on SQL Server 2008 by cheating the setup process (for more details have a look on my website). You would have to carefully weigh out if you are willing to go that way as it limits your flexibility and of course is unsupported but it was a valid option for me though.

  • Question from the Oracle DBA who hasn’t worked much with SQL Server since 2008- In an Oracle world, we monitor temp usage, as high temp usage is a sign of an inefficient OLAP database, (over sorting and hashing, especially when not on SSD, causes performance hits.) Is this the case for SQL Server? Would you be concerned and monitor tempDB growth and high sorts/hashes that occur outside of memory?

    • In theory, yes, but in practice, no. All kinds of stuff uses TempDB – I like to call it SQL Server’s public toilet – and if you just monitor TempDB, sure, you’ll see usage. The key is first identifying the most critical problem facing the SQL Server, and I’d use wait stats for that.

  • Andrzej Błoch
    April 28, 2017 3:54 am

    Hi Brent.
    Could you explain why you advise creating 8 files while the KB article you are referring to says that “As a general rule, if the number of logical processors is less than or equal to 8, use the same number of data files as logical processors”?

    • Thomas Franz
      April 28, 2017 4:33 am

      Does it really matter? If you have a SQL Server with fewer than 8 cores, it is usually not very heavily used, and 2 or 4 additional tempdb files do not really add overhead. And for “big” servers with more than 8 cores, MS suggests using 8 files (unless you have performance problems because of tempdb).


  • Garrett Devine
    May 3, 2018 11:01 am

    We just separated out our tempdb files to a new SSD and have started seeing “5348 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [T:\MI\tempdb\tempdb4.ndf]”. We have multiple occurrences of this error. We did not see the errors when tempdb was back on its original RAID 5 home. I followed your tutorial on SQLIO, and I think the SSD should be much faster at 8KB random reads/writes than the previous RAID 5 disks, so why are we seeing these errors? Also, by way of more proof that not all is well, the batch file we run overnight, when these errors occur, is taking 7 hours instead of 6.25 hours on the old disks. Is there a magic switch that we need to configure? The disks sit in a directly attached array: the RAID 5 for data, RAID 10 for logs, and a spare slot that we used for the SSD. The RAID 5 and SSD are formatted for 64KB block size; the log is incorrectly set to 4KB block size (I know – will fix when I get a chance).

    Do you have any idea what could be causing all these IO errors getting reported by SQL Server?

  • Henrik Staun Poulsen
    July 31, 2018 2:54 am

    Why do you recommend a separate partition for the TempDB database files, when adding a MaxSize on each file would do the same trick?
    In most organisations, it is a lot easier (for the DBA) to change MaxSize than to ask the storage guys to expand a partition.
    And I do not see any speed advantages to having a separate drive. (Are there any?)

  • I have a question related to TempDB and auto-grow settings. I understand that in many cases auto-grow is bad, especially since TempDB needs to grow the various files equally (handled by trace flags or 2016), and that keeping the default auto-grow settings is even worse. My question is about an instance that supports an application with very heavy TempDB usage and is also mission critical. In this kind of case, isn’t it better to set up auto-grow so that TempDB can expand up to a point and the application stays functional without manual intervention, with the caveat that once it auto-grows you review the expansion and grow out the initial sizing?

    Realize that in this case a lot of the easy maintenance is limited by the high availability demands. Modifications, failovers, storage expansion, etc must be done during a scheduled maintenance window rather than reactionary. My thinking is that the “full drive” TempDB approach only works after you can demonstrate that your sizing is correct for a given application. Prior to achieving that, you want to have auto-grow on so you can see how it is processing as you scale up the files to reach a stable sizing.

    Thanks for any advice, sorry for length.


  • Can a local tempdb-dedicated SSD drive on a blade/host work with vMotion? I guess the other blades/hosts would have to have an SSD drive as well.

    • Jeremy – only if you pinned a guest to a specific host. If that host failed, the VM would go down instantly. VMware doesn’t have the concept of some drives being disposable, and it would not restart the guest automatically on another host. Plus, even to vMotion a guest from host to host, VMware would copy the contents of TempDB’s drives across over to the other host.

  • If the database has a table that is occupying 1TB of space, surely that would be the smallest size you would expect the tempdb files to be set to? Especially if the index optimisation job is configured to sort in tempdb?

  • zahid iqbal
    June 27, 2019 5:14 am

    What is the effect of enabling “sort in tempdb” (SORT_IN_TEMPDB) in an index rebuild?


  • >>what about when the SQL server is running as a VM machine and all the drives are on a SAN?
    >>Yes, the same guidance applies.

    Hey Brent, very good post. I’ve used your tempdb recommendations for some time, quite successfully. But recently I am constantly being fought more and more w/people saying the tempdb does not need to be isolated to its own drive on a VM. Most recently, I am told by an ‘IT systems specialist’ that the only advantages of placing tempdb on a separate drive are these:

    1. directory organization. ( The individual then suggests we can just create a separate directory on the same drive. )
    2. allows a separate IO queue, but is still being sent to the same host kernel/HBA/physical drive array as all the other IO. It really does not provide any performance benefit as the current virtual drive controller already allows a deep queue depth.

    So, very simply: what are the performance gains / advantages of isolating tempdb on its own volume for VMs? Any disadvantages? What is the best way for me to answer that question? Your thoughts?

  • Slight type-o. Wish I could edit. 🙂

    But, recently I am constantly being fought more and more w/people saying the tempdb does not need to be isolated to its own drive on a VM. I am told by an ‘IT systems specialist’ that the only advantages of placing tempdb on a separate drive are these:

  • Harsh Khandelwal
    November 27, 2020 7:15 pm

    My TempDB is becoming full. The only option I have is to restart the SQL Server. Can you please tell me: is it safe to shrink the TempDB if that is required to clear space?

    • Harsh – if your TempDB is filling up, you won’t be able to shrink it. It’s in use. Instead, add another TempDB file on another drive to let it continue to grow, and work with your managers to get more space for TempDB long term.

      • Harsh Khandelwal
        November 28, 2020 3:30 am

        Thanks Brent for your quick answer. Please answer one last question.
        As you said, if I add a tempdb file on another drive to let it grow, will it release space automatically? Can I assume that once SQL Server is done using my tempdb it will release the space automatically, or do I have to shrink it manually when the SQL Server does not have any user traffic?

        Please reply
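Brent’s emergency relief valve above can be sketched like this; the X:\ path, the sizes, and the logical file name are placeholders. Note that space inside tempdb is reused automatically as queries finish, but the files themselves keep their grown size until the instance restarts or you shrink them manually:

```sql
-- Sketch: add one more tempdb data file on another drive with free space,
-- so a full tempdb drive doesn't bring activity to a halt.
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev_overflow,             -- hypothetical logical name
              FILENAME = 'X:\tempdb_overflow.ndf', -- placeholder path on another drive
              SIZE = 10240MB,
              FILEGROWTH = 1024MB);
```

Ideally all tempdb data files stay equally sized (for round-robin allocation, as discussed above), so treat an uneven overflow file as a stopgap and fix the sizing properly afterward.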

  • Peter Kilpatrick
    February 15, 2021 6:33 pm

    Some background – SQL Server 2014 in a VM environment. We have five drives on this server, all going through one controller to one SAN drive. One data file for tempdb; wait times for temp space are 10ms. We moved the VM to newer hardware and now are experiencing jobs taking twice as long. Still, tempdb wait times are 10ms. Called in Microsoft, and immediately they say the problem is the number of tempdb data files. I asked for proof and was told “that is the recommendation.” I hate doing things just because; I always want to understand. Are they right – with no tempdb waits, could the problem be the number of tempdb files? I have just increased the number of tempdb data files to see / to remove this from the list, but would like your thoughts / comments.

  • Hi Brent. Thank you for the great article! Would it be more efficient to create a number of TEMP DB files = number of cores or number of logical processors?

  • How big should the disk for tempdb be? Or: what is the space required for the tempdb files?

  • Hi Brent!

    Our tempdb is configured according to best practices, but we have a strange issue. At one point in time, 100-150 application sessions request the same page in tempdb, and according to sys.dm_exec_requests these requests are from two different DBs (DBs for two different applications).

    PAGELATCH_UP waits become high at this time.

    Could you suggest possible reasons why completely different processes would want the same page in tempdb?

  • In an ideal world the OS, logs, data, backup and tempdb should all be on different physical drives as well as volumes. But if limited resources mean 3 or 4 drives instead of 5, which volumes should share a bed?

    Thinking perhaps the tempdb and backup could have 2 volumes on the same drive. Management wise, if that drive tanks the files are non-critical, and performance wise I’m guessing the backup drive would have the least IO. But, I have a sneaking suspicion I’m completely wrong.

  • Hi Brent,
    If I have a DB which is normally about 140GB, how big should each data file for the tempdb be? If the drive for the tempdb files is not big enough, can I leave the log file on a different drive? Also, if I leave the log file of the tempdb on a different drive than the data files, is it necessary for the log file to be the same size as the data files?

  • Completely filling out the tempdb drive with tempdb files has led to backup failures due to low VSS snapshot space. I’d leave some headroom there.

  • Can a TOP clause in the subquery section of a SELECT query cause PAGELATCH_UP contention on tempdb SGAM pages?

  • Hi Brent! This is my scenario: we’re using 48-core processors and 10 tempdb data files; each file is 28.5GB and sits on a dedicated drive only for tempdb data (SQL 2012 SP4 Enterprise Edition with 3 cluster nodes and around 8.5TB of database size). The problem is, over the last couple of weeks the tempdb data files have filled up much faster than usual.
    Should I decrease the data file size and add a few more files? Or will increasing the drive (the drive holding the tempdb data files is currently 300GB) and the number of data files resolve my problem?

