The short version: configure one volume/drive for TempDB. Divide the total space by 9, and that’s your size number. Create 8 equally sized data files and one log file, each that size. Presto, the drive is full and your TempDB is configured for easy performance.
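
For example, on a hypothetical 900GB drive dedicated to TempDB, each of the nine files gets 100GB. Here’s a minimal T-SQL sketch – the tempdev/templog logical names are SQL Server’s defaults, and the T: path is a placeholder, so adjust both to match your server:

    -- Hypothetical 900GB drive / 9 files = 100GB (102400MB) each.
    USE master;
    GO
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 102400MB);
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 102400MB);
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 102400MB);
    -- ...and so on through tempdev8...
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 102400MB);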

The long version is a little more complicated.

If you’re on SQL Server 2012, get on SP1 Cumulative Update 10 or newer. Microsoft made performance improvements to SQL Server 2014 so that it writes to TempDB less frequently. Because you were so good this year, they even backported it to SQL Server 2012, too. Get on this level – and by this level, I mean the latest updates shown on SQLServerUpdates.com.
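
Not sure what build you’re on? A quick check from any query window:

    SELECT SERVERPROPERTY('ProductVersion') AS Build,
           SERVERPROPERTY('ProductLevel')   AS ServicePack;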

Create one volume/drive/array for TempDB. You want this to be a separate volume from your user database & log files – not just for performance reasons, but also for logistical management reasons. Any user can dump a ton of stuff into TempDB without your control, and they can even run TempDB out of space. If TempDB lives on the same volume as your user data and log files, and the drive runs out of space, you can have a tough time restarting SQL Server. However, if TempDB is on its own volume that runs out of space, no big deal – just restart the instance and you’re back in business.
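
Relocating TempDB is just a metadata change plus a restart. A sketch, assuming a dedicated T: drive (a hypothetical path) and the default logical file names:

    -- Takes effect at the next instance restart, when TempDB
    -- is recreated at the new location.
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb.mdf');
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\templog.ldf');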

If you’re using a SAN-backed cluster, use local SSDs for TempDB. Since SQL Server 2012, Microsoft has fully supported using local solid state drives even in a cluster. This has three benefits: the local SSD is often cheaper, it’s often faster, and it keeps your SAN path traffic freed up for your valuable user data file & log file reads and writes.

Create 8 equally sized data files. This helps you avoid the GAM/SGAM/PFS page contention issues described in Microsoft KB 2154845. In your journey through the blogosphere, you may have read that at one point you were supposed to create one data file per core. That’s no longer the case – just start with 8, and only add more when you’re facing the page contention issues described in that KB article.
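
If you want evidence before adding more files, one way to catch that contention in the act is to look for PAGELATCH waits on TempDB’s allocation pages – a rough sketch, not a full monitoring solution:

    -- Sessions currently waiting on pages in TempDB (database_id 2).
    -- resource_description is db:file:page. Page 1 is PFS, 2 is GAM, 3 is SGAM,
    -- and those allocation pages repeat at fixed intervals through each file.
    SELECT session_id, wait_type, wait_duration_ms, resource_description
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type LIKE 'PAGELATCH%'
      AND resource_description LIKE '2:%';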

TempDB files need to be sized evenly to avoid page contention.

Size them (and the log file) to fill up the drive. If the drive’s dedicated to TempDB anyway, why leave free space? You don’t want SQL Server pausing user activity to stop to grow the log file out, and you don’t want to have to hassle with possibly uneven data file growth sizes, and you haven’t got time for the pain. Just set it up and walk away with confidence.
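
To double-check what you ended up with, the file sizes live in sys.master_files, stored in 8KB pages:

    SELECT name, physical_name,
           size * 8 / 1024 AS size_mb   -- size is stored in 8KB pages
    FROM sys.master_files
    WHERE database_id = DB_ID('tempdb');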

And while you’re setting up the server, check out our free SQL Server Setup Checklist.

Brent Ozar
I make Microsoft SQL Server faster and more reliable. I love teaching, travel, and laughing.

I’m mostly a figurehead here at Brent Ozar Unlimited. My true skills are menu advice, interpretive dance, and reading Wikipedia.
  1. Hi Brent –

    Great advice as always, but this is in conflict with the latest Setup Checklist, specifically for the location of the TEMPDB log file. Do you generally advise having the TEMPDB log with the other log files (L:) or with the TEMPDB data files (T:)?

    • Douglas – I’ve been kinda lucky, but I haven’t seen an environment where a separate location of the TempDB log file made a difference. Generally speaking, as long as both the data files and log files are on fast storage (which is typically local solid state), you’re good.

      • On AWS instances, it can make sense to put the .ldf on a separate magnetic volume, which has very low write latency but pretty high read latency. If there are AGs, though, it might not be ideal.

        I place the .mdf and .ndf files on an SSD volume, ideally on the local host to avoid network bottlenecks.

  2. > If the drive’s dedicated to TempDB anyway, why leave free space?

    One might have more than one SQL Server instance on the physical box?

    • WQW – right, but don’t you want to size both of the instances’ TempDBs to begin with? Grow ’em out and cap ’em. Otherwise, one instance could grow, dominate the drive, and while growing, force the other instance out of space. Ouch.
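
      A minimal sketch of that cap, assuming the default tempdev logical file name and a hypothetical 40GB share of the drive (MODIFY FILE changes one property at a time, hence two statements):

        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 40960MB);     -- pre-grow
        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, MAXSIZE = 40960MB);  -- cap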

  3. Also, turn on trace flag 1118. This again helps with SGAM contention.

    If you don’t want to completely fill the drive (for monitoring headroom, and to give yourself a chance to look before it all goes to pot), look into turning on trace flag 1117 (it affects all databases, not just TempDB). This will make it so that when your TempDB does have to grow, it grows all the files at once, so you still keep good parallelism.

    BTW, the behaviors behind these trace flags are on by default in SQL Server 2016, since Microsoft figured out it would be wise to turn them on and have people purposely turn them off if needed, instead of the other way around.
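
    For example, to turn both on globally for the running instance (they won’t survive a restart – see the startup parameter discussion further down this thread for making them permanent):

      DBCC TRACEON (1117, -1);  -- grow all files in a filegroup together
      DBCC TRACEON (1118, -1);  -- allocate full extents instead of mixed ones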

    • Hey Josh – you mention trace flags 1118 and 1117 all in the same breath – did you really mean 1117? I don’t see that one mentioned anywhere when I follow the MSFT TechNet links.

  4. In your linked Check List you recommend leaving the tempdb logfile on the same drive as your user database logs.

    In this article you recommend adding it to the same drive as the the rest of tempdb.

    Has your recommendation changed?

  5. “If TempDB lives on the same volume as your user data and log files, and the drive runs out of space, you can have a tough time restarting SQL Server. However, if TempDB is on its own volume that runs out of space, no big deal – just restart the instance and you’re back in business.”

    Can you expand on this? Why would a full drive for user database data or log files make SQL Server tougher to restart, when the solution is to restart the instance, which resets tempdb back to its original size anyway?

    • Jeremy – because if a user database runs out of space, recovery isn’t as elegant as TempDB. See, user databases have transactions that I need to worry about rolling forward or rolling back. TempDB, not so much.

  6. What about when the SQL Server is running as a VM and all the drives are on a SAN?

  7. Am I right to assume that there is no golden rule how big the drive / the tempdb should be (e.g. = 10 percent of total db sizes)?

    How much space should I give to the tempdb log file? The remaining 1/9 of the drive (so it is 100% full -> no room to grow = instant fail if something is filling up the tempdb)?

    • Thomas – yeah, sadly, it’s really hard to guesstimate how big TempDB needs to be if you’re dealing with an app you’ve never seen before. You can use the size of your largest object + 50% if you’re going to rebuild that object, for example, but that doesn’t take into account things like the version store or multiple active databases.

  8. What if there are hardware errors on the physical drive and some sectors get marked as unusable? Would you need to leave some spare space on the drive for this scenario?

    • Dennis – your capacity doesn’t decrease in that scenario. SSD vendors ship drives with higher capacity than what’s on the label so that they can address this live as it happens without exposing a space decrease to the OS.

  9. Brent, are you talking about just SQL 2012 onwards, or is the 9-way split worthwhile on older versions? I’ve a whole range of versions (2000 SP4 up to 2014 SP1), so I’ve generally used the 1-data-file-per-core “rule” up till now and not had any issues so far.

  10. Would this still apply if we have everything on a flash storage array?

  11. This may be obvious, but specifically the recommendation is:

    The 8 data files consist of (1 MDF and 7 NDFs)
    1 LDF

    For a total of 9 files – ALL of equal size

    All on a dedicated disk drive!

  12. Does DBCC TRACEON (flag, -1) – i.e., globally on – survive a SQL Server instance restart?

    • GT – why not give it a try on your development server and find out?

      • Right. The answer is NO! Tnx for the nudge 🙂

        • For the sake of completeness: you have to use SQL Server Configuration Manager and define a -T startup parameter to turn it on permanently, e.g.:
          “-T 3226;-T 1222;-T 1117;-T 1118;-d…”

          There must be no spaces between the flag number and the ; (otherwise SQL Server will not start). It’s easier in SQL Server 2014 and later, because there you can specify each parameter separately and don’t have to enter the whole parameter string.

  13. I opened a connect item because I’m seeing too much unallocated space in memory that is tied to having large TempDB data files. It’s something I’ve seen on multiple servers and multiple versions of SQL. The workaround of having a smaller TempDB has made a difference for me in how much memory is being used.

    https://connect.microsoft.com/SQLServer/feedback/details/2215297/tempdb-holds-excessive-unallocated-pages-in-memory

    Because of this, I’m recommending for people to monitor TempDB usage (a sample query is below), then set the data files to a total of 125% of the peak observed size being used. Still avoiding autogrowth, but also avoiding what I believe to be a bug.

    If the only bug is my logic, please let me know.
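
    For reference, a minimal sketch of the usage monitoring I mean – sample it over time and take the peak (the counts are in 8KB pages):

      SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
             SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
             SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
             SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
      FROM tempdb.sys.dm_db_file_space_usage;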

  14. Hello Brent,

    Are there any circumstances in a production environment where having a single TempDB data file is a good idea? Broad question, I know.

  15. Don’t forget whilst allocating space for TempDB that it is advisable to check the limits that any monitoring software has in place for alerts.
    I usually size TempDB to 80% of the storage of the volume on which it resides to stop the monitoring software giving false positives.
    I also use mount points to add extra flexibility in terms of adding storage for files, filegroups, etc.

    • Andrew – I’m gonna ask a really dumb question, and I swear I’m not being sarcastic.

      Is it really that hard to change the thresholds on your monitoring software?

      I mean, to the point where you’re willing to leave 20% of your storage unallocated just to avoid alerts? And does management know about this? This seems like one of those so-insane-it-might-be-true stories about corporate IT.

  16. I do have to disagree on one point about allocating all the free space – it’s your last point in the article.
    Even if you determined your needs and allocated over that amount, you will always get that rogue query or transaction that fills up tempdb. Some user somewhere will decide to select data from the beginning of time with a terrible sort. When tempdb does fill up, one of two events will happen: (1) the query/transaction will roll back… big deal (being sarcastic because they deserve it), or (2) SQL will crash (reboot). Yep. Had it happen, and it’s not a fun event.

    I really think it’s necessary in a production, 24/7 environment, to have a ‘Buffer’ with an alert on space growth so someone can get in there and kill that spid before it can cause damage. I agree with you in the sense we should pre-allocate up to that pre-buffer amount. I don’t see the buffer as wasted space, I see it as insurance.

    Great post otherwise.

    • Eddie – you’re assuming that someone can get to the keyboard and kill the spid before it causes damage.

      In today’s modern servers with fast solid state storage used for TempDB, you can write 200+GB in a matter of minutes – before anyone can get in to stop the runaway query.

      If your high availability plan relies on human beings to do manual tasks 24/7, it ain’t gonna work.

      Thanks though!

        • Kind of ironic – what Eddie described happened to me just yesterday on the way home from work (hour drive). I personally like to leave some space so I can be alerted on drive capacity – something that others on my team can consume as well (my boss receives the same “high” alerts). Someone ran a rogue query in a 3rd party tool against our production AX database, TempDB expanded, the drive threshold alert fired, and I got a text from my boss: “TempDB in prod.” When I got home I found the SPID and killed it. I then went into my monitoring tool and did my AAR – found the culprit and addressed it with the end user. I see the value in both approaches – “it depends” does apply here, as there are environment variables that might play into the approach.

        • Brent, as usual, another great article, thank you. I set tempdb files to fill the drive, both data and log, with virtually no space left over. But I use alerts to notify the team if tempdb fills to 80% of capacity. Relatively easy to implement, too. And, I can grab a snapshot of current workload in the job that is executed when the trigger fires. This seems to meet our needs.

          One question – if I have a 50GB SSD allocated for tempdb on a development server, it seems that creating 8 data files of 5+GB each, and a tempdb log file of 5+GB may not be optimal. Shouldn’t the trans log be larger?

          • Steve – thanks! When you ask about the size, I’ll turn around and follow up with a question: why do you think it should be larger?

          • Well, my question doesn’t really have a straightforward answer besides “It depends.” Sizing tempdb on a new server isn’t a simple task. Without databases and a workload to analyze, I think the initial tempdb configuration is just as likely to change after implementation as to remain as it is. Fortunately it’s not difficult to reconfigure. And I do agree that setting all the data files to their full size at the start is a smart move.

            It’s tough when one has a “sense” that some setting isn’t right, with only limited data to prove or disprove. I feel like a tempdb log file that is larger than the individual data files is proper but cannot prove that assumption. Even after reviewing all of my production metrics, I haven’t come up with a model that suits most situations. For the time being I plan to create up to 8 (depending on cores) same-size data files, and a log file that is some multiple larger. Eight 5GB files and a log file of 10GB, perhaps, for example. Then I’ll monitor and adjust as needed.

            Thanks!

  17. Hi Brent,

    You said to create 8 equally sized data files regardless of the number of cores and provided a link to a KB article that reads:
    [QUOTE]
    As a general rule, if the number of logical processors is less than or equal to 8, use the same number of data files as logical processors. If the number of logical processors is greater than 8, use 8 data files and then if contention continues, increase the number of data files by multiples of 4 (up to the number of logical processors) until the contention is reduced to acceptable levels or make changes to the workload/code.
    [/QUOTE]

    Am I reading/understanding it correctly?

    Thanks for the great post as always.

    Ivan

    • Ivan – if you’re running a physical SQL Server with less than 8 cores, I can lend you my laptop for a while if you want. You’re a nice guy. I trust you. 😉

      • Cool, ship it away: Palacio de los Jugos, Hialeah, Florida.

        But seriously, how about if you are running in a VM/cloud environment and your virtual server has only 4 cores? Does the initial 8-file rule still apply?

        You also are a very nice guy by the way.

  18. Hi,

    Very useful! The Server Setup Checklist (also very useful) suggests putting the tempdb log file on the same drive as user log files, but here you suggest putting it on the same drive as your tempdb data files?

    Is there a preference/advantage with one location vs the other? Both our log (E:) and tempdb (D:) drives are local SSDs.

    • Adrian – no, generally I haven’t seen an issue where tempdb’s log file was the biggest bottleneck holding up a system, so I’d stick with it on the data drive. Thanks!

  19. OK, got my SSD on my active/passive cluster, followed all the steps, and the cluster works great. But with my 16-core server I was going to add files for my tempdb, and when I went to the GUI for this, the local SSD drive does not show up as an option to add files. Did I miss a step?
