The questions came fast and furious in one of my recent TempDB webcasts, so here are the ones I wasn’t able to answer during the live session:

Q: Virtualized OS, SAN, no dedicated LUNs, most likely scenario with no gotchas, theoretically: dump TempDB with everything else all on one virtual volume, including logs — or split to separate virtual volumes and separate from other databases and also from logs? (Matthew N.)

When building virtual SQL Server machines, always use a separate VMDK/VHD for TempDB. This lets your VMware and Hyper-V sysadmins manage your TempDB differently. Some storage systems can automatically move different VHD/VMDK files to different tiers of storage based on their access patterns. If you suddenly enable RCSI and need a lot more TempDB throughput, it’s easy for your VMware admin to adjust that when it’s in a separate file (or LUN).

Q: How many SGAM pages do we have in one data file? (Alvaro C.)

One per 4GB of data file space. To learn more about the internals of how that works, check out Microsoft’s knowledge base article on TempDB.
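If you want to see whether those allocation pages are actually a bottleneck, here’s a sketch using the standard DMVs – database_id 2 is TempDB, and GAM/SGAM/PFS contention shows up as PAGELATCH waits on its pages. Run it while the server is under load:

```sql
-- Tasks currently waiting on latches for TempDB pages.
-- SGAM pages sit at page 3 and then repeat every 511,232 pages
-- (roughly 4GB) in each data file.
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description   -- format: database_id:file_id:page_id
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE N'PAGELATCH%'
  AND resource_description LIKE N'2:%';
```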

Q: Can filtered indexes eliminate the use of tempdb on complex queries? (Greg J)

It’s possible, but I would want to find the exact queries causing the TempDB use before I tried to prescribe a method to get pain relief. I can envision a scenario where a query gets insufficient memory grants and then frequently spills to TempDB when doing sorting and joining to other tables, but … I’ll be honest, this is kind of a stretch. Filtered indexes wouldn’t be the first place I’d turn here.

Q: How do you diagnose TempDB problems in SQL Server 2000? (Debbie C)

I don’t. I don’t work on SQL Server 2000 anymore. It’s unsupported. If anything breaks on it, you’re out of luck. When a client comes to me with a SQL Server 2000 performance problem, step 1 is to get them onto a supported version of SQL Server.

Q: If I only have RAID 5 storage available, should I keep tempdb in 1 datafile? (Mike K)

If your server’s biggest performance bottleneck is SGAM contention in TempDB, then you need multiple TempDB data files to alleviate the bottleneck. If, on the other hand, your server’s biggest performance bottleneck is storage writes, then you shouldn’t be playing around with TempDB. 😀

Q: Is there a rule of thumb on the autogrowth settings for the TempDB files? (Lee T)

I actually want to pre-grow my TempDB files whenever possible. They shouldn’t be growing frequently on a production box. For example, if I bought 200GB of space for my TempDB data files, I’m going to go ahead and pre-grow out TempDB to take that space. It doesn’t really make sense to wait around for users to grow it out when I’ve got dedicated storage for it. Because of that, I don’t usually focus on TempDB autogrowth sizes.
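Pre-growing is just ALTER DATABASE. Here’s a sketch for a hypothetical layout of eight 25GB data files – the logical file names are placeholders, so check sys.master_files for yours:

```sql
-- Pre-grow TempDB data files to their full size so autogrowth
-- rarely fires. 25600MB x 8 files = the 200GB example above.
ALTER DATABASE tempdb MODIFY FILE (NAME = N'tempdev', SIZE = 25600MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'temp2',   SIZE = 25600MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = N'temp3',   SIZE = 25600MB);
-- ...and so on for the remaining files.
```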

Q: My TempDB avg write is around 5ms, is that good enough? (Welly S)

Only your server can tell you: using wait stats, is your biggest bottleneck right now the TempDB writes? If yes, and if your users aren’t satisfied with performance, and it’s user queries hitting TempDB, then it’s not good enough. If TempDB writes aren’t your biggest bottleneck, or if your users aren’t complaining, then it’s fine.
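To get the per-file numbers, here’s a sketch against sys.dm_io_virtual_file_stats – keep in mind these counters are cumulative since the last restart, not a snapshot of right now:

```sql
-- Average write stall per TempDB file since startup.
SELECT mf.name,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes
       END AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(2, NULL) AS vfs  -- 2 = TempDB
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id;
```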

Q: Sometimes I use DBCC FREESYSTEMCACHE (‘All’) to shrink the tempdb. Any side effects to that? (Ashwani M)

What’s the problem that you’re trying to solve? Why are you shrinking TempDB? Grow it out to the space it needs to be, and leave it there. If it keeps growing larger, that’s SQL Server telling you it needs more space. If you want to permanently address the root cause, you need to find the queries that are allocating TempDB space and fix those.
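To catch those queries in the act, here’s a sketch that ranks currently running requests by the TempDB pages their tasks have allocated:

```sql
-- Currently running requests ranked by TempDB pages allocated (8KB pages).
SELECT x.session_id, x.pages_allocated, t.text
FROM (SELECT tsu.session_id,
             SUM(tsu.user_objects_alloc_page_count
               + tsu.internal_objects_alloc_page_count) AS pages_allocated
      FROM sys.dm_db_task_space_usage AS tsu
      GROUP BY tsu.session_id) AS x
JOIN sys.dm_exec_requests AS r
  ON r.session_id = x.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
ORDER BY x.pages_allocated DESC;
```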

Q: If you have only 1 tempdb file, what is the best way to add 3 more, especially if the single file is very large already. (David S)

I’m a stickler about change control. I want to make my production changes at a controlled time when I know it’s safe. Because of that, I tend to schedule non-critical changes for a maintenance window, do all of them then, and then restart the SQL Server instance afterwards. Changing TempDB configurations is a great fit for that – keep it in a change window.
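The change itself is simple – the scheduled restart is the part that takes planning. A sketch with placeholder names, paths, and sizes; match the new files’ size and growth settings to the existing file so the round-robin fill stays even:

```sql
-- Placeholder file names, paths, and sizes - adjust to your environment.
ALTER DATABASE tempdb ADD FILE
    (NAME = N'tempdev2', FILENAME = N'T:\TempDB\tempdb2.ndf', SIZE = 25600MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = N'tempdev3', FILENAME = N'T:\TempDB\tempdb3.ndf', SIZE = 25600MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = N'tempdev4', FILENAME = N'T:\TempDB\tempdb4.ndf', SIZE = 25600MB);
```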

Q: Can we start initially with 2 files for TempDB, or should it be a minimum of 4? (Syed A)

Microsoft’s Bob Ward recommended 4 in his TempDB talk at the PASS Summit, and I’d tend to stick with that guidance. I don’t see a benefit in using just 2.

Q: We tried using an SSD in various configurations and had minimal success. What should we look for? (Brad P)

In my recent post on benchmarking SSDs, I discussed the approach I use for load testing. I’d start there. We often uncover bottlenecks in the RAID controller, its settings, or the cabling.

Q: We have SQL Server 2008 R2 in Production. Would you recommend using SQL Server 2012 going forward? (Sreedhar L)

Sure. It’s got lots of nifty improvements that make it my favorite SQL Server version yet, and it’s pretty stable now that it’s had a round of updates.

Q: In a SQL Server 2008 R2 cluster of two nodes, would this affect the number of tempdb files? (William N)

No, the number of nodes in a failover cluster doesn’t affect the number of TempDB files you choose.

Q: We have 24 cores and allocated 24 TempDB files per the recommendation 3 years ago. Is this still valid? (Nageswararo Y)

Most of the storage I test seems to work worse as you throw more random load at it. Using 24 files instead of 8 makes some storage perform much worse. Because of that, I wouldn’t go with the one-file-per-core recommendation unless I was having serious SGAM contention – and then, I’d want to load test my storage under extreme random load to make sure it performed well.

Q: I am at war with my SAN admins, who deny there is a SAN issue. TempDB (using your Blitz script) shows a high write stall time of 51ms to 10ms. Is there anything else I can do to check, or any other stats I can gather to help me prove we have a problem? (Ozzie B)

Sure, check out our videos on storage troubleshooting and our 6-hour training class on storage, hardware, and virtualization.

Q: What do I do when TempDB is full in real time and I need to troubleshoot without a SQL restart? (Ravi S)

Normally, I don’t want to troubleshoot in real time – I want to kill the queries involved (try sp_WhoIsActive to see which queries have TempDB allocations) and then fix the problem long term by getting more TempDB space. Someone’s always going to run a crazy query to fill up TempDB, and we need to plan for that. I’m a big fan of using third party performance monitoring tools for this type of thing – they can jump back in time to look at what was using TempDB at any point in recent history.
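If you don’t have sp_WhoIsActive handy, here’s a rough DMV-only sketch for spotting the sessions holding the most TempDB space:

```sql
-- Sessions ranked by net TempDB pages still allocated (8KB pages).
SELECT session_id,
       user_objects_alloc_page_count - user_objects_dealloc_page_count
         AS user_object_pages,
       internal_objects_alloc_page_count - internal_objects_dealloc_page_count
         AS internal_object_pages
FROM sys.dm_db_session_space_usage
WHERE session_id > 50   -- skip system sessions
ORDER BY (user_objects_alloc_page_count - user_objects_dealloc_page_count)
       + (internal_objects_alloc_page_count - internal_objects_dealloc_page_count) DESC;
```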

Q: What are the challenges if we maintain TempDB on local storage? (Nageswararao Y)

Make sure you monitor for drive failures, make sure the hot spare drive automatically takes over for failed drives, and make sure drive performance doesn’t go down as the drives get older.

Q: What if my read is 2.5 msec but my write is like 40 msec. All on raid 5 this still means I need a raid 10 or a ssd with raid1? (Pramit S)

If wait stats show that writes are your bottleneck, then yeah, you want to investigate what it would take to make the storage go faster. This can include changing to RAID 10, adding more drives, switching to SSDs, or reconfiguring your RAID controller’s settings.

Q: You mention putting TempDB files on a local drive within a cluster for better performance – are there any drawbacks to doing that? (John H)

It’s unsupported on SQL Server 2008 R2 and prior versions. You also have to make sure the local drives are actually as fast as the SAN, too – heaven forbid you do this with three locally attached magnetic hard drives in a RAID 5 config.

Q: Any recommendations as far as transaction logs for TempDB? (Dominick S)

TempDB’s log file tuning is the same as user database log files, so no special guidance there.

Q: Have you started using SMB 3 for SQL storage? How much gain can be made with the ability to use RAM cache for SMB 3.0 shares? (Fred P)

If I’m going to invest in more memory somewhere, it’s going to be in the SQL Server, not the file server. That’ll avoid the round trip latency off to the Windows file server box, and SQL Server can use it for other things like caching execution plans and query workspace.

Q: You mention that you don’t like to store TempDB in RamDisk, Is there a big reason for that? (Jose S)

RAM drives aren’t built into Windows – they’re third party software and drivers. If I can avoid installing software on the SQL Server, I will. I wish I could find completely bug-free software, but I haven’t seen that yet. 😀

Q: Should the recovery model be set to FULL or SIMPLE for TempDB? (Sushant B)

Full recovery lets you take transaction log backups. Since you won’t be doing that in TempDB – you can’t restore that database – TempDB goes in simple recovery mode.

Q: Also, we encountered a strange issue recently: after restarting our staging SQL Server 2008 R2, some queries were running slow. Why would that be? Would you need to rebuild statistics or indexes after the TempDB restart? I appreciate your help. (Sreedhar L)

You got a different execution plan after the restart because when SQL Server restarts, the execution plan cache disappears. Avoid restarting SQL Server if possible. When you do run into an issue like this, you have to investigate issues like parameter sniffing and statistics. Start your journey with Erland Sommarskog’s excellent post, Slow in the Application, Fast in SSMS.

Q: Since initial TempDB data file calculations are often based on core count, how does hyperthreading impact that formula? (Allen M)

As far as SQL Server is concerned, hyperthreading just doubles the number of cores. SQL Server doesn’t know whether they’re real cores or virtual. If you give SQL Server 8 cores, it just sees 8 cores. (NUMA and SMP architectures aside.)
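You can check exactly what SQL Server counted:

```sql
-- Logical processors as SQL Server sees them. cpu_count includes
-- hyperthreaded logical cores; hyperthread_ratio shows the ratio
-- of logical to physical cores per socket.
SELECT cpu_count, hyperthread_ratio, scheduler_count
FROM sys.dm_os_sys_info;
```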

Q: Any risks or drawbacks of using Instant File Initialization? (John M)

There’s a security risk. Say you’ve got a development SQL Server that houses a database with all your customer and payroll data on it. You drop the database, and then you go create a new database. The storage gear decides to use the exact same storage area as the database you just dropped. With Instant File Initialization, the contents of your newly created database are actually the contents of the old database file. Now, granted, you won’t see the tables in there in SSMS, but if you now detach that database, copy the MDF somewhere else, and use a hex editor on it, you’ll see the original data pages. Without Instant File Initialization, on the other hand, when you create the new database, Windows will erase out the space before it’s available to the new database.
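If you’re not sure whether IFI is actually on, newer builds expose it directly – note that this DMV column only exists in SQL Server 2016 SP1 and later, well after this post was written:

```sql
-- Shows whether the database engine service account holds the
-- Perform Volume Maintenance Tasks right (i.e., IFI is in effect).
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE N'SQL Server (%';
```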

Q: any suggestions for TempDB on Amazon EC2? (Vladimr F)

Unfortunately, if you’re seeing storage throughput as your biggest problem in EC2 (and it very often is), you’re going to have to do sysadmin-level tuning first. We’ve written a lot about EC2 tuning here – use the search for EC2 – but the work involved changes pretty frequently as Amazon updates their infrastructure. This is one of those cases where your systems administration work is never really done. You have to figure out how to take the virtual hardware they give you and make it perform well, and those techniques change frequently.

Q: Does the TempDB need to be backed up? How often? (Li Z)

No. Isn’t it nice that we have these little breaks every now and then?

Q: What type of RAID did you recommend for Local TempDB SSDs? (David W)

I suggest starting with a mirrored pair (RAID 1). If you need more space than that, you’ll need to do load testing on your server, your RAID controller, and your drives, because each combination is different. For example, in the Dell PowerEdge R720 tests we did recently with Samsung 840 drives, we found that we didn’t really pay a speed penalty for RAID 5 once we stuffed it with drives.

Q: If SQL Server recreates the tempdb when it restarts, how do we configure the 4 files every time? (Carlos M)

When you do it once, SQL Server will repeat the configuration every time. It gets saved.
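The saved configuration lives in the master database – you can see exactly what will be reapplied at the next startup:

```sql
-- TempDB's file layout as recorded in master (reapplied at every startup).
SELECT name, physical_name,
       size * 8 / 1024 AS size_mb,   -- size is stored in 8KB pages
       growth, is_percent_growth
FROM sys.master_files
WHERE database_id = 2;
```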

Q: When SQL Server restarts would the transactions be lost inside the tempdb? (Mahsa A)

Yes. (It’s definitely called TempDB because it’s temporary.)

Q: Best practice for putting TempDB on an SMB 3.0 file share? (Fred P)

The best practice is not to do it – I’d always recommend using local mirrored solid state storage for TempDB because it’s cheap and fast. The instant I have to hit the network in order to access TempDB, performance goes down – because remember, I’m sharing that network for my data file access too.

Q: Should all this stuff not be done by the engine itself rather than be done by a poor DBA? (Lakshminarayan N)

If you’re a poor DBA, there’s never been a better time to look for a job. It’s a fantastic job market for DBAs who understand how to do this kind of performance tuning. Hey, think of it as job security – it’s a good thing! If SQL Server managed itself, you wouldn’t be getting paid to read this blog. (You *are* reading this blog from work, right?)

Learn more in our free Secrets of TempDB video.

Brent Ozar
I make Microsoft SQL Server faster and more reliable. I love teaching, travel, and laughing.

I’m mostly a figurehead here at Brent Ozar Unlimited. My true skills are menu advice, interpretive dance, and reading Wikipedia.

  2. You mentioned that you like to pre-grow TempDB. How does that change if you’re using TDE which would make IFI unavailable? I would think that waiting on server reboot for 200 GB of TempDB files to be recreated would add more overhead to the startup process than what would be ideal.

  3. This is referring to the log file, but I’m actually concerned with the data files for TempDB. I’m guessing that data files wouldn’t have to zero out based on Jon’s post on the log files, but I’ll have to run a test once TDE is on the database just to make sure.

    My plans before this post were to test setting it at 10 GB and grow it in 10 GB chunks through a job until it reached full size to avoid doing it all in one chunk. However, now I have one more test to run before I go and do something stupid like making my relaxing little DBA job too complex.

  4. I’m not sure there’s an easy answer to this one. If you’re using separate dedicated VMDKs for tempdb, how much space are you allocating for the tempdb volume/file? Do you have a formula or rule of thumb? Is there a best practice you follow for allocating space on a new system? I have also been using separate VMDKs in our new systems, but find that tempdb can grow very large on some systems if a lot of ‘dirty’ queries are run that slough a lot of data into tempdb. We obviously want to ensure we never fill the drive and choke the system, but on the flip side we don’t want to over-allocate resources if they’re not going to be used. If it’s a migration, I’ve benchmarked size on the old system and added some extra on the new system. However, I have had to make fairly quick volume increases from time to time if alerted that drive space is dropping below threshold…

    • The size of TempDB has nothing to do with whether you’re running physical or virtual – it’s about your end user queries, how you rebuild indexes, and whether you’re using SQL Server features like RCSI that rely on the version store. Unfortunately that’s a little too big of a question to tackle in a blog comment.

  5. If we can’t take a backup of tempdb, then what is the use of the restore option provided when we right-click on tempdb?

  6. I have a clustered sql server 2008 r2 environment and I want to ensure that the tempdb log file has the correct number of VLFs if the service is restarted. The current size of the log file is 32GB and if I just let SQL server rebuild it when it starts it only has 16 VLFs which I’m sure is not enough.

    Here is my stab at the problem:
    I have created a script in the master db that is run as a startup script using the sp_procoption proc. It will shrink the tempdb log file to ~750MB, then grow the file by 8000MB until it reaches 32GB. This leaves me with anywhere from 64 to 70 VLFs, depending on how close the shrink is able to get the log file to 750MB.

    Is this the best way to ensure the VLFs are correct or is there a better method?

  7. Hey Brent – not only is your newsletter SUBLIME, so is sp_blitz! I have a question about a result indicating tempdb needs DBCC CHECKDB. I am confused about this. I don’t find any information indicating this is part of regular management of tempdb. We have a virtual 2008 R2 installation. Any further details would be much appreciated.

    Thanks for all the great information your group provides!

    • Linda – thanks, glad you like it! I see more and more folks doing CHECKDB on their TempDB because they use local solid state drives, and they want every opportunity to know things are starting to go wrong there.

      Glad we can help!


  9. I have followed the rule-of-thumb guidelines (I think) for tempDB .mdfs and .ldfs. I pre-grew the files in such a way that they consume 90% of a dedicated volume such that we have 20 .mdfs (one per core) and 1 .ldf and the single .ldf file is twice as large as any of the .mdf files. We have repeatedly run into the problem of ‘The transaction log for database ‘tempdb’ is full’ and were wondering if there is an easy way to ‘flush’ the transaction logs or minimize how much logging is taking place so that we don’t run into the problem of a full log file? OR is there a more elegant solution that I am totally missing?

  10. I have RAID 10 on SSDs on a server and I’m still getting very poor write latency – 300ms write latency but about 2.5ms read latency. 4 tempdb files, 8 cores. 8GB pipe to SAN from the VM. Am I missing something obvious? I have other high latency from the DB files, up to 384ms, and the SAN admin shows OK response times, so this leads me to believe it’s something in between the SAN and the VM. Any ideas?

    • Hi Jeff,

      How full are the SSDs? When SSDs fill up, performance can take a serious tumble. I think the rule-of-thumb guideline is close to 95% as a max fill for SSDs to maintain performance. Personally, I would aim a lot lower, like 80%, to leave a good buffer and just to be sure. One other thing to consider *might* be your tempdb autogrowth settings. If these files are continually having to grow at small percentages, this might eat into your write performance, but I don’t think this would impact your read performance too much. Other than that, I can’t think of a reason for SSDs to perform so poorly on writes but not reads.

      • There are 12 1.6TB SSDs. We are allotted 78% (5.01TB) at the moment and only using 20% of that (1.06TB).

        We are not auto growing at this point, we are fixed 25GB initial X 4 files and our autogrowth is set to 4GB per file IF it grows.

        • Sorry Jeff, I’m at a loss unless you are getting hosed up by your RAID config or RAID controller. I would try troubleshooting the storage at this point with something like CrystalDiskMark to make sure the storage is performing as expected. Sorry I don’t have any better guidance, but it *sounds* like you should have a screaming fast storage layer and so TempDB should be having write performance similar to the read performance at single-digit latency.

  11. Quick question, we had some friends from MS come by to do a RaaS/RAP assessment thing on our servers. Large tempdb files were flagged, however I was going by what I had read here (yes, why waste space?)

    Do you see cases where large tempdb files would be disadvantageous? Like say ~10GB, something like that?

    (the thing I had not done was started smallish to see how big is ‘as big as I need’)

    Anyhow thanks for all you do for the SQL world, including but not limited to DBAReactions.

    • Howdy sir – I’d actually push that back on them – if they flagged it as a warning, I’d ask them to explain why. (I don’t mind doing your homework, but ain’t no way I’m doing Microsoft’s homework when they’re charging you for an assessment, hahaha.)

  12. Fair enough. Under initial questioning they kind of weaseled out (‘oh, you know, this is what the system comes back with’). Perhaps I should push them harder…

  13. If I set up TempDB with 4 or 8 files, how can I measure the utilization of each file, to determine if SQL is really using all 4 or 8?

  14. Moving from a Physical server with TempDB on FusionIO cards to a Virtual space, in this case VMWare, how would you recommend keeping TempDB on these fast drives to avoid using the SAN?

  15. Regarding TEMPDB, I know the recommendation is 8 data files, but does that include the MDF file?
    I can’t find clarification on this anywhere and not sure it means the mdf file and 8 data files or the MDF file and 7 data files?

    Thanks for clarification.

    • I don’t recall any recommendation for 8 data files, but I may be wrong. I do recall that the recommended MAX DOP be set to 8 as a rule of thumb. I think the recommended number of data files is one per CPU core.

      also, mdf = data file, so you should have as many mdf files in TempDB as you have cores that SQL can use and one log file (ldf). Sizing and growth settings should also be considered.

      • I was thinking the mdf was treated differently and you added extra data files, which is why I asked. I never treated the mdf the same. I have 8 cores and why I have 8 data files. Thanks for clarification.

  16. Is there a query that shows which database uses the tempdb the most? or sort databases by percentage of tempdb usage?

  17. I have a shrink tempdb job which runs every hour on sql server 2000 system. The job is failing with the errors:
    Server: Msg 845, Level 17, State 1, Line 1
    Time-out occurred while waiting for buffer latch type 4 for page (1:770952), database ID 2.
    Server: Msg 3140, Level 16, State 1, Line 1
    Could not adjust the space allocation for file ‘tempdev’.
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    Could you help with this?

  18. I have a ~500GB TempDb.mdf on a very large data warehouse. I need to split it up to relieve tempdb contention and planned to start with 8 equal files, but I’m concerned, as a little digging beforehand revealed that I have outlying queries that can sometimes consume upwards of 200GB of tempdb space. Reading Kendra’s post about split results during the shrink operation (which I’ve identified the proper maintenance windows/testing/etc. to address), I’m just curious about your insight into how this finding might affect these one-off queries going forward. Do these smaller available tempdb files still cause reason for concern if a query with a need for moar storage (however disgusting its code) makes its way to the engine?
