Transaction Log Too Large?

The big dog might not be your data file.

A transaction log file that is larger than the data file can indicate that:

  • Transaction log backups are not being performed, or not being performed often enough (for example, only every 4-12 hours)
  • A log-reading feature (transactional replication, Always On Availability Groups, or database mirroring) is broken, and the primary is keeping the log active to send to the secondary when it comes back online
  • Someone typed BEGIN TRAN and went home for the weekend
  • Or any number of other things you’ll find in LOG_REUSE_WAIT_DESC

This part of our SQL Server sp_Blitz script checks the size of each log file and compares it to the total data file size for the database.

It’s completely normal to see transaction logs at 10-50% of the size of the data files. It’s unusual to see them LARGER than the data file.
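You can eyeball this yourself with a query against sys.master_files. This is just a sketch of the idea, not sp_Blitz's exact logic – it flags databases whose total log size exceeds their total data size:

```sql
-- Sizes in sys.master_files are stored in 8KB pages
SELECT d.name,
       SUM(CASE WHEN f.type_desc = 'ROWS' THEN f.size END) * 8 / 1024 AS data_mb,
       SUM(CASE WHEN f.type_desc = 'LOG'  THEN f.size END) * 8 / 1024 AS log_mb
FROM sys.databases d
JOIN sys.master_files f ON f.database_id = d.database_id
GROUP BY d.name
HAVING SUM(CASE WHEN f.type_desc = 'LOG'  THEN f.size END)
     > SUM(CASE WHEN f.type_desc = 'ROWS' THEN f.size END);
```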

To Fix the Problem

Read the details below for information on how to:

  1. Make sure you’re in the right recovery model
  2. Set up transaction log backups if you need point-in-time recovery
  3. Shrink your transaction log file (maybe)



How to Fix Giant Transaction Log Files

Check Your Recovery Model and Backups

First, determine whether the database requires a full or simple recovery model.

  • Full recovery model enables point-in-time restores via transaction log backups
  • Simple recovery model only lets you recover to the point of the last full backup

If you’re in the wrong recovery model, now’s the time to change. For example, if you were in full recovery model but don’t actually need point-in-time recovery, flip over to simple.
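Switching is a one-liner (MyDatabase is a placeholder name – and take a final log backup first if you might still need it):

```sql
-- After this, the log is reused after each checkpoint
-- instead of waiting on log backups
ALTER DATABASE [MyDatabase] SET RECOVERY SIMPLE;
```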

If full recovery model is required, create scheduled transaction log backups with maintenance plans or scripts. Follow the instructions here. You might not be backing up the log fast enough, either – if you’re only doing it every hour or two, it’s way too easy for end users, ETL jobs, and index rebuilds to generate tons of log data in that time. Consider backing up more often.
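A bare-bones log backup looks something like this – the database name and path are placeholders, and a real scheduled job should also handle retention and alerting:

```sql
BACKUP LOG [MyDatabase]
TO DISK = N'X:\Backups\MyDatabase_log.trn'
WITH CHECKSUM, COMPRESSION;
```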

Shrink the Log File (maybe)

Once the backups are under control, you could consider shrinking the log file back down to a reasonable size. We raaaaarely recommend shrinking log files, but this could be one of those scenarios: if the log grew out of control because the database was in full recovery model and nobody was taking log backups, DBCC SHRINKFILE may actually be right for you – but only for the log file.

Here’s an example command:
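Something along these lines, assuming your log file’s logical name is MyDatabase_log (check sys.database_files if you’re not sure) and you want a 10GB target:

```sql
USE [MyDatabase];
-- Shrink only the log file; the target size is in MB
DBCC SHRINKFILE (MyDatabase_log, 10240);
```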

If the log file won’t shrink beyond a certain size, odds are you’ve hit the active portion of the transaction log. You can do a log backup, or wait and try again later after the active portion has moved elsewhere.

If you’re not sure what size of transaction log to start with, use 25% of the size of the database, or the size of your largest table in that database, whichever is larger. (If you’re not sure about your table sizes, run sp_BlitzIndex @Mode = 2, copy/paste that data into a spreadsheet, and then sort by the size column.)
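To get those table sizes:

```sql
-- Mode 2 returns an inventory of every index, with sizes
EXEC dbo.sp_BlitzIndex @Mode = 2;
```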

Tracking Down the Root Cause

Sometimes the root cause isn’t a long running transaction – for example, it could be that someone set up replication, and never properly tore it back down. Start by checking log_reuse_wait_desc in sys.databases:
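A quick way to check every database at once, skipping the ones that are fine:

```sql
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE log_reuse_wait_desc <> 'NOTHING';
```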

Keep in mind, though, that’s only a snapshot of why the log can’t shrink right NOW.

Then, if you don’t find anything interesting there, you can log sp_WhoIsActive to a table to catch folks who run BEGIN TRAN and leave their session open for hours. Look for long-running transactions, talk to the owner, and see if they can do their work in smaller chunks instead of one giant transaction.
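A sketch of that pattern, assuming sp_WhoIsActive is installed (the table name dbo.WhoIsActiveLog is a placeholder):

```sql
-- One-off: show current activity with transaction details
EXEC sp_WhoIsActive @get_transaction_info = 1;

-- Ongoing capture: create the destination table first
-- (use @return_schema = 1 to generate its CREATE TABLE
-- statement), then schedule this in an Agent job:
EXEC sp_WhoIsActive @get_transaction_info = 1,
                    @destination_table = 'dbo.WhoIsActiveLog';
```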
