Log Shipping FAQ

I’ve been working with and talking about SQL Server Log Shipping a lot lately!

[Photo: one way to ship logs…]

I’ve gotten a lot of great questions about log shipping, so I’ve put together an FAQ.

  • What editions of SQL Server is log shipping available in?
    • 2012 – Enterprise, Business Intelligence, Standard, and Web
    • 2008R2 – Datacenter, Enterprise, Standard, Web, and Workgroup
    • 2008 – Enterprise, Standard, Web, and Workgroup
    • 2005 – Enterprise, Standard, and Workgroup
  • Does the secondary need to be licensed?
    • I am not the licensing police, and I am not Microsoft – check with your licensing representative to clarify your exact situation. Generally, you can have one warm standby server. However, the second someone starts using it for reporting, testing, or anything else, you need to license it like any other server.
  • Log shipping is compatible with backup compression. What edition of SQL Server do I need to take advantage of compression?
    • 2012 – Enterprise, Business Intelligence, or Standard
    • 2008R2 – Datacenter, Enterprise, or Standard
    • 2008 – Enterprise
    • 2005 – Not available
  • When log shipping is set up, Agent jobs are created to alert me if a backup, copy, or restore fails. How do I get notified?
    • You need to go into the Agent job, pull up Notifications, and choose your method – email an operator or write to the event log, for example. (There’s a scripted sketch of this after the list.)
  • Are my logins shipped from the primary to the secondary?
    • No, they are not. You’ll need to set up a separate method to sync the logins.
  • Does this replace, or can it be combined with, our existing daily full and log backups?
    • TL;DR – it replaces your log backup job, but not your full or differential backups.
    • You’ll still want to take regular full and/or differential backups. Log shipping only takes one full backup – at the beginning – and that’s only if you specify that it does so. It can also be initialized from an existing full backup.
    • Taking two log backups in separate jobs will break the log chain, however. If you implement log shipping, it will replace your current transaction log backup job.
  • What’s the difference between the secondary being in “Restoring” vs. “Standby”?
    • Restoring means the database is not accessible. Standby means it is read-only. You make this decision when you set up the log shipping.
    • If the database is in Standby mode, users can query it – except when a log backup is being restored. You need to decide if a restore job will disconnect users, or if the restore is delayed until after the users are disconnected.
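
If you’d rather script the notification setup than click through the Agent job dialogs, here is a minimal T-SQL sketch – the job name LSBackup_YourDatabase and the operator DBA_Team are placeholders for whatever already exists on your server, and the operator has to be created first:

    -- Notify an existing operator whenever a log shipping Agent job fails.
    -- Job and operator names are placeholders - substitute your own.
    EXEC msdb.dbo.sp_update_job
        @job_name = N'LSBackup_YourDatabase',
        @notify_level_email = 2,    -- 2 = notify on failure
        @notify_email_operator_name = N'DBA_Team';

Repeat for the LSCopy, LSRestore, and LSAlert jobs as needed.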

What other questions do you have?

Interested in learning more about backups and recovery?

Grab my Backup & Recovery Step By Step training for more information on why backups and restores are so important, and how to perform them!


390 Comments

  • Great FAQ, Jes!

    One small disagreement: taking a log backup in a separate job won’t break the log chain. Failing to restore that log backup on the secondary is what causes a gap in the restore sequence there. You can manually restore that log backup (or even a differential that spans the gap) on the secondary to get log shipping to continue.

    Reply
    • Jes Schultz Borland
      March 22, 2013 9:27 am

      Robert, thanks for the clarification. You can take as many log backups as you want – it’s the order they are restored in that matters.

      Reply
      • I learned this the hard way when trying 3 times to set up log shipping. The restore kept giving errors saying “this is not the earliest transaction log in the set”. The problem was I had not turned off our hourly log backups. Once I realized it, I used the old hourly log (stored in a different folder) to manually restore. Since then the job has worked fine.

        Reply
        • So just to clarify, consider the scenario below:

          1. We already have a job set up to back up the transaction log,
          which copies the backups to 2 different locations.

          2. If log shipping has to be configured, can we use the log backups already taken (in step 1)?

          3. Or does the step 1 transaction log backup schedule have to be cancelled?

          4. Can I cancel the transaction log backup job that log shipping creates, and point log shipping at the location of the logs from step 1?
          Thanks in advance…

          Reply
          • So to get this straight: you already have the log backups being copied to two other locations? If one of those locations is within the same LAN as the server that you want to restore to, then yes, just use that backup location (just make sure to disable the extra “LS_Copy” job that is created).

            I am not sure what you are trying to say in points 3 and 4, but there is never a reason to shut off the original log backup job, unless you are disabling log shipping because you no longer use the secondary.

    • You are right – if you take an extra log backup, or leave another log backup schedule running on the primary database in log shipping, the log shipping restores will surely fail.

      Reply
    • kushal bhowmick
      September 8, 2015 1:41 am

      In a log shipping environment, if the recovery model is changed on the primary server, what happens?

      Reply
    • My log shipping restore job is failing with the error “Could not acquire exclusive lock on database”. What could be the issue, and how do I resolve it?

      Reply
      • That means someone else is querying the database. You’ll want to check which queries are running in that database, and kill ’em or ask ’em to quit.

        Reply
  • Jes,

    What are the pros and cons of using backup compression with log shipping?

    Thanks,

    Patrick

    Reply
    • Jes Schultz Borland
      March 22, 2013 9:31 am

      Patrick, I’d ask, “What are the pros and cons of backup compression?” Enabling backup compression will use extra CPU, but the backups will take up less space on disk, and the files that need to be copied across the network are smaller. Don’t use it on a system that’s constrained by CPU, however.
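
      If you want to measure that trade-off on your own system, here’s a sketch against the standard msdb history tables – compressed_backup_size simply equals backup_size when compression is off:

          -- Raw vs. compressed sizes of recent log backups.
          SELECT TOP (50)
              database_name,
              backup_start_date,
              backup_size / 1048576.0            AS backup_size_mb,
              compressed_backup_size / 1048576.0 AS compressed_mb
          FROM msdb.dbo.backupset
          WHERE type = 'L'    -- transaction log backups
          ORDER BY backup_start_date DESC;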

      Reply
  • I haven’t gotten to work with this yet, but it surprises me that there are log chain issues. Can you not just set the log shipping job to copy_only and leave your existing log backups untouched? Though I guess you would then need to back up the secondary’s log to keep it under control.

    Reply
    • Jes Schultz Borland
      March 22, 2013 9:34 am

      If you leave an existing log backup job in place, and implement log shipping, you would have to figure out a way to restore both log backups to the secondary, in order, so the log shipping chain was never out of order.

      Reply
    • No, you cannot set the log shipping job to do copy-only log backups. There’s no logical reason you would want it to do that. You would then need a second log backup job to maintain the log file. Why would you want to do twice as many log backups?

      The secondary’s log cannot be backed up. The database is not online. It is maintained by replaying the records of the primary. As long as you maintain the log on the primary, it will be maintained on any secondaries.

      Reply
      • This is actually an excellent question because it allows for thought.
        Today we as DBAs deal with backup processes that are potentially out of our control. These products may be performing t-log backups, so the question is: can log shipping be used WITH COPY_ONLY?

        Scenario: an enterprise B/U tool running SQL Server backups (full, t-log) directly to some vault that is inaccessible by any means other than the B/U tool.

        Requirement: log shipping to a secondary server.

        SQL Server allows the copy_only t-log backup and reports “BACKUP LOG successfully”.

        How will SQL Server respond to rolling forward t-logs that were taken with the copy-only flag? Thinking about the LSNs here – what will happen? This is a good experiment. 🙂

        Reply
        • Clarky – the bigger problem is that if the enterprise backup tool *doesn’t* use the copy-only flag (and most don’t, for obvious reasons) then what happens when the enterprise backup tool copies and clears the transaction log as part of a log backup, and you haven’t gotten your separate transaction log backup first? You’ll be missing transactions.

          Reply
  • “Taking two log backups in separate jobs will break the log chain, however. If you implement log shipping, it will replace your current transaction log backup job.”

    Jes – if you do a “copy only” backup – that will not break a log chain IMO

    Reply
  • Hi Jes. I am a big fan, congrats.

    Jes, you really lost me on the log shipping job (the second one). If I have log shipping set up, and at the same time I have a backup schedule (full every Sunday, differentials every day, and logs every hour), will it create a conflict?
    Please can you clarify this for me – you’ve thrown me off track.
    Thanks, and once again thanks for your hard work on this.

    Reply
    • Jes Schultz Borland
      March 25, 2013 8:17 am

      The only type of backup that log shipping controls is log backups. Full (and differential, if you take them) backups are not part of the automatic log shipping jobs. Full and differential backups still need to be run regularly, in separate jobs.

      Reply
  • Jes,

    Once you’ve failed over from Primary to Secondary, and then from ‘Secondary’ back to ‘Primary’, do you need to remove the log shipping from Secondary to Primary?

    I’m getting alerts from the LSAgent on both servers that sync hasn’t happened from S to P. So a final step for the planned failover would be to remove the shipping that was set up for the reverse direction?

    Thanks for the screencast, it was well done.

    Reply
  • Derek Steinmetz
    April 18, 2013 3:56 pm

    If my SQL 2012 Agent is running as NT Service\SQLSERVERAgent, how do I go about giving that account permissions on the file share? Or should I not have used the SQL Server 2012 default installation accounts to run SQL Server? Thanks.

    Reply
  • Just to confirm, you cannot do log shipping from SQL 2012 to SQL 2008 R2, right?

    Reply
  • Njama Braasch
    June 25, 2013 9:48 am

    I love your site Brent! Any guidelines for log shipping frequency? Maybe transactions / hour / second correlating to log ships per / whatever?

    Reply
  • Hi Brent.. I like your videos and the way you present them.

    One of my databases (A) contains 2 log files and is also participating in log shipping. What is the safest way to drop a log file without breaking log shipping?

    Reply
    • This is a really interesting question. I wish I had the time to dedicate to give you an answer that’s just as interesting, but there’s a lot of possibilities here. The best way to get help on something like this is to narrow down the question as tightly as possible, then post it on http://StackOverflow.com or http://ServerFault.com. Include as much example code as you can so that others can quickly run it and get an idea of what you’re asking, and they might even post your code back with fixes. If you don’t get good answers within a day or two, let me know and I’ll take a look at the question to see if there’s a way we can get better answers.

      Reply
      • Abhiram. Chitipirlla (Raam )
        July 12, 2013 6:29 pm

        Brent

        I found a way when I was doing trial & error:

        1. Shrink the additional log file by using DBCC SHRINKFILE (log_name, 1).

        Note: proceed only if you find the tail of the log in that file is inactive (DBCC LOGINFO). (Take more frequent t-log backups to make it inactive.)

        2. Once you shrink the log file, drop the additional log file using the GUI or T-SQL.

        3. Run the LS backup job to remove the additional log file info from sys.sysaltfiles.

        4. Enable the copy and restore jobs.

        Note: Test this procedure in a test environment before you run it in production.
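
        For readers following these steps, here’s a rough T-SQL sketch of 1 and 2 – the database and file names are placeholders, and as noted above, test it outside production first:

            USE YourDatabase;
            -- 1. Shrink the extra log file (logical name is a placeholder).
            DBCC SHRINKFILE (YourDatabase_log2, 1);
            -- 2. Drop it. This only succeeds once the file holds no active
            --    log; if it fails, take more log backups and retry.
            ALTER DATABASE YourDatabase REMOVE FILE YourDatabase_log2;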

        Reply
  • I have set up log shipping between 2 SQL Servers (SQL Server 2008 and SQL Server 2012).

    At first I took a full backup of the first one and restored it on the other one, and then I set up the log shipping.

    The database’s first backup and restore took a few hours, and I wish to know:

    How does the log shipping process know when to start the logs it is shipping? Does it start from when the last backup–restore process happened, or from the moment the log shipping itself is set up?
    Can I be sure that no information was lost in these few hours?
    Thanks

    Reply
    • Matan – if you try to restore a transaction log backup and there’s missing data, SQL Server won’t allow you to do it. It’ll require the earlier transaction log backups to be applied first.
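
      If you want to see that chain for yourself, here’s a sketch against msdb’s history tables (the database name is a placeholder): each log backup’s first_lsn has to match the previous one’s last_lsn, which is how SQL Server knows something is missing.

          -- List the log backup chain for one database.
          SELECT backup_start_date, first_lsn, last_lsn
          FROM msdb.dbo.backupset
          WHERE database_name = N'YourDatabase'
            AND type = 'L'    -- 'L' = transaction log backup
          ORDER BY backup_start_date;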

      Reply
  • Hi,
    We are in the process of setting up log shipping, using SQL 2008 R2 ent edition. I’m a newbie dba working with a contractor dba. Here’s what I’m being told.

    When a database is initialized on the source server, it first creates a backup, then restores it on the secondary server. Here’s the question. Our source database is spread across three data files on three different drive letters. Our secondary server is also set up in the same fashion, to take advantage of three data files across three drive letters (D:, E:, & F:).

    The DBA is telling me that the configuration doesn’t allow spreading across three drives; I must use only one drive letter. The workaround would be to take a backup of the database, then FTP it to the secondary server, restore it across the three drive letters, THEN enable log shipping.

    Can anyone confirm whether this is my only option to spread the data files across three drive letters?

    Apologies, I’m still learning the SQL technology and may not have my context correct.

    best regards,
    Jay

    Reply
    • Jay – sounds like that DBA isn’t quite up to speed either. You can totally do log shipping with files on multiple drives.

      Reply
      • Brent, thanks for that quick reply back. A brief follow up question:

        Our scenario:
        1. We manually created the empty database and spread the data files across the 3 drive letters (D, E, F) mentioned above on the secondary. (Each drive has 700GB.)
        2. Size of primary database: 1.8 terabytes.
        3. During the initialization of log shipping:
        a. A FULL backup was created on the primary.
        b. The restore step failed on the D: drive, with an error stating there isn’t enough space for 1.8 terabytes when only 700GB is available.

        Question: how can we bypass this error message without having to manually copy the backup over to the secondary and restore it from there?

        Reply
        • Jay – I’m not quite clear on where the error is occurring (on a secondary or on the primary) and troubleshooting this is a little beyond what I can do in a blog post comment. Your best bet for personalized one-on-one help like this is to use the contractor DBA. If he can’t help with these kinds of issues, you probably have the wrong contractor. 😉

          Reply
      • Actually the contractor is right, as per the ‘workaround’ above. If you set up log shipping with the GUI and initialise a new copy of the database, you can only specify one location for data files and one for logs. I’m guessing that’s why you are getting the error below too – it’s trying to restore all files onto one drive.
        If you need multiple locations, you need to pre-initialise the database on the secondary (copy a backup across and restore it with the files in the 3 locations – use NORECOVERY or it won’t work).
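
        A sketch of that pre-initialise restore, spreading the files across the three drives – all file names and paths are placeholders (RESTORE FILELISTONLY FROM DISK = ... lists the logical names to use):

            RESTORE DATABASE YourDatabase
            FROM DISK = N'D:\Staging\YourDatabase_full.bak'
            WITH MOVE N'YourDatabase_data1' TO N'D:\Data\YourDatabase_1.mdf',
                 MOVE N'YourDatabase_data2' TO N'E:\Data\YourDatabase_2.ndf',
                 MOVE N'YourDatabase_data3' TO N'F:\Data\YourDatabase_3.ndf',
                 MOVE N'YourDatabase_log'   TO N'L:\Logs\YourDatabase.ldf',
                 NORECOVERY;   -- stays restoring, so log shipping can apply logs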

        Reply
        • And to answer your other question, the only way to prestage the database on the subscriber without copying a backup to the subscriber is to perform the restore across the network. And at 1.8 terabytes, that is probably going to be a terrible idea. Far better to assign a temporary disk to the secondary, copy the backup to that, and restore locally.

          Reply
        • You don’t have to use the GUI though.

          Reply
          • Sure, you can script the whole process, but he’s still going to need to decide which is the lesser of 2 evils – finding space on the secondary to store his backup so he can restore from a local file, or going through the pain of restoring a huge database across the network.

  • What is a .TUF (Transaction Undo File)? If we lose the .TUF file, what happens?

    Reply
  • Philippe Bechamp
    August 6, 2013 3:45 pm

    Hi Brent,

    Is there a way to back up a log shipped destination database? I know it cannot be backed up while it is in restore or stand-by mode. But can it be taken offline, backed up, and the log shipping restarted?

    Thanks !

    Phil.

    Reply
    • Jes Schultz Borland
      August 6, 2013 4:40 pm

      You can’t back up an offline database either. My question is, why do you want a backup of the database that is your backup?

      Reply
      • Philippe Bechamp
        August 6, 2013 8:06 pm

        Well, the initial plan was to avoid impacting the prod DB, as it is already backed up by another method.

        I’ve looked into minimizing DB backup impact since. Probably a better way to go.

        Reply
  • What are the prerequisites and pros & cons of having two secondary servers pointing at one primary in a log shipping setup?

    Reply
    • Jes Schultz Borland
      August 8, 2013 8:29 am

      Prerequisites for the primary and the secondaries are the same regardless of whether you have one or multiple. The benefit of multiple secondaries is that you have your data in multiple locations. One could be a warm standby; one could be used for reporting. The downside is that you’ll now have multiple copy and restore jobs to monitor and manage.

      Reply
  • Hi! Great FAQs!

    My company recently implemented Log Shipping on a 700 GB database using 15 minute intervals. This seemed to cause excessive blocking with our normal business processes that were executing during the same time period that the database’s indexes were being rebuilt online (some of these indexes take longer than 15 minutes to rebuild).

    1. Can you explain how log shipping could possibly cause such blocking while indexes are being rebuilt and other data manipulation was occurring?
    2. Do you have any recommendations to enable Log Shipping to co-exist without generating conflicts?
    3. Is it possible to set up Log Shipping to execute on an irregular schedule instead of a constant time interval?

    Thank you for your input and wisdom!

    Reply
    • Kendra Little
      August 15, 2013 6:14 pm

      Hey Eric,

      Logshipping itself doesn’t cause table level locking and blocking: it just uses backups of the transaction log (which don’t use locks like that). Based on what you’ve said, I’m guessing you changed the recovery model from Simple to Full to implement log shipping, and that you have a performance condition that’s just related to the extra work of doing log backups.

      For your question about intervals, there might be some ways to rig up a solution where you can get things to execute at non-regular intervals, but I think the better question is how you can make it so that you can run transaction log backups frequently and reliably without pain. That’s the better goal!

      Kendra

      Reply
  • Hi Jes,

    I’ve one question. Let us assume we configured log shipping for 4 databases on one server (instance), and suppose we have 50 servers, every server with 4 databases configured with log shipping (50*4 = 200 databases).
    1Q) How can we monitor these log shipped databases, whether the databases are in sync or not? Is there any easy method or script?
    2Q) How can we check the available disk free space on all the servers to prevent space issues?

    We can’t log in to every server to check the above 🙂 . Is there any method to avoid connecting to every server?

    Thanks & regards,
    Samba

    Reply
    • Samba – actually, let’s take a step back. What software are you using to monitor the health across these 50 servers? That’ll impact the answer because each monitoring program has different ways of looking at this.

      Reply
      • Thanks Brent for the swift reply. We are using SCOM for SQL Server monitoring. Is there any chance (method) to monitor this using SSMS? Can we use a script, or Central Management Servers, to monitor?

        Reply
    • When you set up log shipping, it creates a job called LSAlert_(servername). That job starts failing if any of the databases log shipped to that instance are behind the thresholds you set. You can set up an operator to be notified on job failure on each instance, and you’ll get an email sent right to you.
      But, per Brent’s answer, there is a bigger monitoring question here. In my experience SCOM is quite a noisy monitoring tool that takes quite a bit of tuning to get working “right”.

      Reply
      • Thanks Rob. The job (LSAlert_servername) is created in SCOM, right? Is it possible to check this through SSMS? I mean, is there any method or script? Have a great day…

        Reply
        • It’s created as a SQL Server Agent job. You can check it through Management Studio: connect to your log shipped SQL instance, expand SQL Server Agent, and double-click Job Activity Monitor.

          You can also monitor the current status of log shipping by right-clicking on the SQL instance name and selecting ‘Reports’ -> ‘Standard Reports’ -> ‘Transaction Log Shipping Status’.
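
          If you’d rather script that check than run the report, the same status information appears to be available from a stored procedure; a sketch:

              -- Summarizes backup/copy/restore status and alert thresholds
              -- for every log shipped database on this instance.
              EXEC master.dbo.sp_help_log_shipping_monitor;

          Run it against each instance, or fan it out with a Central Management Server registered-servers query.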

          Reply
  • Log shipping:
    If the primary server in a log shipping setup crashes, how will you find the log shipping secondary server?

    Reply
    • Kamesh – can you reword your question?

      Reply
      • Hi
        My primary database is located on the primary server instance. After some time, the primary server instance crashes. How will you find the secondary database on the secondary server instance?

        Reply
        • Jes Schultz Borland
          August 28, 2013 8:58 am

          First, you should have all log shipping instances fully documented somewhere – a Word document, a spreadsheet, your intranet – somewhere.

          Second, if you get into a situation where you lose the server your primary database is on, look for a server that has the Copy and Restore jobs on it. The Copy job should have the primary instance name in it. But this is no substitute for documentation.
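
          If you’re staring at a candidate server, here are a couple of scripted checks – sketches, assuming the default log shipping job naming:

              -- Log shipping restore jobs on a secondary follow this pattern.
              SELECT name FROM msdb.dbo.sysjobs
              WHERE name LIKE N'LSRestore%';

              -- Secondary databases and the primaries they ship from.
              SELECT sd.secondary_database, s.primary_server, s.primary_database
              FROM msdb.dbo.log_shipping_secondary_databases AS sd
              JOIN msdb.dbo.log_shipping_secondary AS s
                ON sd.secondary_id = s.secondary_id;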

          Reply
  • What is the best approach to a t-log shipping config when patching SQL Servers?
    Should you disable all LS jobs on the primary while patching it, and then just re-enable them once patching and restarting are complete?
    There are no plans to fail the servers over, as patching is done in a defined service window.

    Any other tips in regards to log shipping and SQL patching – a checklist or similar – would be great.

    Thanks

    Reply
    • Hi, Mickey. Getting the right answer and a checklist here is a little beyond the scope of what we can do in a blog post comment, but that’s a great idea for a future post. Thanks for stopping by!

      Reply
  • Ashish Jaiswal
    August 29, 2013 8:02 am

    Check the other server’s job list. If you see job names like LSCopy_“PrimaryServerName” and LSRestore_“PrimaryServerName”, that is your secondary server.

    Reply
  • Hi

    How do you configure log shipping across different domains? Please let me know if there is any documentation related to this.

    Reply
  • Hi
    Thanks for your reply.

    Reply
  • Okay, I’m setting up a log shipping primary and secondary. The database is large, and the pipe is small between primary and secondary. I want to pre-initialize the secondary DB with the data and schema, then turn on log shipping. What are the steps to do this?

    Reply
  • On the primary, here is the setup:
    Full: every Sunday, 23:00
    Differential: every night Monday – Saturday, 23:00
    Log shipping via LS jobs: every hour during the day

    Say, Wednesday night, for some reason one of the .trn files gets accidentally corrupted or deleted from the backup folder.
    Thursday morning you come in, and the chain broke last night and won’t continue (you are missing a file in the restore folder).

    How do you overcome this and fix the broken chain?

    Can you restore last night’s differential and start from there,
    or take a differential right away, up to the current moment?

    Reply
    • Bass – I’m a big fan of doing experiments to find out stuff like this. Have you tried running an experiment on this in your environment? Create a database on your development server and give it a shot. This is one of the best ways to learn, getting under the hood and proving these things for yourself.

      I know it sounds cruel not to just tell you the answer, but it’ll get you started on a lifetime of experimenting and learning. Have fun!

      Reply
  • Hi
    How do you transfer the logins, server-level permissions, database-level permissions, and object-level permissions from one server to another server in a log shipping plan?

    Reply
    • Kamesh – log shipping transfers data inside the database, which includes permissions on things inside the database.

      It does not cover things outside of the database, like logins and server level permissions.

      When you Google for answers, leave out the log shipping part, and just look for answers on how to copy logins or server level permissions.
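
      One common approach for the SQL logins – Microsoft’s sp_help_revlogin script is the more thorough, battle-tested version; this is just a minimal sketch of the idea – is to generate CREATE LOGIN statements on the primary that preserve each login’s password hash and SID, so the shipped database’s users still line up:

          -- Script out SQL logins with their original hashes and SIDs.
          -- (Style 1 renders varbinary as an '0x...' literal, SQL 2008+.)
          SELECT 'CREATE LOGIN ' + QUOTENAME(name)
               + ' WITH PASSWORD = '
               + CONVERT(varchar(max),
                         CAST(LOGINPROPERTY(name, 'PasswordHash') AS varbinary(256)), 1)
               + ' HASHED, SID = ' + CONVERT(varchar(max), sid, 1) + ';'
          FROM sys.sql_logins
          WHERE name NOT LIKE '##%';   -- skip internal logins

      Windows logins are simpler – CREATE LOGIN [DOMAIN\user] FROM WINDOWS – since their SIDs come from the domain.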

      Reply
  • Brent, we currently copy our prod backup to other environments to refresh our main database about every 2 months. Is there a way to reinitialize log shipping without starting with another full backup of the source server?

    Reply
  • Thank you

    Reply
  • Hi
    1) Is it possible to take full, diff, and t-log backups of the log shipping primary database?

    2) What are the 14420 and 14421 errors in log shipping?

    Reply
  • When we configure log shipping, LSAlert jobs are created for backup and restore. I want to know how these jobs work, and where the errors are stored before being written to the logs. Do they write straight into the msdb database?

    Reply
  • Hi Brent Ozar and Jes Schultz Borland, you both are great, giving such valuable information.
    In my practice, I configured log shipping on 2008; my backup and restore jobs are running successfully, but the copy job fails. Reason: it can’t access the path where the backup files are stored. But I shared the drive and folder where the backup files are stored, and I am still facing the same problem.

    Please give me a solution for that.

    Thank you

    Reply
  • Hi
    The log shipping primary database’s log file is full. How do I troubleshoot this?

    Reply
  • I want to configure log shipping from SQL 2008 R2 Enterprise edition to SQL 2008 R2 Standard. Will there be any problems with the secondary when I use compression and partitioning on the primary?

    Reply
  • Can you set up log shipping between sql server 2008 R2 (standard) and 2012 (standard)?

    Reply
  • While log shipping is running, if I add a data file on the primary server – for example, the primary’s files are on the E: drive and the secondary’s are on the F: drive – the secondary server has no permission to access the E: drive where the new data file was added. The copy job runs successfully, but the restore job fails with an error message like: “Can’t access the path”.

    How do I resolve this issue? Can’t we add a data file on the primary server without breaking log shipping?

    Reply
    • Jes Schultz Borland
      September 27, 2013 12:14 pm

      This is a really interesting question. I wish I had the time to dedicate to give you an answer that’s just as interesting, but there’s a lot of possibilities here. The best way to get help on something like this is to narrow down the question as tightly as possible, then post it on http://StackOverflow.com or http://ServerFault.com. Include as much example code as you can so that others can quickly run it and get an idea of what you’re asking, and they might even post your code back with fixes. If you don’t get good answers within a day or two, let me know and I’ll take a look at the question to see if there’s a way we can get better answers.

      Reply
  • Hi Brent,

    I would like to know whether we can achieve log shipping from SQL Server 2008 SP2 to SQL Server 2008 R2 RTM, both Enterprise edition, with the destination DB being in STANDBY mode.

    Thanks in Advance,
    Deepak Almeida

    Reply
  • We log ship to a network drive. I can pull the log for the restore directly from the network drive where the backup is. I’d like to skip the “copy” step, but the log shipping setup did not allow me to, so I just copy right back to the same location as the backup. This mostly works, but about twice a week the LS_Copy job fails with an “Already Exists” message.

    Is there a way to skip the “Copy” step?

    Or copy to ?

    thanks,

    tim

    Reply
  • Another situation: I back up to a remote drive every 15 minutes, 24×7. We keep 45 days’ worth. The folders have gotten quite large (over 4000 files in each folder). This is affecting the performance of the log shipping delete step: the next backup starts before the delete step is complete, causing the log shipping process to get log-jammed.

    I’d like to remove the “delete files older than” step and run a separate delete job at night.
    Again, I don’t see in the Log Shipping wizard that I can remove the ‘delete files….’ step.

    Suggestions ?

    thanks very,

    tim

    Reply
    • Hi, Tim. Just set the files to never get deleted in the Log Shipping Wizard (pick a really long time span) and manage the deletes yourself.

      Reply
      • It looks like no matter how many days you put in, it still has to read the directory and determine if there are any files to delete. So that doesn’t really help.

        I ended up dropping my retention and cleaning up the folders to have fewer than 2000 files. That did the trick – no more performance problems.

        Our other option was to rake off older files into an archive folder, leaving the main backup/restore folder smaller.
        We may still do this.

        thanks very,

        Tim

        Reply
  • Andy Chiarwanto Pranata
    October 16, 2013 1:58 am

    Hi.
    I want to know how much the log shipping jobs (backup, copy, restore) affect the server’s performance.
    Do you have any idea?

    Reply
    • The answer would depend on your individual server and your backups. Your best bet is to do the performance testing in your environment. You’ll find that there’s lots of settings you can tune to make the backups go faster, plus you’ll want to know what your server’s bottleneck is.

      Reply
  • Hi, thanks for the article. I have a small doubt; hope you can advise me on it.
    Is there a way to find which log shipping initialization method was used to establish the data synchronisation?
    E.g.:
    – Generate a full backup of the primary and restore it to the secondary.
    – Log shipping from an existing backup of the primary database.
    – Secondary database already initialized (without changing the secondary).

    If so, how can I find it? I have tried looking in the log files but couldn’t find it.

    Reply
    • Ravi – what’s the problem that you’re trying to solve?

      Reply
      • I am trying to find which option was selected during the log shipping configuration.

        Reply
        • Jes Schultz Borland
          November 14, 2013 1:14 pm

          Are you asking which option was selected on the “Initialize Secondary Database” tab of the setup wizard? If so, why is that important to you? What problem do you need to solve?

          Reply
          • Yes, you are right. I have tried setting up all three methods in a test bin and found that the last option – secondary database initialized (without changing the secondary) – has some performance issues compared to the first 2 options. That’s the reason I would like to know which option was used, so that I can prevent this kind of performance issue in my production environment in the future. Hope I am clear. Thanks.

  • By the way, it’s in 2005

    Reply
  • Hi Jes
    I am having problems with my log shipping job: the copy job is failing with an access error,
    “The access to the path is denied”.
    I have tried everything I know. I have given the secondary server permission to the folder where the backup is on the primary, I have shared the folder, and I even gave the secondary server permission to the drive.
    I don’t know if you can help.

    Reply
    • Jes Schultz Borland
      November 26, 2013 7:40 am

      The secondary server’s Agent account needs access to the folders – that may be different than the account running SQL Server. Did you check that it’s the correct account?

      You can provide more details, such as an error message, at dba.stackexchange.com. There are always lots of people there, watching and helping to answer specific questions!

      Reply
    • I had the same issue. We resolved it by using the same *user account* name and password on the primary and secondary servers to access the shared folder. We also run the SQL Server Agent under that account.

      Reply
  • Hi Brent and Jes,
    Thanks for all your effort!

    I’ve got a question: I have two nodes in a cluster (SQL Server 2008), and I want to back up my cluster to another site (a DR site) using the log shipping method. Is it possible, and what is the roadmap? Sorry, maybe the question is ridiculous :/. Could you please clarify that for me?

    Thanks.

    Reply
  • Hey Brent,

    Sometimes the copy job takes a long time to run because of big t-log backup files, so the restore job reaches its threshold and alerts. How do I find the restore history, i.e. the files that were restored in a given run and the time each took to restore?

    I can look at the job history to get a rough idea of how long the restore job took, but it’s not helping.

    Thanks for looking.

    Reply
    • Chris – those are in the MSDB tables for backup history and restore history. Fire up SSMS and type select * from msdb.dbo.backup or msdb.dbo.restore and IntelliSense will kick in from there, helping show you through some of the tables. You can also check those out in Books Online.
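
      For example, here’s a sketch of the restore side – the file name comes through the media family table:

          -- Which log backups were restored, when, and from which files.
          SELECT rh.restore_date,
                 rh.destination_database_name,
                 bmf.physical_device_name
          FROM msdb.dbo.restorehistory AS rh
          JOIN msdb.dbo.backupset AS bs
            ON rh.backup_set_id = bs.backup_set_id
          JOIN msdb.dbo.backupmediafamily AS bmf
            ON bs.media_set_id = bmf.media_set_id
          WHERE rh.restore_type = 'L'    -- log restores only
          ORDER BY rh.restore_date DESC;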

      Reply
      • Haha, thanks. I always query msdb to get backup history; it’s not a regular thing for me to look at restore history. Now I have the restore_date – the job is supposed to kick in every 30 min, but for a few rows in restore_date there is a difference of 2 hrs or so – and based on the history_id, I should be able to get the file names. Got it :).

        Reply
  • Hi Jes,

    did you ever come across the issue where everything works fine except no logs are being restored?
    I’ve set up log shipping manually between 2 instances that are totally separated and cannot ‘see’ each other. I went through the LS setup steps using the UI but scripted the config and then cancelled the UI. So there is no real primary and the log backup and copy is handled using our standard routines.
    I ran the LS setup script on the secondary (just the required part). Everything works fine but the LS restore job just doesn’t restore. It finds the correct log but doesn’t restore it. If I manually restore a log file, the job notices that and just skips that file and continues with the next: it says it has found the log but it again doesn’t restore it.
    I checked the error logs, traced it, executed logship.exe from the command line, manually executed sp_can_tlog_be_applied etc etc. All works fine and no errors, just no restore…

    Any ideas?
    Chrz
    Thierry

    Reply
    • Jes Schultz Borland
      January 10, 2014 7:41 am

      No, but I haven’t tried cancelling out of the setup on the primary and using another method to back up and copy the log files. If the restore job is successful, how do you know the log isn’t being applied? Is there perhaps an issue with naming conventions?

      Reply
      • thx for the quick reply

        I know from the job log it’s not being applied:

        2014-01-10 14:04:19.12 Found first log backup file to restore. Secondary DB: ‘xyz’, File: ‘xyz_backup.trn’

        2014-01-10 14:04:19.13 The restore operation was successful. Secondary Database: ‘xyz’, Number of log backup files restored: 0

        I replaced the network location with a local disk since I thought it was a network/permission issue but this doesn’t solve the issue.
        I also restored the latest backups because I thought maybe it was too far behind (never know), still no luck.

        I’m beginning to think there’s a bug somewhere. Hard to tell if I cannot find any errors though…

        Also found this thread that doesn’t really help
        http://social.msdn.microsoft.com/Forums/sqlserver/en-US/44d3b8b2-59aa-4ac5-90de-be9eab7e06a3/log-shipping-not-working?forum=sqldatabasemirroring&prof=required

        Reply
        • Thierry – if you suspect a bug, contact Microsoft support. It costs $500 per call, but they work the problem with you until it’s done, and if it does turn out to be a bug, they’ll refund the $500.

          Reply
          • thx for the reply Brent. Before I do so, I’m going to install a VM environment and see if I can simulate the behavior (maybe try different versions as well to see if it’s related to a specific version or build).
            Thx again for your time (and Jes) to react to my questions

        • Kendra Little
          January 10, 2014 6:30 pm

          Hey– just a quick guess/question. Are you possibly using log backup files that aren’t generated by the built-in logshipping jobs? (Your own log backup job, or one from another script?)

          I tried to do that once years ago and it drove me nuts– the restore jobs would succeed but never actually restore the backup files. It really cares about the filenames and that they are in the exact timezone and format it’s expecting if it created the log backups itself.

          If this is your issue, more info on it here: http://mikehillwig.com/2012/03/15/sports-analogies/

          If not, Mike Hillwig has some other great stuff on his blog about logshipping, too. 🙂

          –kendra

          edit: updated/fixed link
          edit2: I bet this is your issue! Reread your original comment and found the sentence “the log backup and copy is handled using our standard routines.” That’ll do it! Drove me nuts, too.

          Reply
  • Hi Jes,
    to test I just renamed one of the log files to match the LS backup file pattern and OOH MY GOD 🙂 it worked. Steeee-rike for the ‘Ozar-team’ once again!
    I wonder why on earth it was designed like this.
    Anyway, thx again!

    Reply
  • Jes Schultz Borland
    January 13, 2014 7:35 am

    Excellent! Glad we could help you out!

    Reply
  • I have log shipping already configured and working fine,
    but the client asked to change the path, so I went to the copy files destination folder and changed it from the D drive to the C drive.
    Then I copied all the transaction logs from D to C.

    After that, the secondary DB is not restoring.

    I thought maybe it was because the DB was not in sync,
    so I synced the secondary DB with the primary DB,
    but it is still not restoring.

    Do I need to clear all the transaction logs on the C drive first, before I start the copy and restore jobs?

    Thank you

    Reply
    • Jes Schultz Borland
      February 5, 2014 9:25 am

      Have you updated the log shipping configuration so the backup job, copy job, and restore job know where to look for the files?

      Reply
  • We have a table cleanup (deleting a bunch of rows) that happens monthly, which creates a huge transaction log. Is there any way to implement log shipping, or the cleanup, in a way that won’t force us to schlep that giant log across the wire?

    Reply
    • Jes Schultz Borland
      February 5, 2014 9:16 am

      There isn’t – that’s an operation that’s going to be fully logged. You could disable log shipping, set the database to simple recovery, do the operation, set it back to full recovery, and reinitialize log shipping, but I don’t know if that is any less time-consuming. You also lose point-in-time restore capabilities while the database is in simple.
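
      If you do go that route, the moving parts are small – a sketch, with a placeholder database name and backup path:

          -- Point-in-time recovery is lost while the database is in SIMPLE.
          ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;

          -- ... run the big delete here ...

          ALTER DATABASE YourDatabase SET RECOVERY FULL;
          -- A full (or differential) backup restarts the log backup chain.
          BACKUP DATABASE YourDatabase
            TO DISK = N'\\backupshare\YourDatabase_full.bak';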

      Reply
      • Thanks Jes. Seems like kind of a dumb question, but thought I’d ask, like setting it as some kind of bulk-logged operation or such…

        Reply
  • Hi Brent/Jes

    Please could you clarify my understanding of the restore alert threshold. I have just set up a LS configuration to delay the log file restore by 240 minutes. I’ve left the alert on the default 45 minute threshold but the alert seems to be firing – I guess the restore threshold is being exceeded.

    The wording of the alert suggests to me that it will alert if there is NO restore within the 45-minute period. Restores are actually happening every 15 minutes, successfully. Should I read this as “the restore of a particular log needs to happen within 45 minutes of the backup being taken”? I.e., should my restore alert be configured to 285 minutes rather than 45?

    I’ve played around with the threshold and it works fine when set to 285, but for obvious reasons I wanted to understand it so I don’t surprise myself if restores genuinely fail for a period of time and I miss the alert for a few hours.

    Hope I explained that relatively clearly – its always seems simpler in my head 🙂

    Many thanks

    Reply
    • Kendra Little
      February 6, 2014 8:34 am

      Hey there,

      If you think about the alert monitoring, it’s trying to help you keep within your desired “Recovery Point Objective” and keep the data from being too far behind. I might have restored two log backups within 45 minutes, but maybe those log backups were just taken one minute apart– so if it just measured whether I’d restore a backup then it would be very hard to know what it meant.

      It sounds like you’ve tested the jobs and understand how they work. I wouldn’t just assume that you should up the threshold though– you might want to increase the frequency of your log backups, copy jobs, and restore times depending on your RPO and RTO. (More info on that here).

      Hope this helps!
      Kendra

      Reply
  • Hi Guys,
    If a restore_delay is used and a situation occurs where the primary server is lost and the secondary is being brought into service, what is the correct way to reset restore_delay to 0 to cause the log shipping restore job to immediately apply all available log backups to the secondary database please? We run a delay to give us time to react to issues on the primary before they are applied to the secondary.

    I looked through the msdb log shipping related sps and I’m not sure I can see anything appropriate. Would a simple update statement for restore_delay into msdb.dbo.log_shipping_secondary_databases be sufficient and safe or is there a better way please?

    Thanks for all your great community support,
    Mart

    Reply
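
    A note for anyone else needing this: rather than updating msdb.dbo.log_shipping_secondary_databases directly, there is a configuration procedure that accepts the delay; a sketch, with a placeholder database name:

        -- Run on the secondary, then start the LSRestore job to apply
        -- all copied log backups immediately.
        EXEC master.dbo.sp_change_log_shipping_secondary_database
            @secondary_database = N'YourDatabase',
            @restore_delay = 0;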
  • Julian Orange
    February 9, 2014 9:50 pm

    Hi Jes & Brent, Thanks for the great info and videos.
    I am quite new to MSSQL and have just set up my first log shipping to use for DR purposes. I was surprised to see that on the secondary (read only) copy of the DB if I show the file location it displays the info that is correct for the live copy and not the RO copy. The following command returns the correct location but viewing DB_name \ Properties \ Files with the mgmt studio shows the paths of the primary DB.

    Can you explain this? Do I need to fix that if the secondary_RO is ever bought into production?

    This is the SQL that shows the correct paths.
    SELECT name, physical_name AS CurrentLocation, state_desc FROM sys.master_files

    Thanks

    Reply
  • Hello,

    I have just configured log shipping and the wizard is currently backing up and will restore the database within a few hours onto my secondary server. Will changes on the primary servers database during the backup and during the restore be reflected in the secondary database after the wizard completes? Don’t want to miss anything!

    Thanks much!

    Reply
    • Jes Schultz Borland
      February 20, 2014 8:24 am

      The changes to the database will be captured in the transaction log, so they will be in the log backups. After your full backup is restored, apply all log backups that have been taken since it began, and you’ll be caught up.

      Reply
  • Hi Jes/Brent,
    We have configured log shipping with the primary as a production server and the secondary as a DR server, and
    it was all running well until yesterday morning, when the restore job on the secondary server started failing. The error is: “Executed as user: User_Name. The log shipping destination Server_Name.DB_Name is out of sync by 135 minutes. [SQLSTATE 42000] (Error 14421). The step failed.” What could be the reason? Meanwhile, I have disabled the restore job on my DR server, as I couldn’t get any help on this issue.
    I wish you people would resolve my issue, please.

    Reply
    • Krishna – solving this one is a little beyond what we can do in a blog post comment. We would need to see the exact error messages.

      Here’s a few things to consider – has security changed on the servers? Were any of the log backups accidentally deleted? Did someone take a log backup manually and put it somewhere else?

      Hope that helps!
      Brent

      Reply
  • Jeremie Grund
    May 1, 2014 11:13 pm

    Hey Everyone,

    I had a quick question: when setting up log shipping for multiple databases, is it a best practice for the log backups to all be stored in separate shares per database? Or is it possible to back up all logs to a single folder, with SQL Server knowing which belongs to which database?

    Similar question with the agent jobs, does there have to be one log backup job per database?

    Thanks!

    Jeremie Grund

    Reply
    • Jes Schultz Borland
      May 5, 2014 9:16 am

      When you set up log shipping, SQL Server will create separate Agent jobs for each database. However, the files can all go to one folder – there will be different names for each database’s log backups.

      Reply
  • Log shipping setup – do the primary site and DR site storage drives need to have the same names?

    Reply
    • Jes Schultz Borland
      May 27, 2014 3:59 pm

      It’s not a requirement. However, for many, many reasons, having the same drives on all SQL Servers in your environment is easiest – always having a D:, L:, and T:, for example, and backing up to the same drive or share on each server. Consistency is key!

      Reply
  • I just configured Log Shipping and wanted to test the setup to verify data integrity. For this reason, I selected Standby Mode as the database state so I could read the database and check the row updates. When I make a change on the Primary, the restore job fails on the Secondary with the message that our TUF file is not a valid undo file for the database – verify the file path and specify the correct file. During Log Shipping configuration, you never specifically enter information for this particular file. Does anyone have suggestions on how to resolve this error? We will not use Standby mode in production, but I wanted to use it for my testing. We are using SQL Server 2005. Thanks for your time and assistance!

    Reply
    • Jes Schultz Borland
      July 7, 2014 4:01 pm

      It’s been a while since I’ve seen that error, but check the permissions on the folder the logs are saved to. Per this post, you may not have the right permissions.

      Reply
  • Thank you for that suggestion. I will look into that further.

    Reply
  • Is there a way to enable compression after the Log Shipping process has been already setup and running?

    Reply
    • Jes Schultz Borland
      August 21, 2014 7:23 am

      Yes! If you go to the database properties and open the transaction log shipping page, then open Backup Settings, you can pick the backup compression level there. The change will be made the next time a log backup is taken.
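
      If you prefer to script the change, the log shipping configuration procedure on the primary takes a compression setting too; a sketch, with a placeholder database name:

          -- 0 = disable, 1 = enable, 2 = use the server-level default.
          EXEC master.dbo.sp_change_log_shipping_primary_database
              @database = N'YourDatabase',
              @backup_compression = 1;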

      Reply
  • Hi Brent, Jes, your posts are awesome; the way you explain things is very useful for understanding.

    I have a doubt: what is the log shipping .exe file in log shipping? What work does it do? If it is missing, will log shipping work? How do you recover the .exe file?

    Reply
    • Jes Schultz Borland
      August 22, 2014 10:49 am

      Thanks Raj – I’m glad we can help!

      sqllogship.exe is the application that runs log shipping. If you were to set up log shipping and look at each of the created jobs, you’ll notice they call sqllogship.exe with a set of parameters. More information on it can be found at http://msdn.microsoft.com/en-us/library/bb283327.aspx. What happens if you delete the .exe? I’ve never done that, but my guess is that all log shipping set up on that server starts to fail. You could try to copy the file from another SQL Server instance into the correct directory, but I don’t know if that will make it work.

      Reply
      • Thanks for helping Jes,

        Are sqllogship.exe and logshipping.wrk the same file or not? I have big confusion about it. If they are different, what is the use of the .wrk file in log shipping?

        Reply
  • Jess,

    We are in the process of downgrading all our DR servers involved in log shipping due to licensing restrictions. The only way to downgrade the instance is to uninstall and reinstall.
    Now, the DBs are quite large, and we would like to avoid re-initializing the entire log shipping setup from scratch.

    Is there a way to downgrade the SQL instance and yet not break any log shipping chain?

    Thanks,
    Anand A

    Reply
    • Jes Schultz Borland
      August 26, 2014 7:48 am

      That’s an interesting question. I’ve never tried this! You should head over to http://dba.stackexchange.com/ and post this question – tons of smart people are there constantly, and I’m sure someone has encountered this situation.

      Reply
  • Hi Jes Schultz Borland
    My undo file was missing when my cluster moved to the second node, as my database was showing restoring\standby.

    My question is: while the database was being put into standby, my cluster got moved, and after that it was not restoring any further logs, as my TUF file was missing.

    Was it automatically deleted, or did it never get generated? The database was showing restoring/standby in Management Studio.
    Here I think the undo file was generated, but not completely.

    Please explain

    Reply
    • MFK – if your undo file was missing, it was placed on storage that wasn’t visible to both nodes. You’ll want to start by using the Cluster Validation Wizard during your next maintenance window. It’ll take a few minutes to run, and it will produce a report showing problems with the cluster (like storage that isn’t available to all nodes.) From there you can fix the root cause of the issue.

      It’s not unusual to find LOTS of problems on a cluster that you’re validating for the first time – take heart that you’re not the only person in this situation, heh. Once you go through it the first time, then you’ll make sure your clusters are properly validated BEFORE SQL Server gets installed, and it’ll make problems like this go away.

      Reply
      • Also, there are corruption errors in the system event logs, and the cluster service failed.
        Can I say the reason might be a storage issue, where a bad sector occurred and the file got deleted?

        Reply
      • Hi Brent … the storage team says there is no way a file can be hidden when the cluster moves.

        Reply
        • MFK – wow, I’ve never heard storage talk before! 😉

          There is, actually. If the file is saved on local storage (think C drive), then when the clustered instance fails over, the file that was on the C drive doesn’t move over to the other server’s C drive.

          Reply
          • The file is not saved on the local drive, so how do I confirm to the storage team that the file went missing on the storage? They say it is not possible for a file to be lost on the drive when the cluster moves, because the storage would never do that.

          • MFK – unfortunately troubleshooting this is beyond something we can do in a blog post comment.

  • As part of our testing plan, I recently configured log shipping in a production to DR environment using our largest database to test the timing intervals. While the configuration was successful, the restore job failed because we needed to set up our encryption key on the DR server. After setting up the appropriate key on the DR server, do we need to re-configure Log Shipping again from Primary to Secondary? Or will the Restore job automatically pick up the change. Having said that, if it’s been a few days between noticing the error and applying the fix, will log shipping fail anyway because of the backlog?

    Reply
    • Jes Schultz Borland
      September 2, 2014 8:18 am

      The log backups would already have been encrypted, so they should be applied after the key is restored. Log shipping will not fail because of the backlog, but it could take a long time to catch up – especially if you have frequent log backups. You could take a differential backup of the production database and restore it to the secondary, then re-enable the log shipping jobs, to speed up the restore process.
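
      A sketch of that catch-up, with placeholder names – note the differential must be based on the full backup the secondary was initialized from, and the secondary has to still be in NORECOVERY or STANDBY:

          -- On the primary:
          BACKUP DATABASE YourDatabase
            TO DISK = N'\\backupshare\YourDatabase_diff.bak'
            WITH DIFFERENTIAL;

          -- On the secondary (then re-enable the LSCopy/LSRestore jobs):
          RESTORE DATABASE YourDatabase
            FROM DISK = N'\\backupshare\YourDatabase_diff.bak'
            WITH NORECOVERY;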

      Reply
  • Is it common for Log Shipping removal to take over 19 hours for a database that is 21,662,271 KB in size? Could that be enhanced with changes to Network bandwidth settings? We are running tests for our timing before implementing production. The initial configuration took approximately 12 hours, so I wasn’t expecting the removal of Log Shipping to take this long. Thanks!

    Reply
    • Jes Schultz Borland
      September 2, 2014 12:47 pm

      No, I’ve never seen it take more than a minute. Use a tool like sp_who2 or sp_WhoIsActive to check if there are any active connections to that database, and close any you find. You may also want to use a tool like sp_AskBrent to figure out what is going on while it’s being removed. If you’re using the GUI to attempt removal, I suggest the stored procedures instead.

      Reply
  • We have been conducting planned failover testing in our production/dr environment. Upon setting up our Secondary as the Primary and trying to log ship to the former Primary/now Secondary, we receive the following error:

    The media family on device is incorrectly formed. SQL Server cannot process this media family. RESTORE FILELIST is terminating abnormally. (Microsoft SQL Server, Error: 3241)

    I’ve confirmed with our head DBA that our SQL Server versions are the same across the servers where we have implemented log shipping. The file doesn’t appear to be corrupt, as it does ship from the initial primary to secondary. It just fails with the above error when we try to reverse the direction.

    Could this be related to encryption settings? Since we’ve verified the SQL Server versions and file integrity, I wasn’t sure what other items to consider in our troubleshooting efforts.

    Thank you for your time and assistance.

    Reply
  • Thank you for that information. Our head dba actually revisited and re-configured encryption on each of the servers. We seem to be able to at least configure log shipping in one direction between all servers now.

    I did revisit a planned failover test, but after role reversal, I checked the files from the backup/copy/restore on the new Primary and Secondary servers and continue to receive a Primary Key Violation when trying to view the Transaction Log Shipping Status report. I’ve encountered this before, but it seems to be happening every time.

    Am I missing a step that would prevent this violation from occurring? I do not see duplicate or inaccurate entries on the log shipping monitor tables.

    I should note that we are using SQL Server 2005 and I read where this issue was corrected in later versions. Could this be part of our problem?

    Thanks so much for your help.

    Reply
    • K_ps – I’d run a Profiler trace while you run the log shipping status report, and that way you’ll be able to get the exact SELECT statement that generates the error. From there you can troubleshoot the duplicates manually.

      Reply
  • In our log shipping configuration, we have setup the Recovery mode for the Secondary server to be Standby, Disconnect users …

    During Planned Failover testing, after executing role reversal and returning to the original A to B configuration, I have been executing a Backup with NoRecovery on the Primary Server which I am preparing to revert back to the Secondary. That leaves the database in a Restoring state. When initially configured with the Standby/Disconnect users selection, the database reflects Standby/Read Only. I see where there is a Standby argument for the Backup command. Would I use this to revert back to that original configuration? I wasn’t sure what the Standby file name would be referring to.

    Also, we currently have Replication on our databases. I see that you can implement Log Shipping with Replication. Are there any settings required when implementing log shipping with replication? I have not been able to find anything specific in this regard.

    (I am aware that a problem occurs in that replication does not continue after a log shipping failover.)

    Thank you again for your assistance.

    Reply
  • Update to post:

    After finally completing our first successful planned failover test, I learned something in answer to the previous post. After manually running the backup, copy, and restore jobs, the database that was previously in a Restoring State returned to the log shipping setting of Standby/Read Only.

    Still confirming Replication settings, but I wanted to follow up with that update.

    Thanks again for your assistance.

    Reply
    • Jes Schultz Borland
      September 22, 2014 2:51 pm

      “Standby” is not a mode a user can put the database in. As you’ve seen, the database will revert to that state when restores are complete.

      I haven’t combined log shipping and replication, but yes, it can be done. Start here for more information: http://msdn.microsoft.com/en-us/library/ms151224.aspx.

      Reply
      • In my experience, a user CAN put a database in “Standby” – it’s the step I use when I set up my initial log shipping database.
        After “RESTORE WITH NORECOVERY”, you can use:
        RESTORE DATABASE [xxx]
        WITH STANDBY = N'E:\path\UNDODB.BAK'
        and it will create the “UNDODB.BAK” undo file automatically for you – no existing “UNDODB.BAK” is needed.
        Then the DB will be in “STANDBY/Read-Only” status.

        Reply
  • Does the initial FULL backup that is done to initiate log shipping have to be a non-COPY_ONLY backup?

    I’ve been tasked with getting log shipping set up on 40+ databases and trying to automate the process as much as possible, but we also have CommVault doing FULL/DIFF/TLOG backups (I know I need to stop the TLOG backups if I want to keep my sanity) and I’m trying to figure out what I can/can’t do.

    My plan right now is to do normal, non-COPY_ONLY backups of all DBs, get log shipping going/working, and then after that do a FULL backup again in CommVault (and turn its normal FULL/DIFF backups back on). Any comments on whether I’m on the right track?

    Reply
  • Three databases log shipped to Standby/ReadOnly databases on separate Warehouse server; two are CRM/ERP and the third has no ongoing transactions. Backups occur every 15 minutes and are restored hourly; TRN file sizes range from 5K-7K KB on one, to 200KB on the other to 4KB on the third. *All* files take 30 seconds to process; 9 of those are consistently in the step “Recovery completed for database X (database ID Y) in 9 second(s)…” (using trace flags 3004, 3605 to get data to logs). Analysis, redo, undo values differ, from (48,0,57) to (96,0,204); LSN differences from 0 to 37,743. (TUF sizes from 88KB to 1,376KB). All of the files are on the SAN, albeit in different locations; the TUF files are in the Data folder for the Warehouse destination. VLFs from 17 (avgsize 812KB) to 491 (29,658) and 471 (36,941); only 1 active for all 3 DBs when I ran the query.

    SQL 2008R2 (SP1), 350GB RAM, 24 cores, plenty of hardware; network at 10GB. What am I missing?

    Any insights on next steps I might take, or information I might gather, to reduce the file process time per file?

    Reply
  • We recently configured our 5th database for log shipping in production, using SQL Server 2005. The process took about 12 hours, which was consistent with our testing. The size of the .bak file was 27,714,601 KB. Upon completion, the first log backup job kicked off and kept running. After two hours, the .trn file size had grown to 34,809,167 KB. I removed the log shipping configuration and am troubleshooting why the backup job never completes. We’ve run numerous tests and also have 4 databases successfully implemented in Production with Log Shipping, but we have not run into this problem yet. Did we miss something? The details in the log shipping configuration for this particular database are the same as those used in the other log shipping configurations (same network share folder, same timing, same copied folder location, etc.)

    If you can provide any guidance on where I could troubleshoot this further, I would greatly appreciate it. As always, thanks for your time and input.

    Reply
    • Jes Schultz Borland
      October 30, 2014 1:17 pm

      How large is your log file? How large are transaction log files, in general?

      A trick may be to initialize your log shipping from a full backup, take and restore a differential backup, then take and restore the log backups.
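
      As a rough sketch, that catch-up sequence on the secondary would look like this (the database name and paths are placeholders):

      RESTORE DATABASE [BigDB] FROM DISK = N'\\share\BigDB_full.bak' WITH NORECOVERY;
      RESTORE DATABASE [BigDB] FROM DISK = N'\\share\BigDB_diff.bak' WITH NORECOVERY;
      -- then let the log shipping restore job apply the subsequent .trn files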

      Reply
  • Thank you for following up.

    The sizes of our production log files currently being log shipped range anywhere from 192 KB to 1,604,406 KB after heavy processing at night.

    This particular database with the never ending log backup is one of our largest databases with the heaviest transaction load. If the restore from the initial log shipping configuration takes over 12 hours, then the log has built up over that time since the full backup was restored. I suppose that may explain why the first log backup job is running so long. I was just surprised that the size of the first log file was exceeding the size of the full backup file.

    Reply
    • K_ps – there you go! I’ve seen similar problems with surprisingly large differential backups when folks didn’t realize just how much they were changing the database with full index rebuilds.

      Reply
  • Jes,

    I liked your idea of taking and restoring a differential after the full backup and then kicking off the log backup jobs. I configured log shipping for this database on Sunday, during low transaction load hours, and had planned on catching the end of the configuration process so I could take the differential before that first log backup ran. Unfortunately, the restore ran 3 hours longer than anticipated and I missed the window. As a result, the first transaction log backup job to kick off after successful completion of log shipping configuration has been running for 10 hours with a log file size of 229,603,143 KB. Would it be possible to delete, disable, or kill this job, take the differential and restore it to secondary, then re-enable the log shipping backup job? That shouldn’t break the log chain but I didn’t know if I was missing any potential for data loss.

    Thanks so much for your input.

    Reply
  • Hi Jes,

    I wanted to follow up on our runaway transaction log backup job. I tried the approach with the differential tonight as per the steps below:

    7:46 am – log shipping configuration kicks off
    9:32 am – backup file is stored in network share folder. File size is 26.1 GB
    9:30 pm – Log shipping configuration completes.
    – I disable the log shipping backup, copy, and restore jobs
    9:31 pm – I enter command to backup database with differential
    (In this query window where I typed the command to backup with differential, I still had the “use dbname” command and I believe it should have said “use master.” Would that have caused a problem?)
    9:33 pm – Differential completes with a file size of 768 MB.
    – I re-enable the backup and copy jobs to get that process moving along after the differential
    – I copy the differential file to the secondary location
    9:45 pm – The first transaction log backup job kicks off
    9:59 pm – After the Differential file is copied, I restore the database on Secondary using the differential
    (In this query window, I did have the “use master” command prior to the restore command.)
    11:02 pm – The restore is still running
    – The transaction log backup job that was created at 9:45 pm is still running, with a file size of 28 GB and still growing.

    I had thought that by taking the differential, the transaction log backup job would not run as long as it wouldn’t have to capture all of the data from the time of the full backup to the time of the differential. Am I missing something in this process? I am rather new and making plenty of mistakes, so it’s possible. I’ve tried searching for other examples of this scenario but have been unable to find anything to date.

    If you have any other pointers, I would greatly appreciate it. I am wondering if we need to consider a separate DR solution for this database.

    Thank you for your time.

    Reply
    • Jes Schultz Borland
      November 10, 2014 9:16 am

      This is becoming far more involved than can be answered with a blog post comment. I’d suggest you post the info at stackoverflow.com, where people may be able to help more quickly, or contact us at help@brentozar.com to discuss an engagement!

      Reply
  • Thank you for providing that direction. I will try that out.

    Reply
  • Can you set up log shipping between SQL Server 2005 (Enterprise) and 2012 (Standard)? Can you use any combination of editions/versions as long as you are log shipping from a lower version to a higher version? Thanks!

    Reply
  • Jes,

    Thanks for the great article!! Quick question – can you have transaction log and log shipping turned on at the same time on a database server?

    Thanks

    Reply
  • A typo on my previous post –

    Jes,

    Thanks for the great article!! Quick question – can you have transaction log back up and log shipping turned on at the same time for a database server?

    Thanks

    Reply
    • Jes Schultz Borland
      December 8, 2014 3:29 pm

      If you had a log backup job set up and a log shipping job set up, both would run. The problem, however, would be restores. Log backups are sequential – they contain an LSN that tells SQL Server what transactions are in them, and what order to restore them in. Thus, the log shipping restore job on your secondary would fail because it’s “missing” a log backup.

      If you have another procedure or task that already takes log backups, you could roll your own log shipping.
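
      As a minimal sketch, you can inspect the backup chain – and spot any “missing” log backups – in msdb (the database name is a placeholder):

      SELECT database_name, type, first_lsn, last_lsn, backup_finish_date
      FROM msdb.dbo.backupset
      WHERE database_name = 'YourDatabase'
      ORDER BY backup_finish_date;
      -- type: D = full, I = differential, L = log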

      Reply
  • We are in the process of upgrading and migrating an application. It is currently running on SQL Server 2008R2, Standard Edition. The new database will be SQL Server 2012 SP2, Enterprise Edition with AlwaysOn Availability Groups. To limit the downtime for the migration, it has been requested that I set up log shipping from the old to the new, then stop that and create the AlwaysOn on the new at cutover time.
    I see that it is possible to log ship from SS 2008 R2 to 2012, which is exactly what we want to do. Is there an issue going from Standard Edition to Enterprise Edition?

    Reply
    • Jes Schultz Borland
      December 12, 2014 3:39 pm

      Scott, that should be successful, but as always, test test test! Trying to log ship from Enterprise to Standard could be an issue, if you had Enterprise features enabled.

      Reply
  • Hi Jes\Brent,

    We have a huge database (3 TB), and its data files have 1.5 TB of free (unused) space. We have to release that free space back to the disks, so we are planning to shrink the data files of the log shipped database.
    My questions are: if we perform the shrink operation on the primary server, do those changes automatically reflect on the secondary server as well, and does this operation break log shipping, given that the database is part of log shipping? I know we can shrink the log file without any issues, but I have not tried it for data files.
    Can you please explain how to perform a data file shrink operation, as per best practices, when that database is part of log shipping.

    Could you please answer the above questions? Thanks in advance, and have a great day.

    Reply
    • Jes Schultz Borland
      January 21, 2015 1:58 pm

      I haven’t tried to shrink the data files of a database involved in log shipping. Shrink operations are logged, though, so the shrink should be replayed on the secondary as the log backups are restored – it shouldn’t break log shipping, but expect larger log backups while it runs.

      Reply
      • Thanks, Jes, for your reply. What about the data files of the secondary database? Are they shrunk automatically if we perform the shrink on the primary server?

        Reply
        • Jes Schultz Borland
          January 22, 2015 1:43 pm

          Log shipping only moves logged transactions between the two servers – no settings, no jobs, no users. Data file operations such as shrinks are logged, however, so a shrink on the primary will be replayed against the secondary’s data file(s) as the log backups are restored.
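
          For illustration, a data file shrink on the primary is just the usual command (a minimal sketch – the database and logical file names are placeholders); because it is logged, the restore job will replay it on the secondary:

          USE [YourDatabase];
          DBCC SHRINKFILE (N'YourDataFile', 500000); -- target size in MB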

          Reply
  • Question: what is the “DBA’s preferred” alternative to sqllogship.exe? (For indexing, Ola’s script is king… what is king for log shipping?)

    I find the opacity of sqllogship to be a problem, and I need to use various copy methods to different destinations. It would be awesome if sqllogship were open source, or if there were a PowerShell drop-in replacement.

    What is the preferred tool set? (There seem to be a few, but I can’t tell which one is popular/road-proven.)

    Reply
    • Sam – typically people use third party apps like Dell LiteSpeed, Idera SQLsafe, or Red Gate SQLbackup. You get better log shipping, plus all kinds of cool backup compression and encryption options.

      Reply
      • I used redgate sql backup for years for log shipping. Loved it. But now SQL 2014 (which I am on) has backup compression and encryption native (in standard edition, yeah!).

        Also, I need more control over how the logs move than those tools provide. Also, fwiw, I have experienced hard-to-diagnose problems when using redgate.

        The right set of powershell scripts would totally do the job. Is there such a kit that has a decent track record?

        Reply
      • Research is in progress. Status so far: Idera SQLsafe currently has zero support for backup or transport through the cloud. (SQL 2014 has native support for Azure blob storage, so that is a pretty big strike against SQLsafe.)

        Reply
        • Sam – correct, I haven’t seen good vendor support for cloud APIs because they change so fast. Instead, what you do for that is to land the backups onto a file share, and use a file share sync tool to push the data up to the cloud.

          Reply
          • Yes and no. SQL 2014 backup to Azure blob storage works great (I ran it for a month). But there is no native ability to log ship over Azure (and, oddly, an MSDN post by a softie showing a proof of concept of log shipping over Azure was removed by MS a couple weeks ago). The biggest issue here is that you have to roll the whole solution yourself (job tracking, monitoring, etc.) because sqllogship.exe is closed source.
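
            For reference, the native SQL 2014 backup to Azure blob storage that I ran looks roughly like this (a minimal sketch – the URL and credential name are placeholders, and the credential must already exist):

            BACKUP LOG [YourDatabase]
            TO URL = N'https://youraccount.blob.core.windows.net/backups/YourDatabase_log.trn'
            WITH CREDENTIAL = N'AzureBackupCredential', COMPRESSION;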

  • philliam Mwesigwa
    January 30, 2015 1:37 am

    I am using merge replication from my production server to the DR server. But how can I keep the DR server in a state where users cannot make changes to it while the primary server is up and running?

    Reply
  • We have a 1.8 TB DB (that is growing rapidly every day). We currently have Transactional Log Shipping (TLS) set up to a secondary server (no failure on that side yet). We are taking 1 weekly Full backup, then daily Differentials, and then we have TLS set to every 5 minutes. We’ve never had an issue with TLS. Yet when a user requested a restore of the full database to another location, we restored a full backup, tried to restore the last Diff we needed, and got a broken LSN error. We then tried all of the Diffs for that week with no success. Yet when I started to restore from those hundreds of TLS files, we did not receive an error. We researched the BackupSet table in msdb and can’t find any LSN issues there; the chain seems to be correct.
    A coworker thinks that the TLS occurring while the Full or Differential is running may be causing this break, but we cannot find anything saying so. We are also wondering if, because the TLS and Diff/Full times overlap (due to the tremendous size of the DB and the data churn – we do not run Diffs on the same day we run Fulls), the LSNs are being captured in a weird order.
    Should TLS be running during Full and Diff backups?
    Has anyone else heard of anything like this before?

    Reply
    • Kendra Little
      February 3, 2015 2:25 pm

      Running log backups while a full backup or a differential backup is running is fully supported.

      What you are seeing is consistent with an extra full backup having been run after your weekly full, and that “extra” backup reset the differential base. This frequently happens if someone takes a backup to refresh a dev environment without telling anyone. Sometimes there is an enterprise level tool that sneaks in and runs a backup used for disaster recovery, too.

      Basically, if you are relying on differential backups, any extra full backups (that are run without the “copy_only” flag) will render your diffs unusable. The log shipping is a red herring, it’s not part of the issue.
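
      A minimal sketch for spotting the culprit in msdb (the database name is a placeholder):

      SELECT database_name, backup_start_date, is_copy_only, differential_base_guid
      FROM msdb.dbo.backupset
      WHERE database_name = 'YourDatabase' AND type = 'D' -- D = full backup
      ORDER BY backup_start_date;
      -- any non-copy_only full backup listed here reset your differential base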

      Reply
  • What process or service locks the WRK file when log shipping is running? It tells me that “The action can’t be completed because the file is open in another program.” I just want to delete it – it’s old – and I can’t remember where I can kill a process that will allow me to delete it. I already did another backup on the primary and restored it on the secondary… all is running fine, no alerts.

    Reply
    • Jes Schultz Borland
      February 4, 2015 7:08 am

      The .wrk file is created when a log backup file is in the process of being copied from the backup location to the restore location. Did you have a failed copy job?

      Reply
      • I understand what the file is for and how it works. Yes, I had a failed copy. I fixed that problem, but the file is still “in use”… I can’t remember which process I canceled that allowed me to delete the .wrk file.

        Reply
  • I have read through the above blog and thought I would describe our situation to see if anyone else has seen the same setup. I am planning on using copy-only full backups that will be taken on a regular basis using SAN snapshots. I would also like to have some of the databases use log shipping to a local DB server for redundancy, and also to a tertiary server in our DR site. Would the use of copy-only backups via SAN snapshots disrupt the log shipping setup?

    Overall I’m looking for an efficient way to use SAN snapshots for backup and short-term retention, and use log shipping for HA and point-in-time recovery. AlwaysOn Availability Groups are out of the question due to licensing, unfortunately.

    Thanks, Pete

    Reply
    • Kendra Little
      February 5, 2015 9:02 pm

      Hi Pete,

      If you only ever do copy_only full backups, it will never reset your differential base. That won’t break log shipping, but it might mean that you can’t use a differential backup periodically to “catch-up” log shipping if you need to, like I describe here: https://www.brentozar.com/archive/2014/09/log-shipping-magic-using-differential-backup-delayed-secondary/
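
      For reference, a copy_only full backup is just the normal backup command plus the flag – a minimal sketch with placeholder names:

      BACKUP DATABASE [YourDatabase]
      TO DISK = N'\\share\YourDatabase_copyonly.bak'
      WITH COPY_ONLY, COMPRESSION;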

      Can I ask why copy_only full backups? Do you have other full backups that will also run, and you want those to reset the differential base?

      Kendra

      Reply
      • Hi Kendra, The copy-only backups are taken via LUN snapshots on the SAN. We run another maintenance plan taking fulls and logs (logs will obviously have to be removed from the maintenance plan to implement TLS). The SAN snapshots are taken so we can quickly refresh UAT and DEV environments.

        Reply
  • We have three SQL Server instances on 3 different servers. Call them A, B, and C.

    A is dedicated to a critical application; it log ships to B (as the secondary).

    Now they want to do log shipping for C, which is dedicated to another critical application.

    The bean counters are asking why I can’t use A or B as the secondary.

    Aside from it not seeming like a very good idea (let alone the fact that we shouldn’t be using a poor-man’s solution like log-shipping for databases this important in the first place) is there a valid technical reason why I cannot?

    Reply
    • Jes Schultz Borland
      February 17, 2015 8:36 am

      There is no technical reason not to use B as the secondary for both A and C. You want to carefully schedule the copy and restore jobs, but if you do that, there should be no problem.

      Log shipping is a valid HA and DR solution as long as it meets the RPO and RTO for the business. I don’t feel it’s a poor-man’s solution – it’s one of the most well-established and reliable solutions in SQL Server!

      Reply
    • Jes is right. It isn’t a poor man’s DR. You have to remember DBAs are a service-oriented position. We are a cost to the company – a needed cost, but to the financial side of the company we are dollar figures flowing out of the company’s profits. All of infrastructure needs to squeeze as much work out of a server as possible; otherwise we are wasting a lot of money. So why can’t you host multiple servers’ DBs on the same box and save some money, instead of having perfectly good servers go to waste? 🙂
      I use log shipping to my DR site because in a real DR scenario (where the primary site goes completely down) I can get a 5-minute RPO with a 30-minute RTO (depending on the time of day). In agreeing to load many servers’ databases onto one or two servers (and using a few strategically placed SQL aliases), I have also managed expectations. The systems will be up and running in 30 minutes, but they are going to run really slowly until we can bring new hardware in and migrate the databases onto other servers (using what, you may ask? Log shipping). The business leaders have signed off on this (via my DR plans and within emails), so I am covered in case complaints occur during DR. It means more long hours for us, but in the end we are saving the company a lot of money by working with the tools that MS provides and keeping our systems (especially our backup ones) to a minimal footprint… 🙂

      Reply
  • I seem to have discovered something on SQL 2014 SE: Log shipping supports backup compression (which is native to SQL 2014 SE), but does NOT appear to support backup encryption (which is also native to SQL 2014 SE). Here I thought I was finally free of redgate, but it appears I yet wear chains.

    Reply
    • Jes Schultz Borland
      February 17, 2015 8:42 am

      Backup encryption is available in Standard, Enterprise, and BI editions for SQL Server 2014. As far as I can tell, it’s supported with log shipping – but I haven’t set it up. In order for it to work, the encryption cert must be restored on all the secondaries.

      Reply
      • Jes,
        Yes, you are correct. The certs must be on both the source and destination systems for encryption to work.
        The point is that when you write the log backup job in T-SQL, it is only encrypted (afaik) if you add the WITH ENCRYPTION option. When you use the log shipping wizard, there seems to be no checkbox to enable encryption (or to choose the cert… as you may have more than one…). So, via the wizard, it seems that encrypted log shipping is not available.
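
        For reference, the manual T-SQL version looks roughly like this (a minimal sketch – the database, path, and certificate names are placeholders, and the cert must already exist in master):

        BACKUP LOG [YourDatabase]
        TO DISK = N'\\share\YourDatabase_log.trn'
        WITH COMPRESSION,
        ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);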

        Reply
        • Jes Schultz Borland
          February 23, 2015 3:23 pm

          Correct – there is no option for using encryption in the log shipping wizard. I don’t see an option to set that using the stored procedures, either. Might be worth filing a bug on Connect!

          Reply
  • Harold Eugenio
    February 23, 2015 4:22 am

    Hi,

    I have a situation here. I need to bring a standby DB online and then put it back to standby again for a DR exercise, without switching roles with the production DB. Is this possible without using a full backup of the production DB to reinitialize log shipping?

    Reply
    • Jes Schultz Borland
      February 23, 2015 3:25 pm

      You could try taking a differential backup of the production database to re-initialize with. Kendra talks about that (for a slightly different scenario) in this blog post.

      Reply
  • Excellent picture 🙂

    Reply
  • nathanayyanar
    March 8, 2015 9:57 am

    hi,

    I manually copied the .trn file to the secondary because I am getting an error like “no system resource to do this operation” when lscopy is triggered. My prod log file is 11 GB in size. After I copied the log file from prod to secondary, how will lscopy identify that the file was copied to the secondary, even though that was not done by the lscopy job?

    Thanks in advance

    Reply
    • Nathanayyanar – this is one of those great times where I’d suggest trying it to find out. It’s often better to just run an experiment in your own environment rather than waiting for someone else to let you know.

      Reply
  • Dear sir please tell me,
    (i) Log shipping is running, and later a user adds a log file to the same database on the primary server. Both servers have the same configuration. What will happen?

    (ii) I have to configure log shipping for 3 databases. How many backup and copy jobs will be created?

    Reply
    • Satya – why not give it a shot? Go ahead and run an experiment yourself and you’ll learn a lot about log shipping. Getting your hands dirty is the best way to learn answers like this.

      Reply
    • Brent’s right, log shipping is one of those things that you can get your hands dirty testing dozens of times and not break anything. Find a small DEV database and have a blast.
      As to the answer to number two, if you use the default log shipping setup (you do not pre-build the database in the new location) in SSMS then the following will happen:
      1) One BAK file will be created in your log storage location (which you setup in your LS configuration)
      2) While that BAK file is being restored to the secondary server, TRN files will be created in the background (note make sure to turn off LOG backups on this database on the first server no matter what, or you will be looking for them all the time).
      3) After the database has restored, the next timed restore point (that you setup in your initial log shipping config) will then begin running through all of those TRN files (depending on the initial time).
      So the final answer is (if you use the SSMS LS config to set your initial restore point) 1 BAK file per database and as many TRN files as it takes for the initial restore to occur.
      Depending on the size of the database and the network speed, you may want to use fulls and differentials in conjunction with log files (leaving the database in “NORECOVERY” mode), and then set up the LS in SSMS using the pre-restored DB. A good example: we have a 2TB beast of a database that I have done this with. It takes 3-4 hours to back up, 6-8 hours for the initial restore, and it crosses a 1 Tera-bit per second pipe (which comes out to about 2 hours to transfer the file, depending on the network traffic). So in my case this becomes a two-day, many-file event to set up the initial restore copy on the other server. If set up correctly, the LS config setup takes about ten minutes…
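
      As a minimal sketch, striping the backup across multiple files just means listing multiple destinations (the database name and paths are placeholders):

      BACKUP DATABASE [BigDB]
      TO DISK = N'\\share\BigDB_1.bak', DISK = N'\\share\BigDB_2.bak',
         DISK = N'\\share\BigDB_3.bak', DISK = N'\\share\BigDB_4.bak'
      WITH COMPRESSION;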

      Reply
      • And then I noticed the real question, which was about the jobs themselves… lol
        1 per DB for backup (on initial server)
        1 per DB for copy (on initial server)
        1 per DB for restore (on secondary server)
        1 for all for your LS Alert (on whichever server you setup as your monitoring server)
        So in your case 10 jobs will be created.
        I do find that the alerting is lacking, so I scripted an SP to show exactly which DB is having the issue, instead of just telling us something is wrong… 🙂

        Reply
  • Nathan ayyanar
    May 8, 2015 10:32 am

    Dear sir,

    My primary database ‘xyz’ has a secondary database ‘xyz’ on another instance with the same name, which is in sync and working fine. I tried to create one more secondary database with a different name, ‘xyz_new’, for the same primary database ‘xyz’ on the same secondary instance, giving different names for lscopy and lsrestore.

    The configuration completed successfully without any error message, but when I checked, no lscopy and lsrestore jobs had been created for the new secondary DB ‘xyz_new’.

    The destination paths for the lscopy for ‘xyz’ and ‘xyz_new’ on the secondary server are different.
    Can you please help?

    Reply
    • Hi! Unfortunately, troubleshooting this is a little beyond something we can do in a blog post comment. To keep things simple, though, I’d stick with the same paths on both servers. That way when you add a file to that database in the future, it won’t break log shipping.

      Reply
  • Yuval Menchik
    June 1, 2015 12:57 am

    We are using LS as our DR solution.
    The transaction log is backed up every 15 minutes, copied to the secondary server, and restored to a standby DB.

    Is there a way not to restore a trn file unless a newer trn already exists?
    I want to avoid a case where the primary DB is not reachable and the secondary has already restored the last trn as standby/read-only.

    Reply
  • 1) Is there any way to recover without breaking log shipping if the secondary server’s log file fills up (in case the drive is full)?
    2) Is there any way to recover without breaking log shipping if the secondary server’s data file fills up (in case the drive is full)?
    3) Can we put the secondary database in the simple recovery model?

    Reply
    • Senthil – by the scope of your questions, it sounds like you’re working with a secondary server that’s got much less capacity than the primary server. This isn’t something we’d recommend – database administration is hard enough as it is without worrying about drive space.

      Reply
  • Hi Brent,

    Could you please suggest the best method for my DR plan? Currently we have a set of databases (6) used for our application, and we are in the process of building a DR and HA plan. As we are a small business, we are using Standard Edition 2012 and are unable to make use of the AG feature. We have virtualized our servers and are using Hyper-V Replica for HA, and we are planning to set up a DR site using either log shipping or database mirroring, but not both. Could you please let me know the best option to proceed with, and the one with the least complexity to maintain, as we have to fail over 6 databases to support the application? We can afford downtime of more than an hour during weekends, and maximum data loss can be 15 to 30 minutes.

    Thanks for your help.

    Thanks,
    Ajay

    Reply
  • Mark Freeman
    July 9, 2015 3:17 pm

    Let’s say that I have databases Foo and Foo2 on Server Region1 which is log shipping to Server DR. I also have databases named Foo and Foo2 on Server Region2. I would like to log ship from Region2 to DR as well, but I’m concerned about the database name conflict.

    Is there a way to have the databases have a different name on the secondary than they do on the primary?

    If not, is there any workaround other than adding a new instance to the DR server to use as the secondary for Region2?

    I assume that either way I’d have to use separate folders on DR to avoid name conflicts between the files from each primary. True?

    Reply
    • Kendra Little
      July 10, 2015 8:26 am

      Hi Mark,

      I believe that I’ve done this before without any problem (having a different DB name on the secondary)– just manually restoring the secondary database with a different name and then setting up the jobs. It is possible that my memory is playing tricks on me, but I don’t think there would be any issues. Should be easy to test with a small canary database first. Of course, if you have to fail over, then your apps might have a problem with the different names.

      Kendra

      Reply
    • Yes you can. Log shipping doesn’t take the database file names into account during the actual log shipping. We have a database on each server that we use to capture server information on a daily basis. I log ship all of the production databases to the same server in our DR site so that they are all read-only. The database names are different on that server, yet are the same on all of the corresponding source servers. When you do your initial setup, restore a fresh copy of the database “WITH NORECOVERY” (so the database can still be restored to) using different database names (in your case “Region1_Foo1”, “Region1_Foo2”, “Region2_Foo1”, and “Region2_Foo2”) and file names that correspond with your new database names. When you go through the LS setup, point it to that database using the “No, the secondary database is initialized” option. Done – you are now log shipping both databases to the same server and can clearly see which database is from where. When you bring them active during a DR, it can be a little tricky depending on what your front-end application is like.
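
      For illustration, that initial restore with a new name might look like this (a minimal sketch – all names and paths are placeholders):

      RESTORE DATABASE [Region1_Foo]
      FROM DISK = N'\\share\Foo_full.bak'
      WITH NORECOVERY,
      MOVE N'Foo' TO N'D:\Data\Region1_Foo.mdf',
      MOVE N'Foo_log' TO N'L:\Logs\Region1_Foo_log.ldf';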

      Reply
      • It’s scary that I know this much about log shipping, but then of course I have broken it enough times to have learned many of the nuances. lol

        Reply
  • I set up a log shipping test database a few days ago. The database on the secondary server is in standby mode, as I’d like to be able to run queries against it. In standby mode I am unable to back up the transaction log, so my question is: at what point will the secondary server decide to clear the transaction log, or will the transaction log just continue to grow on the secondary server?

    Reply
  • My testing showed that log shipping from a lower version to a higher version (2008 to 2014 for example) does not work if you want the secondary database to be in STANDBY mode. Is that correct?

    Reply
  • In the Transaction Log Backup Settings there is an entry ‘Delete files older than:’. Currently I have this set to ‘1 Days(s)’ and this works just fine in most instances. However, in my situation, there is another backup that saves the file in this same directory. Thus the problem can be an occasional space issue which makes the 2nd job fail.

    So before I change this setting to reflect this space issue (i.e. ‘1 Minutes(s)’), I was wondering if this meant 1 minute after the file is created or more likely after the backup process is completed. Ideally I want this backup to be deleted the moment the restore portion of the Log Shipping process is completed. Unfortunately there is no option to set the entry at ‘0 Days’.

    Reply
    • Brian – step back a second and think about more complex environments where you have multiple log shipping subscribers. It’s impossible to know when the last restore completed – especially if you’re restoring them through other products. If you want the log backups deleted when the restore finishes successfully, you’ll need to roll your own code in the restore job to do that.

      I wouldn’t recommend that though – wouldn’t you want to be able to restore to a point in time when someone drops a table?

      Reply
      • Brent,
        You made a good point in analyzing the timing of the restore process. In my situation (which I did not mention earlier), the original database is a datawarehouse that is populated just once a day from a different system. So the Log Shipping process is run just once a day as well. So creating a ‘delete backup file’ program to run at the end of the Restore portion of the process seems to have merit. Not sure if it is the best approach though.

        Reply
        • Brian – typically in those scenarios, I restore off the database’s full backup rather than using log shipping, just because it also tests my full backups. That’s not an absolute rule though – say you’ve got a 1TB database, and only 10GB of logged changes happen each day. I might use log shipping there for that.

          Reply
    • Another question to ask yourself is: when are those TRN files being moved out to tape/snapped/etc.? Since we have our logs shipped every 5 minutes (trn built every 5, copied every 5, and then restored every 5 – a 15-minute RPO), it gives us a lot of files (288) varying in size from KBs to 100s of GBs, and our off-server backups (snap/tape/whatever) occur once a day. We then set our clearing to 25 hours. This makes sure that if anything bad happens to the database that is not picked up for a day or two and has been pushed through the log shipping, we can take it back to the point of issue. You may want to look into just increasing your disk space (I know this can be a royal pain when a company is on a shoestring budget).
      Another thing to think about: what if you accidentally schedule the delete at an exact time, and the log shipping takes longer to backup/restore? That would royally screw up your log shipping, and you would have to go through the entire process of setting it up again… PITA!!! 🙂
      BTW, the deletion of the file is set to occur through the restore process in the log shipping. After all of the queued restores occur, it then starts to delete them in order from oldest to newest, based on the rule you set. So if you set it to 5 minutes (days, hours, or minutes are in the drop-down), it will delete anything older than five minutes that has already been restored.

      Reply
  • Thank you for posting about log shipping; I am just getting my hands dirty. 1. Since truncating statements are not logged, they are not passed to the secondary server – is this true? 2. I have a regular job that backs up and truncates my logs for me on my primary. So, in this scenario, when the t-log backups take place (and the logs are truncated), I have nothing to ship to my secondary server. So I should really not have both of these happening at any one time. I only want the t-log backup to truncate the logs, not the log shipping job (can I have both… and a chocolate pie?)

    Reply
    • Ian – you only want one process doing transaction log backups.

      Reply
    • As Brent says, you can only use one or the other. I use the same backup location to store my transaction logs for log shipping as I did when I was just taking transaction log backups. There is a slight issue, though: the log shipping restores only read the files created by the log shipping process, so if you have both going (even to the same location), log shipping will skip the other job’s files and then won’t be able to restore the future TRN files it creates (the magic of LSNs). If you make this mistake (which I have done often enough), it’s easy to correct: just manually restore all of the TRNs from that location, catch the database back up, and then stop the separate log backups for that database (since they are now running through log shipping).
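
      As a minimal sketch, catching the database back up is just a series of log restores in LSN order (the database name and path are placeholders):

      RESTORE LOG [YourDatabase] FROM DISK = N'\\share\YourDatabase_1.trn' WITH NORECOVERY;
      -- repeat for each subsequent .trn file, in order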

      Reply
  • Log shipping or replication for keeping secondary copies of branches’ databases on a central server for reporting purposes, over an unstable network connection and with an acceptable delay of one day?

    Reply
  • Wow, just reading all the issues you can run into is sobering. Really encourages me to standardise my environment before starting. Simple is robust.

    Reply
  • Which is the best DR-only configuration for a 1.6TB database, please?
    1) backing up the log locally 2) copy it over to a DR server 3) restore it
    1) Backup to a network share 2) copy to the DR server 3) restore it
    1) backup to a share 2) restore from share

    And reverse log-shipping …?

    Reply
  • Rafael Simões
    November 23, 2015 4:34 am

    Hello guys,
    We recently configured one database to be replicated with log shipping in production, using SQL Server 2008 R2.
    We have more than 150 database replicas up and running without any problems. But now, when we try to configure a new replica in a new server environment, we are experiencing an error:
    The log shipping is configured from a 2008 r2 SP2 server to a 2008 r2 SP3 CLUSTER (the only difference compared with our other replicas)

    1 – the LS backup works Ok in the primary
    2 – the LS Copy works OK and successfully copy the files from the primary to the secondary
    3 – the LS Restore restores one, two, or three trn files, and then it fails:

    Restored log backup file. Secondary DB: ‘xxxxyyyy_mirror’, File: ‘P:\ xxxyyyy _TLogs_Mirror_LS\xxxyyyy\ xxxyyyy _20151120163351.trn’
    The restore operation was successful. Secondary Database: ‘ xxxyyyy _mirror’, Number of log backup files restored: 1

    The next job execution:
    *** Error: An error occurred while processing the log for database ‘ xxxyyyy _mirror’. If possible, restore from backup. If a backup is not available, it might be necessary to rebuild the log.
    RESTORE LOG is terminating abnormally.
    Processed 0 pages for database xxxyyyy _mirror’, file ‘Vitacare2k6_Dev’ on file 1.
    Processed 1 pages for database ‘ xxxyyyy _mirror’, file ‘Vitacare2k6_Dev_log’ on file 1.(.Net SqlClient Data Provider) ***

    And then the database goes suspect.

    4 – the files (MDFs, LDFs, and TRNs) are on a clustered disk shared by the cluster nodes.
    I tried restoring the TRNs onto the same full backup from the cluster on my PC and got no errors.
    If you can provide any guidance on where I could troubleshoot this further, I would greatly appreciate it.
    Thanks for your time and input.
    Rafael

    Reply
  • hi guys,

    I’m currently configuring log shipping between 2 SQL Servers. To do a Windows update on each of the servers, do I have to fail over from primary to secondary each time, and vice versa when updating the second server? Or can I just disable the LS jobs and resume them after the Windows update is completed and the server is rebooted?

    Reply
  • Hi

    What will happen if we change the service account on a primary server where log shipping is already configured? And what would the impact of the same change be on servers configured for mirroring?
    Thanks….

    Reply
  • Bhavin Paschal
    December 17, 2015 1:14 am

    Somehow my secondary DB was brought online, and it’s a big DB. I want to reinitialize log shipping without a full restore. Can this be done by moving the data and log files? It’s a non-production environment issue.

    Reply
  • Brent, this article misses some important points on .tuf and .wrk file creation on the secondary server. Could you please let me know, as I have a few doubts on these points?

    Reply
  • Is it true that if the .tuf file gets deleted, log shipping needs to be reconfigured? If yes, is there a way we can prevent that? Does the .wrk file get created on the secondary regardless of the operating mode (standby or no recovery), or is it specific to one operating mode?

    Thanks, Brent, for such a quick response.

    Reply
  • Bhavin Panchal
    January 8, 2016 12:23 am

    Question regarding the TUF file: as in my question above, is there a way we can copy/paste the TUF file and DB files to create a new secondary server? And if an existing secondary gets brought online for a big DB, how do we resolve that using the TUF file?
    Thanks, Brent, for this article and for sharing it.

    Reply
  • Bhavin Panchal
    January 9, 2016 2:36 am

    Not sure why the answer is no – maybe I didn’t explain it the right way. Below is what I did in my DR activity with log shipping:

    1st case:
    A) IF LOGSHIPPING ALREADY CONFIGURED AND SECONDARY IS ONLINE FOR BIG DB:
    Run below steps:
    1. Take a full backup of the secondary database — prerequisite for a log backup.
    2. Run the below log backup script with the TUF file information and put the secondary DB in standby mode:
    backup log databasename to disk='path' with standby='TUFfilepath/tuffilename' — in this case the TUF file will already be present on the secondary server where the transaction log files are saved.
    3. Stop the services on the primary and secondary servers.
    4. Copy the database data and log files and replace them on the secondary server.
    5. Start the services and run the log shipping jobs.

    2nd case:
    B) If need to add one more new server as Secondary
    1. Take the create-database script from the primary, deploy it on the secondary, and create one new empty database there.
    2. Run the below script on the new secondary:
    backup log databasename to disk='path' with standby='path/tuffilename' — in this case the TUF file needs to be taken from the existing secondary server.
    3. Stop the services on the secondary server.
    4. Copy the database data, log, and TUF files from the already-existing secondary server and replace them on the new secondary server.
    5. Just add the new secondary server to log shipping from the primary; that will create the copy and restore jobs on the newly added secondary.

    Brent, can you please add your expert advice on my above comment? We have a big DB and never get the right DR time slots.

    Thanks.

    Reply
    • I have been using log shipping for over 10 years now and have never messed with the TUF file. We have a 2 TB database and have to go through 12 to 48 hours to restore and catch up. What you are trying to do is something very customized that I wouldn’t be surprised isn’t even condoned by MS.
      A TUF (Transaction Undo File) is created after logs have begun being restored to the restored DB (especially if it is in read-only mode). They come and go, and do not stay around forever. (https://social.msdn.microsoft.com/Forums/sqlserver/en-US/5651b189-4cea-4be2-bd05-4e30c1d74daa/whats-a-tuf-file?forum=sqldatabaseengine)

      As to your steps to create and service a secondary log shipped DB, those are fairly convoluted.
      Here are the steps you should use:
      Capture trn log backups every 5 minutes off the SQL server (whether you already have them going for another log shipping setup, or you need to set them up for the first time as normal trn backups).
      Run a full backup of the database and restore a full copy of it on the secondary (using multiple bak files to make it faster for large databases).
      Restore the full backup WITH NORECOVERY.
      Restore all of the trn logs WITH NORECOVERY.
      Go through SSMS and set up log shipping to the new database by right-clicking on the database on the primary, clicking Properties, and then clicking the Transaction Log Shipping tab.
      Monitor the log shipping report meticulously until you are satisfied that it is staying up to date.

      Simple, straightforward (well, my instructions are rough, sorry), and repeatable by any MS DBA (certified or not)… 🙂 Again, don’t mess with the TUF; I don’t see a reason ever to.

      Hopefully this helps.
      I have been writing off and on about backup and DR experiences I have been through here: https://quasited.wordpress.com/author/quasited/

      Reply
  • Brent, I did not receive an answer to my question above.

    Reply
  • Is log shipping possible when both servers are not in a domain? If yes, then how?

    Reply
  • Njama Braasch
    January 18, 2016 3:48 pm

    I ran into a scenario this morning where I made a change to a 2008R2 log shipped database, which broke the codebase. I had blithely assumed I could just “restore to a point in time,” but when it came down to the brass tacks of it, I couldn’t figure it out. I ended up failing over to the secondary and re-initializing log shipping going the other way.

    I’ve since spent the better part of the day reading the internet, trying to find a reliable source on this subject. Do you have a good step-by-step on rolling back transactions on a log shipped db without breaking log shipping?

    The only one I found was at https://www.mssqltips.com/sqlservertip/3069/steps-to-rollback-database-changes-without-impacting-sql-server-log-shipping/

    and this article references a “full database backup,” which I always assumed was game over for log shipping. Any comments, or do you know of a better step-by-step?

    Thanks for all your awesome sauce!

    Reply
  • Can you tell me some common Log Shipping Configuration issues and their resolutions?

    Reply
  • Konstantinos Katsoridis
    March 26, 2016 4:00 am

    Hi Brenda,
    I was wondering if there is an easy way (or even a hard one) to tweak the LSALERT job when you setup a full two-way log shipping solution…
    Because by default, both LSALERT jobs are always failing – which makes sense.
    When server A is primary, the job complains that “No restore was performed for xx minutes,” while when server A is secondary, the job complains that it “has not performed a backup log operation for xxx minutes.”

    It would be nice to split this into two separate jobs and have only one enabled. But if you missed the chance to keep the initial creation script, how easy is this?

    Thanks in advance – I hope this will help others too.

    Reply
    • Konstantinos Katsoridis
      March 26, 2016 4:01 am

      It is not Brenda I should address, but Jes Schultz Borland instead…
      Sorry about that.

      Reply
    • Konstantinos – you want two separate log shipping setups, one for production and one for disaster recovery. The DR jobs are usually disabled. When you have a disaster and you fail over, you disable one set of jobs and enable the other.

      Reply
  • We are attempting to set up log shipping on SQL 2000 Enterprise (yes, 2000). I know this is an old version, but I was wondering if anyone had ideas on the following problem. The initial .bak file is of a reasonable and expected size, and the wizard successfully used it to create the standby database. The wizard completed without error, but the first transaction log file generated for the log shipping was huge (tens of gigs) and just kept on growing. We had to kill the process to prevent the drive from filling up. We have performed tlog backups to try to rectify this problem. This has freed up space within the tlog, but not fixed the problem.

    Reply
  • DoubleBarrellDarrell
    June 14, 2016 2:55 pm

    Thank You, Ozar Team, for all you do!
    Do the drives that contain the database have to be the exact same size on the primary server as on the secondary?
    I currently have a primary database for OLTP and a standby database used for reporting.
    I’m going to create 2 new standby databases then cut over to them as an AlwaysOn Availability Group.
    The 2 AG databases must be exact matches, but can’t the new standby disks be a different size than the current primary’s – as long as the drive letters match and the files fit?

    Reply
  • If you set up log shipping and don’t select the option to disconnect users, what is the simplest way to activate that feature later on? Does the log shipping command line change, or is the option available in the UI somewhere?

    Reply
    • You can do it any time; you just have to wait for the next restore to occur for it to come through the process flow properly.

      Reply
    • Old, I know, but useful nonetheless.

      EXEC sp_change_log_shipping_secondary_database @secondary_database = 'dbname', @disconnect_users = 1;

      Reply
  • Is it possible to have additional objects (specifically views) on a secondary server that are not on the primary?

    We have a client that has log shipping set up from Primary to Secondary, with the logs shipped once nightly. This Secondary is NOT for failover, but rather an intermediary for a third reporting DB. There is a very large view that this third DB pulls over (also once daily), which is not used by the OLTP production site in any way.

    Could this view live on the Secondary, but not the Primary, to shift all the overhead associated with it from the production machine to the Secondary, which essentially sits idle 16 hrs a day?

    Reply
    • Erik Darling
      July 7, 2016 11:58 am

      This would be really easy to test out by trying to create a view on your Log Shipping secondary.

      Reply
    • It’d be easier to create a separate database on the secondary server that contains that view (modified to include the database name, not just the schema and table names). Set up the security so it matches, and point your report at that. An easy rule of log shipping: anything that occurs to database 1 will occur to database 2, so if the view does not exist on database 1, it will eventually disappear from database 2.

      Even though I hate it, it sounds like replication would be better for what you are trying to do. Transactional or snapshot replication (with multiple rep defs) handles database differences much better (until you add tables to the primary database). This way you can set different security, add views, and add stored procedures on the secondary, as long as they do not change any of the data. You can also limit the amount of data being pulled to what is necessary, instead of the entire database, and you can leave out a lot of the blob (i.e., large) tables this way.

      If your view is creating a lot of issues in the primary database, it may also be a good idea to look into rewriting the view as a stored procedure using temp tables and CTEs. Just like humans, SQL likes to keep it simple, so the more you break the script apart in an SP using CTEs and temp tables, the faster your script will work…

      Reply
      • I was told these are indexed views, which require schemabinding, which prevents cross-database referencing.

        That being said, it looks like there are both views AND tables. None of the views contain indexes. The tables do.

        But I think this approach of creating a new DB is a good one, as the tables and views are generated every night via jobs – which means moving the tables/views also means moving the jobs.

        Two for one!

        Reply
  • The LSCopy job is copying an old .trn file.
    How can I stop/kill that and make LSCopy copy the most recent trn file?
    It is not stopping when I stop the LSCopy job.

    Reply
    • You can kill it, but you will mess up your transaction log restores. Log shipping requires TRN files 1–44 (as an example) to be restored in order, and you cannot skip a single one. If you have 1–33 and skip 34, the restore will fail for files 35–44. If that file is completely lost, then you will have to restore the full database and rebuild log shipping to that server again.
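
      As a minimal sketch, you can check what the secondary has actually copied and restored so far by querying the monitor table on the secondary:

      SELECT secondary_database, last_copied_file, last_restored_file, last_restored_date
      FROM msdb.dbo.log_shipping_monitor_secondary;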

      Reply
  • Do you know if sqllogship.exe processes groups of log files based solely on filename order, or does it do it based on file timestamp as the primary sort and filename as the secondary sort?

    It appears to handle things in filename order when all log files have the same timestamp.

    Reply
  • If I change the “network path to backup folder” on the primary server, will it interrupt the log shipping?
    Should I rebuild the log shipping from scratch?

    Reply
  • Hi Brent, can you please tell me how many secondary databases can be created in log shipping?

    Reply
  • Thank you, Erik. So there is no particular limit on the number of secondary databases (but we need to change the worker thread value in the SQL Server configuration)?

    Reply
    • No, absolutely don’t touch that switch. Buy a CPU that provides enough threads to support your workload.

      Artificially inflating Max Worker Threads just slows everything down when you run into pressure.

      Reply
  • SQL Server 2014 – is there any way to set up log shipping both ways (primary and disaster site)?

    Reply
    • You need to clarify what you are trying to do. I assume that if you have a DB on the primary that you are shipping to the disaster site, and then you flip to the disaster site: when the primary comes back up, the best thing to do is to set it back up (by restoring from the disaster site to the primary) and ship back to the primary until everyone is ready to flip back, and then do so.

      Reply
  • Sean Perkins
    April 11, 2017 7:06 pm

    How would you configure Ola’s scripts and log shipping on the same server? Would I just build custom maintenance plans and not use his scripts?

    At first glance, I’ve not seen a way to have his log backup scripts ignore databases configured for log shipping.
    I ask because I have a server with some databases that don’t need log shipping and some that require it. I’m trying to avoid building another server, but custom maintenance plans seem to be the only way.

    Reply
    • You can specify databases to include or exclude from maintenance – just exclude the databases you’re log shipping. Have a look at the documentation. You’d have to know which ones those are, or you could add logic to perform that check.
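
      As a minimal sketch, assuming Ola Hallengren’s DatabaseBackup procedure and a hypothetical log-shipped database named ShippedDB, the exclusion looks like this:

      EXECUTE dbo.DatabaseBackup
          @Databases = 'USER_DATABASES, -ShippedDB',
          @Directory = N'\\share\Backups',
          @BackupType = 'LOG';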

      Reply
  • Dear all,
    I have installed the SQL Server Web edition (licensed), but the log shipping configuration option is still not showing in the database properties. Can anyone help me out?

    Reply
  • Brent, I ran into your blog through Google searches, and now I have bookmarked it. Thank you for blogging (and trying to be honest with your opinions). Your blog needs an update to indicate that log shipping is still alive in the latest SQL Server. After reading your blog about SQL AGs, I am still convinced that log shipping is the simple solution for DR.

    Reply
    • Ramesh – thanks, glad you like our blog. There’s nothing in here that says log shipping is deprecated – things are still fine. We don’t go through old blog posts and say they’re still true. 😉

      Reply
  • Hi,
    I am getting this error while setting up log shipping:

    Msg 3257, Level 16, State 1, Line 1
    There is insufficient free space on disk volume ‘C:\Adventure\Logs\’ to create the database. The database requires 4162176 additional free bytes, while only 2150784 bytes are available.
    Msg 3119, Level 16, State 4, Line 1
    Problems were identified while planning for the RESTORE statement. Previous messages provide details.
    Msg 3013, Level 16, State 1, Line 1
    RESTORE DATABASE is terminating abnormally.

    Reply
  • Marcus Culver
    April 12, 2018 10:58 am

    Is the log shipping monitor constrained by the same requirements as log shipping itself, in the sense that it has to match SQL Server versions? Or can you have a monitor on SQL Server 2016 while the log shipping runs on SQL Server 2008/2012/2014, etc.?

    Reply
  • Seems like there is no free space available on the drive to store the data and log files.
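
    To confirm, here’s a sketch that checks the free space on every volume hosting a database file, using a standard DMV:

      -- Free space on each volume that hosts a data or log file.
      SELECT DISTINCT
             vs.volume_mount_point,
             vs.available_bytes / 1048576 AS free_mb
      FROM sys.master_files AS mf
      CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;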

    Reply
  • Hi Brent,

    I wanted to know how many secondary servers are possible in log shipping. I gave an interview recently where I had to answer a lot of questions, and I was asked this one. I said unlimited, but I was not sure :). I was selected, but I still don’t know if this is correct or not.

    Reply
  • Edward Miles
    July 5, 2018 7:08 pm

    Hi Brent,

    I have a similar question to the one above: how many secondaries are possible in SQL Server log shipping? Is there a limit?

    Reply
  • Is enabling/configuring log shipping (the DB is currently in simple recovery mode) something that needs to be done while the database is not serving traffic (momentarily turning off the apps), or can this safely be done during regular but lower-traffic times?
    Using SQL Server 2016 Standard.
    Thanks!

    Reply
  • Brent, congratulations – you provide great information. I am having an issue, am confused, and need your suggestions. My manager asked me to set up log shipping, as we are looking to migrate a database to another office location. The DB size is 2 TB, so to get minimum downtime he wants to use log shipping.
    We have already set up log shipping with two copies: one in our same data center on a reporting SQL Server instance, and one on our DR server. Currently both log shipping setups are running perfectly fine.
    Now my question is: can I set up one more log shipping (to a third server) without breaking the above two setups? This third one is only to migrate the prod database from one office location to another with minimal downtime. Once it’s migrated, we don’t need to keep that third log shipping setup.

    Reply
    • Divya – thanks! Unfortunately I don’t do free consulting here. You can either click Consulting at the top of the page, or head to a Q&A site for free help.

      Reply
  • My log backup fails a few times per week, never back-to-back and usually at exactly 5:30 am. I back up every 15 minutes. If the 5:30 backup fails, will the 5:45 backup catch all transactions since 5:15? Or will I be missing transactions on my secondary server?

    Reply
  • Hi Brent,
    Why is the restore on the secondary server omitting files? Not a single file is being restored.
    I reinitialized the secondary database with a full backup and transaction log, and re-enabled the jobs (backup/copy/restore).
    The restore job succeeds, but I don’t see my changes, and I can see in the history that it’s omitting all the transaction log files. Why?

    *SQL Server R2

    Reply
  • Log shipping is running fine except in the middle of the night (around 3 AM), when an index rebuild completes and generates a 3 to 5 GB log file. It takes close to 45 minutes to 1 hour for these files to be copied over. This is not really huge, so I’m wondering why it takes so much time. There is no resource contention.

    Reply
  • I was checking if you had come across this scenario in your experience.

    Reply
  • In general, when we do a restore, we restore the full backup first and then restore the transaction log backups. In the case of log shipping, if we take a daily full backup on the primary server, how does this not break the sequence for the shipped logs?
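
    One way to see what’s going on – a sketch; the database name is hypothetical – is to look at the LSN ranges msdb records for each backup. Full backups don’t truncate the log, so the log backups still chain together around them:

      -- type 'D' = full, 'L' = log. The log backups' first_lsn/last_lsn
      -- values form an unbroken chain regardless of full backups in between.
      SELECT type, backup_finish_date, first_lsn, last_lsn
      FROM msdb.dbo.backupset
      WHERE database_name = N'MyDatabase'
      ORDER BY backup_finish_date;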

    Reply
  • I paused log shipping for 2 days, and during those 2 days, 5 of my log backups were unexpectedly deleted. How can my secondary server continue running? What action do I have to perform? Please guide me.

    Reply
  • I would love to know why it would be necessary to perform full/diff backups while log shipping is active. I just started at a new place, and they have set up log shipping for every database on their instance, with each agent job running every 15 minutes. What would be the benefit of running a full and/or diff set of backups in addition to this particular setup?

    Reply
    • There are various reasons. What if you just had a “data oops” where someone missed highlighting the WHERE clause and hit F5? Dropped a stored procedure? Dropped a table? If it takes you ~15 minutes to get to it, that’s too late – it has already been “shipped” and restored into the secondary. How about non-prod restores from prod? Those are fairly common, so you would want a full/diff to streamline that process.

      Log Shipping is really nice if you have a data center outage, for example, or your standalone instance goes belly-up. But Log Shipping is only part of your DR strategy – there are many layers and scenarios to consider when building your plan.
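
      For instance, recovering from a “data oops” usually means a point-in-time restore from your own full/diff/log backups, since the secondary has already replayed the mistake. A minimal sketch – the names, paths, and times are hypothetical:

        -- Restore a copy of the database side by side, then stop just
        -- before the bad statement ran.
        RESTORE DATABASE MyDatabase_Recovery
            FROM DISK = N'\\FileShare\Backups\MyDatabase_full.bak'
            WITH MOVE 'MyDatabase'     TO N'D:\Data\MyDatabase_Recovery.mdf',
                 MOVE 'MyDatabase_log' TO N'D:\Logs\MyDatabase_Recovery.ldf',
                 NORECOVERY;

        RESTORE LOG MyDatabase_Recovery
            FROM DISK = N'\\FileShare\Backups\MyDatabase_0925.trn'
            WITH STOPAT = '2020-10-07 09:20:00', RECOVERY;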

      Reply
  • Anthony Petruzzi
    October 7, 2020 9:03 am

    @Jes Schultz Borland

    You saved my life. I couldn’t figure out what was going on, and your comment about having to use the LSBackup job to back up the transaction log because of the file naming was spot on. Thank you once again.

    Reply
  • In case anyone else falls into this elephant trap – be aware that Ola Hallengren’s SQL Server Backup procedure automatically (and with no way of overriding, apart from editing the sproc) omits databases designated as log shipping primary databases from transaction log backups.

    So, if you use his maintenance procedures, want to implement log shipping, and decide to continue to use your existing backup processes, you will find transaction log backups of your LS primaries will suddenly and mysteriously stop. (To be fair, this is mentioned in his FAQ page, but who reads that until they run into a problem and have dug through the code to discover the exclusion…).
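
    If you want to know up front which databases will be skipped, a quick sketch against the log shipping metadata in msdb:

      -- Databases registered as log shipping primaries on this instance –
      -- the ones the backup procedure will omit from LOG backups.
      SELECT primary_database
      FROM msdb.dbo.log_shipping_primary_databases;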

    Reply
  • John Dillinger
    November 15, 2020 6:42 am

    How does one tune the receiving server for log shipping? After around ~80 databases are shipped to a log server, the server starts to slow down considerably and won’t do any more. CPU and RAM are low, but I/O or context switching is obviously maxed out somewhere.

    Reply
  • I’ll apologize ahead of time if this has already been asked, but based on my research here I didn’t see it: my scenario involves a database that is almost 3 TB in size. That is our largest database at the moment. Log shipping standards here, before I arrived, were 15-minute intervals. I’ve been having an impossible time getting log shipping to work on this particular database – the setup completes and all looks good, but I never see a transaction log actually ship. The last restore time falls several days behind in the log shipping report, and I have no idea how to get it to ship and keep shipping. Is the 15-minute interval way too short, or is there something else going on that I haven’t discovered yet? I thought I’d start with what seemed like the obvious problem and work my way out. The log, by the way, is around 300 GB. This is my first experience dealing with log shipping, so forgive my ignorance and possible lack of useful information.

    Reply
  • Chetan Kathote
    July 3, 2021 4:25 pm

    How can I check whether the log files copied to the secondary server have been restored or not?
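
    One way to check on the secondary – a sketch; the database name is hypothetical – is the restore history that msdb keeps:

      -- msdb records every restore; restore_type 'L' means a log restore.
      SELECT TOP (10) destination_database_name, restore_date, restore_type
      FROM msdb.dbo.restorehistory
      WHERE destination_database_name = N'MyDatabase'
      ORDER BY restore_date DESC;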

    Reply
  • Hello, universal SQL teacher – greetings!

    Can we configure log shipping from SQL Server 2012 to 2019 in Standby mode, when the edition is Developer? I am sure it’s not possible. Please add your feedback. Loads of thanks!

    Reply
  • We have a primary instance with 4 databases on it, and I only want to do log shipping for 2 of them.
    We also have a regular maintenance plan doing full weekend, daily differential, and hourly transaction log backups for all 4 databases.

    Now I would like to set up transaction log shipping for 2 of the 4 databases. I understand I need separate transaction log backups for those 2, so I removed their transaction log backups from the maintenance plan but kept the full and differential backups in it. The maintenance plan also has a step to delete files older than…

    Now I get to the step of setting up the transaction log backup, and it has a “delete files older than…” setting.
    How can I set the hours without conflicting with the delete steps in the maintenance plan, and also have it work with the full backups?

    Or should I create a separate maintenance plan for these 2 databases for full and differential backups and put them in a separate folder? I want this to work for transaction log shipping, but I also want to keep valid backup files for the primary instance in case I need to restore the primary databases.

    Reply
  • Location of the .TUF file?

    Reply
