
I’ve been working with and talking about SQL Server Log Shipping a lot lately! If you haven’t yet, go ahead and watch my webcasts – Part 1: Preparing for Disaster and Part 2: When Disaster Strikes.

One way to ship logs…

I’ve gotten a lot of great questions about log shipping through these webcasts, so I’ve put together an FAQ.

  • What editions of SQL Server is log shipping available in?
    • 2012 – Enterprise, Business Intelligence, Standard, and Web
    • 2008R2 – Datacenter, Enterprise, Standard, Web, and Workgroup
    • 2008 – Enterprise, Standard, Web, and Workgroup
    • 2005 – Enterprise, Standard, and Workgroup
  • Does the secondary need to be licensed?
    • I am not the licensing police, and I am not Microsoft – check with your licensing representative to clarify your exact situation. Generally, you can have one warm standby server. However, the second someone starts using it for reporting, testing, or anything else, you need to license it like any other server.
  • Log shipping is compatible with backup compression. What edition of SQL Server do I need to take advantage of compression?
    • 2012 – Enterprise, Business Intelligence, or Standard
    • 2008R2 – Datacenter, Enterprise, or Standard
    • 2008 – Enterprise
    • 2005 – Not available
  • When log shipping is set up, Agent jobs are created to alert me if a backup, copy, or restore fails. How do I get notified?
    • You need to go into the Agent job, pull up Notifications, and choose your method – email an operator, or write to the event log, for example.
  • Are my logins shipped from the primary to the secondary?
    • No, they are not. You’ll need to set up a separate method to sync the logins.
  • Does this replace, or can it be combined with, our existing daily full and log backups?
    • TL;DR – no.
    • You’ll still want to take regular full and/or differential backups. Log shipping only takes one full backup – at the beginning – and that’s only if you specify that it does so. It can also be initialized from an existing full backup.
    • Taking two log backups in separate jobs will break the log chain, however. If you implement log shipping, it will replace your current transaction log backup job.
  • What’s the difference between the secondary being in “Restoring” vs. “Standby”?
    • Restoring means the database is not accessible. Standby means it is read-only. You make this decision when you set up the log shipping.
    • If the database is in Standby mode, users can query it – except when a log backup is being restored. You need to decide if a restore job will disconnect users, or if the restore is delayed until after the users are disconnected.
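The two standby modes map directly to RESTORE options. A minimal sketch, with placeholder database and file names:

```sql
-- Restoring mode: the database stays inaccessible, ready for the next log.
RESTORE LOG MyDatabase
    FROM DISK = N'\\FileShare\LogShip\MyDatabase_0800.trn'
    WITH NORECOVERY;

-- Standby mode: read-only between restores; the undo file holds
-- uncommitted transactions so they can be rolled forward later.
RESTORE LOG MyDatabase
    FROM DISK = N'\\FileShare\LogShip\MyDatabase_0815.trn'
    WITH STANDBY = N'D:\LogShip\MyDatabase_undo.tuf';
```

The log shipping restore job issues the equivalent of these statements for you; the Restoring/Standby choice in the wizard decides which option it uses.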

What other questions do you have? 

Interested in learning more about backups and recovery?

Grab my Backup & Recovery Step By Step training for more information on why backups and restores are so important, and how to perform them! 

  1. Great FAQ, Jes!

    One small disagreement: taking a log backup in a separate job won’t break the log chain. Failing to restore that log backup on the secondary will cause a gap in the log chain on the secondary. You can manually restore that log backup (or even a differential that spans the gap) on the secondary to get log shipping to continue.

    • Robert, thanks for the clarification. You can take as many log backups as you want – it’s the order they are restored in that matters.
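One hedged way to see that ordering is to inspect the backup history in msdb on the primary (the database name below is a placeholder):

```sql
-- Each log backup's first_lsn should equal the previous backup's last_lsn;
-- restore them in this order and the chain stays intact.
SELECT bs.backup_start_date,
       bs.first_lsn,
       bs.last_lsn,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = N'MyDatabase'
  AND bs.type = 'L'            -- log backups only
ORDER BY bs.first_lsn;
```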

      • I learned this the hard way when trying 3 times to set up log shipping. The restore kept giving errors saying this was not the earliest transaction log in the set. The problem was I had not turned off our hourly log backups. Once I realized it, I used the old hourly log (stored in a different folder) to manually restore. Since then the job has worked fine.

    • You’re right – if you take any other log backup, or have another log backup schedule running against the primary database, log shipping will surely fail.

  2. Jes,

    What are the pros and cons of using backup compression with log shipping?

    Thanks,

    Patrick

    • Patrick, I’d ask, “What are the pros and cons of backup compression?” Enabling backup compression will use extra CPU, but the backups will take up less space on disk, and the files that need to be copied across the network are smaller. Don’t use it on a system constrained by CPU, however.
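Since compression is just a backup option, one way to weigh that trade-off is to time a single manual log backup with it enabled – a sketch, with placeholder paths:

```sql
-- Compare size and duration against the same backup taken without COMPRESSION.
BACKUP LOG MyDatabase
    TO DISK = N'\\FileShare\LogShip\MyDatabase_test.trn'
    WITH COMPRESSION, STATS = 10;   -- STATS prints progress every 10%
```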

  3. Pingback: Something for the Weekend - SQL Server Links 22/03/13 • John Sansom

  4. I haven’t gotten to work with this yet, but it surprises me that there are log chain issues. Can you not just set the log shipping job to copy_only and leave your existing log backups untouched? Though I guess you would then need to backup the secondary’s log to keep it under control.

    • If you leave an existing log backup job in place, and implement log shipping, you would have to figure out a way to restore both log backups to the secondary, in order, so the log shipping chain was never out of order.

    • No, you cannot set the log shipping job to do copy-only log backups. There’s no logical reason you would want it to do that. You would then need a second log backup job to maintain the log file. Why would you want to do twice as many log backups?

      The secondary’s log cannot be backed up. The database is not online. It is maintained by replaying the records of the primary. As long as you maintain the log on the primary, it will be maintained on any secondaries.

      • This is actually an excellent question because it invites some thought.
        Today, we as DBAs deal with backup processes that may be out of our control. These products may be performing t-log backups, so the question is: can log shipping be used alongside them WITH COPY_ONLY?

        Scenario: an enterprise B/U tool running SQL Server backups (full, t-log) directly to a vault that is inaccessible by any means other than the B/U tool.

        Requirement: Log shipping to a secondary server.

        SQL Server allows a copy_only t-log backup, with a “BACKUP LOG successfully” result.

        How will SQL Server respond to rolling forward t-logs that were taken with the copy-only flag? I’m thinking about the LSNs here. What will happen? This is a good experiment. :)

        • Clarky – the bigger problem is that if the enterprise backup tool *doesn’t* use the copy-only flag (and most don’t, for obvious reasons) then what happens when the enterprise backup tool copies and clears the transaction log as part of a log backup, and you haven’t gotten your separate transaction log backup first? You’ll be missing transactions.

  5. •Taking two log backups in separate jobs will break the log chain, however. If you implement log shipping, it will replace your current transaction log backup job.

    Jes – if you do a “copy only” backup – that will not break a log chain IMO

  6. Hi Jes. I’m a big fan – congrats.

    Jes, you really lost me on the log shipping job (the second one). If I have log shipping set up and, at the same time, a backup schedule (full every Sunday, differentials every day, and logs every hour), will it cause a conflict?
    Please can you clarify this for me – you’ve thrown me off track.
    Thanks, and once again thanks for your hard work on this.

    • The only type of backup that log shipping controls is log backups. Full (and differential, if you take them) backups are not part of the automatic log shipping jobs. Full and differential backups still need to be run regularly, in separate jobs.
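In other words, the full and differential jobs stay in your hands; log shipping only owns the log backup job. A sketch of those separate jobs (paths and schedules are placeholders):

```sql
-- Weekly full backup job, outside of log shipping:
BACKUP DATABASE MyDatabase
    TO DISK = N'B:\Backups\MyDatabase_full.bak'
    WITH CHECKSUM, INIT;

-- Nightly differential job, also outside of log shipping:
BACKUP DATABASE MyDatabase
    TO DISK = N'B:\Backups\MyDatabase_diff.bak'
    WITH DIFFERENTIAL, CHECKSUM, INIT;
```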

  7. Jes,

    Once you’ve failed over from Primary to Secondary, and then from ‘Secondary’ back to ‘Primary’, do you need to remove the log shipping from Secondary to Primary?

    I’m getting alerts from the LSAgent on both servers that sync hasn’t happened from S to P. So would a final step for the planned failover be to remove the shipping that was set up for the second failover?

    Thanks for the screencast, it was well done.

  8. If my SQL 2012 Agent is running as NT Service\SQLSERVERAgent, how do I go about granting the file share permissions to that account? Or should I not have used the SQL Server 2012 default installation accounts to run SQL Server? Thanks.

  9. Just to confirm: you cannot do log shipping from SQL 2012 to SQL 2008 R2, right?

  10. I love your site Brent! Any guidelines for log shipping frequency? Maybe transactions / hour / second correlating to log ships per / whatever?

  11. Hi Brent.. I like your videos and the way you present them.

    One of my databases (A) contains 2 log files. Database A is also participating in log shipping. What is the safest way to drop a log file without breaking log shipping?

    • This is a really interesting question. I wish I had the time to dedicate to give you an answer that’s just as interesting, but there’s a lot of possibilities here. The best way to get help on something like this is to narrow down the question as tightly as possible, then post it on http://StackOverflow.com or http://ServerFault.com. Include as much example code as you can so that others can quickly run it and get an idea of what you’re asking, and they might even post your code back with fixes. If you don’t get good answers within a day or two, let me know and I’ll take a look at the question to see if there’s a way we can get better answers.

      • Brent

        I got a workaround when I was doing trial and error:

        1. Shrink the additional log file by using DBCC SHRINKFILE (log_name, 1).

        Note: proceed only if you find the tail of the log is inactive (DBCC LOGINFO). (Take more frequent t-log backups to make it inactive.)

        2. Once you shrink the log file, drop the additional log file by using the GUI or T-SQL.

        3. Run the LS backup job to remove the additional log file info from sys.sysaltfiles.

        4. Enable the copy and restore jobs.

        Note: test this procedure in a test environment before you run it in production.
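A sketch of those steps in T-SQL (database and file names are placeholders – and, as the note says, test it outside production first):

```sql
USE MyDatabase;
DBCC LOGINFO;   -- status 2 = active VLFs; wait until the extra file's VLFs are inactive

-- Step 1: shrink the additional log file.
DBCC SHRINKFILE (MyDatabase_Log2, 1);

-- Step 2: drop it (fails if the file still holds active log records).
ALTER DATABASE MyDatabase REMOVE FILE MyDatabase_Log2;
```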

  12. I have set up log shipping between 2 SQL Servers (SQL Server 2008 and SQL Server 2012),

    At first I took a full backup of the first one and restored it on the other one, and then I set up the log shipping.

    The initial backup and restore of the database took a few hours, and I wish to know:

    How does the log shipping process know when to start shipping logs? Does it start from when the last backup-restore process happened, or from the moment log shipping itself is set up?
    Can I be sure that no information was lost in those few hours?
    Thanks

    • Matan – if you try to restore a transaction log backup and there’s missing data, SQL Server won’t allow you to do it. It’ll require the earlier transaction log backups to be applied first.

  13. Hi,
    We are in the process of setting up log shipping, using SQL 2008 R2 ent edition. I’m a newbie dba working with a contractor dba. Here’s what I’m being told.

    When a database is initialized on the source server, it first creates a backup, then restores it on the secondary server. Here’s the question: our source database is spread across three data files on three different drive letters. Our secondary server is also set up in the same fashion, to take advantage of three data files across three drive letters (D:, E:, and F:).

    The DBA is telling me that the configuration doesn’t allow spreading across three drives; I must use only one drive letter. The workaround would be to take a backup of the database, then FTP it to the secondary server, restore it across the three drive letters, THEN enable log shipping.

    Can anyone confirm if this is my only option to spread the data files across three drive letters?

    Apologies, I’m still learning the SQL technology and may not have my context correct.

    best regards,
    Jay

    • Jay – sounds like that DBA isn’t quite up to speed either. You can totally do log shipping with files on multiple drives.

      • Brent, thanks for that quick reply back. A brief follow up question:

        Our scenario.
        1. We manually created the empty database and spread the data files across the 3 drive letters (D, E, F) mentioned above on the secondary. (Each drive has 700 GB.)
        2. Size of primary database: 1.8 terabytes.
        3. During the initialization of log shipping:
        a. A FULL backup was created on the primary.
        b. The restore step failed with an error on the D: drive stating there isn’t enough space for 1.8 terabytes when only 700 GB is available.

        Question: how can we bypass this error message without having to manually copy the backup over to the secondary and restore it there?

        • Jay – I’m not quite clear on where the error is occurring (on a secondary or on the primary) and troubleshooting this is a little beyond what I can do in a blog post comment. Your best bet for personalized one-on-one help like this is to use the contractor DBA. If he can’t help with these kinds of issues, you probably have the wrong contractor. 😉

      • Actually, the contractor is right, as per the ‘workaround’ above. If you set up log shipping with the GUI and initialise a new copy of the database, you can only specify one location for data files and one for logs. I’m guessing that’s why you are getting the error, too – it’s trying to restore all the files onto one drive.
        If you need multiple locations, you need to pre-initialise the database on the secondary (copy the backup across and restore it with the files in the 3 locations – use NORECOVERY or it won’t work).
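Pre-initialising with files in specific locations is a plain RESTORE … WITH MOVE. A sketch, with placeholder logical and physical file names:

```sql
RESTORE DATABASE MyDatabase
    FROM DISK = N'G:\Staging\MyDatabase_full.bak'
    WITH MOVE N'MyDatabase_Data1' TO N'D:\Data\MyDatabase_1.mdf',
         MOVE N'MyDatabase_Data2' TO N'E:\Data\MyDatabase_2.ndf',
         MOVE N'MyDatabase_Data3' TO N'F:\Data\MyDatabase_3.ndf',
         MOVE N'MyDatabase_Log'   TO N'L:\Logs\MyDatabase.ldf',
         NORECOVERY;   -- leaves the database ready to receive shipped logs

-- RESTORE FILELISTONLY FROM DISK = N'G:\Staging\MyDatabase_full.bak'
-- lists the logical names to use in the MOVE clauses.
```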

        • And to answer your other question: the only way to pre-stage the database on the secondary without copying a backup to it is to perform the restore across the network. And at 1.8 terabytes, that is probably going to be a terrible idea. Far better to assign a temporary disk to the secondary, copy the backup to that, and restore locally.

        • You don’t have to use the GUI though.

          • Sure, you can script the whole process, but he’s still going to need to decide which is the lesser of 2 evils – finding space on the secondary to store his backup so he can restore from a local file, or going through the pain of restoring a huge database across the network.

  14. What is a .TUF (Transaction Undo File)? What happens if we lose the .TUF file?

  15. Hi Brent,

    Is there a way to back up a log shipped destination database? I know it cannot be backed up while it is in restoring or standby mode. But can it be taken offline, backed up, and the log shipping restarted?

    Thanks !

    Phil.

    • You can’t back up an offline database either. My question is, why do you want a backup of the database that is your backup?

      • Well, initial plan was to avoid impacting prod DB as it is already backed up by another method.

        I’ve looked into minimizing DB backup impact since. Probably a better way to go.


  16. What are the prerequisites, and the pros and cons, of having two secondary servers pointing at one primary in a log shipping setup?

    • Prerequisites for the primary and the secondaries are the same regardless of whether you have one or multiple. The benefit of multiple secondaries is that you have your data in multiple locations. One could be a warm standby; one could be used for reporting. The downside is that you’ll now have multiple copy and restore jobs to monitor and manage.

  17. Hi! Great FAQs!

    My company recently implemented Log Shipping on a 700 GB database using 15 minute intervals. This seemed to cause excessive blocking with our normal business processes that were executing during the same time period that the database’s indexes were being rebuilt online (some of these indexes take longer than 15 minutes to rebuild).

    1. Can you explain how log shipping could possibly cause such blocking while indexes are being rebuilt and other data manipulation was occurring?
    2. Do you have any recommendations to enable Log Shipping to co-exist without generating conflicts?
    3. Is it possible to set up Log Shipping to execute on an irregular schedule instead of a constant time interval?

    Thank you for your input and wisdom!

    • Hey Eric,

      Logshipping itself doesn’t cause table level locking and blocking: it just uses backups of the transaction log (which don’t use locks like that). Based on what you’ve said, I’m guessing you changed the recovery model from Simple to Full to implement log shipping, and that you have a performance condition that’s just related to the extra work of doing log backups.

      For your question about intervals, there might be some ways to rig up a solution where you can get things to execute at non-regular intervals, but I think the better question is how you can make it so that you can run transaction log backups frequently and reliably without pain. That’s the better goal!

      Kendra

  18. Hi Jes,

    I have one question. Let us assume we configured log shipping for 4 databases on one server (instance), and suppose we have 50 servers, every server with 4 databases configured with log shipping (50*4 = 200 databases).
    1Q) How can we monitor these log shipped databases – whether the databases are in sync or not? Is there any easy method or script?
    2Q) How can we check the available disk free space on all the servers to prevent space issues?

    We can’t TN into every server to check the above :) . Is there any method to avoid TN into every server?

    Thanks & regards,
    Samba

    • Samba – actually, let’s take a step back. What software are you using to monitor the health across these 50 servers? That’ll impact the answer because each monitoring program has different ways of looking at this.

      • Thanks, Brent, for the swift reply. We are using SCOM for SQL Server monitoring. Is there any chance (method) to monitor using SSMS? Can we use a script, or Central Management Servers, to monitor?

    • When you set up log shipping it creates a job called LSAlert_(servername). That job starts failing if any of the databases log shipped to that instance are behind the thresholds you set. You can set up an operator to be notified on job failure on each instance and you’ll get an email sent right to you.
      But, per Brent’s answer, there is a bigger monitoring question here. In my experience SCOM is quite a noisy monitoring tool that takes quite a bit of tuning to get working “right”.
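The operator hookup Rob describes can be scripted rather than clicked – a sketch, with placeholder operator, address, and job names (it assumes Database Mail is already configured):

```sql
EXEC msdb.dbo.sp_add_operator
    @name = N'DBA Team',
    @email_address = N'dba-team@example.com';

EXEC msdb.dbo.sp_update_job
    @job_name = N'LSAlert_MYSERVER',
    @notify_level_email = 2,    -- 2 = notify on failure
    @notify_email_operator_name = N'DBA Team';
```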

      • Thanks, Rob. The job (LSAlert_servername) is created in SCOM, right? Is it possible to check this through SSMS – I mean, is there any method or script? And have a great day…

        • It’s created as a SQL Server Agent job. You can check it via Management Studio: connect to your log shipped SQL instance, expand SQL Server Agent, and double-click Job Activity Monitor.

          You can also monitor the current status of log shipping by right-clicking the SQL instance name and selecting ‘Reports’ -> ‘Standard Reports’ -> ‘Transaction Log Shipping Status’.
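For a scripted check instead of the report, something along these lines (run on the monitor or secondary instance) may help:

```sql
-- Summary across all log shipped databases on this instance:
EXEC master.dbo.sp_help_log_shipping_monitor;

-- Or query the status tables directly, e.g. on a secondary:
SELECT secondary_database,
       last_copied_file,
       last_restored_file,
       last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;
```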

  19. Log shipping:
    In log shipping, the primary server crashed. Then how will you find out the log shipping secondary server?

    • Kamesh – can you reword your question?

      • Hi
        My primary database is located on the primary server instance. After some time the primary server instance crashes. How will you find the secondary database on the secondary server instance?

        • First, you should have all log shipping instances fully documented somewhere – a Word document, a spreadsheet, your intranet – somewhere.

          Second, if you get into a situation where you lose the server your primary database is on, look for a server that has the Copy and Restore jobs on it. The Copy job should have the primary instance name in it. But this is no substitute for documentation.

  20. What is the best approach for a t-log shipping configuration when patching SQL Servers?
    Should you disable all LS jobs on the primary while patching the primary, and then just re-enable them once patching and restarting are complete?
    No plans to fail servers over as patching is done in defined service window.

    Any other tips in regard to log shipping and SQL patching – a checklist or similar – would be great.

    Thanks

    • Hi, Mickey. Getting the right answer and a checklist here is a little beyond the scope of what we can do in a blog post comment, but that’s a great idea for a future post. Thanks for stopping by!

  21. Check the other server’s job list. If you see job names like LSCopy_“PrimaryServerName” and LSRestore_“PrimaryServerName”, that is your secondary server.

  22. Hi

    How do you configure log shipping across different domains? Please let me know if there is any documentation related to this.

  23. Hi
    Thanks for your reply.

  24. Okay, I’m setting up log shipping with a primary and secondary. The database is large, and the pipe is small between primary and secondary. I want to pre-initialize the secondary DB with data/schema, then turn on log shipping. What are the steps to do this?

  25. On the Primary, here is the setup:
    Full: every Sunday at 23:00
    Differential: every night, Monday – Saturday, at 23:00
    Log shipping via LS jobs: every hour during the day

    Say Wednesday night, for some reason, one of the .trn files gets accidentally corrupted or deleted from the backup folder.
    Thursday morning you come in and the chain broke from last night and won’t continue (you are missing a file in the restore folder).

    How do you overcome this to fix the broken chain?

    Can you restore last night’s differential and start from there,
    or take a differential right away up to the current moment?

    • Bass – I’m a big fan of doing experiments to find out stuff like this. Have you tried running an experiment on this in your environment? Create a database on your development server and give it a shot. This is one of the best ways to learn, getting under the hood and proving these things for yourself.

      I know it sounds cruel not to just tell you the answer, but it’ll get you started on a lifetime of experimenting and learning. Have fun!

  26. Hi
    How do you transfer the logins, server-level permissions, database-level permissions, and object-level permissions from one server to another in a log shipping plan?

    • Kamesh – log shipping transfers data inside the database, which includes permissions on things inside the database.

      It does not cover things outside of the database, like logins and server level permissions.

      When you Google for answers, leave out the log shipping part, and just look for answers on how to copy logins or server level permissions.

  27. Brent, we currently copy our Prod backup to other environments to refresh our main database about every 2 months. Is there a way to reinitialize log shipping with out starting with another full backup of the Source server?

  28. Hi
    1) Is it possible to take full, diff, and t-log backups on a log shipping primary database?

    2) What are errors 14420 and 14421 in log shipping?

  29. When we configure log shipping, LSAlert jobs are created for backup and restore. I want to know how these jobs work, and where the error is stored before being written to the logs. Does it right away make an entry into the msdb database?

  30. Hi Brent Ozar and Jes Schultz Borland – you are both great, giving such valuable information.
    In my practice: I configured log shipping on 2008; my backup job and restore job run successfully, but the copy job fails. Reason: it can’t access the path where the backup files are stored. But I shared the drive and folder where the backup files are stored, and I am still facing the same problem.

    Please give me a solution for this.

    Thank you

  31. Hi
    A log shipping primary database’s log file is full. How do you troubleshoot it?

  32. I want to configure log shipping from SQL 2008 R2 Enterprise edition to SQL 2008 R2 Standard. Will there be any problems with the secondary if I use compression and partitioning on the primary?

  33. Can you set up log shipping between sql server 2008 R2 (standard) and 2012 (standard)?

  34. While log shipping is running, if I add a data file on the primary server – for example, the primary server’s files are on the E: drive and the secondary server’s are on the F: drive – then the secondary server has no permissions to access the E: drive where the new data file is added. The copy job runs successfully, but the restore job fails with an error message like: “Can’t access the path”.

    How do I resolve this issue? Can’t we add a data file to the primary server without breaking log shipping?

    • This is a really interesting question. I wish I had the time to dedicate to give you an answer that’s just as interesting, but there’s a lot of possibilities here. The best way to get help on something like this is to narrow down the question as tightly as possible, then post it on http://StackOverflow.com or http://ServerFault.com. Include as much example code as you can so that others can quickly run it and get an idea of what you’re asking, and they might even post your code back with fixes. If you don’t get good answers within a day or two, let me know and I’ll take a look at the question to see if there’s a way we can get better answers.

  35. Hi Brent,

    I would like to know whether we can achieve log shipping between SQL Server 2008 SP2 and SQL Server 2008 R2 RTM, both Enterprise edition, with the destination DB being in STANDBY mode.

    Thanks in Advance,
    Deepak Almeida

  36. We log ship to a network drive. I can pull the log for the restore directly from the network drive where the backup is. I’d like to skip the “copy” step, but the log shipping setup did not allow me to. So I just copy right back to the same location as the backup. This works mostly, but about twice a week, my LS_Copy job fails with an “Already Exists” message.

    Is there a way to skip the “Copy” step?

    Or copy to ?

    thanks,

    tim

  37. Another situation: I back up to a remote drive every 15 minutes, 24×7. We keep 45 days’ worth. The folders have gotten quite large (over 4,000 files in each folder). This is affecting the performance of log shipping on the delete step. The next backup starts before the delete step is complete, causing the log shipping process to get logjammed.

    I’d like to remove the “delete files older than” step and run a separate delete job at night.
    Again, I don’t see in the Log Shipping wizard that I can remove the ‘delete files….’ step.

    Suggestions ?

    thanks very,

    tim

    • Hi, Tim. Just set the files to never get deleted in the Log Shipping Wizard (pick a really long time span) and manage the deletes yourself.

      • It looks like no matter how many days you put in, it still has to read the directory and determine if there are any files to delete. So that doesn’t really help.

        I ended up dropping my retention and cleaning up the folders to have fewer than 2,000 files. That did the trick – no more performance problems.

        Our other option was to rake off older files into an archive folder, leaving the main backup/restore folder smaller.
        We may still do this.

        thanks very,

        Tim

  38. Hi.
    I want to know how much logshipping Jobs (Backup, Copy, Restore) affects server’s performance.
    Do you have any idea?

    • The answer would depend on your individual server and your backups. Your best bet is to do the performance testing in your environment. You’ll find that there’s lots of settings you can tune to make the backups go faster, plus you’ll want to know what your server’s bottleneck is.

  39. Hi, thanks for the article. I have a small doubt; hope you can advise me on it.
    Is there a way to find which log shipping method was used to establish the data synchronisation?
    Eg:
    – Generate full backup of primary and restore in secondary.
    – Log shipping with existing database of the primary database.
    – Secondary database initialized (without changing secondary)

    If so, how can I find it? I have tried to find it in the log files but couldn’t.

    • Ravi – what’s the problem that you’re trying to solve?

      • I am trying to find which option was selected during the log shipping configuration.

        • Are you asking which option was selected on the “Initialize Secondary Database” tab of the setup wizard? If so, why is that important to you? What problem do you need to solve?

          • Yes, you are right. I tried to set up all three methods in a test environment and found that the last option – secondary database initialized (without changing the secondary) – had some performance issues compared to the first 2 options. That’s why I would like to know which option has been used, so that I can prevent these kinds of performance issues in my production environment in the future. Hope I am clear. Thanks.

  40. By the way, it’s in 2005

  41. Hi Jes
    I am having problems with my log shipping job: the copy job is failing due to an access error –
    “The access to the path is denied.”
    I have tried everything I know. I have given the secondary server permission to the folder on the primary where the backup is, I have shared the folder, and I even gave the secondary server permission to the drive.
    I don’t know if you can help.

    • The secondary server’s Agent account needs access to the folders – that may be different than the account running SQL Server. Did you check that it’s the correct account?

      You can provide more details, such as an error message, at dba.stackexchange.com. There are always lots of people there, watching and helping to answer specific questions!

    • I had the same issue. We resolved it by using the same *user account* name and password on the primary and secondary servers to access the shared folder. We also run the SQL Server Agent under that account.

  42. Hi Brent and Jes,
    Thanks for all your effort!

    I’ve got a question: I have two nodes in a cluster (SQL Server 2008) and I want to back up my cluster to another site (a DR site) using log shipping. Is it possible, and what is the roadmap? Sorry, maybe the question is ridiculous :/. Could you please clarify that for me?

    Thanks.

  43. Hey Brent,

    Sometimes it takes a long time for the copy job to run because of big t-log backup files, so the restore job reaches its threshold and alerts. How do I find the restore history, i.e. the files that were restored in that run and the time it took to restore them?

    I can look at the job history to see how long the restore job took, to get a rough idea, but it’s not helping.

    Thanks for looking.

    • Chris – those are in the MSDB tables for backup history and restore history. Fire up SSMS and type select * from msdb.dbo.backup or msdb.dbo.restore and IntelliSense will kick in from there, helping show you through some of the tables. You can also check those out in Books Online.
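A hedged sketch of that msdb lookup (the database name is a placeholder):

```sql
-- Recent log restores on the secondary, with the file each one used.
SELECT rh.restore_date,
       bmf.physical_device_name,
       bs.backup_finish_date
FROM msdb.dbo.restorehistory AS rh
JOIN msdb.dbo.backupset AS bs
    ON rh.backup_set_id = bs.backup_set_id
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE rh.destination_database_name = N'MyDatabase'
  AND rh.restore_type = 'L'    -- log restores only
ORDER BY rh.restore_date DESC;
```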

      • Haha, thanks. I always query msdb to get backup history; it’s not a regular thing for me to look at restore history. Now I know about restore_date. The job is supposed to kick in every 30 minutes; for a few rows there is a restore_date difference of 2 hrs or so, and based on the history_id, I should be able to get the file names. Got it :).

  44. Hi Jes,

    Did you ever come across the issue where everything works fine except no logs are being restored?
    I’ve set up log shipping manually between 2 instances that are totally separated and cannot ‘see’ each other. I went through the LS setup steps using the UI, but scripted the config and then cancelled the UI. So there is no real primary, and the log backup and copy are handled using our standard routines.
    I ran the LS setup script on the secondary (just the required part). Everything works fine, but the LS restore job just doesn’t restore. It finds the correct log but doesn’t restore it. If I manually restore a log file, the job notices that and just skips that file and continues with the next: it says it has found the log, but again it doesn’t restore it.
    I checked the error logs, traced it, executed logship.exe from the command line, manually executed sp_can_tlog_be_applied, etc. All works fine and no errors – just no restore…

    Any ideas?
    Chrz
    Thierry

    • No, but I haven’t tried to cancel out of the setup on the primary and use another method to backup and copy the log files. If the restore job is successful, how do you know the log isn’t being applied? Is there perhaps an issue with naming conventions?

      • thx for the quick reply

        I know from the job log it’s not being applied:

        2014-01-10 14:04:19.12 Found first log backup file to restore. Secondary DB: ‘xyz’, File: ‘xyz_backup.trn’

        2014-01-10 14:04:19.13 The restore operation was successful. Secondary Database: ‘xyz’, Number of log backup files restored: 0

        I replaced the network location with a local disk since I thought it was a network/permission issue but this doesn’t solve the issue.
        I also restored the latest backups because I thought maybe it was too far behind (never know), still no luck.

        I’m beginning to think there’s a bug somewhere. Hard to tell if I cannot find any errors though…

        Also found this thread that doesn’t really help
        http://social.msdn.microsoft.com/Forums/sqlserver/en-US/44d3b8b2-59aa-4ac5-90de-be9eab7e06a3/log-shipping-not-working?forum=sqldatabasemirroring&prof=required

        • Thierry – if you suspect a bug, contact Microsoft support. It costs $500 per call, but they work the problem with you until it’s done, and if it does turn out to be a bug, they’ll refund the $500.

          • thx for the reply, Brent. Before I do so, I’m going to set up a VM environment and see if I can simulate the behavior (maybe trying different versions as well, to see if it’s related to a specific version or build).
            Thx again to you (and Jes) for taking the time to respond to my questions

        • Hey– just a quick guess/question. Are you possibly using log backup files that aren’t generated by the built-in logshipping jobs? (Your own log backup job, or one from another script?)

          I tried to do that once years ago and it drove me nuts– the restore jobs would succeed but never actually restore the backup files. The restore job really cares about the filenames: they have to be in the exact time zone and format it would have used if it had created the log backups itself.

          If this is your issue, more info on it here: http://mikehillwig.com/2012/03/15/sports-analogies/

          If not, Mike Hillwig has some other great stuff on his blog about logshipping, too. :)

          –kendra

          edit: updated/fixed link
          edit2: I bet this is your issue! Reread your original comment and found the sentence “the log backup and copy is handled using our standard routines.” That’ll do it! Drove me nuts, too.

  45. Hi Jes,
    to test I just renamed one of the log files to match the LS backup file pattern and OOH MY GOD :) it worked. Steeee-rike for the ‘Ozar-team’ once again!
    I wonder why on earth it was designed like this.
    Anyway, thx again!

  46. Excellent! Glad we could help you out!

  47. I already have log shipping configured, and it was working fine.
    But the client asked me to change the path, so I went to the copy files destination folder and changed it from the D drive to the C drive.
    Then I copied all the transaction logs from D to C.

    After that, the secondary DB is not restoring.

    I thought maybe it was because the DB was not in sync,
    so I synced the secondary DB with the primary DB,
    but it is still not restoring.

    Do I need to clear all the transaction logs on the C drive before I start the copy and restore jobs?

    Thank you

    • Have you updated the log shipping configuration so the backup job, copy job, and restore job know where to look for the files?

  48. We have a table cleanup( deleting a bunch of rows) that happens monthly, which creates a huge transaction log. Any way to implement log shipping or the cleanup in a way that won’t force us to schlep that giant log across the wire?

    • There isn’t – that’s an operation that’s going to be fully logged. You could disable log shipping, set the database to simple recovery, do the operation, set it back to full recovery, and reinitialize log shipping, but I don’t know if that is any less time-consuming. You also lose point-in-time restore capabilities while the database is in simple.

      • Thanks Jes. Seems like kind of a dumb question, but I thought I’d ask – e.g. whether it could be done as some kind of bulk-logged operation or such…
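      For reference, the workaround described above, sketched in T-SQL – database and backup names are placeholders, and note that switching to SIMPLE breaks the log backup chain, so log shipping has to be reinitialized from a fresh full backup afterward:

      ```sql
      ALTER DATABASE YourDB SET RECOVERY SIMPLE;
      -- ... run the large monthly delete here ...
      ALTER DATABASE YourDB SET RECOVERY FULL;
      -- A new full backup restarts the log chain;
      -- reinitialize log shipping from it.
      BACKUP DATABASE YourDB TO DISK = N'\\backupshare\YourDB_full.bak';
      ```

      As noted above, point-in-time restore is lost for the window the database spends in simple recovery.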

  49. Hi Brent/Jes

    Please could you clarify my understanding of the restore alert threshold. I have just set up a LS configuration to delay the log file restore by 240 minutes. I’ve left the alert on the default 45 minute threshold but the alert seems to be firing – I guess the restore threshold is being exceeded.

    The wording of the alert suggests to me that it will alert if there is NO restore within the 45 minute period. Restores are actually happening every 15 minutes successfully. Should I read this as “The restore of a particular log needs to happen within 45 minutes of the backup being taken”? I.e., should my restore alert be configured to 285 minutes rather than 45?

    I’ve played around with the threshold and it works fine when set to 285, but for obvious reasons I wanted to understand it so I don’t surprise myself if restores genuinely fail for a period of time and I miss the alert for a few hours.

    Hope I explained that relatively clearly – it always seems simpler in my head :)

    Many thanks

    • Hey there,

      If you think about the alert monitoring, it’s trying to help you keep within your desired “Recovery Point Objective” and keep the data from being too far behind. I might have restored two log backups within 45 minutes, but maybe those log backups were taken just one minute apart– so if it only measured whether I’d restored a backup, it would be very hard to know what that meant.

      It sounds like you’ve tested the jobs and understand how they work. I wouldn’t just assume that you should up the threshold though– you might want to increase the frequency of your log backups, copy jobs, and restore times depending on your RPO and RTO. (More info on that here).

      Hope this helps!
      Kendra

  50. Hi Guys,
    If a restore_delay is used and a situation occurs where the primary server is lost and the secondary is being brought into service, what is the correct way to reset restore_delay to 0 to cause the log shipping restore job to immediately apply all available log backups to the secondary database please? We run a delay to give us time to react to issues on the primary before they are applied to the secondary.

    I looked through the msdb log shipping related sps and I’m not sure I can see anything appropriate. Would a simple update statement for restore_delay into msdb.dbo.log_shipping_secondary_databases be sufficient and safe or is there a better way please?

    Thanks for all your great community support,
    Mart
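    For what it’s worth, a hedged sketch: rather than updating msdb.dbo.log_shipping_secondary_databases directly, the documented stored procedure (which lives in master and is run on the secondary) accepts a @restore_delay parameter:

    ```sql
    -- Set the restore delay to 0 so the next run of the restore job
    -- applies all available log backups. Database name is a placeholder.
    EXEC master.dbo.sp_change_log_shipping_secondary_database
        @secondary_database = N'YourSecondaryDB',
        @restore_delay = 0;
    ```

    After that, start the restore job manually to apply the backlog immediately.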

  51. Hi Jes & Brent, Thanks for the great info and videos.
    I am quite new to MSSQL and have just set up my first log shipping configuration to use for DR purposes. I was surprised to see that on the secondary (read-only) copy of the DB, if I look at the file locations, it displays the paths that are correct for the live copy and not the RO copy. The following command returns the correct locations, but viewing DB_name \ Properties \ Files in Management Studio shows the paths of the primary DB.

    Can you explain this? Do I need to fix that if the secondary_RO is ever brought into production?

    This is the SQL that shows the correct paths.
    SELECT name, physical_name AS CurrentLocation, state_desc FROM sys.master_files

    Thanks

  52. Hello,

    I have just configured log shipping, and the wizard is currently backing up; it will restore the database onto my secondary server within a few hours. Will changes made to the primary server’s database during the backup and during the restore be reflected in the secondary database after the wizard completes? Don’t want to miss anything!

    Thanks much!

    • The changes to the database will be captured in the transaction log, so they will be in the log backups. After your full backup is restored, apply all log backups that have been taken since it began, and you’ll be caught up.
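      A minimal sketch of that catch-up, with placeholder names – each log backup taken since the full backup began is applied in order, WITH NORECOVERY (or WITH STANDBY if the secondary should be readable):

      ```sql
      RESTORE LOG YourDB
          FROM DISK = N'\\backupshare\YourDB_20140101120000.trn'
          WITH NORECOVERY;
      -- ...repeat for each subsequent log backup, in sequence...
      ```

      Once the chain is caught up, the log shipping restore job takes over from there.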

  53. Pingback: What is SQL Server log shipping? - SQL Shack

  54. Hi Jes/Brent,
    we have configured log shipping with the primary as a production server and the secondary as a DR server, and
    it was all running well until yesterday morning, when the restore job on the secondary server started failing. What could be the reason, and how can I fix it? The error is: “Executed as user: User_Name. The log shipping destination Server_Name.DB_Name is out of sync by 135 minutes. [SQLSTATE 42000] (Error 14421). The step failed.” Meanwhile, I have disabled the restore job on my DR server, as I couldn’t get any help on this issue.
    I hope you can help me resolve this, please.

    • Krishna – solving this one is a little beyond what we can do in a blog post comment. We would need to see the exact error messages.

      Here’s a few things to consider – has security changed on the servers? Were any of the log backups accidentally deleted? Did someone take a log backup manually and put it somewhere else?

      Hope that helps!
      Brent

  55. Hey Everyone,

    I had a quick question: when setting up log shipping for multiple databases, is it a best practice that the log backups are all stored in separate shares per database? Or is it possible to back up all logs to a single folder, with SQL Server knowing which belongs to which database?

    Similar question with the agent jobs, does there have to be one log backup job per database?

    Thanks!

    Jeremie Grund

    • When you set up log shipping, SQL Server will create separate Agent jobs for each database. However, the files can all go to one folder – there will be different names for each database’s log backups.

  56. Log shipping setup – do the storage drives on the primary site and the DR site need to have the same names?

    • It’s not a requirement. However, for many, many reasons, having the same drives on all SQL Servers in your environment is easiest – always having a D:, L:, and T:, for example, and backing up to the same drive or share on each server. Consistency is key!

  57. I just configured Log Shipping and wanted to test the setup to verify data integrity. For this reason, I selected Standby Mode as the database state so I could read the database and check the row updates. When I make a change on the Primary, the restore job fails on the Secondary with the message that our TUF file is not a valid undo file for the database – verify the file path and specify the correct file. During Log Shipping configuration, you never specifically enter information for this particular file. Does anyone have suggestions on how to resolve this error? We will not use Standby mode in production, but I wanted to use it for my testing. We are using SQL Server 2005. Thanks for your time and assistance!

    • It’s been a while since I’ve seen that error, but check the permissions on the folder the logs are saved to. Per this post, you may not have the right permissions.

  58. Thank you for that suggestion. I will look into that further.

  59. Is there a way to enable compression after the Log Shipping process has been already setup and running?

    • Yes! If you go to the database properties and open the transaction log shipping page, then open Backup Settings, you can pick the backup compression level there. The change will be made the next time a log backup is taken.
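      The script equivalent, as a hedged sketch – the primary-side log shipping procedure (in master) takes a @backup_compression parameter (0 = disabled, 1 = enabled, 2 = use the server default):

      ```sql
      EXEC master.dbo.sp_change_log_shipping_primary_database
          @database = N'YourPrimaryDB',  -- placeholder name
          @backup_compression = 1;       -- compress future log backups
      ```

      As with the GUI route, the change applies starting with the next log backup.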

  60. Pingback: What is SQL Server log shipping? | SQL Shack - articles about database auditing, server performance, data recovery, and more

  61. hi Brent, Jes – your posts are awesome, and your way of explaining makes them very easy to understand.

    I have a doubt: what is the sqllogship.exe file in log shipping? What does it do? Will log shipping still work if it goes missing? How do I recover the .exe file?

    • Thanks Raj – I’m glad we can help!

      sqllogship.exe is the application that runs log shipping. If you were to set up log shipping and look at each of the created jobs, you’ll notice they call sqllogship.exe with a set of parameters. More information on it can be found at http://msdn.microsoft.com/en-us/library/bb283327.aspx. What happens if you delete the .exe? I’ve never done that, but my guess is that all log shipping set up on that server starts to fail. You could try to copy the file from another SQL Server instance into the correct directory, but I don’t know if that will make it work.

  62. Jes,

    we are in the process of downgrading all our DR servers involved in log shipping due to licensing restrictions. The only way to downgrade the instance is to uninstall and reinstall.
    Now, the DBs are quite large and we would like to avoid re-initializing all of the log shipping from scratch.

    Is there a way to downgrade the SQL instance and yet not break any logshipping chain?

    Thanks,
    Anand A

  63. Hi Jes Schultz Borland
    My undo file was missing when my cluster moved to the second node, and my database was showing restoring/standby.

    My question is: while I was putting the database into standby, my cluster failed over, and after that no further logs were restored because my TUF file was missing.

    Was it automatically deleted, or did it never get generated? The database was showing restoring/standby in Management Studio.
    Here I think the undo file was generated, but not completely.

    Please explain.

    • MFK – if your undo file was missing, it was placed on storage that wasn’t visible to both nodes. You’ll want to start by using the Cluster Validation Wizard during your next maintenance window. It’ll take a few minutes to run, and it will produce a report showing problems with the cluster (like storage that isn’t available to all nodes.) From there you can fix the root cause of the issue.

      It’s not unusual to find LOTS of problems on a cluster that you’re validating for the first time – take heart that you’re not the only person in this situation, heh. Once you go through it the first time, then you’ll make sure your clusters are properly validated BEFORE SQL Server gets installed, and it’ll make problems like this go away.

      • Also, there are corruption errors in the system event logs, and the cluster service failed.
        Could the reason be a storage issue, where a bad sector occurred and the file got deleted?

      • Hi Brent … storage says there is no way a file can be hidden when the cluster moves.

        • MFK – wow, I’ve never heard storage talk before! 😉

          There is, actually. If the file is saved on local storage (think C drive), then when the clustered instance fails over, the file that was on the C drive doesn’t move over to the other server’s C drive.

          • The file is not saved on a local drive, so how do I confirm to the storage team that the file went missing on the storage? They say it is not possible for a file to be lost on the drive when the cluster moves, because the storage would never do that.

          • MFK – unfortunately troubleshooting this is beyond something we can do in a blog post comment.

  64. As part of our testing plan, I recently configured log shipping in a production to DR environment using our largest database to test the timing intervals. While the configuration was successful, the restore job failed because we needed to set up our encryption key on the DR server. After setting up the appropriate key on the DR server, do we need to re-configure Log Shipping again from Primary to Secondary? Or will the Restore job automatically pick up the change. Having said that, if it’s been a few days between noticing the error and applying the fix, will log shipping fail anyway because of the backlog?

    • The log backups would already have been encrypted, so they should be applied after the key is restored. Log shipping will not fail because of the backlog, but it could take a long time to catch up – especially if you have frequent log backups. You could take a differential backup of the production database and restore it to the secondary, then re-enable the log shipping jobs, to speed up the restore process.
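      Sketched with placeholder names – the differential brings the secondary forward in one step, after which the restore job only has to apply the logs taken since the differential:

      ```sql
      -- On the primary:
      BACKUP DATABASE YourDB
          TO DISK = N'\\backupshare\YourDB_catchup.dif'
          WITH DIFFERENTIAL;
      -- On the secondary (leave it restoring so log shipping can continue):
      RESTORE DATABASE YourDB
          FROM DISK = N'\\backupshare\YourDB_catchup.dif'
          WITH NORECOVERY;
      ```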

  65. Is it common for Log Shipping removal to take over 19 hours for a database that is 21,662,271 KB in size? Could that be enhanced with changes to Network bandwidth settings? We are running tests for our timing before implementing production. The initial configuration took approximately 12 hours, so I wasn’t expecting the removal of Log Shipping to take this long. Thanks!

    • No, I’ve never seen it take more than a minute. Use a tool like sp_who2 or sp_WhoIsActive to check if there are any active connections to that database, and close any you find. You may also want to use a tool like sp_AskBrent to figure out what is going on while it’s being removed. If you’re using the GUI to attempt removal, I suggest the stored procedures instead.

  66. We have been conducting planned failover testing in our production/dr environment. Upon setting up our Secondary as the Primary and trying to log ship to the former Primary/now Secondary, we receive the following error:

    The media family on device is incorrectly formed. SQL Server cannot process this media family. RESTORE FILELIST is terminating abnormally. (Microsoft SQL Server, Error: 3241)

    I’ve confirmed with our head DBA that our SQL Server versions are the same across the servers where we have implemented Log Shipping. The file doesn’t appear to be corrupt, as it does ship from the initial Primary to the Secondary. It just fails with the above error when we try to reverse the direction.

    Could this be related to encryption settings? Since we’ve verified the SQL Server versions and file integrity, I wasn’t sure what other items to consider in our troubleshooting efforts.

    Thank you for your time and assistance.

  67. Thank you for that information. Our head dba actually revisited and re-configured encryption on each of the servers. We seem to be able to at least configure log shipping in one direction between all servers now.

    I did revisit a planned failover test, but after role reversal, I checked the files from the backup/copy/restore on the new Primary and Secondary servers and continue to receive a Primary Key Violation when trying to view the Transaction Log Shipping Status report. I’ve encountered this before, but it seems to be happening every time.

    Am I missing a step that would prevent this violation from occurring? I do not see duplicate or inaccurate entries on the log shipping monitor tables.

    I should note that we are using SQL Server 2005 and I read where this issue was corrected in later versions. Could this be part of our problem?

    Thanks so much for your help.

    • K_ps – I’d run a Profiler trace while you run the log shipping status report, and that way you’ll be able to get the exact SELECT statement that generates the error. From there you can troubleshoot the duplicates manually.

  68. In our log shipping configuration, we have setup the Recovery mode for the Secondary server to be Standby, Disconnect users …

    During Planned Failover testing, after executing role reversal and returning to the original A to B configuration, I have been executing a Backup with NoRecovery on the Primary Server which I am preparing to revert back to the Secondary. That leaves the database in a Restoring state. When initially configured with the Standby/Disconnect users selection, the database reflects Standby/Read Only. I see where there is a Standby argument for the Backup command. Would I use this to revert back to that original configuration? I wasn’t sure what the Standby file name would be referring to.

    Also, we currently have Replication on our databases. I see that you can implement Log Shipping with Replication. Are there any settings required when implementing log shipping with replication? I have not been able to find anything specific in this regard.

    (I am aware that a problem occurs in that replication does not continue after a log shipping failover. )

    Thank you again for your assistance.

  69. Update to post:

    After finally completing our first successful planned failover test, I learned something in answer to the previous post. After manually running the backup, copy, and restore jobs, the database that was previously in a Restoring State returned to the log shipping setting of Standby/Read Only.

    Still confirming Replication settings, but I wanted to follow up with that update.

    Thanks again for your assistance.

    • “Standby” is not a mode a user can put the database in. As you’ve seen, the database will revert to that state when restores are complete.

      I haven’t combined log shipping and replication, but yes, it can be done. Start here for more information: http://msdn.microsoft.com/en-us/library/ms151224.aspx.

      • In my experience, a user CAN put a database in “Standby” – it’s the step I use when setting up my initial log shipping database.
        After “RESTORE WITH NORECOVERY”, you can use
        “RESTORE DATABASE [xxx]
        WITH STANDBY = N’E:\path\UNDODB.BAK'”
        and it will create the “UNDODB.BAK” file automatically for you, without an existing “UNDODB.BAK”.
        Then the DB will be in “STANDBY/Read-Only” status.

  70. Does the initial FULL backup that is done to initiate log shipping have to be a non-COPY_ONLY backup?

    I’ve been tasked with getting log shipping set up on 40+ databases and trying to automate the process as much as possible, but we also have CommVault doing FULL/DIFF/TLOG backups (I know I need to stop the TLOG backups if I want to keep my sanity) and I’m trying to figure out what I can/can’t do.

    My plan right now is to do normal, non-COPY_ONLY backups of all DBs, get log shipping going/working, and then after that do a FULL backup again in CommVault (and turn its normal FULL/DIFF backups back on). Any comments on whether I’m on the right track?

  71. Three databases log shipped to Standby/ReadOnly databases on separate Warehouse server; two are CRM/ERP and the third has no ongoing transactions. Backups occur every 15 minutes and are restored hourly; TRN file sizes range from 5K-7K KB on one, to 200KB on the other to 4KB on the third. *All* files take 30 seconds to process; 9 of those are consistently in the step “Recovery completed for database X (database ID Y) in 9 second(s)…” (using trace flags 3004, 3605 to get data to logs). Analysis, redo, undo values differ, from (48,0,57) to (96,0,204); LSN differences from 0 to 37,743. (TUF sizes from 88KB to 1,376KB). All of the files are on the SAN, albeit in different locations; the TUF files are in the Data folder for the Warehouse destination. VLFs from 17 (avgsize 812KB) to 491 (29,658) and 471 (36,941); only 1 active for all 3 DBs when I ran the query.

    SQL 2008R2 (SP1), 350GB RAM, 24 cores, plenty of hardware; network at 10GB. What am I missing?

    Any insights on next steps I might take, or information I might gather, to reduce the file process time per file?

  72. We recently configured our 5th database for log shipping in production, using SQL Server 2005. The process took about 12 hours, which was consistent with our testing. The size of the .bak file was 27,714,601 KB. Upon completion, the first log backup job kicked off and kept running. After two hours, the .trn file size had grown to 34,809,167 KB. I removed the log shipping configuration and am troubleshooting why the backup job never completes. We’ve run numerous tests and also have 4 databases successfully implemented in Production with Log Shipping, but we have not run into this problem yet. Did we miss something? The details in the log shipping configuration for this particular database are the same as those used in the other log shipping configurations (same network share folder, same timing, same copied folder location, etc.)

    If can provide any guidance on where I could troubleshoot this further, I would greatly appreciate it. As always, thanks for your time and input.

    • How large is your log file? How large are transaction log files, in general?

      A trick may be to initialize your log shipping from a full backup, take and restore a differential backup, then take and restore the log backups.

  73. Thank you for following up.

    The sizes of our production log files currently being log shipped range anywhere from 192 KB to 1,604,406 KB after heavy processing at night.

    This particular database with the never ending log backup is one of our largest databases with the heaviest transaction load. If the restore from the initial log shipping configuration takes over 12 hours, then the log has built up over that time since the full backup was restored. I suppose that may explain why the first log backup job is running so long. I was just surprised that the size of the first log file was exceeding the size of the full backup file.

    • K_ps – there you go! I’ve seen similar problems with surprisingly large differential backups when folks didn’t realize just how much they were changing the database with full index rebuilds.

  74. Jes,

    I liked your idea of taking and restoring a differential after the full backup and then kicking off the log backup jobs. I configured log shipping for this database on Sunday, during low transaction load hours, and had planned on catching the end of the configuration process so I could take the differential before that first log backup ran. Unfortunately, the restore ran 3 hours longer than anticipated and I missed the window. As a result, the first transaction log backup job to kick off after successful completion of log shipping configuration has been running for 10 hours with a log file size of 229,603,143 KB. Would it be possible to delete, disable, or kill this job, take the differential and restore it to secondary, then re-enable the log shipping backup job? That shouldn’t break the log chain but I didn’t know if I was missing any potential for data loss.

    Thanks so much for your input.

  75. Hi Jes,

    I wanted to follow up on our runaway transaction log backup job. I tried the approach with the differential tonight as per the steps below:

    7:46 am – log shipping configuration kicks off
    9:32 am – backup file is stored in network share folder. File size is 26.1 GB
    9:30 pm – Log shipping configuration completes.
    – I disable the log shipping backup, copy, and restore jobs
    9:31 pm – I enter command to backup database with differential
    (In this query window where I typed the command to backup with differential, I still had the “use dbname” command and I believe it should have said “use master.” Would that have caused a problem?)
    9:33 pm – Differential completes with a file size of 768 MB.
    – I re-enable the backup and copy jobs to get that process moving along after the differential
    – I copy the differential file to the secondary location
    9:45 pm – The first transaction log backup job kicks off
    9:59 pm – After the Differential file is copied, I restore the database on Secondary using the differential
    (In this query window, I did have the “use master” command prior to the restore command.)
    11:02 pm – The restore is still running
    – The transaction log backup job that was created at 9:45 pm is still running with a file size of 28 GB and still growing.

    I had thought that by taking the differential, the transaction log backup job would not run as long as it wouldn’t have to capture all of the data from the time of the full backup to the time of the differential. Am I missing something in this process? I am rather new and making plenty of mistakes, so it’s possible. I’ve tried searching for other examples of this scenario but have been unable to find anything to date.

    If you have any other pointers, I would greatly appreciate it. I am wondering if we need to consider a separate DR solution for this database.

    Thank you for your time.

    • This is becoming far more involved than can be answered with a blog post comment. I’d suggest you post the info at stackoverflow.com, where people may be able to help more quickly, or contact us at help@brentozar.com to discuss an engagement!

  76. Thank you for providing that direction. I will try that out.

  77. Can you set up log shipping between SQL Server 2005 (Enterprise) and 2012 (Standard)? Can you use any combination of editions/versions, as long as you are log shipping from a lower version to a higher version? Thanks

  78. Jes,

    Thanks for the great article!! Quick question – can you have `transaction log` and `log shipping` turned on at the same time on a database server?

    Thanks

  79. A typo on my previous post –

    Jes,

    Thanks for the great article!! Quick question – can you have `transaction log` back up and `log shipping` turned on at the same time for a database server?

    Thanks

    • If you had a log backup job set up and a log shipping job set up, both would run. The problem, however, would be restores. Log backups are sequential – they contain an LSN that tells SQL Server what transactions are in them, and what order to restore them in. Thus, the log shipping restore job on your secondary would fail because it’s “missing” a log backup.

      If you have another procedure or task that already takes log backups, you could roll-your-own log shipping.
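      If you do roll your own, a hedged sketch for spotting a broken chain before the restore job does – each log backup’s first_lsn should equal the previous one’s last_lsn, and a gap means a backup went missing:

      ```sql
      SELECT bs.backup_start_date, bs.first_lsn, bs.last_lsn,
             bmf.physical_device_name
      FROM msdb.dbo.backupset AS bs
      JOIN msdb.dbo.backupmediafamily AS bmf
          ON bs.media_set_id = bmf.media_set_id
      WHERE bs.database_name = N'YourDB'  -- placeholder
        AND bs.type = 'L'                 -- log backups only
      ORDER BY bs.backup_start_date;
      ```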

  80. We are in the process of upgrading and migrating an application. It is currently running on SQL Server 2008R2, Standard Edition. The new database will be SQL Server 2012 SP2, Enterprise Edition with AlwaysOn Availability Groups. To limit the downtime for the migration, it has been requested that I set up log shipping from the old to the new, then stop that and create the AlwaysOn on the new at cutover time.
    I see that it is possible to log ship from SS 2008 R2 to 2012, which is exactly what we want to do. Is there an issue going from Standard Edition to Enterprise Edition?

    • Scott, that should be successful, but as always, test test test! Trying to log ship from Enterprise to Standard could be an issue, if you had Enterprise features enabled.

  81. Hi Jes\Brent,

    We have a huge database (3 TB), and its data files have 1.5 TB of free (unused) space. We have to release the free space back to the disks, so we are planning to shrink the data files of the log shipped database.
    My questions are: if we perform the shrink operation on the primary server, do those changes automatically get reflected on the secondary server as well, and does the operation break log shipping, since that database is part of log shipping? I know we can shrink the log file without any issues, but I have not tried it with data files.
    Can you please explain how to perform a data file shrink operation, as per best practices, when the database is part of log shipping?

    Thanks in advance, and have a great day!

    • I haven’t tried to shrink the data files of a database involved in log shipping. However, as that is not a logged transaction, it should not affect log shipping.

      • Thanks, Jes, for your reply. What about the data files of the secondary database? Do they get shrunk automatically if we perform the shrink on the primary server?

        • Changing the data file(s) on the primary will not change anything about the data file(s) on the secondary. The only thing log shipping will move between the two servers is logged transactions – no settings, no jobs, no users.

  82. Question: What is the “dba’s preferred” alternative to sqllogship.exe ? (for indexing, Ola’s script is king… what is king for log shipping?)

    I find the opacity of sqllogship to be a problem. And we need to use various copy methods to different destinations. It would be awesome if sqllogship were open source, or if there were a PowerShell drop-in replacement.

    What is the preferred tool set? (there seem to be a few, but can’t tell which one is popular/road proven) ?

    • Sam – typically people use third party apps like Dell LiteSpeed, Idera SQLsafe, or Red Gate SQLbackup. You get better log shipping, plus all kinds of cool backup compression and encryption options.

      • I used Red Gate SQL Backup for years for log shipping. Loved it. But now SQL 2014 (which I am on) has backup compression and encryption native (in Standard Edition, yeah!).

        Also, I need more control over how the logs move than those tools provide. Also, FWIW, I have experienced hard-to-diagnose problems when using Red Gate.

        The right set of PowerShell scripts would totally do the job. Is there such a kit that has a decent track record?

      • Research is in progress. Status so far: Idera SQLsafe currently has zero support for backup or transport through the cloud. (SQL 2014 has native support for Azure blob storage, so that is a pretty big strike against SQLsafe.)

        • Sam – correct, I haven’t seen good vendor support for cloud APIs because they change so fast. Instead, what you do for that is to land the backups onto a file share, and use a file share sync tool to push the data up to the cloud.

          • Yes and no. SQL 2014 backup to Azure blob storage works great (I ran it for a month). But there is no native ability to log ship over Azure (and, oddly, an MSDN post by a softie showing a proof of concept of log shipping over Azure was removed by MS a couple of weeks ago). The biggest issue here is that you have to roll the whole solution yourself (job tracking, monitoring, etc.) because sqllogship.exe is closed source.

  83. I am using merge replication from my production server to the DR server. But how can I keep the DR server in a state where users cannot make changes to it while the primary server is up and running?

  84. We have a 1.8 TB DB (that is rapidly growing every day). We currently have Transactional Log Shipping (TLS) set up to a secondary server (no failover on that side yet). We are taking 1 weekly full backup, then daily differentials, and we have TLS set to every 5 minutes. We have never had an issue with TLS. Yet when a user requested a restore of the full database to another location, we restored a full backup, tried to restore the last diff we needed, and got a broken LSN chain error. We then tried all of the diffs for that week with no success. Yet when I started restoring from those hundreds of TLS files, we did not receive a single error. We researched the backupset table in msdb and can’t find any LSN issues there; the chain seems to be correct.
    A coworker thinks that TLS running while the full or differential is being taken may be causing this break, but we cannot find anything saying so. We are also wondering if, because the TLS and diff/full times overlap (due to the tremendous size of the DB and data churn – we do not run diffs on the same day we run fulls), the LSNs are being captured in a weird order.
    Should TLS be running during full and diff backups?
    Has anyone else heard of anything like this before?

    • You are totally supported to run log backups while a full backup or a differential backup is running.

      What you are seeing is consistent with an extra full backup having been run after your weekly full, and that “extra” backup reset the differential base. This frequently happens if someone takes a backup to refresh a dev environment without telling anyone. Sometimes there is an enterprise level tool that sneaks in and runs a backup used for disaster recovery, too.

      Basically, if you are relying on differential backups, any extra full backups (that are run without the “copy_only” flag) will render your diffs unusable. The log shipping is a red herring, it’s not part of the issue.
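
      You can confirm the culprit straight from msdb. A minimal sketch (the database name and backup path are hypothetical):

      ```sql
      -- List full backups and their differential base; any row with
      -- is_copy_only = 0 taken after your weekly full is the one that
      -- moved the differential base out from under your diffs.
      SELECT  database_name,
              backup_start_date,
              is_copy_only,
              differential_base_lsn
      FROM    msdb.dbo.backupset
      WHERE   database_name = N'YourDatabase'   -- hypothetical name
        AND   type = 'D'                        -- 'D' = full database backup
      ORDER BY backup_start_date DESC;

      -- Out-of-band fulls taken WITH COPY_ONLY leave the differential
      -- base (and the log chain) untouched:
      BACKUP DATABASE YourDatabase
        TO DISK = N'X:\Backups\YourDatabase_copyonly.bak'  -- hypothetical path
        WITH COPY_ONLY, COMPRESSION;
      ```

      Any non-copy-only full that appears between your weekly full and a failing differential is the one that reset the base.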

  85. What process or service locks the .wrk file when log shipping is running? It tells me that “The action can’t be completed because the file is open in another program.” I just want to delete it – it’s old – but I can’t remember where I can kill the process that will let me delete it. I’ve already done another backup on the primary and restored it on the secondary… all is running fine, no alerts.

    • The .wrk file is created when a log backup file is in the process of being copied from the backup location to the restore location. Did you have a failed copy job?

      • I understand what the file is for and how it works. Yes, I had a failed copy; I fixed that problem, but the file is still “in use”. I can’t remember what process I canceled that allowed me to delete the .wrk file.

  86. Have read through the above blog and thought I would describe our situation to see if anyone else has seen the same setup. I am planning on using Copy-Only full backups that will be taken on a regular basis using SAN snapshots. I would also like to have some of the databases use log shipping to a local DB server for redundancy and also a tertiary server in our DR site. Would the use of copy-only backups via SAN snapshots disrupt the log shipping setup?

    Overall I’m looking for an efficient way to use SAN snapshots for backup and short-term retention, and log shipping for HA and point-in-time recovery. AlwaysOn Availability Groups are out of the question due to licensing, unfortunately.

    Thanks, Pete

    • Hi Pete,

      If you only ever do copy_only full backups, it will never reset your differential base. That won’t break log shipping, but it might mean that you can’t use a differential backup periodically to “catch-up” log shipping if you need to, like I describe here: http://www.brentozar.com/archive/2014/09/log-shipping-magic-using-differential-backup-delayed-secondary/

      Can I ask why copy_only full backups? Do you have other full backups that will also run, and you want those to reset the differential base?

      Kendra

      • Hi Kendra, The copy-only backups are taken via LUN snapshots on the SAN. We run another maintenance plan taking fulls and logs (logs will obviously have to be removed from the maintenance plan to implement TLS). The SAN snapshots are taken so we can quickly refresh UAT and DEV environments.

  87. We have three SQL Server instances on 3 different servers. Call them A, B, and C.

    A is dedicated to a critical application; it log ships to B (as the secondary).

    Now they want to do log shipping for C, which is dedicated to another critical application.

    The bean counters are asking why I can’t use A or B as the secondary.

    Aside from it not seeming like a very good idea (let alone the fact that we shouldn’t be using a poor-man’s solution like log-shipping for databases this important in the first place) is there a valid technical reason why I cannot?

    • There is no technical reason not to use B as the secondary for both A and C. You want to carefully schedule the copy and restore jobs, but if you do that, there should be no problem.

      Log shipping is a valid HA and DR solution as long as it meets the RPO and RTO for the business. I don’t feel it’s a poor-man’s solution – it’s one of the most well-established and reliable solutions in SQL Server!

    • Jes is right. It isn’t a poor man’s DR. You have to remember that DBAs are in a service-oriented position. We are a cost to the company – a needed cost, but to the financial side of the company we are dollar figures flowing out of the company’s profits. All of infrastructure needs to squeeze as much work out of a server as possible; otherwise we are wasting a lot of money. So why can’t you host multiple servers’ DBs on the same box and save some money instead of having perfectly good servers go to waste? :-)
      I use log shipping to my DR site because in a real DR scenario (where the primary site goes completely down) I can get a 5-minute RPO with a 30-minute RTO (depending on the time of day). In agreeing to load many servers’ databases onto one or two servers (and using a few strategically placed SQL aliases), I have also managed expectations: the systems will be up and running in 30 minutes, but they are going to run really slowly until we can bring new hardware in and migrate the databases onto other servers (using what, you may ask? Log shipping). The business leaders have signed off on this (via my DR plans and within emails), so I am covered in case complaints occur during DR. It means more long hours for us, but in the end we are saving the company a lot of money by working with the tools that MS provides and keeping our systems (especially our backup ones) to a minimal footprint… :-)

  88. I seem to have discovered something on SQL 2014 SE: Log shipping supports backup compression (which is native to SQL 2014 SE), but does NOT appear to support backup encryption (which is also native to SQL 2014 SE). Here I thought I was finally free of redgate, but it appears I yet wear chains.

    • Backup encryption is available in Standard, Enterprise, and BI editions for SQL Server 2014. As far as I can tell, it’s supported with log shipping – but I haven’t set it up. In order for it to work, the encryption cert must be restored on all the secondaries.

      • Jes,
        Yes, you are correct. The certs must be on both the source and destination systems for encryption to work.
        The point is that when you script a log backup job in T-SQL, it is only encrypted (AFAIK) if you add the WITH ENCRYPTION option. When you use the log shipping wizard, there seems to be no checkbox to enable encryption (or to choose the cert… as you may have more than one…). So, via the wizard, it seems that encrypted log shipping is not available.

        • Correct – there is no option for using encryption in the log shipping wizard. I don’t see an option to set that using the stored procedures, either. Might be worth filing a bug on Connect!
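
        For anyone scripting it themselves, an encrypted log backup might look like this (the certificate name, paths, and password are hypothetical; this assumes SQL Server 2014 or later with a database master key and certificate already in place on the primary):

        ```sql
        -- On the primary: back up the log with native encryption.
        BACKUP LOG YourDatabase
          TO DISK = N'X:\LogShip\YourDatabase_20150101.trn'
          WITH COMPRESSION,
               ENCRYPTION (ALGORITHM = AES_256,
                           SERVER CERTIFICATE = BackupCert);

        -- On each secondary: the certificate AND its private key must be
        -- restored before the encrypted log backups can be restored.
        CREATE CERTIFICATE BackupCert
          FROM FILE = N'X:\Certs\BackupCert.cer'
          WITH PRIVATE KEY (FILE = N'X:\Certs\BackupCert.pvk',
                            DECRYPTION BY PASSWORD = N'StrongPasswordHere');
        ```

        Wiring commands like these into your own Agent jobs in place of the wizard-generated backup job is one workaround, at the cost of maintaining those jobs yourself.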

  89. Hi,

    I have a situation here. I need to bring the standby DB online and then put it back into standby again for a DR exercise, without switching roles with the production DB. Is this possible without using a full backup of the production DB to reinitialize log shipping?

  90. Excellent picture :-)

  91. hi,

    I have manually copied the .trn file to the secondary because I get an error like ‘no system resource to do this operation’ when lscopy is triggered. My prod log file is 11 GB in size. After I copied the log file from prod to the secondary, how will lscopy identify that the file was already copied, even though that was not done by the lscopy job?

    Thanks in advance

    • Nathanayyanar – this is one of those great times where I’d suggest trying it to find out. It’s often better to just run an experiment in your own environment rather than waiting for someone else to let you know.

  92. Dear sir, please tell me:
    (i) Log shipping is running, and later a user adds a log file to the same database
    on the primary server. Both servers have the same configuration. What will happen?

    (ii) I have to configure log shipping for 3 databases. How many backup and copy jobs will be created?

    • Satya – why not give it a shot? Go ahead and run an experiment yourself and you’ll learn a lot about log shipping. Getting your hands dirty is the best way to learn answers like this.

    • Brent’s right, log shipping is one of those things that you can get your hands dirty testing dozens of times and not break anything. Find a small DEV database and have a blast.
      As to the answer to number two: if you use the default log shipping setup in SSMS (where you do not pre-build the database in the new location), the following will happen:
      1) One BAK file will be created in your log storage location (which you set up in your LS configuration).
      2) While that BAK file is being restored to the secondary server, TRN files will be created in the background. (Note: make sure any other LOG backup jobs for this database on the first server are turned off no matter what, or you will be hunting for missing files all the time.)
      3) After the database has restored, the next timed restore (that you set up in your initial log shipping config) will begin running through all of those TRN files (depending on the initial timing).
      So the final answer is (if you use the SSMS LS config to do your initial restore): 1 BAK file per database, plus as many TRN files as it takes for the initial restore to occur.
      Depending on the size of the database and the network speed, you may want to use fulls and differentials in conjunction with log files (leaving the database in NORECOVERY mode), and then set up LS in SSMS using the pre-restored DB. A good example: we have a 2 TB beast of a database that I have done this with. It takes 3-4 hours to back up, 6-8 hours for the initial restore, and about 2 hours to transfer the file across our network pipe (depending on traffic). So in my case this becomes a two-day, many-file event to set up the initial copy on the other server. If set up correctly, the LS config itself takes about ten minutes…

      • And then I noticed the real question, which was about the jobs themselves… lol
        1 per DB for backup (on the primary server)
        1 per DB for copy (on the secondary server)
        1 per DB for restore (on the secondary server)
        1 for all for your LS alert (on whichever server you set up as your monitoring server)
        So in your case, 10 jobs will be created.
        I do find the alerting lacking, so I scripted an SP to show exactly which DB is having the issue, instead of just being told that something is wrong… :-)
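
        The pre-seeding approach described above (full plus differential, left restorable) might be sketched like this (the database name, logical file names, and paths are all hypothetical):

        ```sql
        -- On the secondary: restore the full, then the latest differential,
        -- leaving the database in NORECOVERY so log backups still apply.
        RESTORE DATABASE BigDatabase
          FROM DISK = N'X:\Seed\BigDatabase_full.bak'
          WITH NORECOVERY,
               MOVE N'BigDatabase'     TO N'D:\Data\BigDatabase.mdf',
               MOVE N'BigDatabase_log' TO N'L:\Logs\BigDatabase_log.ldf';

        RESTORE DATABASE BigDatabase
          FROM DISK = N'X:\Seed\BigDatabase_diff.bak'
          WITH NORECOVERY;
        ```

        Then, in the log shipping wizard, choose “No, the secondary database is initialized” so it picks up from there rather than taking its own full backup.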

  93. Dear sir,

    My primary database ‘xyz’ has a secondary database ‘xyz’ on another instance with the same name, which is in sync and working fine. I tried to create one more secondary database with a different name, ‘xyz_new’, for the same primary database ‘xyz’ on the same secondary instance, giving different names for lscopy and lsrestore.

    The configuration completed successfully without any error message, but when I checked, no lscopy or lsrestore jobs had been created for the new secondary DB ‘xyz_new’.

    The destination paths for lscopy for ‘xyz’ and ‘xyz_new’ on the secondary server are different.
    Can you please help?

    • Hi! Unfortunately, troubleshooting this is a little beyond something we can do in a blog post comment. To keep things simple, though, I’d stick with the same paths on both servers. That way when you add a file to that database in the future, it won’t break log shipping.

  94. We are using LS as our DR solution.
    The transaction log is backed up every 15 minutes, copied to the secondary server, and restored to a standby DB.

    Is there a way to not restore a .trn file unless a newer .trn already exists?
    I want to avoid the case where the primary DB is unreachable and the secondary has already restored the last .trn as standby/read-only.

  95. 1) Is there any way to recover, without breaking log shipping, if the secondary server’s log drive fills up?
    2) Is there any way to recover, without breaking log shipping, if the secondary server’s data drive fills up?
    3) Can we put the secondary database in the simple recovery model?

    • Senthil – by the scope of your questions, it sounds like you’re working with a secondary server that’s got much less capacity than the primary server. This isn’t something we’d recommend – database administration is hard enough as it is without worrying about drive space.

  96. Hi Brent,

    Could you please suggest the best method for my DR plan? Currently we have a set of databases (6) used for our application, and we are in the process of building a DR and HA plan. As we are a small business, we are on Standard Edition 2012 and unable to make use of the AG feature. We have virtualized our servers and use Hyper-V Replica for HA, and we are planning to set up a DR site using either log shipping or database mirroring, but not both. Could you please let me know the best option to proceed with, with the least complexity to maintain, as we have to fail over 6 databases to support the application? We can afford downtime of more than an hour during weekends, and maximum data loss can be 15 to 30 minutes.

    Thanks for your help.

    Thanks,
    Ajay

  97. Let’s say that I have databases Foo and Foo2 on Server Region1 which is log shipping to Server DR. I also have databases named Foo and Foo2 on Server Region2. I would like to log ship from Region2 to DR as well, but I’m concerned about the database name conflict.

    Is there a way to have the databases have a different name on the secondary than they do on the primary?

    If not, is there any workaround other than adding a new instance to the DR server to use as the secondary for Region2?

    I assume that either way I’d have to use separate folders on DR to avoid name conflicts between the files from each primary. True?

    • Hi Mark,

      I believe that I’ve done this before without any problem (having a different DB name on the secondary)– just manually restoring the secondary database with a different name and then setting up the jobs. It is possible that my memory is playing tricks on me, but I don’t think there would be any issues. Should be easy to test with a small canary database first. Of course, if you have to fail over, then your apps might have a problem with the different names.

      Kendra

      • Yes, you can. Log shipping doesn’t take the database file names into account during the actual log shipping. We have a database on each server that we use to capture server information on a daily basis, and I log ship all of the production databases to the same server in our DR site so that they are all read-only; the database names are different on that server, yet the same on all of the corresponding source servers.
        When you do your initial setup, restore a fresh copy of each database WITH NORECOVERY (so the database can still be restored to) using different database names (in your case “Region1_Foo”, “Region1_Foo2”, “Region2_Foo”, and “Region2_Foo2”) and file names that correspond with your new database names. When you go through the LS setup, point it to that database using “No, the secondary database is initialized”. Done – you are now log shipping both databases to the same server and can clearly see which database is from where. When you bring them online during a DR, it can be a little tricky depending on what your front-end application is like.

      • It’s scary that I know this much about logshipping, but then of course I have broken it enough times to have learned many of the nuances. lol
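
        For what it’s worth, seeding one of those renamed secondaries might look something like this (all names and paths are hypothetical):

        ```sql
        -- On the DR server: restore Region1's Foo under a region-prefixed
        -- name, moving the files to region-specific names as well.
        RESTORE DATABASE Region1_Foo
          FROM DISK = N'\\DRShare\Seed\Region1\Foo_full.bak'
          WITH NORECOVERY,   -- keeps the database restorable for log shipping
               MOVE N'Foo'     TO N'D:\Data\Region1_Foo.mdf',
               MOVE N'Foo_log' TO N'L:\Logs\Region1_Foo_log.ldf';
        ```

        Giving each primary its own copy folder on the DR server keeps the .trn files from colliding, too.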

  98. I set up a log shipping test database a few days ago. The database on the secondary server is in standby mode, as I’d like to be able to run queries against it. In standby mode I am unable to back up the transaction log, so my question is: at what point will the secondary server decide to clear the transaction log, or will the transaction log just continue to grow on the secondary server?

  99. My testing showed that log shipping from a lower version to a higher version (2008 to 2014 for example) does not work if you want the secondary database to be in STANDBY mode. Is that correct?
