
I’ve been working with and talking about SQL Server Log Shipping a lot lately! If you haven’t yet, go ahead and watch my webcasts – Part 1: Preparing for Disaster and Part 2: When Disaster Strikes.

One way to ship logs…

I’ve gotten a lot of great questions about log shipping through these webcasts, so I’ve put together an FAQ.

  • What editions of SQL Server is log shipping available in?
    • 2012 – Enterprise, Business Intelligence, Standard, and Web
    • 2008R2 – Datacenter, Enterprise, Standard, Web, and Workgroup
    • 2008 – Enterprise, Standard, Web, and Workgroup
    • 2005 – Enterprise, Standard, and Workgroup
  • Does the secondary need to be licensed?
    • I am not the licensing police, and I am not Microsoft – check with your licensing representative to clarify your exact situation. Generally, you can have one warm standby server. However, the second someone starts using it for reporting, testing, or anything else, you need to license it like any other server.
  • Log shipping is compatible with backup compression. What edition of SQL Server do I need to take advantage of compression?
    • 2012 – Enterprise, Business Intelligence, or Standard
    • 2008R2 – Datacenter, Enterprise, or Standard
    • 2008 – Enterprise
    • 2005 – Not available
  • When log shipping is set up, Agent jobs are created to alert me if a backup, copy, or restore fails. How do I get notified?
    • You need to go into the Agent job, pull up Notifications, and choose your method – email an operator or write to the event log, for example. (This can also be scripted – see the sketch after this list.)
  • Are my logins shipped from the primary to the secondary?
    • No, they are not. You’ll need to set up a separate method to sync the logins.
  • Does this replace, or can it be combined with, our existing daily full and log backups?
    • TL;DR – no.
    • You’ll still want to take regular full and/or differential backups. Log shipping only takes one full backup – at the beginning – and that’s only if you specify that it does so. It can also be initialized from an existing full backup.
    • Taking two log backups in separate jobs will break the log chain, however. If you implement log shipping, it will replace your current transaction log backup job.
  • What’s the difference between the secondary being in “Restoring” vs. “Standby”?
    • Restoring means the database is not accessible. Standby means it is read-only. You make this decision when you set up the log shipping.
    • If the database is in Standby mode, users can query it – except when a log backup is being restored. You need to decide whether the restore job will disconnect users, or whether the restore will wait until the users disconnect on their own.
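
If you'd rather script the notification than click through the GUI, here's a minimal sketch – the operator name, email address, and job name are all hypothetical, so substitute your own:

    USE msdb;
    GO
    -- Create an operator to receive the emails.
    EXEC dbo.sp_add_operator
        @name = N'DBA Team',
        @email_address = N'dba-team@example.com';
    -- Point a log shipping job at the operator; 2 = notify on failure.
    EXEC dbo.sp_update_job
        @job_name = N'LSAlert_YOURSERVER',
        @notify_level_email = 2,
        @notify_email_operator_name = N'DBA Team';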

What other questions do you have? 

Interested in learning more about backups and recovery?

Grab my Backup & Recovery Step By Step training for more information on why backups and restores are so important, and how to perform them! 

  1. Great FAQ, Jes!

    One small disagreement: taking a log backup in a separate job won't break the log chain. Failing to restore that log backup on the secondary will cause a gap in the log chain on the secondary. You can manually restore that log backup (or even a differential that spans the gap) on the secondary to get log shipping to continue.

    • Robert, thanks for the clarification. You can take as many log backups as you want – it’s the order they are restored in that matters.

  2. Jes,

    What are the pros and cons of using backup compression with log shipping?

    Thanks,

    Patrick

    • Patrick, I’d ask, “What are the pros and cons of backup compression?” Enabling backup compression will use extra CPU, but the backups will take up less space on disk, and the files that need to be copied across the network are smaller. Don’t use it on a system constrained by CPU, however.
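
      If you do decide to enable it, one low-effort route is the instance-wide default, which the log shipping backup job will use when its compression setting is left at "use the default server setting" – a sketch:

        USE master;
        GO
        -- Turn compression on for any backup that doesn't specify otherwise.
        EXEC sp_configure 'backup compression default', 1;
        RECONFIGURE;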

  3. Pingback: Something for the Weekend - SQL Server Links 22/03/13 • John Sansom

  4. I haven't gotten to work with this yet, but it surprises me that there are log chain issues. Can you not just set the log shipping job to copy_only and leave your existing log backups untouched? Though I guess you would then need to back up the secondary's log to keep it under control.

    • If you leave an existing log backup job in place, and implement log shipping, you would have to figure out a way to restore both log backups to the secondary, in order, so the log shipping chain was never out of order.

    • No, you cannot set the log shipping job to do copy-only log backups. There's no logical reason you would want it to do that. You would then need a second log backup job to maintain the log file. Why would you want to do twice as many log backups?

      The secondary’s log cannot be backed up. The database is not online. It is maintained by replaying the records of the primary. As long as you maintain the log on the primary, it will be maintained on any secondaries.

      • This is actually an excellent question because it allows for thought.
        Today we as DBAs deal with backup processes that are potentially out of our control. These products may be performing t-log backups, so the question is: can log shipping be used WITH COPY_ONLY?

        Scenario: an enterprise B/U tool running SQL Server backups (full, t-log) directly to some vault that is inaccessible by any means other than using the B/U tool.

        Requirement: Log shipping to a secondary server.

        SQL Server allows for the copy_only t-log backup with a “BACKUP LOG successfully” result.

        How will SQL Server respond to rolling forward t-logs that were taken with the copy-only flag? I'm thinking about the LSNs here. What will happen? This is a good experiment. :)

        • Clarky – the bigger problem is that if the enterprise backup tool *doesn’t* use the copy-only flag (and most don’t, for obvious reasons) then what happens when the enterprise backup tool copies and clears the transaction log as part of a log backup, and you haven’t gotten your separate transaction log backup first? You’ll be missing transactions.

  5. "Taking two log backups in separate jobs will break the log chain, however. If you implement log shipping, it will replace your current transaction log backup job."

    Jes – if you do a “copy only” backup – that will not break a log chain IMO

  6. Hi Jes. I am a big fan, congrats.

    Jes, you really lost me on the log shipping job (the second one). If I have log shipping set up and at the same time I have a backup schedule (full every Sunday, differentials every day, and logs every hour), will it make a conflict?
    Please can you clarify this for me? You've put me off track.
    Thanks and once again thanks for your hard work on this.

    • The only type of backup that log shipping controls is log backups. Full (and differential, if you take them) backups are not part of the automatic log shipping jobs. Full and differential backups still need to be run regularly, in separate jobs.
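
      As a sketch – database name and paths are hypothetical – the separate jobs just run ordinary full and differential backups, leaving only the log backups to log shipping:

        -- Weekly full backup, in its own Agent job.
        BACKUP DATABASE [YourDB]
            TO DISK = N'X:\Backups\YourDB_full.bak'
            WITH CHECKSUM;
        -- Nightly differential backup, in another job.
        BACKUP DATABASE [YourDB]
            TO DISK = N'X:\Backups\YourDB_diff.bak'
            WITH DIFFERENTIAL, CHECKSUM;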

  7. Jes,

    Once you've failed over from Primary to Secondary and then from 'Secondary' back to 'Primary', do you need to remove the log shipping from Secondary to Primary?

    I'm getting alerts from the LSAgent on both servers that sync hasn't happened from S to P. So a final step for the planned failover would be to remove the shipping that was set up to do the second failover?

    Thanks for the screencast, it was well done.

  8. If my SQL 2012 Agent is running as NT Service\SQLSERVERAgent, how do I go about giving the file share permissions to that account? Or should I have not used the SQL Server 2012 default installation accounts to run SQL Server? Thanks.

  9. Just to confirm: you cannot do log shipping from SQL 2012 to SQL 2008 R2, right?

  10. I love your site Brent! Any guidelines for log shipping frequency? Maybe transactions / hour / second correlating to log ships per / whatever?

  11. Hi Brent.. I like your videos and the way you present them.

    One of my databases (A) contains 2 log files. Database (A) is also participating in log shipping. What is the safest way to drop a log file without breaking log shipping?

    • This is a really interesting question. I wish I had the time to dedicate to give you an answer that’s just as interesting, but there’s a lot of possibilities here. The best way to get help on something like this is to narrow down the question as tightly as possible, then post it on http://StackOverflow.com or http://ServerFault.com. Include as much example code as you can so that others can quickly run it and get an idea of what you’re asking, and they might even post your code back with fixes. If you don’t get good answers within a day or two, let me know and I’ll take a look at the question to see if there’s a way we can get better answers.

      • Brent

        I got a way when I was doing TRIAL & error:

        1. Shrink the additional log file by using DBCC SHRINKFILE (log_name, 1).

        Note: Proceed only if you find the tail of the log is inactive (DBCC LOGINFO). Take more frequent t-log backups to make it inactive.

        2. Once you shrink the log file, drop the additional log file by using the GUI or T-SQL.

        3. Run the LS backup job to remove the additional log file info from sys.sysaltfiles.

        4. Re-enable the copy and restore jobs. (The same steps are sketched in T-SQL below.)

        Note: Test this procedure in a test environment before you run it in production.
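
        A minimal T-SQL sketch of steps 1 and 2 – the database and logical file names are hypothetical, and per the note above, test it outside production first:

          USE YourDB;
          GO
          -- Step 1: shrink the extra log file (works only once its VLFs are inactive).
          DBCC SHRINKFILE (N'YourDB_log2', 1);
          -- Step 2: remove the now-empty file from the database.
          ALTER DATABASE YourDB REMOVE FILE [YourDB_log2];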

  12. I have set up log shipping between 2 SQL Servers (SQL Server 2008 and SQL Server 2012).

    First I took a full backup of the first one and restored it on the other one, and then I set up the log shipping.

    The database's first backup-restore took a few hours, and I wish to know:

    How does the log shipping process know when to start the logs it is shipping? Does it start from when the last backup-restore process happened, or from the moment the log shipping itself is set up?
    Can I be sure that no information was lost in these few hours?
    Thanks

    • Matan – if you try to restore a transaction log backup and there’s missing data, SQL Server won’t allow you to do it. It’ll require the earlier transaction log backups to be applied first.

  13. Hi,
    We are in the process of setting up log shipping, using SQL 2008 R2 ent edition. I’m a newbie dba working with a contractor dba. Here’s what I’m being told.

    When a database is initialized on the source server, it first creates a backup, then restores it on the secondary server. Here's the question. Our source database is separated into three data files on three different drive letters. Our secondary server is also set up in the same fashion, to take advantage of three data files across three drive letters (D:, E:, & F:).

    The DBA is telling me that the configuration doesn't allow spreading across three drives – I must use only one drive letter. The workaround would be to take a backup of the database, then FTP it to the secondary server, restore it across the three drive letters, THEN enable log shipping.

    Can anyone confirm if this is my only option to spread the data files across three drive letters?

    Apologies, I'm still learning the SQL technology and may not have my context correct.

    best regards,
    Jay

    • Jay – sounds like that DBA isn’t quite up to speed either. You can totally do log shipping with files on multiple drives.

      • Brent, thanks for that quick reply back. A brief follow up question:

        Our scenario:
        1. We manually created the empty database and spread the data files across the 3 drive letters (D, E, F) mentioned above on the secondary. (Each drive has 700GB.)
        2. Size of primary database: 1.8 terabytes
        3. During the initialization of log shipping:
        a. A FULL backup was created on the primary
        b. The restore step failed with an error on D: stating there isn't enough space for 1.8 terabytes when only 700GB is available

        Question: how can we bypass this error message without having to manually copy the backup over to the secondary and restore from the secondary?

        • Jay – I’m not quite clear on where the error is occurring (on a secondary or on the primary) and troubleshooting this is a little beyond what I can do in a blog post comment. Your best bet for personalized one-on-one help like this is to use the contractor DBA. If he can’t help with these kinds of issues, you probably have the wrong contractor. ;-)

      • Actually the contractor is right, as per the 'workaround' above. If you set up log shipping with the GUI and initialise a new copy of the database, you can only specify one location for data files and one for logs. I'm guessing that's why you are getting the error above too – it's trying to restore all the files onto one drive.
        If you need multiple locations, you need to pre-initialise the database on the secondary (copy the backup across and restore it with the files in the 3 locations – use NORECOVERY or it won't work).
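
        Something along these lines – the logical file names and paths are hypothetical, so check yours with RESTORE FILELISTONLY first:

          -- Restore the full backup on the secondary, spreading the files
          -- across drives, and leave it ready for log shipping.
          RESTORE DATABASE [YourDB]
              FROM DISK = N'Z:\Backups\YourDB_full.bak'
              WITH MOVE N'YourDB_data1' TO N'D:\Data\YourDB_data1.mdf',
                   MOVE N'YourDB_data2' TO N'E:\Data\YourDB_data2.ndf',
                   MOVE N'YourDB_data3' TO N'F:\Data\YourDB_data3.ndf',
                   MOVE N'YourDB_log'   TO N'L:\Logs\YourDB_log.ldf',
                   NORECOVERY;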

        • And to answer your other question: the only way to prestage the database on the secondary without copying a backup to it is to perform the restore across the network. And at 1.8 terabytes, that is probably going to be a terrible idea. Far better to assign a temporary disk to the secondary, copy the backup to that, and restore locally.

        • You don’t have to use the GUI though.

          • Sure, you can script the whole process, but he's still going to need to decide which is the lesser of 2 evils – finding space on the secondary to store his backup so he can restore from a local file, or going through the pain of restoring a huge database across the network.

  14. What is a .TUF (Transaction Undo File)? If we lose the .TUF file, what happens?

  15. Hi Brent,

    Is there a way to back up a log shipped destination database? I know it cannot be backed up while it is in restoring or standby mode. But can it be taken offline, backed up, and the log shipping restarted?

    Thanks !

    Phil.

    • You can’t back up an offline database either. My question is, why do you want a backup of the database that is your backup?

      • Well, initial plan was to avoid impacting prod DB as it is already backed up by another method.

        I’ve looked into minimizing DB backup impact since. Probably a better way to go.

  16. What are the prerequisites/pros & cons of having two secondary servers pointing at one primary in a log shipping setup?

    • Prerequisites for the primary and the secondaries are the same regardless of whether you have one or multiple. The benefit of multiple secondaries is that you have your data in multiple locations. One could be a warm standby; one could be used for reporting. The downside is that you’ll now have multiple copy and restore jobs to monitor and manage.

  17. Hi! Great FAQs!

    My company recently implemented Log Shipping on a 700 GB database using 15 minute intervals. This seemed to cause excessive blocking with our normal business processes that were executing during the same time period that the database’s indexes were being rebuilt online (some of these indexes take longer than 15 minutes to rebuild).

    1. Can you explain how log shipping could possibly cause such blocking while indexes are being rebuilt and other data manipulation was occurring?
    2. Do you have any recommendations to enable Log Shipping to co-exist without generating conflicts?
    3. Is it possible to set up Log Shipping to execute on an irregular schedule instead of a constant time interval?

    Thank you for your input and wisdom!

    • Hey Eric,

      Logshipping itself doesn’t cause table level locking and blocking: it just uses backups of the transaction log (which don’t use locks like that). Based on what you’ve said, I’m guessing you changed the recovery model from Simple to Full to implement log shipping, and that you have a performance condition that’s just related to the extra work of doing log backups.

      For your question about intervals, there might be some ways to rig up a solution where you can get things to execute at non-regular intervals, but I think the better question is how you can make it so that you can run transaction log backups frequently and reliably without pain. That’s the better goal!

      Kendra

  18. Hi Jes,

    I have one question. Let us assume we configured log shipping for 4 databases on one server (instance), and suppose we have 50 servers, where every server has 4 databases configured with log shipping (50*4 = 200 databases).
    1Q) How can we monitor these log shipped databases – whether the databases are in sync or not? Is there any easy method or script?
    2Q) How can we check the available disk free space on all the servers to prevent space issues?

    We can't TN into every server to check the above. :) Is there any method to avoid TN-ing into every server?

    Thanks & regards,
    Samba

    • Samba – actually, let’s take a step back. What software are you using to monitor the health across these 50 servers? That’ll impact the answer because each monitoring program has different ways of looking at this.

      • Thanks Brent for the swift reply. We are using SCOM for SQL Server monitoring. Is there any chance (method) to monitor by using SSMS? Can we use any script or Central Management Server to monitor?

    • When you set up log shipping it creates a job called LSAlert_(servername). That job starts failing if any of the databases log shipped to that instance are behind the thresholds you set. You can set up an operator to be notified on job failure on each instance, and you'll get an email sent right to you.
      But, per Brent's answer, there is a bigger monitoring question here. In my experience SCOM is quite a noisy monitoring tool that takes quite a bit of tuning to get working "right".

      • Thanks Rob. The job (LSAlert_servername) is created by SCOM, right? Is it possible to check it through SSMS? I mean, is there any method or script? And have a great day…

        • It's created as a SQL Server Agent job. You can check it via Management Studio: connect to your log shipped SQL instance, expand SQL Server Agent, and double-click Job Activity Monitor.

          You can also monitor the current status of log shipping by right-clicking on the SQL instance name and selecting 'Reports' -> 'Standard Reports' -> 'Transaction Log Shipping Status'.
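
          And if a script fits better – say, run from a Central Management Server query window – the msdb monitor table behind that report can be queried directly. A sketch:

            -- On a secondary (or the monitor server): how far behind is each database?
            SELECT secondary_database,
                   last_copied_file,
                   last_restored_file,
                   last_restored_date
            FROM msdb.dbo.log_shipping_monitor_secondary;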

  19. Log shipping:
    In log shipping, the primary server crashes. Then how will you find the log shipping secondary server?

    • Kamesh – can you reword your question?

        Hi,
        My primary database is located on the primary server instance. After some time, the primary server instance crashes. How will you find the secondary database on the secondary server instance?

        • First, you should have all log shipping instances fully documented somewhere – a Word document, a spreadsheet, your intranet – somewhere.

          Second, if you get into a situation where you lose the server your primary database is on, look for a server that has the Copy and Restore jobs on it. The Copy job should have the primary instance name in it. But this is no substitute for documentation.
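
          As a rough sketch, you could run something like this against each candidate server to spot those jobs:

            -- Log shipping names its jobs after the primary, e.g. LSCopy_PRIMARYSERVER_DbName.
            SELECT name
            FROM msdb.dbo.sysjobs
            WHERE name LIKE N'LSCopy[_]%'
               OR name LIKE N'LSRestore[_]%';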

  20. What is the best approach to a t-log shipping config when patching SQL Servers?
    Should you disable all LS jobs on the primary while patching the primary, and then just enable them when patching and restarting are complete?
    No plans to fail servers over, as patching is done in a defined service window.

    Any other tips in regards to log shipping and SQL patching – a checklist or similar – would be great.

    Thanks

    • Hi, Mickey. Getting the right answer and a checklist here is a little beyond the scope of what we can do in a blog post comment, but that’s a great idea for a future post. Thanks for stopping by!

  21. Check the other server's job list. If you find job names like LSCopy_"PrimaryServerName" and LSRestore_"PrimaryServerName", that is your secondary server.

  22. Hi

    How do you configure log shipping across different domains? Please let me know if there is any document related to this.

  23. Hi
    Thanks for your reply.

  24. Okay, I'm setting up log shipping primary and secondary. The database is large, and the pipe is small between primary and secondary. I want to pre-initialize the secondary DB with data/schema, then turn on log shipping. What are the steps to do this?

  25. On the Primary, here is the setup:
    Full: every Sunday at 23:00
    Differential: every night Monday – Saturday at 23:00
    Log shipping via LS jobs every 1 hour during the day

    Say Wednesday night, for some reason, one of the .trn files gets accidentally corrupted/deleted from the backup folder.
    Thursday morning you come in, and the chain broke last night and won't continue (you are missing a file in the restore folder).

    How do you overcome this to fix the broken chain?

    Can you restore last night's differential and start from there,
    or take a differential right away, up to the current moment?

    • Bass – I’m a big fan of doing experiments to find out stuff like this. Have you tried running an experiment on this in your environment? Create a database on your development server and give it a shot. This is one of the best ways to learn, getting under the hood and proving these things for yourself.

      I know it sounds cruel not to just tell you the answer, but it’ll get you started on a lifetime of experimenting and learning. Have fun!

  26. Hi
    How do you transfer the logins, server-level permissions, database-level permissions, and object-level permissions from one server to another server in a log shipping plan?

    • Kamesh – log shipping transfers data inside the database, which includes permissions on things inside the database.

      It does not cover things outside of the database, like logins and server level permissions.

      When you Google for answers, leave out the log shipping part, and just look for answers on how to copy logins or server level permissions.
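
      Microsoft's well-known sp_help_revlogin script is a common way to script logins out with their SIDs and password hashes. As a starting point, a quick inventory of the logins you'd need to migrate might look like this sketch:

        -- Logins live at the server level, outside the shipped database.
        SELECT name, type_desc, create_date
        FROM sys.server_principals
        WHERE type IN ('S', 'U', 'G')   -- SQL logins, Windows logins, Windows groups
          AND name NOT LIKE '##%';      -- skip the internal ## logins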

  27. Brent, we currently copy our prod backup to other environments to refresh our main database about every 2 months. Is there a way to reinitialize log shipping without starting with another full backup of the source server?

  28. Hi
    1) Is it possible to take full, diff, and t-log backups of the log shipping primary database?

    2) What are the 14420 and 14421 errors in log shipping?

  29. When we configure log shipping, LS alert jobs are created for backup and restore. I want to know how these jobs work, and where the error is stored before it is written to the logs. Does it right away make an entry into the msdb database?

  30. Hi Brent Ozar and Jes Schultz Borland – you both are great, giving such valuable information.
    In my practice: I configured log shipping on 2008; my backup job and restore jobs are running successfully, but the copy job fails. Reason: it can't access the path where the backup files are stored. But I shared my drive and the folder where the backup files are stored, and I am still facing the same problem.

    Please give me a solution for that.

    Thank you

  31. Hi
    The log shipping primary database's log file is full. How do I troubleshoot this?

  32. I want to configure log shipping from SQL 2008 R2 Enterprise edition to SQL 2008 R2 Standard. Will there be any problems with the secondary when I use compression and partitioning on the primary?

  33. Can you set up log shipping between sql server 2008 R2 (standard) and 2012 (standard)?

  34. While log shipping is running, if I add a data file on the primary server – for example, the primary server is on the E: drive and the secondary server is on the F: drive – then the secondary server has no permissions to access the E: drive where the new data file is added. The copy job runs successfully, but the restore job fails and shows an error message like: "Can't access the path"

    How do I resolve this issue? Can't we add a data file to the primary server without breaking log shipping?

    • This is a really interesting question. I wish I had the time to dedicate to give you an answer that’s just as interesting, but there’s a lot of possibilities here. The best way to get help on something like this is to narrow down the question as tightly as possible, then post it on http://StackOverflow.com or http://ServerFault.com. Include as much example code as you can so that others can quickly run it and get an idea of what you’re asking, and they might even post your code back with fixes. If you don’t get good answers within a day or two, let me know and I’ll take a look at the question to see if there’s a way we can get better answers.

  35. Hi Brent,

    I would like to know whether we can achieve log shipping from SQL Server 2008 SP2 to SQL Server 2008 R2 RTM, both Enterprise edition, with the destination DB being in STANDBY mode.

    Thanks in Advance,
    Deepak Almeida

  36. We log ship to a network drive. I can pull the log for the restore directly from the network drive where the backup is. I'd like to skip the "copy" step, but the log shipping setup did not allow me to. So, I just copy right back to the same location as the backup. This works, mostly – but about twice a week, my LS_Copy job fails with an "Already Exists" message.

    Is there a way to skip the "Copy" step?

    Or copy to ?

    thanks,

    tim

  37. Another situation: I back up to a remote drive every 15 minutes, 24×7. We keep 45 days' worth. The folders have gotten quite large (over 4000 files in each folder). This is affecting the performance of the log shipping on the delete step. The next backup starts before the delete step is complete, causing the log shipping process to get log-jammed.

    I'd like to remove the "delete files older than" step and run a separate delete job at night.
    Again, I don't see in the Log Shipping wizard that I can remove the 'delete files…' step.

    Suggestions ?

    thanks very,

    tim

    • Hi, Tim. Just set the files to never get deleted in the Log Shipping Wizard (pick a really long time span) and manage the deletes yourself.

      • It looks like no matter how many days you put in, it still has to read the directory and determine if there are any files to delete. So, that doesn't really help.

        I ended up dropping my retention and cleaning up the folders to have fewer than 2000 files. That did the trick – no more performance problems.

        Our other option was to rake older files off into an archive folder, leaving the main backup/restore folder smaller.
        We may still do this.

        thanks very,

        Tim

  38. Hi.
    I want to know how much logshipping Jobs (Backup, Copy, Restore) affects server’s performance.
    Do you have any idea?

    • The answer would depend on your individual server and your backups. Your best bet is to do the performance testing in your environment. You’ll find that there’s lots of settings you can tune to make the backups go faster, plus you’ll want to know what your server’s bottleneck is.

  39. Hi, thanks for the article. I have a small doubt – hope you can advise me on it.
    Is there a way to find which log shipping method was used to establish the data synchronisation?
    E.g.:
    – Generate a full backup of the primary and restore it on the secondary.
    – Log shipping from an existing backup of the primary database.
    – Secondary database initialized (without changing the secondary).

    If so, how can I find it? I have tried to find it in the log files but couldn't.

    • Ravi – what’s the problem that you’re trying to solve?

      • I am trying to find which option was selected during the log shipping configuration.

        • Are you asking which option was selected on the “Initialize Secondary Database” tab of the setup wizard? If so, why is that important to you? What problem do you need to solve?

          • Yes, you are right. I have tried to set up all three methods in a test environment and found that the last option – secondary database initialized (without changing the secondary) – has some performance issues compared to the first 2 options. That's the reason I would like to know which option has been used, so that I can prevent that kind of performance issue in the future in my production environment. Hope I am clear. Thanks.

  40. By the way, it’s in 2005

  41. Hi Jes
    I am having problems with my log shipping job: the copy job is failing due to an access error –
    "The access to the path is denied"
    I have tried everything I know. I have given the secondary server permission to the folder where the backup is in the primary folder, I have shared the folder, and I even gave the secondary server permission to the drive.
    I don't know if you can help

    • The secondary server’s Agent account needs access to the folders – that may be different than the account running SQL Server. Did you check that it’s the correct account?

      You can provide more details, such as an error message, at dba.stackexchange.com. There are always lots of people there, watching and helping to answer specific questions!

    • I had the same issue. We resolved it by using the same *user account* name and password on the primary and secondary servers to access the shared folder. We also run the SQL Server Agent under that account.

  42. Hi Brent and Jes,
    Thanks for all your effort!

    I've got a question: I have two nodes in a cluster (SQL Server 2008), and I want to back up my cluster to another site (DR site) using the log shipping method. Is it possible, and what is the roadmap? Sorry, maybe the question is ridiculous. :/ Could you please clarify that for me?

    Thanks.

  43. Hey Brent,

    Sometimes the copy job takes a long time to run because of big t-log backup files, so the restore job reaches its threshold and gets alerted. How do I find the restore history, i.e. the files that were restored in a given run and the time it took to restore them?

    I can look at the job history to find how long the restore job took, to get a rough idea, but that's not helping.

    Thanks for looking.

    • Chris – those are in the MSDB tables for backup history and restore history. Fire up SSMS and type select * from msdb.dbo.backup or msdb.dbo.restore and IntelliSense will kick in from there, helping show you through some of the tables. You can also check those out in Books Online.
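
      For example, a sketch along these lines (hypothetical database name) lists what was restored and when:

        -- Restore history, with the file each restore came from.
        SELECT rh.restore_date,
               rh.destination_database_name,
               bmf.physical_device_name
        FROM msdb.dbo.restorehistory AS rh
        JOIN msdb.dbo.backupset AS bs
            ON rh.backup_set_id = bs.backup_set_id
        JOIN msdb.dbo.backupmediafamily AS bmf
            ON bs.media_set_id = bmf.media_set_id
        WHERE rh.destination_database_name = N'YourDB'
        ORDER BY rh.restore_date DESC;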

      • Haha, thanks. I always query msdb to get backup history; it's not a regular thing for me to look at restore history. Now I know the restore_date – the job is supposed to kick in every 30 min, but for a few rows in restore_date there is a difference of 2 hrs or so – and based on the history_id, I should be able to get the file names. Got it. :)

  44. Hi Jes,

    did you ever come across the issue where everything works fine except no logs are being restored?
    I’ve set up log shipping manually between 2 instances that are totally separated and cannot ‘see’ each other. I went through the LS setup steps using the UI but scripted the config and then cancelled the UI. So there is no real primary and the log backup and copy is handled using our standard routines.
    I ran the LS setup script on the secondary (just the required part). Everything works fine but the LS restore job just doesn’t restore. It finds the correct log but doesn’t restore it. If I manually restore a log file, the job notices that and just skips that file and continues with the next: it says it has found the log but it again doesn’t restore it.
    I checked the error logs, traced it, executed logship.exe from the command line, manually executed sp_can_tlog_be_applied etc etc. All works fine and no errors, just no restore…

    Any ideas?
    Chrz
    Thierry

    • No, but I haven't tried cancelling out of the setup on the primary and using another method to back up and copy the log files. If the restore job is successful, how do you know the log isn't being applied? Is there perhaps an issue with naming conventions?

      • thx for the quick reply

        I know from the job log it’s not being applied:

        2014-01-10 14:04:19.12 Found first log backup file to restore. Secondary DB: ‘xyz’, File: ‘xyz_backup.trn’

        2014-01-10 14:04:19.13 The restore operation was successful. Secondary Database: ‘xyz’, Number of log backup files restored: 0

        I replaced the network location with a local disk since I thought it was a network/permission issue but this doesn’t solve the issue.
        I also restored the latest backups because I thought maybe it was too far behind (never know), still no luck.

        I’m beginning to think there’s a bug somewhere. Hard to tell if I cannot find any errors though…

        Also found this thread that doesn’t really help
        http://social.msdn.microsoft.com/Forums/sqlserver/en-US/44d3b8b2-59aa-4ac5-90de-be9eab7e06a3/log-shipping-not-working?forum=sqldatabasemirroring&prof=required

        • Thierry – if you suspect a bug, contact Microsoft support. It costs $500 per call, but they work the problem with you until it’s done, and if it does turn out to be a bug, they’ll refund the $500.

          • Thx for the reply, Brent. Before I do so, I'm going to install a VM environment and see if I can simulate the behavior (maybe try different versions as well, to see if it's related to a specific version or build).
            Thx again for your time (and Jes's) reacting to my questions

        • Hey– just a quick guess/question. Are you possibly using log backup files that aren’t generated by the built-in logshipping jobs? (Your own log backup job, or one from another script?)

          I tried to do that once years ago and it drove me nuts– the restore jobs would succeed but never actually restore the backup files. It really cares about the filenames and that they are in the exact timezone and format it’s expecting if it created the log backups itself.

          If this is your issue, more info on it here: http://mikehillwig.com/2012/03/15/sports-analogies/

          If not, Mike Hillwig has some other great stuff on his blog about logshipping, too. :)

          –kendra

          edit: updated/fixed link
          edit2: I bet this is your issue! Reread your original comment and found the sentence “the log backup and copy is handled using our standard routines.” That’ll do it! Drove me nuts, too.

  45. Hi Jes,
    to test I just renamed one of the log files to match the LS backup file pattern and OOH MY GOD :) it worked. Steeee-rike for the ‘Ozar-team’ once again!
    I wonder why on earth it was designed like this.
    Anyway, thx again!

  46. Excellent! Glad we could help you out!

  47. I have log shipping already configured, and it was working fine.
    But the client asked to change the path, so I went to the copy files destination folder and changed it from the D drive to the C drive.
    Then I copied all the transaction logs from D to C.

    After that, the secondary DB is not restoring.

    I thought maybe it was because the DB is not in sync,
    so I synced the secondary DB with the primary DB,
    but it's still not restoring.

    Do I need to clear all the transaction logs on the C drive first, before I start the copy and restore jobs?

    Thank you

    • Have you updated the log shipping configuration so the backup job, copy job, and restore job know where to look for the files?

  48. We have a table cleanup( deleting a bunch of rows) that happens monthly, which creates a huge transaction log. Any way to implement log shipping or the cleanup in a way that won’t force us to schlep that giant log across the wire?

    • There isn’t – that’s an operation that’s going to be fully logged. You could disable log shipping, set the database to simple recovery, do the operation, set it back to full recovery, and reinitialize log shipping, but I don’t know if that is any less time-consuming. You also lose point-in-time restore capabilities while the database is in simple.
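
      For what it's worth, that workaround is only a few statements – names are hypothetical, and the caveats above about losing point-in-time restore and reinitializing log shipping still apply:

        ALTER DATABASE YourDB SET RECOVERY SIMPLE;
        -- ...run the big monthly cleanup here...
        ALTER DATABASE YourDB SET RECOVERY FULL;
        -- A new full (or differential) backup restarts the log chain
        -- before log shipping is reinitialized.
        BACKUP DATABASE YourDB TO DISK = N'X:\Backups\YourDB_full.bak';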

      • Thanks Jes. Seems like kind of a dumb question, but thought I’d ask, like setting it as some kind of bulk-logged operation or such…

  49. Hi Brent/Jes

    Please could you clarify my understanding of the restore alert threshold? I have just set up an LS configuration to delay the log file restore by 240 minutes. I've left the alert on the default 45-minute threshold, but the alert seems to be firing – I guess the restore threshold is being exceeded.

    The wording of the alert suggests to me that it will alert if there is NO restore within the 45-minute period. Restores are actually happening every 15 minutes, successfully. Should I read this as "the restore of a particular log needs to happen within 45 minutes of the backup being taken"? I.e., should my restore alert be configured to 285 minutes rather than 45?

    I've played around with the threshold, and it works fine when set to 285, but for obvious reasons I wanted to understand it so I don't surprise myself if restores genuinely fail for a period of time and I miss the alert for a few hours.

    Hope I explained that relatively clearly – it always seems simpler in my head :)

    Many thanks

    • Hey there,

      If you think about the alert monitoring, it's trying to help you keep within your desired "Recovery Point Objective" and keep the data from being too far behind. I might have restored two log backups within 45 minutes, but maybe those log backups were just taken one minute apart – so if it just measured whether I'd restored a backup, it would be very hard to know what it meant.

      It sounds like you've tested the jobs and understand how they work. I wouldn't just assume that you should up the threshold, though – you might want to increase the frequency of your log backups, copy jobs, and restore times depending on your RPO and RTO. (More info on that here.)

      Hope this helps!
      Kendra

  50. Hi Guys,
    If a restore_delay is used and a situation occurs where the primary server is lost and the secondary is being brought into service, what is the correct way to reset restore_delay to 0 to cause the log shipping restore job to immediately apply all available log backups to the secondary database please? We run a delay to give us time to react to issues on the primary before they are applied to the secondary.

    I looked through the msdb log shipping related sps and I’m not sure I can see anything appropriate. Would a simple update statement for restore_delay into msdb.dbo.log_shipping_secondary_databases be sufficient and safe or is there a better way please?

    Thanks for all your great community support,
    Mart

  51. Hi Jes & Brent, Thanks for the great info and videos.
    I am quite new to MSSQL and have just set up my first log shipping to use for DR purposes. I was surprised to see that on the secondary (read-only) copy of the DB, if I show the file locations, it displays the info that is correct for the live copy and not the RO copy. The following command returns the correct locations, but viewing DB_name \ Properties \ Files in Management Studio shows the paths of the primary DB.

    Can you explain this? Do I need to fix that if the secondary_RO is ever brought into production?

    This is the SQL that shows the correct paths.
    SELECT name, physical_name AS CurrentLocation, state_desc FROM sys.master_files

    Thanks

  52. Hello,

    I have just configured log shipping, and the wizard is currently backing up and will restore the database within a few hours onto my secondary server. Will changes to the primary server's database during the backup and during the restore be reflected in the secondary database after the wizard completes? Don't want to miss anything!

    Thanks much!

    • The changes to the database will be captured in the transaction log, so they will be in the log backups. After your full backup is restored, apply all log backups that have been taken since it began, and you’ll be caught up.
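
      A sketch of that catch-up step – the file name is hypothetical; repeat for each log backup, oldest first:

        RESTORE LOG [YourDB]
            FROM DISK = N'X:\Backups\YourDB_20130321_1015.trn'
            WITH NORECOVERY;   -- stays ready for the next log backup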

  53. Pingback: What is SQL Server log shipping? - SQL Shack

  53. Hi Jes/Brent,
    We have configured log shipping with the primary as a production server and the secondary as a DR server, and it was all running well until yesterday morning, when it started throwing errors on the secondary server – the restore job is failing. What could be the reason for it? I'd like a solution for this, please. The error is like this: "Executed as user: User_Name. The log shipping destination Server_Name.DB_Name is out of sync by 135 minutes. [SQLSTATE 42000] (Error 14421). The step failed." Meanwhile, I have disabled the restore job on my DR server, as I couldn't get any help on this issue.
    I hope you people can resolve my issue, please.

    • Krishna – solving this one is a little beyond what we can do in a blog post comment. We would need to see the exact error messages.

      Here’s a few things to consider – has security changed on the servers? Were any of the log backups accidentally deleted? Did someone take a log backup manually and put it somewhere else?

      Hope that helps!
      Brent

  55. Hey Everyone,

    I had a quick question: when setting up log shipping for multiple databases, is it a best practice that the log backups are all stored in separate shares per database? Or is it possible to back up all logs to a single folder, with SQL Server knowing which belongs to what database?

    A similar question with the Agent jobs: does there have to be one log backup job per database?

    Thanks!

    Jeremie Grund

    • When you set up log shipping, SQL Server will create separate Agent jobs for each database. However, the files can all go to one folder – there will be different names for each database’s log backups.

  56. Log shipping setup: do the primary site and DR site storage drives need to have the same names?

    • It’s not a requirement. However, for many, many reasons, having the same drives on all SQL Servers in your environment is easiest – always having a D:, L:, and T:, for example, and backing up to the same drive or share on each server. Consistency is key!

  57. I just configured Log Shipping and wanted to test the setup to verify data integrity. For this reason, I selected Standby Mode as the database state so I could read the database and check the row updates. When I make a change on the Primary, the restore job fails on the Secondary with the message that our TUF file is not a valid undo file for the database – verify the file path and specify the correct file. During Log Shipping configuration, you never specifically enter information for this particular file. Does anyone have suggestions on how to resolve this error? We will not use Standby mode in production, but I wanted to use it for my testing. We are using SQL Server 2005. Thanks for your time and assistance!

    • It’s been a while since I’ve seen that error, but check the permissions on the folder the logs are saved to. Per this post, you may not have the right permissions.

  58. Thank you for that suggestion. I will look into that further.

  59. Is there a way to enable compression after the Log Shipping process has been already setup and running?

    • Yes! If you go to the database properties and open the transaction log shipping page, then open Backup Settings, you can pick the backup compression level there. The change will be made the next time a log backup is taken.
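
      If you'd rather script the change, there's a log shipping configuration procedure for it – a sketch, assuming SQL 2008 or later and a hypothetical database name:

        -- Run this on the primary server.
        EXEC sp_change_log_shipping_primary_database
            @database = N'YourDB',
            @backup_compression = 1;   -- 0 = off, 1 = on, 2 = use the server default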

  60. Pingback: What is SQL Server log shipping? | SQL Shack - articles about database auditing, server performance, data recovery, and more

  61. Hi Brent and Jes, your posts are awesome – the way you explain things is very easy to understand.

    I have a doubt: what is the sqllogship.exe file in log shipping? What work does it do? If it is missing, will log shipping work? How do you recover the .exe file?

    • Thanks Raj – I’m glad we can help!

      sqllogship.exe is the application that runs log shipping. If you were to set up log shipping and look at each of the created jobs, you’ll notice they call sqllogship.exe with a set of parameters. More information on it can be found at http://msdn.microsoft.com/en-us/library/bb283327.aspx. What happens if you delete the .exe? I’ve never done that, but my guess is that all log shipping set up on that server starts to fail. You could try to copy the file from another SQL Server instance into the correct directory, but I don’t know if that will make it work.

    Jes,

    We are in the process of downgrading all our DR servers involved in log shipping due to licensing restrictions. The only way to downgrade an instance is to uninstall and reinstall.
    Now, the DBs are quite large, and we would like to avoid re-initializing the entire log shipping setup from scratch.

    Is there a way to downgrade the SQL instance and yet not break any log shipping chain?

    Thanks,
    Anand A

  63. Hi Jes Schultz Borland,
    My undo file was missing when my cluster moved to the second node, as my database was showing restoring\standby.

    My question is: while making the database into standby, my cluster got moved, and after that it was not restoring the further logs, as my TUF file was now missing.

    Was it automatically deleted, or did it not get generated? The database was showing in Management Studio as restoring/standby.
    Here I think the undo file was generated, but not completely.

    Please explain

    • MFK – if your undo file was missing, it was placed on storage that wasn’t visible to both nodes. You’ll want to start by using the Cluster Validation Wizard during your next maintenance window. It’ll take a few minutes to run, and it will produce a report showing problems with the cluster (like storage that isn’t available to all nodes.) From there you can fix the root cause of the issue.

      It’s not unusual to find LOTS of problems on a cluster that you’re validating for the first time – take heart that you’re not the only person in this situation, heh. Once you go through it the first time, then you’ll make sure your clusters are properly validated BEFORE SQL Server gets installed, and it’ll make problems like this go away.

      • Also, there are corruption errors in the system event logs, and the cluster service failed.
        Can I say the reason might be a storage issue, where a bad sector occurred and the file got deleted?

      • Hi Brent … storage says there is no way a file can be hidden when the cluster moves.

        • MFK – wow, I’ve never heard storage talk before! ;-)

          There is, actually. If the file is saved on local storage (think C drive), then when the clustered instance fails over, the file that was on the C drive doesn’t move over to the other server’s C drive.

          • The file is not saved on the local drive, so how do I confirm to the storage team that the file went missing on the storage? They say it's not possible – a file can't be hidden on the drive when the cluster moves, because the storage would never do that.

          • MFK – unfortunately troubleshooting this is beyond something we can do in a blog post comment.

  64. As part of our testing plan, I recently configured log shipping in a production-to-DR environment, using our largest database to test the timing intervals. While the configuration was successful, the restore job failed because we needed to set up our encryption key on the DR server. After setting up the appropriate key on the DR server, do we need to re-configure log shipping again from primary to secondary? Or will the restore job automatically pick up the change? Having said that, if it's been a few days between noticing the error and applying the fix, will log shipping fail anyway because of the backlog?

    • The log backups would already have been encrypted, so they should be applied after the key is restored. Log shipping will not fail because of the backlog, but it could take a long time to catch up – especially if you have frequent log backups. You could take a differential backup of the production database and restore it to the secondary, then re-enable the log shipping jobs, to speed up the restore process.
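
      The differential catch-up looks something like this sketch (hypothetical names; the secondary has to stay unrecovered):

        -- Restore the differential on the secondary to skip ahead, then
        -- let the log shipping restore job continue from there.
        RESTORE DATABASE [YourDB]
            FROM DISK = N'Z:\Backups\YourDB_diff.bak'
            WITH NORECOVERY;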

  65. Is it common for Log Shipping removal to take over 19 hours for a database that is 21,662,271 KB in size? Could that be enhanced with changes to Network bandwidth settings? We are running tests for our timing before implementing production. The initial configuration took approximately 12 hours, so I wasn’t expecting the removal of Log Shipping to take this long. Thanks!

    • No, I’ve never seen it take more than a minute. Use a tool like sp_who2 or sp_WhoIsActive to check if there are any active connections to that database, and close any you find. You may also want to use a tool like sp_AskBrent to figure out what is going on while it’s being removed. If you’re using the GUI to attempt removal, I suggest the stored procedures instead.
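
      A sketch of that connection check – sys.dm_exec_sessions exposes database_id on SQL 2012 and later; on older builds, sys.sysprocesses has a dbid column:

        -- Who is holding the log shipped database open?
        SELECT session_id, login_name, status
        FROM sys.dm_exec_sessions
        WHERE database_id = DB_ID(N'YourDB');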

  66. We have been conducting planned failover testing in our production/dr environment. Upon setting up our Secondary as the Primary and trying to log ship to the former Primary/now Secondary, we receive the following error:

    The media family on device is incorrectly formed. SQL Server cannot process this media family. RESTORE FILELIST is terminating abnormally. (Microsoft SQL Server, Error: 3241)

    I've confirmed with our head DBA that our SQL Server versions are the same across the servers where we have implemented log shipping. The file doesn't appear to be corrupt, as it does ship from the initial primary to secondary. It just fails with the above error when we try to reverse the direction.

    Could this be related to encryption settings? Since we’ve verified the SQL Server versions and file integrity, I wasn’t sure what other items to consider in our troubleshooting efforts.

    Thank you for your time and assistance.

  67. Thank you for that information. Our head dba actually revisited and re-configured encryption on each of the servers. We seem to be able to at least configure log shipping in one direction between all servers now.

    I did revisit a planned failover test, but after role reversal, I checked the files from the backup/copy/restore on the new Primary and Secondary servers and continue to receive a Primary Key Violation when trying to view the Transaction Log Shipping Status report. I’ve encountered this before, but it seems to be happening every time.

    Am I missing a step that would prevent this violation from occurring? I do not see duplicate or inaccurate entries on the log shipping monitor tables.

    I should note that we are using SQL Server 2005 and I read where this issue was corrected in later versions. Could this be part of our problem?

    Thanks so much for your help.

    • K_ps – I’d run a Profiler trace while you run the log shipping status report, and that way you’ll be able to get the exact SELECT statement that generates the error. From there you can troubleshoot the duplicates manually.

  68. In our log shipping configuration, we have set up the recovery mode for the secondary server to be Standby, Disconnect users…

    During Planned Failover testing, after executing role reversal and returning to the original A to B configuration, I have been executing a Backup with NoRecovery on the Primary Server which I am preparing to revert back to the Secondary. That leaves the database in a Restoring state. When initially configured with the Standby/Disconnect users selection, the database reflects Standby/Read Only. I see where there is a Standby argument for the Backup command. Would I use this to revert back to that original configuration? I wasn’t sure what the Standby file name would be referring to.

    Also, we currently have Replication on our databases. I see that you can implement Log Shipping with Replication. Are there any settings required when implementing log shipping with replication? I have not been able to find anything specific in this regard.

    (I am aware that a problem occurs in that replication does not continue after a log shipping failover. )

    Thank you again for your assistance.

  69. Update to post:

    After finally completing our first successful planned failover test, I learned something in answer to the previous post. After manually running the backup, copy, and restore jobs, the database that was previously in a Restoring State returned to the log shipping setting of Standby/Read Only.

    Still confirming Replication settings, but I wanted to follow up with that update.

    Thanks again for your assistance.

  70. Does the initial FULL backup that is done to initiate log shipping have to be a non-COPY_ONLY backup?

    I’ve been tasked with getting log shipping set up on 40+ databases and trying to automate the process as much as possible, but we also have CommVault doing FULL/DIFF/TLOG backups (I know I need to stop the TLOG backups if I want to keep my sanity) and I’m trying to figure out what I can/can’t do.

    My plan right now is to do normal, non-COPY_ONLY backups of all DBs, get log shipping going/working, and then after that do a FULL backup again in CommVault (and turn its normal FULL/DIFF backups back on). Any comments on whether I'm on the right track?
