Reporting From a Log Shipping Secondary in STANDBY mode

You can build a lot with simple tools. Carefully.

I’m a fan of SQL Server’s transaction log shipping. It works in Standard Edition, it’s relatively simple to set up, and you can even make your log shipping secondary readable using STANDBY mode.

I’ve worked with some pretty cool, complex log shipping environments over the years. In one case, we had multiple log shipping secondaries and a load balancer involved to support a full-fledged reporting application. It worked pretty well, with a lot of careful scripting and support.

But there are a few things you should know before you decide to implement this yourself.

“Readable” Log Shipping Secondaries Are Just a Point in Time

Although a log shipping secondary can be made readable using “STANDBY” mode, it’s just readable to a specific point in time. If I bring the secondary online at 2 pm, users can only read data through the last committed transaction in the last log file I restored. And…

Everyone’s Kicked Out When You Restore Transaction Logs

The database can’t be read when you’re refreshing data. You must kick out any users (or not refresh the data).

Logs Restore More Slowly if You Use STANDBY (Readable) Mode

If you’re reading from the secondary, you usually want those periods where it’s unavailable to be as short as possible. If you have a lot of log files to restore, you’ll probably find the restores perform better if you run them in “NORECOVERY” mode, then switch back to “STANDBY” at the end of the process so the data can be read.

This can be done; it just takes some fancy scripting.
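
Here’s a minimal sketch of the pattern, with hypothetical database and file names:

    -- Restore the backlog of log backups in NORECOVERY mode (no undo file has
    -- to be created and re-applied between restores, so it's faster).
    RESTORE LOG SalesDB FROM DISK = N'\\backups\SalesDB_0900.trn' WITH NORECOVERY;
    RESTORE LOG SalesDB FROM DISK = N'\\backups\SalesDB_0915.trn' WITH NORECOVERY;

    -- Once caught up, bring the database back into readable STANDBY mode.
    RESTORE DATABASE SalesDB WITH STANDBY = N'S:\Standby\SalesDB_undo.tuf';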

You will also need to add monitoring for slow restores if getting the database online is critical. If your log shipping secondaries get behind, it’s possible to use differential backups to catch them up.
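
A catch-up restore with a differential might look something like this (names are illustrative); afterward, you only need the log backups taken after that differential:

    -- Restore the most recent differential from the primary onto the secondary.
    RESTORE DATABASE SalesDB
        FROM DISK = N'\\backups\SalesDB_diff.bak'
        WITH NORECOVERY;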

You Can’t Use “STANDBY” Mode if Your Secondary Is a Higher Version

I was a little sad when I learned this one years ago. I had a SQL Server 2005 instance that was the log shipping primary. The log shipping secondary was made readable so that developers could check out production data if needed without having access to production.

Our upgrade plan was to get SQL Server 2008 on the least critical servers first, and that developer access server was one of them. But I couldn’t use STANDBY mode on the log shipping secondary with it as a higher version: it failed with an error like this:

“This backup cannot be restored using WITH STANDBY because a database upgrade is needed. Reissue the RESTORE without WITH STANDBY.”

When SQL Server brings a database online in a higher version, it needs to make some modifications to that database. This breaks with STANDBY mode because the database is read only.

An aside: unfortunately, the Enterprise Edition feature of a database snapshot doesn’t overcome this limitation if you try to use it against a database mirror to make it readable to a point in time. You get a different error, but it’s the same theme.

Auto-Create Statistics Doesn’t Work in the User Database in STANDBY Mode

If you’re using your log shipping secondary for reporting, SQL Server can’t automatically create statistics in the database, regardless of your settings. (That read only thing keeps coming up.)

In SQL Server 2012 and higher, this isn’t a huge problem because temporary statistics can get created in tempdb. This new feature was added when Availability Groups came along, but it also works for log shipping secondaries, which is pretty awesome.
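
The temporary statistics are flagged in the metadata, so you can check whether you’re getting them. A quick look, run against the secondary:

    -- Temporary statistics created on a readable secondary live in tempdb
    -- and show up in sys.stats with is_temporary = 1 (SQL Server 2012+).
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           s.name AS statistics_name
    FROM sys.stats AS s
    WHERE s.is_temporary = 1;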

Security is a Hoot. And by That, I Mean a Problem. (Less So in SQL Server 2014)

In many scenarios, you only want to grant read access to a user on the log shipping secondary database. You do not want that user to be able to access the primary. This is tricky.

To read from a database, you need a login with an associated database user. To grant reads to a log shipping secondary database, you can create all the logins you want, but the log shipping secondary database is read only, so you can’t create a user in it.

Up through SQL Server 2012, here are the two main options I’ve seen people use (there are a couple more in the comments):

  1. Grant access via stored procedures or views in another database. This requires enabling cross-database ownership chaining, which can be a treacherous security road. And a lot of things could go wrong over time as tables are added, modified, and dropped.
  2. Create the login on the log shipping primary instance with the associated database user, then disable the login there. If you’re using SQL authentication, you may have to use a special script to transfer the SID to the log shipping secondary to get it all to work.

Option 2 isn’t terrible; it’s just awkward to have a bunch of disabled logins. Someone can misunderstand and accidentally enable them or delete them, and then… oops.
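
If you go the option 2 route with SQL authentication, the SID transfer looks roughly like this (the login name, password, and SID below are made up; the sp_help_revlogin script from KB 246133 generates these statements for you):

    -- On the primary: find the login's SID.
    SELECT name, sid
    FROM sys.server_principals
    WHERE name = N'report_reader';

    -- On the secondary: recreate the login with the same SID so it maps to
    -- the existing database user inside the read-only secondary database.
    CREATE LOGIN report_reader
        WITH PASSWORD = N'UseAStrongPasswordHere1!',
        SID = 0x241C11948AEEB749B0128BD4B6094A32;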

Using CONNECT ANY DATABASE and SELECT ALL USER SECURABLES in SQL Server 2014

As of SQL Server 2014, you have a new option because of a couple of new security features. You can do something along these lines (the role and login names are just examples):
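
    -- Create your own server-level role with the two new permissions, then
    -- add your reporting logins to it.
    CREATE SERVER ROLE read_everything;
    GRANT CONNECT ANY DATABASE TO read_everything;
    GRANT SELECT ALL USER SECURABLES TO read_everything;
    ALTER SERVER ROLE read_everything ADD MEMBER report_reader;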

When you add a member to this role, they can read anything. I tested, and this worked for me against a log shipping secondary in STANDBY mode just fine.

CONNECT ANY DATABASE is designed to allow connections to databases that exist now and databases that will exist in the future, so this really IS permission to read everything, including system databases. Only use this if that’s appropriate.

It’s Complicated, but It’s Still Cool

If you’re clever and dedicated, you can work around these issues and use log shipping to provide read access to either applications or users who shouldn’t be reading from the live production copy of data.

Brent Says: Even with AlwaysOn AGs out, I think I’ve said the words “use log shipping” more in the last two years than I’ve said in the rest of my career. It’s useful, cheap, flexible, and nearly bulletproof.


65 Comments

  • Log shipping is still simple and robust, great fan too and great post!

  • Sinister Penguin
    January 28, 2015 11:39 am

    Regarding copying Login SID’s to the secondary – the SSIS “Transfer Logins Task” has a “CopySids” option which is a very quick & easy way to do this.

  • We do something like this for one of our larger databases but we only need nightlies so we restore differentials. The secondary is used as a reporting database and to run the integrity check since there’s not enough time in the maintenance window to run it on production.

  • Steve LaRochelle
    January 28, 2015 2:04 pm

    We had numerous databases that were log shipped to an instance dedicated as a log shipping secondary for use by developers. Development staff had to occasionally recover those databases so they could troubleshoot data issues, and needed to write to the DBs. Re-establishing log shipping was a pain due to the size and number of DBs. Solution was to allow dev staff to take a VM snapshot of the SQL server, recover all DBs, then investigate the data issue. Once the data issue resolved, they would revert to the VM snapshot, and we’d be back in log shipping secondary mode. After a few kinks were worked out, it was a very good solution for all involved.

    • Kendra Little
      January 28, 2015 2:16 pm

      Oh, wow Steve. So this inspires two thoughts in me:

      1) That is kind of crazy
      2) That is pretty awesome

      You’re not using the instance for failover, the process doesn’t impact your RPO and RTO numbers, and if you can get that management process with the VM working well and it’s efficient between the teams involved, it’s quite beautiful. Nice!

  • Option 3 for us…we had to basically create a read-only environment that allowed people to do back end queries. But we didn’t want to touch the primary -at all- with their logins/permissions. So until MS introduces a server level datareader role, we use a very imperfect solution to approximate something close to that…grant connect, and control server. Then DENY everything else that can be explicitly denied, including impersonation of all other logins. We have a script that manages all this for us. This leaves read/write/execute (at least). Given the small amount of users we have doing this, and the fact that if they break the server, they are the ones who will be hurt the most, we live with the risk involved (system databases, etc).

    A server level role for basic read-only access would be ideal, but not in the cards yet it seems. Given these users’ proclivity to creating horrible massive blocking queries in Access, I’d rather expose a secondary server to the security risks than have them actually running against the primary, so, as I say, we live with it.

    • Kendra Little
      January 28, 2015 7:48 pm

      Thanks for the comment, Nic. I can see why you’d do it that way– creating the login on the primary server just seems so wrong, even if you’re disabling it. It’s so easy for someone to enable the login thinking it’s a mistake!

    • Kendra Little
      February 5, 2015 8:59 pm

      Nic — today a light bulb went on for me. In 2014 they have basically added the server level role for read only access, it’s just read-everything access. I did some tests and updated the post with the new option to do this in 2014 with a server level role which has been granted ‘CONNECT ANY DATABASE’ and ‘SELECT ALL USER SECURABLES’.

      • Yeah, that’s an awesome addition for 2014. I had thought I’d read somewhere about a server level role for this being added, and when I couldn’t find a system server level role, I thought I misremembered, but as you point out…you just create a user server level role of your own with the new permissions.

        Now…if only we can get our vendor to support 2014! 🙂

  • I believe I snarked this from one of Microsoft’s articles on database mirroring. It creates a couple of stored procedures in Master (sp_help_revlogin and sp_hexadecimal). Once you’ve run it, you just run sp_help_revlogin and you get a bunch of short create login scripts that will assign a specific SID to the login. You run that on your primary server, and then run the resulting create scripts against the secondary server, and poof, your servers’ logins are in sync. Handy-dandy.

    http://pastebin.com/2aFFrVyU

    I do have a question though. Since authentication is stored with the instance, and authorization is stored in the individual database, you should be able to have a login that is the same on both primary and secondary with the exception of the password, yes?

    • Your pastebin link is the code from KB 246133, which I linked to in the article. So I would just use the KB (http://support.microsoft.com/kb/246133), in case they update it over time.

      I’m a little confused about what you want to do with the logins. In general I do not recommend SQL Server Authentication at all – it’s just not as secure as Windows Auth. Getting creative with those means investing time in a less secure direction in the name of security, which just seems counterproductive to me.

      • The goal would be to populate a database where users could run their queries against (relatively) up-to-date data, and use their filthy Cartesian joins that they love so much. We’ve got OLTP databases over here. Ain’t nobody got time for Cartesian joins! 🙂

  • Hi Kendra

    Nice article, in my last few engagements I’ve been surprised when the team didn’t know about sp_help_revlogin, they are usually pleasantly surprised when they hear about it.

    One thing I usually do when working with log shipping, specifically when allowing read access at the secondary (relating to your option 2 in the post), is to delete the login from the primary. This leaves the user in the database intact and with the correct SID, so all of your permissions etc. at the database level stay intact. This prevents anyone accidentally enabling it on the primary, but has no ill effects on anyone’s permissions on the secondary.

    Log shipping is a great feature, especially going across domains or when the business doesn’t need (or want) up to the second data.

    Cheers
    Mat

  • Regarding the part where you have to kick out all the connections before you can restore logs/backups on the STANDBY catalog…

    Our restore task runs every 15 minutes, and inside it, we have a query that checks to see if anyone’s *using* the catalog. If someone is, the task quits. The point is that if someone’s got a long-running query, it gets to finish, but otherwise the STANDBY catalog doesn’t get very stale. The paradigm works well for us. (Yes, we still have to have the client side use extra “retry/fallback” logic when someone tries to connect during a restore.)

    We use this query in an EXISTS clause; if it returns anything, it means someone else is “using” the database catalog:

    SELECT TOP 1 1
    FROM sys.dm_exec_requests
    WHERE session_id != @@SPID
    AND database_id = DB_ID()
    AND status != 'background' -- always non-client activity; not something that should block log shipping.
    AND (open_transaction_count > 0 -- open transactions always need to be waited on
    OR status != 'sleeping' -- sleeping = idle, except when there's an open transaction.
    )

    • Why TOP 1 1?

      • I’m being explicit that I don’t care what’s returned, only that something is. So the engine can stop the query after it finds the first result (TOP 1), and it returns a constant (1), so it doesn’t have to actually pull anything from the table it’s looking at.

  • I’m always surprised that people always forget about the power of log shipping. Your description of the reporting environment reminds me of a post Aaron Bertrand wrote a few months ago. He went into detail on the setup and provided some scripts for those who are interested: http://sqlperformance.com/2014/10/sql-performance/readable-secondaries-on-a-budget I’ve never had the need for such a complex setup, but I’ve used a secondary in standby for reporting when we could tolerate stale data for a few hours.

  • James Swinburne
    June 15, 2015 5:43 am

    Hi there,

    We’re trying to implement log shipping from a SQL Server 2012 instance to a SQL Server 2014 instance.

    I’m not a DBA, I just know a bit of SQL, and the people organising the log shipping are the company who owns the software we use.

    They say it can’t be done and I assume it’s because of the: “This backup cannot be restored using WITH STANDBY because a database upgrade is needed. Reissue the RESTORE without WITH STANDBY.” error.

    Is there a way around this? We’d be using it for queries during the day using SSRS. Thanks very much in advance 🙂

    • James – correct, log shipping from 2012 to 2014 is for a migration, but not for regular reporting use.

      • Hi, thanks for the reply!

        We’ve been advised that to create a separate database for reporting we should use regular (every three hours) log shipping from the live, hosted database – is this not the case?

        • James – I can’t quite parse what you’re saying. Can you rephrase your question?

          I think you might be saying that you want to do log shipping with a readable copy on another server, and that’s totally doable, but only if you’re on the *same* version of SQL Server for both servers.

          • James Swinburne
            June 15, 2015 6:27 am

            Yes that’s what I meant, thanks very much for your help! Luckily we have a downgrade disc….!

  • Hi Kendra,
    Thanks for pointing out the RESTORE WITH STANDBY option is not supported for a database that requires upgrading.
    I was thinking of applying SQL 2008 R2 SP3 to our standby database first and then to the primary. If I understand correctly from this, when the database is in standby mode, the actual upgrade is performed when it is brought online. Hence, if we have a secondary on a higher level than the primary (after an upgrade), the subsequent tlog restore will fail with “this backup cannot be restored using WITH STANDBY because a database upgrade is needed. Reissue the RESTORE without WITH STANDBY.”
    My questions are:
    1) Is this the same even with a Service Pack upgrade (on the same base version release)? That is, a patch update instead of a SQL version upgrade?
    2) If log shipping standby RO is set up for the secondary, is there no other option but to apply the patch to the primary, then the secondary, and to rebuild the log shipping configuration, which requires downtime?

    Please shed some light on this.

    Thanks & regards,
    Irene

    • Kendra Little
      October 28, 2015 1:33 pm

      Hi Irene,

      Service Pack upgrade is OK! It doesn’t write updates to the database. This also means that if you have database mirroring and you apply a SP to one node, you can fail over to it and still fail back to the other. It’s only “major” version changes that cause the issue. (It’s been a while since I did this and I’m writing from memory, but I’m pretty sure on this one.)

      (I think that answers both 1 and 2.)

      kl

      • Hi Kendra,
        We are in the process of going through a SQL 2016 migration, but our processes depend on our log shipping setup. We are running into this same issue while trying to do log shipping with standby from our existing SQL Server 2005 to SQL Server 2016. Are there any hacks or workarounds for this? Also, do you suggest any alternative options? Thanks

        • Erik Darling
          July 7, 2016 2:42 pm

          Hey Brad,

          Kendra is blogging over here these days. As for your question: Log Shipping hasn’t changed in years, so this is still a limitation.

          Thanks!

  • Hi Kendra, nice article.

    I was wondering if you know if there is a way to specify where the .tuf file is created during a STANDBY restore? By default it seems to be created in the same folder as the data files, and I’d love to have it created on a faster drive than the data drives.

    BTW, I liked the idea of switching to NORECOVERY and then to STANDBY for the final restore when you have many log files so much I went and wrote a custom restore job. Certainly did take a bit of fancy scripting, and testing, lots of testing!

    Thanks in advance.

    • Searching my memory – I don’t think I’ve heard of a way to not have it in the backup_destination_directory, where the log files you’re restoring go. You could move that whole directory to a different drive than the data drives, though!

      • I would also expect the tuf file to be in the backup_destination_directory, but weirdly, that’s not the behaviour I am seeing: the restore log files are on a separate disk, as are data and log files. Yet the tuf file is being created in the data drive instead. Not a pressing concern as it all works well anyway, just curious.

  • FYI…I’m log shipping 2014 standard to 2014 enterprise… no problem if there was ever a question…lol

  • Hello,
    Thanks for this… but I have a question:
    Can I create new indexes on the secondary database that are not on the primary database?

    • No, it’s read-only. It can’t have any changes made to it (which would require “recovering” it), lest it break the log chain and not be able to restore anything else. I suppose you could restore with recovery and add the index, but then the only way to get log shipping back is a full restore, so, not very helpful…

  • Kendra Little – You mentioned that you scripted the log shipping to switch between ‘no recovery’ and ‘standby’ modes in order to expedite the restoring of the trans logs.
    Mind sharing?

    • Juan – no, unfortunately, we don’t have that script to share.

      • Brent\Kendra,

        I was able to use the new read-only server role in SQL 2014, which has made granting access to the reporting db that’s log shipped a lot easier. Do you know if SQL 2014 has a similar role to grant execute on SPs and functions that only do read operations?

    • Juan, you can cobble this together using sp_change_log_shipping_secondary_database to change modes. In your procedure you could switch to norecovery, execute the restore job, monitor msdb.dbo.log_shipping_secondary for the last log file copied and msdb.dbo.log_shipping_secondary_databases for the last log file restored, and when they match, switch back to standby. I can’t say I’ve done this, but I’m pretty sure that would be the basics of what you would need to do. You could certainly make it more complex and “fancy”, but if you don’t need those features you can probably get by with that.
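
      A rough sketch of the two mode switches (database name and parameter values illustrative):

          -- Before the catch-up: switch restores to NORECOVERY.
          EXEC msdb.dbo.sp_change_log_shipping_secondary_database
              @secondary_database = N'SalesDB',
              @restore_mode = 0;  -- 0 = NORECOVERY, 1 = STANDBY

          -- After the last file is restored: switch back to STANDBY.
          EXEC msdb.dbo.sp_change_log_shipping_secondary_database
              @secondary_database = N'SalesDB',
              @restore_mode = 1,
              @disconnect_users = 1;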

    • Juan,

      I wrote a stored procedure to do this exact thing for my log shipping. It’s been working well for the past 6 months. It’s probably best that you post any questions you have on the relevant discussion thread over on SQL Server Central rather than on here. But do read the thread first before using it, and I’ll caveat this by saying that before you use it in production, test, test, and test again first!

      (Hope this is OK to link to another site.)

      http://www.sqlservercentral.com/scripts/Log+Shipping/134572/

  • Hi Kendra,

    What if you do not refresh the data on your secondary db for two days?
    Can this cause a problem for your primary db server?

    • Erik Darling
      June 25, 2016 3:39 pm

      Hey Nordin! Kendra is blogging over here these days.

      As for your question: as long as you’re still taking log backups on the primary, it won’t cause any direct problems (log filling up, etc.).

      I’d be concerned, though, that if anything horrible happened to your primary database, it may take you a while to get up and running on your secondary. You’ll have to figure out if the time it takes to refresh and failover meets your RPO and RTO goals when you’re not refreshing data for days at a time.

  • Is Log Shipping a good strategy to adopt in order to migrate a server to a newer version despite the lack of STANDBY / read-only functionality?

  • I am planning to use log shipping and set up SSRS reports on the second server, but my users are kicked off whenever I restore the log file. How can I overcome this? While the log file is being restored, is the database on the second server still accessible (read-only)?

    • You can’t; that’s how Log Shipping works.

      People get kicked out when you restore logs, so you need to pause restores while people expect to be able to access the database.
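
      One way to pause them is to disable the restore job during the reporting window and re-enable it afterward (the job name here is illustrative; log shipping names restore jobs LSRestore_servername_databasename by default):

          EXEC msdb.dbo.sp_update_job
              @job_name = N'LSRestore_PRODSERVER_SalesDB',
              @enabled = 0;  -- set @enabled = 1 when the reporting window closes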

      • Once the database is up, can I create a database snapshot and point the SSRS reports at that, so that application users will not experience interruptions while viewing SSRS reports?

        • If you’re on Enterprise Edition (prior to 2016 SP1), sure.

          But that would be the same thing as pausing log restores. It’s only data up to the point where you take the snapshot.

  • Yes, I am using Microsoft SQL Server 2016 (SP1-GDR) (KB3210089) – 13.0.4202.2 (X64). Not sure if I should go with High Availability or Always On… what do you suggest? Then I could use the SSRS reports with a read-only highly available cluster.
    Can you please suggest a good way, hopefully cost effective as well 🙂

    • No, that’s more than I can address in a blog comment.

      For consulting, click on the consulting link at the top of the site.

      Thanks!

  • Managed to set up log shipping with standby from a 2008R2 (primary) database to 2014 (secondary). It was a bit fiddly, but it works so far…

    • Gary Miller
      June 27, 2017 3:07 pm

      Ian, we’d like to do the same thing between 2012 (primary database) and 2016 (secondary), because our production is not scheduled for upgrade yet. Can you expand on fiddly a bit, or better yet, post the script!

  • Ian Knapton
    June 27, 2017 3:44 pm

    Hi Gary, it’s been a while since I did it; hopefully you can make sense of the quick notes I made at the time.
    1. Back up the database on the primary server.
    2. Set up log shipping for no recovery only (don’t add standby) using the above backup.
    3. On the secondary server, bring the restoring database up with this:
       restore database with recovery
    4. Change the compatibility level to 2016.
    5. Back up the database using copy only.
    6. Drop the original database.
    7. Restore the copy-only backup (same files etc.) with no recovery.
    8. You now have the same database at the same point in time on different versions.
    9. Run through the LS jobs to test the log shipping; it should work (without standby yet).
    10. Disable the LSRestore job and put the 2016 database in standby mode:
        restore database with standby = N'c:\temp\redo.bak' -- or some path you prefer
    11. You need 3 steps in the LSRestore job:
        Step 1: put the database in norecovery mode
        restore database with norecovery
        Step 2: the original log restore step
        Step 3: put the database in standby
        restore database with standby = N'c:\temp\redo.bak' -- or some path you prefer
    12. Enable the LSRestore job.

    Hope that helps – let me know if it works for you or if any amendments are needed.

  • I have recreated it using the above steps again between my 2008R2 and 2014 instances on my laptop, and it’s happily restoring: putting the database into no recovery, restoring the logs, and then putting it back into standby. One other thing, Gary: you may need to kill all the connections to the database, maybe put it in single user mode. I had an error because I had a query window open to the standby database when it did the restore job.

    • Hi Ian,
      Steps 9 and 11.2 will seemingly execute successfully, but no log restores will actually happen! This is because no newer LSN is found in the TLog backups of the database on the primary server – the reason: steps 3 and 11.3 create a newer LSN than that of the database on the primary server.
      So, dead end!
      Thanks

  • Hi Kendra,

    I have set up log shipping in standby mode, but every 15 minutes, when a transaction log comes in for restore, it kills the currently running query and restores the log. Is it possible to set it up to restore the transaction log only after the current query finishes, and to keep other users from connecting to the secondary database during that interval?

  • Yep, a dead end. Why the job doesn’t fail I don’t know, but hands up, it doesn’t work. It would be nice if it could, though.

  • Brent/Erik, I had one question on log shipping which is basic but very hard to understand. I have log shipping with the secondary in standby mode, where the LS restore is scheduled once daily at 10 PM and completes in 4-5 hours. My question is: when does the LS restore know to mark itself as completed? This is to work out, for a user reporting on the secondary database, what point in time of data they should expect. Question also posted on Stack, but no reply yet: https://dba.stackexchange.com/questions/217930/when-does-log-shipping-restore-job-mark-itself-to-completion-in-standby-mode

    • Rohan – I’m sorry, but I’ve read this a few times and I don’t understand what you’re asking. I don’t know what this question means:

      “My question is when does LS restore know to mark itself as completed?”

      It restores backup files. When there are no backup files left, it’s done.

      • “It restores backup files. When there are no backup files left, it’s done.” – Thanks Brent, that’s what I thought. But in the shared tran log backup folder (copied by the LS copy job to the secondary), I see a couple of files still sitting there that were not restored. For example: the LS restore completes at 3:30 AM. Of the log backups, I see only the files which arrived by 2:45 AM restored, but not the ones from 3:00 and 3:15 AM. So I’m just wondering: if these 2 files were copied, why did the LS restore ignore them and mark itself completed? LS backup and LS copy run every 15 mins, while the restore runs once at 10 PM.

        • Rohan — Log Shipping calls sqllogship.exe, which has various trace levels documented.

          You may have an easier time setting up a similar scenario and calling the .exe manually with one of the debug/trace options than waiting for someone else to do it for you.

          Thanks!

  • Hello,

    Using STANDBY mode, from time to time I get an error when restoring a transaction log on the secondary server (Error: 9004, Severity: 16, State: 6, followed by Error: 3414, Severity: 21, State: 4), and after that error I have to do the log shipping setup again.

    I saw a couple of articles that mentioned something about distinct Bytes per Physical Sector settings on the primary and secondary servers, but that does not apply in my case.

    I noticed that if I use the WITH NORECOVERY option, I never get that error (however, I would really prefer to have the secondary databases in read-only mode).

    Has anyone else encountered this? If yes, how did you manage to overcome it?

    Thanks!

  • Will the log shipping standby option work if the primary server is Enterprise SQL 2014 and the secondary is Standard SQL 2014? Will the edition of SQL Server make a difference?

