4 Ways to Move Data Into a Reporting Server

You’ve decided that you need to offload reports to a different SQL Server, and you’re willing to pay the costs. Here are the most common ways to move the data from production to a reporting server, arranged roughly from easiest-to-implement to hardest:

1. Log shipping. Log shipping is more of a technique than a feature: it just means that every time you take a transaction log backup in production, another server restores that backup elsewhere. This has worked for decades, is fairly simple to set up, and even works from on-premises up to the cloud. It’s fairly bulletproof – changes to tables, indexes, etc. don’t break it. However, to update the readable copy, you have to apply the latest transaction log backups – and that can only be done when no one else is using the database. This means it’s mostly a good fit for restores that run after hours, like 6PM-6AM, so all day long users see a snapshot of the data as of this morning. It’s cheap – but it’s not a good fit for real-time reporting.
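
Under the hood, the reporting-server side is just a scheduled RESTORE LOG; using the STANDBY option keeps the database readable between restores. A minimal sketch, with made-up database and file names:

    -- Users have to be disconnected first; the restore needs exclusive access.
    RESTORE LOG SalesDB
        FROM DISK = N'\\BackupShare\SalesDB\SalesDB_20190628_0600.trn'
        WITH STANDBY = N'D:\Standby\SalesDB_undo.dat';  -- leaves the database readable (read-only)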

2. Always On Availability Groups. Kinda like log shipping in that transactions are copied over to the replicas, but independent of log backups, and changes are applied in real-time (or near-real-time) while users are still in the database. Basic Availability Groups are available in Standard Edition, but you can’t read from their secondaries – you need Enterprise Edition to offload reads to secondary replicas. The bummer is that every single replica has to be licensed with Enterprise Edition.
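
If you do pay for Enterprise Edition, offloading reads is mostly a matter of marking a secondary as readable and setting up read-only routing, so connections that ask for ApplicationIntent=ReadOnly get sent to the reporting replica. A rough sketch with invented AG and server names:

    -- Allow read-intent connections on the secondary and give it a routing URL.
    ALTER AVAILABILITY GROUP [SalesAG]
        MODIFY REPLICA ON N'SQLREPORT1'
        WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

    ALTER AVAILABILITY GROUP [SalesAG]
        MODIFY REPLICA ON N'SQLREPORT1'
        WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://sqlreport1.corp.local:1433'));

    -- Tell the primary where to route read-intent connections.
    ALTER AVAILABILITY GROUP [SalesAG]
        MODIFY REPLICA ON N'SQLPROD1'
        WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQLREPORT1')));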

3. Replication. SQL Server can replicate transactions from one server to another, giving you the capability to copy only specific tables (or subsets of them) to the reporting server – something neither of the above options can do. You can even index the reporting servers differently, too. However, this can be tougher to set up and maintain in some circumstances, especially when the source table structures change. It’s not a great fit for third party applications where you don’t control the schema.
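
To give a feel for the “only specific tables” part, here’s a heavily trimmed transactional replication sketch – the database, publication, and article names are invented, and you’d still need a distributor, a snapshot agent, and a subscription on the reporting server:

    -- Run in the publisher database after the distributor is configured.
    EXEC sp_replicationdboption
        @dbname = N'SalesDB', @optname = N'publish', @value = N'true';

    EXEC sp_addpublication
        @publication = N'SalesReporting',
        @status      = N'active',
        @repl_freq   = N'continuous';   -- transactional, not snapshot-only

    -- Publish only the tables the reporting server actually needs.
    EXEC sp_addarticle
        @publication   = N'SalesReporting',
        @article       = N'Orders',
        @source_owner  = N'dbo',
        @source_object = N'Orders';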

4. ETL tools like SSIS, BIML, or Azure Data Factory. Sometimes the source tables just aren’t well-suited for reporting queries – say, they’re overly normalized and require too many joins. Other times, the source data is stored in trickier-to-query structures like XML or JSON. This is by far the most complex and expensive way to do reporting, though: you have to pick the right tool, design the target tables, figure out how to detect changes in the source system, move the data between systems with as little impact as possible, and then maintain that pipeline whenever the source table structure changes.
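
Whichever tool you pick, the change-detection piece often boils down to a watermark query. Here’s a hedged sketch that assumes the source table has a trustworthy ModifiedDate column and that you track watermarks in a hypothetical etl.Watermarks table – plenty of real systems have neither, which is exactly why this option gets expensive:

    -- Pull only the rows changed since the last successful load.
    DECLARE @LastLoad datetime2 =
        (SELECT LastLoadedModifiedDate FROM etl.Watermarks WHERE TableName = N'Orders');

    SELECT o.OrderID, o.CustomerID, o.OrderTotal, o.ModifiedDate
    FROM dbo.Orders AS o
    WHERE o.ModifiedDate > @LastLoad;   -- feed this to SSIS / ADF / whatever moves the data

    -- After a successful load, advance the watermark.
    UPDATE etl.Watermarks
    SET LastLoadedModifiedDate = (SELECT MAX(ModifiedDate) FROM dbo.Orders)
    WHERE TableName = N'Orders';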

I said I’d arranged those from easiest-to-implement to hardest, but I should note that the further you go down this list, the more benefits you gain on the reporting side of the equation. With #3 and #4, your reporting apps are progressively more isolated from changes in the source system: you don’t have to update your reports every time the source system tweaks something about its table structures. (You will, however, have to change the way you move data between systems.)


46 Comments

  • “Sometimes the source data tables just aren’t well-suited for reporting queries…”
    And sometimes they aren’t well-suited for anything except to be burned at the stake before dropping them!

  • What about AWS Database Migration Service and its Continuous Data Replication functionality? I’m currently looking into using it for replicating data from an in-house third-party MS SQL-based application into AWS for reporting and some other requirements. Anyone have any experience or feedback on doing this?

    • Steve Rezhener
      June 28, 2019 6:54 am

      I think this is a great option when your solution is inside an AWS bubble. DMS works like a champ without prior replication knowledge. I successfully implemented it between RDS (SQL Server 2012) and EC2 (SQL Server 2017). Once you enable CDC on the source tables, it starts an ongoing replication.

  • I’m completely opposed to replication. It’s fragile, fairly resource intensive, and it leads to crappy report writing by analysts who just see the data in front of them and use it as is. No matter how many times you tell them, analysts are going to write stuff they aren’t supposed to into their replica databases, which creates work for you when a replication has to be re-initialized and blows away whatever views and tables they added without keeping code copies. It produces a ridiculous amount of logs that have to be maintained, and it frequently creates headaches when you patch and when you maintain the source application – not to mention the hours spent working around the replication when you patch or update an application.

    I’m working on converting nearly 100 replications, mainly to stored procedures and a little bit of SSIS. I have maybe 35 of them done so far, and I’m conservatively saving 20 hours a week that used to go into supporting that wretched feature. I have been able to re-allocate 50-60 thousand dollars of SQL core licensing that used to support replication to other servers that can benefit from the speed.

    THOUSANDS of support hours have been created in the last few years over the instant gratification of enabling SQL replication all over the place. The heaviest reporting typically needed by any organization usually requires data that is only somewhat fresh. On our stuff at least, the reports that had to be real time all went to the live copy, since even a few moments of latency could be intolerable – let alone hours of latency when the replication breaks – but those reports are used very rarely, under very specific circumstances, by a small number of staff. I update my ‘stalest’ reporting data source every 30 minutes and no one notices that the ‘live’ dashboard it feeds is 30 minutes behind.

    I’ll only ever advocate AAGs or ETLs anymore. Even after getting rid of only a small number of my replications, my quality of life as a DBA has drastically improved and I’m able to do more useful things with my time.

    • Keith – yep, you’re not alone – lots of folks have had tough times with replication.

      • NerdOnTheStreet
        June 27, 2019 5:53 pm

        SAN-level snapshots are also a great option if you don’t need up-to-the-minute data.
        Yes, this falls under backup and restore, but you can provision TBs of data into a reporting environment in a couple of minutes. It’s especially useful if you have multiple databases that need to be consistent for reporting.

    • Alex Friedman
      June 30, 2019 4:28 am

      I feel your pain. Good call.

  • Another option, which I would put before #1, is a full database restore. We take a full backup every night and then restore it on our reporting server. Then we run a full DBCC CHECKDB on it. This serves several purposes: 1. It gives us a reporting server. 2. It tests our backups (which are also replicated and restored offsite via a block-level dedup device). 3. It lets us run consistency checks against the DB without the load on production. Not very fancy, but it’s simple and it works.
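
    For anyone copying that pattern, the nightly job is essentially just these two steps (paths and names below are placeholders):

      -- Restore last night's full backup over the reporting copy, then check it.
      RESTORE DATABASE SalesDB_Reporting
          FROM DISK = N'\\BackupShare\SalesDB\SalesDB_Full.bak'
          WITH MOVE N'SalesDB'     TO N'E:\Data\SalesDB_Reporting.mdf',
               MOVE N'SalesDB_log' TO N'F:\Log\SalesDB_Reporting.ldf',
               REPLACE, STATS = 10;

      DBCC CHECKDB (N'SalesDB_Reporting') WITH NO_INFOMSGS, ALL_ERRORMSGS;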

    • Brandon – yeah, I’d consider that Log Shipping v0.5. 😉

    • Thinking about doing this for the reporting needs of an outside auditor. It’s a third-party application that contains some PII the auditor is not supposed to see, and there are over 300 tables and views. Thinking I’ll set up a job that restores the database and then overwrites the sensitive data with something arbitrary before giving the auditor permissions back.
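
      That scrub step could be as simple as a scripted pass over the sensitive columns before the auditor’s permissions are granted – the table and column names here are hypothetical:

        -- Overwrite PII with arbitrary values on the restored copy only.
        UPDATE dbo.Customers
        SET    Email       = CONCAT(N'user', CustomerID, N'@example.com'),
               PhoneNumber = N'000-000-0000',
               SSN         = N'***-**-****';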

      • That sounds like it would work and be quick. It works pretty reliably. I forgot to say we also set it up with the proper permissions for people and then set it to read-only after it is restored.

    • Jeff Mergler
      July 1, 2019 2:07 pm

      @Brandon agreed. This has been my approach as well. I have this scripted to run overnight in a SQL job, and after I restore the db I do a somewhat standard set of steps: truncate or drop a bunch of tables irrelevant to the reporting, such as the audit trail, drop indexes that aren’t needed, and add a few that help the reports run faster but would kill the OLTP database. I then set it to read-only. I would like to just target a few tables and only replicate maybe 6 of the 24 tables in the schema, which would negate the need for truncation and potentially speed things up.

      So this article came at a great time: I’m looking for a better approach than this… something that I can replicate and deploy a little easier. For now, it’s a custom coding step within SQL Agent jobs, but I’m going to investigate log shipping and replication based on the article. Although I’m thinking replication might be too brittle and cumbersome with folks occasionally adding new columns to tables.

  • You did not mention Change Data Capture (CDC) – unless you are bucketing this under number 4 with SSIS. We have been using it for the past 2 years and it has given us the best performance and the closest-to-real-time data I have ever been able to get. Drawbacks that I have noticed: it takes over the job history logs, and there is no way to bring in only partial data from larger tables that hold a lot of historical data – for example, just the last 3 years to date.
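
    For reference, turning CDC on is just a couple of system procedures per table (the schema and table below are examples); the change rows then come out of the generated cdc.* functions for whatever process moves them to the reporting side:

      USE SalesDB;
      EXEC sys.sp_cdc_enable_db;        -- once per database

      EXEC sys.sp_cdc_enable_table
          @source_schema = N'dbo',
          @source_name   = N'Orders',
          @role_name     = NULL;        -- no gating role for readers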

  • Ioannis Daskalakis
    June 27, 2019 10:33 pm
    • Ioannis – that’s definitely an option too. Do you actively use it as a reporting server? Have you found any drawbacks with that approach?

      • Ioannis Daskalakis
        June 28, 2019 4:01 am

        Hi Brent,
        Only problem I had was that sometimes the database snapshot creation would fail.

        • Yeah, that’s been my experience – I’ve had a tough time making that bulletproof reliable.

          • We have a 700 GB database that is in a 6-node AG. On a secondary reporting replica, we have two daily database snapshots (one for today, one for yesterday) that are maintained by a SQL Agent job. We used to get corruption in the older snapshot. We reformatted the dedicated snapshot drive to use large NTFS records (include /L switch in the format) to handle very large sparse files (such as the database snapshot file). Downside is that the redo rate on this node, for this database, is terrible (250 KB/sec).
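
            For context, each of those daily snapshots is just a CREATE DATABASE ... AS SNAPSHOT call in the Agent job – names here are invented, and the logical name has to match the source database’s data file:

              CREATE DATABASE SalesDB_Snap_Today
              ON (NAME = N'SalesDB', FILENAME = N'S:\Snapshots\SalesDB_Snap_Today.ss')
              AS SNAPSHOT OF SalesDB;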

  • Shaun Austin
    June 28, 2019 12:55 am

    One big bummer with AG readable secondaries is that the indexes can’t be different from the primary, which means you can’t optimize the indexes for reporting-style queries without creating them on the primary and taking the associated maintenance hit. It would be nice if you could at least disable them on the primary.

  • Rhodri Evans
    June 28, 2019 2:31 am

    Service Broker anyone? (Runs away quickly.)

  • Chad Miller
    June 28, 2019 4:28 am

    Another option for enterprises: SAN-based snaps and clones. The storage vendors have gotten very good at this feature – depending on the vendor, snapping multi-TB LUNs takes minutes. I would rank this option as the easiest. Your storage admin does most of the work, then provides the command-line syntax to the DBA. The DBA sets up a job to detach, snap, and attach the database.
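
    The DBA-side job in that kind of setup is usually a detach/attach wrapped around the vendor’s snap command – something like this sketch, with made-up names and paths:

      -- On the reporting server: drop the old copy, then attach the refreshed LUN.
      EXEC sp_detach_db @dbname = N'SalesDB_Reporting';

      -- (storage vendor command line refreshes the snapshot/clone LUN here)

      CREATE DATABASE SalesDB_Reporting
      ON (FILENAME = N'S:\SnapLun\SalesDB.mdf'),
         (FILENAME = N'S:\SnapLun\SalesDB_log.ldf')
      FOR ATTACH;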

    • Chad – true, I’m actually working at a shop doing that this week, hahaha, but I keep forgetting about it as a new valid option because I’m doing so much cloud migration work. For example, this company’s having to move away from that snapshot option as they move to the cloud.

  • Brian Boodman
    June 28, 2019 5:58 am

    I think it’s still possible to do that in the cloud (e.g., https://aws.amazon.com/blogs/mt/take-microsoft-vss-enabled-snapshots-using-amazon-ec2-systems-manager/ ), and the results are probably better since it (probably?) doesn’t clear the cache. The approach is definitely not simple, but the plus side is that backup/restore is nearly instant. Of course, calling it instant is misleading: what actually happens is that the restore is done lazily, resulting in disk I/O being much slower until the restore is complete.

    • Brian – “think it’s still possible” isn’t a great endorsement. 😉

    • I wouldn’t trust it. Other than NetApp’s SnapManager line of snapshot products, I haven’t run across any third-party snapshot-based solution that has been stable over time through patching. I believe NetApp was working on making a cloud version of their filers, but I can’t imagine that being anything less than horrendously expensive.

      More than just not working, when snapshots go wrong, they can randomly decide to quiesce a machine for half an hour or even completely destroy a database or a machine.

      MS’s “application consistent” snapshots from Hyper-V seem to be pretty reliable, but sometimes the quiesce time can be excessive.

  • Steve Rezhener
    June 28, 2019 7:00 am

    Brent, good summary. I think you need to add no. 5, the cloud option – AWS DMS and Azure Data Sync.

  • Jeff Mergler
    July 1, 2019 2:12 pm

    This was a helpful post, Brent. It’s very timely for me, as I’m trying to hone a process to have a reporting database created at most customer sites by default. The wacky reports that they create and run against prod are just killing us. The reports have no business being run against prod. Let them crash the reporting box with their insanity. Thank you.

  • Mirroring technology can also be used for the report server.

  • Thanks. I saw the comment.
    We really do use mirroring with database snapshots for reporting, and in our case it still remains the cheapest solution. There are real problems with the snapshots, though: we have to detect failures and retry the snapshot creation. There is another problem – a recurring error 665, which is related to operating system limitations and the snapshot structure (it is a sparse file). Error 665 occurs with large amounts of data and a large number of simultaneous snapshots.
    If we get the budget for it, we will look for other solutions.

    • Alex – how is it the cheapest solution when it requires Enterprise Edition on both the primary and the readable replica? Why wouldn’t you switch to log shipping and use Standard Edition on the secondary?

      • Before the mirroring option, we already had a log shipping setup. For that, 10 servers were deployed for reports, and we already had Enterprise licenses.
        We then purchased an EMC XtremIO storage array and decided to do everything on two servers (the source server and the report server) plus that storage system. We chose mirroring with database snapshots for reports.
        Everything worked well until the source (OLTP) database grew to a certain size. We are still coping with the problems, but we are thinking about changing the solution.

  • Option #4 with SSIS also includes soooo many ways of shifting the data!

  • What can you recommend for Oracle environments? Many thanks 🙂

    • A consultant who works with Oracle. You’re welcome.

    • I did this a number of ways using SSRS. Some reports pointed straight at the live production environment, simply using the Oracle drivers – I found this straightforward.

      For more complex reporting requirements:
      * Still pointed to the production environment as above, but referenced materialized views refreshed on a daily schedule.
      * An MS-SQL Server DWH, using either SSIS with the Oracle drivers, or alternatively a linked server in MS-SQL and a SQL Server Agent job to copy records across using pure SQL.

      The above is based on working with Oracle in the public sector for around 11 years.
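
      The linked-server flavor of that last option tends to look something like this – the linked server, schema, and table names are placeholders:

        -- Agent job step on the SQL Server side: pull yesterday's rows from Oracle.
        INSERT INTO staging.Orders (OrderID, OrderDate, OrderTotal)
        SELECT OrderID, OrderDate, OrderTotal
        FROM OPENQUERY(ORACLE_PROD,
            'SELECT order_id, order_date, order_total
             FROM app.orders
             WHERE order_date >= TRUNC(SYSDATE) - 1');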

  • We are running MERGE statements to replicate data from the primary to the reporting database, but the database is too big (250GB), so replicating millions of records takes too long. We run a SQL Agent job with a MERGE statement for each individual table.
    The problem is that the MERGE takes a long time because it runs a remote query, and the other big problem is BLOCKING.
    So we are thinking of implementing an alternative option: 1) log shipping or 2) transactional replication.

  • Currently I am taking a copy-only backup overnight and restoring it on the report server. So far it works fine. But now I have a requirement to increase the frequency of the data refresh, say to every hour. Backup and restore is almost an hour-long task, so which approach should I apply now? Any thoughts?

  • A bit late to this thread. I’m considering using either snapshot replication or a backup/restore option for our reporting DB. One question I have, though: when using the backup/restore route that others have mentioned, how do you get around views, stored procs, or even denormalized tables created within the reporting DB that will get lost on restore? Do you have create scripts that run post restore?

    • Either run scripts post restore, or create them in the primary database. (Views and stored procs don’t take up significant space, for example – no reason not to just put them in the primary.)

