SQL Server 2012 AlwaysOn Availability Groups Rock

SQL Server 2012 brings huge improvements for scaling out and high availability. To put these changes into perspective, let’s take a trip down memory lane first and look at the history of database mirroring.

SQL Server 2005 first introduced mirroring, although it wasn’t fully supported until a service pack. In many ways, mirroring beat the pants off SQL Server’s traditional high availability and disaster recovery methods. Log shipping, clustering, and replication were known for their difficulties in implementation and management. With a few mouse clicks, database administrators could set up a secondary server (aka mirror) to constantly apply the same transactions that were applied to the production server. In synchronous mode, both servers had to harden every transaction before it was considered committed, giving a whole new level of confidence that no transactions would be lost if the primary server suddenly died. In asynchronous mode, servers separated by hundreds or thousands of miles could be kept nearly in sync, with the secondary server a matter of seconds or minutes behind – better than no standby server at all.
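
For the T-SQL-minded, here’s a minimal sketch of what that setup looked like under the hood – assuming a database named Sales, mirroring endpoints already created on both servers, the mirror restored WITH NORECOVERY, and made-up server names:

    -- On the mirror server: point it at the principal.
    ALTER DATABASE Sales SET PARTNER = 'TCP://principal.contoso.com:5022';
    -- On the principal server: point it at the mirror to start the session.
    ALTER DATABASE Sales SET PARTNER = 'TCP://mirror.contoso.com:5022';
    -- Synchronous (high safety) mode: transactions harden on both servers.
    ALTER DATABASE Sales SET PARTNER SAFETY FULL;
    -- Asynchronous (high performance) mode: the mirror lags seconds or minutes behind.
    ALTER DATABASE Sales SET PARTNER SAFETY OFF;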

Preparing to Compress the Stream

SQL Server 2008 improved mirroring by compressing the data stream, thereby lowering the bandwidth requirements between the mirroring partners.

In one of the most underrated features of all time, Microsoft even used mirroring to recover from storage corruption. When the primary server detected a corrupt page on disk, it asked the mirror for its copy of the page, and automatically repaired the damage without any DBA intervention whatsoever. Automatic page repair doesn’t get nearly the press it deserves, just silently working away in the background saving the DBA’s bacon.
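
If you want to see whether that silent bacon-saving has been happening on your own servers, a quick check (assuming SQL Server 2008 or later) looks something like this:

    -- Pages the mirroring partners repaired automatically, no DBA required.
    SELECT database_id, file_id, page_id, error_type, modification_time
    FROM sys.dm_db_mirroring_auto_page_repair;

    -- All suspect pages the server has logged, repaired or not.
    SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
    FROM msdb.dbo.suspect_pages;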

Database Mirroring’s Drawbacks

While SQL Server was able to read the mirror’s copy of the data to accomplish page repairs, the rest of us weren’t given the ability to do anything helpful with that data. We couldn’t directly access the database. The best we could do was take a snapshot of that database and query the snapshot, but that snapshot was frozen in time – not terribly useful if we wanted to shed load from the production server. I wanted the ability to run read-only queries against the mirror for reporting purposes, or for queries that could live with data a few minutes old. Some companies implemented a series of snapshots for end user access, but this was cumbersome to manage.
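
That snapshot workaround, for reference, was just a point-in-time database snapshot created on the mirror server – a sketch along these lines, where the database name, logical file name, and paths are invented and have to match your own system:

    -- On the mirror server: create a frozen-in-time copy that users can query.
    CREATE DATABASE Sales_Reporting_0800
    ON (NAME = Sales_Data, FILENAME = 'D:\Snapshots\Sales_Reporting_0800.ss')
    AS SNAPSHOT OF Sales;
    -- Reports point at the snapshot; to refresh the data, drop it and create a new one.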

Definitely Not Using High Safety Mode

Unlike log shipping and replication, mirroring only allowed for two SQL Servers to be involved. We could either use mirroring for high availability inside the same datacenter, OR use it for disaster recovery with two servers in different datacenters, but not both. Due to this limitation, a common HA/DR scenario involved using a cluster for the production server (giving local high availability in the event of a server failure) combined with asynchronous mirroring to a remote site. This worked fairly well.

Fairly.

The next problem: database mirroring failovers are database-level events. DBAs could fail over one database from the principal to the mirror, but couldn’t coordinate the failover of multiple databases simultaneously. For applications that required more than one database, this made automatic failover a non-option. We couldn’t risk letting SQL Server fail over just one database individually without failing over the rest as a group. Even if we tried to manage this manually, database mirroring sometimes still ran into problems when more than ten databases on the same server were mirrored.

Database mirroring didn’t protect objects outside of the database, such as SQL logins and agent jobs. SQL Server 2008 R2 introduced data-tier applications (DACs), a packaged set of objects that included everything necessary to support a given database application. I abhor DACs for a multitude of reasons, but if you were able to live with their drawbacks, you could more reliably fail over your entire application from datacenter to datacenter.

Enter AlwaysOn: New High Availability & Disaster Recovery

It’s like mirroring, but we get multiple mirrors for many more databases that we can fail over in groups, and we can shed load by querying the mirrors.

That might just be my favorite sentence that I’ve ever typed about a SQL Server feature.

I am the last guy to ever play Microsoft cheerleader – I routinely bash the bejeezus out of things like the DAC Packs, Access, and Windows Phone 7, so believe me when I say I’m genuinely excited about what’s going on here. I’m going to solve a lot of customer problems with mirroring 2.0, and it might be the one killer feature that drives Denali adoption. This is the part where I raise a big, big glass to the SQL Server product team. While I drink, check out the Denali HADR Books Online pages and read my thoughts about the specifics.

First off, we get up to four replicas – the artist formerly known as mirrors.

Denali also brings support for mirroring many more databases. We don’t have an exact number yet – we never really got one for 2005 either – but suffice it to say you can mirror more databases with confidence.

Preparing to Demo Availability Groups

DBAs set up availability groups, each of which can have a number of databases. At failover time, we can fail over the entire availability group, thereby ensuring that multi-database applications are failed over correctly.
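
To give you a feel for it, here’s a rough sketch of what defining and failing over a group looks like in T-SQL. The syntax may shift between CTPs, and the replica names, endpoints, and databases below are invented for the example:

    -- On the primary replica: two databases that must fail over together.
    CREATE AVAILABILITY GROUP SalesAG
    FOR DATABASE SalesDB, SalesArchiveDB
    REPLICA ON
        'SQLPROD1' WITH (ENDPOINT_URL = 'TCP://sqlprod1.contoso.com:5022',
                         AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                         FAILOVER_MODE = AUTOMATIC),
        'SQLPROD2' WITH (ENDPOINT_URL = 'TCP://sqlprod2.contoso.com:5022',
                         AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                         FAILOVER_MODE = AUTOMATIC);

    -- At failover time, run on the replica taking over: one command moves the whole group.
    ALTER AVAILABILITY GROUP SalesAG FAILOVER;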

Denali’s HADRON improvements change my stance on virtualization replication. For the last year, I’ve preferred virtualization replication over database mirroring because it was easier to implement, manage, and fail over. Virtualization still wins if you want to manage all your application failovers on a single pane of glass – it’s easy to manage failovers for SQL Server, Oracle, application servers, file servers, and so on. However, the secondary servers don’t help to shed any load – they’re only activated in the event of a disaster.

AlwaysOn Isn’t Perfect

I need to be honest here and tell you that with AlwaysOn, Microsoft threw out some of the baby with the bathwater. Plain old database mirroring still exists in Denali, but there’s going to be a lot of outcry because some of our favorite things about mirroring, like extremely easy setup, didn’t make it into AlwaysOn. Take a deep breath and read through this calmly, because I think if you see the big picture, you’ll agree we’ve got a much smarter toddler.

AlwaysOn relies on Windows clustering. I know, I know – clustering has a bad reputation because for nearly a decade, it was a cringe-inducing installation followed by validation headaches. Some of my least favorite DBA memories involve misbehaving cluster support calls with finger-pointing between the hardware vendor, SAN vendor, OS vendor, and application vendor. This is different, though, because clusters no longer require shared storage or identical hardware; we can build a cluster with a Dell server in Miami, an HP server in Houston, and a virtual server in New York City, then mirror between them. Now is the right time for AlwaysOn to depend on clustering, because the teething problems are over and clustering is ready for its close-up. (One caveat: clustering requires Windows Server Enterprise Edition, but Microsoft hasn’t officially announced how licensing will work when Denali comes out.)

When you’ve got a clustering/mirroring combo with multiple partners involved, you want to know who’s keeping up and who’s falling behind. You’ll also want to audit the configurations. There’s an improved Availability Group dashboard in SQL Server Management Studio, but I’d argue that GUIs aren’t the answer here. For once, brace yourself – I would actually recommend PowerShell. I’ve given PowerShell the thumbs-down for years, but now I’m going to learn it. It’ll make HADRON management and auditing easier.
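
Whether you script it in PowerShell or plain T-SQL, the underlying health data comes from the new DMVs. A minimal T-SQL sketch of the “who’s keeping up” question looks roughly like this, assuming the DMV names stay as currently documented:

    -- Log send and redo queues per database, per replica: who is falling behind?
    SELECT ar.replica_server_name,
           drs.database_id,
           drs.synchronization_state_desc,
           drs.log_send_queue_size,   -- KB waiting to be sent to this replica
           drs.redo_queue_size        -- KB received but not yet replayed
    FROM sys.dm_hadr_database_replica_states AS drs
    JOIN sys.availability_replicas AS ar
        ON ar.replica_id = drs.replica_id
    ORDER BY drs.log_send_queue_size DESC;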

Do not try what you're about to see at home. We're what you call consultants.

Summing Up Denali AlwaysOn

There are a lot of challenges here, but as a consultant, I love this feature. It’s built into the product, and it gives me new ways to handle scalability, high availability, and disaster recovery. There’s a lot of potential in the box, but the clustering requirements are going to scare off many less-experienced users. Folks like us (and you, dear reader, are in the “us” group) are going to be able to parachute in, implement this without spending much money, and have amazing results.

Over the next few months, I’ll be taking you along with me as I dig more into this feature. I plan to implement it in labs at several of my customers right away, and I’ll keep you posted on what we find. If it’s anywhere near as good as it looks, I’m going to be raising a lot of glasses to Microsoft.

If not, I’ll be pointing Diet Coke bottles at Building 35 until they fix the bugs, because this feature could rock.

More SQL Server AlwaysOn Resources

If you want to learn more about how to design, test, and manage this slick feature that offers high availability, disaster recovery, and scale-out reads, visit our AlwaysOn Availability Groups resources page.

105 Comments

  • Maybe it’s just me, but I live in fear of mixing up the letters when typing “Denali” and “HADRON”.

    • Oh and fine post by the way, I really do appreciate the reviews of the new features. You’re like the Roger Ebert of SQL Server this week.

    • Nope, not just you. My first thought is how long would it take for HR to be notified when I mistyped it, the second is whether they really want to evoke images of the LHC and its potential to create black holes and/or other space-time continuity issues…

  • There are a lot of new DMVs related to HADRON that will be very useful for keeping tabs on what is happening.

  • “SQL Server 2008 R2 introduced contained databases (DACs)”
    Hi Brent,
    Sorry for being picky but given that contained databases is a term for a future (currently unannounced) feature it might make sense to change that sentence. After all – DAC is (somehow) short for Data Tier Application (I think).

    cheers
    Jamie

  • I am very much looking forward to finding out how the Query to the Mirror will work. I have many clients that would really benefit from this that are currently using Transactional Replication to satisfy their reporting needs.

    • It gets even better, by the way – you can run your backups on the replicas, too!

      • Brent,

        A few questions:

        1. Does this include transaction log backups in some sort of fashion for point-in-time restores?

        2. Does HADRON come with any known limitations as to how many databases it can handle before it does not work very well? I know with DB mirroring, once the number exceeded 10 it was not a favorable option.

        Thanks,

        Dave

        • David – right now, I don’t have the answer to those yet. HADRON isn’t working in CTP1 yet, so it’s tough to tell, but I’m excited to find those answers out.

  • For me the offload of READs is something that is of great interest.. time to roll up the sleeves and start to try out this puppy dog.. hoping its bite is as good as its bark..

  • Wow, wow and wow, I want to try this baby… Sounds great and I hope to be part of this new journey. Excellent Brent. Thanks to share this useful note.

  • Good post! Mirroring 2.0 will be a great feature, especially if we can read the replicas with substantial load (and not just put a few ad-hoc readers on there). That would almost solve the scale-out problems that large installations have with SQL Server (anyone want to retrofit a federated schema over an existing multi-TB OLTP database? Yeah, I’m trying not to also. This might keep me from ever having to do that.)

    BUT BRENT!!! How can you be a PowerShell basher if you don’t know PowerShell yet? It really is great for all sorts of things if you are a scripter / automater. If HADRON gets you to learn PS, I bet you’ll wind up loving it.

  • Brent,

    Great post! I think it would be nice if you mention the option of running backups on the replicas in your post. Some people may not read the comments.

  • This sounds terrific! But, I just finally got up to speed on transactional replication. 🙁 But this definitely sounds better.

    It sounds safer too. Not like getting soaking wet and playing with liquids on top of something that clearly says, “Danger. High Voltage“. 😉

  • I just want to correct one item in the post. Database mirroring as we know it was not thrown out. You can still do just plain old database mirroring in Denali. You don’t have to use HADRON if you don’t need it.

    This also means that if you’re currently using database mirroring extensively, you can continue to use it in Denali and transition to HADRON as you deem appropriate, if at all.

    • Absolutely agreed – mirroring wasn’t thrown out, but when Microsoft built HADRON, they threw out a lot of the mirroring-related good stuff in that feature. Does that make sense? How should I reword that so that it makes it clear that HADRON doesn’t include the mirroring-ish goodness?

  • I think that caveat you mention is HUGE and will be a MASSIVE impediment to upgrades for the 80+% of the installations out there that are using database mirroring on a STANDARD EDITION stack. Yet another way in which Microsoft is leaving the Tier 2-n users (vast majority of licenses) in the lurch recently.

    • Well, Microsoft has repeatedly hinted that they want to keep the existing functionality the same – if you’re doing database mirroring on Standard Edition now, you can continue to do that. It’s not going away. But if you want the new features, that’s another story.

      I hear from more and more of my customers that they’re almost-but-not-quite-tempted to switch to Enterprise Edition for the snapshots, partitioning, data compression, etc. There’s a lot of neat features, but often they don’t quite add up to the Enterprise Edition price tag. The AlwaysOn HADRON features might be the thing to do it.

      I don’t begrudge Microsoft the ability to make money. They’re providing new value, and they should get paid for that value.

  • I am absolutely with you on paying for added functionality if you need it. But I REALLY think that Microsoft is and has been losing LOTS of upgrade revenue by a) not providing important functionality that virtually everyone can benefit from (Windowing functions come to mind here) or b) putting a few key items virtually everyone can benefit from only in Enterprise+ (data compression comes to mind here).

    They did down-level backup compression due to strong feedback. Unfortunately that was a direct compete with numerous vendors and was IMHO developer hours spent doing something that was already out there instead of building/IMPROVING existing items in the product stack that only Microsoft can do.

    Eventually I suppose there will be enough “EE-only” features to motivate more to pony up the 4X cost, but I don’t know when that will be nor even if they do it will be a net revenue win for them. Difficult calculus for sure.

    • Kevin – when you say “LOTS of upgrade revenue”, look at it mathematically. Last time I checked, Enterprise was something like 5x the cost of Standard. If just one out of five customers decides the Enterprise features are worth the cost, then the gamble paid off. 20% is a pretty low conversion rate, and I’ve seen higher conversions than that when I count per instance, not per client. (Meaning, one of my clients upgraded *all* their instances to Enterprise, and that skewed the overall numbers by a lot.)

  • Sorry – by “upgrade revenue” I was on the VERSION upgrade part of the post, not the EDITION upgrade part. 🙂 I have standing as-needed clients that have hundreds of servers and tens of thousands of databases. Roughly 20% of those servers are still on SQL 2000, 60% on 2005 and ONE on SQL 2008 R2 and the ONLY ‘advance’ feature that server is using is data compression. Somewhere around 30% of the servers are still 32 bit!! Like I said – I am Mr. Tier 2-n, so I think you and I run in different circles. :-O
    I just wish Microsoft would do more for the low-mid range clients instead of focusing soooooOOOOO much on the top 0.X% of the clients or potential clients is all. Hmm, actually it isn’t all. I also very much hope they would stop releasing incredibly buggy/feature incomplete features. Likewise they need to FINISH/IMPROVE features that are released instead of so frequently moving onto the next new “shiny” thing. I suppose that discussion is for another blog post though . . .

    • Ah, if you mean versions, yeah, I’d agree totally. Most of my clients aren’t upgrading SQL Server no matter what features come bundled in or what the price is, period. As long as the database server continues to work (meaning, not fail) they’re happy. To get performance improvements, they’ve just virtualized onto newer hardware, which buys built-in HA/DR options without touching the application.

      Even at my clients with Enterprise Agreements & Software Assurance, when new versions of SQL Server are effectively free, they’re still not upgrading. For them, it’s not a question of the software cost or features – they just don’t have the manpower and project downtime to do version upgrades willy-nilly. I think that’s actually evidenced by your clients sticking on 2000 and 2005, too – ask them if Denali was free tomorrow, how long would it take them to upgrade all their 2000 and 2005 instances?

      My clients who do deploy new versions of SQL Server are tending towards Enterprise Edition, which means Microsoft’s higher-priced feature strategy is working.

  • Claudio Almeida
    November 14, 2010 4:03 pm

    Hi Brent,

    Do you have any information on how the auditing will work in this new version (Denali)?

    Will it be possible to audit SQL Agent/Jobs?

    Will it be possible to audit only a column of the table?

    Will there be any improvement in this area?

    Regards,
    Claudio

    • Claudio – I don’t have any information on auditing specifically, but try checking out the Books Online to find out. I haven’t used auditing myself in my role. It looks like a really interesting feature, absolutely nothing against it, but it’s just not my area of specialty. Good luck though!

  • When I first read about HADRON too, I was also excited. I just completed setting up mirroring on a couple of our servers and would love to play with the new version of it. We replaced transactional replication with mirroring, so we lost the ability to query data at the subscriber / mirror. We found an alternative, but being able to query the mirrors would be a big plus.

  • will filestreams be included in this failover?

    • Mark – yes, from what Microsoft told me during a non-NDA interview during the Summit, filestream will be considered a first-class citizen here (as will the new FileTable) and failover will work.

  • Hi Brent,

    I just implemented HADR (Availability Groups) in my test virtual Setup.

    I used the following Infrastructure To implement HADR / High Availability Group

    1) Domain Controller for Authentication
    2) Two Virtual Machines with Windows Server 2008 R2 with Failover Clustering installed on each Node.
    3) Local Instance of SQL Server (Not Clustered Instance) installed on Virtual Machines with Windows Server 2008 R2 (which are mentioned in point 2)

    More Details and Step by Step Guide is posted here

    http://sqlserver-training.com/how-to-implement-hadr-a-high-availability-and-disaster-recovery-solution-in-sql-server-step-by-step-guide-with-screenshots/-

    Suggest me some good scenarios to test this HADR (Availability Groups) now.

  • Really great article! Thanks.

  • I really enjoyed reading this blog entry back in November and it’s still pretty much the best overview of Denali High Availability more than 6 months after publication. Brent, reading through the finer detail of HADR, how much of a limitation do you think it’s going to be that there can be only 1 HADR-enabled instance per cluster node? Together with the limitation of 8 databases per availability group, that looks likely to promote over-complicated architectures. Thank you, donaldc

    • Donald – thanks, glad you liked it.

      I’m not worried about the architecture complexity at this point. I generally try to avoid instance stacking anyway.

  • Hi! Thanks for this information.

    Can I migrate Jobs and logins with AlwaysOn?

    That’s a problem when using mirroring.

    Thank You.

  • I was wondering if you could elaborate on how AlwaysOn can be used to simplify zero-downtime application upgrades.

    • Clark – I don’t think you can do zero downtime app upgrades with AlwaysOn Availability Groups. Some schema changes on big databases can still require locking or downtime.

  • Do the active secondaries require full SQL Server licensing?

  • Hi Brent,

    Just wondering which network in the cluster SQL Server Denali uses for synchronizing the databases in the availability group? Is it using the heartbeat?

  • Brent –

    Great write up on AlwaysOn! You always do a great job clarifying subjects, and always enjoy reading your site! Keep on keeping on!

  • Brent,

    When using AlwaysOn do we need to convert the databases participating in the Availability Group to contained databases? Can I configure AlwaysOn on 2 instances (default and named) on the same machine?

    • Rafael – the databases don’t have to be contained. I haven’t tried doing 2 instances on the same machine with AlwaysOn Availability Groups, but I wouldn’t recommend it – what are you trying to achieve?

      • Brent,

        Thanks a lot for the prompt reply. I’m asking this question since I’m currently having some issues with my POC on Denali. One of the issues encountered is that when I fail over to the secondary replica, the user mapping to the database is removed, thus I’m not able to access the database. But if I convert the database to a contained one and create the ID inside the database, then this resolves the issue since authentication is done in the database.

        For the 2nd issue, I’m not able to connect to the listener on the availability group created on the 2nd (named) instance. But if I shut down the default instance I can connect to the listener successfully. I just wanted to test a 2-node cluster with two instances running on each node. Apparently AlwaysOn doesn’t work on this type of setup.

  • Brent,

    Your post mentions that “SQL Server 2008 R2 introduced contained database”. I was pretty surprised to read this and googled around. Can only find it for 2012. Does this feature really exist with 2008 r2? why would you not use it (you seem to have some natural bias against it).

    Thanks in advance.

  • Brent,

    Thank you for posting the links.

    Best,
    –Sub#

  • Hello,

    Anybody have any experience using SIOS DataKeeper Cluster Edition with SQL Server 2012 FCI?

    Thanks!

    • Sure, what specific questions did you have?

      • I’d like to implement 3-nodes in Site #1 and 2-nodes in Site #2 (DR) with local SSDs. The only way to use this would be AlwaysOn Availability Groups, but I’d like to failover at the instance level (regular SQL Cluster), but want to use local SSDs.

        Datakeeper seems to be the way to use local drives but at the same time failover at the instance level. But the question is with 3-nodes w/local SSDs each, how well does the Synchronous storage replication works (latency?) and async to the DR site (multisite clustering). Any experience insight would be helpful.

        Thanks!

        • Hmmm, that’s an interesting question. I’d take a step back and first say that AlwaysOn Availability Groups have been the most complex piece of database infrastructure I’ve ever implemented. I generally don’t recommend them without a team of full time database administrators that are comfortable with clustering already. You definitely don’t want to do this if it’s your first cluster, your first replicated shared storage implementation, and your first AlwaysOn setup. I’d tackle those pieces individually first before trying to combine them.

          Now, having said that, about the performance – the latency will depend on your workloads (big inserts versus small inserts, the number of them, reads vs writes, etc) and the hardware & network involved. I’ve had clients try synchronous storage replication and they’ve been completely happy with it because they invested enough in hardware & networking, and their user loads were mostly reads. I’ve also had clients hit the eject button because it couldn’t keep up.

          Remember that this shouldn’t be your first deployment of any of those individual pieces, so when you’re building something like this, odds are you’ve already got servers with local SSDs in-house that you can use to test synchronous replication and see how well it keeps up just with AlwaysOn by itself, not with Datakeeper. That’s the first step in finding your answer – then start folding in additional technologies.

          Hope that helps!

          • Hi Brent

            Jumping back into the future here… I was just really interested to hear more about your experiences using SIOS DataKeeper (Cluster Edition), and whether you’ve seen\deployed it in PROD scenario’s for a high-volume OLTP scenario…

            It is quite a daunting prospect, moving away from the old “trusted” SAN and pin your cluster on Software replication…I’ve found quite a bit out there in terms of how it works and who have used in their environments, but I just don’t get that warm-fuzzy feeling…not yet though…

            Thanks and keen to hear your thoughts!

          • Nico – we don’t have any clients using that software, sorry. The good news is that if you’ve got a high-volume OLTP scenario, it should be relatively easy to load test it in a separate pre-production or staging environment to see how it does. Good luck!

  • Thanks Brent. It is not the first Cluster setup. Our current setup is a SQL Server 2008 R2 clustered instance using EMC storage.

    We are planning to setup a new SQL Server 2012 environment and make the DB switch after all the testing. We are not planning to move to production without extensive testing.

    We have been evaluating AlwaysOn AG with 5 nodes using PCIe SSDs, and when testing 500K to 1M inserts (not a loop), HADR waits went up; when the sync was changed to async it vanished. Also, instead of dealing with the mirroring at the DB level, I thought a solution that would let me use local disks but with an instance-level move – technically an FCI with local disks, which is not possible without third-party software like Datakeeper. I am certainly going to perform testing. I’d like to test 3 nodes in a cluster with local SSDs + Datakeeper (Sync). I just wanted to know if anyone had any implementation experience using SQL Server 2012 (WS2012) with Datakeeper in synchronous mode (using a dedicated 10GbE network between nodes for the sync).

    Thanks!

    • Kathleen Zhang
      October 8, 2015 9:07 am

      Hi Nico,

      Did you implement the above solution — FCI with local disks using Datakeeper? Do you mind sharing your experience?

      Thanks,

      Kathleen

  • Hi
    One question about availability groups (AG) that I can’t find an answer on is: What do I do with DTC?

    AG does not support DTC, if I’m correct, but don’t I still need to install and setup DTC? But where do I put it? Only local on the servers or in the cluster?

    You recommend an article from Cindy Gross in the AG pdf but it doesn’t say how to think when AG is used.

    Hope you can light up my path…

    Peace!

    • Thomas – nothing changes when AGs are in play. Just keep in mind that databases in an Availability Group don’t support transactional consistency when failed over together to another replica. If you’re lucky and the databases are all in sync, you’re fine – but when there’s a failover, it’s not usually *good* luck involved. 😉

  • Hi Brent,

    Is there a limit on the number of databases in an availability group? If there is (I can see a comment above that there is a limit of 8 databases – true?), what is the best way to deal with the other databases?

    Many thanks…

  • Hi Brent,
    Do you know of ways to query the sync lag and performance info over a period of time? Kind of like how mirroring has a command, sp_dbmmonitorresults ‘dbname’,5,0 to look at the past 5 days.
    I need to answer questions like how does the Send Queues, Apply Rates and Est Data loss look at each hour over the past 1, 3 or 5 days ?

    The Dashboard shows realtime info but I’m trying to find something that can show me how it looks at each hour (or so) over the past n days?

    Thanks!

  • Hi Brent,

    How does AlwaysOn complement Analysis Services? Do we have to use a mixture of Availability Groups for databases and SQL Server clustering for Analysis Services? Please shed some light on how HA works for both databases and cubes. Many thanks.

    • AK – unfortunately no, AlwaysOn Availability Groups have nothing to do with SSAS. You’ll still need to figure out a different method of making cubes highly available.

  • Hi Brent,
    Do you know if (or possible in the future) the AlwaysOn Availability group can cascade updates to additional replicas? We have a 4 node windows cluster with alwayson and 2 nodes are remote across a high speed WAN, instead of having the primary ship updates to both remote nodes over the WAN could we just send updates to one of the remote nodes and then that node would update the other node (if you follow me)?
    Or maybe it doesn’t matter so much…

    Thanks

    • Brian – right now, all updates are served out of the primary. If the cascading feature is important to you, consider writing up your business case for it in http://Connect.Microsoft.com. I’m not saying Microsoft takes those suggestions seriously, but it’s a place to start. Hope that helps!

  • Brent, we’re considering use of AlwaysOn using a vendor-built schema. Our BI guy needs a subset of data with additional views and indexes. Is transactional replication also available (and advisable) from the primary or replica(s) that could populate a separate BI SQL Server instance with additional views and indexes and from which reporting could be done?

    Thanks

    • Glen – that’s totally doable. First, though, I’d try reporting directly off the transactional copy of the database and tuning it with traditional methods before I added more moving parts like replication. If I can get him to read directly off a replica, that will be easier to maintain, faster to update, etc.

  • Brent,

    AG gives an option to select Any Replica while configuring backups. In a scenario where there are multiple secondary replicas and the backup priority of each replica is the same, how would the DB engine decide on which replica the backup should happen? How would it maintain the LSN chain?

  • It’s the user who decides when to take a backup.. SQL won’t automatically take backups.. Right?

  • Why is it… in asynchronous commit mode, when there is a long unsent log queue, a log backup taken on the secondary replica for that database fails with “connection with primary replica is not active”? However, for other databases things are perfect!

  • Hi Brent, I got a question about AlwaysOn configuration.
    I got two servers which were supposed to be configured with AlwaysOn. They went live in my absence, the two servers are not configured with AlwaysOn, and the Windows cluster service is not installed on either server. Shit happens. I have cut the hand of some people!
    Anyway, is there any way I can configure AlwaysOn between the two identical instances and servers now that both servers are live with databases and applications installed?
    thanks

  • with mirroring you could configure bandwidth usage, and configure compression. I can find no way to do this with 2012 AGs. Replicating in async mode, it’s flatlining at 14Mbs across the WAN but 3rd party replication uses 30Mbs. Using AG, it’s unable to keep up with logfile growth.

    • Dudeguy – yeah, if you don’t have enough bandwidth to keep up with your rate of data change, you’ll want to use things like network compression appliances (or of course, a bigger pipe). Also it probably goes without saying, but check on what you’re replicating – make sure people aren’t putting scratch/ETL type tables in production databases, or doing unnecessary index rebuilds.

      • I think I didn’t explain my issue very well. I’ve got plenty of pipe. Using DoubleTake for replication, I get 30Mbs throughput. I switch to SQL 2012 AG, it flatlines at 14Mbs and can’t keep up with changes. With mirroring, bandwidth usage was configurable as was compression. Is there no way to configure this with AG?

        • Dudeguy – ahh, gotcha. If you’re doing sync replication, you’ll hit limits fairly quickly. If you’re doing async replication, the updates should be batched out and you can go much faster (I’ve seen way higher than 30Mbs, for example). Which are you doing?

          • I’m using async

          • Dudeguy – ouch! There’s definitely a problem then – I’d start looking at doing network tuning on the server itself (Windows). There’s definitely a lot of headroom.

          • gotchya, I appreciate the advice.

            So, is there a way to tune bandwidth and compression via SQL 2012 AGs?

            Also, what specific network tuning within windows do you suggest?

          • Dudeguy – no problem. There’s no way to tune bandwidth and compression inside SQL Server, but the numbers you’re seeing are so low that I’d suspect there’s a wider network issue at the Windows level. Unfortunately, TCP/IP and Ethernet tuning is beyond the scope of something I can bang out quickly in a blog comment. (sigh) So much work, so little time…

  • Brent,

    It’s been over 4 years now since you posted this article. Have your thoughts on AlwaysOn Availability Groups changed?

    Thanks!

  • Hi Brent,

    First of all, thanks for all the hard work putting together so much valuable information. Apologies if you have answered this already.

    How far apart are the LSNs allowed to be before one can successfully join the remote side? Sometimes I hit SQL error “The remote copy of the database % is not recovered far enough….. SQL Server, Error: 1408”. The error is descriptive, but I want to find out how far apart (in terms of HARDENED_LSN) the databases can be and still successfully join.

  • Wow looks like I am really late for this party but I would just shoot my question in really quick

    Brent,

    Supposing hypothetically we use our primary server for reporting and have a replica of the same reporting database, can we load balance both the primary and secondary servers and report from each respectively?

  • If we take a log backup on the secondary replica, what will happen? Will an LSN mismatch happen or not?
