Office Hours is Moving to Wednesdays

For the last few years, we’ve hosted a Tuesday webcast to talk SQL Server.

We’re giving it a vacation in August, and then starting September 2, Office Hours is coming back on Wednesdays.

This is easier for us because of the way our SQL Critical Care® service works. We’re typically working with clients interactively on Monday and Tuesday, and then Wednesday we take a break to write the findings recap. It’s easier for us to take a midday break on Wednesdays.

Here’s a sampling of some of the questions we’ve answered at Office Hours:

  • What are your recommendations for service accounts? Unique domain accounts for each instance/service, shared domain accounts, or something else?
  • Can you explain why one database can be added to AG with just a full backup and another would need a transaction log backup as well?
  • Working on my first AG setup in 2012. I don’t quite understand how the listener works with DNS. Does the listener name get registered with each AG IP?
  • We’re looking into Amazon Web Services cloud with SQL Server, any thoughts or tips or things to be aware of?
  • Ever ran into an instance where database gets detached automatically after server restart?
  • What would you recommend as MAXDOP for a server with 80 cores?

To pick our brains and learn from other questions/answers, register for Office Hours here.

I’m Presenting at kCura Relativity Fest 2015 in Chicago

This sounds really cheesy, but I’m honestly excited to be presenting again this year at kCura Relativity Fest 2015.

Here’s what I’ll be talking about:

How to Check Your SQL Server’s Health

The Abstract: You’re a system or database administrator responsible for the uptime and performance of Relativity’s SQL Servers, but you’ve never received professional training on SQL Server and you need to figure out if it’s safe. I’ve built up an array of free tools over the 15-plus years I’ve been working with SQL Server, and I’ll show you how to use them in this demo-packed session.

Why I’m Presenting It: For the last few years, kCura has sent me around to dozens of the biggest Relativity shops around to do a one-day SQL Critical Care®. I want to teach you that exact same process – and trust me, it’s way easier than you think. I’m not going to teach you how to put fires out – but I *am* going to teach you how to tell in just a few minutes if your server is on fire or not, and whether it’s a small trash can fire – or a big, blazing three-alarm monster.

Buying the Right Hardware for SQL Server

The Abstract: Ever wonder if there was a simple list of instructions for picking out the perfect infrastructure for Relativity at your organization? Wonder if you should use mirroring, replication, clustering, or AlwaysOn Availability Groups? Are you better off with solid state or shared storage? Is virtualization an option? Microsoft MVP Brent Ozar has worked with dozens of Relativity customers, and he’ll simplify your infrastructure options in this one-hour session.

Why I’m Presenting It: I see the same situation over and over again: your Relativity infrastructure started out small – just one SQL Server in the corner that you didn’t worry too much about – but man, has that thing grown. It’s become really important to the business, and people are stuffing data in there like it’s some kind of Chipotle data burrito. You need to buy exactly one replacement environment, and it’s gotta be right the first time. You don’t buy a lot of SQL Servers, so you want simple independent advice that you can take home to management. I’m going to hand you a few sketched-out options with budget ranges.

Want to Learn More About Relativity Fest?

Check out this video, and then head over to RelativityFest.com to register. It sells out, so move fast.

Out of Office: Time For the Brent Ozar Unlimited Retreat

Thank you for your web visit. We’re out of the office and will be back on Monday, August 3.

During this period we will have limited access to our email. We’ll still have full access to Twitter, Facebook, and Instagram. We apologize in advance for that, because we’re going to be posting a lot of photos of this year’s company retreat.

This year, we’re spending a week on the Oregon coast – specifically, Manzanita, an adorable little town in Tillamook County – population 598. We’ve rented a couple of houses for us, our spouses, and our dogs. (In the past, we’ve done our retreat on a cruise ship, and a couple of times in a beachfront house in Cabo.)

For immediate assistance, please contact Microsoft Support at 1-800-642-7676.

New Cardinality Estimator, New Missing Index Requests

During some testing with SQL Server 2014’s new cardinality estimator, I noticed something fun: the new CE can give you different index recommendations than the old one.

I’m using the public Stack Overflow database export, and I’m running this Jon Skeet comparison query from Data.StackExchange.com. (Note that it has something a little tricky at the top – it’s using a local variable for the @UserId, which itself makes for a different execution plan. When literals are used in the query, the behavior is different, but that’s another story for another blog post.)
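
If you want to flip between the two estimators yourself, here's a minimal sketch – the database name is the Stack Overflow import mentioned above, and on 2014, trace flag 9481 forces the legacy CE for a single query:

-- Old CE: run the database at SQL Server 2012 compatibility level
ALTER DATABASE StackOverflow SET COMPATIBILITY_LEVEL = 110;

-- New CE: run it at SQL Server 2014 compatibility level
ALTER DATABASE StackOverflow SET COMPATIBILITY_LEVEL = 120;

-- Or stay at compat 120 and force the old CE for one query:
-- SELECT ... OPTION (QUERYTRACEON 9481);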

First, here are the two different execution plans, both of which do about 1 million logical reads:

With the new cardinality estimator (compat 2014)

With the old CE (compat 2012)

It’s a really subtle difference in the plans – at first glance, it just looks like the 2014 CE removed a single operator – but the big difference is in the number of estimated rows returned:

  • Old CE estimated 82 rows returned
  • New CE estimated 352,216 rows returned

In actuality, 166 rows get returned with this particular input variable – the new CE is just flat out making bad guesses on this data.

Here are the different index recommendations:

CREATE NONCLUSTERED INDEX NewCE_OwnerUserId_Includes
ON [dbo].[Posts] ([OwnerUserId])
INCLUDE ([ParentId],[Score]);

CREATE NONCLUSTERED INDEX OldCE_OwnerUserId_PostTypeId_Includes
ON [dbo].[Posts] ([OwnerUserId],[PostTypeId])
INCLUDE ([ParentId],[Score]);

And when I run sp_BlitzIndex® after doing a little load testing, both missing index recommendations show up in the DMVs:

sp_BlitzIndex® output
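
For reference, the call behind that output is just a sketch like this, assuming sp_BlitzIndex® is installed on the server:

EXEC dbo.sp_BlitzIndex @DatabaseName = 'StackOverflow';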

But surely the new CE’s recommendation is better. We’ll create just the one it recommends, and the resulting execution plan does 57k logical reads. Both the new CE and the old CE produce an identical plan, albeit with wildly different row count estimates (old CE: 83, new CE: 37,423, actual: 166):

Execution plan with the new CE’s recommended index

HAHAHA, now the new CE agrees that it needs the index recommended by the old CE in the first place. So let’s remove the new CE’s recommendation, and only create the old CE’s recommended index. Both the old and new CE choose to use it:

With the old CE’s recommendation

And even better, the old CE’s recommendation results in only 175 logical reads.

So what’s the takeaway? If you’re relying on the execution plan’s missing index recommendations for fast performance tuning, you’re not going to get the best results – no matter which cardinality estimator you’re using. With 2014, the recommendations are different, not necessarily better.

The real keys are knowing how to do it yourself, and we teach those in the Advanced Querying and Indexing 5-day in-person class next month.

Upcoming Free SQL Server Training Webcasts

Holy cow, people, we have a lot of free stuff coming your way. Here are just some of the highlights:

  • Digging Into THREADPOOL Waits
  • How Do You Fix a Slow TempDB?
  • Advanced Shared Storage
  • How to Prove Hardware is the Problem
  • SQL Server High Availability Options Explained
  • Watch SQL Server Break and Explode
  • The Art of Stress-Free Database Administration
  • Prove It! Collecting The Right Performance Metrics
  • What’s New in SQL Server 2016

To sign up for ’em all, just put in your contact info here. You don’t even have to check any boxes. Learnin’ doesn’t get much easier than that. These webcasts won’t be recorded, so be there or be square.

13 Questions to Ask Before You Touch a Database Server

I’ve touched a lot of SQL Servers over the years. After my share of trips to HR for inappropriate touching, here are the questions I ask first these days:

  1. Is this in production now?
  2. If this goes down, what apps go down with it?
  3. When those apps go down, is there potential for loss of life or money?
  4. How sensitive are these apps to temporary slowdowns?
  5. When was the last successful backup? (See the sketch after this list.)
  6. When was the last successful restore test?
  7. Is everyone okay losing data back to the last successful backup?
  8. When was the last successful clean corruption test?
  9. Do we have a development or staging environment where I can test my changes first?
  10. Is there any documentation for why the server was configured this way?
  11. What changes am I not allowed to make?
  12. Who can test that my changes fixed the problem?
  13. Who can test that the apps still work as designed, and that my changes didn’t have unintended side effects?
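
For question 5, you don’t have to take anyone’s word for it – here’s a minimal sketch against msdb (which only knows about backups taken on that instance):

SELECT d.name AS database_name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
  ON b.database_name = d.name
 AND b.type = 'D'  -- D = full database backup
GROUP BY d.name
ORDER BY last_full_backup;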

Want us to run training classes at your office?

You’re working with SQL Server, and you really want to learn how to make it faster and more reliable, but your boss just won’t let you go off to our upcoming training classes.

Good news – we’ll bring the training to you. Just download our training catalog PDF and choose from modules on how to diagnose your SQL Server pains, treat them, manage your existing servers, and build new ones.

Then email us at Help@BrentOzar.com with your list of modules, the location where you’d like training, and the number of folks who will attend. We’ll get you pricing information and the latest scheduling from Brent, Jeremiah, and Kendra’s calendars.

Let’s get together and talk SQL Server!

An Update to SQL Server Management Studio is Available

That’s the message you get in your tray if you open SSMS 2016 today:

SSMS Update Available

Then click Tools, Check for Updates, and you get:

Check for Updates

What’s awesome is that I didn’t even install SQL Server Management Studio from the new standalone installer – this is my full CTP2.1 installation of SQL Server, and it’s still smart enough to know that SSMS can be updated separately.

Upon clicking the Update link, I’m taken to the Download SQL Server Management Studio page.

Some folks will probably say that it should update itself the way most apps do these days, by downloading and applying the update without launching a web browser. I think launching a browser is the right call, though – there are release notes to read, important gotchas to be aware of, and hey, who doesn’t like reading documentation? Okay, maybe that last part is just me.

Should You Put Things in the master Database?

When I was growing up, my elementary school teachers always warned me about the perils of putting stuff in master. But what’s the real deal? Why is this such a bad idea?

System databases are often located on space-limited drives. The default installation parameters for SQL Server throw master, model, and msdb all on the C drive. I’d rather not create tables that might grow when I’m not looking, and then possibly run my server’s limited boot drive out of space. Starting Windows with a full C drive is not a lot of fun.
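
Not sure where yours live? A quick check:

SELECT DB_NAME(database_id) AS database_name, physical_name
FROM sys.master_files
WHERE database_id <= 4;  -- master, tempdb, model, msdb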

In the event of a disaster, we don’t usually restore master. While you can indeed restore one server’s master database over to another, it’s just not a good idea. You’re failing over to another server because things are already going wrong – why make things worse by doing something you rarely practice? Plus, if you’re failing over due to corruption, you probably don’t want to bring over a possibly corrupt master database.

Even if you could restore it, you’ll lose data. You can’t do transaction log backups for the master database, so you’ll lose whatever changes were made since the last full backup.

Now, having said all this, I’m completely okay with putting utility stored procedures in the master database – things like sp_WhoIsActive, sp_Blitz®, and the like. After all, those objects take hardly any space, and we don’t need to restore them over to our DR environment.

Kendra says: You may have more in your master database than you think! It just takes one ‘oops’ on a deployment script, and there you go.
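
One minimal way to check – list everything in master that didn’t ship with SQL Server:

SELECT name, type_desc, create_date
FROM master.sys.objects
WHERE is_ms_shipped = 0
ORDER BY create_date DESC;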

Want to Learn More About High Availability and Disaster Recovery?

We just launched our new DBA’s Guide to SQL Server High Availability and Disaster Recovery – a 6-hour online video course that teaches you about clustering, AlwaysOn AGs, quorum, database mirroring, log shipping, and more.

How to Configure AlwaysOn AG Backups with Ola Hallengren’s Scripts

With SQL Server AlwaysOn Availability Groups, you can offload backups to a replica rather than running them on the primary. Here’s how to do it:

1. Install Ola Hallengren’s utility scripts on all of the replicas. During the install, it creates a laundry list of SQL Agent jobs, but doesn’t set up schedules for any of them. More on that in a couple of steps.

Backup preference settings

2. Set your backup preferences. In SSMS, right-click on your Availability Group, click Properties, and click the Backup Preferences pane.

The first option, “Prefer Secondary,” means that your backups will be taken on a secondary server unless all secondaries go offline, at which point they’ll be taken on the primary. There’s some risk here: if communication falls behind, your secondary may be running backups of old data, as Anthony Nocentino explains. In that case, you won’t get backup failure alerts, but you won’t be able to meet your RPO. Monitoring for that is an exercise left for the reader.
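
As a crude starting point for that monitoring, you could poll each replica’s msdb and compare the newest backup against your RPO – a sketch, assuming you aggregate the results across replicas yourself:

SELECT b.database_name,
       MAX(b.backup_finish_date) AS last_backup_finish,
       DATEDIFF(HOUR, MAX(b.backup_finish_date), GETDATE()) AS hours_ago
FROM msdb.dbo.backupset b
WHERE b.type = 'D'  -- D = full; use 'L' to watch log backups instead
GROUP BY b.database_name;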

In the “Replica backup priorities” window, rank your replicas to choose who will do the backups first.

Say I have three servers – two in my primary data center, and one in my disaster recovery site. I’d rather have my full backups running in my primary data center because if I need to do a restore, I want the backups nearby. (You can also run backups in both places – and I would – but more on that in a future post.)

To configure that, I’d set priorities as:

  • SQLPROD1 and SQLPROD2 (my primary data center replicas) – both 50 priority
  • SQLDR1 (my disaster recovery replica) – priority 40

3. Test your backup preferences. Run this query on each replica:

SELECT d.database_name,
  sys.fn_hadr_backup_is_preferred_replica(d.database_name) AS IsPreferredBackupReplicaNow
FROM sys.availability_databases_cluster d;

This returns a list of databases in an AG, and whether or not they’re the preferred backup replica right now. Check that on all of your replicas to make sure backups are going to run where you expect, and if not, revisit your backup preferences in the last step.

4. Configure Ola’s Agent full backup jobs. On any replica, in the Agent job list, right-click on the “DatabaseBackup – USER_DATABASES – FULL” job, click Properties, click Steps, and edit the first step. By default, it looks like this:

sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @Directory = N'C:\Backup', @BackupType = 'FULL', @Verify = 'Y', @CleanupTime = NULL, @CheckSum = 'Y', @LogToTable = 'Y'" -b

You need to change these parts:

  • @Directory – set this to your backup path. I like using a UNC path that all of the replicas can access.
  • @Verify – I’d recommend turning this off to make your backup jobs go faster. If you really want to verify your backups, restore them on another server.
  • If you want to run the backups on a secondary replica rather than the primary, add a parameter for @CopyOnly = 'Y'
  • If you only want to back up specific databases, modify the @Databases parameter. I don’t like doing that – I’d rather have one job for all of my backups. The way this is set now (USER_DATABASES), this one job will back up all of my databases that aren’t in an AG, plus it’ll back up the AG-contained databases where this replica is the preferred backup right now.

With AlwaysOn AGs, secondary replicas can only run copy-only full backups, and people often think that’s a problem. It’s only a problem if you want to use differential backups – otherwise, with AGs, it doesn’t affect transaction log handling at all. I don’t recommend using differentials with AlwaysOn AGs, but if you insist on doing it, you’ll be running your full backups on your primary replica.
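
Under the hood, what runs on a secondary boils down to a copy-only full – a sketch with a hypothetical database name and path:

BACKUP DATABASE [StackOverflow]
TO DISK = N'\\FILESERVER1\SQLBackups\StackOverflow_FULL.bak'
WITH COPY_ONLY, CHECKSUM, COMPRESSION;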

Other parameters you may want to set:

  • @CleanupTime – how many hours of backup files you want to keep around
  • @Compress – Ola inherits the default compression setting at the server level, but you can hard code this to Y if you want to make sure backups are always compressed

So here’s what my Agent job script ends up looking like, with my changes being @Directory, @CopyOnly, @CleanupTime, @Compress, and @Verify:

sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @Directory = N'\\FILESERVER1\SQLBackups', @CopyOnly = 'Y', @CleanupTime = 48, @Compress = 'Y', @BackupType = 'FULL', @Verify = 'N', @CheckSum = 'Y', @LogToTable = 'Y'" -b

5. Copy these changes to all replicas and test the full jobs. Make sure each replica that isn’t supposed to run the fulls, doesn’t, and the replica that IS supposed to run the fulls, DOES. In really mission-critical environments where we’re building the new AG servers from the ground up, we actually fail the AG around to different servers to test behavior when different servers are the primary – and when entire data centers go down.
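
A planned failover for that kind of testing is a one-liner – a sketch assuming an AG named MyAG, run on the synchronized secondary you want to promote:

ALTER AVAILABILITY GROUP [MyAG] FAILOVER;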

6. Repeat the setup with your log backup jobs. Right-click on the “DatabaseBackup – USER_DATABASES – LOG” job and click Properties, Steps, and edit the first step. Set the @Directory and @Verify parameters as we did earlier.

Here’s where things get a little tricky – you don’t have to add @CopyOnly = 'Y' for the log backup steps. There’s no such thing as a copy-only log backup on an Availability Group secondary, much to my dismay.

You might also consider setting the @ChangeBackupType parameter to 'Y'. By default, if Ola can’t do a transaction log backup (like if it’s a brand new database that has never had a full backup), the log backup is skipped. If you set @ChangeBackupType = 'Y', Ola does a full backup in that situation, then a log backup. On a large database, though, that full can take a while, and it ties up your log backup job while it runs – say the full takes 20 minutes, that might blow your RPO/RTO commitments.
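
Here’s a sketch of what my edited log backup step looks like, assuming the same UNC path as the full backup job:

sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE [dbo].[DatabaseBackup] @Databases = 'USER_DATABASES', @Directory = N'\\FILESERVER1\SQLBackups', @BackupType = 'LOG', @ChangeBackupType = 'Y', @CleanupTime = 48, @Verify = 'N', @CheckSum = 'Y', @LogToTable = 'Y'" -b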

7. Copy these changes to all replicas and test the jobs. Same thing here that we did with the full backups.

8. Configure your DBCC CHECKDB jobs. You need to check for corruption on any server where you’re running backups – here’s why.

9. Design your monitoring. Sadly, SQL Server doesn’t centralize backup history, so it’s up to you to poll all of your replicas to find out where backups are happening for any given Availability Group. In one case, I had a DBA change the backup preferences and Ola’s job settings incorrectly, and all of the backup jobs were succeeding – but none of them were backing up one of his Availability Groups.

10. Set yourself a weekly reminder to test restores. AG backups are notoriously complex, and if you cared enough to set up this whole expensive infrastructure, then you should care enough to test it. Make sure you have a good, restorable set of backups.
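
The restore test itself doesn’t have to be fancy – a minimal sketch with hypothetical logical file names and paths:

RESTORE DATABASE [StackOverflow_RestoreTest]
FROM DISK = N'\\FILESERVER1\SQLBackups\StackOverflow_FULL.bak'
WITH MOVE 'StackOverflow' TO N'D:\Data\StackOverflow_RestoreTest.mdf',
     MOVE 'StackOverflow_log' TO N'L:\Logs\StackOverflow_RestoreTest.ldf',
     STATS = 10;
-- Then make sure what you restored isn't corrupt:
DBCC CHECKDB ('StackOverflow_RestoreTest') WITH NO_INFOMSGS;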

Kendra says: In a complex environment, I’m a fan of some paid third party tools that help you control the compression on your backups and which can keep a central repository of backup history, to help you monitor. Good news: Ola’s solution integrates with them! You can have your backup cake and eat it, too.

Want to Learn More About AlwaysOn Availability Groups?

We just launched our new DBA’s Guide to SQL Server High Availability and Disaster Recovery – a 6-hour online video course that teaches you about clustering, AlwaysOn AGs, quorum, database mirroring, log shipping, and more.
