
This week, Angie, Erik, Tara, and Richie discuss Veeam, replication, setting up alerts, using multiple instances, and much more.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


Office Hours Webcast – 2016-05-25

Does turning on trace flag 2861 cancel out optimize for ad-hoc workloads?

Angie Rudduck: Let’s go with Gordon for the first question of the day.

Erik Darling: Oh no.

Angie Rudduck: He wants to know: if he switches on trace flag 2861, which he thinks caches zero-cost plans, will that cancel out enabling optimize for ad hoc workloads?

Erik Darling: No, it shouldn’t, because optimize for ad hoc workloads affects plans differently. With optimize for ad hoc workloads, the first time you run something you’ll get a stub; the next time, it will cache the actual plan. So caching zero-cost plans shouldn’t change that behavior. You can test it though if you don’t believe me.

Angie Rudduck: How would you test that?

Erik Darling: Run a zero-cost plan more than once.
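
For reference, here’s a minimal sketch of Erik’s test (the trivial query is hypothetical): run a zero-cost query twice, then check whether the cache holds a compiled plan stub or a full plan.

--Run a trivial, zero-cost query twice
SELECT 1 AS TestValue;
GO
SELECT 1 AS TestValue;
GO
--With optimize for ad hoc workloads on, the first run leaves a
--"Compiled Plan Stub"; the second run caches the full compiled plan
--(note: this search will also match itself)
SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.[text] LIKE '%TestValue%';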

Angie Rudduck: Hopefully that answered your question, Gordon.

 

Why am I getting update cascade errors?

Angie Rudduck: We’ll just go down the line. Wes. He is trying to use ON UPDATE CASCADE and he’s getting an error about it not being able to cascade directly. When he takes it away, it works. Is there a way to get around this error?

Erik Darling: Yes. Stop having foreign keys reference in a circular motion because it will just try to delete in a big, horrible ouroboros until all your data is gone.

Tara Kizer: So you won’t be able to do that specific foreign key. You may need to handle the delete or the update in a stored procedure instead, or maybe a trigger. We don’t recommend cascading foreign keys anyway.

Erik Darling: Why not?

Tara Kizer: I looked at your slide. The serializable isolation level, the most pessimistic isolation level it is.

[Laughter]

Erik Darling: That’s right. Wes, there’s another hidden gotcha to those foreign key cascading actions: whenever they run, they take out serializable locks on all the tables that have the cascading action happening on them. So if you’re doing a large delete or update with a cascading action, you could take out some pretty heinous locks on your objects that really block other processes out. So be very careful when using those.

Angie Rudduck: He’s so smart.

Erik Darling: Nah.

Richie Rump: From an architecture perspective, I’ve always wanted to be explicit with what we’re deleting and/or updating. So from an architect’s perspective, that’s why I just never used any of that stuff. At least not since Access 2.0.

Tara Kizer: Access 2.0.

Angie Rudduck: I didn’t know that was a thing. Moving along.

Richie Rump: You weren’t born yet, Angie.

Angie Rudduck: Speaking of born, tomorrow is my birthday, if you didn’t see it on Brent’s calendar.

Erik Darling: Happy tomorrow birthday because I’ll forget tomorrow.

Angie Rudduck: Don’t worry I will unless I look at my calendar.

Erik Darling: Brent didn’t send the card for me?

Angie Rudduck: Yeah, we’ll blame Brent. He’s not here.

 

How do I get around CRLF characters in an input file?

Angie Rudduck: J.H. has a situation. “A user entered a carriage return in a free-form text field. Now trying to bulk insert the relevant flat text file along with a format file.” How can he skip the unwanted carriage return and use only the last column’s carriage return as the row terminator for that particular row?

Tara Kizer: You just need to pick a different delimiter, right?

Erik Darling: Right but I think the problem is that the delimiter is a carriage return for all the other rows.

Tara Kizer: You’re right.

Angie Rudduck: Can you just edit the file and delete that carriage return?

Erik Darling: Yeah, what Angie is saying, I would probably do a process that pre-cleans files for things like that.

Richie Rump: Yeah, you would encode it in a different string type or whatever, like Base64 or something or other.

Tara Kizer: I wonder if he could get around it by quoting the column so that the carriage return is in between quotes so it won’t read it. I’m not sure though.

Angie Rudduck: You’d still have to edit the actual file before. He’s going to have to do some processing on the file before it goes anywhere it sounds like.

Tara Kizer: But I wonder if you don’t need to do it though. If you select the option to—or not select the option obviously since it’s… well, okay, yeah.

Richie Rump: By quoting it with a pipe or something like that?

Tara Kizer: Something so that it’s surrounded, but yeah, the file does need to be modified to include that. It might just be easier just to get rid of the carriage returns and not allow those carriage returns from the application side.

Richie Rump: Yeah, unless you need them.

Tara Kizer: Yeah.

Angie Rudduck: Carriage returns.

Erik Darling: Yeah, as someone who’s done a lot of horrible file loading in his lifetime, I always try to make my delimiters as unlikely to show up in real-life data as possible, like double pipes or something you’re just not going to see pop up too often.
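
As a hedged sketch of that approach (the file path and table are hypothetical), a BULK INSERT with a double-pipe delimiter looks like this:

BULK INSERT dbo.ImportStaging
FROM 'C:\loads\input.txt'
WITH (
    FIELDTERMINATOR = '||', --double pipes rarely show up in real data
    ROWTERMINATOR = '\n',   --row delimiter
    FIRSTROW = 2            --skip the header row, if there is one
);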

Angie Rudduck: That’s a smart idea.

 

Could Veeam replication cause 8TB databases to go into suspect mode?

Angie Rudduck: All right, we’ll move on. Eugene wants to know has anyone seen issues running Veeam replication interfering with large SQL Server databases where the databases go into suspect mode? They have an 8 terabyte database and it seems like “each time our server team has enabled Veeam replication it caused issues where the 8 terabyte database goes offline.”

Erik Darling: I think what’s happening is it’s freezing I/O for a long time. Frozen I/O makes SQL think that the drive the database is on is unavailable. So I would check your SQL Server error log, search for frozen I/O, and see how long the I/O is being frozen for. Because I think if it happens for over a certain amount of time, SQL just starts assuming that the disk the file is on isn’t available and it says, “Nope. Not here.” A similar thing happens, I’m sure, Tara, you’ve seen it, when a log file goes missing on a drive like that.

Tara Kizer: Mm-hmm.

Erik Darling: It will go right into suspect mode.

Tara Kizer: I wonder if it’s due to the database size because it’s having to freeze it for so long to take a snapshot of an 8 terabyte database. Just might be something that you just can’t do due to the size of the database. I would contact the vendor and see what they say as far as database size and how long that snapshot is going to take.

Angie Rudduck: I learned a trick when you’re searching your SQL Server error log. We always look for “frozen,” but then you don’t see how long it was before I/O resumed, because you don’t see the resumed entries. If you search for “resumed,” you see both the frozen and the resumed messages. That gives you a little more insight into how long your database files were really getting held up.

Tara Kizer: Yeah and it should be less than a second that IO is frozen. I mean we’re talking milliseconds. I don’t know about an 8 terabyte database though.
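
For reference, a quick sketch of Angie’s trick: search the current error log for the “resumed” messages, which report when I/O picked back up.

--Log 0 = current log, type 1 = SQL Server error log
EXEC sys.xp_readerrorlog 0, 1, N'resumed', N'I/O';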

 

Can I do both Veeam full backups and native SQL Server full backups?

Angie Rudduck: Yeah, well we’re just going to keep going with Veeam because Sean has a question as well. They have a maintenance plan full backup running every day, as well as Veeam doing backups every day. “How do these two tasks affect logs? Don’t both tasks truncate logs? Should one be scheduled before the other? Or should one not be done if the other one is being done?”

Tara Kizer: No, they don’t impact the logs, because the logs care about LSNs. You can use either of the full backups to do your restore; you just have to make sure you get the right log sequence. It does impact differentials, though: if the Veeam backups aren’t using the copy-only option, the differential base could be either a Veeam backup or the regular native backup. If you’re not doing differentials, then it doesn’t matter.

Erik Darling: Yeah. Only transaction log backups will truncate the log file, nothing else will. So backups don’t do that.

Tara Kizer: Right.
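
To see which camp your full backups fall into, here’s a hedged sketch against msdb’s backup history:

--Full backups (type 'D'); Veeam should be using COPY_ONLY so it
--doesn't become the differential base
SELECT bs.database_name, bs.backup_start_date, bs.is_copy_only, bs.is_snapshot
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'
ORDER BY bs.backup_start_date DESC;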

 

Should I run the publisher and distributor together, or the distributor and subscriber?

Angie Rudduck: All right. Paul says, “Currently at my job they are running the distribution service off the same database server that the database is on. I’m going to propose an additional replication to a reporting server; the original replication is merge. What considerations are there when you do this? Would it be better to run a separate distribution server?” Tara.

Tara Kizer: Yes, definitely.

Angie Rudduck: We all know that answer is yes.

Tara Kizer: The best practice is that the publisher gets its own server, the distributor gets its own server, and the subscriber gets its own server. The distributor does a lot of work, and it should not be on the same box as the publisher especially. When I’ve had to share servers, I’ve put it on the subscriber, but it really depends upon the load of those boxes. You are going to need licenses for the distributor to be on another server, so that’s something to take into consideration, but the best practice is that they’re all on their own boxes.

Erik Darling: Heck yeah.

Angie Rudduck: What Tara said. Justin, I will not be turning 21, thank you. I am a tiny bit older than that.

Richie Rump: 19.

Angie Rudduck: I will be celebrating my one and only 29th birthday. Thanks very much.

Tara Kizer: We can do the other replication question from David.

Angie Rudduck: Yeah? All right. I have to scroll way up for that one now.

Tara Kizer: Okay.

 

How do I know when replication’s initial load is done?

Angie Rudduck: David said they’re setting up replication. How does he confirm that replication’s initial data load is complete so he can start the job that loads the app DBs?

Tara Kizer: So what you can do is, in Management Studio’s Object Explorer, expand Replication, then Local Publications, and you’ll see your publication. Right-click on it and you can do View Snapshot Agent Status; that’s the snapshot agent that is snapshotting the publisher. Also right-click on the publication for View Log Reader Agent Status. Then right-click on Replication and launch Replication Monitor. I’m pulling mine up right now. Mine is in failure; I created one as a test. After you pull up Replication Monitor, you navigate to the publication and click on the All Subscriptions tab in the right pane, then double-click on the row that shows up there. There you can look at the publisher-to-distributor history and the distributor-to-subscriber history, and you’ll be able to see where it is in the process. You can also insert a tracer token at the publisher. It’s not a row in the table, but it’s like a test row, basically, and once it makes it to the subscriber you know the whole process is complete.
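
For reference, a minimal sketch of the tracer token approach Tara mentions (the publication name is hypothetical); run it at the publisher, in the published database:

DECLARE @token_id INT;
EXEC sys.sp_posttracertoken
    @publication = N'MyPublication',
    @tracer_token_id = @token_id OUTPUT;
--Once the token arrives at the subscriber, check latency at each hop
EXEC sys.sp_helptracertokenhistory
    @publication = N'MyPublication',
    @tracer_id = @token_id;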

Angie Rudduck: You’re so smart about replication.

Tara Kizer: I don’t know about that.

Angie Rudduck: I’m so glad we have you around.

Tara Kizer: There are definitely smarter people.

Angie Rudduck: To answer all these questions for these poor souls who still have to use it.

Erik Darling: It’s nice because I get to zone out while you…

Angie Rudduck: I know, I’m like, I’ll read the questions to see who should answer it next.

 

How do I find out all SSIS packages that ran under a specific login?

Angie Rudduck: Nate wants to know if there’s a way to find all SSIS packages that have been executed under a specific server login. He’s trying to remove this login because the employee has left the company. Anybody got any suggestions?

Tara Kizer: If you’re using the SSISDB catalog, you can probably go through its tables, which have that information. If you’re not using that, there is a logging table in msdb somewhere; I don’t remember what it’s called, but there is a table there. So you’re going to have to do some sleuthing to figure out which tables they are, but these are things people have blogged about, so you should be able to find them.

Erik Darling: Yeah, we don’t do a ton of work with SSIS in these parts. So if you really want a better answer for that, you might want to go to dba.stackexchange.com and ask around there if nothing in the great blog world turns up for you.
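
If the SSISDB catalog is in use, here’s a hedged starting point (the login name is hypothetical):

--Executions started by a specific login
SELECT e.execution_id, e.folder_name, e.project_name, e.package_name, e.start_time
FROM SSISDB.catalog.executions AS e
WHERE e.executed_as_name = N'DOMAIN\DepartedEmployee'
ORDER BY e.start_time DESC;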

 

When did my AG last fail over?

Angie Rudduck: Tammy wants to know if there is a way to use T-SQL to determine when databases and availability groups failover last. She’s been looking through DMVs but hasn’t found anything.

Tara Kizer: Yes. I don’t have the DMVs memorized but, yeah, it’s in there.

Angie Rudduck: It’s dmv_ag_failover. Just kidding. Don’t really go to that.

Tara Kizer: I’m pretty sure, at least, that I’m thinking of the one showing when it last failed over. If not, search through the error log because it’s definitely in there.
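
As a quick sketch of the error log route: availability group role changes are logged, so searching for the state-change message turns up the last failover.

--Search the current error log for AG role changes
EXEC sys.xp_readerrorlog 0, 1, N'changing roles';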

 

Are there any drawbacks of setting up SQL Server alerts?

Angie Rudduck: A. Deals wants to know if there are any impacts of setting up alerts for severities 17 through 25 and errors 823, 824, 825, and 832. Did you get that off of our website?

Erik Darling: I think the bigger impact would be if you don’t set them up.

Angie Rudduck: Yeah.

Erik Darling: And then one day your database is corrupt and you don’t know how long or what happened to it or any of those things.

Tara Kizer: I think that our link says 19 through 25. I’m not too sure what you’re going to get for 17 and 18, so you might get too much to work through. I’m not sure about 832 either. I know 823 to 825 are in there, but I’m not positive about 832.

Erik Darling: Yeah, I think if you turn on alerts for 17 and 18 you’re going to get a lot of stuff for failed logins and some of the lower-level old SQL Server errors. So we try to stick to 19 through 25, which are the severe ones, and then 823, 824, and 825, which cover the hard and soft I/O errors that come along with corruption. What about that one from Dennis?
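
For reference, a minimal sketch of one of those alerts (the operator name is hypothetical); repeat for severities 20 through 25 and errors 823, 824, and 825:

EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 19 Error',
    @severity = 19,
    @delay_between_responses = 60,
    @include_event_description_in = 1;
EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Severity 19 Error',
    @operator_name = N'DBA Team', --hypothetical operator
    @notification_method = 1;     --1 = email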

 

Should I stack multiple instances on the same physical box?

Angie Rudduck: Dennis, all right. Dennis says he’s got an opinion question. Using SQL 2008 R2 Enterprise, what’s better: five servers with 8 to 12 cores and 128 gigabytes of RAM each, or one 60-core server with 1 terabyte of RAM? Oh, I see.

Erik Darling: So I would only assume that you’re talking about stacking a bunch of instances on the server which we are all vehemently opposed to. Stacked instances are horrible ideas.

Angie Rudduck: Now if you have a bunch of Lego servers and you can break them apart and have five different servers or one mega Transformer server and you’re looking at spec’ing this out before purchasing, maybe there’s some more questions. What do you guys think if he’s not instance stacking? If he’s legitimately just spec’ing out, should he buy one server or five servers? You should buy five servers if you need five instances.

Erik Darling: Right. So it sounds like you’re either going to stack instances or you’re going to stack applications together on a single instance. Either way, I’m very much opposed. If you have no choice, then you have no choice. Or if you have, like, your crappy app server that you stick a whole bunch of internal apps on, that’s one thing. At the size you’re talking about, though, I’m looking at very special, specific hardware for each one. So if there’s a performance problem with one database or one application, I’m troubleshooting a performance problem with that one database and that one application, not that one database or application plus the 28 you have surrounding it. Make life easier on yourself. Separate things as much as you can.

Angie Rudduck: Dennis follows up saying that he was gifted two of the big servers and now he has to find a use for them. If you have spare hardware sitting around…

Erik Darling: Dev servers.

Angie Rudduck: Yeah, staging. Dev test environments.

Erik Darling: Offload your DBCC CHECKDB runs that way.

Angie Rudduck: Because you’re doing that, right? Nightly? Full checks?

 

What’s a good way of encrypting data in a database?

Angie Rudduck: What’s a good way of encrypting data in a database? Cameron wants to know.

Erik Darling: Transparent data encryption if you’re on Enterprise edition. If not, there are the various third-party tools out there. I believe one from a company called—well it used to be called GreenSQL now it’s called something I can’t remember the name of, Hexa… Dexa… Donkey something. They changed their name to something.

Angie Rudduck: So look up GreenSQL and see what really pops up.

Erik Darling: Yeah, see what their new name is, and if you can remember it and spell it, shoot them an email about how to encrypt your database. If you’re on Enterprise, use TDE, then you can come back next week and ask us questions about TDE.

Richie Rump: Yeah, I’ve used TDE as well as you know regular third-party or just middleware-type components where we would just encrypt certain parts of data and then throw the encryption stuff into the database.

Erik Darling: Yep.
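
For reference, a minimal TDE sketch (the database name, certificate name, and password are hypothetical); TDE requires Enterprise Edition on the versions discussed here:

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongP@ssword123';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE UserDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
ALTER DATABASE UserDatabase SET ENCRYPTION ON;
--Back up the certificate and its private key immediately; without them,
--you can't restore this database anywhere else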

 

If I’m having CXPACKET issues, what should I do with MAXDOP?

Angie Rudduck: J.H. wants to know: he has a particular database sometimes experiencing frequent CXPACKET waits. He wants to know if changing MAXDOP is recommended, and if yes, could it hurt performance for other databases running at the same time?

Erik Darling: You betcha. MAXDOP, unless you’re on SQL Server 2016, is a server-level setting. So if you change MAXDOP to 1, you will affect all the other databases. Your best bet is to actually do some work tuning queries and indexes on that database, to make sure the cost of the queries stays low enough that you don’t have frequent CXPACKET problems.

Angie Rudduck: What if he doesn’t have MAXDOP configured at all? Could that possibly be his problem?

Erik Darling: Sure could.

Angie Rudduck: What would you recommend starting out with MAXDOP?

Erik Darling: I don’t know, it depends on how many cores you have, right?

Angie Rudduck: It’s a multi-layered questionnaire.

Richie Rump: Would it be better to just set MAXDOP in a particular query?

Tara Kizer: You could.

Erik Darling: If you change that, sure. If you can change the code, sure.
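
As a hedged sketch of Richie’s idea (the table and columns are hypothetical), a query hint caps parallelism for just the one query instead of changing the server-wide setting:

SELECT OrderId, SUM(Amount) AS TotalAmount
FROM dbo.Orders
GROUP BY OrderId
OPTION (MAXDOP 4); --limits this query to four cores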

Angie Rudduck: I like asking questions on questions.

 

What’s the best isolation level to use with Entity Framework?

Angie Rudduck: Let’s go with the developer question from Paul. The developers are using Entity Framework to create their application; what’s the best isolation level to use?

Richie Rump: Well, what would be the best isolation level without any framework? Same thing?

Tara Kizer: Who is the question by?

Angie Rudduck: Paul. He’s kind of in the middle. Right below all the happy birthdays.

Tara Kizer: I sorted by asker so maybe it will find it.

Angie Rudduck: Oh. So smart.

Erik Darling: There’s really no single best isolation level. It kind of depends on what you end up running into as your application grows and matures. So if you run into a lot of weird blocking and deadlocking and you’ve tried your best to sort it out then read committed snapshot isolation could really help you. Otherwise just leave SQL as is.
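
For reference, a minimal sketch of turning on read committed snapshot isolation (the database name is hypothetical); it needs a moment of exclusive access, so schedule it carefully:

ALTER DATABASE UserDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE; --rolls back open transactions so the change can apply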

Tara Kizer: I don’t think that Entity Framework being used is even related to why you would set the isolation level to be something different than the default.

Richie Rump: Right, exactly. If you are having problems with locking and stuff like that, then you need to fix the queries, the Entity Framework queries. Either you need to change the LINQ statement, or you trash the LINQ statement if it’s too complex and write your own SQL statement to be embedded either in the code or in a stored procedure.

Erik Darling: My preference is to have everything in a stored procedure and then just have Entity Framework call the stored procedure.

Richie Rump: Yeah, it’s up to the team itself. I’ve been on some teams where they didn’t want any stored procedures because they wanted the app to update everything when they do deployments because they’re doing deployments once a day and it’s like, “Oh, okay, we’ll do it that way.” And I’ve been on some other teams where they’re all stored procedures. Where we have thousands of stored procedures sitting out there and everyone is rewriting the same stored procedure over again because they can’t find the old one. So you know, it’s up to the team on how they want to do it.

Angie Rudduck: More horror stories there, Richie?

Richie Rump: Oh, I’ve got many. So many.

 

I’m getting backup IO errors and CHECKDB errors on my backups.

Angie Rudduck: Let’s move on to Gordon. He says the SQL Server error log is reporting VSS backup I/O errors and suggesting he run DBCC CHECKDB on the VSS backup. Is that relevant, and if so, how does he do that?

Tara Kizer: I don’t know how you would do that.

Erik Darling: There’s currently no way to run DBCC CHECKDB on a backup. You would have to just run it on the database itself.

Tara Kizer: You have to restore it.

Erik Darling: Yeah.

Angie Rudduck: Yeah, that seems kind of strange. Maybe it’s referring to what we talked about earlier: taking your backup, restoring it elsewhere, and then running CHECKDB on it.

Tara Kizer: I’d be interested in the error that is actually suggesting running DBCC CHECKDB on the VSS backups. I wonder if you’re misreading the error. If you can pop the error in the message box, we could take a look at it. We might not be able to answer it though.

Angie Rudduck: Yeah. We’ll see if he gets that error in there.

 

How can I get a readable replica database without replication?

Angie Rudduck: In the meantime, let’s see what Nate has to say. “Given the complexities and general distaste for replication, what’s the best alternative to maintain a quote ‘read replica type of database’ in Standard Edition?”

Tara Kizer: Wait, I know this answer. Availability groups. Even with Standard Edition you can use synchronous replicas. Enterprise Edition gives you asynchronous replicas, but am I right that Standard Edition offers synchronous replicas? Or does it not offer synchronous at all?

Erik Darling: No. SQL Server 2016.

Tara Kizer: 16?

Erik Darling: Right. That’s when availability groups came to Standard Edition. They are asynchronous only.

Tara Kizer: That’s what I was trying to remember.

Erik Darling: And you don’t get a readable replica with it, so it really does put the “basic” in basic availability group. What I would do, depending on how fresh the data needs to be, is probably try log shipping, with the caveat that every X hours people are going to get kicked out so the data can be updated by restoring logs. Or, if I’m using mirroring, I’d take a database snapshot and go through all the pain of programmatically funneling people off to a different database snapshot.

Tara Kizer: And you have to make sure that you refresh that snapshot on a schedule. Otherwise it’s going to just keep growing and growing. I would actually use replication in this scenario though. Yeah, there are some issues and it can be complex and it takes a lot of time to set up on a large database but it works. If you really don’t like it, move to Enterprise Edition or upgrade to 2016 after it’s released so Standard Edition has the readable secondaries.
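
For reference, a hedged sketch of the log shipping approach Erik describes (the file paths are hypothetical): restoring WITH STANDBY keeps the secondary readable between restores, but users must be disconnected for each restore.

RESTORE LOG ReportDB
FROM DISK = N'\\fileshare\logs\ReportDB_20160525.trn'
WITH STANDBY = N'D:\standby\ReportDB_undo.dat';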

Angie Rudduck: Yeah, just because we don’t like replication all the time, it doesn’t mean that it’s not valuable. It just means that it’s not used correctly a lot of the times.

Erik Darling: Yeah, unfortunately a lot of people…

Tara Kizer: And it’s hard to troubleshoot.

Erik Darling: Yeah, there’s that. But a lot of people use it for HADR which it really isn’t a good solution for.

Tara Kizer: I like it for reporting if you can’t use availability groups to do your reporting. You just have to know that you might be spending a lot of time working on replication if that’s the technology you use. It took a lot of my time.

Erik Darling: Yeah, make sure that you are very precise in what you’re replicating over.

Tara Kizer: Yeah.

Erik Darling: If you can put filters on it, put filters on it. If you can get away with only moving certain tables that are necessary for whatever reporting over, only move those tables over. Move as little as possible. Don’t just push everything.

Angie Rudduck: Yeah, that’s good.

 

Should I disable my SA account, rename it, or both?

Angie Rudduck: Okay, Konstantinos wants to know: auditors are requesting that they disable their SA account or rename it. Is it safe to do either of these?

Erik Darling: Yes.

Angie Rudduck: And when is it the best time to do it? Do you do it in a production environment that is two years old for example?

Tara Kizer: I love that. I mean, you’re not supposed to be using the SA account. Disable it. Create a new account and grant it sysadmin, or just rename the SA account to something else. Don’t ever use SA. And don’t ever even use this other account: give it a really complex password, put it in a lockbox, never use it. You should be connecting with your own Windows account, or, if you’re having Windows authentication issues, some other SQL account, but really, with Windows authentication you should almost never need the SA account. You don’t really need to do this in a maintenance window unless other things are using SA, which they shouldn’t be.

Erik Darling: Yeah, just a quick word of warning on renaming SA: in the past, some SQL updates have broken when SA was renamed, because even if the account is disabled, the update was attempting to do something with the account and got screwed up. So you don’t have to rename it; disabling it is fine.

Tara Kizer: Yeah.
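
For reference, both changes are one-liners (the replacement name is hypothetical):

ALTER LOGIN sa DISABLE;
--Renaming also satisfies auditors, but note Erik's warning above about updates
ALTER LOGIN sa WITH NAME = [not_sa];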

 

How do I convince my server admin that multiple instances are a bad idea?

Angie Rudduck: All right, I have a good one from Karen. How can she convince the server admin that multiple instances are a bad idea? I’m guessing she means stacked instances.

Tara Kizer: Why do they think it’s a good idea? I mean, why do we have to install more? It’s hard to maintain. How are you going to split the resources? You know, memory, CPU, everything is shared. Yeah, you can use Resource Governor and per-instance server configurations, but I don’t know.

Angie Rudduck: The worse performance outweighs the cost savings, Karen. It will cost them more in performance headaches, with your clients complaining, than if, for instance, you had a couple of VMs and were really diligent about not overcommitting your host’s resources across all of your VMs.

 

What’s the right threshold to use for index maintenance jobs?

Angie Rudduck: Let’s see, we’ve got a few more minutes. Did anybody see any good ones? I thought I saw one I liked but I’m trying to find it.

Erik Darling: Yeah, there was a Mandy Birch one.

Angie Rudduck: Oh, yes, okay. I didn’t see hers. Mandy wants to know how to determine the optimal index fragmentation thresholds to use when deciding between reorgs and rebuilds. They’ve been using the defaults of 5 percent for reorgs and 30 percent for rebuilds, but they’ve been reading that those are probably unnecessarily low. What do you guys recommend? What do we recommend?

Tara Kizer: What are you trying to solve? Is fragmentation really a problem?

Angie Rudduck: Are you on a Standard Edition?

Tara Kizer: Yeah, if you’re on Standard Edition, you’re not going to get online rebuilds, so those indexes are coming offline while you’re rebuilding them. I’m not sure why the SQL industry just wants to rebuild indexes all the time to get rid of fragmentation. It can help with storage, because the more fragmentation you have the more storage you’re using, but it’s not helping you with performance. You are getting updated statistics out of it, but I would rebuild indexes less frequently than you probably are now, and I would set much higher thresholds.

Angie Rudduck: Yeah, I think commonly we say anywhere from 30 to 50 for reorganize and even 60 to 80 percent for rebuilds. Something else I would say is check out Erik’s blog about just updating stats.

Tara Kizer: Update statistics daily or more often and then rebuild or reorg indexes less frequently. I mean I don’t think you even need to do it weekly.
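
As a hedged sketch of that schedule, assuming Ola Hallengren’s IndexOptimize is installed (tune the thresholds to taste):

EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL, --do nothing at low fragmentation
    @FragmentationMedium = 'INDEX_REORGANIZE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 50, --reorganize at 50 percent
    @FragmentationLevel2 = 80, --rebuild at 80 percent
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y';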

Angie Rudduck: Is there any performance impact to updating statistics?

Tara Kizer: It does add a load, yeah, especially if you’re doing a full scan on a large table but you can have custom scripts to decide on what the sampling is going to be. The bigger the table, the less sampling you may need.

Erik Darling: The other thing that can happen is that if you update a lot of statistics, a lot of queries may decide to recompile at once. Also, updating the statistics themselves can have an I/O impact on your server: reading big tables to update a lot of statistics can certainly make some stuff happen. There’s no such thing as free maintenance, but updating statistics is a much more lightweight operation than index reorgs or rebuilds.

Angie Rudduck: Yeah. All right, well, it’s 45 after the hour now in everybody’s time zone. Should we call it a day?

Erik Darling: Yeah, might as well.

Angie Rudduck: Let’s all go to the pool, drink, or something, you know.

Erik Darling: Boat drinks.

Angie Rudduck: Bye guys. See you next week.

All: Bye.

Brent Ozar Unlimited Team

Just announced on the Microsoft Release Services blog: if you run a SELECT query with the NOLOCK hint and your query goes parallel, it can block other queries.

This is a bug, and it will be fixed soon, but it is a very big deal for people who think NOLOCK means, uh, NOLOCK.

More technical details:

  • While one transaction is holding an exclusive lock on an object (Ex. ongoing table update), another transaction is executing parallelized SELECT (…) FROM SourceTable, using the NOLOCK hint, under the default SQL Server lock-based isolation level or higher. In this scenario, the SELECT query trying to access SourceTable will be blocked.
  • Executing a parallelized SELECT (…) INTO Table FROM SourceTable statement, specifically using the NOLOCK hint, under the default SQL Server lock-based isolation level or higher. In this scenario, other queries trying to access SourceTable will be blocked.
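
For illustration, here’s a hedged sketch of the second scenario (the table names are hypothetical):

--A parallelized SELECT ... INTO with NOLOCK; under this bug, other
--queries against SourceTable can be blocked while it runs
SELECT s.*
INTO dbo.SourceTable_Copy
FROM dbo.SourceTable AS s WITH (NOLOCK);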

If you haven’t already installed CU6, don’t.

If you have installed it, Microsoft recommends that you leave it in place unless you experience this exact issue, at which point you’d need to uninstall CU6.

To know when a fix comes out, watch the CU6 download page, or subscribe to SQLServerUpdates.com and we’ll email you.

Brent Ozar

You have a monitoring tool, but you’ve set up an email rule to dump all the alerts into a folder.

You’re not particularly proud of that, but it is what it is. You’re just tired of the spam.

Group query in the registered servers list

But when you get in in the morning, you want a simple screen that shows you if anything is really and truly broken in your environment.

Step 1: set up a list of registered servers or a Central Management Server. This lets you execute a single query across multiple servers.

Step 2: start a group query. Right-click on the group of servers, and click New Query.

Step 3: run sp_Blitz @IgnorePrioritiesAbove = 50, @CheckUserDatabaseObjects = 0. This gets you the fast headline news, especially when used with the improved priorities in the latest version in our SQL Server download kit:
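
Run in the group query window, that call looks like this:

EXEC sp_Blitz
    @IgnorePrioritiesAbove = 50,   --headline news only
    @CheckUserDatabaseObjects = 0; --skip the slower per-database checks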

sp_Blitz across multiple servers

You’ll discover when:

  • Corruption has been detected
  • Databases in full recovery mode aren’t getting log backups
  • You’re running a known dangerous build of SQL Server
  • Poison waits have struck
  • And much more

On Monday mornings, start here. I know, you’re probably not going to find anything, because your servers are in flawless shape and nothing ever goes wrong.

But just in case….

Brent Ozar

Even with lots of monitoring in place, we should perform periodic checks of our SQL Servers.

Think of this like “Spring Cleaning”, except I would recommend doing it more frequently than just once a year. Doing it monthly might be a bit ambitious due to our busy schedules, but quarterly could be achievable.

Below are the areas I recommend for Spring Cleaning your databases.

SP_BLITZ

There is so much good stuff reported by sp_Blitz. You’ll find common health, security, and performance issues in there. Once you’ve fixed the issues of concern, you should periodically check whether any new issues are being reported. Did someone enable xp_cmdshell? Is it reporting any poison waits? Is there a new sysadmin that you weren’t aware of? That last one happened at my previous job: the Desktop Support team had added a user to our DBA group because it resolved an issue. This user was not even in IT. Imagine the damage that could have been done, since it was common for most people to have Management Studio installed.

For more information about sp_Blitz, go here and here.

SP_BLITZINDEX

sp_BlitzIndex does a sanity check of the indexes in your database. You can increase your query performance by having the right indexes on your tables, and sp_BlitzIndex helps with that. Don’t just look at it once; review the output on a regular basis, especially the High Value Missing Index, Duplicate Keys, Borderline Duplicate Keys, Unused NC Index, and Active Heap sections.

For more information about sp_BlitzIndex, go here.

HIGH-VALUE MISSING INDEXES

When adding high-value missing indexes, be sure you aren’t creating a duplicate or borderline duplicate key index. Review your current indexes to determine if the high-value missing index can be combined with an existing one. Maybe an existing index just needs some INCLUDEs.

DUPLICATE AND BORDERLINE DUPLICATE INDEXES

Duplicate key indexes mean that you have two or more indexes that have the same exact key. Borderline duplicate key indexes mean two or more indexes start with the same key column but do not have completely identical keys. You may be able to combine these into one index, but analysis is needed as there are some that shouldn’t be touched.

Check this out to get more details about duplicate and borderline duplicate keys.

UNUSED INDEXES

Unused indexes are tricky. When you are analyzing this data, you have to keep in mind that this data is only available since the last restart. If you rebooted the server yesterday and are viewing the data today, you might see a lot of unused indexes in the list. But are they really unused? Or have the indexes not just been used YET? This is a very important point when deciding to disable or drop an index based on this list. If you reboot your servers monthly due to Microsoft security patches, consider reviewing the list the day prior to the reboot. I once dropped an index 3 weeks after the server was rebooted, thinking that the entire application workload must have been run by now. A few days later, I got a call on the weekend that the database server was pegged at 100% CPU utilization. I reviewed which queries were using the most CPU and found that the top query’s WHERE clause matched the index I had dropped. That query only ran once a month, which is why it hadn’t recorded any reads yet. We later moved that monthly process to another server that was refreshed regularly with production data.
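
A hedged sketch of that sanity check: confirm how long the server has been up before trusting “unused.”

--Usage stats reset at every restart, so check uptime first
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;
--Then review reads vs. writes per index in the current database
SELECT OBJECT_NAME(s.[object_id]) AS table_name, i.[name] AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
    ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();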

HEAPS

Generally speaking, a table should have a clustered index. A good exception is staging tables, such as those needed for ETL processing. When a table doesn’t have a clustered index, it’s called a heap.

Heaps are great for INSERTs but not for SELECTs. DELETEs leave the space behind unless a table lock is used during the delete, either via a table hint or by lock escalation. Empty space takes up space in backups, restores and memory. If you scan the heap, you must also scan the empty space even if there are no rows. And then there’s UPDATEs. UPDATEs can cause forwarded records if the updated data does not fit on the page. Forwarding pointers are used to keep track of where the data is (a different page). This means extra and random IO.

Heaps cause fragmentation, extra reads and sometimes a huge waste of space.

Just say NO to heaps unless it’s a staging table or some other really, really good reason.

DATABASE SIZE AND GROWTH

Is your database growing faster than you expect? Knowing how big your database is and how fast it is growing can help you plan for future hardware upgrades including memory and disk space.

DISK SPACE

Most of us have monitoring in place to alert when a drive is running out of free space. Wouldn’t it be nice to proactively add storage before you receive an alert? Keep track of total and free disk space over time to help you determine when to add more space or even when to order more storage.

VIRTUAL LOG FILES

If a database has a high number of Virtual Log Files (VLFs), it can impact the speed of transaction log backups and database recovery. I once had a database with 75,000 VLFs. I didn’t even know what a VLF was at the time (hence having so many). After rebooting the server, a mission-critical database with extremely high SLAs took 45 minutes to complete recovery. We were in the process of opening a support case with Microsoft when the database finally came online. The next day, I contacted a Microsoft engineer and learned about VLFs.

For more information about VLFs and how to fix them, go here.
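
For a quick count on the versions discussed here:

--One row per VLF; run it in the database you're checking
DBCC LOGINFO;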

BACKUP TABLES

During a production problem, you might be saving data to a backup table to later review or possibly restore from. But are you remembering to drop these objects at a later time? Search for key words such as “backup”, “bkp”, “bak”, “temp”, or even your name or your initials.
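
A hedged sketch of that hunt (adjust the patterns to your own naming habits):

SELECT s.[name] AS schema_name, t.[name] AS table_name, t.create_date
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.[name] LIKE '%backup%' OR t.[name] LIKE '%bkp%'
   OR t.[name] LIKE '%bak%' OR t.[name] LIKE '%temp%'
ORDER BY t.create_date;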

SP_WHOISACTIVE

Hopefully you’re saving sp_WhoIsActive data to a table regularly, such as every 30-60 seconds. You may be using this data to help you find blocking, slow-performing queries, bad execution plans, or tempdb contention. But you probably are looking at the data when there is a current production problem. It might make sense to review the data periodically even if there isn’t a production problem. You might be able to spot a problem or a trend before it becomes a larger problem.

If you aren’t saving sp_WhoIsActive data, check this out for one method.
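
As a minimal sketch (the logging table name is hypothetical), sp_WhoIsActive can write straight to a table, so a job scheduled every 30-60 seconds only needs:

EXEC dbo.sp_WhoIsActive
    @get_plans = 1, --capture execution plans too
    @destination_table = 'dbo.WhoIsActiveLog'; --create the table first (see @return_schema)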

SP_BLITZCACHE

For sp_BlitzCache, I would take a peek to see if anything stands out. Capture the output of sp_BlitzCache into a table so that you can compare it to previous checks. Is there a stored procedure that’s running slower than it did previously? Is there anything surprising in there, such as a stored procedure that’s executing several times per second? I once supported a system that had a stored procedure running several hundred times per second. This isn’t necessarily a problem, but I wasn’t sure if it should be running that often. After speaking with the developer, I learned that it was an application bug. The developer fixed the bug in the next release, and I verified it by checking how often it was executing.

For more information about sp_BlitzCache, go here.

WRAPPING IT UP

Wouldn’t it be great to automate collecting all of this data? I leave that exercise up to the reader, but I think it’s important to also do manual checks of your SQL Servers. Set aside some time to proactively fix problems.

How often do you “Spring Clean” your SQL Servers and databases?

What else would you add to this list?

Brent says: as a DBA, it’s so easy to become completely reactive, putting out fires. There’s always gonna be a fire to distract you – you just have to buckle down and set aside time to get proactive.

Tara Kizer

This post is mostly a thought experiment

The thought was something along the lines of: I have a table I want to keep temporal history of. I don’t plan on keeping a lot of data in that table, but the history table can accumulate quite a bit of data. I want to partition the history table for easy removal of outdated data, and I want to use ColumnStore indexes because I’m just so bleeding edge all of my edges are bleeding edges from their bloody edges.

Fair warning here

This post assumes you’re already familiar with temporal tables, Partitioning, and ColumnStore indexes. I’m not going to go into detail on any of the subjects, I’m just walking through implementation. If you’re interested in temporal tables, Itzik Ben-Gan has a two part series here and here. We have a list of great Partitioning resources here, and of course, Niko Neugebauer has a (so far) 80 part series on ColumnStore over here.

On to the experiment!

The hardest part was getting the ColumnStore index on the history table. Let’s look at the process. There’s a lot of braindumping in the code. Feel free to skip the setup stuff, if you don’t care about it.

--Sample For President 2016
CREATE DATABASE [Sample2016]

--Let's not make a big deal out of this
ALTER DATABASE Sample2016 SET RECOVERY SIMPLE

--Netflix and chill
USE [Sample2016];

--Make sure this stuff is gone
IF OBJECT_ID('dbo.Rockwell') IS NOT NULL
BEGIN
ALTER TABLE [dbo].[Rockwell] SET (SYSTEM_VERSIONING = OFF);
DROP TABLE [dbo].[Rockwell]
DROP TABLE [dbo].[RockwellHistory]
END 

IF (SELECT [ps].[name] FROM [sys].[partition_schemes] AS [ps] WHERE [ps].[name] = 'TemporalDailyScheme') IS NOT NULL
BEGIN
DROP PARTITION SCHEME [TemporalDailyScheme];
END

IF (SELECT [pf].[name] FROM [sys].[partition_functions] AS [pf] WHERE [pf].[name] = 'TemporalDailyPFunc') IS NOT NULL
BEGIN
DROP PARTITION FUNCTION [TemporalDailyPFunc];
END

/*I always feel like...*/
/*Somebody's watching me...*/
CREATE TABLE [dbo].[Rockwell]
(
[ID] BIGINT,
[ProductName] NVARCHAR(50) NOT NULL,
[Price] DECIMAL(18,2) NOT NULL,
[Description] VARCHAR(8000) NOT NULL,
[AuditDateStart] DATETIME2(2) GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
[AuditDateEnd] DATETIME2(2) GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
PERIOD FOR SYSTEM_TIME ([AuditDateStart], [AuditDateEnd]),
CONSTRAINT [pk_id] PRIMARY KEY CLUSTERED ([ID]) 
) WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = [dbo].[RockwellHistory]) )

/*Partitioning Crap*/

--Make a partitioning function
DECLARE @sql NVARCHAR(MAX) = N'', @i INT = -366, @lf NCHAR(4) = NCHAR(13) + NCHAR(10)
SET @sql = N'CREATE PARTITION FUNCTION TemporalDailyPFunc (DATETIME2(2)) AS RANGE RIGHT FOR VALUES ('
WHILE @i < 0
BEGIN
SET @sql += @lf + N'DATEADD(DAY, ' + CAST(@i AS NVARCHAR) + N', ''' + CAST(CAST(GETDATE() AS DATE) AS NVARCHAR) + N''' )' + N', ' --quote the date so it isn't parsed as integer arithmetic
SET @i += 1
END
SET @sql += @lf + N'DATEADD(DAY, ' + CAST(@i AS NVARCHAR) + N', ''' + CAST(CAST(GETDATE() AS DATE) AS NVARCHAR) + N''' )' + N');';
EXEC sp_executesql @sql;
--PRINT @sql
GO

--I don't have a ton of filegroups, I just want everything on Primary
CREATE PARTITION SCHEME TemporalDailyScheme
AS PARTITION TemporalDailyPFunc
ALL TO ( [PRIMARY] );

Wew

If you’re here, I should make a couple notes. Microsoft added a really cool feature to temporal tables recently: the ability to mark the period columns as hidden. This is gravy for existing apps and tables, because you don’t have to return the row versioning data along with all your other data. It would be really nice if they’d add valid date ranges (read: expiration dates) to the syntax, but hey, maybe in the next RC…

I explicitly named our history table, because SQL will name it something horrible and dumb if you don’t. You don’t have much control over history table creation or indexing at conception, but you can make changes afterwards. SQL will put a clustered index on the history table that mirrors the clustered index definition of the base table.

ColumnStore Party!

So let’s see here. I have a base table. I have a history table. I have a Partitioning Scheme and Function. How does one get their history table to Partitioned and ColumnStored status? With a few catches!

First, you have to drop the index on the history table:

DROP INDEX [ix_RockwellHistory] ON dbo.RockwellHistory

The first thing I tried was just creating my Clustered ColumnStore index in place:

CREATE CLUSTERED COLUMNSTORE INDEX [cx_cs_RockwellHistory] ON [dbo].[RockwellHistory] ON TemporalDailyScheme ([AuditDateStart]);

But that throws an error!

Msg 35316, Level 16, State 1, Line 119 The statement failed because a columnstore index must be partition-aligned with the base table.

For reference, trying to create a nonclustered ColumnStore index throws the same error.

The next thing I did was create a nonclustered index, just to make sure I could create something aligned with the Partitioning. That works!

CREATE NONCLUSTERED INDEX [ix_RockwellHistory_ADS] ON [dbo].[RockwellHistory] ([AuditDateStart]) ON TemporalDailyScheme([AuditDateStart]);

Please and thank you. Everyone’s a winner. But can you create ColumnStore indexes now?

Nope. Same errors as before. Clearly, we need a clustered index here to get things aligned. The problem is, you can’t have two clustered indexes, even if one is ColumnStore and the other isn’t.

CREATE CLUSTERED INDEX [cx_RockwellHistory_ADS] ON [dbo].[RockwellHistory] ([AuditDateStart]) ON TemporalDailyScheme([AuditDateStart]);

CREATE CLUSTERED COLUMNSTORE INDEX [cx_cs_RockwellHistory] ON [dbo].[RockwellHistory] ON TemporalDailyScheme ([AuditDateStart]);

Msg 35372, Level 16, State 3, Line 121 You cannot create more than one clustered index on table ‘dbo.RockwellHistory’. Consider creating a new clustered index using ‘with (drop_existing = on)’ option.

Ooh. But that DROP_EXISTING hint! That gives me an idea. Or two. Okay, two ideas. Either one works; it just depends on how, uh, bottom-side retentive you are about how things are named. This will create a ColumnStore index over your clustered index, using DROP_EXISTING.

CREATE CLUSTERED COLUMNSTORE INDEX [cx_RockwellHistory_ADS] ON [dbo].[RockwellHistory] WITH (DROP_EXISTING = ON) ON TemporalDailyScheme([AuditDateStart]);

This will drop your current Clustered Index, and create your Clustered ColumnStore index in its place, just with a name that lets you know it’s ColumnStore. Hooray. Hooray for you.

DROP INDEX [cx_RockwellHistory_ADS] ON [dbo].[RockwellHistory]

CREATE CLUSTERED COLUMNSTORE INDEX [cx_cs_RockwellHistory] ON [dbo].[RockwellHistory] ON TemporalDailyScheme ([AuditDateStart]);

SUCCESS!

Never tasted so… Obtuse, I suppose. Maybe like the parts of a lobster you shouldn’t really eat. Anyway, I hope this solves a problem for someone. I had fun working out how to get it working.

I can imagine more than a few of you seeing different ways of doing this through the course of the article, either by manipulating the initial index, creating the history table separately and then assigning it to the base table, or using sp_rename to get the naming convention of choice. Sure, that’s all possible, but a lot less fun.

Thanks for reading!

Brent says: when Microsoft ships a feature, they test its operability. When you use multiple new features together, you’re testing their interoperability – the way they work together. Microsoft doesn’t always test – or document – the way every feature works together. For example, in this example, if you want to play along as a reader, your next mission is to look at query plans that span current and history data, see how the data joins together, and how it performs.

Erik Darling

You’re a data professional working with (or at least applying to work with) a company using the StackOverflow database (I’m using the March 2016 version today). Your users are complaining that this stored procedure is slow:

usp_GetPostsByOwnerUserId

They didn’t give you parameter 26837 – I’m just giving you that so you can see an execution plan.

You don’t have to talk me through the query itself, or what you’d want to do to fix it. In fact, I want you to avoid that altogether.

Instead, tell me what things you need to know before you start tuning, and explain how you’re going to get them.

I’ll follow up in a few days with my thoughts.


Update 2016/05/28 – 71 comments, holy smokes! One thing is clear: you folks like interviewing for jobs. Because there were so many comments, here’s what I’ll do this week: I’m going to start by talking about what I had in mind when I wrote the interview question, without looking at your answers, then I’m going to read yours because I bet you had even better ideas than I did.

For me, the most important part is, “Can you follow directions?” It’s so tempting to jump in and start fixing the query, but I asked two very specific questions, and I was looking for the answers to those.

Before I start tuning this query, I want to know:

  • What parameters make this query run slow?
  • What does “slow” mean – in concrete time terms?
  • Is it always slow for those same parameters, or does the time vary widely?
  • How fast does it need to run when I’m done? Or, how long should I spend tuning it? (Typically, I ask the user to tell me if I should spend 15 minutes, 1 hour, or 1 day tuning the query.)
  • How often does it run?
  • Could we cache the data in the application layer?

To get these answers, I’m going to:

  • Talk to the users for speed guidance
  • Query the execution plan cache using sp_BlitzCache® to see if this query shows up in our top 10 most resource-intensive queries, and if so, does it have warnings for Frequent Executions, Parameter Sniffing, and/or Long-Running Queries
  • Look at the execution plan to see what parameters it was compiled for
  • Talk to the developers to see if caching in the app tier is an option

Now, let’s see what you submitted, and look for neat ideas.

  • Mazhar wanted to know the table’s size – great idea! You’d want to tune indexes on a 1GB table differently than you’d tune a 1TB one.
  • Raul @SQLDoubleG was one of the first to point out that this code and execution plan are a perfect example of a parameter sniffing problem, good job.
  • Mike Taras asked who’s running this proc, users or apps? Good question – you might tune end-user facing code differently than service-based code. He also asked if we really need to return all of these fields.
  • Russ suggested zooming out and checking the server’s health overall. A+! That’s the first step in my BE CREEPY tuning process, blitzing the box with sp_Blitz®.
  • James Anderson turned it around on the users and said, how do you know this is the proc? Are you running traces to find out that it’s slow? I like James. I bet he has ways of making his users talk.
  • Thomas Pullen asked if it was lunchtime yet. I’ll meet him at the pub for a pint and we can laugh about the slow queries in our shops.
  • Mike F wondered what the users’ SET operations are, because that’s one of the things that can make troubleshooting parameter sniffing harder.
  • Jason Strate emphasized getting real numbers because without that, it’s like “knocking on doors at random in Chicago hoping to find Brent’s swag penthouse.” For the record, I have a doorman, but you’re on the guest list, Jason.
  • Stephen Falken wants to know what’s changed on that server recently, ask who has elevated permissions on the box, and what they changed.
  • Chintak Chhapia asked how frequently data is added & updated to this table, and what columns are updated. Very good question because it determines your indexing strategy.
  • And If_Luke_Skywalker_Was_A_Troll gets very high marks for asking excellent followup and challenge questions throughout, getting folks to think deeper through their suggested answers.
Brent Ozar

This week, Brent, Angie, Erik, Jessica, and Richie discuss sp_Blitz® updates, merge replication, detecting index fragmentation, how to punish your DBAs, and Brent’s new chroma key backdrop.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


Jessica Connors: Let’s talk about some merge replication. Paul asks, “What are the ways to handle manual identity management…?”

Brent Ozar: God bless.

[Laughter]

Brent Ozar: Oh man, Paul, you have a crappy job. “What are the ways to handle manual identity management for merge replication?” So what you do is, for identities, when you only have two servers and you’re dealing with merge replication, you start one at positive one going upwards, and the other at negative one going downwards. The more servers you add, the more complex you have to get. For example, you can start one at one and go up by two so that it hits all the odd numbers: 1, 3, 5, 7, 9. You start another one at two and it goes up by two, so it’s 2, 4, 6, 8, and so forth. Then you do the same thing with negatives. The more servers you have, the more complex this gets, and you start buying Bitcoin rigs to calculate prime numbers and all that kind of crap. You can also do ranges: you can start at the maximum of one identity and go in the other direction. It is just not pretty, the more identities you add.

Erik Darling: Use GUIDs.

Brent Ozar: GUIDs? Yeah, I actually like GUIDs a lot for that. Also, now you see why I said he had a crappy job.
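
For reference, a minimal sketch of the positive/negative split Brent starts with (the table is hypothetical):

--Server A counts up from 1
CREATE TABLE dbo.Orders (Id INT IDENTITY(1, 1) PRIMARY KEY, Amount DECIMAL(18, 2));
--Server B counts down from -1:
--CREATE TABLE dbo.Orders (Id INT IDENTITY(-1, -1) PRIMARY KEY, Amount DECIMAL(18, 2));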

 

What’s the most accurate way to detect index fragmentation?

Jessica Connors: Gregory has a question about detecting index fragmentation. Wants to know which is more accurate, DBCC SHOWCONTIG or sys.dm_db_index_physical_stats. “I know the DMV is supposed to be more accurate when measuring across multiple files, but in my environment that isn’t a factor. Which metric is more accurate, scan density or logical scan fragmentation?”

Erik Darling: Wow. That is a weird question.

Angie Walker: Does it matter that much? I’ve never heard of that.

Erik Darling: How accurate do you need it to be? Are you worried about like 51.1 percent versus 51.2 percent fragmentation? I assume you’re only going to act on this fragmentation at a whole number anyway, so unless you’re worried about wildly different results, I wouldn’t really sweat it. The dm_db_index_physical_stats function is the one that everything I’ve ever seen uses to detect it, at least in modern versions of SQL: Ola Hallengren’s scripts, maybe some other reindex tools that Brent likes.

Jessica Connors: Yeah, he said he uses Ola Hallengren scripts.

Erik Darling: Why do you have to do anything with them?

Brent Ozar: I want to say that isn’t DBCC SHOWCONTIG deprecated? I’m not 100 percent positive. Yeah.

Erik Darling: If it’s not deprecated then it’s like on the way out. It’s one of those like SQL 2000-ish commands, you know, DBCC PINTABLE.

Brent Ozar: Grandpa told you about.

Erik Darling: Yeah.

Angie Walker: Tell your developer to stop pretending they’re the DBA. If you’re the DBA, you do the defragmentation and index maintenance, and leave it alone.

Erik Darling: Don’t actually do it and save yourself a whole bunch of time.

Angie Walker: Read Erik’s blog about not reindexing at all.

Brent Ozar: But to get that information, when you scan that DMV, it actually reads the tables. It will do scans depending on what parameters you pass in. I don’t want developers looking at that during business hours. You want to explain to them what it actually does behind the scenes, that it reads huge quantities of large tables. So I would just say, “Hey, dude, if you want, I’ll capture that after hours using Ola Hallengren’s scripts. But don’t touch it during business hours.”
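
If you do want to sample fragmentation yourself, the DMV takes a mode parameter, and LIMITED is the cheapest. A minimal sketch, run against the current database:

    SELECT OBJECT_NAME(ps.object_id) AS table_name,
           ps.index_id,
           ps.avg_fragmentation_in_percent,
           ps.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
    WHERE ps.page_count > 1000;   -- small indexes aren't worth worrying about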

 

Can you detach and reattach a database in a restoring state?

Jessica Connors: Question from Bill. “Hey, is it possible to detach, rename the underlying files and reattach a database in a restoring state?”

Angie Walker: Not in restore… what?

Brent Ozar: I think you can. I think you can because I remember having to do something with this around log shipping. Getting the exact syntax right I would have no clue about, like attaching a database in a restoring state, but just go try it. If you run into problems with it, this is a great example of posting it on Stack Exchange. Go to dba.stackexchange.com and post what you’re trying to do and someone else can walk through the exact syntax. If you don’t get a good answer within a day or two, shoot us an email, but I’m pretty sure that you can do that.

Angie Walker: Scott also says you can.

Brent Ozar: Yeah, okay cool.

Angie Walker: Whoever Scott is on the internet.

Brent Ozar: He must be right, right? He’s here. We only have smart people in our Q and A. He’s here for the A. Not as much for the Q. Yeah, I think I’ve done that for log shipping databases.

Jessica Connors: Got ya.

Brent Ozar: Scott says, “I do it at least once a quarter.” This isn’t about your personal life, Scott.

[Laughter]

This is a work webcast. Congratulations, but…
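
For what it’s worth, the usual way to end up with renamed physical files on a database in a restoring state is to rename them during the restore itself rather than detaching. A minimal sketch with hypothetical names and paths:

    RESTORE DATABASE MyDB
    FROM DISK = N'D:\Backups\MyDB_full.bak'
    WITH MOVE N'MyDB_Data' TO N'D:\Data\MyDB_renamed.mdf',
         MOVE N'MyDB_Log'  TO N'L:\Logs\MyDB_renamed.ldf',
         NORECOVERY;   -- leaves the database in a restoring state, ready for more log restores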

 

Can I rename a SQL Server cluster’s virtual network name?

Jessica Connors: Let’s move on to Nate Johnson. He says, “How easy is it to rename a SQL Server specifically a failover cluster virtual network name, i.e. the physical node names will remain the same, the Windows clustered name will remain the same, but the SQL virtual instance name or the cluster name needs to change.” Thoughts?

Erik Darling: I have never tried to do that.

Brent Ozar: You don’t have to. You don’t have to. There’s an easy workaround. All you do is create a DNS CNAME. You create a DNS CNAME of whatever new name you want and you point it to the existing cluster’s virtual name. Leave the cluster virtual name exactly the way it is. Then just create a DNS CNAME that points to that. I’m saying this really quickly but DNS CNAME. So the letter “C” and then name. Ask your systems administrators to create that for you, they’re the Active Directory guys, or DNS guys. It’s so simple, takes 30 seconds. Then end users will never know that it’s a different SQL Server. And both names will continue to work forever.

Angie Walker: But what if it’s in the GUI, like in the failover cluster? Maybe that’s why he needs to rename it. Maybe they’re being crazy and then they’ll have two of the same.

Brent Ozar: Look at the GUI, get out of there. I love the GUI. I’m a big GUI kind of guy.

Jessica Connors: Nate is also in a pickle. It looks like he’s broken replication.

Richie Rump: Good.

Angie Walker: Tara is not here.

Erik Darling: You fixed replication.

[Laughter]

Brent Ozar: Sounds like success to us.

Angie Walker: Where’s the hammer? Fixed.

 

We broke replication. Now what do we do?

Jessica Connors: He says, “We broke replication, transactional replication. We tried a reinitialize with a new snapshot and then canceled due to the CPU spiking. Now we’re missing schema and data. How to recover besides a full reinitialization? We’ve got the schema synced up manually but the missing data is turning out to be huge because some tables are millions of rows. Any recommendations?”

Brent Ozar: Let’s see, reinitialization, I think you can do it with a backup—I truly don’t do transactional replication—but what I would do is, if you know that some data but not all data is there, you can insert the missing rows across yourself. You can do your own inserts and then just tell transactional replication, “Hey, we’re good. You don’t have to bother to reinitialize.” But you said that there’s millions of rows missing. Then yeah, then you have to put those rows back. That’s the part that blows.

Erik Darling: Unless you can get away with it and throw a filter on there to not put all the rows in. You could start small.

Brent Ozar: I like that, filtered publication. Only the stuff that is current to people.
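
If you try the filtered-publication route, the filter is just a WHERE clause on the article. A hedged sketch using replication’s stored procedures (the publication, table, and filter are hypothetical, and for transactional replication you’d also run sp_articlefilter and sp_articleview afterward):

    EXEC sp_addarticle
         @publication   = N'SalesPub',
         @article       = N'Orders',
         @source_owner  = N'dbo',
         @source_object = N'Orders',
         @filter_clause = N'OrderDate >= ''20160101''';   -- only ship the current rows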

 

Jessica Connors: Nate Johnson is on a roll.

Brent Ozar: Oh, on a roll.

Jessica Connors: He’s on a roll. He has another question.

Brent Ozar: Not on a good roll. This poor guy is having to throw in so many questions, like, “Another thing, quick, I’ve got to get these in before you guys close the webcast because this is on my homework.”

[Laughter]

 

Should I upgrade SQL Server 2005 to 2008R2 in-place?

Jessica Connors: I’ll take one more from Nate. He says, “It’s taking months longer than anticipated to prep a SQL migration onto new hardware for a 2005 to 2014 upgrade. Can I convince managers that a 2005 to 2008 R2 in-place upgrade is a good idea to buy us more time on compliance?”

Erik Darling: No. In-place upgrades are the devil, especially major versions like that. Run screaming. Run screaming into the night.

Angie Walker: Plus, why do you want to go to 08 R2?

Brent Ozar: I think he wants to get under warranty support because he’s worried about 2005 being out of support and I get that.

Erik Darling: So you would have to upgrade to 2008 R2 and then to Service Pack 3 so you’re looking at some pretty big jumps.

Brent Ozar: Yeah, that makes me nervous. With no backout plan if anything goes wrong.

Angie Walker: So maybe let him know what would happen—if right now he has a problem, he’s on 2005 and he calls Microsoft, are they going to tell him sorry and shut the door in his face? Or are they going to make him pay extra money for help?

Brent Ozar: God, 2005, I don’t even think they take support calls anymore. A couple of us were joking around about this recently: how often do you really call Microsoft anyway, right? Do you really get on the phone with Microsoft a lot? Does it really matter that you’re out of support? That’s horrible for me to say, but if it’s a production application—I mean, we only want to work on stuff that’s in support. We don’t take work on out-of-support databases unless it’s an extra charge kind of thing. Yeah, look at Erik fist pumping. Yeah, that’s tough. I would actually stand up a VM, just a virtual machine, in the meantime, assuming that you’ve got those. If you could make that work performance-wise, that’s what I would go for until you get the new hardware you want.

 

Should adding logins or taking backups go through change control?

Jessica Connors: A quick question from Samuel. He says, “On production do you agree that adding a login or taking a backup should require change control?”

Brent Ozar: Don’t you love it when people ask, “Do you agree?” It’s like this big leading question. “Wouldn’t you say that…?” Leading question, your Honor. All right, so we’ll go through, we’ll ask each person. So just as I’m looking at my monitor, Erik, would you go through change control if it involved adding a login?

Erik Darling: Adding a login, it would depend on for what. If it was a new developer or a new DBA I probably wouldn’t go through change control for that. If it was for a new application then I certainly would or something else that needed access to the server from outside, I might. What was the other part?

Brent Ozar: Backing up database? Was that what it was?

Erik Darling: If you want to go through change control every time you take a backup, you’re going to spend a lot of time with change control.

Brent Ozar: The only time, when you say take a—like if it’s an out-of-band backup maybe, like if somebody wants to take a backup in the middle of the day and it’s going to slow down performance. Otherwise, I think we’re generally going no across the board here. But in terms of adding a login, yeah, it just depends on what it’s for. If it’s a read-only login that’s in db_denydatawriter then I’m kind of comfortable with it, as long as it’s not personally identifiable data. You know, it’s not like credit card data or social security numbers.

Erik Darling: If it is an out-of-band backup, you want to be looking at taking a copy only backup, not a regular full backup.
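
A minimal sketch of the copy-only backup Erik describes, with a hypothetical path: it doesn’t reset the differential base or interfere with the log chain.

    BACKUP DATABASE MyDB
    TO DISK = N'D:\Backups\MyDB_adhoc.bak'
    WITH COPY_ONLY, COMPRESSION;   -- safe for out-of-band, ad-hoc copies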

Brent Ozar: Oh, that’s a good point. Richie, how about you?

[Sirens in background]

Are the police coming for you? Is that what that is?

Erik Darling: Uh, no. They’re coming for me first of all.

Richie Rump: It used to be for you remember all the sirens you used to get? But now it’s Erik.

Brent Ozar: The guy in Brooklyn.

 

Should I add multiple transaction log files for better speed?

Jessica Connors: All right, sp_Blitz question. Bruno says, “sp_Blitz warns me that there are multiple log files on the drive and that this is not a performance booster because log file access is sequential not parallel but isn’t this considered a best practice to minimize contention problems?”

Erik Darling: If it’s sequential, then there’s no contention.

Brent Ozar: There’s a super rare edge case that Thomas Kejser blogs about where they end up with a bunch of really small log files in order to… [Laughs] Erik is throwing his headphones. Just wildly rare, ridiculously, crazy rare, where they have a whole bunch of really small log files in order to minimize log buffer writes. That’s like million-transactions-a-second territory. That’s like a totally different world than what most of us out there deal with.

Erik Darling: You also have to put them all on separate drives, all sorts of other stuff too.

Brent Ozar: Yeah. Solid state drives.

Erik Darling: In Oracle, it is a little bit different. Oracle will use the multiple log files at once but that’s not a contention thing, that’s a redundancy thing. So you can have multiple log files to get the same synchronous data pushed to them, but even that’s not going to do much. SQL uses them serially; it’s not going to parallelize the log writes.

Jessica Connors: So sp_Blitz does check for too many virtual log files?

Brent Ozar: Yes, yes, it does. I want to say we alert you around 1,000. I don’t think 1,000 is bad but we just want to give you the heads up before you hit some ridiculously bad number. So at 1,000 we’re like, “Hey, maybe you should think about shrinking this log file. Doing something about that.”
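
If you want to count VLFs yourself, the classic check is DBCC LOGINFO, which returns one row per virtual log file (SQL Server 2016 SP2 and later also expose sys.dm_db_log_info). A minimal sketch with a hypothetical database name:

    USE MyDB;
    DBCC LOGINFO;   -- one row per VLF; the row count is your VLF count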

 

How can I gradually move databases from one AG to another?

Jessica Connors: Question from Gregory. He says, “Is there a way to gradually move databases from one instance to a separate AlwaysOn Availability Group without any application changes? I would like to use an alias DNS record but I think the movement would be all of the databases at once. I was hoping the alias record would just be the name of the AG listener but I don’t think both servers can use it at the same time to allow gradual migration.”

Erik Darling: Couldn’t you use multiple connection strings on that?

Brent Ozar: Yeah, yeah, yeah. Keep going. Keep explaining there.

Erik Darling: Oh, god, I don’t—I just know that that exists. So with availability groups you can use connection strings for different activities. So you have one for reads and one for writes and one for everything else. I think you might be able to do something where you set that up, assuming it’s a .NET application, with the multi-subnet failover stuff. You should be able to use different connection strings to push different activities to different servers and/or databases. Which is probably what I would do if I had a real job.

Brent Ozar: Each connection string can use a different DNS CNAME and then you can gradually move those apart to different listeners if you like.

Erik Darling: Yeah. Thanks for smartening that up for me.

Brent Ozar: Oh, and then, when a big database or a big query loves a small database very much…

Erik Darling: It goes to jail for a long time.

[Laughter]

 

In virtualization, how do I use local SSDs for TempDB?

Jessica Connors: Question from Sean. How can tempdb be placed on a local SSD in a failover cluster? System is two-node Hyper-V, slow SAN, SQL 2012 Standard.

Erik Darling: That kind of dive bombs for me when you say Hyper-V or really any sort of virtualization because then you have that machine tied to that host. Then you would have to on every host have that thing set up for that VM and it gets messy. Get a better SAN. Sorry.

Brent Ozar: Yeah. So often you’ll see people saying, “Virtual machines can be just as fast as physical ones.” Not when you do stuff like this. There’s still some tricks that you only get in physical machines. Some people will say, “Well what I could do is, in an AlwaysOn Availability Group, put a different VM on each node and have him use local storage.” Yeah, but then when the host dies you’re screwed. That whole thing fails over.

Erik Darling: Let’s be honest, whenever you hear about someone saying, “Oh I made this machine so much faster by virtualizing,” it’s always like, “I took a SQL 2000 box and I put it on modern hardware and now it’s miraculously faster.” It’s never, “Oh, I had two equivalent boxes sitting on SQL Server,” let’s say 2012, keep it sort of recent. “One was virtual, one was physical and they were both the same speed.” No one ever says that. Everyone is always showing off about some old really old application that they upgraded the hardware on. It’s like, so? I don’t get it.

Angie Walker: Yeah, usually much simpler setup than this too. I mean one standalone server with tiny data. Not as much of a problem.

 

How should I punish DBAs who put production databases in simple recovery model?

Jessica Connors: All right. Justin wants to know how to punish his DBAs. “What’s the appropriate punishment for a production DBA who puts all the production databases into simple recovery mode?”

Erik Darling: He has to do his MySQL.

Brent Ozar: That will also do the trick.

Angie Walker: Does the business care? Come on. Maybe you don’t need full recovery model. Who cares about logging every transaction if you don’t need point-in-time restore? Or anything more than a week’s RPO goal, maybe?

Brent Ozar: Data warehouses, when you’re going to reload from scratch every day. Normally I would give Angie a gold star but the only thing I have is a gold screen protector.

Angie Walker: I will accept it.

Richie Rump: See you start with a box and then you put nails… Oh, I’m going to stop right there.

Angie Walker: Bad developer.

[Laughter]

Jessica Connors: All right. Mike has a question and he saw you in Houston last week.

Brent Ozar: Oh, cool.

 

How often should I rebuild indexes?

Jessica Connors: Mike says, “I’m hoping to convince my customers to let me pull the plug on index rebuilding maintenance plans temporarily in order to accumulate x amount of time of usage stats. What is a minimal amount of time you feel is sufficient? In Houston you said to get a few weeks but do you think that one week would be enough? One particular customer will be having anxiety attacks about not running index maintenance, so it’s going to be a tough sell. I’m hoping maybe a week would be good enough.”

Brent Ozar: Mike, I have phenomenal news. SQL Server 2012 had a series of cumulative updates come out yesterday that fixed the index usage stats bug. If you go to SQLserverupdates.com, the latest updates that came out yesterday for both the current service pack and the one before it—it’s like CU3 and CU12—now let SQL Server’s index usage statistics persist. Missing index recommendations do not persist, but at least the usage stats do. So if they’re on 2012, have them go get that. For 2014, that fix is not out yet. Because the 2012 one came out, I feel like I’m not going to pick that battle with my customers. I’ll just go back and apply the 2014 CU when it comes out.

Erik Darling: Plus, if you install the service pack you may have to reboot, which will clear out your stats anyway. What I would do is scale my index maintenance back to only reorganize stuff, which doesn’t wipe stats out. That way stats persist, they still get their precious fragmentation lessened, and you get to collect usage statistics. To clarify: only rebuilding clears those out.

Angie Walker: Since there’s a fix for it, we can officially call it a bug now, right? Like it’s not just…

Brent Ozar: Right. That’s what I call it, yeah.

Erik Darling: See now I have egg on my face because I’ve been telling people that 2012 wasn’t going to get the fix because the original announcement about that change said 2014 and 2016 only. So, omelette everywhere.
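
For reference, the usage stats in question come from sys.dm_db_index_usage_stats. A minimal sketch for sampling them in the current database while you collect data:

    SELECT OBJECT_NAME(us.object_id) AS table_name,
           i.name AS index_name,
           us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
    FROM sys.dm_db_index_usage_stats AS us
    JOIN sys.indexes AS i
        ON  i.object_id = us.object_id
        AND i.index_id  = us.index_id
    WHERE us.database_id = DB_ID();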

 

Where do I download SQL Server for Linux?

Jessica Connors: Speaking of 2016, since SQL 2016 for Linux is not scheduled for general release until 2017 have you heard of any preview versions available?

Angie Walker: That is a no.

Brent Ozar: The way that Microsoft does its preview-type stuff is often people who have NDAs or who are in early access programs get access to early betas. We are not in that program, we’re not in the one for Linux. We were in the one for 2016, as you can tell when we bring out blog posts with screenshots available five minutes after the release comes out. In terms of other people though, what you want to do is get in contact with your Microsoft sales rep. If you don’t have a sales rep or a TAM, a technical account manager, then you’re probably not going to get into the program.

Jessica Connors: Wah-wah-wah.

Brent Ozar: Wah-wah-wah.

 

How do I measure storage bandwidth?

Jessica Connors: From Dennis, he says, “Do you guys have a good way to measure 10Gb iSCSI bandwidth to the SAN?” This does not sound like a database engine question.

Brent Ozar: How do you guys like to do storage testing?

Erik Darling: CrystalDiskMark or Diskspd.

Brent Ozar: There you go. CrystalDiskMark, yeah, it’s excellent, it’s an easy button for storage testing. If you go to our site and search for CrystalDiskMark, there’s a post on exactly how to use it. It’s current with the current version of CrystalDiskMark too. They changed the screenshots around.

 

What’s the difference between a unique index and a unique constraint?

Jessica Connors: All right. Question from Jonathan. He says, “From what I understand there is no internal difference between a unique index and a unique constraint. Which one do you all prefer and why?”

Brent Ozar: Isn’t there a difference?

Erik Darling: A unique constraint is backed by a unique index, right? All constraints are backed by indexes basically.

Angie Walker: Correlation not causation or something.

Richie Rump: I always thought it was syntactic sugar. I could be wrong.

Brent Ozar: It is. If you go to Stack Exchange, if you go to dba.stackexchange.com, there’s a question “When should I use a unique constraint instead of a unique index?” For example, a unique constraint cannot be filtered, but a unique index can be. There’s a bunch of little, super tiny differences between the two. A unique constraint cannot be disabled; a unique index can be. I’ll post this, I’ll just put it out on Twitter, what the differences between these are. I’m going to tweet that right now. Tweet.

Angie Walker: Andrew proved that even short questions can be answered on Stack Exchange if you don’t get around to our webcast.

Brent Ozar: Yeah, the answers are huge. The answers are like pages long on this. People go into details.
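
For a quick taste of those differences, a minimal sketch with a hypothetical table: both enforce uniqueness through a unique index under the covers, but only the standalone index can take a filter.

    CREATE TABLE dbo.Users
    (
        UserID int IDENTITY PRIMARY KEY,
        Email  nvarchar(320) NOT NULL,
        SSN    char(11) NULL
    );

    -- Unique constraint: no filter, can't be disabled
    ALTER TABLE dbo.Users ADD CONSTRAINT UQ_Users_Email UNIQUE (Email);

    -- Unique index: can be filtered (and disabled)
    CREATE UNIQUE INDEX IX_Users_SSN ON dbo.Users (SSN) WHERE SSN IS NOT NULL;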

 

How do I rename my Active Directory domain?

Jessica Connors: All right. Let’s take one from Corey. He says, “Management would like the local AD domain name changed to reflect their new brand. We have a single SQL 2008 R2 database for document management. We have about 30 users. Is this realistic? Any gotchas? My background is not SQL, just a basic network admin.”

Brent Ozar: It depends on whether or not you have a cluster. If you have a cluster, this is the part that sucks. But if you don’t have a cluster, it’s totally easy. It’s a total piece of cake. What a lot of people do is just stand up a new domain and then move stuff across from one to the other or a new forest. Erik’s got an idea.

Erik Darling: No, I don’t have an idea. I just finally understood the question.

Brent Ozar: Oh.

Angie Walker: Yeah, answering it for me—I was like, “Does he really just want to change the server name?”

Erik Darling:  It was confusing to me too. I was like—then, no, that’s bad.

Brent Ozar: Yeah, so it is a giant pain in the rear. I visited a company this past Monday I guess that had to go through that and it was ginormous work for all the DBAs involved.

Jessica Connors: The green looks good on you. You stand out. Brent has this green wall behind him. He just painted the wall behind him green. It’s this bright, bright green and it actually looks really good.

Brent Ozar: Thank you. It depends a lot on the webcam and how much sunlight is in here. Sometimes I look pale, alien pale, so I had to get the right lights working. I look alien pale in the daylight as well. Yes, we did that for the green screen effects that we use in our training videos. Doug does the same thing down at his place, has a wall set up so that he can make the background disappear. It’s called chroma keying. It’s really fun.

 

Can I ignore the QDS wait types?

Jessica Connors: Nice. Bruno had the sp_Blitz question earlier. I think he has a follow-up. He says, “Can I ignore all the QDS wait types?”

Brent Ozar: Yeah, QDS stands for Query Store, and you can totally ignore all of the Query Store waits.

Jessica Connors: All right.

 

Should I index my temp tables?

Erik Darling: There’s a question from Dan that looks kind of interesting. I can read it to you if you like interesting.

Jessica Connors: Read it.

Erik Darling: He has a temp table in a stored procedure and it has three nonclustered indexes on it, but the execution plan is showing a table scan rather than hitting any of the nonclustered indexes. Now what you could be looking at is that you have these nonclustered indexes that may well satisfy your predicate, like a where clause or a join or something, but they may not have enough included columns in them to tip the balance over to using those indexes rather than just scanning the entire table to get all the data. So if you’re selecting all the columns in the table but your where clause is only on a column or two, you may see that the optimizer chooses to scan the entire table because it’s cheaper than hitting the nonclustered index and doing a key lookup back to get all the other columns.

Jessica Connors: There you go.

Erik Darling: Start with a clustered index on a temp table rather than a nonclustered index. That’s usually my rule of thumb.

Jessica Connors: All right.

Brent Ozar: I like it.
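
A minimal sketch of Erik’s rule of thumb, with hypothetical names: one clustered index to organize the temp table instead of a pile of nonclustered ones.

    CREATE TABLE #Sales
    (
        SaleID     int   NOT NULL,
        CustomerID int   NOT NULL,
        Amount     money NOT NULL
    );

    -- One clustered index, ideally created before the table is loaded
    CREATE CLUSTERED INDEX cx_SaleID ON #Sales (SaleID);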

 

Can I force my AG users to query the listener rather than server names?

Jessica Connors: We have another question from Gregory. “Is there a way to enforce the use of AG listeners and AlwaysOn Availability Groups versus just connecting directly to an instance?”

Brent Ozar: Oh, wow. No. No, I don’t think so. Wow. What a good question. I don’t think so. Could you force…? Yeah, I can’t think of a way that you would be able to do that. No, other than being the obvious jerk and changing the names of your servers all the time so that people can’t guess them, but yeah, I don’t think there’s a way to do that. Greg, you should totally ask that on Stack Exchange and then whether you get an answer or not, email us, because I would like to hear about that and see if anybody comes up with an answer.

Erik Darling: That’s pretty [Inaudible 00:25:07]

Brent Ozar: Or, so I’m going to go out of the box. Maybe if you use F5 global IPs, or Imperva or Guardium, those network appliances that sit in between your SQL Server and the network. But other than that, it would take a firewall’s stateful packet inspection.

Jessica Connors: All right, let’s do one more. From our buddy, Nate. He’s my friend today because there’s not many people asking questions. It’s like three people who are having at it.

Angie Walker: A high honor you guys, to be Jessica’s friend.

 

Have you ever worked with Microsoft Dynamics NAV migrations?

Jessica Connors: Yeah, I’m going to add him on Facebook after this. He says, “This might be a shot in the dark but have you guys ever worked with a client who was running Microsoft Dynamics NAV? If so, do you have any experience with migrating or upgrading the backing SQL Server that’s running the NAV databases?”

Brent Ozar: How about you guys?

Erik Darling: Nothing.

Angie Walker: We don’t really help with migrating and upgrading clients’ databases specifically, not on a regular basis.

Brent Ozar: I don’t know that we really help with anything. I don’t even know if we help on some things.

Angie Walker: Well, I help with some things.

[Laughter]

Erik Darling: Big hand wavy.

Angie Walker: I pull all of the skeletons out of the closet and then I set them up for Erik and Doug and Tara to beat them down a little bit.

 

Why do low-cost queries sometimes go parallel?

Erik Darling: There’s a good question in here that I want to get to. It’s from Samuel and he asks, “I have a query that is less than the cost of parallelism but it’s still giving a parallel plan. Why would this be?” It’s because it’s not the cost of the parallel plan that has to exceed the cost threshold. It’s the cost of the serial plan that has to exceed the cost threshold for parallelism and the parallel plan has to be cheaper. So what you’re seeing is a parallel plan that is cheaper than the serial plan that crossed the cost threshold.

Brent Ozar: Dun dun dun. So you can see what the cost of the serial one was by adding OPTION (MAXDOP 1) to a query, and you’ll get an idea, unless it’s a stored procedure; then it gets a little trickier.

Erik Darling: Yep.
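
A minimal sketch of Brent’s tip, with hypothetical tables: hint the query down to one core and compare the estimated cost of the resulting serial plan against your cost threshold for parallelism setting.

    SELECT COUNT(*)
    FROM dbo.Orders AS o
    JOIN dbo.OrderLines AS ol ON ol.OrderID = o.OrderID
    OPTION (MAXDOP 1);   -- the estimated cost of this serial plan is what gets
                         -- compared against 'cost threshold for parallelism'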

Jessica Connors: Cool beans. I’m hungry, my stomach is growling.

Brent Ozar: Oh, where are you going for lunch?

Jessica Connors: I’m getting some UberEATS. Unless you want to go somewhere. You’re right down the street today.

Brent Ozar: Oh, no. I’m good. I think I’m going to just phone in and have something else delivered here as well.

Jessica Connors: Yeah, UberEATS is awesome. Three minutes. Hot meal.

Richie Rump: I’ve got to go to FedEx and drop off a computer. Thanks, Brent.

[Laughter]

Brent Ozar: Oh, Richie. Richie chose poorly, not chose wisely. Yes, his machine. I hope they don’t ruin your stickers. All right, thanks everybody and we will see you next week.

Brent Ozar Unlimited Team
Brent Ozar Unlimited is a boutique consulting firm focused on understanding your environment and strategy. We partner with you to objectively identify pain points and develop solutions that align to your business goals. Your experience comes first; we share our knowledge and expertise to help you.

Let’s just get one thing out of the way first.

Yes, I understand that you, dear reader, probably hate maintenance plans. You associate them with everything dumb that you did early on in your career, like setting up a single plan that reorganized indexes, then rebuilt them, then updated their statistics. You are older and wiser now, and you swear by tools like Ola Hallengren’s maintenance scripts or Minion Reindex.

This blog post, however, is not about you.

It’s about all of the SQL Servers out there that have not yet had the wonderful opportunity to feel the tender loving hand of a qualified database administrator such as yourself. It’s about the tools that accidental DBAs will use over the years to come.

So let’s start with the index rebuild task:

Index rebuilds in SQL Server 2016

Be still, my beating heart.

You can tell it’s kinda slapped together haphazardly – note the awkward spacing of the “Used in last” line at the bottom – but God bless ’em, Microsoft’s heart is in the right place. We have new options to only rebuild indexes if they’re a certain percent fragmented, or a certain size, or they’ve been used recently.

The goodness continues on the reorg screen:

Reorganizing indexes in SQL Server 2016

Same nice options about only optimizing indexes that are in use, or are in bad shape.

The CHECKDB screen shows off its new MAXDOP capabilities, now that DBCC CHECKDB can take a hint:

The CHECKDB task’s new MAXDOP option in SQL Server 2016
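
Under the covers, that checkbox is just the new syntax that lets CHECKDB take a degree-of-parallelism hint, something like this (database name hypothetical):

    DBCC CHECKDB ('MyDB') WITH NO_INFOMSGS, MAXDOP = 4;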

Part of me is happy because undereducated database caretakers now have new, more powerful tools at their disposal.

The other part of me is a little bit sad because it’s still not easy to use. If maintenance plans are designed for the accidental and junior DBAs amongst us, I don’t think a lot of this stuff should even be an option. It should just default to the right thing, and take care of the database with Microsoft’s best practices set up as standard.

But that is a really, really small part of me. Maintenance plans are getting better, and that means something good.

Brent Ozar
I make Microsoft SQL Server faster and more reliable. I love teaching, travel, and laughing.

I’m mostly a figurehead here at Brent Ozar Unlimited. My true skills are menu advice, interpretive dance, and reading Wikipedia.

The instant you encounter corruption in a production SQL Server database, here’s what to do:

1. Query the msdb.dbo.suspect_pages table. This system table tracks up to the last 1,000 corrupt pages detected. This will give you a quick idea of how widespread the corruption might be. If it spans multiple databases, you need to consider getting off this hardware rather than repairing the corruption. The storage itself may be bad, and your corruption repair efforts may be fighting a losing battle. If you see corruption in multiple databases, send an email to your manager, team, and app stakeholders:

SQL Server (INSERT NAME HERE) just reported corruption across multiple databases. This indicates that it has storage problems. We need to fail over to another SQL Server immediately – come to my cube (OR ONLINE MEETING OR WHATEVER) to discuss our options.

At that point, look at your disaster recovery plan to figure out where you’re going to fail over (like a log shipping secondary.) You’ll need to check that location for corruption as well, but if we’re dealing with widespread, multi-database corruption, it’s beyond the scope of what I can teach you to do in a blog post. You can keep reading for single-database corruption recovery options, and use these same tactics across multiple databases, but it’s going to be a ton of work.
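
Circling back to the top of this step, a minimal version of that suspect_pages query looks like this:

    -- Up to the last 1,000 suspect pages, newest first
    SELECT database_id, DB_NAME(database_id) AS database_name,
           file_id, page_id, event_type, error_count, last_update_date
    FROM msdb.dbo.suspect_pages
    ORDER BY last_update_date DESC;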

2. If it’s just one database, send an email to your manager and your team. Copy/paste this in:

SQL Server (INSERT NAME HERE) just reported corruption in one of our databases. This is really bad – we may be facing data loss, and in order to find out, I’m going to have to drop everything and do emergency troubleshooting. I’ll let you know more in half an hour.

In this step – and the next couple – I’ve got you sending emails, and people are likely to come running in asking for more details. Be polite but firm – you need to do 30 minutes of homework before you can give answers.

3. If you’re using a SAN, alert the storage team. If your data and log files reside on shared storage, there may be a more widespread issue. Multiple databases or servers may be affected if the shared storage is having problems. Copy/paste this into an email:

SQL Server (INSERT NAME HERE) just reported corruption in a database that lives on the SAN. I don’t know yet whether this is due to a storage issue or a SQL Server bug. I’m dropping everything to do emergency troubleshooting, and I’ll let you know more in half an hour, but if you hear anything from other server admins about SAN issues, please let me know.

4. Alert the application owners and stakeholders. Copy/paste this into an email:

SQL Server (INSERT NAME HERE) just reported database corruption. I’m dropping everything to do emergency troubleshooting to find out if we’ve lost data, and how we’ll recover it. I’ll let you know more in half an hour, but in the meantime, I would recommend taking the applications offline. If more data is added after this point, I may not be able to recover it.

Step -1: pour a glass of wine. That’s for your spouse, while you tackle the bottle.

This sounds paranoid, but as an example, here’s a corruption case I had recently: shortly after detecting corruption, the team realized they’d have to revert to a backup of the database from a few days ago. Rather than telling users about that possibility, they let the users keep adding data into the already-corrupt database while the DBAs did troubleshooting. Several days later, as the corruption got worse, even Microsoft couldn’t repair the corruption – and the affected tables went completely offline, permanently. If the users had been alerted earlier, they could have avoided even more data loss.

5. Turn off your backup-delete jobs: you’re gonna need ’em. If you have any kind of job that automatically deletes backups older than a certain number of days, turn off that job. Depending on the type of corruption, you may have to restore a clean full backup from before the corruption occurred, plus all of the transaction log backups since.

6. Review the history of your CHECKDB and backup jobs. Fill out these answers:

  • When was the most recent clean CHECKDB? (Meaning, when did CHECKDB run against this database without errors?) ________
  • Do we have a full backup available from prior to that date? (For example, if CHECKDB said this database was okay on January 15th at 10PM, what’s the latest backup we have BEFORE that date, like January 14th at 11PM.) ________
  • Do we have all of the transaction log backups since that date, yes/no? ________

If you can pass all three of these, great news! You’re going to have an alternate recovery path. If any of these backups are not online (like if they’re on a slow virtual tape library, or have been sent offsite), get one of your Windows admins to start retrieving these backups and putting them on a very fast network share as quickly as possible. You may need to restore all of these backups in a subsequent step.

7. If you have the full CHECKDB output, look for an easy fix. DO NOT RUN CHECKDB in this step – that can be a long process, and before you go down that road, I need you to do other stuff first. I’m just saying that if you do happen to have the CHECKDB output that shows the corruption, look at the index IDs involved. If the only corruption involves an index with an ID of 2 or higher, that means it’s only corruption in a nonclustered index, and you’ll be able to repair that relatively quickly (there’s a sketch of the fix after this list) by:

  • Run sp_BlitzIndex® to list out the index definitions on the affected table:
    EXEC sp_BlitzIndex @DatabaseName = 'MyDB', @SchemaName = 'dbo', @TableName = 'MyTable'
  • Find the corrupt index by its ID
  • Copy out the CREATE INDEX statement from sp_BlitzIndex’s output
  • Drop the index
  • Recreate it again
  • To verify that no other corruption exists in the database, run:
    DBCC CHECKDB('MyDB') WITH NO_INFOMSGS, ALL_ERRORMSGS
  • If it comes back clean, you’re good, and you can email the teams that things are OK, but jump to the Aftermath section at the end of this post
  • If it doesn’t come back clean, keep going
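
Here’s that sketch: a hypothetical example where the corruption is confined to one nonclustered index, whose definition you scripted out of sp_BlitzIndex’s output.

    -- Names are hypothetical; use the exact definition sp_BlitzIndex gave you
    DROP INDEX IX_MyTable_SomeColumn ON dbo.MyTable;

    CREATE NONCLUSTERED INDEX IX_MyTable_SomeColumn
        ON dbo.MyTable (SomeColumn);   -- rebuilt from the healthy clustered index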

8. Fork your recovery efforts into multiple teams. Grab other members of your team and assign them tasks that will be done in parallel:

  • Team 1: Restore clean backups to another server – this team grabs a different SQL Server of the same version as production. For example, if production is a SQL 2012 box, grab a development server that’s also SQL 2012. Restore the full and log backups from the dates specified in step 6. After it finishes, run DBCC CHECKDB to make sure there’s no corruption in this copy. This may take a long time, but if it works, you’ve got an excellent plan B.
  • Team 2: Open a support call with Microsoft. If you don’t have a support agreement with Microsoft, grab a credit card and call professional support. It’s about $500, and they work the problem with you until it’s done. Don’t be afraid to say, “We’re not making progress fast enough – I’d like to escalate this call to the next support level.” (But at the same time, you’d better be doing what they tell you to do.)
  • Team 3: Run CHECKDB if you don’t have the output. If the corruption was detected by an end user’s query or a monitoring system alert, run this, save the output to a text file, and then analyze the output with Gail Shaw’s corruption recovery instructions. Depending on the database’s size, this can take hours:
    DBCC CHECKDB('MyDB') WITH NO_INFOMSGS, ALL_ERRORMSGS
  • Team 4: If you don’t have a clean backup, restore the most recent backups to yet another server. If you’re going to try your own attempts at corruption repair, you’re going to want another copy of the corrupt database on a different server where you can test first before trying in production. As soon as that restore finishes, if the other teams aren’t ready to try their corruption repair commands yet, create a new database on this same server, and start copying as many tables as you can from the corrupt database into the new one. You want to be able to salvage as many tables as you can.
  • Team 5: Contact a data recovery company. If things are looking bad, and management wants another option, contact Kroll Ontrack for a quote for database recovery. The pricing isn’t public, and it will likely vary based on the complexity of the situation.

9. Send out a status update as work starts. As soon as the above team task assignments are done – not when the work stops, but when it starts – it’s time to communicate your status to your manager, team members, and the application stakeholders.

Your update email will be based on what you’ve learned so far, but here’s an example:

Update on the corruption issue: we have identified that the corruption is confined to the SalesSite database, and it occurred at some point after 2016/05/04. We have split up our recovery efforts into three teams. Based on a quick assessment, we believe Team 1 will be the first to finish, and we hope to have a clean copy of the database up on DevSQL2012. If we have to fail over to that, we’ll have a one-hour outage. We’ll update you in another half-hour as we know more.

10. Keep working the plan. Don’t panic – let each of the teams do their thing independently. Keep doing check-ins across teams every 30 minutes, see which team is the closest to a solution the business is comfortable with, and keep the end users informed.

The Aftermath: Avoiding Corruption Next Time

As soon as practical, get on the most current service pack and cumulative update for your SQL Server version. You may have hit a bug like the notorious online index rebuild bug or the NOLOCK UPDATE bug, and patching will reduce the chances that you hit that issue again.

If you can’t rule out a SQL Server bug as the cause of the corruption, work with your storage teams to make sure your storage gear is patched up to its most recent firmware and driver versions. Check to see if there were any storage errors over the last several weeks – not just in the Windows event logs, but in the storage gear’s logs as well.

Corrupted system tables may require more drastic measures

Once you’ve hit corruption on a server, consider taking a few proactive measures:

  • Run CHECKDB more often, like every night (which sounds crazy – until you hit a corruption scenario)
  • Keep your backups for a longer length of time – if you’re only running CHECKDB every week, you need at least a week’s worth of full and transaction log backups so that you can recover via restores
  • Run transaction log backups more frequently, like every minute
  • Put your RPO and RTO in writing – the business always assumes databases can’t lose data, and you assume that the business understands technology breaks sometimes. Get everybody on the same page.
  • Based on your RPO and RTO, revisit how many databases you have on each server. If your hardware isn’t able to keep up with backups and corruption checks, it’s time to up your horsepower or switch to snapshot backups.

Erik says: Brent is spot on with how to react to corruption scenarios. I’m all about not being past the point of no return in the first place. Whether it’s taking good backups, running CHECKDB regularly, or setting up alerts for errors 823, 824, and 825, there’s a lot you can do before disaster strikes. sp_Blitz® can help you find all sorts of dangers and inadequacies with your server setup. Run it on all your servers. You may be surprised what you find.

Tara says: I’ve encountered corruption 3 times in my career. The last event was the most memorable, and I don’t mean that as a good thing. If we had set up alerts for the 823, 824, and 825 errors as Erik mentioned, we would have been notified of the corruption sooner and had less data loss. We implemented the alerts through SCOM after the event, but we should have also created SQL Server Agent alerts for those errors, as SCOM wasn’t always reliable (your mileage may vary). Reacting to corruption quickly is very important.
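
If you want the Agent alerts Erik and Tara mention, it’s a couple of system stored procedure calls per error number. A minimal sketch – the 'DBA Team' operator is hypothetical, and you’d repeat this for 824 and 825:

    EXEC msdb.dbo.sp_add_alert
         @name = N'Error 823 - Hard I/O Error',
         @message_id = 823,
         @severity = 0,
         @include_event_description_in = 1;

    EXEC msdb.dbo.sp_add_notification
         @alert_name = N'Error 823 - Hard I/O Error',
         @operator_name = N'DBA Team',
         @notification_method = 1;   -- 1 = email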

Brent Ozar
I make Microsoft SQL Server faster and more reliable. I love teaching, travel, and laughing.

I’m mostly a figurehead here at Brent Ozar Unlimited. My true skills are menu advice, interpretive dance, and reading Wikipedia.

You can do a lot with Extended Events

I’m really looking forward to the stuff in 2016 becoming mainstream in 2020 or so. Raise your hand if you’re using SQL Server 2014 in production. Raise your hand if your vendor supports SQL Server 2014.

Okay then.

When you open up the New Session dialog to create an event session, the default screen has a nearly overwhelming amount of stuff in it. It’s good that you can search for things, but the search is kind of iffy. Not all words hit, even if you spell them right.

Where do you even begin?

Assuming things

You’d think that with Trace being deprecated and XE being the new hotness on the block, everything would be up front to help you find what you need, expose some cool functionality, and ideally get you started trying to parse the Codd-forsaken hunks of XML that session data is stored in. But no. You gotta go hunting for that, too.

Ugh.

What’s in there?

Everything. Literally everything. Every weird and awesome thing you can imagine. Seriously, go look. If I started writing a list, I’d just list everything in there.
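
If you’d rather skip the dialog entirely, a session is only a few lines of T-SQL. A minimal sketch (session name and file path hypothetical) that traces statements running longer than a second to a file, then reads back the XML you get to go hunting through:

    CREATE EVENT SESSION long_queries ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (WHERE (duration > 1000000))   -- duration is in microseconds
    ADD TARGET package0.event_file (SET filename = N'C:\Temp\long_queries.xel')
    WITH (STARTUP_STATE = OFF);

    ALTER EVENT SESSION long_queries ON SERVER STATE = START;

    -- And the Codd-forsaken XML, as promised:
    SELECT CAST(event_data AS xml) AS event_xml
    FROM sys.fn_xe_file_target_read_file(N'C:\Temp\long_queries*.xel', NULL, NULL, NULL);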

Thanks for reading!

Erik Darling
Restoring tempdb since GETDATE()