[Video] Office Hours 2016 2016/06/08 (With Transcriptions)

This week, Angie, Erik, Doug, Jessica, and Richie discuss DB migration, rebuilding large indexes, recommendation for SQL dev ops tools, best practices for disabling SA accounts, compression, and more!

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Office Hours Webcast – 2016-06-08

Jessica Connors: Question from Justin. He always asks us something. Justin says, “Is it advisable to revoke the public role’s ability to query sys.syslogins, sys.databases, and/or sys.configurations in master?”

Erik Darling: Advisable for what? I’ve never done it. I never cared that much. But I’m not like a big security guy. Any other big security guys want to talk about it…?

Doug Lane: Yeah, I’ve never done anything with public’s role and I’ve never seen it be a problem, but again, we’re not security experts.

Erik Darling: Again, we always recommend that when people ask sort of offhand security questions, Denny Cherry’s book Securing SQL Server is probably the go-to thing to read to figure out if what you’re doing is good or bad.

Jessica Connors: Yeah, Justin says that they got audited and [inaudible 00:00:47].

Erik Darling: What kind of audit was it that brought those up? I’d be curious.


Two Servers, One Load Test

Jessica Connors: Let’s move on to a question from Claudio. He says, “I would like to load test a new SQL Server instance with real production data. Is there anything we could put between the clients and two SQL Servers that will intercept the queries, send them to both SQL Servers, and return the response from only one SQL Server?”

Erik Darling: Yes, and I also have a magic spell that turns rats into kittens. No. That’s a bit much and a bit specific. You’re going to have to come up with something else. If you want to get really crazy, you’re going to have to look at Distributed Replay and come back in three years when you finish reading the documentation.


How do I configure multi-subnet AG listeners?

Jessica Connors: Okay. Let’s see here. This is a long one from Richard. Let’s tackle this one. “I will be adding a remote DR replica, non-readable, to an existing local availability group on a multi-subnet cluster to be able to use the listener at the DR site. I know a remote site IP address will be added to the listener. Is there anything else that has to be configured in the availability group or cluster besides DNS and firewall rules?”

Erik Darling: Brent?

Doug Lane: Yeah.

Jessica Connors: Where are you, Brent?

Erik Darling: I don’t know actually. I would be interested so I want you to try it out and email me if you hit any errors because I would be fascinated.

[Angie Rudduck enters webcast]

Jessica Connors: Oh, hi.

Doug Lane: Oh, we heard Angie before we saw her.

Angie Rudduck: Thought I had my mute on.

Doug Lane: As for the AG mystery, we’re going to leave that one unsolved.

Jessica Connors: Unsolved mysteries.


How should I configure database maintenance tasks?

Jessica Connors: Question from David. He says, “For routine tasks, index maintenance, backup, etcetera, is it preferred to use agent jobs or maintenance plans? It seems to be the DBA preference. Any reasons to lean one way or the other?”
Erik Darling: Ola Hallengren. Angie, tell us about Ola Hallengren.

Angie Rudduck: Ola Hallengren is amazing. I tell every single client about Ola Hallengren. I used it at my last place, in production across every server. You can do all backups, fulls, diffs, logs. You can do it separately for your user databases versus your system databases. You get your CHECKDBs in there, user versus system databases. You even get IndexOptimize, and even better, Brent, aka Erik, has a really good blog post about how you can use it to just do update stats, which is a great follow-up to his post about why you shouldn’t do index maintenance anyway, right? Just update stats. I love Ola. I’m working on a mini-deck to pitch all of his stuff in one instead of just the indexing.
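For reference, a stats-only run of Ola’s IndexOptimize looks something like this sketch, using his documented parameters; the values shown are illustrative, not a recommendation for your environment:

```sql
-- Skip rebuilds/reorganizes entirely and just update statistics,
-- per the "update stats, skip index maintenance" approach above.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,      -- NULL = take no action at this level
    @FragmentationMedium = NULL,
    @FragmentationHigh = NULL,
    @UpdateStatistics = 'ALL',     -- both index and column statistics
    @OnlyModifiedStatistics = 'Y'; -- skip stats with no row changes
```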

Erik Darling: Nice.

Angie Rudduck: But I’m too busy with clients.

Doug Lane: Plus, he’s a Sagittarius.

Angie Rudduck: Gemini.

Erik Darling: I’ve heard rumors that I’m a Scorpio but I’ve never had that confirmed.

Jessica Connors: Use your Google machine.

Doug Lane: [Imitating Sean Connery] Do you expect me to talk, Scorpio?



How do I set the default port for the DAC?

Jessica Connors: Let’s take one from Ben. He says, “Oh, SQL stuff. Here’s one. In old SQL, we had to set a registry key to set a static remote DAC port. Is there a better way in SQL 2012, 2014, 2016? What’s the registry key?”

Erik Darling: A static remote dedicated admin connection port?

Jessica Connors: Mm-hmm.

Erik Darling: Weird. No, I don’t know, I’ve never done that.

Doug Lane: Yeah, me neither.

Angie Rudduck: What is old SQL? Like what version is old SQL?


Angie Rudduck: 2005?

Doug Lane: 2005 he says.

Erik Darling: Hmm, I don’t believe that’s changed much since then.

Richie Rump: Yeah, it sounds like a blog post you need to write, Erik.

Angie Rudduck: We’ve got something on the site about remote DAC because…

Doug Lane: That doesn’t say anything about the port though.

Angie Rudduck: No, but it’s pretty detailed, isn’t it? I don’t know maybe go check that out, Ben, and go from there. I think it’s just go/dac. I don’t know. I’m making up things now.

Erik Darling: Brentozar.com/go/dac, D-A-C.
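For context, remote DAC itself is turned on with sp_configure; the port is assigned at startup (1434 by default for a default instance) and written to the error log:

```sql
-- Allow the dedicated admin connection from remote machines.
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;
```

Pinning that port to a static value is the registry-level piece the question asks about, which none of us have a tested answer for here.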

Jessica Connors: What’s the oldest version of SQL you guys have worked on?

Erik Darling: ’05.

Angie Rudduck: 2000.

Doug Lane: In Critical Care, ’05.

Angie Rudduck: Oh, yeah.

Richie Rump: No 6.5 people? No?

Angie Rudduck: Tara is not here.

Jessica Connors: Yeah, she’d probably have a story about the oldest version she’s used. She’s got the best stories.

Erik Darling: “It was on a floppy disk…”


Doug Lane: I worked on 7 once upon a time. I didn’t actually like do real work on 7, it was just, believe it or not, writing stored procedures in the GUI window.

Angie Rudduck: Query explorer or whatever it is?

Doug Lane: No, it was like the properties of the—it was crazy when I think back on it. There was like no validation of any kind except the little parse button. This was back when Query Analyzer and Enterprise Manager were separate and I was doing it in Enterprise Manager.

Angie Rudduck: We had a 2000 box at my last place and I knew nothing about 2000. I tried logging in there and I was like, “Wait, where is Management Studio?” That was really hard to try to figure it out. The management administrative part is really scary in 2000 and I was on the server directly. It was like already a precarious server about to tip over. So, scary.


What’s the best way to rebuild a 2-billion-row table?

Jessica Connors: Question from Joe. He says, “What is the best way to rebuild a very large index without taking an outage or filling the log? Rebuilding after a two-billion-record delete.”

Doug Lane: Oh, are you sure you need to delete two billion rows from a table?

Erik Darling: Maybe he was archiving.

Doug Lane: Yeah, I don’t know if you want to flag them as deleted and then move them out some other time or what, but, wow, that’s a lot of log activity. You can do minimal logging if it’s a table you don’t really care about being fully logged, but there are disadvantages to that too.

Erik Darling: What I would probably do, I mean, if you’re not on Enterprise you’re kind of out of luck either way, right? There are no online index operations there. You can help with the log backup stuff if you put the database into bulk-logged recovery and continue taking log backups, but at that point, if anything else happens that you need to be able to recover after it starts bulk logging something, you’re going to lose all that information too. So bulk-logged does have its downsides. It’s not a magic bullet. So depending on your situation, you might be in a little bit of a pickle. A better bet: if you’re deleting two billion records, then depending on how many records are left over, you might just want to dump the stuff that you’re not deleting into another table and then do an sp_rename and switch things around.

Doug Lane: You can actually just drop the index and recreate it. Sometimes that goes a lot faster.
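Erik’s sp_rename approach, as a rough sketch; the table and column names are hypothetical, and note that constraints, permissions, and foreign keys don’t come along for free in the swap:

```sql
-- Copy only the survivors into a fresh table.
SELECT *
INTO dbo.BigTable_New
FROM dbo.BigTable
WHERE IsArchived = 0;   -- hypothetical "rows we keep" predicate

-- Build indexes on dbo.BigTable_New here, then swap names.
BEGIN TRAN;
    EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
    EXEC sp_rename 'dbo.BigTable_New', 'BigTable';
COMMIT;
```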


Are there any problems with SQL role triggers?

Jessica Connors: Question from J.H. He says, “Anything to be aware of or downsides of setting up SQL role triggers, mainly sysadmin role changes?”

Erik Darling: All these security questions.

Doug Lane: Yeah.

Erik Darling: We bill ourselves as not security people.

Doug Lane: Like the one before, I think we’re going to punt on that.

Jessica Connors: Thomas Cline says, “No security questions.”

Angie Rudduck: Too bad the slides aren’t up.

Jessica Connors: Yeah.

Erik Darling: “For security questions…”

Angie Rudduck: “Please call…”

Erik Darling: Yeah, there we go.

Angie Rudduck: I’ll do them because it usually works for me.

Erik Darling: Attendees… staff… Angie. I’ll just mute you, just kidding. There we go. You are presenting.


What are the HA and DR options with Azure VMs?

Jessica Connors: All right, who wants to answer some Azure questions?

Erik Darling: Nope.


Jessica Connors: Does anybody here know the HA and DR options with SQL 2012 Standard in Azure VMs?

Doug Lane: Oh, no. Not me.

Erik Darling: Using a VM? If you’re just using the VMs, I assume it’s the same as are available with anything else. It’s only if you use the managed databases that you get something else but I think it’s mirroring either way. I know Amazon RDS uses mirroring.

Richie Rump: Yeah, I think they have like three copies and if one goes down it automatically fails over to the other two or something like that. Don’t quote me.

Jessica Connors: Okay, we’re all being quoted. We’re actually all being transcribed. We’re all being recorded. We’re all being watched.

Erik Darling: Really?


Is there a better solution for replication than linked servers?

Jessica Connors: Question from Cynthia. She says, “My developers have a product that uses linked servers for parameter table replication. I’ve read that linked servers aren’t the greatest. Is there another way to do this?”

Doug Lane: Okay, that’s actually kind of a two-part question because you’ve heard that linked servers aren’t the greatest. You’re right. So with SQL Server 2012 SP1 and later, you don’t have to blast a huge security hole in order to get statistics back from the remote side in linked servers. It used to be that you had to have outrageous permissions like ddl admin or sysadmin in order to reach across, get a good estimate, when it then builds the query plan on the local side. That’s not the case anymore. The problem that you can still run into though is that where clauses can be evaluated on the local side. Meaning, if you do a where on a remote table what can happen is SQL Server will bring the entire contents of that remote table over and then evaluate the where clause locally. So you’re talking about a huge amount of network traffic potentially. That’s what can go wrong with them. The other question, “Is there a better way?” That kind of depends on what flexibility the app gives you because you say that this is a product. So I don’t know if this is something that you have the ability to change or not but if you’re talking about replicating from one side to the other, there’s any number of ways to move data from A to B.

Jessica Connors: And why do linked servers suck so bad?

Doug Lane: I just explained that.

Jessica Connors: Oh, did you? I didn’t hear you say why they suck so bad, sorry.

Doug Lane: Because you can end up with really bad plans either because permissions don’t allow good statistics or you end up pulling everything across the network just to filter it down once you’ve got it on the other side.
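To make the filtering problem concrete, here’s a hedged sketch; the linked server, database, and table names are made up:

```sql
-- Four-part name: the WHERE clause may be evaluated locally,
-- after the whole remote table has crossed the network.
SELECT OrderID, OrderTotal
FROM LinkedSrv.SalesDb.dbo.Orders
WHERE OrderDate >= '20160101';

-- OPENQUERY ships the entire statement to the remote server,
-- so the filter runs there and only matching rows come back.
SELECT *
FROM OPENQUERY(LinkedSrv,
    'SELECT OrderID, OrderTotal
     FROM SalesDb.dbo.Orders
     WHERE OrderDate >= ''20160101''');
```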


Are there any good devops tools for SQL Server?

Jessica Connors: Question from Joshua. This might be one for Richie. “Do you have any recommendations for Microsoft SQL dev ops tools?”

Richie Rump: There’s not a ton. I guess Opserver, from Stack Overflow, would be one of them, but I don’t know of out-of-the-box ways to do that kind of stuff. I know when I was consulting with one firm, they had built their own devops tools. I think they had Splunk, and they just threw stuff out from SQL Server logs and then did a whole bunch of other querying to put dashboards up so they could do monitoring amongst the team and do all that other stuff. I think Opserver does a lot of that for you, but it’s a lot of configuration to get it up and running. I’d say test it out, try it out, and see if that works for you, but I’m not aware of anything you could buy for these kind of ops-y things. I don’t know, what do you think, guys?

Erik Darling: I agree with you.

Doug Lane: I don’t live in the dev ops world.

Jessica Connors: I agree with you, Richie.

Angie Rudduck: Yeah, whatever the developer says.

Jessica Connors: What he said.


Should we disable the SA account and set the DB owner to something else?

Jessica Connors: Question from Curtis. He says, “I’m looking for clarification on SA usage. sp_Blitz [inaudible 00:12:03] to having DB owner set to SA, not a user account. But what about the best practice of disabling SA? Should DB owner be set to a surrogate SA account?”

Erik Darling: Nope. It’s not really catastrophic, but it’s something you should be aware of, because usually what happens is someone will come in and restore a database, usually from an older server to the new one. They’ll be logged in with their user account, so they’ll be the owner of that database. The owner of the database has elevated privileges in that database, equal to SA, which you may not want always and forever. That’s why SA should be the owner, even if it’s disabled, and the user account shouldn’t be. Even if the user is a sysadmin, you just don’t want them to also be the owner of a database.
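Checking and fixing ownership is quick; the database name below is a placeholder:

```sql
-- Who owns what?
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases;

-- Hand ownership back to sa. This works even if sa is disabled.
ALTER AUTHORIZATION ON DATABASE::YourDatabase TO sa;
```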


How do I migrate databases in simple recovery?

Jessica Connors: Question from Monica M. “We are migrating and upgrading from SQL 2008 R2 to 2014. We use simple recovery, as it’s our reporting/analysis server rather than OLTP. Our IT department said after I copy/restore the databases to the new server, it will take them two weeks to go live. By this time, our DBs will obviously be out of sync. What simple method would be best to perform this move?”

Angie Rudduck: Every time we did server upgrades, we created a new VM and eventually renamed it to the old server name. What we did was take a full backup the day before, hopefully, but if you have to do two weeks, take the full backup when you know, and then take a differential right when you’re ready to make the cutover. So let’s say at 6:00 p.m. the maintenance window opens and I’m allowed to take the database offline: I put it in single-user mode, took a differential, and applied that to the new server. Then I took it out of single-user mode on the new server, and then we did all of our extra work. It’s not perfect for two weeks of data change, so if you can keep restoring fresh fulls until the night before, that will give you a little bit better changeover.
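Angie’s full-plus-differential flow, as a rough sketch; the database name and paths are hypothetical:

```sql
-- Ahead of time, on the new server: restore the full, leaving the
-- database able to accept a differential later.
RESTORE DATABASE ReportDB
FROM DISK = N'\\share\ReportDB_full.bak'
WITH NORECOVERY;

-- At cutover, on the old server: lock out users and take a diff.
ALTER DATABASE ReportDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
BACKUP DATABASE ReportDB
TO DISK = N'\\share\ReportDB_diff.bak'
WITH DIFFERENTIAL;

-- On the new server: apply the diff and bring the database online.
RESTORE DATABASE ReportDB
FROM DISK = N'\\share\ReportDB_diff.bak'
WITH RECOVERY;
```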


Jessica Connors: Trying to find some questions here. You guys are real chatty today.

Erik Darling: Everyone is all blah, blah, blah, problems, blah, blah, blah.

Jessica Connors: “Here is my error…” They copy and paste it. I’m never reading those.

Erik Darling: “Here’s the memory dump I had.”

Angie Rudduck: Jessica likes to be able to read the questions and she doesn’t read SQL, so nobody reads computer. Nobody really reads computer, including us.

Erik Darling: “Yeah, I found this weird XML…”

Jessica Connors: Richie reads computer.

Angie Rudduck: That’s true, Richie reads computer.

Richie Rump: I was reading XML before I got on.

Angie Rudduck: That’s disturbing.

Erik Darling: Naughty boy.


How do I shrink a 1.2TB database?

Jessica Connors: Here’s a question from Ben. He says, “I have a large 1.2 terabyte [inaudible 00:14:51] queuing database. I added a new drive and a new file. DBCC SHRINKFILE does not seem to be working on the original file. It seems that the queuing application reuses space before it can be reclaimed. Any suggestions?”

Angie Rudduck: Don’t shrink.

Erik Darling: I don’t know what you’re trying to do. Are you trying to move the file to the new drive or what are you up to? I don’t think you’re being totally honest with us here.

Angie Rudduck: Yeah.

Jessica Connors: But you shouldn’t shrink, huh?

Doug Lane: Spread usage across drives, okay.

Angie Rudduck: Maybe put it on one drive, I don’t know? I guess that’s hard to do with such a large file size.

Jessica Connors: 1.2 terabytes.

Erik Darling: So you have your database and you bought a new drive. Did you put like files or file groups on the new drive? Did you do any of that stuff yet?

Angie Rudduck: He says he has to shrink because the original drive is maxed and he needs workspace. I think it’s just not creating—maybe he has to do what you’re saying, Erik, about creating an additional file group to be on the other drive.

Erik Darling: Right, so what you have to do is actually move stuff over to that other file. If you haven’t done it already, you have to pick some indexes, clustered or nonclustered, and start rebuilding them on the other filegroup.

Angie Rudduck: Then you’ll be able to clear out space to shrink your file.

Erik Darling: Hopefully.

Angie Rudduck: Maybe, yeah. Let us know next Wednesday.
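The add-a-filegroup-and-rebuild approach Erik describes looks roughly like this; the database, file, index, and path names are all hypothetical:

```sql
-- New filegroup and file on the new drive.
ALTER DATABASE QueueDB ADD FILEGROUP FG2;
ALTER DATABASE QueueDB
    ADD FILE (NAME = 'QueueDB_2', FILENAME = 'E:\Data\QueueDB_2.ndf')
    TO FILEGROUP FG2;

-- Rebuilding a clustered index onto FG2 moves the table's pages there,
-- freeing space in the original file so the shrink can reclaim it.
CREATE CLUSTERED INDEX CX_BigQueue
    ON dbo.BigQueue (QueueID)
    WITH (DROP_EXISTING = ON)
    ON FG2;
```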


Has anybody played with SQL Server 2016 yet?

Jessica Connors: Have we played with SQL 2016 yet?

Erik Darling: Oh, yeah.

Doug Lane: Yep.

Jessica Connors: No? Some of you?

Erik Darling: Yes.

Jessica Connors: Have you played around with the 2016 cardinality estimator and do you know if it works better than SQL 2014?

Erik Darling: It’s the same one as 2014.

Jessica Connors: Is it?

Doug Lane: So there’s the new and the old. Old is 2012 and earlier, and new is 2014 and later. There’s all kinds of other new stuff in 2016, but the cardinality estimator actually hasn’t been upgraded a second time.

Erik Darling: Yeah, Microsoft is actually approaching things a little bit differently where post 2014 with a new cardinality estimator, they’ll add optimizer fixes and improvements for a version but you won’t automatically be forced into using those. You’ll have to use trace flag 4199 to apply some of those. So even if you pop right into 2016, you may not see things immediately. You may have to trace flag your way into greatness and glory.
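Trying those optimizer hotfixes looks like this; the query itself is a placeholder:

```sql
-- Globally. In production this is usually set as the -T4199
-- startup parameter instead.
DBCC TRACEON (4199, -1);

-- Or scoped to a single query for testing.
SELECT TOP (10) *
FROM dbo.SomeTable
OPTION (QUERYTRACEON 4199);
```

On 2016 specifically, there is also a database-level switch for this (QUERY_OPTIMIZER_HOTFIXES under database scoped configurations), so check that as well before assuming the flag is doing anything.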


Are high IO waits on TempDB a problem?

Jessica Connors: Here’s a good question from Mandy. She says, “I’ve been on a SQL 2014 Standard cluster with tempdb stored on SSDs for several months. The last few days, we’ve been seeing a lot of alerts in Spotlight saying that we have high IO waits on those tempdb files. The IO waits are as high as 500 to 800 milliseconds. Is this a high value? I’m new to using SSDs with SQL Server and I admit that I just don’t know what high is in this case. Any thoughts?”

Doug Lane: It’s high but how frequent is it? Because if you’re getting an alert that like once a day that you’re hitting that threshold, it may not be something you need to worry about too much depending on what it is that’s hitting it. So what you want to do is look at your wait stats and look at those as a ratio of exactly how much wait has been accumulated versus hours of up time. If you’re seeing a lot of accumulated wait versus hours of up time, not only will you know there’s a problem but you’ll also be able to see what that particular wait type is and get more information about what’s causing it. Then you can put that together with what might be happening in tempdb and possibly come up with an explanation for what’s going on.

Erik Darling: Yeah. I’d also be curious if something changed that started using tempdb a whole lot more or if maybe you might be seeing some hardware degradation just after some time of use.
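One way to sanity-check those numbers is the file-stats DMV; note the counters are cumulative since startup, so sample twice and diff if you want a current picture:

```sql
-- Average stall per read and per write on tempdb's files.
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs;
```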


What should I do when my audit stops working?

Jessica Connors: Question from James. He says, “I’ve installed a SQL Server audit and noticed it stopped working. Is there any way to be alerted when a SQL Server audit stops or fails?”

Angie Rudduck: Is that the Redgate tool? Because I feel like Redgate had some auditing tool or encrypting tool that went out of support when I was at my last place and we had to change over, so I’m not sure what that is.

Doug Lane: If it throws a certain severity error then you can have SQL Server notify you of those kinds of things. But as far as like audit as a product, I’m not sure.
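Doug’s severity-based idea can be sketched with a SQL Agent alert; the operator name here is hypothetical, so create one first if you don’t have it:

```sql
-- Severity-based alerting, as Doug describes; pick the severities
-- you care about and attach an operator for notification.
EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 17 errors',
    @severity = 17;

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Severity 17 errors',
    @operator_name = N'DBA Team',   -- hypothetical operator
    @notification_method = 1;       -- 1 = email
```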


Will backup compression compress compressed indexes?

Jessica Connors: Then we’ll move on to J.H. He says, “When all the tables in a database are compressed with the page option, does compressing its backup gain more compression?”

Erik Darling: Yes.

Angie Rudduck: Compression squared.

Erik Darling: Compression times compression. Are you really compressing all your tables to get smaller backups?
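Enabling backup compression is a single option on the backup statement; the names below are placeholders, and note that data already page-compressed typically yields a smaller additional gain:

```sql
BACKUP DATABASE YourDB
TO DISK = N'D:\Backups\YourDB.bak'
WITH COMPRESSION, CHECKSUM;
```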

Jessica Connors: Is that really bad?

Erik Darling: No. It’s just kind of a funny way to approach it.

Doug Lane: I don’t know if that’s the purpose but…

Angie Rudduck: I think he has no drive space, tiny, tiny, tiny SAN.

Erik Darling: Buy a new thumb drive.

Doug Lane: Talk to Ben because he apparently has the budget to have new large drives.


Are there performance issues with SSMS 2016?

Jessica Connors: We have somebody in here that’s playing with SQL 2016. He says, this is from Michael, “SQL Server Management Studio 2016 sometimes goes into not responding status when using the Object Explorer window, such as expanding the list of database tables. These freezes last around 20 seconds. Are there any known performance issues with SSMS 2016?”

Doug Lane: I found one. I was trying to do a demo on parameter sniffing where I return ten million rows of a single int-type column, and maybe about half the time SSMS would stop working and it would crash and force a restart. So I think SSMS 2016, at least the RTM release, is a little bit flaky.

Jessica Connors: For now.

Erik Darling: Yeah, it might depend on just how many tables you’re trying to expand too. I’ve been using it for a bit and I haven’t run into that particular problem with just expanding Object Explorer stuff. So how many tables are you trying to bring back would be my question.

Angie Rudduck: I was just about to say, we had that question last week or the week before about SSMS crashing when they tried to…

Erik Darling: Oh, that’s right.

Angie Rudduck: Remember? They were trying to expand their two million objects.

Erik Darling: Yeah, that’s not going to work out well.

Angie Rudduck: So maybe this is the same person, different question.

Doug Lane: I was going to say I think it might just be a little…

Angie Rudduck: Yeah. It’s brand new, what do you expect? It’s a week old. It’s going to be flaky.

Richie Rump: Something to work when you release it?

Angie Rudduck: No, come on.

Richie Rump: I’m just saying, it’s a crazy idea, I know. I have all these crazy ideas but…

Angie Rudduck: Unrealistic expectations, Richie.

Erik Darling: That would require testing.

Jessica Connors: Richie has never released anything with bugs.

Angie Rudduck: Who needs to test things? I did have a client recently ask me what test meant when I was talking about test environment.

Jessica Connors: I know, what?

Richie Rump: What’s this test you speak of?

Erik Darling: Just for the record, Richie wipes cooties on everything he releases.

Angie Rudduck: Kiddie cooties.

Doug Lane: All right, looks like we’ve got two minutes left. Lightning round, huh?

[Group speaking at the same time]


What’s the best SQL Server hardware you’ve ever worked on? And the worst?

Jessica Connors: Question from Dennis. He wants to know, “Tell me the best SQL hardware environment that you have ever worked on.”

Doug Lane: I would say when we went down to Round Rock last year. I got to play with I think it was a 56-core server, that was pretty fun.

Erik Darling: Yeah, I think my best was 64 cores and 1.5 terabytes of RAM.

Richie Rump: Yeah, I had 32 cores and 2 terabytes of RAM.

Erik Darling: Nice.

Jessica Connors: What about the worst you’ve seen with clients?

Erik Darling: Ugh. Probably an availability group with 16 gigs of RAM across them. That was pretty bad. And it had like one dual-core processor. It was pretty bad, yeah. It was Richie’s laptop.

Angie Rudduck: Worse than Richie’s laptop.

Doug Lane: That sounds about like the worst I’ve seen is like dual core, 10 or 12 gigs of RAM.

Angie Rudduck: 500 gigs of data.

Erik Darling: I’ve had faster diaries than that.

Jessica Connors: All right, well, we’re out of time.

All: Bye.
