[Video] Office Hours 2016/08/24 (With Transcriptions)

This week, Richie and Erik discuss in-place instance upgrades, availability groups, the cloud, job security, business intelligence and user adoption, and much more!

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


Office Hours Webcast – 2016-08-24


Erik Darling: First one, I already said, take it to Stack Exchange because I have no idea what’s going on there. Grant has a question where he gives some context. Thankfully, finally, someone gives context. “Doing server instance upgrade from 2008 R2 to 2012.” Already no, already no. N-O. Probably P.E., nope, no, don’t do it. The thing at the bottom is that he found a bunch of check constraints that were untrusted and he’s asking if this is something that a senior, not a junior, DBA would be responsible for. If you want to be a senior DBA, you will stand up for the fact that in-place upgrades are idiotic. Don’t do them. Stop it. Fresh hardware. Because you have no back out plan, right? This is something that a senior DBA will let everyone know about. If you upgrade your instance in place and something goes wrong, you don’t have an instance anymore. If you build a new instance and you set up log shipping or mirroring or you just do a backup and restore during a maintenance window to get your application over there, you have a back out plan in case anything goes horribly wrong. Rule number one of being a senior DBA, always have a back out plan. Keep a pair of pants in your pants.
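Side note on Grant's actual question: the untrusted check constraints he found are easy to inventory from the catalog views. A minimal sketch (the table and constraint names in the comment at the bottom are placeholders, not anything from the question):

```sql
-- Find check and foreign key constraints SQL Server won't trust
-- (often left behind by bulk loads or WITH NOCHECK re-enables).
SELECT OBJECT_SCHEMA_NAME(parent_object_id) AS schema_name,
       OBJECT_NAME(parent_object_id)        AS table_name,
       name                                 AS constraint_name
FROM sys.check_constraints
WHERE is_not_trusted = 1
UNION ALL
SELECT OBJECT_SCHEMA_NAME(parent_object_id),
       OBJECT_NAME(parent_object_id),
       name
FROM sys.foreign_keys
WHERE is_not_trusted = 1;

-- To re-validate one (the double CHECK is intentional):
-- ALTER TABLE dbo.YourTable WITH CHECK CHECK CONSTRAINT YourConstraint;
```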

Richie Rump: Yeah, don’t be a developer. Have a backup plan.

Erik Darling: Even Richie uses GitHub, even Richie.

Richie Rump: I check in once a week, come on.

Erik Darling: It’s good enough.


Erik Darling: Next question, “Our tempdb is hosted on a SAN. How can I tell if it’s the bottleneck?” Oh boy, couple of different ways. One, you can check your wait stats and see if page latch stuff in tempdb is going crazy and you can see if tempdb is even a point of contention within SQL Server. Then you can always check the file stats DMV, sys.dm_io_virtual_file_stats, to see what kind of read and write lag you have on your data files and log files. So if you’re having trouble writing to the files, if you have latency writing to the files, or latency reading from those files, you can check that out and see if it is indeed a bottleneck there. Okay, that’s my answer on that. I should probably start typing “answered verbally.”
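For the file stats side of that answer, a quick sketch against the DMV Erik mentions, scoped to tempdb:

```sql
-- Rough read/write latency per tempdb file.
-- Counters are cumulative since startup, so sample twice and diff
-- the results if you want current behavior rather than history.
SELECT mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id;
```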


Erik Darling: Let’s see, let’s scroll on down. This question looks like I’m going to read it. “Currently we have common two-node cluster with SAN running SQL 2014 Enterprise instance for HA. Let’s call it S1. We will create another VM SQL 2104…” What year are you from, time traveler? “…instance S2 with its own drives and add it to S1. Can we create availability groups? If so…” Good god, man. You need to put that down. If these are your questions about availability groups and failover clusters, you need to put the availability group down. You’re going to have a very hard time managing that availability group and if anything bad happens with it, that availability group, you better pray to the Google gods that your answer lies somewhere readily available or else you are hosed. You should probably step back and use a technology that’s a bit more manageable by people who don’t need to ask availability group questions during a 30-minute webcast, you know, log shipping or mirroring, because you are not prepared for the availability group hell that you are going to face.

Richie Rump: So, Erik, tell me how you really feel.

Erik Darling: I feel like Microsoft hands out these shiny brochures and pamphlets, like retirement home style. Like, “Send your data to shady availability group villages.” And everyone is just like, “Cool. I’m just going to have availability groups. It’s going to be great.” Then they start using them and then you see like the string of down voted questions on Stack Exchange. Like, no, this is not a real question about computers. Stop it. Use something else, because they’re hard. The brochure makes it look like, yeah, you just set it up—because they’re dead easy to set up. I can set them up on my own. Like I can do that. It’s so easy I can do it. But then as soon as you start having to like troubleshoot them, you have a problem with them, and you want to figure out where to take backups and then you realize you have to take DBCC CHECKDB everywhere. And just like all the other stuff that comes along with them. People are just thoroughly unprepared for the amount of administrative work that goes into an availability group. It’s not just like you know like setting them up or getting them kicked off is hard, that’s pretty easy, but geez, as soon as stuff starts failing, that’s when people run to the hills.

Richie Rump: So the tl;dr on availability groups is it’s easy-ish to set up but there are a lot of gotchas once you start getting into it.

Erik Darling: Yeah. So I would really try to set things up in a development environment first, in a non-production environment, and see how that goes and see what issues arise and what issues you can create on your own. I mean, just sort of figure out if this is actually what you want to do, if this is the right solution for you. Because most people just dive right in saying, “Oh, we’ve got availability groups. We’re going to go for it.” But they don’t actually do any kicking of the tires or a test drive on the technology before they go into it. So mirroring is deprecated but still supported through 2016. Log shipping ain’t going nowheres. You have options other than availability groups that people without advanced SQL degrees can manage fairly easily, mostly just don’t try to use synchronous mode because that can be tough too.


Erik Darling: All right. J.H., the mysterious J.H., “Parameter sniffing. When using OPTION (RECOMPILE) at the end of a slow secondary ad hoc query with criteria, will it affect/mess up stored plan… anything else on first initial…” I don’t quite understand your question. You’re asking if you have a good plan in cache and then you run a query that gets a bad plan and use recompile, will that keep the bad plan out of cache? If you can clarify that, we might be able to go somewhere with it. I’m not exactly sure what you’re getting to there. Going on down the list…

Richie Rump: A lot of caching going on in that question.

Erik Darling: A lot of caching. Cache is king.


Erik Darling: Ted Locke says that Anchorage airport for 24 hours is much worse than Dallas. Doug Lane is not here to defend Alaska, so I am not going to let that fly. I’m sure the Anchorage airport is lovely.


Erik Darling: Tim Hughes, or just “Timmy Tim” as I call him on the streets, “Can you explain why a different query plan would get created for the same SQL statement and parameter values when using sp_executesql versus running the SQL with parameters in SSMS, even when doing DBCC FREEPROCCACHE before running each?” Well, golly and gosh, Tim. If you are using local variables in Management Studio, then you could totally end up with a weird execution plan because SQL estimates things for local variables a bit differently than it does for parameterized SQL. Parameterized SQL will of course use the statistics histogram and it will happily create a query plan based on the cardinality estimate from that. If you’re in SSMS and you’re using local variables, it will do some fuzzy math based on the rows in the table and the density vector: it will multiply those and it will say, “This is your cardinality estimate,” which is usually wrong, horribly wrong. So that’s one possible reason why. If you have more questions on that, I would head on over to Stack Exchange where you can paste in your code and show everyone what it’s doing and the execution plans involved. Gosh, if only there were an easy way to share execution plans.
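To make the two cases concrete, here's a sketch of both patterns; dbo.Users and its Reputation column are stand-ins borrowed from the Stack Overflow sample database, not anything from the question:

```sql
-- Parameterized: the optimizer sniffs @rep's value at compile time
-- and estimates rows from the statistics histogram.
EXEC sp_executesql
     N'SELECT DisplayName FROM dbo.Users WHERE Reputation = @rep;',
     N'@rep INT',
     @rep = 2;

-- Local variable: the value isn't known at compile time, so the
-- estimate comes from the density vector (rows * density), which
-- can be wildly off for skewed data.
DECLARE @rep INT = 2;
SELECT DisplayName FROM dbo.Users WHERE Reputation = @rep;
```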

Richie Rump: Oh, man. Wow.

Erik Darling: Wouldn’t that be a killer product?

Richie Rump: That would be interesting.

Erik Darling: An easy web-based way to do that. We should talk to Brent about that.

Richie Rump: I may have to start thinking about that.

Erik Darling: That’d be a good thing to think about.

Richie Rump: Would that be one of those Erland Sommarskog white papers? You could take a look for that question?

Erik Darling: Sure. There may be some more information on that somewhere on the internet. Let me actually pull up the…

Richie Rump: “Fast in SSMS…”

Erik Darling: So Mr. Erland Sommarskog has a great article called “Slow in the Application, Fast in SSMS?” which delves into some query plan mysteries about why things get weird when they do get weird.


Erik Darling: Eric Swiggum. I don’t think that’s a real name but I’m going to answer this question anyway. Eric Swiggum has a question. “When you go to Server Properties –> Connections, under Remote Server Connections, the checkbox ‘Allow remote connections to this server,’ what is that exactly? Can I uncheck it in a live production environment? What would happen?”


Dude, no. No. Don’t uncheck that in a live production environment. I’m pretty sure that would classify as an outage, as an unforced error. Do not uncheck that box. Do not pass go. Yikes.

Richie Rump: Well, I mean, let’s just put it this way. If we had no idea what that is, what would you do? If you had a checkbox, there’s no documentation on it, how would you go about approaching seeing how that worked?

Erik Darling: I would go on Google and I would put in quotes “allow remote connections to the server, management studio” or SQL Server, or something like that, add in some buzz words. I would see what comes up and I would see what Microsoft had to say because perhaps there is documentation about that that would lead me to not uncheck that box in a live production environment willy-nilly.

Richie Rump: I mean, the next step would be what? Go into a dev environment, unclicking it, running some tests up against it, see what happens.

Erik Darling: Perhaps try to connect to that server after unchecking it.

Richie Rump: Yes, because my guess is that they will not be able to connect.

Erik Darling: Yeah, so that’s a lot like turning off your server without turning off your server. I would avoid that like the plague.

Richie Rump: All the folks running on the local would be fine, right?

Erik Darling: Yeah, so anyone just RDPing into the server, opening up SSMS, hacking away at queries, they’re good to go. The funny thing is, oh god, so I bet if you uncheck that and you click OK, I bet if you were doing that remotely it would sever your connection and you wouldn’t be able to recheck it.

Richie Rump: Oh, sounds like we could have some fun in the lab.

Erik Darling: Maybe the next DBA Days we can do that. Just kidding. We’re not doing that. Don’t do it.

Richie Rump: Yeah, “Little do we know, we removed all the remote connection on Brent’s server. Let’s see what happens next.”

Erik Darling: Yeah, let’s watch Brent’s server stop responding.


Erik Darling: Thomas Strike asks, “Do you think working on a Security+ certification as an adjunct to my SQL knowledge is a better way to go or should I be more focused on Azure and AWS for adding some job security through knowledge?” Richie, what do you think about the cloud, or the “clued,” as they say in Canada?

Richie Rump: I like the cloud although at this very moment I’m very frustrated with the cloud because I’m running into some problems with one of my VPCs right now. But my personal opinion, there’s no such thing as job security. There’s only what you know. I don’t think there’s any such thing as bad knowledge but I think that there’s better knowledge. Certification, depending on where you want to go. The security certification is fine, it’s just not going to get you any jobs. It may get you in the door, but that in itself is not going to get you any new jobs. Maybe it will get you a bonus at work, but probably not. What you’ll get is the knowledge that you obtain from getting the certification. At the end of the day, you have a piece of paper and you’re probably out about $500 or so, whatever it takes to take the test. The cloud, in my opinion, is where everyone is going to go. It’s just so easy to get up and running, and running at scale. I was able to spin up a VPC yesterday, get a whole virtual private cloud and a whole network up and running, and I’m a developer with very little networking knowledge; it’s dead simple. The cloud is all about the little gotchas. What you can and cannot do inside of the cloud, that is where the key knowledge lies. I think you probably need a little bit of both but I think you need to sit down and say, “Where do I want my career to go?” and then map out a plan for that.

Erik Darling: Good points, good points. From my perspective, I think security is just too damn hard to mix in with SQL knowledge. Like, you can run out, grab Denny Cherry’s book Securing SQL Server and you can have a pretty good idea about security from a SQL Server perspective. But then if you want to run out and then learn the vast surface area, the vast swath of security stuff that can go on outside of that, that’s great knowledge to have, but if I wanted to focus on security, it would probably be in the situation where I am stepping away from SQL Server to focus solely on security. Because security is so big and so hard and so vast that piling that on top of SQL Server is just not intuitive to me. SQL Server plus Azure or AWS is much more intuitive to me because your SQL Server can run there in either one. I mean, not like literally there, but you can run SQL Server in either one. There are different tradeoffs and gotchas and backs and forths that you have using SQL Server there but that’s for you to read about later. So if you’re planning on continuing with SQL Server, I would also learn SQL Server in the cloud. If you’re planning on learning security because you’re stepping away from SQL Server, then by all means, go with security. But you know, security within SQL Server is probably its own subject which you can read about at length anywhere.

Richie Rump: No offense to security people but it seems like there are a lot of broken people out there that are security people. They start getting into it and it’s like, “Oh my gosh, there’s nothing that’s secure that’s out there.” It’s just this wide, eye-opening aspect to there’s nothing safe. Once it’s out there, it’s out there.

Erik Darling: Yep. I—actually, no—I’m not even going to tell this story. I don’t want to get anyone in trouble. But I’ve heard things from people who work in the security sector basically along the lines of, “All your passwords are useless.” So, there’s that.

Richie Rump: Yeah, but they’re thinking at a different level rather than us, “Oh, trying to hack in,” and blah blah blah. They’re thinking of all these different vectors to get in and whatnot. I think once you get into that, you start thinking at a different level than normal, run-of-the-mill IT person.

Erik Darling: Yeah. That’s very true.


Erik Darling: All right, let’s get away from this. Heather has a question, “I need to add a large-ish data file to a database that has frequent log backups but I’m getting message 3023, ‘Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized.’ Would it be better to add a small file and increase it? I don’t want to interrupt the log backups.” My first question to you is do you have instant file initialization enabled? Because that makes adding large data files pretty quick. Instant file initialization is of course when you have the Perform Volume Maintenance Tasks privilege assigned to your SQL Server service account, so that SQL can go and grab a whole bunch of disk space and say, “I’m going to use this eventually,” without zeroing it all out, because the slow part of adding data files is really the zeroing out of stuff. So if you have IFI enabled and you’re still getting that, I would probably request a maintenance window where I can pause log backups for a minute or two, maybe five minutes, add the data file at the size I want, and then resume log backups. Otherwise, your idea to add a small file and increase it is probably palatable but you know, if you can mix that all in, then great; if not, you’re going to need that maintenance window.
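For reference, a sketch of both steps: checking whether instant file initialization is on (the DMV column only exists on newer builds, roughly 2012 SP4 / 2016 SP1 and later), then adding the file at its full target size during the maintenance window. The database name, logical file name, path, and sizes are all placeholders:

```sql
-- Is instant file initialization enabled for the engine service?
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';

-- Add the data file at the size you actually want:
ALTER DATABASE YourDb
ADD FILE (
    NAME = YourDb_Data2,
    FILENAME = N'D:\Data\YourDb_Data2.ndf',
    SIZE = 50GB,
    FILEGROWTH = 1GB
);
```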

Richie Rump: [Inaudible 00:16:13]

Erik Darling: Yeah, that’s how I feel about that. Cool.


Erik Darling: Nate Johnson has a question about big data and business intelligence… Nothing. BI, nada. No thoughts. No. You can go to the BA Con. Just kidding, you can’t go to that.

Richie Rump: Not anymore. That’s not a thing. So what’s the question? The question is—because I’ve had thoughts on this stuff. “What do you guys recommend for a company that’s about five to ten years behind the curve just beginning to understand that we need business intelligence/data warehouse types of things to help drive business decisions.” You need to ask what questions you want answered. This isn’t a technology thing, this is a business thing. The business has to sit down and say, “What questions would we love answered?” Then the data team needs to go out and say, “How do we go out and answer these questions? Can we do it with our normal data or do we need something that’s better?” Whether that’s a star schema or a data warehouse or Tableau or whatever, whatever Microsoft BI stuff, Power BI, that’s irrelevant. The question is what questions does the business need answered so they could do XYZ things in the future. So it needs to start with the business. If it’s a data warehouse, a data initiative that’s not started with the business, it’s probably going to end up failing because no one’s going to adopt it.

Erik Darling: I agree.

Richie Rump: I answered a question.

Erik Darling: Good for you. That means you get paid with money this week. You do not get a bag of circus peanuts.

Richie Rump: Thank god. That Life money, nobody is taking that, The Game of Life money. I’m going to the grocery stores like, “But I got $100,000 right here” and they’re like, “No.”

Erik Darling: “I’ve got 1,400 Brent bucks, what can I buy with this?”

Richie Rump: “But Brent said…”


Erik Darling: I owe my soul to the company store.

Richie Rump: Yeah, I wondered why Erik Larsen’s art was on my cash dollars I got from Brent. I should have guessed that it wasn’t legal tender. That’s crazy.

Erik Darling: 40 Brent bucks.


Erik Darling: All right, Fizal says, “We are getting a dedicated integrity checking server. I want to make sure we have it set up as best as possible. Obviously, I want to maximize throughput. We’ll be spreading tempdb and data files across as many spindles as we can and the logs on a lower spindle count. Is there anything that we need to take into account before we install SQL Server?” We have a setup guide that is applicable to any SQL Server, well, except for like SharePoint ones, but who cares about those? Probably [inaudible 00:18:57 dynamics] do. But we have a setup guide. You can download the First Responder Kit, which has our setup guide and checklist in there, and that will walk you through all the stuff you need to do from soup to nuts setting up and validating a SQL Server install environment, plus all that good stuff. Just because it’s only for offloading DBCC CHECKDB doesn’t mean the normal rules don’t apply to it.


Richie Rump: Here’s a question by Raul. “Is it good practice to run the upgrade advisor before performing an actual SQL Server instance?” Well, I think we know what you meant, Raul.

Erik Darling: Let me show you my stroke face, dear. “Is it good practice to run the upgrade advisor before performing an actual…?” I’ve never actually run the upgrade advisor because I’m crazy like that but it probably is a good practice to do it if I were to…

Richie Rump: To do an upgrade.

Erik Darling: If I were to think about it, I would probably want to do that. But in fairness to me, most of the upgrades that I’ve done have been on SQL boxes that were hosting third-party vendor apps where they had already done the testing ahead of time. So I didn’t have to do that sort of thing. All I had to do was get the okay from the vendor that, “We have tested our product on SQL version XXX and you are good to upgrade on that.” So aside from some in-house code stuff like CLR or the things that I kick developers to work on doing, I’ve never actually had to run the upgrade advisor on a piece of in-house software. It’s always been third party stuff. If you’re doing in-house stuff that’s not vendor certified or anything like that, then yeah, I’d probably run it before moving anything over.


Richie Rump: All right. So what replication question do you want to answer now, Erik?

Erik Darling: None.

Richie Rump: All of them.

Erik Darling: None, ever.

Erik Darling: J.H., “Does performance improve by placing tempdb data files onto a different drive than its tempdb log files even within a virtual environment?” Probably not if it’s on a SAN. They’re all using the same sort of shared pool of disks anyway. I would not go crazy on that.


Erik Darling: All right, you pick a question. I’m sick of picking questions.

Richie Rump: Oh, “Small shop. Accidental DBA here.”

Erik Darling: [Inaudible 00:21:10]

Richie Rump: Yeah, let’s do it. You drew the short straw, congratulations. “If SQL Server is in Windows Authentication Mode, what happens if the domain controllers become unavailable for some reason? Would the cached credentials on my workstation still allow me to connect?”

Erik Darling: You know what, that has never happened to me and I don’t know.

Richie Rump: You stop them. You stop the … Erik.

Erik Darling: I’ve just never run into that. Let’s see, so what I can tell you from some experience is that if you have SQL agent jobs that are running under an active directory account and the active directory account is disabled, that job will usually stop working. So following some sort of logic, I think that if your AD controller goes down, that you may have trouble but the extent of that trouble, I’m not exactly sure on.

Richie Rump: Sounds like it’s a good thing to get into the cloud, set up a lab, domain controller, SQL Server, bring it down, see what happens.

Erik Darling: Or just even set up a Hyper-V environment on your desktop or laptop or whatever kind of top you have. I have one on mine.

Richie Rump: I have my head in the cloud today, man. I’m cloudy.

Erik Darling: You do. I’m not opposed to the cloud but there was a very funny post recently by a Microsoft PFE, that’s a Premier Field Engineer, and even he had limited Azure developer credits. Like he doesn’t even have free rein in Azure. Azure his tail off. So that’s weird too.

Richie Rump: Poor guy.

Erik Darling: Can you imagine, you work for Microsoft and your Azure access is limited.


Richie Rump: All right, last one. “What are your thoughts about being a multi-platform aka polyglot DBA?”

Erik Darling: Aka polyglot. You know, I like exploring other database platforms. I like looking at Oracle and Postgres stuff because I find their implementations of SQL interesting. As far as actually doing the production work on them, if you’re going to be a polyglot, I would want to be a limited polyglot. I would probably want to specialize in one platform but be able to back up DBAs from another platform in their tasks. So you know, learn as much as you can, learn as much as you want. Oracle and Postgres are my two preferred platforms if you want to learn something else. Helping other DBAs out in them until you get your skills up to par with production DBAs on a different platform, that’s totally up to you, but I’m pretty cool with it.

Richie Rump: Yeah, I think it’s a good idea. Not just the relational side either, but there’s a whole slew of databases that are becoming very popular in the NoSQL world, like Mongo, Cassandra, and those of the ilk. It’s probably a good idea to start getting an understanding on how those things work as well. So I heard, last night, I was at the ballpark because there was a tech night at the ballpark, because that’s what you do here in south Florida.

Erik Darling: Wow, check you out.

Richie Rump: So I was talking with someone from a large software organization. I’m not going to say who. But they’re moving their entire platform from SQL Server over to Mongo. I giggled inside because they have relational data and they’re going to a non-relational store. But I’ve used both and I understand the pros and cons of both. So if you could talk intelligently about both platforms, then you could kind of educate other people on the team and it’s not just a buzzword bingo type thing. So definitely, you definitely want to take a look and see what everyone else is doing. If you want to get deeper, go do what Erik has suggested. I think that’s a great way to go.

Erik Darling: All right. That looks like 12:46 to me so I think we are done here for today. Thank you everyone for showing up, asking questions. Sorry we didn’t get to everything. There’s always next week. There’s only two of us, so things tend to go a little bit slower.

Richie Rump: Yeah, and maybe we’ll have an Ozar sighting next week which would be pretty cool.

Erik Darling: Maybe. If he doesn’t get eaten by Smokey the Bear.

Richie Rump: That would be pretty horrible. We wouldn’t want that.

Erik Darling: No, we would not want that.

Richie Rump: The company will go on because Ernie lives.

Erik Darling: That’s true. The company is in Ernie’s name, so. We all just do what Ernie says anyway.

Richie Rump: Pretty much.

Erik Darling: All right. See you in chat.

Richie Rump: Peace.

[Video] Office Hours 2016/08/10 (With Transcriptions)

This week, Brent, Richie, Erik, and Tara get together in Round Rock, TX, for Dell DBA Days, to talk through your SQL questions and answers. They discuss what they are working on at Dell during the event, as well as Windows Kerberos and NTLM Authentication, Always On Availability Groups, Microsoft’s Document DB, automating restore databases to other servers, why they are wearing “fun” hats, and much, much more!

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


Office Hours Webcast – 2016-08-10


Brent Ozar: All right, it’s 11:01.

Erik Darling: Which means it’s 12:01.

Tara Kizer: It’s 9:01 right there.

Brent Ozar: So this week we are in Round Rock, Texas at Dell. We could turn this around and show you the other way, but it’s a whole rack of servers and then you hear all the sound humming from all the servers yelling at each other. So this week we’re actually in the same room together. Oh my god.

Erik Darling: Amazing.

Brent Ozar: Unbelievable.

Erik Darling: Reasonably priced Round Rock.

Brent Ozar: And we had a reasonably priced dinner last night.

Erik Darling: It was great.

Tara Kizer: With such great weather.

Brent Ozar: If you ever are looking for a place to come for vacation, Round Rock in August is not an excellent choice. Not a very good choice at all.

Richie Rump: Hades would be cooler.


What We’re Testing at Dell DBA Days

Brent Ozar: I thought you were going to say Haiti, which would also be cooler. Holy smokes. So yeah, it’s been 100 every day. So today, Richie, what are you working on today?

Richie Rump: I’m working on testing loads with bulk loads and C# and that would be today. Tomorrow I’ll be doing things like what if you’re using selects and doing raw inserts and concatenation? What about entity framework? What [inaudible 00:01:11] entity framework inserts? How long will it take to do 100,000 inserts with that? Will it even finish?

Brent Ozar: Now, I keep hearing you swearing, dropping all kinds of like… So what kinds of errors have you been running into?

Richie Rump: Oh, wow. I’ve been running into errors just getting data out and putting it into a format to consume programmatically. I have a gig file that I’ve created of just straight JSON from the Stack Overflow database. Reading that, getting it into the application, and then being able to put it back in has been very challenging with the size of data that we’re dealing with, right? I tried with one million, half a million. Now I’m down to 100,000, seeing how that works. I’m running out of memory. That’s been the biggest problem.

Brent Ozar: But to be fair, on your laptop.

Richie Rump: On my laptop. But a lot of this stuff would be running in some app server, depending on if it’s a client or whatever. So I don’t really want to run it on the big, beefy servers because that’s kind of cheating.

Brent Ozar: That’s fair.

Richie Rump: It’s kind of cheating. You’re not going to have app servers the size of the stuff that we’ve got.

Erik Darling: App servers do not get the same love.

Brent Ozar: In many cases SQL servers don’t even get the same love.

Erik Darling: That’s true. Sometimes they’re on the same hardware.

Brent Ozar: Tara, what are you testing so far today?

Tara Kizer: I’ve been working on backup performance, seeing if there’s any differences in backup times between 2014 and 2016, since they say 2016 “just runs faster.” Not noticing any performance gains with backups at least, not that we were expecting any, but it was a good test. Now, just testing all the different parameters that you can use to affect the backup performance and seeing what works best.
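The parameters Tara is talking about are the WITH options on BACKUP DATABASE. A hedged sketch (the file count, buffer count, and transfer size are starting points to test against your own hardware, not recommendations; the paths and database name are placeholders):

```sql
-- The knobs typically tuned for backup throughput: striping across
-- multiple files, bigger/more buffers, and compression.
BACKUP DATABASE YourDb
TO DISK = N'E:\Backup\YourDb_1.bak',
   DISK = N'E:\Backup\YourDb_2.bak',
   DISK = N'E:\Backup\YourDb_3.bak',
   DISK = N'E:\Backup\YourDb_4.bak'
WITH COMPRESSION,
     BUFFERCOUNT = 50,
     MAXTRANSFERSIZE = 4194304, -- 4 MB, the maximum
     STATS = 5;                 -- progress messages every 5 percent
```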

Brent Ozar: When I was a database administrator, I don’t think I ever touched any of those settings, except for the number of files. I played with the number of files but beyond that, I was like, “There’s a what?” Erik, how about you?

Erik Darling: I am testing a few different things. I’m testing some stuff from the 2016 “it just runs faster” line. The first thing I’m testing is DBCC CHECKDB which it just runs faster because it skips some stuff. But it seems to be going pretty well. I have it tested on 2016. I’m now running commensurate tests on 2014. I’m also testing a feature where 2016 has multiple log writers. So doing a bunch of single row inserts should theoretically go faster on 2016 but that test is running as we speak right now. The other thing that I’m planning on checking is—oh, gosh, what was it? The soft NUMA workload scaling. I’m going to try and put together something that checks on that and see if it actually is 30 percent faster with MAXDOP set to the number of cores in a socket.

Brent Ozar: It’s weird. So the way that they do soft NUMA on 2016, I hadn’t seen this. We’re dealing with these 72-core systems where, theoretically, each NUMA node really has 36 cores, but when we run sp_Blitz with @CheckServerInfo = 1, it will show you how SQL Server has divided it up into multiple NUMA nodes.

Erik Darling: Yeah, pretty crazy stuff.

Brent Ozar: You don’t have to do anything with it. It just happens automatically. As to whether or not it just runs faster, the jury is still out. Overnight, I ran a load test where I would add one index, run CHECKTABLE, add another index, run CHECKTABLE. I wanted to see if it scaled exactly linearly or approximately linearly, or if I hit any kind of hockey stick where, as I started to run into problems with the table being larger than memory, this thing would take a stratospheric amount of time. And because we have a relatively limited amount of time here with the boxes, what I do is run the tests and just dump the data into a database. I’m not even analyzing the outputs of it yet. In the coming weeks, we’ll start to digest and slice and dice these so that we can give you blog posts on the results, laid out in a nice visual fashion with graphs. The next thing I want to play around with is memory grants. With the sizes of data that we have here, it’s easy to write purposely constructed queries with bad memory grants, and then see if I can easily reproduce stuff like RESOURCE_SEMAPHORE_QUERY_COMPILE issues and see if I get different memory grant estimations back between the 2012 and 2016 cardinality estimators.

Erik Darling: The query compile ones, for me, have always been a lot harder to hit than just the straight RESOURCE_SEMAPHORE ones. Anyone can blow up memory and hit the plain RESOURCE_SEMAPHORE ones. Getting SQL Server to the point where it can’t even get the memory to figure out how to run a query is impressive.

Brent Ozar: Yes, and now it’s easier: with 72 cores, I can actually have enough queries running to really just drain this guy dry.

Richie Rump: You guys have been.

Brent Ozar: Yes.

Richie Rump: You guys have definitely been draining them.

Brent Ozar: I love this stuff. Let’s see here. So if you guys have questions, now is the chance. There’s only like 20 of you guys in here. Now is the chance to go put your questions in because we usually go through these chronologically. We tend to hit the ones that come in early first. If you add more questions later, you may get missed. You may not actually get the answer in. If you want, we’ll go ahead and get started just since we have folks in here already. We’ll go ahead and start going through these.


What’s the difference between Kerberos and NTLM for Windows authentication?

Brent Ozar: Nate says, “First off, thanks to you all guys for doing this. The videos and transcripts are a big help to me and the rest of the community. How much of a difference is there between using Kerberos and NTLM for Windows authentication?” Oh, you’re breaking up. I can’t…


So do any of you guys know anything about NTLM versus Kerberos? Because I am certainly not that guy.

Tara Kizer: Microsoft says Kerberos is the recommendation so we use Kerberos for now.

Erik Darling: All the applications I’ve ever dealt with use Kerberos too. There was never any, “Oh, you have to use NTLM for this one weird thing.” It was always, like Richie’s shirt: Kerby, Kerby, Kerby.

Richie Rump: Kirby Link.

Erik Darling: Yeah, whatever. I get it. I get it…

Brent Ozar: I have no idea what’s happening here. Graham says, “Kerberos is OS agnostic and more complicated than NTLM.”

Erik Darling: Thanks, Graham.

Tara Kizer: It’s definitely more complicated.
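One quick way to see which scheme a given connection actually negotiated is sys.dm_exec_connections:

```sql
-- Returns KERBEROS or NTLM for Windows logins (SQL for SQL logins).
SELECT session_id, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```

If every connection comes back NTLM despite Kerberos being set up, that usually points at missing or wrong SPNs.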


Should I use managed service accounts?

Brent Ozar: Graham says, “I want to use managed service accounts for the SQL Server service accounts. Did you use them?” Did you use MSAs?

Tara Kizer: Nope.

Richie Rump: No.

Erik Darling: No.

Brent Ozar: It was one of those where it came out in Books Online and Microsoft was like, “Yeah, we have these now.” I didn’t see anybody using them. I know you couldn’t use them with AlwaysOn Availability Groups when they first came out and I know that’s fixed now but I just don’t know, because I don’t really know what problem it solves.

Tara Kizer: What’s the difference between MSAs and regular?

Brent Ozar: I think you don’t have to have the password or something like that? Like SQL Server manages or Windows manages the password for you…

Jason Hall: [Inaudible 00:07:43] I don’t know how it works behind the scenes, but I know the advantage of it is that they can change the password in the backend and it propagates out to all the…

Brent Ozar: So we have here Jason Hall from Dell as well.

Richie Rump: All he is is fingers.

Brent Ozar: Magic, vibrating fingers.


Should I get my own OU in Active Directory?

Brent Ozar: Graham continues: “I would like to manage my own OU in Active Directory, but it’s pending review in my organization. Is asking to manage my own OU too much? Current management is no bueno.” I never, I don’t know—well, what do you guys feel about that?

Erik Darling: I always wanted my Active Directory guy to just tell me what accounts I can use and just run with it. Separate myself from it a bit.

Tara Kizer: I believe we managed our own OU at one of the corporations I was at, but that’s because we were very trusted by the IT team. Other than that, I’m not even too sure why we needed to manage it, because it’s not like it was changing very often.

Brent Ozar: If your AD admins are so slow in getting you what you want, if you need to spin up that many SQL Servers with different service accounts and virtual computer objects all the time, I would probably ask why you want that. You know, just what the big benefit is for you in having the rapidly changing AD stuff. I don’t think I’ve ever heard of a DBA getting to manage one. I hear about a lot of DBAs who are sysadmins, who are Active Directory admins, and I was that way. But splitting off into a separate OU, I never got that.

Erik Darling: Because most of the time when you set it up, as long as it’s right the first time, you get the service principal stuff set, the privileges—like the, what is it? Delegate something. It’s another setting that has to do with delegation.

Brent Ozar: Cluster and all that.

Erik Darling: Yeah, yeah, so as long as you get that stuff set up the first time, there’s not really a need to go back and tinker with it again after that. Just set the password to never expire and roll with it. I think managed service accounts, much like multiple active result sets in SQL Server, were kind of a checkbox for one customer, one big customer who wanted something. They just said, “We have it now.”

Tara Kizer: One of the corporations I worked at, we had about 700 SQL Servers, and we were spinning up SQL Servers pretty quickly. After moving from one shared service account to each server getting its own service account, and the agent getting its own, we required the AD team, or whoever was setting up the accounts, to create 100 new accounts for us at a time. That way we didn’t have to keep requesting them and waiting. If we ever ran out, we would ask for another 100.

Brent Ozar: I bet they were like pushing you, “Here’s your own OU. Please.”

Tara Kizer: It wasn’t the sysadmins that had to do this. They’d given some other help desk people privileges to add the accounts. So it wasn’t a big deal to them, but it did take a while for them to turn around those 100 accounts.

Brent Ozar: It’s not like they’re adding value. It’s not like they’re doing some—if you’re letting help desk people do it. Graham says he wants his own OU because there’s an AD overhaul going on. “Thanks for the recommendations.”

Erik Darling: There’s the benefit, good luck.


Are there any improvements in SQL 2016’s Always On Availability Groups?

Brent Ozar: Mark asks, “Any improvements in SQL Server 2016 with AlwaysOn?”

Erik Darling: I don’t know if I’d call them improvements, but there are some new features. One that I’ve been writing about a lot is, oh god, what do you call it?

Tara Kizer: Direct seeding.

Erik Darling: Yeah, that thing. I have a whole series of blogposts about direct seeding: what’s kind of good about it, what’s kind of bad about it, things it works with, things it doesn’t work with. I recommend you go back and take a look at that. The problem it solves is actually really cool, because I look on dba.stackexchange a lot, and there are constantly availability group questions where people want to know, “Is there a way that I can automatically get a database added to my availability group and seed it out without writing a script—okay, take a backup, restore it, blah blah blah?” There never was, but direct seeding attempts to solve that problem. It’s still got some stuff to work out, it’s a little bit janky, it’s the first run of it. But I’ve got some faith in it. I think it’s going to end up being all right.
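As a concrete illustration of what Erik describes, here’s a sketch of enabling direct seeding on a 2016 availability group. The AG, server, and database names (AG1, SQL2, SalesDB) are hypothetical:

```sql
-- On the primary: tell the AG to seed this replica over the network.
ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON N'SQL2'
    WITH (SEEDING_MODE = AUTOMATIC);

-- On the secondary (SQL2): allow the AG to create the seeded databases.
ALTER AVAILABILITY GROUP [AG1] GRANT CREATE ANY DATABASE;

-- Back on the primary: adding a database now seeds it automatically,
-- no manual backup/restore script required.
ALTER AVAILABILITY GROUP [AG1] ADD DATABASE [SalesDB];
```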

Brent Ozar: You’re so polite.

Erik Darling: I try. I’m in Texas, I’m trying to be a gentleman.

Brent Ozar: There we go.

Erik Darling: I’m making a nod on that.

Brent Ozar: The new distributed availability groups feature gives you an AG of AGs. No GUI. It lets you have one AG in one Windows cluster and another AG in another Windows cluster; they can even be in different Windows domains. Then you put another AG on top of those, so think of it as an AG of AGs. It sounds kind of crazy and complicated, but the nice part is that whenever you want to roll out SQL Server 4096 or whatever comes next, you can build a separate AG there and move your data directly across from one AG to the other. Still no GUI, though.

Tara Kizer: From the documentation that I was reading, it sounds like they’re trying to help people who have disaster recovery solutions, so that you don’t have to have one AG stretched across both data centers. To me, it sounds like they’re trying to fix how people are configuring their AGs in 2012 and 2014 without setting up quorum properly. You can do it in 2012 and 2014 if you do it correctly, but distributed availability groups make it simpler.

Brent Ozar: You nailed it when you said they were kind of fixing the wrong problem: because it’s built on top of clustering, the cluster can fail, so people want a plan B. More clusters! I’m like, how about you not use clustering as the base-level technology? But of course that ship has sailed. It is what it is.

Erik Darling: I think you also get—what is it? You get some more replicas now, you also have parallel redo on the…

Brent Ozar: Yeah, performance improvements.

Erik Darling: Yeah, performance improvements and probably some compression stuff they improved. I couldn’t tell you more about that.

Brent Ozar: It’s nice to see that they’re actually investing in that technology, that things are getting better there.
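For reference, a distributed availability group really is T-SQL only. A sketch of the documented CREATE AVAILABILITY GROUP … WITH (DISTRIBUTED) syntax, with hypothetical listener names:

```sql
CREATE AVAILABILITY GROUP [DistAG]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'AG1' WITH (
         LISTENER_URL = 'tcp://ag1-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      ),
      'AG2' WITH (
         LISTENER_URL = 'tcp://ag2-listener.fabrikam.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      );
```

Each underlying AG lives in its own Windows cluster (even its own domain); the distributed AG just ties their listeners together.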


Why are you wearing the fun hats?

Brent Ozar: Amber asks, “Why are you guys wearing the fun hats?”

Richie Rump: We’re always wearing these hats.

Tara Kizer: Howdy.

Brent Ozar: We are in Texas. We’re in Round Rock, Texas, for Dell DBA Days. There’s a big, ginormous rack of servers behind us. There’s a wall, a bar over on the other side of the room. We’re eating Mexican food and barbecue and all kinds of things that are going to kill us. Steak.

Erik Darling: I think this means that Amber has not been tuning into DBA Days, which is a darn tootin’ shame.

Brent Ozar: Amber, it’s totally free. We still have three webcasts left, on Thursday and Friday. What’s coming up tomorrow?

Erik Darling: I’m doing a webcast tomorrow called Downtime Train, which explores some of the bad things that can happen when you don’t pay enough attention to tempdb.

Richie Rump: Will you be cosplaying as Rod Stewart?

Erik Darling: The question is, when am I not? That’s the secret, I’m always Rod Stewart.

Brent Ozar: Then we’re also doing one on Thursday afternoon, talking about the performance overhead of various features, where we’re going to look at some of the results that we’ve gotten and talk about the tests that we ran and what kind of performance impacts we saw. Then Friday we’re going to start pulling plugs and doing diabolical things to the SQL Servers in order to make them break, just for fun. Because when we walk out of here, we want to leave these servers in a smoking pile of rubble.


Appreciate the sponsorships, see you next year!


Where can I get help with SPN configuration?

Brent Ozar: Nate says, “Thanks for addressing my question earlier.” Let’s see, Nate’s question was the one about Kerberos and NTLM. He says, “Our environment was never set up with correct SPNs.” Was anyone’s, though, really, when you get right down to it? “So all SQL connections are using NTLM. Sounds like it’s time to saddle up and bring it up with the domain admins.” If you’re looking for help on the setspn thing, Google “setspn Robert Davis.” Robert Davis has a post talking about how you go about identifying when you have service principal name problems and how you go about fixing them. I used to have that stored in my bookmarks; I go back to it every time.

Erik Darling: One thing that I notice frequently when we check up on clients is that in the error log they will have “SQL could not” whatever “with SPN handshake.” I’m like, ugh?

Brent Ozar: Do you ever tell them what to do to fix it?

Richie Rump: No, because it’s never related to a problem.

Tara Kizer: [Looks at laptop screen] We knew it.

Brent Ozar: Folks, you’re saying that the webcast quality seems to be a little above and beyond what the Dell DBA Days one is. Yes, we’re using a different platform here, so it’s a little nicer.

Erik Darling: Thank Richie for setting up the microphone.

Brent Ozar: Yes, and the camera, and yes. Also, this tells you what it’s like here at Dell DBA Days: we are holding the webcam up with a couple of SSDs. That’s how much hardware we have to play around with; we’re just using them all over for stuff.

Richie Rump: That’s funny, but it’s not a joke. It’s reality.

Brent Ozar: We really are.

Richie Rump: Some SSDs holding up the…

Brent Ozar: To those of you who are watching the Periscope, you can actually see the SSDs now over in the Periscope side.

Erik Darling: So if you want to see even more of us…

Brent Ozar: Yeah, we’re everywhere. All of us.


What’s the impact of snapshot isolation levels on tempdb?

Brent Ozar: Mandy asks, “Erik, in your Downtime Train …” She even got it spelled correctly. “Downtime Train webinar—will you be covering tempdb impact of snapshot isolation levels?”

Erik Darling: Oh, Mandy, yes. Yeah, that’s actually what makes things break at the end of the day is that tempdb wasn’t watched appropriately and someone went and turned on a cool feature and then tempdb stopped responding.

Brent Ozar: BEGIN TRAN.

Tara Kizer: I once had a database, it was a Java application, and…

Brent Ozar: You can stop the story there, it sounds terrifying.

Erik Darling: Yeah, we get it.

Tara Kizer: There were times where, and I don’t remember what it was, Hibernate maybe, but there was a bug where it would just do a BEGIN TRAN and forget to commit. We were using RCSI, which is similar to snapshot isolation, on the database. Eventually tempdb would run out of space. That’s when I learned about the version store, because we were new to RCSI at the time; this was many years ago, back around 2005. I had to write a job to look for this condition and kill the transaction. I wouldn’t kill an open transaction just because, but if it had been open for several hours, you know, then I’d have to get rid of it so we wouldn’t encounter this tempdb issue. When I left that company, just three years ago, that issue still existed. So that bug was in place for years, and it wasn’t an application bug, it was the framework they were using.
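The job Tara describes boils down to finding transactions that have been open too long, since the version store in tempdb can’t clean up past the oldest active transaction. A minimal sketch; the four-hour threshold is an arbitrary example:

```sql
-- Transactions open longer than four hours, with the session that owns them.
SELECT s.session_id,
       s.login_name,
       s.program_name,
       t.transaction_begin_time,
       DATEDIFF(MINUTE, t.transaction_begin_time, GETDATE()) AS open_minutes
FROM sys.dm_tran_active_transactions AS t
JOIN sys.dm_tran_session_transactions AS st
  ON st.transaction_id = t.transaction_id
JOIN sys.dm_exec_sessions AS s
  ON s.session_id = st.session_id
WHERE t.transaction_begin_time < DATEADD(HOUR, -4, GETDATE())
ORDER BY t.transaction_begin_time;
-- Only KILL a session after confirming it really is a forgotten BEGIN TRAN.
```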

Erik Darling: I dealt with a very similar bug with a third party vendor back when I worked at a market research company. What would happen sometimes is a telephone interviewer would start asking a question and they would go to edit something. So SQL would start to issue an update to the table to mark a response, but it wouldn’t commit until they hit the next button, so there’d be all these…

Richie Rump: Bad developer.

Erik Darling: “What happened?” “I don’t know.” This was like my first feet in the water with SQL thing too, so I would see this stuff and just… I don’t know.

Brent Ozar: Is this normal? Yeah.

Richie Rump: Well, it shouldn’t be normal, how’s that?

Erik Darling: We would see this stuff pile up and they were like, “I can’t do the survey.” “Nope, you can’t.”

Brent Ozar: Kevin says, “Microsoft has a free tool for Kerberos called Kerberos Configuration Manager.” Never seen that. That is pretty sweet, it probably hasn’t been updated in 14 years and has 60 known bugs.

Richie Rump: Why does everything have to be a manager of something?

Brent Ozar: Or enterprise.

Richie Rump: Or enterprise. Maybe enterprise manager of something.

Brent Ozar: As your enterprise manager of Kerberos…


What’s using all the space in my database?

Brent Ozar: Kyle asks, “I have a database which is using 40 gigs of space, and I don’t know where that space is being used. I’ve used various queries to list space usage per table, including the built-in SQL Server Management Studio reports, and they show maybe five gigs of space usage in tables. I’ve tried DBCC UPDATEUSAGE and there’s no change. There’ve been no recent upgrades or [inaudible 00:19:31] usage. Where should I look to see what’s taking up that space?”

Erik Darling: Heaps.

Brent Ozar: Oh, I like that. So where would you look to see that?

Erik Darling: You can actually run sp_BlitzIndex and figure out if you have heaps with a lot of deletes against them. I talked about it a little bit earlier: when you delete from heaps, you don’t actually deallocate pages from the table, you just delete rows from the pages. So you could have gigantic heaps with very little data in them. That’s very common if people have staging tables and they’re deleting from the staging table rather than truncating it.

Tara Kizer: So that space usage per table report would reflect that?

Brent Ozar: I have no idea.

Erik Darling: sp_BlitzIndex would.

Brent Ozar: But he’s saying he ran the space usage…

Erik Darling: Oh, yeah, I’m not sure what that would show for that.

Tara Kizer: Since you’re looking at reports in SSMS, look at the disk usage. Not per table, just disk usage, and you’ll probably see it’s free space.

Brent Ozar: I like that.

Erik Darling: That doesn’t mean you should shrink your database.

Brent Ozar: Why not? I hear so many good things about shrinking databases. It’s all over the web, everyone says that I should do it.

Erik Darling: Well, you have to figure out whether that free space is actually a problem or not. If it’s a 40 gig database and, hopefully, you’re living in recent history and have at least a 100 or 200 gig drive, it’s probably not a big deal.

Brent Ozar: True. The solid-state drives we’re using to hold up the webcam are 120 gig drives; that’s why they’re lying around in here like popcorn after a movie theater closes. A 120 gig drive these days is like $80 on Amazon. It’s really cheap.

Erik Darling: So if your database grew to be that size once, there’s a pretty good chance it’s going to be that size again. If you just leave it how it is, you save yourself the trouble of having to regrow your data file.
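Two quick checks for Kyle’s situation, as a sketch: big heaps holding mostly-empty pages, and free space inside the data files.

```sql
-- Heaps (index_id = 0) with lots of reserved space relative to row count:
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb,
       SUM(ps.row_count) AS rows_in_heap
FROM sys.dm_db_partition_stats AS ps
WHERE ps.index_id = 0
GROUP BY ps.object_id
ORDER BY reserved_mb DESC;

-- Free space per file in the current database (size is in 8 KB pages):
SELECT name,
       size / 128.0 AS size_mb,
       size / 128.0 - FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS free_mb
FROM sys.database_files;
```

If a staging-table heap shows up at the top, truncating it (or deleting WITH (TABLOCK)) lets the empty pages actually deallocate.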


Which conference should I attend, the PASS Summit or SQL Intersection?

Brent Ozar: Nate asks, “This year in October, the PASS Summit and SQLintersection are two different conferences scheduled at the same time. Which would you guys recommend?”

Tara Kizer: Both.


Brent Ozar: That’s hard to go wrong.

Erik Darling: I would recommend coming to our precon then going to Intersections.

Richie Rump: Interesting.

Brent Ozar: That’s what I did last year. So what do you guys recommend and why?

Tara Kizer: I haven’t been to SQLintersections, but I want to go.

Brent Ozar: Especially because it’s in Vegas.

Tara Kizer: In Vegas, yeah. But I don’t want to miss PASS. I don’t know. Tough.

Richie Rump: What are you trying to get out of a conference? They are two very different conferences, right?

Brent Ozar: That’s a great question.

Richie Rump: If you’re going to PASS, it’s going to be much more of a community level event, much more of real people who use these technologies day in and day out, they’re going to be the ones presenting. Microsoft presenters are there as well. Intersections, there’s a whole bunch of other different technologies going on at the same time. If you’re more of a widespread, “Hey, I like a little dev, I like a little of this, a little of that,” Intersections may be something good for you.

Brent Ozar: That’s true. I go back and forth between them all the time. I like both of them. PASS is kind of life changing because you see 5,000 people in the same conference center who do the same thing that you do. There are 25 tracks, or whatever it is, going on at exactly the same time, so there are sessions covering everything you could possibly imagine, but only SQL Server. The presentations can be hit or miss, because you’ve got to remember that a lot of the speakers are community presenters who don’t present full time; they’re there to share what they’ve learned. It’s a lot of real-world type stuff. So don’t get hung up on the polish of the presentations as much as on the fact that you’re learning from somebody, and don’t be afraid to go talk to them afterwards. When I was attending, I used to make this list of sessions that I wanted to run between, from one to the next. Stop. Just find the ones you really want, talk to people afterwards, go talk to the presenter, take your time.

Erik Darling: If you want five different introductions to R, go to the Summit.

Brent Ozar: Five sessions on an intro to R. At Intersection, the presenters tend to be a little more full-time presenter-y, like professional trainers. There are fewer simultaneous tracks; there are only going to be three or four SQL Server tracks at a time, so you don’t quite get the breadth. But, you know, six of one, half dozen of the other. Seattle in October sucks; Vegas is a little more fun. So if you’re going just as an excuse to write a check and go party somewhere, go to Vegas.

Richie Rump: Yeah. I would say the “hallway track” at PASS is probably the best, the hallway track being, “Hey, I’m not going to go to a session, I’m going to talk to the people just hanging around and learn from that.” PASS is probably best for that. I remember I was at PASS and I had cornered one of the Microsoft guys, and I got a two-hour conversation on just the things that I was looking for. So I had my own private session for a couple of hours. Those are the kinds of cool things that happen at PASS that maybe won’t happen so much at other conferences.

Brent Ozar: The after-hours track at PASS is huge too. There are so many parties every night that vendors put on, that individuals do, karaoke-type things. At Intersection, at 5 o’clock or whatever it is, everyone disappears; everybody goes off and does their own thing. The speakers still get together, and if you’re an attendee who wants to get to the speakers, go talk to one of them and they’ll tell you where they’re going as a group that night. But it’s just very different from the SQL family thing that a lot of people talk about at PASS.


Does the Senior DBA class cover the cloud?

Brent Ozar: Amber says, “Does your senior DBA class include cloud technologies as well?” We do one hour on the differences between platform as a service and infrastructure as a service, and why senior DBAs typically want infrastructure as a service, but it is only one hour. If you want more in-depth cloud training, here’s the thing: it changes so fast that generally you don’t want a regular training class on it. You want to go directly to the cloud vendor, explain what your use case is, and they’ll help get you set up.


Have you used Microsoft’s DocumentDB?

Brent Ozar: Graham asks, “Any of you guys have any experience with Microsoft’s DocumentDB?” Have you seen anybody using that, Richie?

Richie Rump: I haven’t really seen many people use it at all. From what I understand, it’s very similar to the stuff they have over at Amazon, DynamoDB and things like that, which I’ve worked with.

Brent Ozar: DocumentDB?

Richie Rump: No, I’ve worked with Dynamo, I haven’t worked with DocumentDB yet. It’s on the list of things to do, and it’s very long.

Brent Ozar: Tell me about the kinds of experiences that you had with Dynamo.

Richie Rump: With Dynamo, essentially you’re just going to have one flat table, and it’s going to be very fast in that flat table. If you need to run with one particular index on it, you can do that. Any more than that, you’re kind of sunk. You’re not going to want to do like a lot of OLTP type stuff.

Brent Ozar: Relational…

Richie Rump: Yeah, like having a whole bunch of other tables that all relate to one another. You probably want one unique thing in one table and then something else that’s not related in another table. Very fast, great performance, but you’re not going to get the niceties of transactions and things like that, ACID compliance, that you would get with your regular SQL.

Brent Ozar: Relationals.

Richie Rump: Yeah.

Brent Ozar: The thing I would look at with DocumentDB is the maximum size of the document, and then the maximum size of a collection as well. I forget what DocumentDB calls the collection thing, but it’s fairly limited, as is Amazon’s. They’re both pretty limited.

Richie Rump: Yeah, we were messing with some of that stuff a few weeks ago. We decided not to put it inside the database but move it off to S3 storage or blob storage, because from a cost perspective, and for pulling it back out and all that stuff, it just didn’t make sense.

Brent Ozar: And the file size.

Richie Rump: Yeah, the file size. It’s that learning stuff that you just learn, it’s like, “Oh, this is not like a regular database. This is something completely different.”

Brent Ozar: And it changes so fast. Every six months, you have huge, massive changes with it. We know the new product manager for DocumentDB, Denny Lee, was at Microsoft years ago, worked in the SQL CAT team and then left to go do NoSQL work and then came back to Microsoft. So, they’re investing in it at least. They hired a new product manager for it.


Does optimize for ad hoc help performance?

Brent Ozar: Dennis asks, “In your experience, when you’re dealing with SQL Server 20012 …”


We don’t deal with that a lot, but we’ll tell you about its grandpa, 2012. “Does turning on optimize for ad hoc help performance when we have a few stored procs and a whole bunch of ad hoc developers?”

Tara Kizer: I don’t know that it necessarily helps with performance, but it helps with memory. You’re going to use less memory for your ad hoc plans, because with that setting on, when a query runs the first time, a stub record is loaded into the plan cache rather than the full execution plan. If that query runs again, then it’ll cache the full execution plan. A lot of times, ad hoc queries only ever run once. I was actually investigating this last week because a client asked if Entity Framework queries come in as ad hoc; most of their plan cache was ad hoc queries. They said, “We can’t possibly have that many employees running queries to have caused this.” I did a little research, and there was a Stack Exchange answer that said yes, Entity Framework queries come in as ad hoc. So I would recommend that if you are using Entity Framework, go ahead and enable this setting, especially if the ad hoc query bucket is a significant percentage of your plan cache. That way, you can use less memory for the plan cache.

Erik Darling: Group question, because I’ve heard rumors but I’ve never actually witnessed it. What are situations where turning it on could backfire?

Tara Kizer: There’s one, and it’s when a query runs exactly twice and no more.

Brent Ozar: Because you compile twice.

Tara Kizer: Compile twice, yeah. That’s usually not the case; it’s usually once, or many times. Usually once.

Brent Ozar: I’ve seen people add it to their setup checklist just as a default, and I never push back on that. I’m like, “Okay, that’s cool. Yeah. If you want to use it, that’s fine.”
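To see what Tara describes in practice, here’s a sketch: turn the setting on, then look for “Compiled Plan Stub” entries in the plan cache.

```sql
-- Enable the setting (it's an advanced option; no restart required).
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- With it on, first-run ad hoc queries leave a small stub instead of a
-- full plan. Compare the memory footprint by cache object type:
SELECT cacheobjtype,
       objtype,
       COUNT(*) AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS total_mb
FROM sys.dm_exec_cached_plans
GROUP BY cacheobjtype, objtype
ORDER BY total_mb DESC;
```

If the Adhoc bucket dominates total_mb before the change, this is the scenario Tara is describing.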


I’m an MCSE – what should I do now that the MCM is gone?

Brent Ozar: Alberto says, “I’m a Venezuelan guy currently living in Argentina.” You sound like a good time. I like you already. “I’m an MCSE in SQL Server and I would like to know what should be my next step now that the MCM is gone.” I don’t know, what country do you guys think he should move to next?

Erik Darling: Go for Brazil.

Tara Kizer: Brazil, yeah.

Richie Rump: Chile.

Brent Ozar: Yeah, the mountains are beautiful. All right, sadly, there is no advanced certification anymore. There’s Microsoft’s new Big Data University, or whatever it is that they call that thing, which they’re kind of positioning as a college degree except it’s not; it’s not an accredited kind of thing. Sadly, you are at the pinnacle, and not just because you’re living in cool places. You’re as good as it gets. I would say, instead of spending time studying for a cert, now’s the time to start giving back. Start writing blogs and doing presentations, because that will enrich your career way more than a certification will. Anybody can get a certification.

Erik Darling: Could try getting a job.

Brent Ozar: Try getting a job, yeah.

Richie Rump: It’s a data science “degree,” right? It’s “degree.”

Brent Ozar: It’s like the antiperspirant, it’s not really a degree.

Tara Kizer: So who’s going to be going after those data scientists? Wouldn’t that be on the BI side? And if you’re an MCSE asking about MCM and other certifications, probably not interested in data scientists.

Brent Ozar: God, I hope not. Nothing against data science. It’s wonderful.

Erik Darling: It’s a very specific career path, not necessarily one for DBAs.

Tara Kizer: It’s not DBAs, yeah.

Richie Rump: [Inaudible 00:30:39] Buck Woody doesn’t pop up behind you, [crosstalk 00:30:41] data science there.

Tara Kizer: We’re not talking bad about it, it’s just DBAs aren’t going to be going down that track, probably.

Brent Ozar: He had a blogpost recently where he said don’t learn data science. He said, “Whatever you do, don’t learn data science. Learn specific topics. Pick things: I want to solve this problem this week.” Because it’s going to take you five years to become a data scientist, just pick things you can nibble off in a week.


I have a really long question…

Brent Ozar: David says, “I’ve read that SSMS 2016 is no longer directly connected to the version of database that you’re running. I’ve downloaded SSMS 2016 and I ran into what must be an easy problem.” Then he lists four pages of things. Hold on, he says, “I’m trying to import…”

Tara Kizer: I think the answer’s going to be we don’t know.

Brent Ozar: Just at a glance, what I would do any time you have a multi-paragraph thing with an error message is post it on Stack Exchange. You can post it over there and hopefully people can get you the exact answer on that one.


Weren’t SQL Server Cumulative Updates always included in Windows Updates?

Brent Ozar: Rob says, “I see this week in the blog that Microsoft is putting out SQL Server cumulative updates in their Windows updates. Weren’t those always in Windows updates in the past?” No. Service packs were, and have been for the longest time, but cumulative updates popping up in Windows Update is new. Microsoft hasn’t quite gotten its act together either: I have a screenshot in that blogpost showing both CU14 and Service Pack 2 inside the same update list, where it tells you to download a gig and a half of updates and one of them just replaces the other.

Tara Kizer: In bigger organizations, you’re going to have Windows Update turned off on your servers and possibly use SCCM, System Center Configuration Manager, to manage which updates you push to the servers. A lot of organizations push the patches to the SQL Servers but don’t install them yet; the servers have already downloaded them, and then a DBA or a sysadmin comes along, double-clicks on the bubble that appears, and installs them. Then you can reboot and do your failovers.


How do I keep server objects in sync between different SQL Servers?

Brent Ozar: The man with the coolest first name in the world, Rowdy, asks (and yes, I know who you are, Rowdy, because how many people do we know named Rowdy? There’s one guy named Rowdy, and I recognize his last name), “I’m working on some scripts to ensure that our DR instances remain in sync with our failover cluster instance. So far, I’ve got scripts to check logins and SQL Agent jobs. Next, I’m working to compare the contents of sys.configurations. What else would you compare?” That’s such a great question.

Tara Kizer: I would stop and don’t reinvent the wheel. There’s tools out there that can do this for you already. I used to do all of this manually and you’re talking about a lot of things you’re going to have to check. Alerts, operators, besides jobs and logins, there’s a lot of things that are stored outside of your user database.

Brent Ozar: Is there any third party that…

Tara Kizer: I know Bill Graziano has a tool, I don’t know if that’s the most popular tool. He wrote one a few years ago, I know that he keeps it up to date. I’m not sure about other tools though.

Brent Ozar: What’s the name of it?

Tara Kizer: I don’t remember his tool.

Erik Darling: So that’s Bill Graziano, G-R-A-Z-I-A-N-O?

Tara Kizer: Yeah.

Brent Ozar: I’m going to go fire up Google.

Tara Kizer: I think he is

Brent Ozar: Yeah. Let’s see here, I’m sure he’s got … yeah, go start there.

Tara Kizer: I think that there’s another tool that maybe people are using.

Brent Ozar: Redgate’s Schema Compare.

Erik Darling: SQL Compare.

Brent Ozar: Yeah, SQL Compare.

Tara Kizer: It does the server level? Okay.

Jason Hall: Toad has a server compare built in too. It’s not automated at the moment, which is kind of a shame for this use case. But if you want to try the trial, you could at least do one run and list off all the differences that it checks…

Tara Kizer: I’m not sure if they can hear him since he’s so far from the microwave… Microwave?


It’s almost lunchtime.

Tara Kizer: Microphone. Jason was saying that Toad has it also, but it would require you to run it manually.

Brent Ozar: So what he’s saying is go get the trial, it’s like 14 or 30 days, something like that, for free. Then you could at least compare the two servers, see what it flags, and then that’s …

Tara Kizer: That’s true, yeah, and then you can script it.

Jason Hall: [inaudible 00:34:47]

Tara Kizer: Yeah, there’s a lot of things stored outside of your user database. Usually for me it was just logins and jobs and that was it. Not a whole lot.

Erik Darling: Good luck, Rowdy.

Brent Ozar: Good luck, Rowdy.
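For anyone building comparison scripts like Rowdy’s in the meantime, a starting inventory of server-level objects might look like this. This is only a sketch, not a complete list; run it on each server and diff the output:

```sql
-- Server-level things that live outside your user databases (not exhaustive):
SELECT name, type_desc FROM sys.server_principals
WHERE  type IN ('S', 'U', 'G');                             -- logins
SELECT name, enabled      FROM msdb.dbo.sysjobs;            -- Agent jobs
SELECT name               FROM msdb.dbo.sysoperators;       -- operators
SELECT name               FROM msdb.dbo.sysalerts;          -- alerts
SELECT name, value_in_use FROM sys.configurations;          -- sp_configure settings
SELECT name               FROM sys.servers
WHERE  is_linked = 1;                                       -- linked servers
```

As Tara says, a tool will cover far more than this (credentials, proxies, server triggers, audit specs, and so on), which is why rolling your own gets painful.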


Why is instance stacking a bad idea?

Brent Ozar: Dan asks a question that I know is near and dear to all of our hearts. “My manager wants me to install a second named instance on an existing SQL Server because we have an app that requires its own named instance. What issues can I expect?”

Erik Darling: All of them.

Tara Kizer: I would want an answer to a question first from Dan. Is this a virtual machine or a physical server? If it’s a virtual machine, just spin up another virtual machine. If it’s a physical server, then… He says it is a VM. Why don’t you just spin up another virtual machine? Maybe tune down the other server if you don’t have the resources, and split them up. We don’t like the idea of stacked instances, multiple instances on one server, and especially not when it’s a VM, because you have options with virtual machines.

Brent Ozar: Somebody emailed us at help and asked, because they’d heard us rant against stacked instances several times on different webcasts. They said, “No seriously, what are the issues?” So we’ll just start brainstorming some of them. My big one is always security. If that vendor wants his own named instance, he’s going to want to remote desktop in, and then he’s going to want to change things, so I have to give him Windows admin rights. I only want him breaking his own VM, not other people’s VMs. Patch timing: when I have to patch this thing, Windows and SQL Server patches, when they’re going to do restarts, I’d rather have that separate. Because sooner or later this app is going to be like, “I can only go to SQL 2012 Service Pack 1,” and everybody else wants to go to something newer, like 2014, and then you end up peeling them off onto different VMs anyway. Different failover options. Performance troubleshooting is a nightmare: where is my CPU usage coming from? Then people start playing around with affinity masking and hardcoding how much CPU each instance can actually get. It’s just a big pain in the rear.

Tara Kizer: Yeah, CPU and memory would be my two biggest. I mean, even if you use Resource Governor, that can’t control everything.

Erik Darling: You’re using direct attached storage…

Brent Ozar: Oh yeah, yeah, yeah. [crosstalk 00:36:56] Fighting over the same SAN [inaudible 00:37:00].


Should I go to a training class or a conference?

Brent Ozar: Kyle says, “I was given the thumbs up to attend PASS this year but I will not be given any money on training or travel next year. I was looking into your performance tuning class or your senior DBA class online, but I’ve never been to an in-depth SQL course. I’m leaning towards one of the training classes instead of PASS. What would the difference be in the long run?” Oh, this is such a good question. I’ll give you my take. PASS gives you a lot of different 60 to 75 minute sessions. It is not really a start-to-finish drill down into one topic. However, PASS is better if you want to network, meet people, build a reputation in the SQL Server community, get help from other people in person, and see vendors in the exhibitor area. It’s really more of a networking thing than it is an educational thing. Don’t get me wrong, there are good sessions at conferences. My experience has always been, I’m only going to get to see five or six sessions per day, max. And three of them might suck, so I might be happy with two of them. Whereas, when you go to our training, let’s say four days straight, I guarantee no more than two sessions max will suck in any given day. Most of them will be okay. You don’t do that much networking. You network amongst the 15 to 30 people who are there in training, and you meet them for life. But it’s just a different kind of deal.


How many people actually automate their restore checks?

Brent Ozar: Graham says, “I’ve automated …” Oh, Graham, I like you already. “I have automated my database restores and CHECKDB with the results of each process getting logged to a table. I know that this is recommended, but how many people actually do this? And is there a tool or something that can help automate this for me?” Do you guys have anything that helps automate restoring databases to another server?

Jason Hall: We do exactly this. I will tell you, a lot of people do this, some of whom, it sounds like, have done it the hard way. I apologize for the pitch here, but LiteSpeed for SQL Server, our backup and recovery tool, introduced a feature about a year ago called Automated Restore. The whole idea is you point Automated Restore to a folder. LiteSpeed’s going to scan through that folder and automatically find the latest backup set. So if you’ve got fulls, diffs, logs all mixed into this folder, we’ll do the hard work for you and find the latest backup set to recover. We’ll recover it to a target system. There are a lot of reasons why you may want to do this. One of which is just to test the restore, right? The best way to know it can recover? Recover it. If the restore is successful, LiteSpeed can just drop the database, right? You’re done, let’s free up the space, let’s move on to the next one. You can now set up this round robin. The second reason you can set up an automated restore is exactly this: after your database is restored, run your CHECKDBs against it, and LiteSpeed can do that too. We give you all the CHECKDB options that you might want to set. After the CHECKDB is done, we’ll log the results into a central repository. Again, if it’s successful, you can drop the database, free up the space, and move on to the next one. If it fails, it’ll leave the database around, so you can get in there, figure out what’s going on, and hopefully fix the issues. So yes, if you’re looking for a third-party approach, LiteSpeed does exactly this. It’s probably one of the more interesting features we’ve added in a while, to be honest with you. We’re starting to see a lot of traction. Certainly you can do it the hard way, but it’s a lot of scripting. There are a lot of variables you have to consider: where your backups are, the types of backups there are, the parameters you need to use for recovery, and then all the CHECKDB options.
So if you’re looking for a way to automate it, then yeah, LiteSpeed for SQL Server. If you Google it, you’ll get right to the product page. There’s a free 14-day trial for LiteSpeed. So if you wanted to kick the tires on it and make sure it does what you need it to do, you can run it for 14 days. Yeah, thanks for the plug.

Brent Ozar: I have to say too, people will often ask, “Why would I still get something like LiteSpeed when I get backup compression in the product?” This is one of the reasons why. I also like a better log shipping GUI. It’s easier to reinitialize whenever log shipping breaks. It’s easier for me to get undos for particular transactions. Danny, the developer, dropped a table, he even did a truncate table, which people will say isn’t logged—it is, but just not in the way that you would expect. LiteSpeed can go through and read your log files, reconstruct the table from scratch. It’s the easiest way to get back to a point in time before Danny screwed your database.

Erik Darling: It also does object level restores. So if you have a very large database and Danny the developer drops like 10,000 rows out of a [crosstalk 00:41:32] anything like that, you can restore a particular object and you’re much better off.

Brent Ozar: I knew you couldn’t resist.

Jason Hall: Since we’re shamelessly plugging, one of my favorite features of LiteSpeed is actually a subset of our object recovery functionality. We call it Execute Select. You can actually run queries against your backup files. I’m sure this has happened to someone before, where somebody comes in, maybe it’s an auditor, and they say, “Hey, I need to look at some records from four years ago.” Normally you scramble, you find a server to recover the backup to, you make sure you allocate enough space. You wait four hours for the restore to finish. You run your one query and then you drop the database. LiteSpeed just takes that query, runs it against the backup file, and you get your data back. You can output either to the Management Studio results grid or to CSV. If you want the data offline, it’s a really nice option. We’ve started to see customers even start to reevaluate their data management processes now. Think of this: you can use a backup file as a read-only data source. Think of the options this gives you for ETL. If you’re backing up your databases every night, why go back to the production system to extract data if you can just extract it from the backup file? Maybe you shave three hours off of maintenance windows that you can now use for better index maintenance, scheduled reporting, CHECKDBs, so absolutely. Again, thank you for the shameless plug, but anyone out there who maybe has used LiteSpeed in the past, or has heard about LiteSpeed, please don’t consider it just a compression tool. It’s one of hundreds of features the product has. Most of the enhancements we’ve made to LiteSpeed over the last several years have been more around these exact things. How do we recover from scenarios faster? How do we help you manage backups across large installations of SQL Server? Compression is great. There are millions of ways to compress data.
If you come to me and say, “Hey, all I want to do is shrink my backups,” there’s lots of ways to do that. Many of them are included in SQL Server itself. But if you’re looking for a broader data management platform that will assist with both managing backup and recovery and helping you recover from scenarios faster, LiteSpeed is a great option.

Tara Kizer: Just to add two points to the question that Graham asked. I had this automated for mission-critical databases that had performance issues when CHECKDB would run on production. If a database didn’t have any issues, then CHECKDB just ran there; it was only the mission-critical databases with performance problems that got the automated restores. There are performance issues because CHECKDB is very resource intensive, both IO and CPU. So for lower-priority databases, or if you have enough resources on your server, just run CHECKDB in production. My second point is, if you’re using a SAN, you can do SAN snapshots so that you don’t have to do a restore. You can have your database on another server within a few seconds, and then run CHECKDB against that.

Brent Ozar: I’d almost say, like if your databases are less than maybe 50, 100 gigs, you should be running it in production, just because you want the fastest alerts that your data is no bueno.
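Graham’s home-grown approach, restore, CHECKDB, log the result, drop, can be sketched in T-SQL. All the names and paths below are hypothetical placeholders, and in a real script the logging would need to catch CHECKDB failures rather than assume success:

```sql
-- Restore last night's full backup to a test server under a throwaway name
RESTORE DATABASE SalesDB_Check
FROM DISK = N'\\backupshare\SalesDB\SalesDB_full.bak'
WITH MOVE N'SalesDB'     TO N'D:\Data\SalesDB_Check.mdf',
     MOVE N'SalesDB_log' TO N'L:\Log\SalesDB_Check.ldf',
     REPLACE, STATS = 10;

-- Run integrity checks against the restored copy
DBCC CHECKDB (SalesDB_Check) WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Log the outcome to a central table (reached only if CHECKDB didn't raise)
INSERT dbo.RestoreCheckLog (DatabaseName, CheckedAt, Outcome)
VALUES (N'SalesDB', SYSDATETIME(), N'Restore + CHECKDB succeeded');

-- Free the space and move on to the next database
DROP DATABASE SalesDB_Check;
```

As Jason points out, the hard parts a tool handles for you are finding the latest backup set across fulls, diffs, and logs, and generating the MOVE clauses per database.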


How much does LiteSpeed cost?

Brent Ozar: Scott says, “Since you’re shilling LiteSpeed, what’s the rough pricing for the product?” You should probably contact your salesperson for that. I want to say it’s between $1,000 and $2,000 a server.

Jason Hall: You’re right on. General licensing right now is per instance of SQL Server. You can buy as many of them as you want, volume discounts apply.

Brent Ozar: Yeah, the more you buy, the more you save.

Erik Darling: Don’t accuse your salesperson of shilling, you will not get a good discount.

Brent Ozar: Unless his name is Curt Schilling.



Where can I learn about replication?

Brent Ozar: All right, that gets us to the end of today’s questions. Mike asks, “Looking for training on replication, do you have any suggestions?” I would say books.

Tara Kizer: I don’t think that there is a… books, yeah.

Brent Ozar: Yeah, go hit Amazon. Hilary Cotter and Kendal Van Dyke have books on it, but other than that, I think that’s it these days.

Erik Darling: I think there’s a free Simple Talk book about replication. I couldn’t tell you who wrote it or what the name of it is, but I know Simple Talk and Redgate do a lot of the free e-books.

Tara Kizer: I learned in production replication. I’ve been using replication for many years and as you’re troubleshooting and trying to figure out why there’s errors, why there’s latency, learning on the fly and then Googling things. So I started in production with it.

Richie Rump: Your book will be coming out when?

Tara Kizer: I will never ever again write even a chapter on SQL Server.

Erik Darling: I’d rather just learn a reliable technology.

Brent Ozar: So which training would you recommend for AlwaysOn Availability Groups? No, not ours either. It’s hard; those kinds of high availability and disaster recovery topics require you to go really far in depth, and it all depends on the way your architecture is designed. Same thing with replication, so many different options for architecture. So welcome to the real world, doing it live.

Erik Darling: Allan Hirt has really good training for the HA and DR stuff if that’s what you really want.

Brent Ozar: Yeah, like three days, four days.

Erik Darling: He sets you up a lab, you get to play with the lab, you get to cover all sorts of scenarios that are just really hard to cover in a book.

Tara Kizer: Looks like Redgate has an e-book that Scott commented on. Fundamentals of SQL Server 2012 Replication. It’s a PDF.

Brent Ozar: There you go, nice. All right, cool.


Brent Ozar: Thanks, everybody, for hanging out with us this week, and we will see you through the rest of the week on the Dell DBAs webcast. If you want to go to that, it will be the webcast for the rest of this week, and then we’ll see you at the next Office Hours as well. Bye, everybody.

Tara Kizer: Bye.

Richie Rump: Happy trails.

Brent Ozar: Happy trails.

Wanna attend the next Office Hours live? Register here, or subscribe to our podcast.

[Video] Office Hours 2016/08/03 (With Transcriptions)

This week, Richie, Erik, and Tara discuss availability groups, sharding databases, using DNS entries to mask DB server names, and Erik’s passion for internet buttons.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


Office Hours Webcast – 2016-08-03


How can you create test data?

Erik Darling: Okay, there’s a question from Joseph. “Are there any products or tools that you like to use to create volume data for performance tuning queries? I’ve tried a couple but I’m not satisfied. I’d like something I can kind of point at a table and it’s going to then figure out what dependencies exist and create appropriate data.”

Richie Rump: Sounds like magic. Wizardry.

Erik Darling: So I’m not sure which tools you’ve already tried. The only one that I’ve messed with, and it was sort of a beta version, was a Redgate tool that created a bunch of data. I was okay with that. If that doesn’t meet your needs, then you might have to home-grow something, because I’m not sure of anything that you can just point at a table to create a bunch of dependencies like that. Anyone else? You guys ever do anything?

Tara Kizer: No, I’ve always worked at large organizations where we have a performance team and they use tools like LoadRunner. They have tools. It’s all magic to me.

Richie Rump: Yeah.

Erik Darling: Like, “How did you do that?”

Tara Kizer: Yeah, I just come in after the database is ready and we start a load test.

Erik Darling: There you go.

[Richie starts speaking]

Erik Darling: Oh, go ahead, Richie.

Richie Rump: No, it’s just a lot of the perfs that I’ve done is actually, “Hey, production is slow,” right? Everyone is going crazy.

Erik Darling: That one’s fast.

Richie Rump: That’s right.

Erik Darling: Now it’s slow. Now it’s fast.


How do you set up distributed availability groups?

Erik Darling: Tara, this one is right up your alley. Tommy asks, “Do you have any experience with setting up distributed availability groups? What are the requirements and best way to use them?”

Tara Kizer: What is meant by distributed though? I’m not familiar with that.

Erik Darling: Oh, I believe he means—oh, I think I read something about this. Give me a second because I have to take my internet buttons off my fingers. So SQL Server 2016, distributed availability groups… All right, I will paste that link over to you if you want to give that a look at. I’m going to paste this into chat since I don’t think any of us have a particularly good answer on that just yet because it’s maybe some new magic that we haven’t played much with. Tara would have the best chance of knowing some magic on that one.

Erik Darling: Let’s see, Joseph says, “There was some great music in the 80s, it just gets lost under the hair and makeup.” Yes. So did my grandmother. We still haven’t found her. It’s just a pile of hair and makeup.

Tara Kizer: Oh, so you have two availability groups. I haven’t done it that way. The availability groups that I’ve used have been for HA reasons and DR reasons so it was a multi-subnet configuration. We had one availability group stretching across two sites. Three replicas at the primary site, two replicas at the DR site. So I haven’t used this methodology before. Yeah, this is new for 2016 and I’ve only used it for 2012 and 2014. So I’m not sure what the advantage is of using a distributed availability group as compared to just say a multi-subnet configuration with one availability group.

Erik Darling: Maybe that’s something we could toodle around with at DBA Days when we have all that hardware at our disposal. I mean, none of it’s really like widespread so I don’t think we can get any of the funkiness that comes along, but we could try to do it.

Tara Kizer: I’ll definitely look more into it.

Erik Darling: Cool.
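For reference, the SQL Server 2016 syntax for the feature Tommy is asking about chains two existing availability groups together, each with its own listener. This is a rough sketch based on Microsoft’s documentation at the time; the AG names and listener URLs are hypothetical:

```sql
-- Run on the primary replica of AG1: create a distributed AG spanning
-- two ordinary availability groups, each in its own WSFC cluster.
CREATE AVAILABILITY GROUP [DistAG]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'AG1' WITH (
         LISTENER_URL      = 'tcp://ag1-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE     = MANUAL,
         SEEDING_MODE      = AUTOMATIC
      ),
      'AG2' WITH (
         LISTENER_URL      = 'tcp://ag2-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE     = MANUAL,
         SEEDING_MODE      = AUTOMATIC
      );
```

The main draw over a single multi-subnet AG is that the two sides don’t have to share one Windows failover cluster, which helps for DR across domains or distant sites.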


How does licensing work for upgrades?

Erik Darling: All right. Joseph has a follow-up question, going back to the upgrade path from 2005 to 2016. That double hop should not be a licensing issue. I think they grant back-level licensing with your latest license, right?

Tara Kizer: It depends if you have Software Assurance.

Erik Darling: Yeah. You have to pay that SA tax, buddy.

Tara Kizer: Yeah, just to add to that. Microsoft does want you to upgrade so if you don’t have the 2012 bits to get it installed, I would contact whoever sold you the license and see if you can get that because they really want you to upgrade off 2005. So I’m sure that they could help you here.

Erik Darling: Sure enough.


When I call queries across servers, where is the work done?

Erik Darling: Ian asks, “If I call a sproc from server A that is on server B, where will it be processed? In other words, where will the memory and CPU and disk be hit?”

Tara Kizer: It depends how you’re doing it. I believe if you use OPENQUERY that that will actually run everything on the remote server. So we’d have to see how this is being called. So where does the stored procedure… oh, it’s on server B.

Erik Darling: Yeah, so it changes a little bit. It depends on where the data is. So if you’re using linked servers, if you’re using distributed queries, Conor Cunningham has a great talk about this on the SQLBits website, but I’m also going to answer it a little bit. So it really depends on where the data is, and also on where SQL Server decides the best place to have the data is. Because it can actually push some results around from one server to the other. If there’s a smaller table on server A, it can actually bring that data over to server B and do a join there. Conor is a really smart guy. He’s one of the architects of SQL Server. He’s been at Microsoft forever and ever. He knows absolutely everything about this, so you will probably get all the information you need and more from that talk. I highly recommend it.
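The two common patterns behave differently, which is why Tara asked how it’s being called. A quick sketch, with a hypothetical linked server name and table:

```sql
-- Four-part name: the local server's optimizer decides what to remote.
-- It may pull rows from server B across the wire and filter locally.
SELECT o.OrderID
FROM   ServerB.SalesDB.dbo.Orders AS o   -- hypothetical linked server/table
WHERE  o.OrderDate >= '2016-01-01';

-- OPENQUERY: the whole query text is shipped to server B and executed
-- there, so B's memory/CPU/disk do the work; only results come back.
SELECT *
FROM OPENQUERY(ServerB,
    'SELECT OrderID FROM SalesDB.dbo.Orders
     WHERE OrderDate >= ''2016-01-01''');
```

If the sproc itself lives on server B and you call `ServerB.SalesDB.dbo.usp_Whatever`, the procedure body runs on B; the linked-server subtleties show up when a single query mixes tables from both sides.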


Does sharding data across SQL Servers help blocking problems?

Erik Darling: Lee asks, “What about the idea of sharding a 1.5 terabyte database into multiple databases on separate SQL instances in order to help performance issues related to blocking? I have a couple of developers who are pushing this idea on me despite my strong resistance. They believe it will help performance because it will be less data for SQL Server to traverse during queries. I feel the complexity simply is not worth it and that sharding isn’t the silver bullet in this situation.” Any initial thoughts on that?

Tara Kizer: I don’t like the sound of it. I mean, I agree with Lee that this is going to be complex. How are you going to keep them all in sync, or are these completely separate databases that you don’t have to keep in sync? What performance issues are you having now? You’re likely missing indexes or have bad queries; there’s something wrong. A 1.5 terabyte database is large, but it’s not that large. I’ve seen bigger databases than that.

Richie Rump: Yeah, it’s not large, and I think you’re going to have to pull the data together and process it, right? So now you’re going to multiple servers, multiple databases. Then I’ve got to put it into one place, then I’ve got to actually get the results I want out, and that is a pain. It is terrible.

Erik Darling: That is a really painful process. So there are some things you can try instead, if you want to talk to those little hamster-wheel developers you have trying to get you to shard everything. What I would suggest is: if you’re having problems with blocking, try an optimistic isolation level, like snapshot isolation or read committed snapshot isolation. Also, make sure that your indexes are effective. Make sure there are no missing indexes on that table. Make sure that there are no crazy indexes on that table. If you have a lot of duplicate or borderline-duplicate indexes, or some overly wide indexes, a lot of the queries that modify data can hold some pretty long locks on things and keep your other queries from doing stuff for a while. Don’t use NOLOCK, it’s not going to help you. If you find that you really do need to partition things out, I would try the partitioned view idea before I would go to actual partitioning. That’s just where you create tables for segments of data and you create a view that queries them all together. If you have the right constraints, by date or some other identifying characteristic, you can actually get pretty good elimination of certain tables when you hit that view. So there’s a lot of stuff you can do before you go crazy separating a database out to a whole bunch of different servers.
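The partitioned view idea Erik describes might look like this, with hypothetical table and column names. The CHECK constraints are what let the optimizer skip base tables that can’t match the filter:

```sql
-- One base table per year, each constrained to its own date range
CREATE TABLE dbo.Orders2015 (
    OrderID   INT  NOT NULL,
    OrderDate DATE NOT NULL
        CONSTRAINT ck_Orders2015
        CHECK (OrderDate >= '2015-01-01' AND OrderDate < '2016-01-01'),
    CONSTRAINT pk_Orders2015 PRIMARY KEY (OrderID, OrderDate)
);

CREATE TABLE dbo.Orders2016 (
    OrderID   INT  NOT NULL,
    OrderDate DATE NOT NULL
        CONSTRAINT ck_Orders2016
        CHECK (OrderDate >= '2016-01-01' AND OrderDate < '2017-01-01'),
    CONSTRAINT pk_Orders2016 PRIMARY KEY (OrderID, OrderDate)
);
GO

-- The view unions them; queries filtering on OrderDate only touch
-- the base table(s) whose constraints overlap the predicate.
CREATE VIEW dbo.OrdersAll AS
SELECT OrderID, OrderDate FROM dbo.Orders2015
UNION ALL
SELECT OrderID, OrderDate FROM dbo.Orders2016;
```

Everything stays in one database on one instance, so you avoid the cross-server joins Richie is warning about.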

Richie Rump: Yeah, I’ve always just used straight partitioning but when you do that you have to always query by the partitioning key at that point.

Erik Darling: Yeah, you also have to align all your indexes to the partition and… crazy stuff goes on.

Richie Rump: Yeah.

Erik Darling: This button does not partition tables.

Richie Rump: But again, 1.5 terabytes, these days, isn’t really a ton of data. I feel it’s more the norm than anything else.

Erik Darling: Yeah, we deal with some pretty big databases day to day. Just today, I was working on a database that was 450 gigs. Back when I had a real job, the biggest database I ever had was around 11 terabytes. So there’s a lot of stuff you can do before you start having to get all hamster-y about trying to shard data. Really because SQL Server wasn’t really built for sharding like that.

Richie Rump: So what was everyone’s largest database that they’ve dealt with?

Tara Kizer: The one that I was primary DBA on was 7 terabytes at one point, but the largest one on the SQL Server DBA team at the time was 16 terabytes, I believe. I just wasn’t primary on that one.

Erik Darling: Did you ever have to do anything with that?

Tara Kizer: You know, that was a third-party product, so they did have partitioning set up. They definitely had performance in mind and set it up properly. It had to do with a patent database.

Richie Rump: Yeah, mine was 40 terabytes.

Tara Kizer: Oh, wow.

Richie Rump: Yeah, it was fun. That had some really unique problems to it that you just had to think outside the box to fix them.

Erik Darling: So how much of that was like normal data and how much of that was just like crazy LOB columns?

Richie Rump: It was all normal data. It was all POS data coming in from 35,000 stores across the U.S. So there was just massive amounts of data there that are coming in on a daily basis. The good thing is that they only needed stuff that was for the day or for the month and then the rest of it essentially is not queried very often.

Erik Darling: Oh, man.

Richie Rump: We had a lot of rollups and data warehouses and big Teradata implementation and stuff like that. But, yeah, everything was stored in SQL Server.

Erik Darling: So Lee actually follows up and says, “We are using RCSI and have been for a while. I agree sharding is overly complex to solve blocking issues.” So if you’re using RCSI and you still have blocking issues, give us a call, because that sounds like an interesting problem.

Richie Rump: Erik will look at your database.

Erik Darling: Yeah, I want to stare at that thing for a while and just see what’s happening.


How many databases can I fit on one VM?

Erik Darling: Anker asks, “We use VM environment for our SQL 2014. I want to know the thresholds where I can say that this VM is done with new databases and time to spin a new VM up. Everyone says CPU disk memory, but how much? Do you have any SQL which can tell this? Thanks.” So guys, what thresholds do you look at for resource consumption before you say it’s time to get a new server going and put some other stuff on there? Another question would be, how many resources do you give each server and expect to see consumed by your workload?

Tara Kizer: It’s really hard to answer. If I’m looking at new hardware, trying to decide what size hardware to buy for a new database, I’m thinking about what this database is going to need and how much the company can afford. So I like to go all out as far as hardware goes. That way you can be on that hardware for maybe a couple more years than if you went cheaper. But as far as a SQL query, there really isn’t one. If you’re on older hardware and your CPU is up and maybe if you don’t have enough memory, look at your wait stats, have you tuned everything that you can? If you have, you may need to upgrade to newer hardware.

Erik Darling: So you get a button that sends up a new VM.

Tara Kizer: Yeah.

Richie Rump: That’s right.

Erik Darling: This one shuts the old ones down. What I would do is, like Tara said, start looking at your wait stats to see if you have hit a bottleneck in one of those areas. Or run sp_BlitzFirst. You can run that over spans of time: 30 seconds, a minute, five minutes, whatever. Just see what your wait stats are. See what your bottlenecks are during your load times. Because not every server is going to be the same, right? As far as just one single query that does it, you’re not really going to get that. There are queries out there that examine wait stats and aggregate them over time, but you really want to know what you’re waiting on when things are busy, right? You don’t care what things are waiting on when you’re doing maintenance. You don’t care what things are waiting on when you’re doing backups and DBCC CHECKDB or index stuff, because that’s not customer facing. I would run BlitzFirst, get a good idea of what my bottlenecks are, and try to tune from there. BlitzFirst will tell you if your CPU utilization is really high. With a VM, you typically want to be using a little bit more of your resources than you would with a physical box, because VM resources that are sitting around idle can actually be a performance drag. I believe David Klee has a good blog post on that somewhere; I don’t have it handy though. I would just make sure that my CPUs aren’t constantly pegged at 100 percent, and that I’m not constantly reading from disk and racking up a lot of page IO latch waits, or stuff like that. So I would measure each server differently and find out whatever bottleneck I’m hitting.
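The sampling Erik describes looks like this with sp_BlitzFirst (the free First Responder Kit procedure), assuming it’s installed on the instance:

```sql
-- Sample waits, file stats, and Perfmon counters over a 60-second window
-- during your busy load time:
EXEC sp_BlitzFirst @Seconds = 60;

-- Or look at cumulative waits since the last restart, to see the
-- long-term bottleneck picture:
EXEC sp_BlitzFirst @SinceStartup = 1;
```

Run the timed sample during peak load rather than during maintenance windows, for exactly the reason Erik gives: backup and CHECKDB waits aren’t customer facing.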

Tara Kizer: Just a quick correction. I said SP3 was released a couple weeks ago—I really meant SP2. I got my numbers mixed up for 2014.

Erik Darling: SP3 is 2012.

Tara Kizer: Yeah, exactly.

Erik Darling: This button downloads SP3 for 2012 and this one downloads SP2 for 2014. God, that’s confusing.


What are symlinks and how do they work?

Erik Darling: Monica asks—is this an Excel question? “What do you know about symlinks? We are running out of space on our SAN causing our backups to intermittently fail until we can get the space added. One of the suggestions was to move a database to another server and use a symlink.”

Tara Kizer: I don’t know what that is.

Erik Darling: Me either. Is that a SAN thing? Because that doesn’t sound like a SQL Server thing.

Richie Rump: No, it’s a symbolic link. Think of it as like a super…

Erik Darling: A pointer.

Richie Rump: Yeah, pointer. A shortcut, right?

Tara Kizer: Does SQL Server support that? I mean, is it a SAN technology that’s going to redirect it somewhere else?

Richie Rump: From the OS perspective, it thinks the file is actually there, but it goes somewhere else. I’ve never tried it with SQL Server. It seems like it could be slow. I don’t know, but it’d be interesting. Go ahead and give it a try and see what happens.

Erik Darling: Yeah, I don’t know, I would be very concerned about… Okay, so you want to move a database to a symlink. No. It sounds like a horrible idea. Because where would the symlink be? It sounds like it would be physically further away, right?

Tara Kizer: Is it another SAN? I just wonder if this increases your risk of database corruption.

Erik Darling: Yeah, I would be concerned about a lot of things.

Tara Kizer: I’ve never heard of it being used for SQL Server.

Erik Darling: Yeah, I would rather point my backups to a symlink than anything else.

Richie Rump: Well, now with SQL Server on Linux, you may get your chance to symlink all day long with SQL Server.

Erik Darling: I always wanted SQL Server on Linux. I’m so excited. That was like the one thing missing in my life. I was like, wife, kid, good job… SQL Server on Linux, where are you? Because Windows Core wasn’t good enough. I need Linux. I want to bash everything. Run my cron jobs to defragment all my indexes. Dream of mine. It’s a dream we all live in.
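To make the symlink idea concrete, here’s a hedged sketch in Linux syntax (the paths are made up; on Windows, the equivalent tool is `mklink`, e.g. `mklink /D D:\Backups \\OtherServer\Share`):

```shell
# A symbolic link makes a path look local while the target lives elsewhere.
set -e
mkdir -p /tmp/real_storage
echo "backup bits" > /tmp/real_storage/full.bak

# /tmp/sql_backups now appears to contain the file, but it's a pointer
ln -sfn /tmp/real_storage /tmp/sql_backups

# Reads go through the link transparently
cat /tmp/sql_backups/full.bak
```

As the discussion above suggests, pointing backups at a link like this is one thing; putting live data files behind one raises latency and corruption-risk questions that are worth answering first.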


Should I use DNS CNAMES to mask server moves?

Erik Darling: Gregory asks, “Have you ever used DNS entries to mask your db server names to avoid connection string changes when you change or move hardware? Kicking the idea around. Thoughts?”

Tara Kizer: All the time. I’ve used it a lot. I’ve used it back in SQL Server 2000. I actually prefer, I think that right out of the gate you should be using DNS entries and not the actual SQL Server name because over the years you’re going to be upgrading and moving hardware.

Erik Darling: So are you talking about like a CNAME alias in DNS or something else?

Tara Kizer: A CNAME alias.

Erik Darling: That was his question too. So tell us more about it because we don’t have any other questions. So if you want to expound on this…

Tara Kizer: I can’t expound on the actual DNS portion of it, but yeah, we used it a lot. As a matter of fact, there is a way for availability groups—if your applications are running older database drivers that do not support the multi-subnet configuration, you can set up the DNS alias so that it redirects to itself. I believe that’s what we did at my last job. It was really fancy. It was a fancy DNS thing. I wasn’t the one who set it up like that, but a sysadmin knew what to do. But yeah, I’ve probably used at least 100 DNS aliases over the years.

Erik Darling: Nice.

Tara Kizer: I much prefer it.

Erik Darling: Yeah. So one thing that the CNAME alias is very common with is log shipping. That’s because log shipping doesn’t support any of the connection string fanciness that mirroring and availability groups do. So with log shipping, if you failover and you expect your application to just say, “I need the server,” then you need a CNAME alias so that log shipping can gracefully—or that your application can gracefully handle the log shipping failover to some other server out in the great wild west. This button has nothing to do with CNAME aliases or DNS. Nothing to do with it.
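For the curious, the pattern Tara and Erik are describing looks something like this (all names here are hypothetical):

```text
; DNS zone fragment: the app always connects to the alias
sqlprod.corp.example.com.   300   IN   CNAME   sqlbox047.corp.example.com.

; Application connection string never changes:
;   Server=sqlprod.corp.example.com;Database=App;Integrated Security=SSPI;

; To fail over or migrate, repoint the CNAME and wait out the TTL:
; sqlprod.corp.example.com.   300   IN   CNAME   sqlbox112.corp.example.com.
```

A short TTL (here 300 seconds) keeps the cutover window small; cached connections still need to reconnect before they follow the new record.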

Tara Kizer: You like your buttons.

Erik Darling: This button might end the webcast if someone doesn’t start asking a question or if Richie doesn’t have something really interesting to talk about.

Tara Kizer: We have eight more minutes.

Richie Rump: I think it’s just going to be funny to see you setting all that up in AWS. I think it’s going to be kind of a hoot.

Erik Darling: Oh no, I’m going to make you write the code.

Richie Rump: Oh dang it.

Erik Darling: Yeah.

Tara Kizer: I thought it was already set up.

Richie Rump: Oh man.

Erik Darling: No, because they’re going to do all sorts of different things. Like I want one click to do one thing and two clicks to do another thing. Three clicks is going to turn the TV on in the middle of the night and freak everyone out. I think you could actually have a whole Scooby-Doo episode of some dude just hanging out with IoT buttons. Just like: Ghost. Martians. Stair creaking. Chandelier drops. All sorts of stuff. Like pillowcase on coat hanger down the hallway. That’s what I would do.

Richie Rump: “If it wasn’t for you and your pesky kids…”

Erik Darling: You pull the mask off and it’s Jeff Bezos like, “Grrr!”

Richie Rump: “I would have stole that money from that old woman.”

Erik Darling: It’s a good idea. I think you could make that work. You could have a whole genre of mysteries surrounded by IoT buttons. By the way, I don’t get any kickback for showing these things. I just enjoy the way they feel in my hands. They’re like little maracas.

Richie Rump: You know, they only have like 1,000 clicks. So you don’t want to be wasting those clicks, man.

Erik Darling: I’m not even clicking. I just did the clicking action. It’s like practice clicking.
Erik Darling: Mike asks…

Tara Kizer: This is a good one.


What do you think of Ola Hallengren’s maintenance scripts?

Erik Darling: “What do you think of Ola Hallengren’s Maintenance Solution scripts and jobs?” We love them.

Tara Kizer: We recommend them to all of our clients. A small minority of our clients are already using them. So yeah, we definitely like them.

Erik Darling: They handle all of those grody broom-and-dustpan tasks that no DBA actually wants to do. So backup, DBCC CHECKDB, and index and statistics maintenance. Hooray. And it’s all written out. It’s very well coded. Much better than you could do on your own. It’s kept up to date. Totally free. All sorts of bug fixes and stuff comes out with new versions. So totally recommend them. They are awesome.

Tara Kizer: Just to give you a number. At my job three jobs ago, we had it on about 700 servers so it’s definitely, it’s probably on thousands of servers around the world.
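For reference, the solution installs as stored procedures you call from Agent jobs. A typical pair of calls looks something like this (parameter names are from Ola’s documentation; the paths and thresholds are made-up examples):

```sql
-- Full backups of all user databases, compressed, verified, 24-hour cleanup
EXECUTE dbo.DatabaseBackup
    @Databases   = 'USER_DATABASES',
    @Directory   = 'X:\Backups',      -- hypothetical path
    @BackupType  = 'FULL',
    @Verify      = 'Y',
    @Compress    = 'Y',
    @CleanupTime = 24;

-- Index and statistics maintenance that leaves lightly fragmented indexes alone
EXECUTE dbo.IndexOptimize
    @Databases           = 'USER_DATABASES',
    @FragmentationLow    = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE',
    @FragmentationHigh   = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @UpdateStatistics    = 'ALL';
```

The installer script also creates ready-made Agent jobs wrapping these, so most people just schedule those.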



What causes SOS_SCHEDULER_YIELD waits?

Erik Darling: Gregory asks, “You know anything about SOS_SCHEDULER_YIELD waits? I get CPU queue length waits on my VM db server sometimes.”

Tara Kizer: All right, so sometimes, but what is your overall wait stats since SQL Server has been online? Are you just seeing SOS_SCHEDULER_YIELD sometimes or is that like your top wait? Give us some more information.

Erik Darling: If it’s only SOS_SCHEDULER_YIELD, that’s a little weird, because usually that comes along with CXPACKET. There’s a lot of CXPACKET. But SOS_SCHEDULER_YIELD is of course when something gets on a worker to start running and for some reason it can’t keep running. Like it’s waiting for a lock on something else to release so it can run. It exhausts its quantum—it has four milliseconds to get everything it needs and keep on going. If it doesn’t get that, it goes to the back of the queue and waits its turn patiently, like a good Englishman, to go back and run again. So we would need some other wait stats along with SOS_SCHEDULER_YIELD to give you a better diagnosis.
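One number worth checking alongside this is signal wait time, since a high signal-wait ratio on SOS_SCHEDULER_YIELD points at CPU pressure. A minimal sketch:

```sql
-- Signal wait = time spent waiting to get back ON the CPU after the
-- resource was ready; a high percentage suggests CPU pressure.
SELECT TOP 10
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms,
       CAST(100.0 * signal_wait_time_ms / NULLIF(wait_time_ms, 0)
            AS decimal(5, 2)) AS signal_wait_pct
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```

As Tara says, look at this in the context of your top waits overall, not one wait type in isolation.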


Do we need to do CHECKDB on multiple copies of the same database?

Erik Darling: Mikal asks, I believe it’s Mikal [pronounces name like Michael], it’s M-I-K-A-L. If I said that wrong, I’m sorry. “We do a full DBCC on a restored production database weekly. Do we also need to do it on the actual production database?” No.

Tara Kizer: You do not.

Erik Darling: You can if you want, but…

Tara Kizer: Yeah, on these VLDBs you’re going to want to do that. You’re going to want to have a SAN snapshot or something that gets the database to another server and run CHECKDB there because you just can’t possibly run CHECKDB on very large databases.

Erik Darling: Yeah, so one thing I used to do was offload the full DBCC check to another server, but I would periodically do physical-only checks on my actual production. Just because they’re quicker and a little bit more lightweight. They’re still horrible, but they’re a little easier to stomach than the full run. So you could still run it on your primary once a month or once a week, but offloading is totally a legit strategy for the full DBCC check. As long as it’s restored from a clean, full backup every single time.
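The offload pattern Erik describes looks roughly like this (database and file names are made up):

```sql
-- On the offload server: restore last night's full backup, then check it.
RESTORE DATABASE SalesDB_Check
    FROM DISK = 'X:\Backups\SalesDB_FULL.bak'          -- hypothetical path
    WITH MOVE 'SalesDB'     TO 'D:\Data\SalesDB_Check.mdf',
         MOVE 'SalesDB_log' TO 'L:\Log\SalesDB_Check.ldf',
         REPLACE;

DBCC CHECKDB ('SalesDB_Check') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- On the primary: the lighter-weight periodic sanity check
DBCC CHECKDB ('SalesDB') WITH PHYSICAL_ONLY, NO_INFOMSGS;
```

PHYSICAL_ONLY skips the logical checks, which is why it’s cheaper; it still catches torn pages and checksum failures on the box that actually matters.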


Are there any gotchas with storing unstructured data in a relational database?

Erik Darling: We have a couple minutes left and a couple questions left. Joseph asks, drumroll, “Do you have much experience with people storing unstructured data in the database—CLOB, BLOB, or XML data—querying, retrieving, etc.? Does it perform well or does it have issues/gotchas? How does it compare to something like Hadoop?” Don’t know about Hadoop; I do query a lot of XML.

Tara Kizer: And XML parsing. I’ve seen some clients that do a lot of XML parsing. Unfortunately, it’s just slow in SQL Server so I have a hard time telling them what to do. Move all of that XML parsing into the application and don’t do that work in SQL Server.

Richie Rump: Yeah, don’t do it in SQL Server.

Erik Darling: But if you have to do it in SQL Server there are some things you can do to help it. One thing that I usually suggest is, if you are constantly depending on something like an exists query against your XML to give you an answer, then I make a computed column out of that and just query the computed column. XML indexes are also wonderful things. They can really help if you are looking for a few consistent paths. So check out the rules around XML indexes and see if they could be useful to you. One really big downside to XML stuff in SQL Server is that it always runs single threaded—it will serialize any plan. So you won’t see any parallel processing of XML data. That’s one reason why it’s always going to be a little bit slower and why you shouldn’t do it in SQL Server. But if you’ve got to, check out XML indexes and computed columns based on certain XML queries, because they’re usually better options than doing all that processing on the fly.
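A sketch of the computed-column trick Erik mentions, on a hypothetical Orders table (one wrinkle: computed columns can’t call XML methods directly, so the exist() test gets wrapped in a schema-bound scalar function first):

```sql
-- Wrapper function, because a computed column can't call OrderXml.exist() itself
CREATE FUNCTION dbo.OrderIsRush (@x xml)
RETURNS bit
WITH SCHEMABINDING
AS
BEGIN
    RETURN @x.exist('/Order[@Priority="Rush"]');
END;
GO

-- Queries hit the computed column instead of shredding XML on the fly
ALTER TABLE dbo.Orders
    ADD IsRush AS dbo.OrderIsRush(OrderXml);

CREATE INDEX IX_Orders_IsRush ON dbo.Orders (IsRush);

-- A primary XML index helps repeated path queries against the raw XML
-- (it requires a clustered primary key on the table)
CREATE PRIMARY XML INDEX PXML_Orders_OrderXml ON dbo.Orders (OrderXml);
```

The table, column, and path here are illustrative; the point is that the expensive exist() runs once per write instead of once per read.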

Richie Rump: We can just put everything in JSON. That’s faster in SQL Server, right?

Erik Darling: Yeah, totally! That’s the best way to do things! That’s the way everyone should… No.

Richie Rump: Or maybe it’s the exact same code.

Erik Darling: Yeah, get the BLOBs out of the database.

Richie Rump: Agreed.

Erik Darling: With that, I think I’ve had enough for one day. How about you guys?

Tara Kizer: Sure.

Erik Darling: All right, we are at time. It’s been lovely chatting with you all. We will see you—well no, we won’t see you next week because next week we’ll be at DBA Days.

Richie Rump: So we will see you next week.

Erik Darling: So we’ll see you in different ways at DBA Days.

Tara Kizer: Make sure you register, sign up.

Erik Darling: Yeah. We’re going to break all sorts of stuff. Adios.

Tara Kizer: Bye.

[Video] Office Hours 2016/07/27 (With Transcriptions)

This week, Brent, Richie, Erik, and Tara discuss speeding up performance, how to make SSRS highly available, what to do with SSIS packages when making upgrades, Windows Core, and how to hide data from DBAs when they have SA access.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


Office Hours Webcast – 2016-07-27


Should I use a covering index or an indexed view?

Brent Ozar: Shaun asks, “I have a number of tables that get loaded every night in an ETL process. Most queries only use a limited number of columns and rows from these tables. Should I use a covering index, an indexed view, or just put that data in a separate table?” Oh, it took me a minute. I had to read it a couple times. So most of your queries only use a few columns in a wide table. So how do you go about making that kind of stuff fast? I should start picking people, and Tara is reading it, so I’ll pick Erik.

Tara Kizer: Yeah, I’m still reading it.

Erik Darling: So an indexed view doesn’t really help you there. There are a lot of rules around indexed views, and it’s just hitting a single table. Indexed views really shine when you need to aggregate stuff and go crazy. So I wouldn’t recommend an indexed view there just because of the added complexity. Also, there’s—well, not a lot of—but there is some overhead when you insert or modify through an indexed view. We have a blog post about that hanging out somewhere on the site; I believe Kendra or Jeremiah wrote it. But there is that. So for you, my good man, I’m going to go with a covering index to speed up those queries. And if they really only use a limited number of columns and rows, you may even be able to get away with a filtered index to further drop down the amount of indexing you have on this table that’s getting cabobbled every night.

Tara Kizer: I was going to suggest a filtered index as well.

Erik Darling: Ha, I’m awake this week.

Tara Kizer: If it’s the same rows at least amongst most of the users.

Erik Darling: Yeah. There are some funky rules around filtered indexes too and we have many blogposts about those on the site as well so give those a read through if you’re interested in more.
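A sketch of the combined suggestion, on a hypothetical table (the column names and filter are made up to show the shape):

```sql
-- Covering, filtered index: the key supports the predicate/sort, the
-- INCLUDE list covers the SELECT list, and the WHERE clause narrows the
-- index to just the rows those queries actually touch.
CREATE INDEX IX_Orders_Reporting
    ON dbo.Orders (OrderDate)
    INCLUDE (CustomerID, TotalDue)
    WHERE IsActive = 1;
```

One funky rule worth knowing up front: the optimizer only uses a filtered index when it can prove the query’s predicate falls inside the filter, which is one reason parameterized queries sometimes skip them.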



Would these wait stats indicate an application load spike?

Brent Ozar: Breck says—Breck just throws a whole bunch of metrics out. It’s almost like he’s playing metrics bingo. He says, “100 percent CPU. Top wait. ASYNC_NETWORK_IO 40 percent. PREEMPTIVE_OS_PIPEOPS 31 percent. Would this indicate an application load spike?” Wow, that’s kind of tricky.


Richie Rump: I don’t have my Magic 8 Ball here. I need it. Where did it go?

Tara Kizer: ASYNC_NETWORK_IO almost always means SQL Server is waiting for the application to tell it to keep sending rows. The application is doing row-by-row processing. But if SQL Server is just waiting, it’s not churning through CPU during that time—it’s just sitting there waiting, holding the data. I don’t know, are you sure that it’s SQL Server?

Brent Ozar: Yeah, I like that. Could be something else burning the CPU cycles.

Erik Darling: Yeah.

Tara Kizer: And the preemptive OS is external, it’s an OS process for that preemptive wait.

Erik Darling: Yeah, so one interesting thing I’ve seen is that PREEMPTIVE_OS_PIPEOPS sometimes correlates to people using xp_cmdshell for stuff. So if you see high waits for that and someone is using xp_cmdshell, you could be waiting on xp_cmdshell to fire off some other application on some other commands to the OS. So I’d be careful there as well.

Brent Ozar: Good point.


Is 140MB usage in TempDB a problem?

Brent Ozar: Shaun says, “I have a SQL 2012 R2 standard cluster. It’s got about 250 gigs of databases. Tempdb has got eight files, 16 gigs of space. During business hours, tempdb only uses like 140 megabytes, is that a problem?”

Tara Kizer: It is not a problem. You could probably shrink down your files but 16 gigabytes is really small. Disks are cheap, right?

Brent Ozar: If it’s a problem, if you would like to fix it, just go create a bunch of tables or queries that dump stuff into tempdb. That problem will be solved right away.


Idle Chit-Chat

Brent Ozar: Next up, we have J.H., who asks, “Have you had a chance to check out super soldier Captain Randy Cramer, as he looks and sounds like your twin clone?” I have no idea who that is, so the answer is no. Immediately Tara goes and takes her headphones off, so she’s got to have like a super soldier Randy Cramer toy or something like that. No—is that true? No, she’s not. Tara, I thought you were going to come running back with a super soldier Randy Cramer thing.

Tara Kizer: No, sorry. I was turning off my fan in case that’s why my audio is bad.

Brent Ozar: Funny. Wes says, “Tara sounds like she’s in a bottle.” That’s because she is in a bottle. She lives in one of those build a ship bottles, it’s just a really large bottle.

Erik Darling: She is a resident SQL genie.

Tara Kizer: Does it sound better now?

Brent Ozar: No, I think it’s some kind of like USB audio thing. It’s like echoey but it might be that your gain is up too high or something.

Tara Kizer: I have no idea what to do.

Brent Ozar: Yeah, I know. It’s not that bad. It sounds fine, it’s not that bad. It’s like you’re literally in a bottle.


How do you make SSRS highly available?

Brent Ozar: Let’s see, Tommy says—Tommy is like [inaudible 00:04:24] initials champion here. SSRS, SSIS, HA, SQL 2016. All right, “Can you recommend articles to provide a good way to accomplish HA with SSRS?” How do you guys make SSRS highly available?

Erik Darling: I use Tableau.


Erik Darling: Get my own Tableau server, hang out with that.

Brent Ozar: That’s just mean, man. That’s like Power BI, everybody then has Excel at that point, it’s highly available. So SSRS is not cluster aware, so what you usually do is you stand up a bunch of little cheap VMs because SSRS also isn’t really resource intensive. A bunch of little cheap VMs and then you throw a load balancer in front of it, an F5, an IP… I’ve forgotten what the IP deal is. Then you just round robin across those. If one of them dies, who cares, it doesn’t matter. Licensing isn’t cheap on that but when you’re licensing VMs at the host level then it’s not quite so bad.


Is it common to see 8-10 instances on a 3-node cluster?

Brent Ozar: Ryan says—I need to take a deep breath as I ask Ryan’s question—“Our normal configuration is a large three-node cluster with about eight to ten instances set up as failover clustered instances.” I can already see us all kind of itching away from the webcam.

Tara Kizer: I have him beat on that by the way, I am embarrassed to say.

Brent Ozar: What was your typical when you were dealing with stuff?

Tara Kizer: No, no, I have him beat as far as number of instances on a cluster. I had a four-node cluster with 11 instances. It wasn’t even the worst I saw. It’s the worst I implemented. I was forced to do this.

Brent Ozar: “I was young. I needed the money.”


Brent Ozar: He says, “I’ve been told by a few different people that this configuration isn’t very common.” You have now been told by four more people. “Some vendors have scoffed at it. Is it really that rare in favor of say a two-node with a single default instance that runs as active/passive?” Think of it as like a bell curve. Normal people are out there and you’re definitely on the edge of the bell curve. It’s fairly unusual.

Tara Kizer: Yeah, I mean, the reason why I hated it back then is because we were on SQL Server 2005, and in order to install anything—SQL Server service packs, whatever—all four nodes had to be in agreement that they didn’t have to reboot again. It was insane to have to install a new instance or a service pack or hotfix. I had to reboot all four servers, usually like five times each, just to get the installation to finally agree that all four nodes were fine. It was horrible. But starting with 2008, you could do one node at a time. But yeah, what’s the purpose of having eight to ten on a three-node cluster? Why not just have several two-node clusters instead?

Brent Ozar: It’s a party.

Tara Kizer: Or, do you have duplicate database names, is that why you’re having to spread out across SQL instances? Why not just put them on a three-node cluster, use two SQL instances and then have all the databases from all eight to ten on those two?

Brent Ozar: Wow, you just blew my mind with the duplicate database name thing. I’m like, why would anyone ever do that—then [makes explosion sound].

Erik Darling: Oh, yeah. Yeah, because we’ve dealt with a couple of clients who use vendor products who all have their vendor name as the database name and they’re like, “No, we can’t separate things out. We can’t do these things because everything is going to this database.” It’s like, ugh, why?

Brent Ozar: That’s incredible.


Why does sys.dm_exec_requests show suspended?

Brent Ozar: Greg asks, “Why does sys.dm_exec_requests return a status of suspended when a query is still executing?” That’s a really good question. The first thing I can think of is like a BEGIN TRAN. Somebody does a BEGIN TRAN, executes something, and now we’re suspended. What else would do it?

Erik Darling: If it’s being blocked by something else, would it be suspended? Or is that still runnable?

Brent Ozar: Oh, I bet you’re right, I bet if it is blocked.

Erik Darling: Yeah, that would be my first guess.

Brent Ozar: I like that.

Erik Darling: Because suspended is like when it’s waiting on a resource or something else to go through, I think.

Brent Ozar: Oh, he says, “It’s in a query that runs in a restore.” Oh, I wonder if it’s waiting for file activity? Or if it’s part of, I don’t know—or it could be extended stored procedures. I wonder if it’s doing something preemptive?

Erik Darling: Yeah, if it’s during a restore, I would make sure that I have instant file initialization turned on to limit the number of things I would have to wait on during that process. You do that by giving the SQL Server service account the Perform Volume Maintenance Tasks permission in the security policy user rights assignment—secpol.msc, for the curious.

Brent Ozar: Nice.
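To see the pieces of the original question together—the status, the wait, and any blocker—a minimal sketch against the DMV:

```sql
-- Who's suspended, what they're waiting on, and who's blocking them
SELECT r.session_id,
       r.status,                -- 'suspended' = waiting on a resource
       r.command,
       r.wait_type,
       r.wait_time,             -- ms spent in the current wait
       r.blocking_session_id    -- nonzero when another session holds the lock
FROM sys.dm_exec_requests AS r
WHERE r.session_id > 50          -- skip most system sessions
  AND r.status = 'suspended';
```

A nonzero blocking_session_id matches Erik’s first guess; a wait_type like ASYNC_IO_COMPLETION or PREEMPTIVE_OS_* during a restore matches Brent’s.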
Brent Ozar: Scott sends in a URL for super soldier Randy whatever-it-was, Randy Cramer…

Erik Darling: I ain’t clicking that.

Brent Ozar: …who claims to have spent 17 years on Mars and spending three years serving on a secret space fleet. You know, we do look a lot alike in this webcam and his picture. I do have to give it to you. Suddenly, I feel like I need to put on a nicer shirt. J.H. says, “After—”

[Erik laughs]

Brent Ozar: You looked at the picture, didn’t you? We are disturbingly alike. That’s totally going in the webcast notes. I’m going to reply to all just so everyone can see it. “This is Randy…” So then everybody can go click on that link. So it’s now in the answered questions for us.


Why can’t I shrink this file down from 5GB to 2GB?

Brent Ozar: J.H. says, “After I shrink a datafile to its minimum size…” Stop doing that. “I.e., five gigs within the Management Studio GUI, how come the space used is smaller afterwards?” Oh, wow. So he did a shrink to five gigs in the GUI but the space used… man, I had to read that three times to understand what’s going on. Let me rephrase J.H.’s question. So he’s got a file and when he used the GUI it says you can only shrink it down to five gigs. “But when I look at the space used, it’s only using two gigs, why can’t I shrink it down further?” I bet it’s a log file.

Erik Darling: Yeah, so check sys.databases and see what the log_reuse_wait_desc is. So just select the database name and log_reuse_wait_desc from sys.databases and see if you’re waiting on something weird. Like if you used to have replication but you didn’t remove replication right, it could be waiting on that. If you used to participate in mirroring, or if you haven’t taken a log backup—there are a lot of reasons why it may not shrink. You could also just try running a manual checkpoint and seeing if you can shrink it down, if the database is in simple recovery.
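The query Erik describes, written out (database name is hypothetical):

```sql
-- What's preventing log truncation (and therefore the shrink)?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabase';

-- If it says LOG_BACKUP, take one and retry the shrink:
-- BACKUP LOG YourDatabase TO DISK = 'X:\Backups\YourDatabase.trn';
```

Common values include LOG_BACKUP, REPLICATION, DATABASE_MIRRORING, and ACTIVE_TRANSACTION, which line up with the causes listed above.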

Tara Kizer: Why are you trying to go from five gigabytes to two gigabytes is my question. That’s just ridiculous. I can understand if it was 500 gigabytes and maybe you could get it down to 200 gigabytes, but from five to two, that’s just—it just seems like a waste of time to me.

Brent Ozar: He’s running SQL Server on a Raspberry Pi, he’s only got so much space left on his thumb drive. He needs to free that up.


What should I do with my SSIS packages when I upgrade from 2008 R2 to 2014?

Brent Ozar: Anker says, “Hi, I am Anker.” Hello, I’m Brent. Actually right now, all of us are Brent. “We are upgrading SQL 2008 R2 to 2014. We have some SSIS packages. Should I open them in SQL Server 2014? I’ve heard that if I open them in 2014 it’s going to update them to 2014. What should I do with my SSIS packages if I’m doing this big, fancy upgrade process? Should I move them to the 2014 server or leave them on 2008 R2?”

Tara Kizer: If you open them up in the newer version of the Visual Studio tools, that’s when it would upgrade them. And you don’t have to save those changes. But I mean, where are you opening up these files? If you modify your SSIS packages using Visual Studio 2008, just change the data source to point to the new SQL 2014 instance, save that, and you’re good to go. Just deploy that 2008 package to wherever your 2008 SSIS repository is.

Brent Ozar: This is one of the reasons I like having a separate server for it too.

Tara Kizer: Yeah.

Brent Ozar: Build your own SSIS server, leave it in a corner, do whatever you want with it on a separate version upgrade plan.

Erik Darling: This is one of the reasons I like having Tara, because I still haven’t opened SSIS.

Tara Kizer: I used it quite a bit at my last job for [inaudible 00:12:07] type tasks.

Richie Rump: I’m systematically forgetting all of my SSIS information.


How common are database snapshots?

Brent Ozar: Ryan says, “I’m on database snapshots. Have you guys used database snapshots? How common are they to see out in the wild?”

Erik Darling: Very uncommon.

Tara Kizer: I’ve used them.

[Speaking at same time]

Tara Kizer: I used them a couple of times. I think on Twitter or on the [inaudible 00:12:32] mailing list there are people that really like to use them and revert to them if they run into problems with an upgrade. We used them on the asynchronous database mirror for some internal users to run their horrible queries against, so they wouldn’t touch production.

Brent Ozar: Yeah, I’d have to say out of every SQL Server I’ve—we have a warning in sp_Blitz that tells us when there’s database snapshots and I can’t remember the last time I saw that warning. But that probably has more to do with the fact that people call us when they’re having performance problems and they probably delete the snapshots in order to avoid performance problems.

Tara Kizer: I’ve seen it a couple times in Blitz and then I immediately, I just ignore them, because there’s really no issue with having a snapshot. I mean as long as you’re refreshing them, you’re not just leaving them out there.

Erik Darling: Yeah, I’ve thought about using snapshots on a couple occasions for things, but my main gripe with them was that they’re a point in time when you create them, and then if you want to keep refreshing them, you have to create a new one and sort of drain users over. It just seemed like way too much of a process for what I was getting out of it.


More About Shrinking Databases and Rare Steaks

Brent Ozar: J.H. says, following up on his shrink question, that the database was actually 80 gigs in size and he was shrinking it down to five gigs. Five gigs is good. You can stop there. That’s cool.

Tara Kizer: Yeah.

Richie Rump: Yeah.

Erik Darling: Yeah.

Tara Kizer: Yeah, that makes more sense.

Erik Darling: I have pictures bigger than that.

Tara Kizer: It’s all the detail on the tattoos.

Erik Darling: Yeah, pretty much. I get real close up on those food pictures. I want Brent to see every grain.

Richie Rump: Every spice.

Erik Darling: I want Brent to be able to see my steak’s thoughts when I take a picture of it.

Brent Ozar: Because it’s still alive too.

Erik Darling: Yeah, it still remembers its name. That’s how rare it is.


How common is Windows Server Core?

Brent Ozar: Jeff says, “Have you any of you guys stood up a SQL Server instance on Windows Core?” I’ll raise you guys on that question. Have you ever used Windows Core?

Tara Kizer: I have.

Brent Ozar: Tara has, yeah.

Tara Kizer: It was a proof of concept, three jobs ago, and as part of our review cycle we had to have these goals. My manager set a goal to install SQL Server on a Windows Core server and figure out all the different commands you’d have to run to, say, set lock pages in memory—everything that we would normally do. I went and did that, and then the Windows team just wasn’t able to support it yet in our very large environment. We had like 700 SQL Servers at this job. So yeah, we had this proof of concept. I suspect that that job now has Windows Core deployed, because that was a big push due to having to patch Windows every month. If you get on Windows Core there’s less patching; you might even be able to skip some months. It was painful.

Erik Darling: Was that the patching next Tuesday people?

Tara Kizer: It was, but it was a different division. So the one division I referred to was the Thursday after and this other division was the following week.

Erik Darling: So from proof of concept to poof goes the concept.

Tara Kizer: Yeah, I think it was on Windows 2012 and it wasn’t as good then. Was Core available on 2008 R2 for Windows? I can’t remember.

Brent Ozar: I think so.

Tara Kizer: Maybe that’s what it was, it was just a lot harder. You didn’t have any GUI whatsoever. On the new stuff, you can bring in some of the GUI.

Richie Rump: Yep.

Erik Darling: It’s just Microsoft’s attempt to be more like Linux.

Brent Ozar: Yeah. I taught a class on it. Dandy Weyn of Microsoft was like, “I need someone to proctor something at TechEd, can you teach it?” I’m like, “Yeah, sure, what is it?” “Windows Core.” “Dang it.” I’m like, okay, all right, I’ll learn it. I go and learn it, and the whole class was an interactive lab where all the attendees had their own SQL Servers. The whole class, all the Q and A, consisted of people raising their hand going, “Can you tell me how to do this in Core? Can you tell me how to do this in Core?” “No, I don’t know, sorry, I don’t know. I have no idea.”

Erik Darling: “I can only tell you what’s in the book.”

Brent Ozar: Yes, I felt like that. You know, like you always go to New Horizons or Global Knowledge and the instructors have no idea how to do anything. I’m like, I’m that guy. I’m that guy. Ugh. I even asked, “Is there anyone more qualified to teach this?” “No, no one else wants it either.”


Is it safe to apply all up-to-date Windows patches?

Brent Ozar: J.H. asks, “Way back, Microsoft had issues with a couple of Windows patches which caused Java security connectivity. Do you know if they’ve fixed this and if it’s safe to apply all up-to-date Windows patches?”

Tara Kizer: Just by the question—it says “way back” and now he’s asking if he can apply the Windows updates. If there was a bad patch out there—I think the one that you’re referring to, which we did encounter, was fixed within the next week or two. So I mean, if you’re way far behind on Windows updates, yeah, you can go ahead and roll forward all those patches. I can guarantee my other job has installed them all.


How do you hide data from DBAs who have SA access?

Brent Ozar: Anker asks, “What’s the best way to hide data from DBAs when they’ve got SA access?” Oh, I love this question so much. So if someone has SA access, how do you hide data from them?

Richie Rump: Put it in Mongo.

Brent Ozar: Hides it from everyone.

Tara Kizer: Encrypt it. Other than that, there’s really no way that you’re going to be able to hide from us. We can get in.

Brent Ozar: And when you say encrypted, elaborate more on what you mean.

Tara Kizer: I mean either application encryption or is it transparent data encryption, scramble the data for us.

Brent Ozar: And TDE, we can still read it. TDE only encrypts it on disk.

Tara Kizer: That’s true, yeah.

Erik Darling: You could try Always Encrypted but there are so many holes in that and ways around the Always Encrypted columns that that’s not even practical…

Richie Rump: Encrypt it before it lands to the database. Salt it, and you’re good.

Erik Darling: And pepper.

Brent Ozar: A little eggnog in there, some cardamom. When you talk about doing this in the application layer, is it like easy to do for developers? Or is this something that there’s a bunch of different approaches to do and people have to figure it out?

Richie Rump: You know, I just had to do this in Node, which is JavaScript. Literally, it was installing a package and then calling one function, and everything was encrypted. I was using it for a checksum, but the concept is essentially the same. Encryption has gotten a lot easier. I was doing it 20 years ago and there was a lot of stuff we had to do back then. Now there are packages that really help us do this a lot more easily than we used to, so yeah, it’s fairly easy. All the base encryption libraries are in any major language. You should be able to do it without a problem.


Wait – why would you want to hide data from DBAs?

Brent Ozar: Wes asks, “Why would you want to hide it from DBAs? They should be trusted with all the data.” Here’s the deal, Wes. You didn’t spell trusted correctly. So if I can’t trust you to spell the word trust correctly, I don’t know that I can trust you. We’ve all been in really weird high-security-type environments, so it would be interesting to hear. My experience with that was usually that the developer stored stuff unencrypted in the database—it was stored unencrypted the whole way through. Management wasn’t happy about that. Management didn’t trust any of us, but they didn’t have a fast way of getting everything encrypted quickly. So it wasn’t just that they didn’t trust the DBAs; they didn’t trust anybody—especially when somebody rolls in in a really nice car, like DBAs do, and they’re like, “Where did you get the money for that?”

Erik Darling: Beep, beep.

Brent Ozar: “Because I sold the data to the Chinese.” Steve says, “Throw money at the DBA not to look at the data.” Man, we’ll not look at the data all day long for free. Tell us what your data is, we promise we won’t look at it. Dollar a month, that’s cool.

Erik Darling: It is safe from us.


Should I use a server level trigger to alert about sysadmin role changes?

Brent Ozar: J.H. asks, he says, “Sorry, I asked this a few weeks ago, but I missed the answer.” All right, so what I’m going to do is I’m going to wait like 60 seconds to see if you stick around just as a test. He says, “Are there any things to be aware of, such as downsides, when I’m implementing a server level trigger that emails the DBA team when the sysadmin role gets modified?”

Tara Kizer: I like it. It sounds good. You’re not monitoring an application table here; it’s just sending an email. It’s a server level trigger, and how often is the sysadmin role getting modified? This isn’t the approach that we took, though. We did a lot of auditing at my last job, making sure someone didn’t slide into the Windows administrators group, the sysadmin role, things like that. We just used PowerShell and queried the data, say every hour or every five minutes. We didn’t set up a trigger on the instance.

Brent Ozar: If I had to do it for an auditor, I might get nervous because anybody who knows enough about this to go hack you knows enough to disable your trigger.

Tara Kizer: That’s true.

Brent Ozar: They can turn off database mail. They can break database mail if they want to break your alerting. But if you’re worried about your incompetent coworkers or your sysadmin buddies who could screw something up, I love this. I’m all for this. DDL triggers are another thing, I’m not quite as fond of that. I mean, I’m fond of them, they just have drawbacks and gotchas because if the trigger fails then all of a sudden somebody’s code fails and then you know, it causes a mess.
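
A sketch of what J.H. describes might look like this, assuming SQL Server 2012 or later (where the server role DDL events exist) and Database Mail already configured; the profile name and recipient here are hypothetical, and as Brent points out, anyone with enough rights can simply disable the trigger:

```sql
-- Sketch of a server-level DDL trigger that emails the DBA team when
-- server role membership changes. The Database Mail profile and
-- recipient names are hypothetical.
CREATE TRIGGER AlertOnServerRoleChange
ON ALL SERVER
FOR ADD_SERVER_ROLE_MEMBER, DROP_SERVER_ROLE_MEMBER
AS
BEGIN
    -- EVENTDATA() returns the full XML describing who changed what
    DECLARE @details NVARCHAR(MAX) = CONVERT(NVARCHAR(MAX), EVENTDATA());

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'DBA_Alerts',
        @recipients   = N'dba-team@example.com',
        @subject      = N'Server role membership changed',
        @body         = @details;
END;
```

If the email send fails, the role change itself can fail too, which is the kind of gotcha Brent mentions with DDL triggers in general.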

Erik Darling: I ran into a really interesting one with a trigger recently, with snapshot isolation. Inside the trigger, there was a temp table getting populated and a clustered index getting created on it. But under snapshot isolation that wouldn’t work, because data definition language like that isn’t snapshotted, so it just failed immediately. So you had to take the index creation out of the trigger. It was crazy.

Tara Kizer: Was the error clear that that was the issue?

Erik Darling: Yeah. As soon as I took the index creation out of the trigger, it ran fine.
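
The pattern Erik describes can be sketched roughly like this; the table and trigger names are made up, and the database is assumed to have snapshot isolation allowed. Per his story, the index creation inside the trigger errors as soon as the snapshot transaction fires it:

```sql
-- Sketch of the gotcha Erik describes: a trigger that creates a
-- clustered index on a temp table, run under snapshot isolation.
-- Table and trigger names are hypothetical.
CREATE TABLE dbo.Orders (id INT PRIMARY KEY, amount INT);
GO
CREATE TRIGGER trg_Orders ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SELECT id, amount INTO #changed FROM inserted;
    CREATE CLUSTERED INDEX cx ON #changed (id);  -- the problem DDL
    -- ... work with #changed ...
END;
GO
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
-- The trigger fires here; the CREATE INDEX fails because that DDL
-- isn't versioned under snapshot isolation.
UPDATE dbo.Orders SET amount = amount + 1 WHERE id = 1;
ROLLBACK;
```

The fix, as Erik says, was simply removing the index creation from the trigger body.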

Brent Ozar: Wow, and folks, if stories like that interest you, go search our site for “bad idea jeans.” We have several blogposts in the past where we’ve done crazy, stupid things with temp tables and triggers, because there was one third-party vendor, who Erik and I know very well, that did some wild things with temp tables. I knew that they needed an index, so I set up a server level trigger: whenever a table was created in tempdb, if it matched this definition, I would go add an index to it immediately. Talk about a bad idea.

Erik Darling: Wow. That was worse than my idea.

Brent Ozar: So bad. I had to give big, written instructions to the client: “I’m leaving here, but you’d better make sure you know this is in place because it’s going to break your code sooner or later.” Well, that’s all the questions we’ve got this week, everybody. Thanks for hanging out with us and we will see you guys next week on Office Hours. Bye, everybody.

Erik Darling: Adios.

[Video] Office Hours 2016/07/20 (With Transcriptions)

This week, Brent, Richie, Erik, and Tara discuss new Microsoft certs for data science, index rebuilds, replication, patching SQL server, and what to say to a manager that says he needs an active/active HA solution but has no idea what that means.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


Office Hours Webcast – 2016-07-20


How should I manage a thousand database schemas?

Brent Ozar: Joe O’Conner asks, he says, “Any suggestions on tools or methodology for standardizing a schema to accommodate a thousand plus instances that may all be slightly different?” Joe, let me ask a follow-up question too if you can return back and ask, do you want them to be different or do you want them to be the same? That will probably influence the answer that we give you there.


Why is replication showing high CPU use?

Brent Ozar: Let’s see here, next up. James Kelly says, “In SQL Server 2005…” I’ve got to stop using last names because somebody is going to ask a question they’re ashamed of—like James Kelly—who has SQL Server 2005. Poor guy, god bless him. It’s not a bad database platform, it’s not horrible. It’s better than 2000 was. He says sp_replmonitorrefreshjob causes high CPU when suspended mode on the distribution database. What should I look into in order to reduce CPU? Tara, I’m even sorry I read that but you’re the only one here that would be remotely qualified.

Tara Kizer: Yeah.

Brent Ozar: You know anything about that? I don’t know anything about that.

Tara Kizer: I haven’t encountered that, and I used replication for, I don’t know, ten years or so. What is meant by suspended mode on the distribution database? Is that when the jobs are not working?

Brent Ozar: I think he’s doing sp_WhoIsActive. I bet he’s doing sp_WhoIsActive.

Tara Kizer: Oh, but suspended doesn’t necessarily mean anything. “Causes high CPU…” Are you sure it’s that refresh job that’s causing the high CPU? If it is, I would start taking a look at the replication monitor and looking for what kind of latency you have in the queues. I would imagine you’ve got a large backlog and maybe that’s why it’s taking it a while to pull up the data.

Brent Ozar: That makes sense. So my webcam probably just froze. I all of a sudden got pouring down rain here, you could hear it in the building, just all of a sudden it pours down and immediately my internet dies. I’m like, oh, it’s going to be like that. It’s going to be like that.

Richie Rump: What kind of backwater [inaudible 00:01:41] are you in?

Tara Kizer: Yeah, I was going to say. Richie is like, it rains all the time.


Brent Ozar: Dennis says that MS certs are great at uncovering what areas you don’t know. Yeah, but also what areas you don’t need to know.

Tara Kizer: Or what areas you do know.


What do you think about the new Microsoft degree for data science?

Brent Ozar: Greg asks an interesting follow-up question. “What do you guys think about the new Microsoft certs for data science?” Did you guys see that new degree?

Erik Darling: Yeah.

Brent Ozar: Richie clearly saw it. Erik saw it.

Tara Kizer: I only heard about it, I haven’t seen it.

Brent Ozar: I didn’t look at it, I just saw the headlines and went, “Hmm, what?” So Richie, you made such a face. There must be a good reason you made that face, or a bad reason.

Richie Rump: Calling it a degree is weird to me. It’s just strange. I don’t understand it. I spent five years getting my four-year degree, so am I going to spend six years trying to get my seven-year MS degree? I don’t understand. I don’t understand it.

Erik Darling: Is it really that long term? I thought it was like hours.

Brent Ozar: Well, but it’s also the kind of thing you’re going to do spare time. It’s not like, I hope to god you’re not going to take out a loan and go do that kind of thing. Go spend $1,000 on exams or something.

Erik Darling: It was like $500 or whatever, or something. I don’t know, I didn’t read all the fine print.

Brent Ozar: There’s going to be like no training on it, third party for a while, because it’s all brand new. And that platform changes so fast. I’m like, I don’t get that. So I’m not keen on that.

Richie Rump: I mean, the basic stuff maybe, but is that what they’re going to teach? Are they going to teach me the basics of data science independent of their platform? Or are they going to say, “This is how we do it at Microsoft”?

Erik Darling: … golden hammer from Microsoft.

Brent Ozar: There was a statistics course in it, or like one statistics test. But then the rest was heavily Microsoft. Yeah, exactly. I was like, you want to know more than one toolset if you’re going to do the data science thing. In the database community, you can know just Microsoft and you can make a wonderful living for yourself. It’s fantastic. Data science is not like that: you don’t get to learn just one tool, you’ve got to learn a lot of tools.


Yadda Yadda Yadda

Brent Ozar: James says he’s waiting for his boss to get a credit card and register for the Senior SQL DBA class next week. Cool, very cool. We’re doing an online class there for four days.


Brent Ozar: Nate Johnson says, “SQL Server Management Studio 2016 says it won’t work with 2005 instances but it’s working okay for me. Have you guys used SSMS 2016 with 2005?” I will answer that one. No, we were doing La Cucaracha the day that 2005 was out of support. Erik was doing a countdown for that date. He could not wait. So we haven’t used it at all. Usually what that means with Microsoft is just that they don’t have it in their test matrix anymore. It usually works, there’s edge cases where it won’t but usually you’re okay.


Brent Ozar: Thomas says, “I think if you’re new, the certs are good because they show that you’ve been at least exposed to areas of SQL Server. In my day job, I don’t do any Analysis Services, Reporting Services, or Integration Services, so the certs will give me some exposure to them in an organized way.” You can die of exposure, you know that. That’s dangerous.

Erik Darling: Naked and afraid.

Brent Ozar: Naked and afraid, in indexes.

Richie Rump: Unsubscribe.


What should I do about an index rebuild that takes 90 minutes?

Brent Ozar: Michael Tilly asks a question that is really inconsistent, multiple parts, and a novella. So if you guys want to read into it, we’re going to read through it. Michael says he has a question about index rebuilds. He works at a large university, actually my alma mater (I don’t think you’re allowed to say that unless you graduate; I went to the University of Houston for three semesters), so yeah. “I have a large students table…” 35,000 students a year, if I remember correctly. “It has many, many millions of rows of active and inactive students. I have a maintenance plan that does a nightly index rebuild on the…” Holy cow. “On the…” Yeah, already we’re shaking our heads. “It seems to take an excessive amount of time, 90 minutes. Right now we’re rebuilding it online. I’m thinking that maybe I should turn it offline. What should I do?”

Erik Darling: I think you should turn that maintenance plan off.

Tara Kizer: Off, yeah, why are you doing this?

Erik Darling: Because my whole thing with what you’re saying is like you probably get a whole bunch of students once a year.

Brent Ozar: What are you trying to say?

Erik Darling: I’m trying to say that you enroll students once a year.

Brent Ozar: Some of them graduate.

Erik Darling: Yeah, but you enroll students once a year. So like you add in a new crop of students and at that one point, your table might be fragmented. Then after that, you might have some dropouts, but you just change them to inactive. You don’t like delete them from the table. I can’t imagine the students table is getting heavily fragmented aside from enrollment and graduation. Or like even then, graduation is probably just switching to inactive. So I just can’t imagine a way for fragmentation to even accrue that much on a daily basis.

Brent Ozar: I think if they had an index on estimated completion or estimated graduation date, my record would have moved around a lot during the course of me being there. But I think for the rest of the people, not so much.

Richie Rump: So what can he do, Erik?

Brent Ozar: Oh, yeah, good point.

Erik Darling: You would have to go back. I wrote a blog post recently, no surprises there.

Brent Ozar: What?

Erik Darling: Peanut gallery, all of yous. It was about when you want statistics instead of an index, but it touched a little bit on why index fragmentation isn’t really that important and up-to-date statistics are far more important. So what I would do in your shoes is stop rebuilding the index every night. I would stop rebuilding indexes pretty much in general, just update the statistics daily, and see how you fare. You’ll probably end up in the same place performance-wise, because it’s the updated statistics doing the work; rebuilding the index every night was just wasting hours and hours of your life and precious disk and CPU time.

Tara Kizer: And for the update statistics, because you have several million rows of data in that table, you could probably just get away with like a 20 percent, 30 percent sampling. You probably don’t need to do a full scan on that table.
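
As a sketch of what Erik and Tara are suggesting, with hypothetical table and statistic names:

```sql
-- Instead of a nightly index rebuild, update statistics with a
-- partial sample, per Tara's suggestion. Object names are hypothetical.
UPDATE STATISTICS dbo.Students WITH SAMPLE 20 PERCENT;

-- Or target a single statistic/index by name with a larger sample:
UPDATE STATISTICS dbo.Students IX_Students_Status WITH SAMPLE 30 PERCENT;
```

On a table of several million rows this typically finishes in a fraction of the time a 90-minute online rebuild takes.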


How do I resync AGs when replicas have different drive letters?

Brent Ozar: Nathan says he’s got an availability group and they very rarely have to resync their dbs. They’ve added new databases and “we also have to start syncing those. But we’re not able to do a full sync because the drive letters don’t line up anymore. For example, I have some drive letters that exist on the primary but not on the secondaries. Is there a way to automate this process without doing backups and restores and joining them manually?”

Tara Kizer: You have to do a backup and a restore, and say WITH MOVE, in order to get the different drive letters. So, no, there isn’t a different way.

Brent Ozar: Yeah, that blows.

Erik Darling: Rename your drives.

Tara Kizer: Yeah, rename your drives. Script this out so that it’s automated at least, you’re not having to do it manually.
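
A rough sketch of the backup/restore WITH MOVE approach Tara describes, with hypothetical database, path, logical file, and availability group names:

```sql
-- On the primary: back up the new database to a share both replicas can reach.
BACKUP DATABASE NewDb TO DISK = N'\\share\backups\NewDb.bak';

-- On the secondary: restore to that replica's drive layout, leaving
-- the database restoring (NORECOVERY) so it can join the AG.
RESTORE DATABASE NewDb FROM DISK = N'\\share\backups\NewDb.bak'
WITH MOVE N'NewDb_data' TO N'E:\Data\NewDb.mdf',
     MOVE N'NewDb_log'  TO N'F:\Logs\NewDb_log.ldf',
     NORECOVERY;

-- Then join it to the availability group (log backup/restore may be
-- needed first to catch up):
ALTER DATABASE NewDb SET HADR AVAILABILITY GROUP = MyAG;
```

Scripting this, as Tara says, is what keeps it from being a manual chore every time a database is added.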

Erik Darling: Yeah, there are actually a bunch of questions out there about how to automate adding databases to availability groups. There are some, not fully fleshed out scripts, but some pretty good outlines of scripts about how to get started. But you are going to have to account for the fact that you have different drive letters and that sort of thing when you’re writing your script.

Brent Ozar: I want to say there’s a trick with the SUBST command too. Like there’s this command line S-U-B-S-T that can let you make fake drive letters to point to other drive letters. Totally don’t quote me on that but just like google it to see. I’m sure it’s not supported but just as a fun trick.


Why isn’t Brent giving money to the University of Houston?

Brent Ozar: Michael from my fake alma mater comes back and says, “As it turns out, we do have constant fragmentation because we keep all our student history.” Yeah, I know, because you keep asking me for donations, which seems pretty stupid because I dropped out. Why would I give you money if I didn’t finish? He says, “We constantly update the home address and phone number and other info, so lots of updates.” Listen, you’ve got to unsubscribe me. I’ve asked several times. That’s not true. Although you did bulldoze the dorm that I was in and I was a little pissed off about that one.


How do I change account owners on an endpoint?

Brent Ozar: Let’s see here, Thomas says, “We have a former DBA who granted the rights on the service accounts to an endpoint.” I think he used his own personal service accounts on an endpoint, like mirroring or availability groups. “I can’t drop his login without changing that. Is there a way to change which service accounts have permissions on an endpoint without dropping and recreating the connection? I tried ALTER AUTHORIZATION and that didn’t work.”

Tara Kizer: What’s the endpoint being used for? I mean can’t you just rebuild whatever that is? Is it mirroring? Is it an availability group? What is it?

Brent Ozar: Yeah, it should be a brief outage, right? Just recreate it.

Tara Kizer: Yeah, exactly, get a maintenance window for this.

Brent Ozar: Yeah, I like that. And you want to be patching anyway. I’ll also take the crazy thought of, “Why change it?” Leave it under his account. As long as you have the password.

Tara Kizer: Or maybe rename it so that it’s like a service account now and not someone’s—you know, because the [inaudible 00:10:26] will remain the same. So it will appear as a regular account at that point.

Brent Ozar: The service account formerly known as…

Tara Kizer: Yes.


More about synchronizing a thousand schema differences

Brent Ozar: Joe comes back with the thousand schema thing. Joe says, “Are there any suggestions or tools or methodology for standardizing to a single schema that will accommodate slight differences in a thousand plus instances without losing any data?” Wow.

Tara Kizer: It’s a thousand plus SQL Server instances and trying to get them all into one database with different schemas? I’m confused what the question is.

Brent Ozar: You know what I bet he’s doing, Erik? I bet he’s doing what your favorite ISV does. So we blog about this: kCura Relativity, it’s a piece of software out there where customers are allowed to change tables, but your deployment scripts still have to work. How do their deployment scripts work, like when they drop a new version of their app?

Erik Darling: The last time I did it, the deployment scripts were in a numbered folder and called by a C# program—by some program. Don’t quote me on the C#. But it’s tough because you have to specifically change only your indexes. The last one didn’t go so well because there were some custom objects that they didn’t account for, which I had to go back and manually change. So I don’t know that I would recommend that process as an overall process. But, you know, they do alter tables directly from scripts and everything.

Brent Ozar: Run them in order based on their version upgrades. They also have dozens of developers to manage that kind of thing.

Erik Darling: Yep.

Brent Ozar: Lee asks, “I’m curious, why does the model database, the system database model, why does it have recovery set to full?”

Tara Kizer: Well that’s just because it’s the default for new databases. When you create a new database and you don’t provide all of the settings, it uses model as the template, and full is just the default that Microsoft has set. I really wish that they would switch it to simple. If they’re going to have everything default to full, then how about defaulting to setting up transaction log backups too?

Brent Ozar: Oh, yes. Yeah, I like that. We should have a connect item for that.

Tara Kizer: Closed. Won’t fix.

Erik Darling: Immediately.
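
For reference, checking and changing the default Tara describes is a one-liner each; this is a sketch, and changing model only affects databases created afterward:

```sql
-- Check model's current recovery model (the template for new databases):
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'model';

-- Make new databases default to simple recovery, per Tara's wish:
ALTER DATABASE model SET RECOVERY SIMPLE;
```

Only do this if simple recovery actually fits your recovery point objectives; full recovery plus log backups is there for a reason.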


Why does my log file grow even in simple recovery model?

Brent Ozar: Lakshmi says, “I have a database in a simple recovery model and we make a lot of changes on say Friday nights. The transaction log grows to like 20 gigs, why is it that my log file is growing even in simple recovery model?” Did I say simple recovery mode at the beginning? That’s Tara’s hot point.

Tara Kizer: It’s one of mine, yeah. Everyone says “recovery mode.”

Brent Ozar: Yeah, simple recovery model.

Tara Kizer: Your logging of transactions doesn’t change based upon your recovery model. It’s still the same amount of logging. So if you have 20 gigabytes of data changed, the logging is going to be the same with simple or full. What happens at the end of the transaction is what’s different between the two recovery models. With simple, that space is cleared from the transaction log after the transaction completes, whether it rolls back or commits. With full, it stays in there until something backs up the transaction log. So the size doesn’t change based upon recovery model.

Brent Ozar: If some yoyo does a BEGIN TRAN and never commits, your log is going to be growing for a while.
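
A quick way to check what is keeping the log from clearing, including the open-transaction case Brent mentions:

```sql
-- See why each database's log can't reuse space; ACTIVE_TRANSACTION
-- means an open transaction is pinning the log, even in simple recovery.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;

-- Find the oldest open transaction in the current database:
DBCC OPENTRAN;
```

If `log_reuse_wait_desc` shows `NOTHING`, the log is clearing normally and the 20 GB growth was simply the size of the Friday-night changes in flight.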


Does antivirus slow down SQL Server?

Brent Ozar: Matthew says, “Have you guys ever experienced noticeably degraded performance as a result of having antivirus installed on a SQL Server?”

Tara Kizer: Yes. Definitely. That’s why you need exclusions in place. At one point we had something like 46 exclusions on the database servers. I think it went even higher, I forget what the number was, 60s or 70s I think. But there’s a list of exclusions that you need to have in place on a SQL Server: the SQL Server executables, your MDF files, LDF, NDF, all of those… There are just tons of them, and I believe this is documented somewhere, what your exclusions should be. I don’t know where it is, though.

Brent Ozar: There’s a KB article. Google/Bing whatever for “SQL Server antivirus.” If you use the site: operator in Google and restrict it to Microsoft’s site, you’ll get the answers just from the Microsoft Knowledge Base.


Where can I read about lifecycle management?

Brent Ozar: Graham says, “Other than Microsoft’s content,” so I guess we know how he feels about Microsoft’s content, “are there any good end of life or lifecycle management references for SQL Server versions?” Did you guys ever see any good database documentation around when you should upgrade or how to handle upgrades other than Microsoft’s?

Erik Darling: Most of the ones that I’ve seen reference a lot of Microsoft’s stuff, like using the Best Practice Advisor and the Upgrade Advisor and using those as sort of jump off points for what you should do from then on out. Microsoft does, at least I think, a fairly decent job for the Upgrade Advisor stuff, like giving you the scripted out things that you need to do your migration with.

Brent Ozar: Tara, with all the SQL Servers that you guys managed, how did you do processes for upgrades? Did you like write out, “Here’s exactly what we’re going to do” and which order, or was it like different every time you guys did a SQL Server?

Tara Kizer: When I was at Qualcomm and I was the primary DBA for a specific division, it was really based upon what projects were in the pipeline and whether we could get an upgrade placed into that project. If they were doing a major release, maybe we could get that database upgraded. I’m sure to this day they still have SQL Server 2005 instances out there. I’ve been gone from them for about three years, but I’m positive just because of how many there were when I left. There weren’t any 2000, but tons and tons of 2005. Some of them probably just get stuck there. Not all companies care about not being supported by Microsoft. So what if you don’t have any security hotfixes? If you have a secure environment, you may not care about that type of stuff. Companies really care about getting releases out, making their customers happy. Do they really care about the SQL Server version behind the scenes? The DBA team cares, you know, what we have to support.

Brent Ozar: That’s why we still run Windows 95 here at Brent Ozar Unlimited. It works really well.

Richie Rump: Well I’m on 3.11, that Workgroup edition.

Brent Ozar: I’ve been trying to get him to upgrade. That Trumpet TCP/IP stack is just not…

Richie Rump: It’s an oldie but a goodie, man.

Brent Ozar: Oldie but a goodie, that Solitaire, no Minesweeper like 3.11.

Richie Rump: Hot dog theme, that’s what it is.

Brent Ozar: Oh my god, the hot dog theme is correct. So for those of you who get out in the community, if you ever get the chance, go hear Erland Sommarskog speak. Erland Sommarskog is a presenter in the SQL Server community and he usually uses the hot dog theme on his desktop. It’s just awesome. Comic Sans everywhere, pastel colors.

Tara Kizer: That would drive me crazy.


How often should I patch my SQL Server?

Brent Ozar: Adam asks, “What do you guys recommend for patching? What is a good balance of staying up to date with SQL Server patches while not just being on the hamster wheel continuously and doing constant updates? How often do you recommend patching SQL Servers?”

Tara Kizer: So based upon the large corporations that I’ve worked with, most of them want to do monthly patches. As a person who has to do the patches, I don’t want to do monthly patches, because it is painful when you have to do 100 servers, 700 servers. We had a lot of servers that had to be patched. It wouldn’t just be one person doing all this patching, it would have to be a team of people. It was disruptive. There was always some issue on some server where patches would not work, maybe failovers weren’t working that day. There was always some issue. It was just painful. It took way too much time. I wanted to do quarterly, but large enterprises want to ensure that their environments are secure. Now, that goes against what I just said about SQL Server 2005 and not having security hotfixes, and these are the same corporations I’m talking about. But yeah, as one of the people on the patching team, I did not want to do it monthly. I would much prefer quarterly. Oracle teams don’t do it monthly, and I don’t mean just one company. It’s very common for them to do quarterly or twice a year. Why is it so important on the Microsoft side? I mean, I realize it’s because of all the security bugs there are.

Erik Darling: The really good thing about quarterly patches too is that it gives you some time to apply them to a development environment and let them burn in, and make sure that they don’t do anything too funky. Because a lot of times a patch will get released and within the first 24 hours a lot of people will be raising their hands like, “It broke something.” So it gives you time to test things out as well. Quarterly is a good span of time to run across something awful.

Tara Kizer: Yeah, at one of the companies I worked for, the patches came out on Tuesdays, “Patch Tuesday” as everyone called it for Microsoft patches. Then by Thursday we had to have them installed in production. I mean, that was very aggressive. We ended up switching it to the following Tuesday, but still, it’s very aggressive.

Erik Darling: So was that the “see you next Tuesday?”

Tara Kizer: Yeah. I mean, yeah, they’ve been installed in a test environment and maybe there’s some automated testing that has occurred on those, but that’s not enough time to burn those patches in.

Brent Ozar: That would suck.

Erik Darling: That’s horrible.


Should I use Desired State Configuration for SQL Server?

Brent Ozar: Graham says he’s trying to move to DSC, Desired State Configuration for SQL Server upgrades and migrations. I haven’t seen anybody successfully use DSC for SQL Server. Stack was trying to use it and failed. They had to give up on it and they have really bright PowerShell guys over there. I don’t think it’s quite there yet for SQL Server. It makes total sense for Windows, just not quite there yet for SQL Server.

Tara Kizer: I’ve never even heard of that.

Brent Ozar: The idea is you programmatically declare what you want the server to end up as and then Windows takes care of patching, installing the right features and getting it to that point for you. It’s not quite there.

Tara Kizer: Isn’t that what System Center Configuration Manager does? I mean it just knows what you’re missing.

Brent Ozar: Yeah, except you’re also supposed to be able to declare in code the whole state you want to get to, server names, what IIS tools you have installed, SQL Server features, where you turn things on and off. Yeah, it’s pretty tough.


Thanks for putting the Everything Bundle on sale

Brent Ozar: Lakshmi says, “Thank you, I just made my manager purchase the Everything Bundle from your training. Hope I can learn much more from there.” Well what are you doing on our free webcast? Go over to the good stuff. Here we’re just answering questions. These people don’t know what they’re doing. Go learn the good things. No, but thank you for picking that up. That’s awesome. We have a good time building that stuff.


Why does the Database Tuning Advisor only tell me to drop indexes?

Brent Ozar: Anker says, “Hi, I’m using the database tuning advisor in SQL Server 2014 for recommendations but it’s only giving me drop index recommendations.” [Laughter]… listen, I was trying to keep a straight face while asking this question. “Does the DTA ever provide any index suggestions too?”


Tara Kizer: Why are you using DTA?

Brent Ozar: So why should he not use the DTA?

Erik Darling: DTA is craaaaazy.

Brent Ozar: How else are you going to get those?

Erik Darling: We have a tool called sp_BlitzIndex which is very helpful for finding missing index requests. There are some differences between it and DTA. Missing index requests are generated by a part of the query optimization process called index matching. The stuff that gets logged in the DMVs is per query; the stuff that DTA logs is per workload. You might also have something in your settings that’s not giving you index suggestions: you can click a bunch of options for what kinds of suggestions you want, even for indexed views, I found out recently. So that’s the difference between what they each do. But if it is only generating drop index recommendations and you have missing index recommendations enabled, I would really like to see what comes out of sp_BlitzIndex for you, because that’s going to be a wild time.
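
The per-query missing index requests Erik mentions live in DMVs you can query directly; this is a rough sketch of the kind of data sp_BlitzIndex surfaces, not its actual code:

```sql
-- Raw missing index requests logged by the optimizer since the last
-- SQL Server restart, roughly what sp_BlitzIndex reports.
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;
```

Treat these as requests, not commands: the optimizer suggests overlapping indexes freely, so they need human review before creation.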

Richie Rump: Yeah, I have a blogpost that I was working on and it introduces a script that prevents the DTA indexes from being created.

Brent Ozar: So the blogpost, I don’t think it’s gone live yet.

Richie Rump: No, I was working on it. I haven’t loaded it yet. Still writing it. But the script is—I wrote the script. The script is great.


Nice video on the Dell DBA Days blog post

Brent Ozar: Greg Smith says, “Great video on the Dell DBA Days blogpost today.” We’re excited about that. We’ll all be down in Round Rock sweating through our shirts in August. We’re going to try and stay in the air conditioning.

Richie Rump: What else is new, Brent?

Brent Ozar: What else is new.


What would you say to a manager who wants…

Erik Darling: There’s a really good question up top, if you don’t mind.

Brent Ozar: Oh yeah, who asked?

Erik Darling: It’s a Scott Kelly question. “What would you say to a manager that says they need an active/active HA solution but has no idea what that means?”


Erik Darling: I think the most diplomatic way to handle that is to ask them to sketch out what that would look like and provide a budget for it, then go from there. Just don’t walk into your manager’s office and say, “You don’t know what that means.” Ask them to spend some time sketching it out, you know, a Visio diagram or just a whiteboard of what it would look like and how much they think it would cost. That’s where I would start if I were in your shoes.

Tara Kizer: This is a topic that I had to deal with a few times at one of the companies. Every few years management would be like, “We want to do active/active.” So every single time, we would go to the vendors and say, “What can we do to make this happen?” Here comes the price tag and we’d send it to management and they’d say, “Oh no, never mind.” It is very expensive. We’re talking two million dollars probably.

Erik Darling: In hardware, not just licensing.

Tara Kizer: Yeah, exactly. That was several years ago, the last time I looked at it.

Brent Ozar: And Graham, if you go to our site and click First Aid up at the top, we have a download pack. We have a high availability and disaster recovery worksheet in there that helps you sketch out how much data you’re willing to lose and how long you’re allowed to be down for. One of the options is a zero/zero failover solution. It’s a million bucks and up. So just be honest and talk through it with the manager. The other thing I’d ask the manager is, “What’s driving that? What is it that’s making you want that?” It may just be that they want cost savings or whatever, and I’d say, “All right, so can you introduce me to someone else you’ve worked with at one of your past places who can help us design that?”

“Well, I’ve never worked with anything…”

“Oh, me neither. Sounds like it’s a good time for me to go to training.”


Are a lot of companies running SQL Server on Windows Core?

Brent Ozar: Let's see. I had another question in here. Tom asks, "From what your team is seeing, have many companies started using Core installations yet? Windows Core." Have any of you guys seen SQL Server on Windows Core yet?

Erik Darling: Nada.

Tara Kizer: I started playing with it as far as what we would have to do to install everything and what does a database team need to learn. But this was back on 2008 Core and it wasn’t very good.

Richie Rump: Yeah.

Tara Kizer: Yeah, so we abandoned it, and I'm sure that company is now using Windows Core by now. We really wanted to get to it for Microsoft patching, because there's a lot less to patch on Windows Core than there is on the full versions.

Richie Rump: Yeah, same thing here, where I was at a company and they were testing it. The IT folks really wanted to go Core and the DBAs were like, "Whoa, wait a second. I don't know how to handle some of this stuff." They tested it and they were like, "Maybe not." So, they abandoned it.

Tara Kizer: Was it Windows 2008 Core where it first was released?

Brent Ozar: Yeah, I think so.

Tara Kizer: Yeah. It was terrible. We were researching how to set things like lock pages in memory via code—all of these things that you have to do on the database server when you set up a SQL Server. Once it's there, you probably have some PowerShell stuff. But back then, it was just painful. I mean, we figured it all out, but we decided we just didn't want to support it yet, so we were waiting for Windows 2012 Core.

Brent Ozar: If I remember right on 2008 Core you also had to enable TCP/IP through the SQL Server configuration manager. And even that [inaudible 00:25:39] goes through the GUI so.

Tara Kizer: Yeah.


Do we have to pay for the First Responder Kit?

Brent Ozar: Anker says, "Hi, I love using your scripts from the First Responder Kit. I want to use them in production to get performance recommendations, but my manager stopped me from doing so after reading your license agreement. Do we have to buy it from you?" No, it's totally free. In fact, if you go to the site, it has the open source license on there. When you get there, click on the GitHub link at the top right. It has a license that's very normal for open source products. You are free to use it in any capacity that you want as long as you keep that copyright in there. If you try to, for example, bundle it into a paid product and you don't include that note of where it came from, then it gets a little sketchy. But if you want to take over the world with SQL Server using Service Broker and linked servers, whatever you want to use it for, go knock yourself out.


How do I grant access to stored procedures for low-privilege users?

Brent Ozar: Nate says—this will be the last one that we take, because as soon as I see a question that says, "I don't know if you have time left," that means I probably should have read it before I took it. It starts, "When I was a young boy…" No—he says, "If I have a stored proc that needs to be executed by a low-level user but requires higher permissions, and I don't want to elevate the low-level user…" He wants to give people access to run stored procedures, but they need to stay peons. What does he need to do? If you go to the link—the URL sounds like it's an advice column; it's not, it's about a stored procedure—it has a little instruction on there on how you use certificates in order to grant permissions to users to run stuff. Really simple. Sounds horrifying, but it's super simple. That certificate word just sounds awful.

Erik Darling: We’re probably going to change that URL at some point now, huh?

Brent Ozar: At some point I need to move all the rest of the documentation over there because people are going to go, “What do you mean Ask Brent? That doesn’t make any sense.”

Erik Darling: Yeah. Which one?


Brent Ozar: You changed your name on here too now.

Erik Darling: I did, magic.

Brent Ozar: Oh my goodness. Wow. We look professional. Well thanks everybody for hanging out with us. We’ll see you guys next week at Office Hours. Bye, everybody.

Erik Darling: Bye-bye.

[Video] Office Hours 2016/07/13 (With Transcriptions)

This week, Erik, Tara, Jessica, Doug, and Angie discuss queries, installing multiple instances of SQL server on a Windows VM, using DENY Database rules, migrating databases to Amazon RDS, availability groups, using filtered indexes, and more!

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


Office Hours Webcast – 2016-07-13


How should I convert a database from Access to SQL Server?

Jessica Connors: From Steven Mitchell, he says, “For converting MS Access 2010 to SQL 2012 would using SSMA tool or upsizing tool be preferred? Or some other method?” I have a feeling that we don’t know.

Tara Kizer: We don’t use Access.

Angie Rudduck: Sorry.

Jessica Connors: That’s important.

Doug Lane: Having used Access in the past, I would shy away from that because Access has a nasty habit of rewriting things in a way that just really makes no sense. It might be logically equivalent but you’re probably better off just importing. Like if you want to do the quick and dirty way, just do import data from Access data source. Go through all the tables, import the data that way. Because you’re going to end up with—and maybe you want this, but probably not—like keys coming through, constraints, weirdness. You’ll end up with fields that get converted into datatypes you don’t want. So you’re probably better off just importing that data and manually rigging up how you want those columns to come in.

Jessica Connors: All right, enough with Access.


When will Microsoft release an updated BPA?

Jessica Connors: Question from Henrico. He says, “When will MS release BPA for 2014?” Do we know?

Erik Darling: Ask MS.

Jessica Connors: Microsoft.

Doug Lane: I’m not even sure what BPA he’s referring to.

Tara Kizer: It’s the Best Practice Analyzer but it’s already out. I used it at my last job. Maybe the question is about 2016 since that was just released?

Erik Darling: Probably.

Jessica Connors: Do not know.


What is free virtual memory?

Jessica Connors: Wow, James really wants us to answer this question. He asked it twice, three times. “What is free virtual memory? What is free virtual memory? How to resolve the issue?”

Tara Kizer: What are you trying to solve? We need some context.

Jessica Connors: Yeah, anyone know what is free virtual memory? Is that a proper question?

Erik Darling: I guess it’s when you get virtual memory for free from the virtual memory store.


Erik Darling: It was on sale that day.

Angie Rudduck: Yeah.

Jessica Connors: All right, James, we need more context.


Should I enable Lock Pages in Memory on VMs?

Jessica Connors: Onward to Edward. He says, “I see conflicting posts on the internet for lock pages in memory versus VMs. Thoughts? Pros and Cons? I’m thinking of adjusting the minimum memory to one half of max.”

Tara Kizer: I always set lock pages in memory as just, say, standard across servers. I worked in large enterprises, so we just had a document to fall back on; lock pages in memory was always set on the dedicated database servers. As far as min memory at half of max, that's also what our standard was at these large corporations.

Erik Darling: Yeah, I think for VMs more specifically though you have to be careful because if you’re not setting reservations at the host level, then I think stuff can get ripped away anyway.

Tara Kizer: Yeah, we never even worried about that on VMs but maybe that’s just… I don’t know.

Erik Darling: I mean, it’s a VM, you don’t care about performance anyway.

Doug Lane: Didn’t this come up as a question last week where Brent answered it too and he said I could go either way but I usually do it?

Tara Kizer: That’s right, yeah.

Doug Lane: Rings a bell.

Angie Rudduck: He said it was not the first thing to worry about or something like that.

Doug Lane: Yeah, yeah.

Angie Rudduck: If that’s the only thing you have to worry about, good job.
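(Postscript: the min/max memory settings from this discussion are plain sp_configure changes—the numbers below are just examples for a 32 GB server. Lock pages in memory itself isn't set in T-SQL; it's a Windows user-rights policy granted to the service account via secpol.msc or group policy.)

```sql
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Leave headroom for the OS; 28 GB max on a 32 GB box is a common starting point.
EXEC sys.sp_configure 'max server memory (MB)', 28672;

-- Min at half of max, per the standard Tara describes.
EXEC sys.sp_configure 'min server memory (MB)', 14336;
RECONFIGURE;
```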


What’s the fastest way to copy data between servers?

Jessica Connors: Question from J.H. He said, “Would select * into be the fastest way of copying large data, or some type of bulk method, or something else from one server to another via linked server?” He says, “My tests look like select into is very fast.”

Erik Darling: It depends on what version you're on. On SQL Server 2014 and up, select into can run in parallel, which is a nice added bonus. If you're going to move a large amount of data, I would probably shy away from select into and use a method that respects your transaction log a bit more and breaks the move up into batches.

Tara Kizer: I would actually use SSIS for this since it can do it in bulk. It has a fast way of transferring data between servers.

Doug Lane: What do you guys think about minimal logging in that case?

Erik Darling: If it works, it works. It doesn’t always work. Don’t forget that TABLOCK.

Doug Lane: Right.

Tara Kizer: The problem with minimal logging is you’re reducing your recovery points though. So, yeah, you can minimally log things but you don’t have recovery points in that window.

Erik Darling: Yeah, so if it’s a one-time data move and you don’t care about anything during that window, then go for it. But otherwise, if your database is in full recovery mode, you’re not going to get much out of minimal logging.
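(Postscript: a minimal sketch of the batched approach Erik suggests, assuming a linked server named SourceServer, a pre-created target table, and an ascending key—all names are illustrative. Each loop iteration is its own small transaction, so log backups can clear the log between batches instead of one giant select-into transaction holding it all.)

```sql
DECLARE @LastId bigint = 0, @rows int = 1;

WHILE @rows > 0
BEGIN
    -- Copy the next 50,000 rows past the high-water mark.
    INSERT INTO dbo.BigTableCopy (Id, Payload)
    SELECT TOP (50000) Id, Payload
    FROM SourceServer.SourceDb.dbo.BigTable
    WHERE Id > @LastId
    ORDER BY Id;

    SET @rows = @@ROWCOUNT;
    SELECT @LastId = MAX(Id) FROM dbo.BigTableCopy;
END;
```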


What’s the best way to recover free space from data files?

Jessica Connors: I’m going to butcher this name—I don’t know how to say it. “What is the best way to regain the free space in data files after deleting a large amount of data?”

Tara Kizer: Well, I mean, it’s shrinkfile. So a one-time shrink is okay if you have to delete a lot of data and you want to reclaim that space. If you don’t need to reclaim that space, leave it there. If you have enough storage, just leave it there because when you shrink the file down you’re going to be causing some performance issues, fragmentation. One time shrink though, go ahead and do it. DBCC SHRINKFILE.

Erik Darling: The thing with shrinking data files, at least when I’ve done it is that if you only shrink it to the point where you have free space, like if you just sort of truncate it to where your data ends or a little bit over that, like a gig or 512 megs over that, you don’t really see the fragmentation problems because you’re just getting rid of empty space at the end of the file. So you kind of don’t see the same problems, at least I haven’t. I just shrunk down a bunch of databases on my availability group VM thing because I blew up Stack Overflow with some horribleness. But I shrunk it down to just sort of where the [while 00:05:48] things ended and didn’t see much fragmentation change. So maybe that’s a 2016 thing though. Be careful with that.

Tara Kizer: Maybe. I’ve always just shrunk it down so that there was 10 percent or 20 percent free space that way autogrows didn’t have to occur immediately after we’re done shrinking.

Erik Darling: That’s a good plan.
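(Postscript: the one-time shrink Tara describes is a single DBCC SHRINKFILE call. Database and logical file names are examples; the target is in megabytes, so pick one that leaves 10-20 percent free space rather than shrinking right to the data's edge.)

```sql
USE MyDatabase;

-- Shrink the data file to ~100 GB, leaving headroom so autogrow
-- doesn't fire immediately afterward.
DBCC SHRINKFILE (MyDatabase_Data, 102400);
```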


How can I run cross-server queries without linked servers?

Jessica Connors: Question from our friend Justin. He says, “Other than using a linked server, do any of you SQL super geniuses know of a way to run a query from instance A against instance B?”

Tara Kizer: You can use OPENQUERY—it's just like a linked server—but usually we want you to do this work in the application. Have the application query the two different data sources.

Erik Darling: It’s not necessarily better, it’s just different.

Tara Kizer: Yeah. It does have a benefit of getting around an issue where with a linked server it can query the entire table even though you have a where clause on it. So if you have a ten million row table and your results set is only going to be ten rows, linked server can pull over those ten million first and then do the filtering. OPENQUERY can get around that issue.

Erik Darling: There's a really good talk by Conor Cunningham, one of the bigwigs at Microsoft in the SQL Server department, on the SQLBits website. The SQLBits conference puts all their videos online. He did one a few years back on distributed queries where he talks a lot about the pros and cons of different methods of querying across servers and how it works behind the scenes. You should definitely check that out if you're really interested in querying data across servers.
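(Postscript: a quick OPENQUERY sketch—instance, database, and table names are made up. The string is executed entirely on the remote instance, which is how it avoids the pull-everything-then-filter behavior Tara mentions.)

```sql
-- InstanceB is still defined as a linked server, but the WHERE clause
-- runs remotely, so only the matching rows come across the wire.
SELECT *
FROM OPENQUERY(InstanceB,
    'SELECT OrderId, Total
     FROM RemoteDb.dbo.Orders
     WHERE OrderDate >= ''20160101''');
```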


Why shouldn’t I install multiple instances in a VM?

Jessica Connors: Question from Raul. He says, “Are there any downsides to installing more than one SQL Server instance on a Windows VM?”

Tara Kizer: How many do you have, Erik?

[Erik puts up 10 fingers]

Tara Kizer: 10?

Erik Darling: More. I’d stick my foot up, I don’t want to gross anyone out.

Tara Kizer: On a test system, go for it. But on a production system, we don’t recommend it.

Erik Darling: Yeah, stacked instances in any environment, whether it’s physical or virtual is just bad news. If you’re already virtualized, why are you stacking instances? Spin up a new VM, it’s right there anyway.

Jessica Connors: I feel like we get that question a lot.

Angie Rudduck: Yep. That and shrinking, or replication, which there’s already one in there too.

Jessica Connors: Or, what are your favorite monitoring tools?

Erik Darling: I don’t mind answering that. That’s a nice question.

Jessica Connors: But it never changes. I mean, has it changed?

Doug Lane: Well, they're all nice questions.

Jessica Connors: There’s no such thing as a dumb question.

Erik Darling: I just assume when someone asks that question it’s a vendor plant. “They always mention us! Do it again!”


Doug Lane: “You don’t believe me? Watch, I’ll ask.”

Erik Darling: It’s the same thing every time, it’s amazing.


When would you use the DENY database roles?

Jessica Connors: Question from Samuel. He says, “What would be a scenario where you would use the DENY database roles?”

Erik Darling: Developers.

Angie Rudduck: Yeah, I think he says two things. He asks, "Why does DENY even exist? If you only give datareader, isn't that the same as denydatawriter?" But that's not necessarily true. I think the key, my understanding anyway, is that SQL Server applies the most restrictive permission. So if there's a denydatawriter, even if you have datawriter as well, you're getting denied. So I think it's kind of a safeguard, but I don't know.

Doug Lane: It’s an easy way of making exceptions security wise. So you say, “They should be able to write across all these databases, except I’m going to DENY this particular one.” So the DENY is very explicit whereas it can be used as sort of an exceptions tool to just kind of cut out, “All right, don’t give them this. Give them this, give them this. But they can do everything else that the other permission that I gave them allows them to do.”

Tara Kizer: We used the DENY at my last job for sensitive data. We had active directory groups. We had a lot of people in certain active directory groups and a lot of people in a different one. One specific group got the denial on certain columns in a table, like address information, emails, sensitive information. The other group was allowed to see it but if you were in both groups, you got the denial because DENY overrides the grant.

Angie Rudduck: I like that.

Doug Lane: I was going to say you’ll get a lot of questions on that if you ever take a Microsoft certification exam.

Tara Kizer: Oh really?

Doug Lane: Yeah.
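(Postscript: a tiny demo of "DENY overrides GRANT," with made-up user and role names.)

```sql
-- Put one user in both the grant role and the deny role...
ALTER ROLE db_datawriter     ADD MEMBER AppUser;
ALTER ROLE db_denydatawriter ADD MEMBER AppUser;
-- ...and AppUser's INSERT/UPDATE/DELETE statements now fail:
-- the DENY from db_denydatawriter always wins over the GRANT.

-- Column-level version, like Tara's sensitive-data example:
DENY SELECT ON dbo.Customers (Email, StreetAddress) TO SensitiveDataDeniedRole;
```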


How do I get data into Amazon RDS?

Jessica Connors: Question from Kimberly Lu. She’s migrating to Amazon RDS. She says, “My first experience migrating a database to Amazon RDS has been painful because I could not use backup/restore. I had to backup/restore to a local server, generate scripts, recreate users, import/export data using wizard. Do you have a less painful process?”

Tara Kizer: Nope. It’s RDS. If you’re using Amazon EC2 it would be easier since it’s a virtual machine, but yeah, RDS, it’s like Azure SQL, right? Limited features.

Erik Darling: Yep.

Angie Rudduck: Somebody else asked about going to the cloud and how to do it. I think that that’s the key. If you’re doing the VM versus the hosted database, if you go to the hosted database at all, you have to script all of your data into or out of. There’s no backup/restore to hosted databases. So if you’re not prepared for that, maybe don’t go to a hosted database.

Erik Darling: Yeah, the one sort of exception is going to Azure. Well, you can use it for RDS too I believe. There’s something called the SQL Database Migration Wizard. It’s available on CodePlex. It just does sort of a BCP out and in to Azure. I’m pretty sure you can use it for RDS as well but I haven’t tried. So it’s okay for that but it’s still slow if you’re migrating a lot of data because you are doing a BCP out, writing a file, blah, blah, blah. Not the fastest thing in the world.


Can I use different drive letters with Availability Groups?

Jessica Connors: Availability group question, question from Tim Smith. He says, “One of the benefits of availability groups is not having to have identical hardware. Does that include drive/drive letters? I tested and it looked like you could have a database on different drives on each node but a recent outage has me questioning that.”

Tara Kizer: I’m pretty sure you can. I’ve always had the same drives across because it’s just so much easier with restores but I’m pretty sure it acts just like mirroring where you can have different drives because you’re telling it what to do during the restore command. I’m not too sure what issue he’s referring to on the failover though. But I’ve always used the same drives. I’m pretty sure that it supports it across different drive letters.

Erik Darling: Yeah, anytime that I’ve had to build a cluster or anything else, I’ve always made sure that the drive—even like different servers using the same sort of hardware, I always have the drive letters be the same and do the same pathing for my database and log files, just so that at least it’s consistent across all my stuff and I know where to expect things to be. So you can do it, but it’s not really a best practice to have things be totally different across servers like that, especially when you are counting on a one-to-one copy when you get over there.

Tara Kizer: Yeah, we would need more info on what happened when you did the failover.

Angie Rudduck: He said, “On the restore of the database after failover the NDF file was gone.” I feel like there was a different problem there. Like maybe you had a problem with your drive and somebody actually deleted something. Maybe there wasn’t just a failover. Maybe that’s why you had a failover.

Tara Kizer: I wonder if the NDF came later after the availability group was set up? If another file was added and it didn’t get added to the secondaries? I don’t know. I haven’t tested that.

(Postscript: yes, this is where different drive letters will fail.)


Can you turn off auto-updating for SQL Server 2016?

Jessica Connors: Question from Scott Kelly. Isn’t that an astronaut? Scott Kelly? Didn’t he spend a year in space?

Erik Darling: Yeah.

Jessica Connors: Scott Kelly is here, guys.

Doug Lane: That’s Commander Scott Kelly to you.

Erik Darling: How’s your bone density?

Jessica Connors: Yeah, they did an interesting test on him and his twin. Let’s see here. “Is SQL 2016 auto updating not able to be turned off?”

Doug Lane: Oh, boy.

Angie Rudduck: Like Windows updates but SQL updates?

Erik Darling: You can turn SQL updates off. You can turn automatic updates off for anything. Have you had an update? Because I haven’t.

Doug Lane: So there’s the phone home thing that I think in—what is it—Developer and in Express editions you can’t turn it off. If I remember right. But I don’t know if that’s the same as the auto update. I think you can disable auto updates from SQL Server.

Erik Darling: Yeah, the Customer Experience Improvement Program.

Jessica Connors: Yeah. Scott Kelly wants us to know that he did not work for NASA.

Angie Rudduck: No, he did. He’s not the astronaut but he did work for NASA.

Doug Lane: How about that.

Jessica Connors: Oh, he did work for NASA. That’s awesome.

Tara Kizer: I went backpacking with a guy who currently works for NASA this past weekend. He’s a physicist or something like that. It was really cool.

Angie Rudduck: Did he do better at the hike?

Tara Kizer: He stayed back with the person that was having high-altitude issues, or altitude sickness issues, I should say. He actually had to carry her out on his back because it was so bad. Yeah, I’ll tell you guys later.

Jessica Connors: You did a fourteener, right? You made it to the top…?

Tara Kizer: I completed it. It was rough. It was really rough.

Jessica Connors: Good job, Tara.


Can you do columnstore indexes on top of GUIDs in SQL 2012?

Jessica Connors: From Joshua. He says, “Is there any workarounds that allow columnstore indexes with a G-U-I-D in SQL 2012?”

Tara Kizer: GUID.

Jessica Connors: I’m like, it’s not GUI.

Tara Kizer: I don’t know, Erik?

Erik Darling: What is the question? “Are there any workarounds to using a columnstore index with a GUID?”

Doug Lane: Do they not allow GUIDs in 2012? I don’t even know.

Tara Kizer: I don’t like GUIDs so I don’t use them.

Erik Darling: Yeah, columnstore in 2012 in general is kind of bogus. You couldn’t update it and there were a lot of limitations so probably not. I haven’t tried explicitly to use a GUID with it and I certainly haven’t tried to use a GUID with it in 2012 because that was just a no fun version of columnstore. It was just lackluster.


Do you recommend Converge Solutions?

Jessica Connors: Question from Julius. He says, “Do you folks recommend running SQL Server on Converge Solutions? Any experience, drawbacks, benefits, etc.?”

Tara Kizer: I don’t even know what that is. Do you guys know?

Doug Lane: I have never heard of it.

Jessica Connors: All right, never heard of it, Julius.

Doug Lane: Sorry.

Tara Kizer: Our answer is no.


Tara Kizer: I’ll have to look that up.

(Postscript: I’m guessing they weren’t referring to a brand name, and instead just general converged infrastructure, which is a pretty big/broad topic.)


Why is only one of my TempDB data files growing?

Jessica Connors: Ronny… oh boy, this looks long. Do you want to read it, Angie?

Angie Rudduck: Oh, sure. So Ronny says, "I have multiple tempdb files and only one file is growing—actually the second file that was created. All other files are still at their initial size." He has trace flags 1118 and 1117 enabled—oh, no, just 1118—and wants to know if he also needs to add 1117. Does he need to delete the other files and start over before adding the additional trace flag?

Erik Darling: No. What you need to do is make sure that you grow all the other files to the same size as the second file. Otherwise, all of your queries will still be attracted to that larger data file and it will keep growing. So if you can, grow out your other files to match the size of the second file. If not, you're going to have to deal with the kind of painful and possibly corrupting experience of shrinking a tempdb data file.

Tara Kizer: They actually are saying it hasn’t been a problem—or, Paul Randal said corruption on shrinks has not been a problem in several years. They fixed it with 2012 I believe he said, or maybe even earlier. They just never made it public.

Erik Darling: Oh, okay, well, that’s good. I hope you’re using 2012 or later then, pal.

Tara Kizer: It might have been earlier. I can’t remember if it was an email or what it was.

Angie Rudduck: Yeah, if only they publicized when corruption is not happening anymore.

Erik Darling: They should just tell people that shrinking databases will cause corruptions so everyone will shut the heck up about it.

Doug Lane: So you also want to make sure that your file growths are set to zero, otherwise your tempdb files will keep growing. You don’t necessarily want that. It’s better to set it at the right size and then just kind of let it go with that because that may be why you ran into trouble in the first place is that you’ve got file growth on that one file and not on the others.

Tara Kizer: I actually quote/unquote disagree with that.

Doug Lane: Okay.

Tara Kizer: It's not that that's bad, it's just that it has to do with monitoring. The Brent Ozar website says to grow your files all the way out to fill whatever drive tempdb is on. If you do that, then your monitoring software is going to be alerting and the on-call DBA is about to get paged for a disk space issue. Now of course, you could have an exception for the tempdb drives, but I try to avoid exceptions. So instead, grow your files out to what they need to be and then keep autogrowth on. Have the file sizes be identical, start out with the same exact autogrowth, and they should grow proportionally based upon free space. You shouldn't have an issue—and if you do, I'd wonder what's going on; you might have some kind of weird thing happening. I like to keep 20 percent free space to avoid alerts. Then when you do get an alert, grow out the mount point or drive.

Angie Rudduck: That's for good IT teams that have alerts.

Tara Kizer: Yes.

Angie Rudduck: If you don’t have alerts, Doug has a good point. If you never allow them to grow, you never have your disk go over space, but your tempdb could go offline.

Tara Kizer: That's the problem with that solution: if some wild query runs and needs more space in tempdb, it's going to fail. Whereas if you had free space—well, if you have alerts in place, DBAs are going to start receiving alerts that you're running out of space, and you might be able to add the space before the query fails.
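(Postscript: growing the other tempdb files out to match, as Erik suggests, is just ALTER DATABASE—logical file names and sizes below are examples. Trace flags 1117/1118 are set as startup parameters (-T1117 -T1118), not in T-SQL.)

```sql
-- Make every data file the same size with the same fixed growth increment,
-- matching the file that grew.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 8192MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev3, SIZE = 8192MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev4, SIZE = 8192MB, FILEGROWTH = 512MB);
```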


How can I move all my nonclustered indexes to another filegroup?

Jessica Connors: Speaking of running out of disk space, Gordon asks, “I’m running out of disk space for a one terabyte database with a single data file. So the plan is to move clustered indexes to a new file and a new filegroup. Given that everything is currently on the primary filegroup, would it be a good idea to move all the non-clustered indexes as well?”

Erik Darling: I would set up two different filegroups, actually. If you're doing this sort of work to get some benefit out of it, and you want to be able to restore the database as files or filegroups separately, you're not going to want everything in the primary filegroup, because you need primary online first for the database to be accessible before you can restore other files and filegroups. So I would create one filegroup for clustered indexes and one for nonclustered, and move each set into its own, so I can keep primary small with just the system objects in there. Be careful, depending on what version you're on, because rebuilds can't all be online if you have max column types prior to 2012. So if you're planning on a seamless move, mind your datatypes.
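(Postscript: a sketch of the filegroup move Erik outlines—database, file path, sizes, and index names are all illustrative. Each nonclustered index gets rebuilt onto the new filegroup; ONLINE = ON requires Enterprise Edition, and watch the pre-2012 max-datatype caveat he mentions.)

```sql
-- Add the new filegroup and a file for it on the new disk.
ALTER DATABASE BigDb ADD FILEGROUP NonClusteredFG;

ALTER DATABASE BigDb ADD FILE
    (NAME = BigDb_NC1, FILENAME = 'E:\Data\BigDb_NC1.ndf', SIZE = 100GB)
    TO FILEGROUP NonClusteredFG;

-- Rebuild an existing index in place onto the new filegroup; repeat per index.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    WITH (DROP_EXISTING = ON, ONLINE = ON)
    ON NonClusteredFG;
```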


Are there any gotchas with SQL 2014 SP2?

Jessica Connors: Did SQL Server 2014 SP2 just come out? I thought they weren’t doing that anymore. You guys just told me they weren’t doing that anymore.

Tara Kizer: It will be rare.

Erik Darling: Starting in 2016.

Jessica Connors: You’re liars. Gotcha. “Any issues with this SP that y’all are aware of?”

Tara Kizer: Not yet.

Erik Darling: No, it’s a lot of fun.

Tara Kizer: It’s too new.

Angie Rudduck: It is too new.

Erik Darling: A lot of the stuff in Service Pack 2 was oddly catching SQL Server 2014 up to Service Pack 3 of 2012. It did add a couple of other things, but most of it was just playing catch-up with a lot of the stuff you can see in execution plans, and finding out about memory grants.

Tara Kizer: Oh, the memory grant issue. Memory grant columns are in Service Pack 2?

Erik Darling: They are and I’m going to be updating stuff.

Tara Kizer: Yeah, I’ve got a client that’s been waiting on those columns so I’ll let them know. Just real quick on the Service Pack 2 thing. When I said it’s too new to know about issues, just remember that service packs are fixing hundreds or thousands of bugs. So by not installing it, you’re more at risk of not installing it than you are of installing it. So service packs, hotfixes aren’t necessarily a bad thing and you don’t necessarily have to wait a while to see how other people are doing.

Erik Darling: If you’re worried about it, install it on a dev server. Let it hang out for a month and do your thing on it and then push it to prod. If you don’t have a dev server, then quit your job.


Are there any downsides to filtered indexes?

Jessica Connors: Question from Joe. He says, “Are there any particular downsides to using filtered indexes? I’ve recently been performance tuning some queries that have non-parameterized search arguments and they’re like magic.”

Tara Kizer: They are like magic. The only issue is if you have a query with the opposite of your filtering condition, then it's going to have to do a scan. But you want your filtered indexes to match what your queries are usually going to do—like IsDeleted = 0, for instance. A query looking for IsDeleted = 1 is probably a maintenance job that runs at night to clean out data, and it's okay for those things to do a scan.

Doug Lane: The higher the selectivity on whatever you're filtering on, the more effective that index will be. So if you've got data that's split 50/50, it won't be as helpful. But if your filter condition gets you down to, say, ten percent or fewer of the rows, then you're in really good shape. Incidentally, I just put out a statistics course that does include filtered stats, which you get with filtered indexes.

Tara Kizer: All right.

Doug Lane: So you can check that out too.
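(Postscript: a filtered index for the soft-delete pattern Tara describes—table and column names are made up.)

```sql
CREATE NONCLUSTERED INDEX IX_Orders_Active
    ON dbo.Orders (CustomerId, OrderDate)
    WHERE IsDeleted = 0;

-- Note: the query has to use a literal (IsDeleted = 0) or OPTION (RECOMPILE);
-- with a parameter like @IsDeleted, the optimizer can't be sure the filter
-- matches, which is why these shine on non-parameterized search arguments.
```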

Jessica Connors: About three months ago, we included access to our entire SQL training library for anyone that does the SQL Critical Care. If you want any of these guys to take a look and assess the health of a database, let me know. Reach out. For some reason, July and August are slower for us. I think it's because of summer.

Tara Kizer: Everyone is vacationing.

Angie Rudduck: Too much beer and barbecue.

Jessica Connors: Yeah, vacations. So if you are on this webinar and you reach out to me, we are offering some special deals within the next two months for our SQL Critical Care product. So not only do you get the Critical Care, but also the additional training—access to our SQL training library.

Angie Rudduck: Isn't Brent coming out with a new one? Not us three… but the other Brent?

Tara Kizer: Not these fabulous Brents.


Angie Rudduck: Isn’t he coming out with a new video, Doug? I just expect you to know.

Doug Lane: I think so. I know I’ve got another one coming.

Angie Rudduck: Oh, see. There we go.

Tara Kizer: He painted his wall green so he must be working on something soon.

Doug Lane: Oh.

Tara Kizer: Not you—Brent.

Doug Lane: Oh, Brent, yeah. I’m thinking, “Is my tint that bad?” More stuff on like execution plans and querying.

Angie Rudduck: Nice.

Jessica Connors: Cool beans. All right, guys, that is all the time we have for today. I’ll let you go. Adios.

Erik Darling: Later.

[Video] Office Hours 2016/06/22 (With Transcriptions)

This week, Brent, Richie, Doug, and Tara discuss Growing databases, most useful SQL certifications, replication issues, group discounts, backup software and more.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


Office Hours Webcast – 2016-06-22


Brent Ozar: All right, we might as well get started here. Let’s see, we’ve got all kinds of questions pouring in today. We’ve got 75 folks in here, let’s go see what they’re asking questions about. Going through… oh, Scott asks an interesting question. Scott says, “How would you handle a new manager that insists his databases should not grow?”

Doug Lane: Insist his business should not grow.

Brent Ozar: I like it.

Brent Ozar: Yeah, you’re never allowed to sell anything. You’re never allowed to add new data. What I would do is make sure that he understands that update statements don’t make the database grow. So if he updates your salary, it doesn’t take additional space in the database, it just changes an existing number.

Tara Kizer: I wonder if there’s more to the question though. Is the manager SQL Server savvy and saying to preallocate it big enough that it never has to autogrow? Or is he just a manager that doesn’t understand SQL Server technology or databases in general?

Doug Lane: We’re quick to judge in these parts.

Brent Ozar: Scott says, “No, he is not technical.”

Tara Kizer: And he’s saying that SQL databases should not grow. That is just odd.

Richie Rump: If he’s not technical, why is he giving technical advice? That doesn’t make any sense.

Brent Ozar: It’s probably a budgeting thing. He’s like, “Look, I’ve got to keep my budgets the same number. It’s very important.” Scott says, “He is very used to monitoring the database.” What you do is—I had a monitor—I shouldn’t say—okay, well, I’ve started down the road so I just might as well. So I had a manager once, not in my department but in another department, who was asking for anal things like that. Like, “Oh, I want to make sure CPU never goes above five percent.” So what we did was we hooked up his monitoring tool to point at a server no one used and we just called it SQL-whatever and told him that’s where his stuff lived. He was totally happy. He was completely happy. Thought we were amazing. Yep, that’s my job.


Brent Ozar: All right, so Gordon asks a related question. Gordon says, “I’m not really sure what to set autogrowth to on my very large databases.” Usually when people say VLDB they mean like a terabyte or above. He says, “Yes, I should be growing manually during a suitable maintenance window, but I also have an autogrowth value set as well, just in case. Is one gig a good number or what should I do for a one terabyte database?”

Tara Kizer: I used one gigabyte on the larger databases, sometimes even maybe a little bit bigger. As long as you have instant file initialization for the data files, they don’t have to be zeroed out, so the growth isn’t slow. Log files maybe a little bit smaller. I did a lot of times [inaudible 00:02:25]. Sometimes I went bigger than one gigabyte on larger databases where I knew the patterns and it was going to use a larger file for certain things. But I tried to preallocate those.
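
Setting a fixed growth increment instead of a percentage is done per file; the database and logical file names below are hypothetical:

```sql
-- 1GB growth for the data file, smaller for the log file
ALTER DATABASE MyBigDatabase
    MODIFY FILE (NAME = MyBigDatabase_data, FILEGROWTH = 1GB);
ALTER DATABASE MyBigDatabase
    MODIFY FILE (NAME = MyBigDatabase_log, FILEGROWTH = 512MB);
```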

Doug Lane: And IFI is something that comes on by default, if you want it to, in SQL Server 2016.

Tara Kizer: Oh really? I didn’t know that.

Brent Ozar: There’s a checkbox in the install.

Doug Lane: There’s a little checkbox that says, “I want IFI to work with this installation.”

Brent Ozar: Microsoft calls these things “delighters.” They’re trying to add delighters into the product. I’m like, “I am delighted! That’s actually wonderful.”

Richie Rump: It’s just faster.

Brent Ozar: It is faster. It’s just faster. And they’re right. I like them.

Doug Lane: It works.


Brent Ozar: I have an interesting question from Wes. Wes asks, “What are the most useful SQL Server certifications?” So we’ll go through and ask these folks for their opinion. Richie, we’ll get started with you because you’re on the left on my monitor. What do you think the most useful SQL Server certifications are?

Richie Rump: The one you have. That’s it. That’s the only useful one.

Brent Ozar: The A+?

Richie Rump: Yeah. Certified Scrum Master. No, the MCM, right? I mean that’s by far the most useful one you have. I mean as soon as you get it, you’re recognized as an expert anywhere.

Brent Ozar: You say that but nobody still believes that I actually passed that test, for rightful reasons. I wrote a check, it was a really large check. Then I brought another bag of $20s and I gave that to the instructor and off we went. Tara, how about you?

Tara Kizer: I’m against SQL Server certifications. A while ago they had all these practice tests online and I am a terrible test taker. I felt like at the time I was really good at what I did, you know, SQL Server, DBA for a long time, and I could not pass the test. So I feel like it’s for people that don’t have experience that are just trying to get their foot in the door. I already had experience, I don’t know that certifications are required at any job when you have as many years of experience as I do but I could not pass the test. I also wasn’t willing to study for these tests. Some of the stuff is just useless information I didn’t need to know. So why add that stuff to my brain?

Brent Ozar: Doug, how about you?

Doug Lane: It depends on how you define useful because is it useful in the sense that it will get you a job or is it useful in the sense that it will make you better at your job? Certifications will tell you what you don’t know as you test for them but apart from their value as actually holding the certification, there’s very little value to it. It’s the kind of thing where you decide if you want it on your resume or not. In most cases, it won’t matter. Again, apart from exposing blind spots in what Microsoft thinks you should know about SQL Server, it’s really not going to help you that much.

Brent Ozar: It does teach you a lot—go ahead.

Richie Rump: As a former hiring manager of both developers and data folks, I never looked at certifications at all. It didn’t help you; it didn’t hurt you. It just never came into play because it’s just a test. It’s not exactly how you work, like Tara said, it’s just a test.

Tara Kizer: I had one job that if you did do the certifications that it was something to put on your review, that this was something that you worked towards. So it was a review cycle thing, a possible extra bonus or promotion, but it was just a bullet point on the review. You had all the other stuff on your review as well.

Brent Ozar: For the record, we don’t bonus our employees that way. If you want to take a test, that’s cool. We’ll pay for it. We also pay for passes or fails, it doesn’t matter because I know from taking them too, I’m like, I walk in, I look at the test, I’m like, “They want you to know what? XML, what?”

Tara Kizer: And there will be more than one right answer. They want the most perfect answer and it’s like, well, there’s three of them here that could be the right answer.

Brent Ozar: Yeah.

Richie Rump: PMP was crazy like that. I mean it was, “Oh look, they’re all right. But what’s righter-er-er?”

Brent Ozar: PMP, Project Management Professional?

Richie Rump: Professional, yep.

Brent Ozar: There we go.


Brent Ozar: Nate Johnson says, “It may be a waste of 15 minutes of company time but I do enjoy these pregame antics.” For those of you who just listen to the podcast, you miss out on when you come in and join live we just shoot the poop—there goes a bunch of old jokes but I’m just going to keep right on, I’m not going down there.


Brent Ozar: Tishal asks, “Is it possible to see the size of the plan cache using T-SQL?” The answer is yes, and none of us knows it from memory. There is a DMV for it; if you Google for it, there’s a DMV for it. In the show notes, we’ll go find that and track it down for you.
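
One way to do it, as a sketch, is to total up the entries in sys.dm_exec_cached_plans:

```sql
-- Plan cache size by object type, in megabytes
SELECT objtype,
       COUNT(*) AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY size_mb DESC;
```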


Brent Ozar: David asks, “Replication question.” And then he never types it in. Oh, no, he does later. “We use replication extensively. In this scenario…” What do you mean, your scenario? Is this like a game show? Are you trying to test us? “We have a bunch of reference tables that hardly ever change replicated out to dozens of locations. Should we use transactional replication, snapshot replication, or an ETL process that just refreshes them once a day?”

Doug Lane: What are you the most comfortable managing?

Brent Ozar: Oh, look at that Doug. Go on, elaborate.

Richie Rump: Welcome back.

Doug Lane: If you feel really good about setting up some sort of SSIS package to do this, then by all means do and get away from replication. But this is the kind of thing where it really comes down to a comfort level. Replication will never be your best friend. It’s just too finicky and complicated and aggravating to work with. But it can get the job done.

Brent Ozar: When you say finicky and complicated and aggravating to work with, that describes most of my best friends so I’m not sure what you mean by… yeah, Richie is pointing at himself.

Tara Kizer: I had a scenario like this for reference tables. We actually did not replicate them. The only time that these tables changed was during deployment. So if we needed them on the subscriber database, we just deployed to both the publisher and the subscriber for those tables. That way we didn’t have to add them to the publication. There’s not really any overhead as far as transactional or snapshot except when it has changes coming through. But why have them in there if they hardly ever change and it’s part of a deployment process?


Brent Ozar: James asks, “What’s the best practice for setting minimum server memory? There’s a lot of guides out there on how you set max server memory but what should I set min server memory to?”

Tara Kizer: We took the max server memory, best practice, four gigabytes or ten percent whichever is greater, then we divided it by two. That was our min server memory. That was our standard across all servers.

Brent Ozar: I like that. I think in our setup guide it doesn’t even give advice on it because we never—if that’s your biggest problem, you’re in really good shape. I love that you’re asking this question because that’s a really good, detail-oriented question.

Tara Kizer: We had the standard because we were mostly a clustered environment. We had, I don’t even know how many clusters, maybe 100 clusters or so, and some of them were active-active. You want to make sure that when a failover occurs and you’re running on one node, the SQL instance that has to fail over can still get memory. We would also set max server memory down in the active-active environment.

Doug Lane: It also kind of depends on how much you’re piling on that server, because if it’s your Swiss Army knife server, you’re probably going to have trouble if you’re trying to run Exchange and other stuff on it, but you know [inaudible 00:09:37]. If you’ve got the whole BI stack running on it too, then you want to make sure that under no circumstances can other stuff steal memory away from SQL Server to the point where your database engine is actually beginning to starve a little bit. So keep in mind whatever else is on that box. If you really just have a dedicated SQL Server database engine box, then yeah, it’s not going to be as big of a deal, because it will take whatever it needs and there really won’t be competition for that memory in terms of it getting stolen away.
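
Tara’s formula translates into sp_configure calls; the numbers below are just an example for a server where max server memory worked out to 58000 MB:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- min = max / 2, per the standard described above
EXEC sp_configure 'min server memory (MB)', 29000;
EXEC sp_configure 'max server memory (MB)', 58000;
RECONFIGURE;
```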


Brent Ozar: Mandy asks, “We’ve got SQL Server 2014 and our tempdb files are on local solid-state drives. Recently we’re starting to see higher and higher IO waits on those tempdb files, upwards of 800 milliseconds. I’m new to solid-state, is this normal or is this an indication of a problem?” That’s a good question. My guess is, depending on how old the solid-state drives are, their write speed actually degrades over time. It can get worse over time. The other thing that’s tricky is depending on how many files you have, if you spider out tempdb to say one file per core and you’ve got 24 cores, solid-state may not be able to handle that workload as well. So generally speaking, we aim for either four or eight tempdb files when we first configure a server. This is one of those instances where more files can actually harm you rather than help, but I would just check to see. You can run CrystalDiskMark against those solid-state drives and see if write speed has degraded since they were new. It’s certainly not normal though.
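
To see whether those tempdb files really are slow, per-file latency can be pulled from sys.dm_io_virtual_file_stats (tempdb is database_id 2):

```sql
-- Average read/write latency per tempdb file since startup
SELECT mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(2, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;
```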


Brent Ozar: Wes asks, “Are any of you speaking at the PASS Summit?” Well, all of us will be speaking, we’re all going to be standing around the hallway talking to all of our friends. Are we going to be presenting? That we don’t know yet. That announcement comes out today. So we’ll find out later today. I keep looking over at Twitter to see whether or not it’s come out and it hasn’t come out. So as soon as it comes out, we’ll say something.


Brent Ozar: Wes says—and I have no idea what this is in reference to—“Use Walmart as a precedent.”

Richie Rump: Enough said. I don’t think we need to say anything more about that.

Doug Lane: For the “adios pantalones” shirt.

Brent Ozar: That’s probably true.


Brent Ozar: Next up, Tim says, “I’m fighting for only using stored procs. I don’t want to use inline SQL even for simple queries. My developers are fighting against this and they want to use things like Entity Framework. Am I wrong for pushing hard for only using stored procs?”

Tara Kizer: I have a lot of experience on this topic. I was very, very pro stored procedures for the longest time. Slowly, as developers changed, they wanted to use prepared statements, parameterized queries from the applications, and we didn’t want to stop them from the rapid development that they were doing, so we did allow that. Once we realized that the performance was the same between stored procedures and prepared statements, parameterized queries, it became okay from a performance standpoint. However, from a security standpoint, you’re having to give access to the tables rather than just to the stored procedures. So that was just something that we had to think about. But as far as Entity Framework goes, I know Richie is very pro Entity Framework. Entity Framework, and what’s the other one? NHibernate. There are some bad things that they do that can really, really harm performance, so it’s something that you have to watch out for. They use nvarchar as their datatype for string variables, and if your database is using varchar, you’re going to have a table scan when you do a comparison in the where clause, and you’ll be able to tell in the execution plan. It will say, “An implicit conversion occurred.” You’ll see that it said nvarchar and you’ll be like, “Whoa, why?” Your table is using varchar. It’s because the application specifies nvarchar. It’s something that you can override, but if you’re not overriding it, this is what they’re going to do.

Richie Rump: So this just in, that is not a bug. That is a problem with the developer’s code. They didn’t specify that the column was a varchar, so because .NET uses Unicode as its string type, it automatically assumes everything is nvarchar. So there’s a way that we could go in and say, “Hey, this column is varchar.” If you don’t do that, that will cause the implicit conversions. That’s only if you’re using code first. If you’re using the designer, the designer does the right thing and doesn’t put the N in front of it, so it doesn’t map it as nvarchar and you don’t get that implicit conversion. So that’s only for code first and if the developers aren’t really doing the right things when they’re doing their mappings in code. And just because I have Julie Lerman’s phone number doesn’t mean that I’m pro Entity Framework.
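
The problem Richie and Tara describe can be reproduced with plain T-SQL; the table and column here are hypothetical:

```sql
-- LastName is varchar(50). An nvarchar parameter (what EF sends by
-- default) forces an implicit conversion on the column side, which
-- can turn an index seek into a scan:
DECLARE @name nvarchar(50) = N'Smith';
SELECT CustomerID FROM dbo.Customers WHERE LastName = @name;

-- With a matching varchar parameter, a seek on LastName is possible:
DECLARE @name2 varchar(50) = 'Smith';
SELECT CustomerID FROM dbo.Customers WHERE LastName = @name2;
```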

Tara Kizer: You’re pro because you speak about it. You present on the topic.

Richie Rump: Oh, okay. So if you go to the pre-con, you’ll hear me talk more about it, but it’s a balanced view—we’re not going to be able to tell developers not to use it. Microsoft is saying to use it. So if you’re saying that, then you’re saying, “Don’t do what Microsoft says,” and that’s a much bigger uphill battle than you probably want to face as a DBA. So the general rule of thumb is: for most things, it’s okay. But for complex things, if it’s going to be complex in the SQL, it’s going to be complex in the LINQ, and now there are two hops it’s got to go through to figure out what the actual plan is. First it’s got to change that LINQ into a SQL statement, then it’s got to change that SQL statement into a plan. That’s probably going to be 50 pages long, which nobody ever wants. So at that point, cut your losses, do a stored procedure, and everybody is okay. But there’s a big difference between when we have to write SQL as developers, when we’re typically not very good at it, as opposed to, “Oh here, let me just do context.tablename.get” and then it just does it for us. So there’s a speed issue here for development, and there’s usually a lot more developers than there are of you. So unless you want to stay up all night writing SQL statements…

Brent Ozar: Is that a threat?

Richie Rump: Yeah. You guys get paid more than us so I don’t understand what that is either.


Brent Ozar: John says, “I just saw your announcement about pre-con training in Seattle. Do you guys offer group discounts?” We do, actually. If you’re going to get five or more seats, shoot us an email or just go to the site and click on contact up at the top. Then you can contact us, tell us which class you want to go to, and for five or more seats we give group discounts.


Brent Ozar: Gordon asks, “If I’ve got an Azure VM that replicates data, I want to send it down to an on-premises database. It’s a VM, it’s not using Azure SQL database, what are my HA and DR options?”

Tara Kizer: That’s unusual to go in that direction. I don’t have an answer but I’ve never heard of anyone doing that.

Brent Ozar: You should be able to use replication, log shipping, AlwaysOn Availability Groups; anything that you can use on-premises you can use up in Azure VMs. I’ve got to be really careful when I say that. The hard part of course is getting the VPN connection between on-premises and your stuff up in the cloud. That’s where things get to be a bit of a pain in the rear.


Brent Ozar: Jennifer asks, “Is the MCM still available?” No. They broke the mold after I got mine, thank goodness.

Tara Kizer: Yeah, 2008 R2 was the last version right? I mean it was the only version really. It’s been a while.

Brent Ozar: Whoo hoo.


Brent Ozar: Kyle Johnson asks—I try not to use last names but Kyle, that’s a generic enough last name and your question is so cool, it doesn’t matter. It’s totally okay. You shouldn’t be ashamed of this question. It’s a good question. I’m not just saying that to get you a gold star on the fridge. He says, “I was on a webinar yesterday where you covered sp_BlitzIndex. Are you aware of any scripts or [inaudible 00:17:06] columnstore indexes? Or is there anything I would look at in order to learn whether or not columnstore is a good fit for me?” Everyone is silent. So there’s your answer. The closest I would go to is Nikoport, Niko Neugebauer, he’s from Portugal, he’s got a really hard to pronounce last name—has lots of information about columnstore indexes. He’s like Mr. Columnstore.


Brent Ozar: Tim says, “I’ve just inherited a data warehouse project.” Well, you have really crappy relatives, Tim. “With five weekly updated data marts. The largest table is close to 300 million rows and it’s approaching a terabyte. My loads are taking longer than usual. What’s the best way to diagnose performance tuning on stuff like this?” So, a data warehouse that’s got a bunch of data marts, tables approaching a terabyte, and loads taking longer. Where should he look for performance tuning?

Tara Kizer: What is it waiting on?

Brent Ozar: What is it waiting on? And how do you find that out?

Tara Kizer: You run a query, a saved script. I don’t have the DMVs memorized. I mean, you could run sp_WhoIsActive I guess. I assume that would work in this environment and while it’s running, see what it’s waiting on. You know, is it a disk issue? Something else?

Brent Ozar: My favorite of course, because I’m me, is sp_BlitzFirst. sp_BlitzFirst, if you run it with the SinceStartup equals one parameter, will tell you what your wait stats have been since startup.
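
The call looks like this:

```sql
-- Wait stats accumulated since the instance last restarted
EXEC dbo.sp_BlitzFirst @SinceStartup = 1;
```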

Richie Rump: The script formerly known as sp_BlitzFirst.

Brent Ozar: Yeah, we’re running a contest now to figure out a new name for it because we just open sourced a whole bunch of our scripts and I don’t want to have it called sp_BlitzFirst anymore because the strangers are going to start checking in code and I don’t want that answer to be reflecting on me. “I asked Brent and it said I was a moron.”

Richie Rump: That’s already in there.

Brent Ozar: Yeah, it’s in right at the end.


Brent Ozar: Ankit asks, “How do I troubleshoot SQL recompilations increasing on SQL Server?” So there’s a Perfmon counter, recompilations a second, and recompilations is increasing. What should he go look at?

Doug Lane: Did anyone recently add option recompile?

Brent Ozar: Oh, okay. I like that.

Doug Lane: Like someone may have tried to solve a parameter sniffing problem on a frequently run query. Just one idea.

Brent Ozar: Some yo-yo could be running update stats continuously. You can capture a profiler trace—this is horrible—you could capture a profiler trace and capture compilation events and that will tell you which SQL is doing it. How else would I troubleshoot that? Oh, no, you know I’m surprised Tara didn’t ask the question—it’s the question that we usually ask, “What’s the problem that you’re trying to solve? What led you to recompilations a second as being an issue that you want to track down?” That’s an interesting thing.

Tara Kizer: I wonder if it’s because they say that your recompilations should be 10 percent or less of your compilations. I wonder if that’s something that they’re monitoring and maybe that’s increased. Or maybe it’s a counter that they’re already tracking and the number has gone up.
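
Those Perfmon counters are also exposed inside SQL Server, so the ratio can be checked with a query:

```sql
-- Cumulative compilation and recompilation counts since startup
-- (despite the "/sec" names, these DMV values are running totals)
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('SQL Compilations/sec', 'SQL Re-Compilations/sec');
```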

Brent Ozar: Yeah, I don’t think any of us—have any of you guys run into a situation where that was the problem, recompilations a second?

Tara Kizer: That’s a Perfmon counter that I always pull up along with the compilations and I just take a quick peek at it and then I delete those two counters from the screen. A very, very quick peek.

Brent Ozar: Yes, yeah.

Tara Kizer: Of course, if recompilations are occurring more frequently, you probably have more CPU hits. So if you see that your CPU has risen, maybe it is something to look into.

Brent Ozar: If you put a gun to my head and said, “Make up a problem where recompilations a second is the big issue.” Like if I had a table that was continuously being truncated and then repopulated, truncated, and repopulated where the stats are changing fast enough that was causing compilations, even then I don’t think it’s going to be too many recompilations a second. So that’s a really good question.


Brent Ozar: Tim asks a great question. This isn’t the other Tim, who also asked a great question as well, it’s a different Tim. “Is performance tuning approached differently from a transactional system versus an analytical system? When you approach an online system versus a reporting system do you do troubleshooting for performance any differently?”

Tara Kizer: I don’t as far as reporting but I’ve never really supported a true OLAP environment.

Doug Lane: Yeah. There are so many options that don’t apply to a regular OLTP environment that do apply to OLAP, specifically talking about cubes, because there are all different kinds of models that you can do. You can do hybrid. You can do ROLAP or MOLAP. All different ways of choosing how much of that data you want to compile ahead of time. So the troubleshooting process would be very different if you’re talking about SSAS cubes, for example. If you’re talking about the source data, usually people don’t care about the underlying data because that ends up in some other final format, like a cube. So I guess if I were to look at a database that was just straight—what would that be, it’s been a while—ROLAP I think, where you get it right out of the database, then I suppose I would use some of the same troubleshooting steps, like looking at wait stats, looking at long-running queries, and so on and so forth. But if you’re talking about troubleshooting a cube, that’s a whole different bag from OLTP.


Brent Ozar: Adam asks—not that your question—I just said “Adam asks.” It’s not that your question is bad. I didn’t say it was a good question. It’s still a good question. I can’t say “good question” every time or else people won’t take me seriously. “How would you approach doing replication in AGs? If I have the publisher in an Availability Group, do I have to reconfigure replication again when I fail over to my DR replica?”

Tara Kizer: So the distributor isn’t supported in an AG. So if the DR environment has its own distributor, the answer is yes, you do. Hopefully you’ve scripted it out, and hopefully when you did a failover to DR it wasn’t an automatic event. Usually because DR is so far apart you can’t have automatic failovers occur. So if it was a manual DR failover, hopefully you were in a downtime window and all the applications were down. You made sure that there was no data left behind, you know, that hadn’t been fully sent to the subscriber. If that’s the case, you just need to run your scripts to start up replication again right where you left off. You don’t have to reinitialize. This is a topic that I’ve done quite a bit: failover to DR using replication, AGs, pretty much every technology.


Brent Ozar: And we have a bad question from Nate. I’m not going to lie, this question is bad, Nate. You shouldn’t feel bad, but it’s a bad question. He says, “Is a self-referencing linked server as slow as a real linked server? And is it generally a bad idea or not?” How’s that work guys?

Tara Kizer: What problem are you trying to solve? Why are you self-referencing itself?

Brent Ozar: I’ve seen people do this and it wasn’t a good idea then either, but I’m just going to repeat what they did. They had linked servers inside their code so that they could have the same code whenever they moved it from server to server. Then sometimes they would have reporting servers where they changed the linked server to point somewhere else. They thought that somehow doing linked server queries was going to abstract that away, like they could move some of the tables at some point to another server. So for those of you who are only listening to the audio version and not seeing our faces, none of our faces are happy at this point. We’re all sad. Sadly, it is as slow as a regular linked server, because SQL Server doesn’t know that it’s really pointing back at the same server.

Brent Ozar: Let’s see here, what’s up next? All kinds of questions here. Nate says, the context: he has a replicated view that points at two servers. So what you should do, because this is kind of a multi-paragraph thing and he’s got a few things inside there, is post it in as much detail as you can and talk about what the problem is you’re trying to solve. Generally when we talk about doing reporting servers, we’d rather scale up than scale out to multiple servers. You kind of see why here. Managing multiples is kind of a pain in the rear.

Tara Kizer: I think that the answer should be just don’t use linked servers though. If you need to be able to contact another server, do that through the application, not within SQL Server. Programming languages can handle this, joining two different datasets together.

Brent Ozar: Yeah. Conor Cunningham has a great talk at SQLBits when he talks about the difficulties of distributed queries. It’s pretty bad.


Brent Ozar: Nate also asks—Nate redeems himself by asking another question. Nate says, “Finally, a backup software question. What do you guys like/prefer in terms of backup software? There’s a bunch of different vendors out there. Whose tools should I buy? Are they all compatible with Ola scripts?” I think Ola’s scripts work with everything at this point, like Idera, Redgate, and LiteSpeed. In terms of who we prefer, we’re kind of vendor agnostic since we don’t have to manage anybody’s backup software. But just in terms of experience, we’ll go through and ask. Richie, have you ever used third party backup software, and how was your experience, and which ones were they?

Richie Rump: I’ve never used backup software.

Brent Ozar: All right, Richie doesn’t use backup. He just puts things in GitHub and lets the rest of the world backup his source code.

Richie Rump: I let the DBAs handle that.

Brent Ozar: Tara, how about you?

Tara Kizer: I have a long answer. I’ve been using SQL Server a long time and backup compression didn’t exist in older versions, so yes, we started off with Quest LiteSpeed, which worked really, really well. It was fairly expensive. We wanted to get Redgate’s SQL Toolbelt and they gave us a deal, and we were able to completely replace all of our LiteSpeed licenses with the Redgate backup software, licenses we had already paid for; it’s not like we got a refund on them. The reason we did that is because all new incoming servers were going to use the Redgate software instead, so it made sense to have one tool rather than multiples. But we did a ton of testing on both of them, and they pretty much produced the same compression ratio, the same file size, the same restore time. I mean, absolutely everything was the same; the difference was so minor. One was just cheaper than the other.

Brent Ozar: Yeah, everything is negotiable. Back like ten years ago, there might have been differences, today, not so much. Doug, how about you? Have you used any third party backup tools?

Doug Lane: I yield my time to the consultant from California.

Brent Ozar: Nice. I’ve used all of them as well. They’re all good. Anything is better than trying to roll your own.

Tara Kizer: And, yes, they definitely are compatible with Ola, especially the two that he listed. I know you said they probably are; these two specifically are.

Brent Ozar: Yeah, absolutely. Well that wraps up our time for today. Thanks everybody for coming and hanging out with us. We will see you guys next week. Adios, everybody.

Tara Kizer: Bye.

Doug Lane: Bye-bye.

[Video] Office Hours 2016/06/15 (With Transcriptions)

This week, Brent, Angie, and Tara talk through your questions about monitoring tools, transactional replication, configuration management, source control software suggestions and much more!

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


Office Hours Webcast – 2016-06-15

Why did some of my drives disappear?

Angie Rudduck: Bruno says he has an instance with several DBs and suddenly a couple of them became unavailable, and the physical disks where the data and log files lived disappeared. No Windows events. How can he audit what happened at the SQL Server level?

Tara Kizer: It’s unlikely a SQL Server problem. I’ve encountered this many, many, many times. You’ve got to talk to your sysadmins or you’ve got to talk to your SAN admins, server admins, they’ve got to take a look. Something happened. It’s almost certainly not a SQL Server issue.

Angie Rudduck: Yeah, if your drives disappeared, it’s probably not SQL Server’s fault.

Brent Ozar: When the drives disappear, I don’t know that you would see a Windows event unless there’s some kind of error message that pops up from the SAN or whatever. I’m assuming it’s a SAN.

Tara Kizer: You would eventually see a SQL Server error once it finally has to write to disk. It’s going to be a little bit before that happens since SQL Server does everything in memory, so it’s not going to know about it for a while. But at the checkpoint, or any kind of writing to disk, it’s finally going to start throwing errors, and those should be posted in the event log.

Brent Ozar: Backups.

Tara Kizer: Yeah. We’ve encountered weird bugs on like Cisco hardware that caused it and just various weird things. But it has happened numerous times, across many servers, many different hardware platforms, different SANs. It just happens.

Brent Ozar: I think usually it’s just human error. I mean, Robert Davis, a fellow Microsoft Certified Master, just wrote a blog post about how even he screws up. He drops the wrong one and all these database transaction logs disappear.

Angie Rudduck: Oh yeah, I’ve dropped two databases from prod before. On two separate occasions I have dropped a database from prod. Thankfully both were quick enough to recover. The second one turned out to not really be used, so that was okay.

Brent Ozar: It’s a matter of time. That’s how you become senior too, you have to have those experiences.

Angie Rudduck: I was just going to say, having only been a DBA for three years, I run into people who have been DBAs for ten years, and I know things they don’t only because they’re things I’ve experienced that they never did, maybe because they were in a smaller shop and I worked in bigger places. It’s all about what experience you’ve had.

Brent Ozar: Yeah, everything that involves replication. Tara knows everything.

Angie Rudduck: Somebody already asked a “Since Tara is here, blah blah replication” question.

Brent Ozar: Here we go.


What’s the best SQL Server monitoring tool to use?

Angie Rudduck: Constantino—I butchered your name, I’m sorry—he has a long-winded, easy question. Basically they’re trying to look for a good monitoring tool for production servers. They’re looking specifically for health monitoring that can alert them when something happens or is going to happen. So don’t get Ignite, it’s not in your list, but don’t get Ignite. He’s looking for a full-rounded solution. They’ve tested a bunch: Spotlight, Foglight, Redgate, SQL Sentry, Idera. Do we have any favorites that we would point them to for health monitoring and for SQL alerting?

Tara Kizer: SQL Sentry provides both, with the Performance Advisor and then the Event Manager tools, I believe. Both of those together can give you everything you need. We used SQL Sentry at my last job, and at previous jobs we used Spotlight. I wasn’t a big fan of Spotlight. It was great for looking at certain things. I did set up some availability group alerts, but it wasn’t as comprehensive as I wanted. We also had Foglight, which I think is now called Performance Analysis. Then we had SCOM, Microsoft’s System Center Operations Manager, with the SQL Server management pack. But SQL Sentry’s two big tools did replace SCOM and the performance analysis tool for us at the time. It’s pretty expensive, though. SCOM plus another tool is not as expensive, but SCOM requires almost a full-time monitoring person that knows how to use it. It’s very complicated.

Angie Rudduck: Yeah.

Brent Ozar: I’ve used all of those too. I’m fine with all of them. It comes down to personal preference.

Tara Kizer: Yeah.

Angie Rudduck: Did he mention Dell’s? That’s Spotlight, right? Dell is Spotlight.

Tara Kizer: Yeah, Spotlight and Foglight. Foglight is the name we used to call it. I think it’s Performance Analysis now. People may still refer to it as Foglight.

Brent Ozar: They renamed it again.

Tara Kizer: Oh they did? They went back to Spotlight?

Brent Ozar: Yes.

Tara Kizer: Oh, I didn’t know that. They were probably sick of people calling it Foglight and they’re like well we should just call it that too.

Brent Ozar: A friend of mine calls them FogSpot. He’s like, “I don’t know which one it is. I’ll just call it FogSpot.”

Tara Kizer: Yeah, one of them.


What should I do about the replication error “undelivered commands”?

Angie Rudduck: All right. So we will move along. Let’s see—not that one—we will go to Nate, with the transactional replication. They have a setup where often they get stalled transactions from the “alert for undelivered commands job.” Any thoughts?

Tara Kizer: Stalled transactions. I’d probably need to see the full error. So “undelivered” probably means it’s sitting at the distributor and hasn’t been sent to the subscriber. I would take a look at the throughput. Take a look at the distributor and the subscriber to see if there’s any kind of CPU issue; possibly a lot of data just got pushed through. Yeah, I don’t know for undelivered commands. Usually it’s a bottleneck on the publisher with reading the transaction log. Maybe there’s just a lot of stuff in there and you’re not backing it up often enough, so the amount of data that has to go through is bigger. Mirroring and availability groups can also add to replication latency because everything gets stored in the transaction log.
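If the commands are piling up at the distributor, one way to see how many are pending per subscription is the replication monitor procedure. This is a sketch run in the distribution database; all the parameter values here are hypothetical placeholders for your own topology:

```sql
-- Run in the distribution database at the distributor.
-- Shows the count of commands not yet delivered to the subscriber.
EXEC dbo.sp_replmonitorsubscriptionpendingcmds
    @publisher         = N'MyPublisher',     -- hypothetical names
    @publisher_db      = N'MyPublishedDb',
    @publication       = N'MyPublication',
    @subscriber        = N'MySubscriber',
    @subscriber_db     = N'MySubscriberDb',
    @subscription_type = 0;                  -- 0 = push, 1 = pull
```

A steadily growing pending-command count points at the distributor-to-subscriber leg rather than the log reader.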


How many cores should I have per NUMA node?

Angie Rudduck: All right. So I realized I missed this very small question from Greg, so I will give him some attention. He said he saw some tweets recently that stated you should have four cores per NUMA node. What do we think about that?

Brent Ozar: Somebody was pulling your leg. It’s not configurable. For Intel processors, it just comes down to the number of cores per processor. If you turn on hyperthreading, it’s going to suddenly, magically double. There are differences under virtualization; unfortunately, it’s such a huge topic that you can’t possibly say, “You should always have four cores.” It depends a lot on the host hardware that you’re using and whether or not that hardware is identical across all of the hosts in your cluster. But yeah, anybody who just says four is oversimplifying something. Or it might have been for just one particular guy’s setup, like if he had just one host design.
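If you’re curious how the cores actually break down per NUMA node on your own server, a quick sketch against the scheduler DMV:

```sql
-- How many visible schedulers (cores SQL Server can use) sit on each NUMA node.
SELECT parent_node_id AS numa_node,
       COUNT(*)       AS visible_schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'
GROUP BY parent_node_id
ORDER BY parent_node_id;
```

This reports what SQL Server sees, which under virtualization may differ from the physical hardware layout.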

Angie Rudduck: Yay for answers where people are pulling your leg.


What’s the best way to create a SQL Server inventory?

Angie Rudduck: Okay. Samuel wants to know, “What is the best way to create a SQL Server CMDB/inventory without buying third party software?”

Tara Kizer: I don’t know what that is.

Brent Ozar: Configuration management. Idera just put out a new tool. If you go to their site and click on free tools, I want to say it’s Instance Check; it’s got something along the lines of inventory in the name. So go to Idera and click on free tools. The other thing to search for is Dell Discovery Wizard. Dell Discovery Wizard will go through and survey your network, discover SQL Servers, identify them for you, and put them into a database. Another tool that you can use is SQL Power Doc. SQL Power Doc is an open source PowerShell script from Kendal Van Dyke. If I had to pick one that I like: I’ve used Dell Discovery Wizard a bunch of times. Idera’s looks pretty easy as well. SQL Power Doc is not very easy, but super powerful.

Angie Rudduck: Very good.


Should I use multiple Availability Groups or just one?

Angie Rudduck: Eli has a question about availability groups since Brent Ozar II isn’t here. They’re planning on upgrading from 2008 R2 to 2014 to take advantage of availability groups. They would like to know if there is a performance advantage to having databases spread across multiple AGs instead of one single AG. His example is having the primary of one AG on a different node than another AG’s primary, to take advantage of the hardware.

Tara Kizer: Yeah, I mean, definitely. For the first part of your question, there is no advantage to spreading them across multiple AGs unless you are putting the primaries on separate replicas. But you do have licensing implications in that case.

Angie Rudduck: Curse licensing. Always out to get you.

Brent Ozar: That was a smart question. I’m glad he said to split the primaries across different nodes, because I was like, “No, there is no advantage—oh yeah.”

Tara Kizer: There is an advantage there.

Angie Rudduck: Tricky wording there.


Why am I getting an external component error when installing SQL Server?

Angie Rudduck: Kimberly, welcome back, I haven’t seen you in a bit. She is installing SQL Server 2008 R2 on Windows Server 2012 R2. This is compatible based on MS docs she checked. However, she’s getting the “external component has thrown an exception error.” What is she missing?

Tara Kizer: I wonder if there is a prerequisite that you need to install first. At least on older versions of SQL Server and Windows it was supported on newer versions of Windows but you had to install something first. I don’t remember what it was and I don’t know that that’s why you’re encountering this error. This is the type of thing that I’d probably open up a support case with Microsoft.

Brent Ozar: The other thing: go download that again, whatever the ISO file or the [XE 00:09:32] that you got for the download, save it to another place and try it again. Because I’ve gotten so many funky setup errors just from a corrupted ISO file. Then when I go and get another ISO, bloop, works perfectly. I’d also say, I’m kind of paranoid like this, but anytime you get an error during setup, I’d rather wipe Windows and start again. I’m just paranoid. I want to build something that’s going to last forever. So if you’re having repeated errors on the same Windows box, hey, go download a new ISO and then try again on a fresh install of Windows.

Tara Kizer: You can go through the setup logs to see if there’s a better error because that’s probably a pretty generic error. The problem with the setup logs is it’s hard to find the errors. Scroll all the way to the bottom and then you might have to start scrolling back up to see the failure. Because even though it failed, it might have done a lot of extra work afterwards and all of that is going to be logged.

Brent Ozar: There’s like 50 million instances of the word error in the log.

Tara Kizer: Yeah, exactly, it’s awful.

Angie Rudduck: I do like the trick I learned about filtering the Windows event log during triage. I had no idea about that, and then one day I watched you do triage and you right-clicked on the left sidebar, and I was like, “What? I only knew…” Because half the time during triage I have to ask the client to move my webcam head, since it’s always floating right over the filter option in the right panel of the Windows event viewer. That happens a lot. I’ve been trying to work around asking them to move my head because it sounds weird to me.


How should we do source control with SQL Server?

Angie Rudduck: Since we’re talking about a lot of software, let’s ask another question from Scott. Do we have any suggestions on source control software? When Richie is not here of course.

Tara Kizer: Yeah, I was going to say, Brent and Richie love Git.

Brent Ozar: So there are two ways you can do this. One is you can do source control before you deploy your code, meaning you go make a stored procedure change, you check the code into GitHub or Subversion or TFS, whatever tool you want to use. That’s proactive. Then you go deploy the code after you’ve checked it in. Man, almost nobody does that. Plus, you end up having DBAs who need to change indexes in production or need to make an emergency change. So the other way you can do it is reactive source control, which means a tool goes out and looks at your SQL Server every x number of hours, grabs any changes, and checks them into source control. This gives you a postmortem log of everything that changed, but not who did it and not the exact time that it changed. I am personally a fan of reactive source control as a DBA. I don’t really care as much about who did it, but I want what was changed. I want a breadcrumb list of everything that changed on objects. So Redgate’s source control for SQL Server has the ability to just go through and patrol your SQL Server periodically and check in any changes. It’s just not the source control that your developers are used to. That proactive change control is really, really hard.
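Before any tooling, the “breadcrumb list” idea can be roughed out with the catalog views; a minimal sketch of spotting what changed recently:

```sql
-- Objects modified in the last 24 hours in the current database.
-- Shows what changed, but not who changed it.
SELECT name, type_desc, create_date, modify_date
FROM sys.objects
WHERE is_ms_shipped = 0                               -- skip system objects
  AND modify_date >= DATEADD(DAY, -1, SYSDATETIME())
ORDER BY modify_date DESC;
```

A reactive source control tool is doing a more thorough version of this: scripting the changed objects out and committing them.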

Tara Kizer: We did both proactive and reactive at my last job. We used Visual Studio Team Foundation Server. Any deployment of the application was always proactive. And of course, DBAs are having to make changes. The DBAs were supposed to go in, do a schema compare, and then update TFS. That didn’t always happen; other tasks were more important. So whoever the next person was that touched that database, when they did the schema compare to create the deployment scripts, they would see these other things that shouldn’t be in their deployment, things that had already been deployed to production but weren’t in source control. And because you could have databases you never touch again, twice a year they went through, did a schema compare against all databases, and got them up to date.

Brent Ozar: Scott asks, “I didn’t know about reactive source control. Who makes it?” It’s a technique, not a product. It’s just part of Redgate’s source control as well. I think I even still have a blog post on our blog about how you do it with a batch file. Like I wrote a batch file in VBScript to do it with Visual SourceSafe. I need to burn that in a fire.

Angie Rudduck: That sounds complicated, like something I’d totally mess up. I was about to ask Tara if you could do them together, so that’s cool that you’ve seen both used together. One place I worked didn’t let developers change indexes, so if a DBA throws an index in and then doesn’t check it in, it would be great to have the reactive check right there.

Tara Kizer: Yeah, as long as you have a schema compare option in the tool that you use. Or you can get another schema compare. Then you can see what the changes are between source control and your database.

Angie Rudduck: Very cool.


What’s the fastest way to modify a big table?

Angie Rudduck: J.H. wants to know, “What is the fastest and/or safest way of exporting a large table and then reimporting it and maintaining its primary key auto identity seed ID … SELECT * into temp table from large table or bulk copy out or something else?”

Brent Ozar: Okay, so I’m going to tell you the terms to google for: modifying a table online, Michael J. Swart, S-W-A-R-T. Because you said fastest: Michael has an interesting set of blog posts, like a five-part series, on how you set up the new table, how you build something to keep the old and new tables in sync, and then how you move the data across in batches. That way end users notice very minimal downtime and yet you’re able to keep the two in sync as you get larger. The only time I would go that route is if you cannot take any downtime, you’re willing to put a whole lot of development work into it, and it’s more than say 50 gigs in one table. If it’s less than 50 gigs in one table, I would probably just do a select with a tablock and move the data across that way.

Tara Kizer: Then you can use the identity insert option to handle the identities. That way you keep the values the same between the two tables. So SET IDENTITY_INSERT ON. You can only have one table at a time do this so make sure you set it off when you’re done.
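A minimal sketch of what Tara describes, with hypothetical table names; the target table must already exist with a matching identity column, and the explicit column list is required while IDENTITY_INSERT is on:

```sql
-- Allow explicit values into the identity column of the new table.
SET IDENTITY_INSERT dbo.BigTable_New ON;

INSERT INTO dbo.BigTable_New WITH (TABLOCK) (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM dbo.BigTable;

-- Only one table per session can have IDENTITY_INSERT on at a time,
-- so turn it off as soon as you're done.
SET IDENTITY_INSERT dbo.BigTable_New OFF;
```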


How should I manage identity fields with replication?

Angie Rudduck: That’s a perfect follow-in to Paul’s question. He has existing replication where he wants to change the identity management of primary keys that have IDENTITY(1,1). He wants to change the primary keys to IDENTITY(1,2) on the publisher and IDENTITY(0,2) on the subscriber. Is there a way to do this without recreating the tables?

Tara Kizer: You do have the DBCC command where you can change the seed, but I don’t think you can change the increment. Usually in a scenario like this, what people do is have the publisher inserting positive numbers and the subscriber inserting negative numbers. So if it’s an integer, you could have two billion rows for the subscriber and two billion rows for the publisher. That usually satisfies most tables. Otherwise, go to bigint.

Brent Ozar: So there’s a DBCC command to reseed the identity. I cannot remember for the life of me what the syntax is but if you search for that.

Tara Kizer: Yeah, I think it’s [Inaudible 00:16:26] IDENT is the command.

Brent Ozar: Yeah, you just run that on the one where you want to change them.
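For reference, the command they’re reaching for is DBCC CHECKIDENT; a sketch with a hypothetical table name and seed value:

```sql
-- RESEED sets the current identity value; the next insert gets seed + increment.
DBCC CHECKIDENT ('dbo.MyTable', RESEED, 1000000000);

-- NORESEED just reports the current identity value without changing anything.
DBCC CHECKIDENT ('dbo.MyTable', NORESEED);
```

As Tara says, this changes the seed only; the increment is part of the column definition and can’t be changed this way.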

Angie Rudduck: Good info.


Should I use checksum when taking backups?

Angie Rudduck: Samuel wants to know, “Is it best practice to always add checksum when taking backups?”

Brent Ozar: Did you do that when you were a DBA?

Angie Rudduck: I didn’t.

Brent Ozar: You too, both of you, yeah, yeah.

Angie Rudduck: I didn’t know it existed.

Brent Ozar: I don’t think most people do.

Tara Kizer: I knew it existed. Did we do it? Probably not. It does add overhead to the backups, and, at least for a lot of the critical systems, we would, not always, but we would have a backup restore system. So we were testing our backups regardless. So do you need checksum if you are going to be testing your backups?

Brent Ozar: Yeah, I learned about it after I got started consulting. I’m like, oh, that’s an interesting idea. I went on a little quest of “I’m going to get everybody to do checksum on their backups.” I put it in sp_Blitz as a warning, “Hey, you’re not doing checksums on your backups.” Universally, people were like, “What is that? Why would I want my backups to go slower?” So I took it out as a recommendation just because people don’t like their backups going slower.
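For anyone who hasn’t seen the option, it’s just an extra clause on the backup statement; the database name and path here are hypothetical:

```sql
-- CHECKSUM verifies existing page checksums as pages are read and writes
-- a checksum over the whole backup, at the cost of some extra CPU.
BACKUP DATABASE MyDatabase
TO DISK = N'X:\Backups\MyDatabase.bak'
WITH CHECKSUM, COMPRESSION;
```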

Tara Kizer: Does Ola’s solution, does it do the checksum by default?

Brent Ozar: Not by default, yeah.

Angie Rudduck: I think it does.

Brent Ozar: Oh, does it?

Angie Rudduck: Because I’ve been playing around. Yesterday I was playing around, let me double check my settings here, but I ran the scripts with defaults and then took a look. So I’d have to double check, but it’s included as an option at the very least.

Brent Ozar: And doesn’t his do verify by default too out of the box?

Angie Rudduck: Yeah, maybe it does verify by default and not checksum by default. But the verify, the one thing I don’t think people think of is how it can impact your backup time. You might be going, “Oh, my one gig backup is taking 20 minutes,” but that’s because it’s running the restore verify only command against the backup it just took. So it’s just saying, “Is this still a valid backup?” And at a basic level, right? Correct me if I’m wrong, but it’s only saying, “Yes, I can open this as a file.” It doesn’t know the validity of what’s inside, just that it could reopen it as needed. So that’s something to be aware of; it’s not a saving grace to say, “Oh, I did verify only.”

Brent Ozar: Yeah, it could be all corrupt data in there. It could be 100 percent corrupt. The way you’re going to have to find out is to run CHECKDB.
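To make the distinction concrete, a sketch with hypothetical names: VERIFYONLY checks that the backup file is readable and complete, while only CHECKDB against a restored copy proves the data inside isn’t corrupt.

```sql
-- Checks the backup is complete and readable; does NOT validate the data pages.
RESTORE VERIFYONLY FROM DISK = N'X:\Backups\MyDatabase.bak';

-- After restoring the backup somewhere, check the restored copy for corruption.
DBCC CHECKDB ('MyDatabase_Restored') WITH NO_INFOMSGS;
```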


Why don’t our SELECT queries show current data?

Angie Rudduck: All right.

Brent Ozar: We’ve got all kinds of questions coming in. It’s crazy.

Angie Rudduck: I know, they’re definitely rolling in now. Okay, so Andrea says they have been having issues with data not showing up in reports, sometimes for up to 15 minutes. They are an OLTP shop running 2012 Web. Is this possibly a thing with SQL or is it due to something else?

Tara Kizer: I think we would need more information as to how is the data getting into this database? Is it queueing? Is there a backlog in say a Tibco queue or something like that? Or, you talk about reporting, do you have a replicated system? Or in availability groups, readable secondary, maybe there’s a delay in getting the data to those. I don’t think we have enough information to answer it.

Angie Rudduck: Yeah, I agree.

Brent Ozar: It’s never normal to do an insert in SQL Server, commit your transaction, and then not have it be available for a select immediately.


Why am I getting tempdb-full errors when my tempdb is 4GB?

Angie Rudduck: Let’s see what David has to say. He’s getting this on a server with four 1GB tempdb data files and an 8GB tempdb log: “Insufficient space in tempdb to hold row versions. Need to shrink the version store to free up some space in tempdb.”

Tara Kizer: That’s a pretty small tempdb. I’ve supported tempdbs that were half a terabyte in size just because we had business users running ridiculous queries. So, first of all, why is your tempdb so small? Are you shrinking it down? You probably need some more space. Version store, are you running read committed snapshot isolation level? You need more space for tempdb.
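One way to confirm that the version store really is what’s eating tempdb; page counts in this DMV are in 8KB pages:

```sql
-- Current tempdb space breakdown, converted from 8KB pages to MB.
SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb,
       SUM(unallocated_extent_page_count)     * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;
```

If version_store_mb dominates, long-running transactions under snapshot-based isolation are the usual suspect.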

Brent Ozar: And then how much space do you need? Generally, if somebody puts a gun to my head and just says go pick a number, I’m going to go with 25 percent of the size of all the databases combined. So if you have say 100 gigs on the database server, you probably need at least 25 gigs for tempdb.
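That rule of thumb can be turned into a quick back-of-the-napkin query; treat the 25 percent as the rough guess Brent says it is:

```sql
-- 25% of total user database size as a tempdb starting point.
-- sys.master_files reports size in 8KB pages.
SELECT SUM(size) * 8 / 1024 / 4 AS suggested_tempdb_mb
FROM sys.master_files
WHERE database_id > 4;   -- rough filter: skip master, tempdb, model, msdb
```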

Tara Kizer: A few jobs ago, we set up hundreds and hundreds of servers. So we just made a policy and set tempdb at 70GB. These were shared servers with lots of databases and we didn’t know what was going to occur on them. We would have alerts to warn us if the data files or the log file were creeping up, if they were going to fill up, so we could react. But 70GB for all of the tempdb data files and I believe 30GB for the tempdb log file. That was just our default.

Brent Ozar: I don’t get out of bed for less than 70 gigs.

Angie Rudduck: Silly, silly.


If I don’t know monitoring tools, will that hold me back in job interviews?

Angie Rudduck: Ronny supports about 25 prod and dev databases as a DBA. He’s not in the corp Windows DBA group and does not have access to all the tools monitoring performance, etc. “All monitoring I have in place is based on scripts that run and report issues. Will the lack of experience working with the tools like that hurt my chances with pursuing a new DBA job? I know it really depends on what the hiring manager is looking for but is knowing tools like that an important skill to have?”

Tara Kizer: I don’t think it’s an important skill necessarily, I think it’s obviously going to depend company to company but if you don’t have any experience with monitoring tools, I think that that’s fine as long as your other experience, your actual SQL Server experience, is what they’re looking for. You can get up to speed on these tools, I wouldn’t say fairly quickly, but you can at least click around and figure things out and with some guidance get some pretty in-depth knowledge of these tools. For the most part, I don’t think that companies are paying for tools like this. So I think that it’s pretty rare that companies have these tools in place.

Angie Rudduck: Yeah, unless you’re going to a large DBA shop, I don’t feel like you’re probably going to have very many of these tools.

Brent Ozar: And you’d have to know all of them. I mean, you know, if you only knew one and then somebody doesn’t use that one, you’re screwed.

Angie Rudduck: It’s not the same thing as knowing SQL Server versus MySQL versus Oracle. The monitoring tools all run pretty similarly, and nobody expects you to know all of them, or will only hire you if you know one particular tool. Like, if you only know Redgate and they’re a Redgate shop, great. But that’s usually not the case.

Brent Ozar: Yeah, when we do interviewing for consultants, for example, when we go and hire people, we will often give them a virtual machine and say, “Now you’re on. Take remote control of this thing. Show me why it’s slow.” Or, “Troubleshoot why this query isn’t working.” If someone comes to me and says, “Well, I’m sorry, all I can do is use a tool,” like they only ever troubleshoot with Dell or Idera or Redgate and are lost without a third party tool, they’re not going to do well as a consultant, because we can’t rely on those tools either. When we parachute in, man, I have no idea what’s going to be happening on the other end. So it pays to know the native ways of doing things.


Idle chit-chat about smoking weed and your friend sp_WhoIsActive

Angie Rudduck: I think we have probably time for one more question. Did you guys see anyone while I scroll back and look?

Brent Ozar: Greg says he remembers that the tweets about tempdb stuff was four tempdb files per NUMA node. They were smoking weed too. I don’t know who that was.

Angie Rudduck: They must be in Portland.

Brent Ozar: Yeah, Denver, something like that.

Angie Rudduck: Someone chimes in, Eli says, “sp_WhoIsActive is your friend for monitoring,” to you, Ronny. That is a good point, we love…

Tara Kizer: WhoIsActive and the Blitz stuff.
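For anyone following along, both are free stored procedures you install yourself; typical calls look like:

```sql
-- What's running right now, with query plans included.
EXEC sp_WhoIsActive @get_plans = 1;

-- Overall server health check with prioritized findings.
EXEC sp_Blitz;
```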

Brent Ozar: Yeah, free tools. Pro tip: If you’re going to apply for work at our company, you may want to try using our tools. Know that they’re out there. If you come in and use someone else’s tools, it will not be a good time for you.

Angie Rudduck: Yeah, I agree.

Brent Ozar: Not that you ever need to know how to use our tools to work here. We teach you those too after you get here. But, yeah.


Is PostgreSQL better than SQL Server?

Angie Rudduck: Yeah. I feel like there was one that I… There’s a couple that are like…

Tara Kizer: Wes asks the same question, I think he wants—they’re 20 minutes apart.

Angie Rudduck: He really wants me to read his question. Wes, I’m going to tell you my answer is SQL Server pays our bills. Wes wants to know what our thoughts are on Postgres versus Microsoft SQL Server. SQL Server.

Tara Kizer: We’re SQL Server professionals, so our answer is going to be SQL Server. If you want me to support your system, I don’t do PostgreSQL, so I can’t support it. I mean, I could probably learn it, but I don’t really have any interest in learning it.

Brent Ozar: See, I don’t support it either. But I always try to learn about other stuff. There’s stuff that’s really cool about Postgres; unlogged tables is a classic example. If you search for Postgres on our site, we’ve written a couple of blog posts about different features in Postgres that we would want in SQL Server. But boy, at the same time, I kind of like parallelism. Man, Microsoft SQL Server has had parallelism for a long, long time. That’s kind of nice in today’s huge, multicore environments where 16 cores isn’t a big deal and 32 cores isn’t a big deal anymore. Parallelism is pretty freaking awesome. And they’re iterating super fast. So, yeah, I kind of like Microsoft SQL Server. If I was going to start a career from scratch: Microsoft is where it’s at in the enterprise environment, and Postgres is where it’s at in the startup environment. Well, thanks everybody for hanging out with us today and we will see you guys next week.

[Video] Office Hours 2016/06/01 (With Transcriptions)

This week, Brent, Angie, Erik, Tara, Jessica, and Richie discuss SSMS issues, security auditing, snapshot replication, SSIS Cache Connection Manager, AlwaysON Availability Groups, deadlocks, and Jessica’s trip to Mexico.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


Office Hours Webcast – 2016-06-01

Jessica Connors: All right, I guess we should be talking about SQL Server.

Erik Darling: Nah.

Brent Ozar: Oh no.

Erik Darling: Boring.

Jessica Connors: That one product.

Angie Rudduck: Meh.

Erik Darling: Snoozefest.

Brent Ozar: Which is out new today. So ladies and gentlemen, if you’re watching this, SQL Server 2016 is out right now. You can go download it on MSDN or the partner site. There’s places where you can go get it. Developer Edition is free, so you can go download the latest version right now. As we speak, Management Studio is not out yet but will be coming any moment.

Jessica Connors: That was our first question too: Is 2016 out yet?

Brent Ozar: Dun dun dun.

Jessica Connors: Are you hearing any rumblings on problems with it?


Brent Ozar: We all start laughing. There were a lot of problems with the community previews. For example, SQL Server Management Studio would crash every time I would close it. So I’m really curious. Usually you don’t see stuff quite this buggy as you get close to release. But at the same time, no one ever goes live with it in production the day it comes out anyway. People are just going to get widespread experience in development, in dev environments and QA, and then hopefully they’ll find bugs and get them fixed. Hopefully.

Angie Rudduck: Wait. So I shouldn’t install that in our production servers running everything?

Brent Ozar: Yeah, no. I would take a pass for a week or two. Just let things bake a little bit.

Jessica Connors: Let it just wait.


Jessica Connors: Let’s take a question from Dennis, SSMS question. Is there a way to have SSMS format the numbers in the output messages? Not data in the query, like the row count at the bottom?

Tara Kizer: What are you trying to solve here? Because this is a presentation layer issue. Management Studio is just a tool for us to query data; why does the formatting of the output matter? If you have an application you’re developing in .NET, format your data there. The row count at the bottom? No, there isn’t a way in Management Studio to format it. You can change the font and things like that in Tools > Options, but I’m not sure that that’s what you’re asking.

Richie Rump: Is there a way, Brent? Could we format that?

Erik Darling: One thing you can do, if you’re interested in just having commas, is cast it as money or convert it to money with a different culture, and you can get commas put in. But other than that, I’m not really sure what you’re after, so be a little more specific.

Brent Ozar: Or return it as results. Whatever you’re looking for, return it as results instead of looking at what comes out of SSMS’s messages pane. Then you can format it there as well.
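A sketch of the trick Erik describes: the money type with convert style 1 adds thousands separators, at the cost of turning the number into a string.

```sql
-- Style 1 for money formats with commas every three digits.
SELECT CONVERT(varchar(30), CAST(1234567 AS money), 1);  -- 1,234,567.00
```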

Jessica Connors: Dennis hasn’t replied to us.


Jessica Connors: Let’s go to Ben. He says, “[inaudible 00:02:24 old] to SQL. Hearing rumors about going to the cloud, MS, or Amazon, specifically in terms of security. What are the gotchas and pain points? Security is not our forte.”

Brent Ozar: This is so totally different from on-premises because on-premises you don’t have any security risks at all. No one could possibly access your data. I’m sure it’s locked down tighter than the pope’s poop chute. I mean it is completely secure as all get out. Just me, I’m usually like… Erik says, “Pull my finger.” I would say usually it’s more secure because you don’t go wild and crazy with giving everybody sysadmin. So I just turn it back to people on-premises and go, “So let’s talk about your security. Let’s go take a look at what you got. Everybody is SA. You haven’t changed your password in three years? Yeah, I think you should get out of on-premises. On-premises is probably the worst thing for you.” Nate says, “The pope’s poop chute? Really?” Yes. This is what happens when you work for a small independent company. You can say things like “tighter than the pope’s poop chute.” Probably can’t say that but we’ll find out later.


Angie Rudduck: You’ve already said it at least three times, we’re going to find out. You’re going to get an official letter from the pope.

Brent Ozar: The Vatican, yep.

Angie Rudduck: Yeah.

Brent Ozar: “The pope does not have a poop chute.”


Erik Darling: Going for a world record, most references to the pope’s butt in one webcast.

Angie Rudduck: Stop it.

Brent Ozar: Dad always said that to me, so yeah, there we go. Someone else should probably tackle the next question.

Richie Rump: Yeah, somebody else talk now, please.

Jessica Connors: Brent, I’ll just put him on mute.

Erik Darling: Looser than Brent’s…


Erik Darling: Wallet, wallet, wallet.

Angie Rudduck: Wallet on the company retreat.

Brent Ozar: There we go.

Jessica Connors: I’m glad it’s a short week.


Jessica Connors: Question from J.H. “Would creating a server trigger that emails our DBA team if someone makes changes to a server role be safe? We’ve heard triggers affect performance, but in our case server role changes are rare and we want to catch it if a network admin puts himself in the sysadmin role without letting us know.”

Tara Kizer: We had security auditing at my last job. I’m not too sure what was used. Well, I think the other DBA who set this all up, he just set up a job and queried for the information. Then the job would run every few minutes I believe and would send the DBA team an alert if anything changed.
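A sketch of the kind of polling query such a job might run every few minutes; persisting the result, diffing it against the previous run, and calling sp_send_dbmail on changes are left out:

```sql
-- Who is currently in the sysadmin fixed server role?
SELECT member.name AS login_name, member.type_desc
FROM sys.server_role_members AS srm
JOIN sys.server_principals AS role_p
    ON role_p.principal_id = srm.role_principal_id
JOIN sys.server_principals AS member
    ON member.principal_id = srm.member_principal_id
WHERE role_p.name = N'sysadmin';
```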

Brent Ozar: Yeah, I like that. My first reaction was Extended Events.

Tara Kizer: We had really strict auditing that we had to put in place due to the credit card information. It was encrypted but we had to be very careful with everything.

[Erik and Brent speaking at same time]

Brent Ozar: Would you say you had tight security? How tight was security? Go ahead, Erik, I dare you.

Erik Darling: Oh, sorry. I was going to say that you can use the EVENTDATA() function. You got me all flustered now. EVENTDATA() gives you the event as XML, and it’s pretty good for modification triggers like that. It’s not like putting triggers on tables where you’re doing massive shifts of data or capturing before-and-after stuff. It’s a pretty lightweight way to just log changes as they happen.
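A minimal sketch of the server-level DDL trigger Erik is describing. The audit table `master.dbo.RoleChangeLog` is made up and would need to exist first, and the event names vary by version (check `sys.trigger_event_types` on yours):

```sql
CREATE TRIGGER AuditServerRoleChanges
ON ALL SERVER
FOR ADD_SERVER_ROLE_MEMBER, DROP_SERVER_ROLE_MEMBER
AS
BEGIN
    DECLARE @e xml = EVENTDATA();  -- full details of the DDL event as XML
    INSERT INTO master.dbo.RoleChangeLog (EventTime, LoginName, EventXml)
    VALUES (SYSDATETIME(),
            @e.value('(/EVENT_INSTANCE/LoginName)[1]', 'sysname'),
            @e);
END;
```

Sending the email from a polling job against the log table, rather than inside the trigger itself, keeps the trigger cheap and avoids failing the role change if mail is down.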


Jessica Connors: Let’s see here. Question from Terry. “Is there a way to set up databases in an AG without doing a backup and restore?”

Erik Darling: Not a good one.

Tara Kizer: No.

Brent Ozar: 2016 there is. 2016 we get direct seeding where we can seed directly from the primary, so starting today you can. But unfortunately, not before today.
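The direct seeding Brent mentions is configured per replica; the AG and replica names here are placeholders:

```sql
-- On the primary, switch an existing replica to automatic seeding:
ALTER AVAILABILITY GROUP MyAG
MODIFY REPLICA ON N'SQL2'
WITH (SEEDING_MODE = AUTOMATIC);

-- On the secondary, allow the AG to create the seeded databases:
ALTER AVAILABILITY GROUP MyAG GRANT CREATE ANY DATABASE;
```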


Jessica Connors: All right, a security question. This is from Nate regarding security auditing. “Any suggestions on getting some basic setup that tracks and alerts for security changes and schema changes?”

Tara Kizer: I don’t know.

Brent Ozar: I don’t know either. Is there like an Extended Event or something you could hook into?

Tara Kizer: Probably. What we had set up for security would have just been queries, just to query for the information. Look at the system tables and views. For schema changes, I don’t know.

Angie Rudduck: I think somebody set up a simple, “Hey, there’s somebody new in this group” for a security group. I think it was PowerShell at my last place just to like all of a sudden somebody is in the DBA sysadmin group. How did you get there? It would fire off of one server in the domain but I don’t know anything about schemas.

Brent Ozar: Yeah, schemas are tricky because you can log DDL changes. The problem is if your trigger fails, then the change to the table can fail and that can be kind of ugly. You can also set up event notifications and dump stuff into a queue with Service Broker, but it is kind of challenging and kind of risky. If you want to learn more about it, search—god, I’ve got to type this woman’s name out—Maria Zakourdaev. So if you search for “event notifications and SQLblog,” that’s what you do: “SQLblog Maria.” SQLblog is all one word. Maria Zakourdaev, and I’m sure I’m butchering her name, from Israel has a post on how you go about setting up event notifications and how they break because they do break under some circumstances.

Erik Darling: Everyone mark it, not only with SQL Server 2016 release today but Brent recommended Service Broker.


Brent Ozar: It’s a great solution.

Richie Rump: You didn’t see the disdain on my face when he said that? You didn’t see that at all?


Jessica Connors: Let’s talk about snapshot replication from Trish L. “I have…

Tara Kizer: I’ve got to go get my coffee.

Brent Ozar: I know, we’re all like, “I’m out of here.”

Jessica Connors: Maybe we could tackle this. “I have snapshot replication which is scheduled to run one time per day but recently I’ve started to see blocking done by the snapshot replication. Do I need to [Inaudible 00:07:53] the distribution agent as well because it is running automatically now?”

Tara Kizer: I’m not sure about that, but the blocking, you’re going to encounter that because it has to lock the schema. That’s one of the last steps it does. So anytime you have to do that initial snapshot or reinitialize a snapshot, whether it be snapshot replication or transactional replication, and I assume merge replication too, it does block changes. Data changes, schema changes, you’ll see a lot of blocking as it’s going through the last bits of the snapshot creation.

Brent Ozar: What would make you choose snapshot replication? Like what would be a scenario where you’d go—or have there been any scenarios where you go, “Hey, snapshot replication is the right thing for something I encountered.”

Tara Kizer: I’ve never used it, but it could work if users are willing to accept that their data is a day old, let’s say. Any time that I’ve used transactional replication, they’ve wanted near real time data. They wanted zero latency. We couldn’t deliver that in replication. But yeah, snapshot replication, it just depends on what your user wants as far as the data goes.

Richie Rump: I’ve used it for reporting solutions.

[Richie and Erik speaking at the same time]

Jessica Connors: What?

Erik Darling: I was asking Tara if a different isolation level would help with that blocking.

Brent Ozar: Oh.

Tara Kizer: We were actually using RCSI so, yeah, it was definitely a schema lock. We definitely still had blocking.

Brent Ozar: Makes sense. It was probably worse without the snapshot isolation or RCSI, probably horrible.

Tara Kizer: It was very rare we had to do the snapshot but sometimes replication would be broken for whatever reason and we couldn’t figure it out and we’d just have to restart replication. Our database was large. It took like five to eight hours to do. Not the snapshot portion, the snapshot took like about 45 minutes I believe but there was a lot of blocking during that time.

Richie Rump: I like snapshot replication for reporting purposes, right? So again, you just dump the data over there and it’s okay that there’s a time delay for the reporting aspect and there’s your data.

Tara Kizer: I just wonder instead of snapshot replication if people should be, not backup and restore because that might take too long on larger databases, but a SAN snapshot, a daily SAN snapshot, because it’s just available right away. You don’t have to wait for anything.

Brent Ozar: No schema locks, it doesn’t matter what the volume of change data is, yeah.


Jessica Connors: While we’re on the hot topic of replication, there’s another one from Paul. “I am replicating a database using merge and had an issue where if the developers changed a procedure on the original database, the change would not be pushed to the replicated database. Replicate schema changes is set to true. Any guidance on the reason why the changes won’t replicate? I did a snapshot before initiating replication.”

Tara Kizer: So replicate schema changes has to do with the table changes, it does not have to do with stored procedure, views, functions, or anything like that. So if you do an alter table, add a column, that will get replicated if you have the replicate schema changes set to true but you would have to also have in a publication either your current publication or a different one to also replicate the stored procedures.
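To do what Tara describes and replicate the procedure definitions themselves, you add them to a publication as schema-only articles; the publication and procedure names here are placeholders:

```sql
EXEC sp_addarticle
    @publication   = N'MyPublication',
    @article       = N'usp_MyProc',
    @source_owner  = N'dbo',
    @source_object = N'usp_MyProc',
    @type          = N'proc schema only';  -- replicate the definition, not executions
```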

Brent Ozar: I wouldn’t do that in merge either. Like I would—if you’re going to change stored procedures, just keep them in source control and apply them to both servers.

Tara Kizer: Yeah.


Jessica Connors: Let’s move onto a question from Justin, SSIS Cache Connection Manager question. “I want to load several objects into cache, about one to five million records, but can’t figure out how to access that cache’s source of data. It’s quite a bit faster for us to load to a cache versus staging tables. Is this possible? If not, how would you store this?”

Brent Ozar: Have any of us used the caching stuff in SSIS? No, everybody is…

Tara Kizer: No, I’ve used SSIS a lot and have not used that.

Brent Ozar: The one guy I know who does is Andy Leonard. If you search for Andy Leonard SSIS, he’s written and talked about this. I know because it was in his book. I didn’t read the book, I just remember seeing the book. It was on my shelf at one time. It was a great paperweight. Smart guy, really friendly. Just go ask him the question, he’ll be able to give you that right away. Normally we’re all about, “Go put it on stack exchange.” Just go ask Andy. Just go “Andy Leonard SSIS” and he’s super friendly and will give you that answer right away.

Erik Darling: Tell him Brent sent you.

Brent Ozar: Tell him Brent sent you on this.


Jessica Connors: Question from Tim L. He says, “I’ve got an ancient Access expert here at my company. He has SA access. He has a lot of ODBC connections from multiple Access DBs into my 2008 R2 SQL Server. How do I find out what tables he updates? There’s nothing in terms of jobs or stored procedures that references his data pulls and updates.”

Tara Kizer: You could do an Extended Event, run a trace, add a trigger.

Brent Ozar: It’s 2008 R2 though. I like the trigger.

Angie Rudduck: I like cutting his access.

Richie Rump: I love that, “ancient.”

Tara Kizer: Yeah, why does he need SA access?

Brent Ozar: Just go ask him. He’s ancient. He’ll be a nice guy. He’s mellow by now. If you run a trace, that’s going to be ugly, performance intensive. The trigger will be intensive.

Erik Darling: Well you can at least filter the trace down to table name.

Brent Ozar: Well but if he wants to know what tables he’s doing, it’s going to be every time…

Erik Darling: Oh, never mind.

Brent Ozar: Yeah.

Tara Kizer: He could filter by his login at least, if that’s what it’s going through at least to connect to SQL Server.

Brent Ozar: And don’t try to log every one of his insert, update, or delete statements. Just put a record in a table the first time he does an insert, update, or delete, and then immediately turn off the trigger on that table, or the trace. But, yeah. That’s tough. Just go ask the guy. Go talk to the guy. It would be nice.
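A sketch of Brent’s log-once-then-shut-off idea. The table, trigger, and audit table names are made up, and it’s worth confirming on your version that a trigger can disable itself cleanly:

```sql
CREATE TRIGGER WhoWritesHere ON dbo.SuspectTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Record who touched the table, once
    INSERT INTO dbo.WriteAudit (TableName, LoginName, HostName, SeenAt)
    VALUES (N'dbo.SuspectTable', ORIGINAL_LOGIN(), HOST_NAME(), SYSDATETIME());

    -- We only needed the first catch; stop paying the per-statement cost
    DISABLE TRIGGER WhoWritesHere ON dbo.SuspectTable;
END;
```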

Erik Darling: Shoot him email.

Brent Ozar: Yeah, shoot him an email. Buy him a bottle of Bourbon.

Erik Darling: Yeah.

Brent Ozar: It’s a human being.

Richie Rump: Yeah, just don’t give away the wine. Right, Brent?

Brent Ozar: If you were going to give somebody wine, you should give them like Robert Mondavi.


Brent Ozar: He’s Access. He’s not, you know. That’s not true. Cliff Lede, ladies and gentlemen. This webcast is brought to you by Cliff Lede wines.
Jessica Connors: Do any of us participate in SQL Cruise?

Brent Ozar: I cofounded that with Tim Ford. Tim and I cofounded it and when we split up the consulting company versus the training and cruise-type business, I wanted to let him go do his own thing there and not be on it because I felt like I would kind of shadow in on it and make the thing murky. It is a wonderful experience. I strongly recommend it to anyone who thinks about going. It’s fantastic for your professional development. It’s limited to just say 20 attendees and like 5 to 10 presenters, so the mix, the ratio of presenters and attendees is fabulous. You get to hang out with them. You get to have dinners, from all of this you get to know them really well. So it can be a rocket ship for your career and it helps you really build networking bonds with not just the presenters but the other attendees who are there. The downside is you get to hang out with the presenters in hot tubs so that may be a pro or a con depending on what your idea of a good time is there. So it’s not for everybody but it is truly fantastic.

Erik Darling: Grant Fritchey in a speedo, ladies and gentlemen.


Jessica Connors: Do you still go on the cruise then? Are you done?

Brent Ozar: I don’t. I totally stopped doing that. I go off and do my own cruises. My next one is in Alaska in August I think, going on that one with my parents. But I haven’t done a technical cruise since. Most of the time what I like to do now is just go out on a cruise and not talk to anyone. I like to go out and sit and read books.

Erik Darling: You did Alaska before, right?

Brent Ozar: This is my fifth time I think, yeah. Absolutely love it. It’s gorgeous. I never was a snow kind of a guy but you get out there in the majestic snow and mountains and bears and all that, it’s beautiful.

Jessica Connors: Nice.

Angie Rudduck: Minus the jacket.

Brent Ozar: Yes.


Jessica Connors: Let’s talk to Graham Logan, he’s got some problems. He says, “SSMS crashes when expanding database objects in objects explorer. Database is about 1.2 terabytes and has about two million objects.”

Tara Kizer: Oh good lord.

Jessica Connors: But, he says, “[inaudible 00:15:43] design. It’s not mine. How to view all database objects without SSMS crashing.”

Tara Kizer: You just cannot use object explorer. You’re not going to be able to use object explorer. You can’t use the left pane in Management Studio. You’re going to have to write queries to see things. It’s very unfortunate but that’s a heck of a lot of objects in the database.

Brent Ozar: Before you expand the list, you have to right-click on the Tables node and click Filter. Then you can filter for specific strings, but without filtering, it’s useless… I’d go INFORMATION_SCHEMA.TABLES, INFORMATION_SCHEMA.COLUMNS, all kinds of stuff.
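The metadata queries Brent is reaching for look something like this; the name pattern is just illustrative:

```sql
-- Find tables matching a name pattern without touching Object Explorer
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE N'%Invoice%'
ORDER BY TABLE_SCHEMA, TABLE_NAME;

-- Or count objects by type to see what you're dealing with
SELECT type_desc, COUNT(*) AS object_count
FROM sys.objects
GROUP BY type_desc
ORDER BY object_count DESC;
```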


Jessica Connors: Kyle Johnson has a new one. “We have a 4.2 terabyte database with a single data file. I’m working on a plan to migrate to multiple ones. Shrinking the database to level out the data between files isn’t really practical with a six-hour window of no users. Have any other suggestions? Reindexing tables and specifying the filegroups to move the tables to?” From Kyle Johnson.

Brent Ozar: Not a bunch of good options here.

Erik Darling: Brent is getting ready to tell you about Bob Pusateri.

Brent Ozar: I was. You are psychic. You are phenomenally psychic. Tell us more. I want to subscribe to your newsletter.

Erik Darling: Bob Pusateri, which I feel like this webcast has been obscene enough without me saying that, has a blog post about moving file groups, a lot of the gotchas, and you know, bad things that can happen to you. I will track down the link for it and send it to you but I would not do it justice just explaining what goes on it, because it’s scripts and everything, so.

Brent Ozar: Bob had a 25 terabyte data warehouse with thousands of files in it because the prior DBA thought it was a good idea to create a separate file group for every employee and then later came to regret that decision so he has a great set of scripts on how you go about moving stuff around and keeping them online wherever possible. So it’s really slick. So you do that prepping leading up to the six-hour window so that your six-hour window is only dealing with stuff that you can’t do offline, like moving the LOB data if I remember right.


Jessica Connors: Question from Claudio. “I’m trying to understand the differences between the new AlwaysOn Basic Availability Groups in synchronous commit mode and mirroring in high safety mode, but they look identical except AlwaysOn seems more complicated to set up and manage. Are there any benefits to either solution in features, performance, licensing, reliability? Which one would you recommend we adopt?”

Tara Kizer: Database mirroring is being deprecated so you’re going to want to move over to the AG basic availability groups. Get on it now. It’s the replacement for database mirroring.

Brent Ozar: The drawbacks, so you’ve managed both too. What would you say the strengths of AlwaysOn Availability Groups are over mirroring and vice versa? That’s not a trick question, I promise.

Tara Kizer: Mirroring you’re not failing over groups at a time. You’re failing over a database at a time. So availability groups let you failover in groups which is good when you have an application with multiple databases that it needs.

Brent Ozar: To be clear, the guy is asking about Standard, so with basic availability groups you also only do one database at a time. You could script those, just like you would with mirroring. I’m trying to think if there’s anything else that would be… you have to have a cluster, but you don’t have to have a domain with mirroring. But you don’t need one with 2016 either. You can do it between standalone boxes.

Tara Kizer: With mirroring, if you want the automatic failovers, you need a witness. With AGs you do need a quorum, but it could be a file share on another server, you know, on a file server that you have, or a disk on a SAN could be the quorum. Mirroring’s witness does require another box or a VM, but it can be Express Edition.

Brent Ozar: Yeah, I used to be the biggest fan of mirroring. I’m having a tough time coming up with advantages as 2016 is starting here.

Tara Kizer: I did a lot of failovers with mirroring, log shipping, and then later availability groups and by far I like availability groups best for DR failovers. It was just so much easier. You just run a failover command and you’re done. With mirroring, you’re doing it database by database. Log shipping is, you know, all sorts of restores going on. Mirroring is certainly easy, definitely easy, but I like the slickness of availability groups and readable secondaries and the choice of asynchronous and synchronous.

Brent Ozar: Yeah, that’s where I was going to go too. Because even in Standard, you get choice between synch and asynch now. And you can use one technology that works on your Standard stuff and your Enterprise stuff so you only have to learn one feature instead of learning two. That’s kind of slick too.

Tara Kizer: When we used mirroring, we would use asynchronous mirroring to the DR site, then for the high availability solution at the primary site we used failover clustering. So availability groups solve both needs in one feature, plus reporting; we got rid of replication.
Jessica Connors: All right. Let’s move on to a question from Chris Woods, a regular attendee. He says, “Migrating an MDF with LOB data, L-O-B data, I don’t know how you say that, from one drive to another with minimal/no downtime. Can you use log mirroring to mirror it to a new database on the same server, then shut down the original during a quick downtime?”

Brent Ozar: You can’t do mirroring to the different database on the same server, can you? You can do log shipping, can you do mirroring to the same instance?

Tara Kizer: No.

Brent Ozar: You can do log shipping to the same instance. That will work. Your downtime will be super fast. Because what your failover process would look like is when it comes time for failover, you do a tail of the log backup up on the main database, then restore that tail-log on the other database. Rename the old primary as like, the old primary database just like “database old.” Then rename the new one as “database new” and then whatever the new database name is or the original database name is. So you could do that in like a 30-second outage. You don’t have to change connection strings because it’s all the same server still. So that’s kind of slick.
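A rough sketch of the failover step Brent walks through. The database names and backup path are made up, and it’s worth testing first that the rename of the tail-logged database works while it sits in the RESTORING state; if it doesn’t, dropping or detaching it instead also frees up the name:

```sql
-- AppDB is the original; AppDB_Copy is the log shipping target on the same instance.
BACKUP LOG AppDB
    TO DISK = N'X:\Backup\AppDB_tail.trn'
    WITH NORECOVERY;   -- tail-log backup; leaves AppDB in RESTORING so no more writes land

RESTORE LOG AppDB_Copy
    FROM DISK = N'X:\Backup\AppDB_tail.trn'
    WITH RECOVERY;     -- bring the copy fully online with every committed transaction

ALTER DATABASE AppDB MODIFY NAME = AppDB_Old;   -- park the original under a new name
ALTER DATABASE AppDB_Copy MODIFY NAME = AppDB;  -- copy takes over the original name
```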

Tara Kizer: If this is a SAN drive, even moving from one SAN to the next, we did all this stuff live. I don’t know what the technologies are called but we would move arrays live. The SAN administrators did some magic and it just copied over the data and once the copy was complete, it did a switcheroo between the two pointers, or, I don’t know what the technology was but the SAN can handle this without any down time.


Jessica Connors: Rob is adding a new instance to an existing active/active cluster. I think he’s talking us through his process so that we can say yea or nay. He says, “I would need to fail over the existing instances to one node, install the new instance on the node with no instances, service pack it, and fail over the instance to the node I was just on. Then run the install on another node, apply SPs, then rebalance the instances.” Does that sound about right?

Tara Kizer: It does but you know we don’t recommend active active clusters. What happens if you lose a node? I don’t at least. I’ve had four-node clusters where all four nodes were active. It’s just a nightmare. If you lose a node, can your other nodes support all of the instances at the same time until you get that other node fixed?

Brent Ozar: Richie is showing something on his iPad. What I would say is…

Erik Darling: It’s too bright.

Brent Ozar: I still can’t see it. We do recommend active active with a passive node.

Tara Kizer: Yeah, okay. Right.

Brent Ozar: Yeah, multi-instance clusters, just have a passive in there somewhere. Your scenario is exactly why you want a passive node laying around.

Tara Kizer: At least what you wrote out here for the question, yeah, that is the process.

Brent Ozar: Also known as miserable.

Tara Kizer: Yeah. At least since SQL Server 2008 we’ve been able to install on just one node. Prior to that, all nodes in the cluster had to be online and have exactly the right status for the installation, because the installation occurred across all nodes at the same time. Service packs, the engine, everything. On a four-node cluster, there’d always be one node that was misbehaving. It just says, “I need a reboot.” And you’d reboot it 20 times and it would still say, “I need a reboot.” Then finally that one would be okay and now another node would say, “I need a reboot.” It was just ridiculous. So I’m glad that Microsoft changed the installation process starting with 2008.

Brent Ozar: It’s like taking kids on a road trip. “Everybody ready…?” “No.”

Erik Darling: “I have to pee.”

Richie Rump: I got excited, I thought we had a Node.js question but I guess not.

Erik Darling: Never have, never will.

Richie Rump: Brent has.

Brent Ozar: I have.


Jessica Connors: Let’s take one more question. Let’s see here. “Good morning, Brent, Tara, Erik, Richie, and Angie. Yesterday we had a problem with the process that normally moves data from a queue table and deletes it after it’s done. This is a standalone database. We stopped the inflow of data but it didn’t help. We got thousands of deadlock alerts. I noticed that the disk queue length on the log drive is higher than usual. Here is a sample of the deadlock.” He provides it. “Is there anywhere I could look for this issue?”

Tara Kizer: If you’re getting deadlocks you should turn on the deadlock trace flag 1222, maybe run an Extended Event to capture the deadlock graph. Having just the deadlock victim isn’t enough to be able to resolve it.

Brent Ozar: It’s a separate technique I think that not a lot of database administrators get good at because it’s one of those things where you’re kind of like, “Hey, you should fix your indexes in your queries.” Then people go off and do their own thing. It’s one of those where when you do want to do it, it takes a day or two to read up and go, “Here’s exactly how the [Inaudible 00:25:00].” There’s also not a lot of good resources on our site for it. We don’t go into details on deadlocks either. Have any of you guys seen resources on deadlocks that you liked?

Erik Darling: I like just hitting Extended Events for it. The system health session has quite a bevy of information on deadlocks and you can view the graphs and everything which is pretty swell.
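The two capture options Tara and Erik mention, roughly:

```sql
-- Option 1: write deadlock details to the SQL Server error log
DBCC TRACEON (1222, -1);

-- Option 2: pull deadlock graphs the built-in system_health Extended Events
-- session has already captured (ring buffer target; older graphs age out)
SELECT CAST(st.target_data AS xml)
           .query('//RingBufferTarget/event[@name="xml_deadlock_report"]') AS deadlocks
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS st
    ON st.event_session_address = s.address
WHERE s.name = N'system_health'
  AND st.target_name = N'ring_buffer';
```

The XML from either source opens as a graphical deadlock graph when saved with an .xdl extension.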

Tara Kizer: I attended a session at PASS in 2014, Jonathan Kehayias from SQLskills, it was all about deadlocks. It was invaluable information. He went over different scenarios and stuff. He said that he loves deadlocks. It was like, whoa, I don’t know that anyone has ever said that before. But it was really great information. I haven’t looked at—I do read his blogs—but I suspect he’s got a lot of deadlock information on the blog to help you out.

Richie Rump: He also loves XML.

Brent Ozar: He loves XML and Extended Events. If you have a Pluralsight subscription. So Pluralsight has online training. I want to say it’s like $39 a month or something like that. I think Kehayias has a course on deadlocks. I’m not 100 percent sure, but search for SQL Server deadlocks, and if Kehayias has a course on there, it would be wonderful.

Erik Darling: Also, if you don’t have Pluralsight but you want to try it, Microsoft has a Dev Essentials site I believe where if you sign up for that, you get a 30-day free trial of Pluralsight and you also get Developer Edition and a copy of Visual Studio that’s free, Visual Studio Community or something for free. So it’s not just the Pluralsight subscription for 30-days but you do get a couple other goodies in there that last you a little bit longer.

Richie Rump: The course is called SQL Server Deadlock Analysis and Prevention.

Angie Rudduck: Somebody still has a Pluralsight account.

Jessica Connors: All right guys, that’s all we’ve got for today.

Brent Ozar: But thanks for hanging out with us. Man, time goes so fast now. Gee, holy smokes.

Erik Darling: And they’re sobering up.

Brent Ozar: Well, back to work. The Cliff Lede, ladies and gentlemen. Enjoy the High Fidelity. See you guys next week.


[Video] Office Hours 2016/06/08 (With Transcriptions)

This week, Angie, Erik, Doug, Jessica, and Richie discuss DB migration, rebuilding large indexes, recommendation for SQL dev ops tools, best practices for disabling SA accounts, compression, and more!

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


Office Hours Webcast – 2016-06-08

Jessica Connors: Question from Justin. He always asks us something. Justin says, “Is it advisable to remove the public role from being able to query sys.syslogins, sys.databases, and/or sys.configurations in master?”

Erik Darling: Advisable for what? I’ve never done it. I never cared that much. But I’m not like a big security guy. Any other big security guys want to talk about it…?

Doug Lane: Yeah, I’ve never done anything with public’s role and I’ve never seen it be a problem, but again, we’re not security experts.

Erik Darling: Again, we always recommend that when people ask sort of offhand security questions, Denny Cherry’s book Securing SQL Server is probably the go-to thing to read to figure out if what you’re doing is good or bad.

Jessica Connors: Yeah, Justin says that they got audited and [Inaudible 00:00:47]..

Erik Darling: What kind of audit was it that brought those up? I’d be curious.


Two Servers, One Load Test

Jessica Connors: Let’s move on to a question from Claudio. He says, “I would like to load test a new SQL Server instance with real production data. Is there anything we could put between the clients and two SQL Servers that will intercept the queries, send them to both SQL Servers, and return the response from only one?”

Erik Darling: Yes, and I also have a magic spell that turns rats into kittens. No. That’s a bit much and a bit specific. You’re going to have to come up with something else. If you want to get really crazy, you’re going to have to look at Distributed Replay and come back in three years when you finish reading the documentation.


How do I configure multi-subnet AG listeners?

Jessica Connors: Okay. Let’s see here. This is a long one from Richard. Let’s tackle this one. “I will be adding a remote DR replica, non-readable, to an existing local availability group on a multi-subnet cluster to be able to use the listener at the DR site. I know a remote site IP address will be added to the listener. Is there anything else that has to be configured in the availability group or cluster besides DNS and firewall rules?”

Erik Darling: Brent?

Doug Lane: Yeah.

Jessica Connors: Where are you, Brent?

Erik Darling: I don’t know actually. I would be interested so I want you to try it out and email me if you hit any errors because I would be fascinated.

[Angie Rudduck enters webcast]

Jessica Connors: Oh, hi.

Doug Lane: Oh, we heard Angie before we saw her.

Angie Rudduck: Thought I had my mute on.

Doug Lane: As for the AG mystery, we’re going to leave that one unsolved.

Jessica Connors: Unsolved mysteries.


How should I configure database maintenance tasks?

Jessica Connors: Question from David. He says, “For routine tasks, index maintenance, backup, etcetera, is it preferred to use agent jobs or maintenance plans? It seems to be the DBA preference. Any reasons to lean one way or the other?”
Erik Darling: Ola Hallengren. Angie, tell us about Ola Hallengren.

Angie Rudduck: Ola Hallengren is amazing. I tell every single client about Ola Hallengren. I used it at my last place in production across every server. You can do all backups: full, diff, log. You can do it separated for your user databases versus your system databases. You get your CHECKDBs in there, user versus system databases. You even get IndexOptimize, and even better, Brent, aka Erik, has a really good blog post about how you can use it to just do update stats, which is a great follow-up from his post about why you shouldn’t do index maintenance anyway, right? Just update stats. I love Ola. I’m working on a mini deck to pitch all of his stuff in one instead of just the indexing.
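The stats-only use of Ola Hallengren’s IndexOptimize that Angie references looks roughly like this, per the script’s documented parameters:

```sql
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow    = NULL,   -- NULL for all three fragmentation levels
    @FragmentationMedium = NULL,   -- means: skip rebuilds and reorgs entirely
    @FragmentationHigh   = NULL,
    @UpdateStatistics = 'ALL',     -- update both index and column statistics
    @OnlyModifiedStatistics = 'Y'; -- but only where rows have changed
```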

Erik Darling: Nice.

Angie Rudduck: But I’m too busy with clients.

Doug Lane: Plus, he’s a Sagittarius.

Angie Rudduck: Gemini.

Erik Darling: I’ve heard rumors that I’m a Scorpio but I’ve never had that confirmed.

Jessica Connors: Use your Google machine.

Doug Lane: [Imitating Sean Connery] Do you expect me to talk, Scorpio?



How do I set the default port for the DAC?

Jessica Connors: Let’s take one from Ben. He says, “Oh, SQL stuff. Here’s one. In old SQL, we had to set a registry key to set a static remote DAC port. Is there a better way in SQL 2012, 2014, 2016? What’s the registry key?”

Erik Darling: A static remote direct administrative connection port?

Jessica Connors: Mm-hmm.

Erik Darling: Weird. No, I don’t know, I’ve never done that.

Doug Lane: Yeah, me neither.

Angie Rudduck: What is old SQL? Like what version is old SQL?


Angie Rudduck: 2005?

Doug Lane: 2005 he says.

Erik Darling: Hmm, I don’t believe that’s changed much since then.

Richie Rump: Yeah, it sounds like a blog post you need to write, Erik.

Angie Rudduck: We’ve got something on the site about remote DAC because…

Doug Lane: That doesn’t say anything about the port though.

Angie Rudduck: No, but it’s pretty detailed, isn’t it? I don’t know maybe go check that out, Ben, and go from there. I think it’s just go/dac. I don’t know. I’m making up things now.

Erik Darling: D-A-C.

Jessica Connors: What’s the oldest version of SQL you guys have worked on?

Erik Darling: ’05.

Angie Rudduck: 2000.

Doug Lane: In Critical Care, ’05.

Angie Rudduck: Oh, yeah.

Richie Rump: No 6.5 people? No?

Angie Rudduck: Tara is not here.

Jessica Connors: Yeah, she’d probably have a story about the oldest version she’s used. She’s got the best stories.

Erik Darling: “It was on a floppy disk…”


Doug Lane: I worked on 7 once upon a time. I didn’t actually like do real work on 7, it was just, believe it or not, writing stored procedures in the GUI window.

Angie Rudduck: Query explorer or whatever it is?

Doug Lane: No, it was like the properties of the—it was crazy when I think back on it. There was like no validation of any kind except the little parse button. This was back when Query Analyzer and Enterprise Manager were separate and I was doing it in Enterprise Manager.

Angie Rudduck: We had a 2000 box at my last place and I knew nothing about 2000. I tried logging in there and I was like, “Wait, where is Management Studio?” That was really hard to try to figure it out. The management administrative part is really scary in 2000 and I was on the server directly. It was like already a precarious server about to tip over. So, scary.


What’s the best way to rebuild a 2-billion-row table?

Jessica Connors: Question from Joe. He says, “What is the best way to rebuild a very large index without taking outage or filling the log? Rebuilding after two billion record delete.”

Doug Lane: Oh, are you sure you need to delete two billion rows from a table?

Erik Darling: Maybe he was archiving.

Doug Lane: Yeah, I don’t know if you want to flag them as deleted and then move them out some other time or what, but, wow, that’s a lot of log stuff. You can do minimal logging if it’s a table that you really don’t care about it being fully logged on but there are disadvantages to that too.

Erik Darling: What I would probably do, I mean, if you’re not on Enterprise you’re kind of out of luck either way, right? There are no online index operations on Standard. You can help with the log backup stuff if you put it into bulk logged and continue taking log backups, but at that point, if anything else happens that you need to be recoverable after it starts bulk logging something, you’re going to lose all that information too. So bulk logged does have its downsides. It’s not a magic bullet. So depending on your situation, you might be in a little bit of a pickle. A better bet is, if you’re deleting two billion records, then depending on how many records are left over, you might just want to dump the stuff that you’re not deleting into another table and then do some sp_rename and switch things around.

Doug Lane: You can actually just drop the index and recreate it. Sometimes that goes a lot faster.
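[A sketch of the copy-and-rename approach Erik describes; the table, column, and date values here are hypothetical placeholders:]

```sql
-- Instead of deleting two billion rows in place, copy the survivors out
-- and swap the tables. This avoids logging two billion row deletes.
SELECT *
INTO dbo.BigTable_new
FROM dbo.BigTable
WHERE CreatedDate >= '2016-01-01';   -- the rows you want to KEEP

-- Recreate indexes and constraints on dbo.BigTable_new here, then swap:
BEGIN TRAN;
EXEC sp_rename 'dbo.BigTable', 'BigTable_old';
EXEC sp_rename 'dbo.BigTable_new', 'BigTable';
COMMIT;

-- Drop dbo.BigTable_old once you've verified everything is intact.
```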


Are there any problems with SQL role triggers?

Jessica Connors: Question from J.H. He says, “Anything to be aware of or downsides of setting up SQL role triggers, mainly sysadmin role changes?”

Erik Darling: All these security questions.

Doug Lane: Yeah.

Erik Darling: We bill ourselves as not security people.

Doug Lane: Like the one before, I think we’re going to punt on that.

Jessica Connors: Thomas Cline says, “No security questions.”

Angie Rudduck: Too bad the slides aren’t up.

Jessica Connors: Yeah.

Erik Darling: “For security questions…”

Angie Rudduck: “Please call…”

Erik Darling: Yeah, there we go.

Angie Rudduck: I’ll do them because it usually works for me.

Erik Darling: Attendees… staff… Angie. I’ll just mute you, just kidding. There we go. You are presenting.


What are the HA and DR options with Azure VMs?

Jessica Connors: All right, who wants to answer some Azure questions?

Erik Darling: Nope.


Jessica Connors: Does anybody here know the HA and DR options with SQL 2012 Standard in Azure VMs?

Doug Lane: Oh, no. Not me.

Erik Darling: Using a VM? If you’re just using the VMs, I assume it’s the same as are available with anything else. It’s only if you use the managed databases that you get something else but I think it’s mirroring either way. I know Amazon RDS uses mirroring.

Richie Rump: Yeah, I think they have like three copies and if one goes down it automatically fails over to the other two or something like that. Don’t quote me.

Jessica Connors: Okay, we’re all being quoted. We’re actually all being transcribed. We’re all being recorded. We’re all being watched.

Erik Darling: Really?


Is there a better solution for replication than linked servers?

Jessica Connors: Question from Cynthia. She says, “My developers have a product that uses linked servers for parameter table replication. I’ve read that linked servers aren’t the greatest. Is there another way to do this?”

Doug Lane: Okay, that’s actually kind of a two-part question because you’ve heard that linked servers aren’t the greatest. You’re right. So with SQL Server 2012 SP1 and later, you don’t have to blast a huge security hole in order to get statistics back from the remote side in linked servers. It used to be that you had to have outrageous permissions like db_ddladmin or sysadmin in order to reach across and get a good estimate when it then builds the query plan on the local side. That’s not the case anymore. The problem that you can still run into though is that where clauses can be evaluated on the local side. Meaning, if you do a where on a remote table, what can happen is SQL Server will bring the entire contents of that remote table over and then evaluate the where clause locally. So you’re talking about a huge amount of network traffic potentially. That’s what can go wrong with them. The other question, “Is there a better way?” That kind of depends on what flexibility the app gives you because you say that this is a product. So I don’t know if this is something that you have the ability to change or not, but if you’re talking about replicating from one side to the other, there’s any number of ways to move data from A to B.

Jessica Connors: And why do linked servers suck so bad?

Doug Lane: I just explained that.

Jessica Connors: Oh, did you? I didn’t hear you say why they suck so bad, sorry.

Doug Lane: Because you can end up with really bad plans either because permissions don’t allow good statistics or you end up pulling everything across the network just to filter it down once you’ve got it on the other side.
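[One common workaround for the local-filtering problem Doug describes, sketched with a hypothetical linked server and table names, is to send the whole query text to the remote side with OPENQUERY so the filter runs there:]

```sql
-- Four-part name: SQL Server may pull the entire remote table across
-- the network and apply the WHERE clause locally.
SELECT OrderID, OrderDate
FROM RemoteServer.Sales.dbo.Orders
WHERE OrderDate >= '2016-01-01';

-- OPENQUERY: the quoted query executes entirely on the remote server,
-- so only the filtered rows come back over the wire.
SELECT *
FROM OPENQUERY(RemoteServer,
    'SELECT OrderID, OrderDate
     FROM Sales.dbo.Orders
     WHERE OrderDate >= ''2016-01-01''');
```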


Are there any good devops tools for SQL Server?

Jessica Connors: Question from Joshua. This might be one for Richie. “Do you have any recommendations for Microsoft SQL dev ops tools?”

Richie Rump: There’s not a ton. I guess Opserver. I guess from Stack Overflow would be one of them but not that I know of that there’s like out-of-the-box ways to do that kind of stuff. I know when I was consulting with one firm, they had built their own dev ops tools. I think they had Splunk and then they just threw stuff out from SQL Server logs and then did a whole bunch of other querying to put dashboards up so they could do monitoring amongst the team and do all that other stuff. I think Opserver does a lot of that stuff for you but it’s a lot of configuration to get it up and running. I’d say test it out, try it out, and see if that works for you but I’m not aware of any kind of things you could buy and it’s kind of ops-y things. I don’t know, what do you think guys?

Erik Darling: I agree with you.

Doug Lane: I don’t live in the dev ops world.

Jessica Connors: I agree with you, Richie.

Angie Rudduck: Yeah, whatever the developer says.

Jessica Connors: What he said.


Should we disable the SA account and set the DB owner to something else?

Jessica Connors: Question from Curtis. He says, “I’m looking for a clarification on SA usage. sp_Blitz [inaudible 00:12:03] to having DB owner set to SA, not a user account. But what about the best practice of disabling SA? Should DB owner be set to a surrogate SA account?”

Erik Darling: Nope. It’s not really catastrophic, but it’s something that you should be aware of, because usually what happens on a server is someone will come in and restore a database, usually from an older server to the new one. They’ll be logged in with their user account so they’ll be the owner of that database. The owner of the database has elevated privileges in the database, equal to SA, which you may not want always and forever. That’s why SA should be the owner, even if it’s disabled, and the user account shouldn’t be. Even if the user is a sysadmin, you kind of just don’t want them to also be the owner of a database.
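[A quick sketch of how to find and fix this; the database name is a placeholder:]

```sql
-- Find databases not owned by sa
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases
WHERE SUSER_SNAME(owner_sid) <> 'sa';

-- Set the owner back to sa (this works even if the sa login is disabled)
ALTER AUTHORIZATION ON DATABASE::[YourDatabase] TO sa;
```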


How do I migrate databases in simple recovery?

Jessica Connors: Question from Monica M. “We are migrating and upgrading from SQL 2008 R2 to 2014. We use simple recovery as our reporting/analysis rather than OLTP. Our IT department said after I copy/restore the databases to the new server it will take them two weeks to go live. By this time, our DBs will obviously be out of sync. What simple method would be best to perform this move?”

Angie Rudduck: Every time I moved, we did some server upgrades where we just created a new VM and ended up renaming it to the old server name eventually but what we did was we took a full backup like the day before, hopefully, but if you have to do two-weeks, we took the full backup when we knew and then we took a differential right when we’re ready to make the cut over. So let’s say at 6:00 p.m. the maintenance window is open and I’m allowed to take the database offline. I put it in single-user mode. I took a differential and then applied that to the new server. Then took it out of single-user mode on the new server. Then we did all of our extra work. So it’s not perfect for two weeks of data change, so if you could keep applying the fulls until like the night before, that would give you a little bit better change over.
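[A sketch of the full-plus-differential cutover Angie describes. Database and backup paths are hypothetical; the key detail is that the full has to be restored WITH NORECOVERY on the new server so the differential can be applied on top of it later:]

```sql
-- Ahead of time, on the NEW server: restore the full, leave it restoring
RESTORE DATABASE ReportsDB
FROM DISK = N'\\backups\ReportsDB_full.bak'
WITH NORECOVERY;

-- At cutover, on the OLD server: quiesce and take a differential
ALTER DATABASE ReportsDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
BACKUP DATABASE ReportsDB
TO DISK = N'\\backups\ReportsDB_diff.bak'
WITH DIFFERENTIAL;

-- On the NEW server: apply the differential and bring it online
RESTORE DATABASE ReportsDB
FROM DISK = N'\\backups\ReportsDB_diff.bak'
WITH RECOVERY;
```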


Jessica Connors: Trying to find some questions here. You guys are real chatty today.

Erik Darling: Everyone is all blah, blah, blah, problems, blah, blah, blah.

Jessica Connors: “Here is my error…” They copy and paste it. I’m never reading those.

Erik Darling: “Here’s the memory dump I had.”

Angie Rudduck: Jessica likes to be able to read the questions and she doesn’t read SQL, so nobody reads computer. Nobody really reads computer, including us.

Erik Darling: “Yeah, I found this weird XML…”

Jessica Connors: Richie reads computer.

Angie Rudduck: That’s true, Richie reads computer.

Richie Rump: I was reading XML before I got on.

Angie Rudduck: That’s disturbing.

Erik Darling: Naughty boy.


How do I shrink a 1.2TB database?

Jessica Connors: Here’s a question from Ben. He says, “I have a large 1.2 terabyte [inaudible 00:14:51] queuing database. Added a new drive and a new file device. DBCC SHRINKFILE does not seem to be working on the original file. Seems that the queuing application reuses space before it can be reclaimed. Any suggestions?”

Angie Rudduck: Don’t shrink.

Erik Darling: I don’t know what you’re trying to do. Are you trying to move the file to the new drive or what are you up to? I don’t think you’re being totally honest with us here.

Angie Rudduck: Yeah.

Jessica Connors: But you shouldn’t shrink, huh?

Doug Lane: Spread usage across drives, okay.

Angie Rudduck: Maybe put it on one drive, I don’t know? I guess that’s hard to do with such a large file size.

Jessica Connors: 1.2 terabytes.

Erik Darling: So you have your database and you bought a new drive. Did you put like files or file groups on the new drive? Did you do any of that stuff yet?

Angie Rudduck: He says he has to shrink because the original drive is maxed and he needs workspace. I think it’s just not creating—maybe he has to do what you’re saying, Erik, about creating an additional file group to be on the other drive.

Erik Darling: Right, so what you have to do is actually move stuff over to that other file. So if you haven’t done it already, you have to pick some indexes or nonclustered or clustered indexes and start doing rebuild on the other file group.

Angie Rudduck: Then you’ll be able to clear out space to shrink your file.

Erik Darling: Hopefully.

Angie Rudduck: Maybe, yeah. Let us know next Wednesday.
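[A sketch of what Erik is suggesting, with hypothetical database, filegroup, and index names: add a filegroup on the new drive, then rebuild indexes onto it. Rebuilding the clustered index moves the table data itself, which frees space in the original file so the shrink has something to reclaim:]

```sql
-- Add a filegroup and file on the new drive
ALTER DATABASE QueueDB ADD FILEGROUP FG2;
ALTER DATABASE QueueDB
    ADD FILE (NAME = N'QueueDB_fg2', FILENAME = N'E:\Data\QueueDB_fg2.ndf')
    TO FILEGROUP FG2;

-- Rebuild the clustered index onto the new filegroup to move the table
CREATE UNIQUE CLUSTERED INDEX PK_BigQueue
    ON dbo.BigQueue (QueueID)
    WITH (DROP_EXISTING = ON)
    ON FG2;
```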


Has anybody played with SQL Server 2016 yet?

Jessica Connors: Have we played with SQL 2016 yet?

Erik Darling: Oh, yeah.

Doug Lane: Yep.

Jessica Connors: No? Some of you?

Erik Darling: Yes.

Jessica Connors: Have you played around with the 2016 cardinality estimator and do you know if it works better than SQL 2014?

Erik Darling: It’s the same one as 2014.

Jessica Connors: Is it?

Doug Lane: So there’s the new and the old. Old is 2012 and previous and the new is 2014 plus. There’s all kinds of other new stuff in 2016 but the cardinality estimator actually hasn’t been upgraded a second time.

Erik Darling: Yeah, Microsoft is actually approaching things a little bit differently where post 2014 with a new cardinality estimator, they’ll add optimizer fixes and improvements for a version but you won’t automatically be forced into using those. You’ll have to use trace flag 4199 to apply some of those. So even if you pop right into 2016, you may not see things immediately. You may have to trace flag your way into greatness and glory.
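[A sketch of the opt-in switches Erik mentions. The table name is a placeholder; trace flag 4199 is the optimizer-hotfix flag, and 2016 also added a database-scoped equivalent:]

```sql
-- Server-wide: enable post-RTM optimizer hotfixes for everything
DBCC TRACEON (4199, -1);   -- -1 = global

-- Per-query: scope the flag to a single statement
SELECT *
FROM dbo.SomeTable
OPTION (QUERYTRACEON 4199);

-- SQL Server 2016: the database-scoped way to get the same behavior
ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = ON;
```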


Are high IO waits on TempDB a problem?

Jessica Connors: Here’s a good question from Mandy. She says, “I’ve been on a SQL 2014 standard cluster with tempdb stored on SSDs for several months. The last few days we’ve been seeing a lot of alerts in Spotlight saying that we have high IO waits on those tempdb files. The IO waits are as high as 500 to 800 milliseconds. Is this a high value? I’m new to using SSDs with SQL Server and I admit that I just don’t know what high is in this case. Any thoughts?”

Doug Lane: It’s high but how frequent is it? Because if you’re getting an alert that like once a day that you’re hitting that threshold, it may not be something you need to worry about too much depending on what it is that’s hitting it. So what you want to do is look at your wait stats and look at those as a ratio of exactly how much wait has been accumulated versus hours of up time. If you’re seeing a lot of accumulated wait versus hours of up time, not only will you know there’s a problem but you’ll also be able to see what that particular wait type is and get more information about what’s causing it. Then you can put that together with what might be happening in tempdb and possibly come up with an explanation for what’s going on.

Erik Darling: Yeah. I’d also be curious if something changed that started using tempdb a whole lot more or if maybe you might be seeing some hardware degradation just after some time of use.
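[A sketch of the per-file latency check: dividing the accumulated stall time by the number of IOs gives average read and write latency per file. Database_id 2 is tempdb:]

```sql
-- Average read/write latency per tempdb file since the last restart
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(2, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;
```

Note these numbers are cumulative since startup, so a short burst of bad latency can hide inside a long stretch of good uptime; sample twice and diff if you want the recent picture.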


What should I do when my audit stops working?

Jessica Connors: Question from James. He says, “I’ve installed a SQL Server audit and noticed it stopped working. Is there any way to be alerted when a SQL Server audit stops or fails?”

Angie Rudduck: Is that the Redgate tool? Because I feel like Redgate had some auditing tool or encrypting tool that went out of support. When I was at my last place and we had to change over so I’m not sure what that is.

Doug Lane: If it throws a certain severity error then you can have SQL Server notify you of those kinds of things. But as far as like audit as a product, I’m not sure.
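[If this is the built-in SQL Server Audit feature rather than a third-party tool, one hedged option is to check the audit status DMV and hang a SQL Agent alert off the audit write-failure error. The operator name is hypothetical, and 33204 is the audit-could-not-write error number as best we recall:]

```sql
-- Current state of each server audit (STARTED / FAILED / etc.)
SELECT name, status_desc, status_time
FROM sys.dm_server_audit_status;

-- Agent alert on the audit write-failure error
EXEC msdb.dbo.sp_add_alert
    @name = N'Audit write failure',
    @message_id = 33204,
    @include_event_description_in = 1;
```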


Will backup compression compress compressed indexes?

Jessica Connors: Then we’ll move on to J.H. Says, “When compressing all tables with the page option in a database, does compressing its backup gain more compression?”

Erik Darling: Yes.

Angie Rudduck: Compression squared.

Erik Darling: Compression times compression. Are you really compressing all your tables to get smaller backups?

Jessica Connors: Is that really bad?

Erik Darling: No. It’s just kind of a funny way to approach it.

Doug Lane: I don’t know if that’s the purpose but…

Angie Rudduck: I think he has no drive space, tiny, tiny, tiny SAN.

Erik Darling: Buy a new thumb drive.

Doug Lane: Talk to Ben because he apparently has the budget to have new large drives.


Are there performance issues with SSMS 2016?

Jessica Connors: We have somebody in here that’s playing with SQL 2016. He says, this is from Michael, “SQL Server Management Studio 2016 sometimes goes into not responding status when using the Object Explorer window such as expanding the list of database tables. These freezes last around 20 seconds. Are there any known performance issues with SSMS 2016?”

Doug Lane: I found one. I was trying to do a demo on parameter sniffing where I return ten million rows of a single int-type column and maybe about half the time SSMS would stop working and it would crash and force the restart. So I think SSMS 2016, at least related to the RTM release, is a little bit flakey.

Jessica Connors: For now.

Erik Darling: Yeah, it might depend on just how many tables you’re trying to expand too. I’ve been using it for a bit and I haven’t run into that particular problem with just expanding Object Explorer stuff. So how many tables are you trying to bring back would be my question.

Angie Rudduck: I was just about to say, we had that question last week or the week before about SSMS crashing when they tried to…

Erik Darling: Oh, that’s right.

Angie Rudduck: Remember? They were trying to expand their two million objects.

Erik Darling: Yeah, that’s not going to work out well.

Angie Rudduck: So maybe this is the same person, different question.

Doug Lane: I was going to say I think it might just be a little…

Angie Rudduck: Yeah. It’s brand new, what do you expect? It’s a week old. It’s going to be flakey.

Richie Rump: Something to work when you release it?

Angie Rudduck: No, come on.

Richie Rump: I’m just saying, it’s a crazy idea, I know. I have all these crazy ideas but…

Angie Rudduck: Unrealistic expectations, Richie.

Erik Darling: That would require testing.

Jessica Connors: Richie has never released anything with bugs.

Angie Rudduck: Who needs to test things? I did have a client recently ask me what test meant when I was talking about test environment.

Jessica Connors: I know, what?

Richie Rump: What’s this test you speak of?

Erik Darling: Just for the record, Richie wipes cooties on everything he releases.

Angie Rudduck: Kiddie cooties.

Doug Lane: All right, looks like we’ve got two minutes left. Lightning round, huh?

[Group speaking at the same time]


What’s the best SQL Server hardware you’ve ever worked on? And the worst?

Jessica Connors: Question from Dennis. He wants to know, “Tell me the best SQL hardware environment that you have ever worked on.”

Doug Lane: I would say when we went down to Round Rock last year. I got to play with I think it was a 56-core server, that was pretty fun.

Erik Darling: Yeah, I think my best was 64 cores and 1.5 terabytes of RAM.

Richie Rump: Yeah, I had 32 cores and 2 terabytes of RAM.

Erik Darling: Nice.

Jessica Connors: What about the worst you’ve seen with clients?

Erik Darling: Ugh. Probably an availability group with 16 gigs of RAM across them. That was pretty bad. And it had like one dual core processor. It was pretty, yeah. It was Richie’s laptop.

Angie Rudduck: Worse than Richie’s laptop.

Doug Lane: That sounds about like the worst I’ve seen is like dual core, 10 or 12 gigs of RAM.

Angie Rudduck: 500 gigs of data.

Erik Darling: I’ve had faster diaries than that.

Jessica Connors: All right, well, we’re out of time.

All: Bye.