This week Brent, Erik, and Richie discuss truncating & repopulating tables, failed logins, their choices for high availability, PowerBI, read committed snapshot isolation, parameter sniffing, database mirroring, extended events for failing queries, and re-indexing large tables.
You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.
Enjoy the Podcast?
Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.
Leave us a review in iTunes
Office Hours – 12/20/17
Do nightly-reloaded tables need a clustered index?
Brent Ozar: Gordon asks, “Would creating a clustered index on a heap with a very high number of page locks make things better when truncating and repopulating?” That’s interesting.
Erik Darling: When truncating? No, I can’t see how it would help with truncating, and I can see how it might hurt with repopulating because then the data would get ordered by the clustered index. So for that kind of heap, if all you’re doing is truncating and reloading, I can’t really see much benefit to a clustered index; unless you’re doing something real clustery with it later. Like if you need to join that table off to something else later on – but even then, I would probably create and drop the clustered index as needed. I wouldn’t just have it on there all the time.
Richie Rump: Yeah, I’m assuming that if you’re doing truncates on that, there’s no foreign keys in other tables and things like that.
Brent Ozar: That’s a good point, yeah. So it’s one less reason to need nonclustered indexes on it.
Richie Rump: Correct.
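If you do end up needing a clustered index only for a downstream step, a minimal sketch of the create-it-when-you-need-it approach Erik describes might look like this (table and column names are hypothetical):

```sql
-- Hypothetical staging table; adjust names to your schema.
TRUNCATE TABLE dbo.StagingOrders;

-- Reload the heap without any index maintenance overhead.
INSERT INTO dbo.StagingOrders (OrderId, CustomerId, OrderDate)
SELECT OrderId, CustomerId, OrderDate
FROM   dbo.SourceOrders;

-- Only build the clustered index if a later step (like a big join) benefits from it...
CREATE CLUSTERED INDEX CX_StagingOrders_OrderId
    ON dbo.StagingOrders (OrderId);

-- ...and drop it again before the next truncate-and-reload cycle.
DROP INDEX CX_StagingOrders_OrderId ON dbo.StagingOrders;
```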
Erik Darling: Just do column store for everything. There’s no need for any other kind of index at all, ever; just column store everything. Partition and column store, and make sure that you’re loading your data as JSON, I think, is just the key to database success these days.
Brent Ozar: Anything that that doesn’t work for, you should use Hekaton.
Erik Darling: Yeah, totally, or in-memory column store with JSON, I think, is just…
Richie Rump: I like DynamoDB, so let’s do JSON with DynamoDB. How’s that; we good?
Brent Ozar: Cosmos, you’re not remembering the current keywords…
Richie Rump: You know, Microsoft Cosmos, that’s right.
Brent Ozar: Yeah, yeah, and this is why nobody lets you into the MVP club.
Erik Darling: You spent too much time in RDS, man.
Brent Ozar: We don’t use the Cosmos word.
Richie Rump: Yeah, it wasn’t the successful podcast or the 1,500-people user group; that had nothing to do with that. I think it was working for you, Brent. I think that’s what…
Brent Ozar: It’s probably true; guilt by association.
Where can I track down failed logins?
Brent Ozar: John says, “Blitz told me to add alerts, so I did. I am now seeing alerts. I’m getting alerts for severity 16 and severity 20. Looked in the error log and I’m seeing this alert information as well. Where can I find out more about the logins that are trying and failing with an SSPI handshake error?”
Erik Darling: I’m not good at that one.
Brent Ozar: If you wanted to, you could start running an extended events trace to catch failed logins, for example. I would just say check to see the time of day that it happened. If it’s happening at the exact same time every day, that’s really common for penetration testing tools. I’ve seen stuff flag at a certain time every day. And if it’s not your biggest concern, feel free to leave off those sev20 errors if you’re not worried about those.
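If you do want to chase it down, a minimal extended events sketch for catching failed logins might look something like this (the session name, file path, and the exact error numbers you filter on are assumptions – 18456 is the classic login-failed error and 17806 is the SSPI handshake failure):

```sql
-- Hypothetical session name and file target; adjust to taste.
CREATE EVENT SESSION FailedLogins ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.client_hostname, sqlserver.client_app_name, sqlserver.nt_username)
    WHERE error_number = 18456   -- login failed
       OR error_number = 17806   -- SSPI handshake failed
)
ADD TARGET package0.event_file (SET filename = N'FailedLogins.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION FailedLogins ON SERVER STATE = START;
```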
When I need to restore 10 tables, how do I handle foreign keys?
Brent Ozar: J.H. says, “When I restore around ten tables – not the entire database – those ten tables have foreign key dependencies. Any handy tricks that would help me track down all the other foreign keys in other tables, dropping those keys, inserting the data and then recreating the keys?”
Erik Darling: So how are you only restoring certain tables? Are you using a third-party tool or are you doing a select into a different database? Obviously, there’s no object level restore natively with SQL Server, so that would be my first thing. I do know that Aaron Bertrand has a pretty nifty script out there on the internet that will script drop and create statements for all of your foreign keys; so that might be one way to go. If you wanted to just script it out and do it, I don’t know if it will work for absolutely everything, but it’s Aaron, so I trust it for the most part.
Brent Ozar: I agree with that.
Richie Rump: I don’t know. I get kind of [crosstalk] when you start talking about dropping foreign keys, then reloading data, then applying the foreign keys. I mean, are we sure that data hasn’t been removed and what kind of – because when you reapply those foreign keys and data has been removed in the child table, or the parent table, then it’s not going to be there. You’re not going to be able to apply that key back.
Brent Ozar: I’d be curious as to how often you have to do it too. You can tell we like this question a lot. Not that we don’t like all the questions; you’re all wonderful snowflakes that are going to melt. But this is really interesting – how often do you have to do it? Is it part of a regular data warehouse type process or is it something that you only have to do every now and then for emergencies? That’s really cool.
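If you’d rather roll your own than use Aaron’s script, a minimal sketch for generating the DROP statements from the system catalog might look like this (the table list is hypothetical, and a full version like Aaron’s would also script the matching ADD CONSTRAINT statements before you drop anything):

```sql
-- Generate DROP CONSTRAINT statements for every foreign key that
-- references one of the tables you plan to reload.
SELECT N'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(fk.schema_id))
     + N'.' + QUOTENAME(OBJECT_NAME(fk.parent_object_id))
     + N' DROP CONSTRAINT ' + QUOTENAME(fk.name) + N';'
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id IN
      (OBJECT_ID(N'dbo.Customers'), OBJECT_ID(N'dbo.Orders'));  -- tables being restored
```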
What high availability feature is the best?
Brent Ozar: Ron says, “What is your choice for high availability? I can tolerate up to one minute of downtime.”
Erik Darling: I would say, you know, ye olde failover cluster is my favorite thing in the world.
Brent Ozar: And what do you like about failover clusters?
Erik Darling: They’re not availability groups is primarily what I like about them. Just for the environments that I’ve been a part of, they’ve made the most sense, because we already had the SAN, we already had the shared storage. It was fairly easy to set up and get all that stuff running. You know, with an AG, we would have had to have separate storage and, you know, probably messed stuff up that way. But that failover cluster gave us pretty quick failover, even with dozens of terabytes on there. It was pretty easy to manage quorum and all of that, once you got your head wrapped around it. And since it was a two-node failover cluster and both were in the same data center, I didn’t have to worry too much about crazy split-brain stuff happening off somewhere else.
Brent Ozar: I highly recommend checking out our post, the 9 Letters that Get DBAs Fired. And we walk you through how to interpret the right HA and DR mechanisms for your goals… Methanisms – crystal methanisms…
Erik Darling: Method Mans…
Richie Rump: You got that click bait thing nailed down tight, man. There’s a lock on that.
Brent Ozar: I’m telling you, I love over hyping and sensationalizing things; it really gets people’s attention. It’s really good stuff.
Erik Darling: And sensualizing as well, not just sensational…
Brent Ozar: Making them sensational. Ron Saxer notices the gremlin on there; oh yeah.
What tool should I use instead of Access?
Brent Ozar: Let’s see here, James May – and I hope this is the real James May, because that would be really cool if he was on the webcast…
Erik Darling: You mean Jimmy May, right?
Brent Ozar: That too. Wow, I never thought that Jimmy May has James May’s name.
Erik Darling: Yeah, they made him put his full name in the GoToWebinar thing.
Brent Ozar: That’s incredible. He says, “I have an Access problem. [crosstalk] We have to build a whole new data mart with over 100 reports and I’d like to put it in literally anything other than Access. What tool would you recommend, and how would you go about winning over the Access guy?” I have the answer – PowerBI. I’m all about PowerBI. I think PowerBI… Your faces are so – what?
Richie Rump: Oh, there’s still the Access guy; he’s still around? He didn’t die of old age?
Erik Darling: Hide his dentures.
Richie Rump: The one Access guy, he has a job. Mind you, I’m the only one with an Access certification on the team.
Brent Ozar: This is true. So as someone with an Access certification, Richie, what’s your opinion?
Richie Rump: I don’t know; I stopped using Access 20 years ago.
Brent Ozar: I would totally go down the PowerBI route. PowerBI – Microsoft is investing in it, it looks gorgeous – it looks really pretty. There are a ton of gotchas, like joins between multiple tables – you can only use one field to join between them, so you have to create a calculated field to handle multi-column joins. There’s a lot of gotchas, but I really like where they’re going with it. But if the Access guy can get it to work, I’m also kind of like, go…
Richie Rump: Yeah, okay, so if you’re building a whole new data mart with 100-plus new reports, yeah, definitely go with PowerBI. You could put that online, you could view it on mobile – I mean, there’s a whole bunch of other good stuff that’s kind of in there that you’re going to want to utilize. And as soon as the executives see it on their phone, they’ll be like, “I want more of this. This is what I want.” When I was at a previous company, eight years ago, when we were able to put it on an iPad, all the executives just lost their minds. It’s the same thing with the phone. So now I can get access to all my reports and it’s real time – or near real time – and all my data is right there, so I can ignore my family while I’m sitting there checking up on financial reports. They’re going to want that.
Why is RCSI not enabled by default?
Brent Ozar: Eric Swiggum asks a really good question. He says, “Why is read committed snapshot isolation not enabled by default? I call foul.”
Erik Darling: I don’t disagree.
Richie Rump: Change it to the default.
Brent Ozar: That’s true, make it the default on the new servers that you build. So for those of you who haven’t played around with RCSI, it allows readers to not block writers and writers to not block readers. We have a page on it if you go to BrentOzar.com/go/RCSI, originally written by Kendra Little, who has a phenomenal set of isolation level stuff over on her own site, LittleKendra.com. She’s got a new training site called SQLWorkbooks.com as well. She’s really into isolation level stuff. She has a webcast coming up on February 8th, Snapshot Isolation on AG Secondaries. But we’re 100% with you – it’s the default in Azure, it’s what’s used on availability group secondaries, all kinds of cool stuff there.
Richie Rump: So how can you make it the default, Brent? Which way would you do that on your own servers?
Brent Ozar: Let’s see. So I would right-click on a database and go into properties, and then there’s isolation levels inside there that you can change to read committed snapshot – why would I do that? Hold on, we have a guidepost on this.
Richie Rump: I guess that softball was more like a curve.
Brent Ozar: I’m looking over at Slack and all of a sudden I’m like, “No, bad idea…”
Richie Rump: It was a Wiffle Ball; you thought it was going straight and then it curved.
Brent Ozar: Oh, bad idea. So read down through this – and the one thing that’s a gotcha with enabling RCSI is it’s an alter to the database and you have to be the only active connection. So you have to put the database in single user mode. So just [inaudible] something that you want to think about doing after hours.
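For reference, a minimal sketch of that alter (the database name is hypothetical, and WITH ROLLBACK IMMEDIATE is one way to deal with the active-connections gotcha Brent mentions, since it kicks the other sessions out and rolls their work back – still an after-hours kind of move):

```sql
-- Hypothetical database name; run this during a maintenance window.
ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;

-- Confirm the setting afterwards.
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'YourDatabase';
```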
Richie Rump: Is that one of those things that you could add to model?
Brent Ozar: I believe so. I don’t know why it wouldn’t be.
Richie Rump: That would be my first guess. If you wanted the default, just throw it into model.
Brent Ozar: Let me get the question list back out because I’ve thoroughly hosed up my screen.
Erik Darling: If I was starting a brand-new app, I would 100% start everything off with using RCSI from the get-go. I wouldn’t do anything else.
Brent Ozar: Just get that locking problem out of the way before you get started.
Richie Rump: Yeah, absolutely, I agree 100%.
How do I encrypt all connections with TLS?
Brent Ozar: J.H. says, “My application team is requesting all SQL Server communications to be secured. What would need to be done from a DBA end? What are the differences between TLS 1.2 and forcing the encryption protocol?”
Erik Darling: Oh boy…
Brent Ozar: Aaron has a post on this as well, if I remember right. There are a whole lot of gotchas that are terrifying. I would start here. So search for Aaron Bertrand SQL Server support for TLS 1.2.
Why is my query slow in the application, but fast in SSMS?
Brent Ozar: Abdel says, “I have a situation where the stored proc is slower than running the SQL from inside the stored proc just by itself.” So he runs the stored proc from one place, then he copies out the T-SQL, pastes it into SSMS, and it runs faster there. He says, “I would expect the stored proc to run faster. Have you seen this before, and what should I look for in these types of situations?”
Erik Darling: Mark it.
Brent Ozar: We’re 13 minutes in?
Erik Darling: Yep.
Brent Ozar: I’ll let you answer. [crosstalk]
Erik Darling: You left the web browser up; honestly… So what you’re most likely running into is something called parameter sniffing, and there’s a great post by a fellow named Erland Sommarskog, and it’s all about why that might be happening to you. It’s a long read, but it’s well worth it because – this is one of those career investment reads. This is like getting a college degree in something that I wouldn’t know anything about.
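As a rough illustration of the kind of thing Erland’s article digs into (the table, procedure, and parameter names here are made up), the cached plan gets built for whichever parameter value runs first, and every later call reuses it:

```sql
-- Hypothetical procedure: the plan compiled for the first @CustomerId
-- that runs gets reused for every other @CustomerId afterwards.
CREATE OR ALTER PROCEDURE dbo.GetOrders
    @CustomerId int
AS
BEGIN
    SELECT OrderId, OrderDate, TotalDue
    FROM   dbo.Orders
    WHERE  CustomerId = @CustomerId;
END;
GO

-- If a tiny customer compiles the plan first, a huge customer inherits
-- a plan shaped for a handful of rows (or vice versa).
EXEC dbo.GetOrders @CustomerId = 1;        -- say, 10 rows
EXEC dbo.GetOrders @CustomerId = 99999;    -- say, 10 million rows, same plan
```

One blunt workaround is OPTION (RECOMPILE) on the problem statement, but read Erland’s post on the trade-offs before reaching for it.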
I want to do minimally logged bulk loads…
Brent Ozar: Alright, next up, Mahesh says, “Is it possible to perform minimally logged bulk loads into a table with clustered and nonclustered indexes? I turned on trace flag 610, but it is not helping in my situation.”
Erik Darling: Nope, there’s a whole website written about this stuff by Microsoft, and you can’t have nonclustered indexes on a table and get minimal logging. It has to be a heap or it has to be an empty clustered index. Like, if the clustered index already has stuff in it, then you can’t get it. So there’s a whole lot of gotchas around it. Minimal logging is one of those, “What if it’s Tuesday on Mars,” performance benefits to me. When it works on an insert, it’s beautiful, but it works under such limited conditions. I see a lot of people who go into the simple recovery model or go into bulk-logged and it’s still not working for them. And I’m like, “Well, bad things happened; you didn’t do something right. You didn’t follow the steps here.”
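A minimal sketch of the kind of setup where minimal logging can kick in, assuming a heap target with no nonclustered indexes and a database in the SIMPLE or BULK_LOGGED recovery model (database, table, and column names are hypothetical – see Microsoft’s Data Loading Performance Guide for the full matrix of conditions):

```sql
-- Recovery model must be SIMPLE or BULK_LOGGED for minimal logging.
ALTER DATABASE StagingDB SET RECOVERY BULK_LOGGED;

-- Target is a heap with no nonclustered indexes, loaded with TABLOCK.
INSERT INTO dbo.StagingSales WITH (TABLOCK)
        (SaleId, SaleDate, Amount)
SELECT   SaleId, SaleDate, Amount
FROM     dbo.SourceSales;
```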
Should I use database mirroring for SharePoint?
Brent Ozar: Alright, let’s see, Barab says, “I have SharePoint and I want to do basic database mirroring; just plain old database mirroring. Have you seen this with your customers and are they able to get SharePoint to work after breaking the mirrors on the DR side?” Oh man, it’s been years since I’ve done this.
Erik Darling: I don’t have any practical experiences with it, but this reminds me of – something that I hear a lot of people say when they get into weight lifting and stuff is, “I want to be 300 pounds and shredded.” And I’m like, it’s really hard to do either one of those things. Like it’s hard to be 300 pounds and it’s hard to be shredded. You’re talking about mixing up two things that are kind of hard to mess with individually, never mind like putting them together and making them work at the same time. There’s just not a lot of people out there doing that. I wonder why mirroring and not the AG. Like if you’re going to roll that out these days, I wonder what – like practically why you would do that.
Brent Ozar: Maybe no Windows failover clustering experience, I’m guessing. Yeah, I think we would prefer AGs over database mirroring if we could. Basic availability groups are included in Standard Edition starting with SQL Server 2016.
Erik Darling: If AGs are out of the question, I might even prefer log shipping to mirroring in that situation, just because I know log shipping works, I know how solid it is and the interoperability of the two things is a little bit less questionable.
How would I catch failed SQL statements?
Brent Ozar: Let’s see, James May, our favorite Top Gear presenter, pops back in with, “I’m getting a very intermittent level 16. Once every couple of days a SQL statement is failing, but the logs are getting backed up before I can jump in and query it. How would you go about catching this bug?” I would be curious…
Erik Darling: We have a blog post about that. We have a blog post about – Kendra wrote it – it’s about extended events and catching failing queries. It’s called like Queries Failing in My Database or something.
Brent Ozar: This is what happens when you have thousands… Well, look at that, unbelievable. So assuming you can run extended events – you’re on whatever version supports that or higher – it’s super lightweight for stuff like this. As long as you don’t have a lot of queries failing, it should be pretty easy. Usually, we’re good about having the – there we go.
Erik Darling: I’m just going to throw a couple of things out there, since the question is still up on the screen. You wouldn’t really want to query the transaction logs to find that sort of thing. That’s not where you find queries that fail, because the transaction log is for modification queries that finished. They wouldn’t be in there; you would most likely just want to look at the error log, not the transaction log. If something is cycling the error logs then you can just look at the prior log because it’s probably not deleting out everything, unless you have some really weird error log cycling thing going on. But if I just wanted to have all the data isolated in one place, I would definitely just go the extended events route.
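A minimal sketch of that extended events route, along the lines of Kendra’s post (the session name and file path are made up; the severity filter matches the level 16 errors James is seeing):

```sql
-- Capture the text of queries that raise severity 16+ errors.
CREATE EVENT SESSION FailingQueries ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.sql_text, sqlserver.username, sqlserver.client_app_name)
    WHERE severity >= 16
)
ADD TARGET package0.event_file (SET filename = N'FailingQueries.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION FailingQueries ON SERVER STATE = START;
```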
Brent Ozar: Makes sense. The other reasons that I’ve seen sev16s, the query is too complex and doesn’t produce an execution plan, the query optimizer ran out of resources or whatever. At that point, I’m like, “I don’t even care about capturing the query most of the time. Whoever is having a problem with this query, they can bring it to me and we’ll work together on tuning it, but I don’t want to proactively go looking for a problem like that.”
Erik Darling: It’s going to be one of those in clauses that breaks like the 64MB limit or something. “You put 65,000 nested [inaudible]… What happened?” I don’t know, good lord. It was a case statement with more than 128 nested levels. Like, well, perhaps it’s time to revisit the logic of this query.
Richie Rump: It’s a recursive query, it just keeps calling itself.
Erik Darling: I saw someone using recursive scalar functions last week. So it was like – so it was a scalar function – no it gets better – it’s a scalar function that parses XML. And if it doesn’t find, I think, a parent node, then it passes the next node in and looks for a parent node for that recursively. So it’s like function, does stuff, parses XML. If this is NULL, it recalls the function with the next [crosstalk]. And I’m looking at this, and I’m just like – [crosstalk] client. So I’m looking at this function and I’m like, “Let’s run BlitzCache.” So we run BlitzCache with XPM and this function was running 80,000 times per minute. [inaudible] no, start over again. No, start over again. And we’re looking at the XPM just like – okay, so imagine like one of your cores is dedicated to this query. You are paying $7000 to run this [crosstalk].
How long will an index rebuild take?
Brent Ozar: J.H. says, “When I’m planning on re-indexing large tables, is there any way to estimate time to complete? We’re trying to avoid locking and blocking users or other processes.”
Richie Rump: But what if you’re trying to do popping and locking?
Brent Ozar: I’m not even going to try and … by doing popping and locking onscreen.
Erik Darling: Dead man walking.
Richie Rump: That was a break dance reference, for anybody who wasn’t alive during the 80s. Just put that out there…
Brent Ozar: So boy, the thing that’s really awesome in there is the new ability to rebuild indexes at low priority; starting in SQL Server 2014. I guess it is an Enterprise Edition only feature, since that’s the online option.
Erik Darling: 2017 has a resumable online index rebuild. I don’t think we blogged about that. Generally, we don’t spend too much time blogging about ways that people should be rebuilding their indexes; that’s the thing. Try not to give too much advice on it…
Brent Ozar: That is true. So this is the way to do it at low priority, and you don’t block anybody else.
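A minimal sketch of that low-priority rebuild syntax (table and index names are hypothetical; ONLINE and WAIT_AT_LOW_PRIORITY are Enterprise Edition features starting with SQL Server 2014, and RESUMABLE needs SQL Server 2017 or later):

```sql
-- Wait up to 5 minutes for locks at low priority; if still blocked,
-- the rebuild gives up (SELF) instead of blocking everyone else.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (
    ONLINE = ON (
        WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = SELF)
    )
);

-- SQL Server 2017+: make the rebuild resumable so it can be paused and resumed.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (ONLINE = ON, RESUMABLE = ON);
```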
Can Richie demo a pop and lock?
Brent Ozar: Daryl asks, “Can Richie please demo a pop and lock?”
Erik Darling: Yeah, go for it, Richie.
Richie Rump: Nope.
Erik Darling: Do it on your couch so you…
Richie Rump: Nope. Like somewhere around 1984 there was a movie called Breakin’. Maybe you need to pull that one out. It has some really great characters like Special K and Ozone, Turbo, Jean-Claude Van Damme is one of the dancers. I mean, it will give you a real nice flavor for the 80s and what you missed.
Erik Darling: Turbo was bad as hell, man.
Richie Rump: Turbo was the man.
Erik Darling: What was the other good one? There was Breakin’, Beat Street.
Richie Rump: Beat Street, Breakin’ 2: Electric Boogaloo.
Erik Darling: Those are all fine movies that featured breakdancing heavily.
I changed the service account, but…
Brent Ozar: Hanan says, “I changed the service account running SQL Server via the configuration manager. After that change, the instance is running and everything seems okay, but when I check the local policies GUI, the new account has not been granted any of the required rights like log on as a service, bypass traverse checking, et cetera. Is there something I need to set up manually?”
They’re supposed to be taken care of by Configuration Manager, which leads me to think that either you checked on the wrong server – like you ran Configuration Manager on the wrong server – or maybe you weren’t logged in as somebody with admin rights. I mean, there could be a long list of things that went wrong. What I would try is: go try it again. Go try to change the logon account again. It will have to wait until a restart before it takes effect, but then at least you can see if the permissions are getting granted.
Erik Darling: I was just breakdancing; I don’t know if you all missed it?
Brent Ozar: Oh, that’s what it was. Oh…
Erik Darling: You didn’t see? Because I did some really awesome stuff; I actually just have a … desk for other reasons.
Brent Ozar: Ah, no, for once you had the webcam above the desk turned on instead of the one below the desk. I think we’ve asked for that one to be permanently deactivated.
Richie Rump: Especially since he’s not wearing any pants, yeah.
Erik Darling: I actually still have my morning sweatpants on.
Brent Ozar: You have separate sweatpants for the afternoon?
Erik Darling: Yeah, who doesn’t? You don’t have afternoon – what am I, a farmer? Of course I have separate sweatpants for the time of day. [crosstalk].
Richie Rump: I’m like, “Sweatpants, what are those? I have no idea”
Erik Darling: They’re what you go to the gym in, Richie.
Richie Rump: Jim lives down the street, bro.
Brent Ozar: I should put in a plug for Sleepy Jones. Sleepy Jones has phenomenal pajamas. This is pretty much what I’ve been living in these days when I don’t have to go get on a webcast. So I can’t say enough good things about Sleepy Jones Pajamas. This webcast is brought to you by Sleepy Jones Pajamas.
Erik Darling: What the hell just happened?
Brent Ozar: Thanks, everybody, and we will see y’all next week at Office Hours. Adios.