
[Video] Office Hours 2016/04/20 – Now With Transcriptions

This week, Brent, Erik, Jessica, Richie, and Tara discuss database modeling tools, how to learn about database corruption, the new cardinality estimator, and the only question that will be on our certification exams.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

What ERD Tool Should I Use?

Jessica Connors: All right, question from Lee. Let’s jump in there. He says, “We have a system where the vendor went belly up. Now I am tasked with getting an ERD for the database so we can move it to a new system. What tools, free if possible, would you suggest?”

Richie Rump: Oh, uh-

Tara Kizer: I mean, Management Studio has it built in, the ERD, but I’ve always, anytime I’ve used the ERDs, I’ve always used Erwin or whatever.

Richie Rump: Technically, that’s not an ERD. That’s just a diagram, right?

Tara Kizer: Yeah.

Richie Rump: An ERD is an all-encompassing tool. It's essentially like a CASE tool (did I just say CASE? Yeah). You can generate databases from it. You can reverse engineer it. I think Visio can still do it for free in a pinch, which I've done, but my favorites are Embarcadero ER/Studio and Erwin, still. They both have their pluses and minuses. Both of them have the same minus, which is a very large price tag. I have actually purchased my own copy of ER/Studio because I do data design a lot, and I'm kind of crazy that way.

Brent Ozar: I want to throw something else weird out there. If the vendor went belly up, the diagramming or ERD tool is going to be the least of your worries. Holy cow, buckle up. You’re going to be developing the bejeezus out of that database. If you’re taking it over from here on out, go buy the tool because this is your new career. This is what you’re going to be working on.

Tara Kizer: Hopefully, they have the source code [inaudible 00:03:29]

Richie Rump: Wow.

Brent Ozar: Oh, that would suck.

Jessica Connors: Yeah, what happens when vendors just die like that? It’s just like, oh, sorry customer. You can have the product, but we’re not iterating on it. We’re done.

Brent Ozar: That was my life in 2000 to 2001. We had a help desk product where the vendor was like, yeah, no, we’re not doing this anymore. I had to build a new web front end to their existing database and gradually like change things over. That was two years of my life that I would really like back. The hilarious part is that the company that I worked for then still uses that crappy web help desk that I built because they lost the source code after I left.

 

How Should I Troubleshoot a Big, Slow Stored Procedure?

Jessica Connors: Oh boy. All right, James says, “I have an SP that has 53 queries in it. What is the best way to approach troubleshooting this slow query?”

Erik Darling: After the execution plan-

Tara Kizer: Execution plan, SET STATISTICS IO.

Erik Darling: Then use it in SQL Sentry Plan Explorer; it makes it really easy to see which statement in the stored procedure takes up the biggest percentage of the work.

Brent Ozar: The other thing you can do if you want to play with it is take it over to development, like restore a copy of the production database. Blow your plan cache: run DBCC FREEPROCCACHE. I emphasize, you're going to do this in development. You're not going to do it in production. Then, immediately after you free the plan cache, run the query and use sp_BlitzCache. sp_BlitzCache will show you which lines in the stored procedure do the most reads, CPU, runtime, whatever.
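
For reference, here's a rough sketch of the workflow Brent describes. The procedure name is hypothetical, and the cache clear belongs in development only:

-- Development only: clear the plan cache so old plans don't muddy the results.
DBCC FREEPROCCACHE;

-- Run the slow procedure so its plans land in the cache. (Hypothetical name.)
EXEC [dbo].[usp_BigSlowProc];

-- Ask sp_BlitzCache which statements did the most work.
EXEC [dbo].[sp_BlitzCache] @SortOrder = 'reads';
EXEC [dbo].[sp_BlitzCache] @SortOrder = 'cpu';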

 

Should I Update Stats When I Change Compatibility Levels?

Jessica Connors: Question from Tom Towns; I haven’t gotten this one. It says, “Is it necessary or advisable to rebuild indexes/stats when downgrading compatibility level on a database but staying on the same version of SQL Server?” He’s running SQL Server 2014.

Erik Darling: No, not necessarily, but you should be doing regular maintenance on your databases anyway.

Brent Ozar: That’s actually a good question, because it comes from history: when you went up a version with SQL Server, there was a lot of advice around that you should update your stats. For a while there, we were trying to find the root of that, like where that advice came from.

Erik Darling: My best guess on that is that there are slight tweaks and twinges to the cardinality estimator between versions, and those don’t necessarily kick in just by restoring the database. Updating the stats just kind of helps push things toward whatever new ideas the cardinality estimator has for your data.

 

Will Brent Ozar Unlimited Have a Certification Program?

Jessica Connors: Let’s see here, you guys are quiet today. In terms of certifications, would we ever think about giving our own Brent Ozar stamp of approval? I know we give out those certificates of completion with the funny little cartoons on there, but Brent, have we ever actually thought about giving our own certification courses, classes, all of those things?

Brent Ozar: It would really just consist of: can you write us a check for 200 dollars? Yes? You are a member of a very prestigious club. I bet if we all got together and focused, we could probably put together a test that would do it. The problem is, it’s really expensive. It takes a lot of time to do it because you’ve got to do it the right way. I was part of a team that helped build the Microsoft Certified Master, the next version that never ended up going public. I learned so much from Microsoft around that process, where the questions have to be legally defensible. They can’t be biased towards English-speaking people. They can’t have cultural issues; you can’t assume certain business knowledge or certain cultural knowledge. It needs to be a fair playing field for anybody who touches SQL Server. That’s really hard. I have a lot of respect for people who write certification exams, but I agree with you. The Microsoft one sucked pretty bad.

Richie and I talked about that in the Away from the Keyboard podcast.

Richie Rump: Yeah, on Away from the Keyboard. I think the name of the episode was Brent Ozar Loses His MVP.

Brent Ozar: Yeah. I trash talked the bejeezus out of that, but yeah.

Richie Rump: And yet, we got renewed. He’s trying, people. I don’t understand.

Brent Ozar: We don’t even have tests in our classes. What we end up doing is we have group discussions. I’ll give you a homework assignment, and everybody works together at the same time; some of the assignments you do by yourself, some of them you do in groups. Even then, just the discussions of people saying afterwards, “I think this answer is right,” “no, I think this answer is right,” can take hours, so it’s pretty tricky.

 

Should I Install Multiple Instances On One Server?

Jessica Connors: Let’s see here, Steve. Move on to Steve. He says, “We are being given a virtual server to install SQL 2014. Which would be better: install one instance of SQL Server with all of our databases, or several instances of SQL Server with fewer databases on each instance?”

Brent Ozar: We’re all trying to be polite, I think.

Tara Kizer: Is this all for one box? If it’s for one box, we don’t recommend stacking instances. One instance per box.

Brent Ozar: Why don’t we recommend stacking instances?

Tara Kizer: I mean, you have to determine the max memory setting for them, and you might configure it in a way that’s not optimal for one instance; another instance might need more memory. You might be playing that game and just fighting for resources. How many databases are we talking about? Are we talking about just 40 or 50? Are they all critical? Are they all non-critical? Can you have another virtual server where you can put some databases on that one and other databases on the other?

Erik Darling: A lot of what I would use to dictate where I’m going to put things is RPO and RTO. If you have a group of databases that all have 24/7 uptime and higher-level requirements, versus, say, bug tracking or help desk or something along those lines that you can put off, then the stuff that needs high availability, better backups, and more horsepower behind it I would put on a server or servers where I’m going to be able to pay special attention to them. I would put some of the less needy stuff on other servers that I can stick in a corner and forget about, without worrying about the performance of one stepping on the toes of another.

 

How Do I Get Rid of Key Lookups?

Jessica Connors: All right, question from [inaudible 00:10:51]. The question reads, “My key lookup is still showing after the column was added to a covering index. Anything I could do to avoid the key lookup?”

Brent Ozar: We’ve seen this with a few folks. You want to look a little deeper at the execution plan. Sometimes there’s something called a residual predicate, where there are other fields being looked up, not just the key. When you hover your mouse over the key lookup, look for two things: the predicate and the output. Maybe SQL Server is seeking on some other fields, or it’s outputting other fields. If there is no predicate and no output, use SQL Sentry Plan Explorer to anonymize that plan and post it online. There are known issues (I’ve seen them, where I can’t get rid of the lookup), and sometimes there are gurus on there like Paul White who can take a look at the plan and go, oh, of course, it’s the bug right here.
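
To illustrate what Brent means, here's a hedged sketch; the table and column names are invented, not from the question:

-- If the lookup's Output List or Predicate names a column the index doesn't carry,
-- add that column to the INCLUDE list so the index covers the whole query.
CREATE NONCLUSTERED INDEX [ix_Users_LastAccessDate]
    ON [dbo].[Users] ([LastAccessDate])
    INCLUDE ([DisplayName], [Age]);

SELECT [DisplayName], [Age]
FROM [dbo].[Users]
WHERE [LastAccessDate] >= '2016-01-01';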

Erik Darling: That happened to me sort of recently. I was helping a client get a computed column together for a table, and for some reason, after we added the computed column and started referencing it, the execution plan still showed a key lookup for all of the columns that made the computed column work. We had five or six columns that added up to make the computed column. We added the computed column to the included columns of the index, but it still wanted all of the columns that made up the computed column for some reason. That was a very weird day.

Tara Kizer: What did you do to fix it?

Erik Darling: Added the columns it was asking for in the output columns, and the key lookup went away. It was very weird, though. It was something that I tried to reproduce on 2012 and 2014, and I couldn’t get it to happen.

Richie Rump: He stared at it, and-

Erik Darling: Intimidated it.

Richie Rump: Changed.

Brent Ozar: Flexed.

 

How Do I Learn About Database Corruption?

Jessica Connors: Database corruption, we haven’t talked about that.

Richie Rump: Ew.

Jessica Connors: That sounds fun.

Brent Ozar: It is fun. It is actually fun.

Jessica Connors: Is it? It reminds me- [crosstalk 00:13:05] Now I’m thinking about that song by the Beastie Boys.

Brent Ozar: Sabotage?

Jessica Connors: Yeah, that’s the one. Let’s see, it’s from Garrett. He says, “What’s the best way to learn about how to resolve database corruption with zero data loss?”

Brent Ozar: Steve Stedman’s database corruption challenge. If you search for Steve Stedman (it’s just S-T-E-D-M-A-N) database corruption challenge, he had like 10 weeks of challenges with different sample databases that were corrupt. You would go download them, and you would figure out how to fix them. The quizzes are kind of two-part: you download the sample database, and then you go try to figure out how to recover as much data as you can, with zero data loss. Then, you go read how other people did it. They share their answers and show how fast they were able to get the data back.

Erik Darling: Get real comfortable with some weird DBCC commands.

Brent Ozar: Forcing query hints to get data from some indexes and not others, oh it is awesome. Even if you don’t do it, just reading how hard it is is awesome.

Erik Darling: You know what always gets me is when you have to find binary, but it’s byte-reversed, so there’s like stuff with DBCC WRITEPAGE, and, oh, forget it.

Brent Ozar: Screw that. No way.

Erik Darling: Call someone else.

Brent Ozar: It’s really-

Jessica Connors: All of you, you’ve all dealt with this before? Database-

Brent Ozar: I’ve never dealt with it in real life. I don’t like-

Tara Kizer: I’ve done it two or three times. Do I like it? No. It was probably about five years ago. I was the on-call DBA. Two DBAs got woken up, and this was on a weekend. By seven in the morning, they hadn’t slept yet. They couldn’t even think clearly at that point, so that’s when they called me to take over for the next several hours. We ended up having to restore the database, and we had data loss, and it was all due to the SAN. We had some SAN hardware issues. It was bad. It was really bad.

Brent Ozar: It’s one of those where you can go your whole career and never see it, but if you have storage that isn’t bulletproof reliable, or if you have a team that screws around and cuts corners, then you spend years of your life dealing with that.

 

Should I Feel Bad for Not Using the Current Compat Mode?

Jessica Connors: Question from Mandy. She says, “Hi. We upgraded to 2014 Standard Edition on new hardware a couple of months ago, but left our databases in 2012 compatibility mode. A few weeks ago, I upped the compatibility mode to 2014 and afterwards had such major contention and blocking problems, we had to change it back to 2012 after about 30 minutes. Is this common? What can we look for in our database to resolve this?”

Brent Ozar: It would be more common if more people were upgrading.

Erik Darling: Everyone’s still on 2008.

Tara Kizer: That’s the cardinality estimator probably doing that.

Erik Darling: Yeah, it sounds like that kicking in.

Tara Kizer: Is there a way to turn it off and still have your compatibility level be 2014?

Brent Ozar: This is tricky. There are trace flags, and you can turn it off at the query level, but Mandy, you were bang on: waiting a couple of weeks was perfect. That’s exactly what you want to do. Then, you want to flip it on a weekend, like on a Saturday morning. Let it go for just 10 or 15 minutes, but as soon as you can, run sp_BlitzCache and gather the most CPU-intensive query plans, the most read-intensive query plans, and the longest-running query plans. You’ve got to save those plans out to files, and then switch back to 2012 compatibility mode so that you’re back on the old CE. Then, you take the next week or two troubleshooting those query plans to figure out: what is it about these queries that sucks? Is it the new cardinality estimator? So often, people just test their worst queries with the new 2014 CE, not understanding that you need to test all of the queries, because the ones that used to be awesome can suddenly start sucking with the new CE.
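
A minimal sketch of that weekend workflow, assuming a database named YourDatabase and the sp_BlitzCache sort orders Brent mentions:

-- Flip to the new cardinality estimator briefly:
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 120;

-- Capture the worst plans while the new CE is active; save each result to a file.
EXEC [dbo].[sp_BlitzCache] @SortOrder = 'cpu';
EXEC [dbo].[sp_BlitzCache] @SortOrder = 'reads';
EXEC [dbo].[sp_BlitzCache] @SortOrder = 'duration';

-- Then fall back to the old CE while you study what you saved:
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 110;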

Erik Darling: Or, in a couple of months, just upgrade to the new 2016, use Query Store, and bang that right off. Done.

Brent Ozar: What’s Query Store?

Erik Darling: It’s this neat thing. It’s like a black box for SQL Server. What it does is it basically logs all of your queries. It’s a good fit for cases like yours, where you had good query plans that were working, and then all of a sudden something changed and you had bad query plans. It gives you a way to look at and find regressions in query plans, and then force the query plan that was working before, so that it uses that one rather than messing around with the new one. Otherwise, you’re kind of stuck, like you are on 2014, where you could switch to the new cardinality estimator, but then you would have to use some trace flags on the specific queries to force the old cardinality estimator, which is not fun. I don’t recall those trace flags off the top of my head. I never do.
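
For the curious, here's roughly what that looks like on 2016. The query_id and plan_id values are placeholders you'd first read out of the Query Store catalog views:

-- Turn on the black box for a database.
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;

-- Browse captured queries and their plans to spot a regression.
SELECT [qsq].[query_id], [qsp].[plan_id], [qsp].[query_plan]
FROM sys.query_store_query AS [qsq]
JOIN sys.query_store_plan AS [qsp] ON [qsp].[query_id] = [qsq].[query_id];

-- Pin the plan that was working before (placeholder IDs).
EXEC sys.sp_query_store_force_plan @query_id = 1, @plan_id = 1;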

Richie Rump: So it’s like a flight recorder for queries. Is that about right?

Brent Ozar: Listening in on the cockpit catches people doing things they shouldn’t be doing.

Erik Darling: I was having a drinking contest.

 

Does the New Cardinality Estimator Get Used Automatically in 2014?

Jessica Connors: Let’s see, question while we’re on the topic of the cardinality estimator. Question from Nate, we may have answered this. “Speaking of the new CE for 2014, does it automatically get used for everything or only for the DBs in the 2014 compatibility level, or is it off by default, and it’s up to the DBA to turn it on at their discretion?”

Erik Darling: Or indiscretion.

Brent Ozar: Yeah, if you’re upgrading a SQL Server, or if you’re attaching databases to a 2014 SQL Server, the new cardinality estimator doesn’t get used by default. It does get used if you create new databases and they’re in the 2014 compat mode. It’s totally okay to leave your databases in the old compat mode. You can leave them there as long as you want. There are no gotchas with that. Totally okay, totally supported.
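
As an aside, the per-query trace flags Erik couldn't recall earlier are documented: on 2014, trace flag 9481 forces the legacy CE for a single query, and 2312 forces the new one. A sketch, with a hypothetical table name:

-- Database in 2014 compat mode, but this one query needs the old CE:
SELECT COUNT(*) FROM [dbo].[YourTable] OPTION (QUERYTRACEON 9481);

-- Database in an older compat mode, but you want to test the new CE on one query:
SELECT COUNT(*) FROM [dbo].[YourTable] OPTION (QUERYTRACEON 2312);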

Jessica Connors: What if it’s in a 2005 compatibility mode?

Brent Ozar: What’s the oldest one that’s supported? I think it’s 2008.

Tara Kizer: I just looked yesterday, because a client had a question, and 2014 does have the 2005 compatibility level in the list. I was surprised. I was very surprised.

Brent Ozar: So generous.

 

Should I Use “Optimize for Ad Hoc Workloads”?

Jessica Connors: Question from Robert. He says, “The optimize for ad hoc workloads instance setting: good to turn on by default, or only when ad hoc queries are frequent?”

Brent Ozar: This is tricky. You will read a lot of advice out there saying you should turn it on by default, because it doesn’t have a drawback. I’m kind of like, if I see it on, it doesn’t bother me. It just doesn’t usually help people unless they truly have a BI environment where every query comes out as a unique, delicate snowflake. Have any of you guys ever seen problems with optimize for ad hoc?

Tara Kizer: Problems? No.

Richie Rump: No.

Erik Darling: I’ve never seen problems. You know, every once in a while, you’ll see someone with like 80 or 90% of their plan cache being single-use queries. At that point, that’s not really an efficient use of the memory or of what you’re storing in the plan cache.
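
A quick sketch of both halves of that: enabling the setting, and eyeballing how much of the plan cache is single-use plans:

-- Enable the setting (it's an advanced option).
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Rough measure of single-use plans versus reused plans in the cache.
SELECT CASE WHEN [usecounts] = 1 THEN 'single use' ELSE 'reused' END AS [plan_type],
       COUNT(*) AS [plans],
       SUM(CAST([size_in_bytes] AS BIGINT)) / 1024 / 1024 AS [cache_mb]
FROM sys.dm_exec_cached_plans
GROUP BY CASE WHEN [usecounts] = 1 THEN 'single use' ELSE 'reused' END;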

Jessica Connors: Let’s see, question from Nate. We are still on compatibility mode questions. Last one: “Is there anything wrong with leaving all DBs in lower compat levels when you do a SQL Server migration to 2014? Like, leave all the DBs at the 2005 or 2008 R2 compatibility levels and then start upping them later when we have breathing room?”

Erik Darling: No.

Tara Kizer: You won’t get the new features.

Brent Ozar: Totally okay. Totally okay.

Tara Kizer: Eventually, you’re going to have to up them, because you’re going to be upgrading and you can’t go down to that level. I imagine that 2016 doesn’t go down to 2005.

Erik Darling: Just make sure, before you up the compatibility level, that you check Microsoft’s lists of broken features, deprecated stuff, and new reserved keywords. If your old databases are using keywords that have now become reserved, stuff could break pretty easily.

 

When Will We Add More Training Videos?

Jessica Connors: James is getting sick of our same old training videos on our website. He is wondering when we will be adding more videos to the training library. If so, what? When? How?

Erik Darling: As soon as you pay us to make them.

Brent Ozar: Doug is doing a new one on advanced querying and indexing. It has a game show theme, like a ’70s game show. We are aiming to have that one online in June. To give you a rough idea, it takes about three months’ worth of work to do a six-hour class at the level of Doug’s videos. If you’ve watched T-SQL Level Up, it’s phenomenal production value. It takes three months’ worth of work to do a six-hour video.

Brent Ozar: For my level of production values, where it’s a guy talking in front of a green screen, it’s usually about a month. I’m working on Performance Tuning When You Can’t Fix the Queries. That one will be out probably in June or July, somewhere in there. The trick with that is, the Everything Bundle price will go up at that time. If you’re interested in getting an Everything Bundle, I would do that this month. It will include any videos that we add during the course of your 18-month ownership. This month, the Everything Bundle is on sale, even further than half off. Right now, it’s just $449 for 18 months of access to all of our videos. That is a one-month-only sale to celebrate our 5th anniversary. After that, it’s right back up to $899.

Jessica Connors: Then, what’s it going to go up to?

Brent Ozar: It depends. I think we have two more $299 videos. I wouldn’t be surprised if we pushed it to $999.

Jessica Connors: Ah.

Richie Rump: $999? That’s still a deal.

Brent Ozar: Richie says in his salesperson voice.

 

Why Does sp_BlitzIndex v3 Have Less Output?

Jessica Connors: All right, question from Justin. Hello, Justin. He says, “sp_BlitzIndex isn’t recognizing new indexes on a table where I deleted all of the indexes and ran a replay trace against it, but SSMS is recommending some. So is a third-party tool. Any idea what would cause this?”

Brent Ozar: I bet you’re running the brand new version of sp_BlitzIndex, version three, which just came out, where we ignore crappy little tables that don’t really make that much of a difference in terms of performance. If you want the crappy-little-tables version, you can run the older version of sp_BlitzIndex, which is included in the download pack, or use the new one with @Mode = 4, which does the fine-grained analysis that we used to do. Just know that you’re probably not going to get that much of a performance improvement, not if they’re tiny little tables, tiny little indexes.
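
A minimal example of that fallback, with a placeholder database name:

-- @Mode = 4 restores the old fine-grained, every-little-table analysis.
EXEC [dbo].[sp_BlitzIndex] @DatabaseName = 'YourDatabase', @Mode = 4;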

 

Should I Use Somebody’s HA/DR Software on My SQL Server?

Jessica Connors: Yeah, he said that he was using the new one. Brent, or anyone, has anyone here heard of DH2i’s enterprise container HA solution, and what is your opinion? Have you heard of that? It’s from-

Brent Ozar: I have heard of it. I don’t know anybody using it. I don’t know if you guys have either. I have this whole thing where, if I’m going to install software that’s supposed to make my stuff more highly available, my whole team better know how to use it, and it better be really well tested. I need to be able to talk to other people and get training on how it works. I would just talk to other people who use it and ask, hey, can you give me a rundown of what issues you’ve had over the last 12 months? Make sure they’re a similarly sized customer to you; if you’ve got 100 servers, the person you talk to should have 100 servers.

Erik Darling: What I find helpful is that a lot of vendors like this will have help forums. What I like to do is just read through the help forums and see what kinds of problems people are facing, the common questions, and see if there are any answers there that help me figure out whether this is right for me.

Jessica Connors: On any of the help forums, have any of you guys seen people giving feedback like, do not buy this product, this is terrible?

Erik Darling: You know, you see some questions that are like, X product stopped working, and this broke, and this stopped working, and we had a major outage. What do I do?

Brent Ozar: And you look at the time lag between answers, like how long it takes to get it fixed.

Erik Darling: And then like, no actual company support people are chiming in. It’s all other users that had the same problem.

Tara Kizer: I just don’t know if I would want to use a product that isn’t widely used in industry. Do you want to be the first customer using this product or the first five? I want to use the products everyone else is using.

Jessica Connors: We are doing some upgrades to our CRM right now, and there are little things I want to change to make it work, and the engineer is sending me, basically, these forums of people saying, I really want this feature, it’s never going to be turned on, this piece doesn’t work, and it’s from five years ago, four years ago.

Brent Ozar: Utter silence. There’s also a follow-up question from someone that says, oh, so-and-so did an online session on that. Just know that often, consultants who are paid by vendors will do sessions going, hey, this tool is amazing, I would be glad to help you install it. They are selling the software and they are selling their consulting. Look at it just the same as you would an infomercial. I would ask to talk to the customers using it, not to the people you are going to pay money to.

Jessica Connors: Have we done that? Has anyone ever come up to us with another product like, hey, can you do a webcast for their product?

Brent Ozar: Oh, all the time. We do webcasts too, for like Dell and Idera and all this. I’m going to talk about the native way of doing it, like the pain points of how you do it without software. If you want to talk about how your software does it, that’s totally cool. I can’t talk about your software, because I don’t use it; I just hop from one client to another. Every now and then, people will say, “Here’s a bucket of money. How much of this bucket would you like in order to say our product is awesome?” I’m like, no. I only have one reputation. What happens if the product sucks butt wind? I will review it privately, if you want, and spend three or four days hammering on your software to see how it works. We would be glad to do that and then give you a private review. I’m not going to go public and say that it’s awesome when it smells like…

Richie Rump: Erik and I will do it. That’s not a problem.

Brent Ozar: Yes.

Erik Darling: My reputation stinks anyway.

 

What’s the Ideal HA/DR Solution for 5-10 SQL Servers?

Jessica Connors: Let’s see, just for fun, a question for if you run out of things to answer. This is really open-ended. “What is your ideal HA and DR solution for a SQL Server environment with five to ten instances?” That depends.

Brent Ozar: No, we should each answer that. Tara, what’s your ideal for five to ten instances? Say you’ve got one DBA.

Tara Kizer: I don’t know how to answer that. I’ve never worked in an environment where I was the only DBA. I have always worked in an environment where there were probably three to eight DBAs. Availability groups is my answer for almost everything. You say HA, I say availability groups. I know you guys don’t like that, but that’s what we implemented. We had large DBA teams. We had larger server teams that understood Windows, clustering, all that stuff. It works well if you have the people that know clustering, you know, all the features.

Brent Ozar: Tara doesn’t get out of bed for less than 10,000 dollars. Erik, how about you?

Erik Darling: For me, if you just have one to two DBAs, but you may have some other support staff, I would say failover clustering and then something like either async mirroring or log shipping. That’s usually pretty decent for those teams. It’s pretty manageable for most people who don’t have the DBAs with the football jerseys on and the schedules and the tackle charts on almost everything.

Brent Ozar: Yeah, how about you, Richard?

Richie Rump: From a developer’s perspective, because I joined this team, and all of a sudden, I have a developer’s perspective. I love that. It’s Azure SQL Database, right?

Brent Ozar: Oh, you’re cheating.

Erik Darling: Wow.

Richie Rump: It’s all kind of baked in there, and I don’t have to think about it. A lot of it’s done for me. As a developer, I’m going to take the laziest, simplest way out. That would be it.

Brent Ozar: Man, you win points on that. That’s genius. That is pretty freaking smart. I would probably assume that they’re not mission critical, that it’s stuff I could stand some downtime on. I actually would probably go with just plain old VMware. I would just make them single instances in virtualization, and then do something like log shipping for disaster recovery, or VMware replication. Now, this is not high availability. It’s just availability. It’s just the A. If you cut me down to just five SQL Server instances and no DBA, or one DBA who’s kind of winging it and spending most of his time on a free webcast, then I kind of like that. If not, I’m still kind of a fan of async mirroring too. Async mirroring is not bad.

Erik Darling: Much less of an albatross than its sync brother.

Brent Ozar: Yeah, that thing blows.

Jessica Connors: Cool.

Erik Darling: I’ll say one thing though, not replication.

Tara Kizer: Yeah.

Jessica Connors: So many people using replication out there.

Tara Kizer: Replication is mostly okay, but not as an HA or DR feature.

Jessica Connors: Ah, they’re using it for the wrong thing.

Tara Kizer: It’s really a reporting feature.

Brent Ozar: It’s funny to see, too, how all of us end up having these different answers, and this ends up being what our chat room is like. In the company chat room, we all have access to everybody else’s files. We know what everybody is working on. Somebody can be like, hey, I’m working with this client, here’s what they look like, here’s what their strengths and challenges are. It’s really fun to bounce ideas off people and see what everybody comes up with.

Jessica Connors: Mm-hmm – well, all right, guys. It’s that time.

Erik Darling: Uh-oh.

Brent Ozar: Calling it an episode. Thanks everybody for hanging out with us, and we will see you next week.

Stats Week: Only Updating Statistics With Ola Hallengren’s Scripts

I hate rebuilding indexes

There. I said it. It’s not fun. I don’t care all that much for reorgs, either. They’re less intrusive, but man, that LOB compaction stuff can really be time consuming. What I do like is updating statistics. Doing that can be the kick in the bad plan pants that you need to get things running smoothly again.

I also really like Ola Hallengren’s free scripts for all your DBA broom and dustpan needs. Backups, DBCC CHECKDB, and Index and Statistics maintenance. Recently I was trying to only update statistics, and I found it a little trickier than I first imagined. So tricky, in fact, that I emailed Ola, and got a response that I printed and framed. Yes, the frame is made out of hearts. So what?

What was tricky about it?

Well, the IndexOptimize stored procedure has default values built in for index maintenance. This isn’t a bad thing, and I could have altered the stored procedure, but that would be mean. I set about trying to figure out how to get it to work on my own.

First, I tried only passing in statistics parameters.

EXEC [master].[dbo].[IndexOptimize]
    @Databases = N'USER_DATABASES' ,
    @UpdateStatistics = N'ALL' ,
    @OnlyModifiedStatistics = N'Y' ,
    @LogToTable = N'Y';

But because of the default values, it would also perform index maintenance. Sad face. So I tried being clever. Being clever gets you nowhere. What are the odds any index would be 100% fragmented? I mean, not even GUIDs… Okay, maybe GUIDs.

EXEC [master].[dbo].[IndexOptimize]
    @Databases = N'USER_DATABASES' ,
    @FragmentationMedium = N'INDEX_REORGANIZE' ,
    @FragmentationHigh = N'INDEX_REORGANIZE' ,
    @FragmentationLevel1 = 100 ,
    @FragmentationLevel2 = 100 ,
    @UpdateStatistics = N'ALL' ,
    @OnlyModifiedStatistics = N'Y' ,
    @LogToTable = N'Y';

But this throws an error. Why? Well, two reasons. First, 100 isn’t a valid value here, and second, you can’t have the same fragmentation level twice. It would screw up how commands get processed; the routine wouldn’t know whether to use @FragmentationMedium or @FragmentationHigh. This makes sense.

Okay, so I can’t use 100, and I can’t set them both to 99. What to do? Let’s bring another parameter in: @PageCountLevel.

EXEC [master].[dbo].[IndexOptimize]
    @Databases = N'USER_DATABASES' ,
    @FragmentationMedium = N'INDEX_REORGANIZE' ,
    @FragmentationHigh = N'INDEX_REORGANIZE' ,
    @FragmentationLevel1 = 98 ,
    @FragmentationLevel2 = 99 ,
    @UpdateStatistics = N'ALL' ,
    @OnlyModifiedStatistics = N'Y' ,
    @PageCountLevel = 2147483647,
    @LogToTable = N'Y'; 

This seems safe, but it’s still not 100%. Even with the integer maximum passed in for the page count, it still felt hacky. Hackish. Higgity hack. The other part of the equation is that I don’t even want this thing THINKING about indexes. It will still look for indexes that meet these requirements. If your tables are big, you know, sys.dm_db_index_physical_stats can take foreeeeeeeeeeeeeeeeeeeeeeeeeeeever to run. That seems wasteful, if I’m not going to actually do anything with the information.

Hola, Ola

This is where I emailed Ola for advice. He responded pretty quickly, and here’s how you run stats-only updates.

EXECUTE [dbo].[IndexOptimize]
    @Databases = 'USER_DATABASES' ,
    @FragmentationLow = NULL ,
    @FragmentationMedium = NULL ,
    @FragmentationHigh = NULL ,
    @UpdateStatistics = 'ALL' ,
    @OnlyModifiedStatistics = N'Y' ,
    @LogToTable = N'Y';

Moral of the story

NULLs aren’t all bad! Sometimes they can be helpful. Other times, developers.

Thanks for reading!

Brent says: Subtitle: How many DBAs does it take to think of NULL as a usable option? Seriously, we all banged our heads against this one in the company chat room.

Breaking News: Query Store in All Editions of SQL Server 2016

Bob Ward talking Query Store at SQL Intersection

Onstage at SQL Intersections in Orlando this morning, Bob Ward announced that Query Store will be available in all editions of SQL Server 2016.

This is awesome, because Query Store is a fantastic flight data recorder for your query execution plans. It’ll help you troubleshoot parameter sniffing issues, connection settings issues, plan regressions, bad stats, and much more.

I’m such a believer in Query Store that sp_Blitz® even warns you if Query Store is available, but isn’t turned on.

Wanna learn what it is and how to use it? Books Online’s section on Query Store is a good place to start learning, and check out Bob’s slide deck and resource scripts.

And oh yeah – Argenis Fernandez and I had a little bet. He bet that Query Store would be fully functional in Standard Edition, and I bet that it wouldn’t. I’ve never been happier to lose a bet, and I made a $500 donation to Doctors Without Borders this morning. Woohoo!

Update 4/21 – note the comment below from Bob Ward, who clarifies that this wasn’t quite ready for release yet, and feature decisions may not have been made yet.

Stats Week: Messin’ With Statistics

If there’s one thing living in Texas has taught me

It’s that people are very paranoid that you may Mess With it. Even in Austin, where the citizenry demand weirdness, they are vehemently opposed to any form of Messing, unless it results in mayonnaise-based dipping sauce.

Me? I like Messing With stuff. Today we’re going to look at one way you can make SQL think your tables are much bigger than they actually are, without wasting a bunch of disk space that has nearly the same price as wall to wall carpeting.

To do this, we’re going to venture deep into the Undocumented Command Zone. It’s like the Twilight Zone, except if you go there on your production server, you’ll probably end up getting fired. So, dev servers only here, people.

Creatine

Let’s make a table, stuff a little bit of data in it, and make some indexes.

DROP TABLE [dbo].[Stats_Test]

;WITH E1(N) AS (
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL UNION ALL SELECT NULL UNION ALL SELECT NULL UNION ALL 
    SELECT NULL  ),                          
E2(N) AS (SELECT NULL FROM E1 a, E1 b, E1 c, E1 d, E1 e, E1 f, E1 g, E1 h, E1 i, E1 j),
Numbers AS (SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N FROM E2)
SELECT  ISNULL([N].[N], 0) AS [ID] ,
        ISNULL(CONVERT(DATE, DATEADD(HOUR, -[N].[N], GETDATE())), '1900-01-01') AS [OrderDate] ,
        ABS(CONVERT(NUMERIC(18,2), (CHECKSUM(NEWID()) % 10000.00))) AS [Amt1]
INTO [Stats_Test]
FROM    [Numbers] [N]
ORDER BY [N];

CREATE UNIQUE CLUSTERED INDEX [cx_id] ON [dbo].[Stats_Test] ([ID])
CREATE UNIQUE NONCLUSTERED INDEX [ix_test1] ON [dbo].[Stats_Test] ([OrderDate], [ID])
CREATE UNIQUE NONCLUSTERED INDEX [ix_test2] ON [dbo].[Stats_Test] ([Amt1], [ID])

There’s our thousand rows. If you’re dev testing against 1000 rows, your production data better only have 1001 rows in it, or you’re really gonna be screwed when your code hits real data. How do we cheat and make our data bigger without sacrificing disk space?

Eat Clen, Tren Hard

You can update all statistics on the table at once, or target specific indexes with the following commands.

UPDATE STATISTICS [dbo].[Stats_Test] WITH ROWCOUNT = 10000000000, PAGECOUNT = 1000000000

UPDATE STATISTICS [dbo].[Stats_Test] ([cx_id]) WITH ROWCOUNT = 10000000000, PAGECOUNT = 1000000000
UPDATE STATISTICS [dbo].[Stats_Test] ([ix_test1]) WITH ROWCOUNT = 10000000000, PAGECOUNT = 1000000000
UPDATE STATISTICS [dbo].[Stats_Test] ([ix_test2]) WITH ROWCOUNT = 10000000000, PAGECOUNT = 1000000000

This will set your table row count to, uh… 10 billion, and your page count to 1 billion. That ratio makes sense, since usually a bunch of rows fit on a page. You can be more scientific about it than I was; this is just to give you an idea.

So let’s check in on our statistics! Sup with those?

DBCC SHOW_STATISTICS('dbo.Stats_Test', cx_id)

DBCC SHOW_STATISTICS('dbo.Stats_Test', ix_test1)

DBCC SHOW_STATISTICS('dbo.Stats_Test', ix_test2)

Hint: these commands will not show inflated page or row counts in them. They actually won’t show page counts at all. Hah. That’s kinda silly, though. Hm.

Anyway, what we should grab from the statistics histograms are some middling values we can play with. For me, that’s an ID of 500, a date of 2016-03-18, and an amount of 4733.00.

One thing I’ve found is that the inflated counts don’t seem to change anything for Identities, or Primary Keys. You’ll always get very reasonable plans and estimates regardless of how high you set row and page counts for those. Regular old clustered indexes are fair game.

Some really interesting things can start to happen to execution plans when SQL thinks there’s this many rows in a table. The first is that SQL will use a rare (in my experience) plan choice: Index Intersection. You can think of this like a Key Lookup but with two nonclustered indexes rather than from one nonclustered index to the clustered index.

SELECT *
FROM [dbo].[Stats_Test] AS [st]
WHERE [st].[ID] = 500

SELECT *
FROM [dbo].[Stats_Test] AS [st]
WHERE [st].[OrderDate] = '2016-03-18'

SELECT *
FROM [dbo].[Stats_Test] AS [st]
WHERE [st].[Amt1] = 4733.00

For these equality queries, we get the following plans:

Bizarre Love Parallel

SQL isn’t fooled by an equality on 500. We get a little plan. We’ll examine inequality plans in a moment. For now, let’s look at the middle plan. That’s where the Index Intersection is occurring. The bottom plan has a regular Key Lookup.

Blood Everywhere.

The costs and estimates here are Banana Town crazy. And right there down the bottom, we can see SQL using the Clustered Index key to join our Nonclustered Indexes together. If you’ve been reading this blog regularly, you should know that Clustered Index key columns are carried over to all your Nonclustered Indexes.

If we switch to inequality queries, well…

SELECT *
FROM [dbo].[Stats_Test] AS [st]
WHERE [st].[ID] > 500

SELECT *
FROM [dbo].[Stats_Test] AS [st]
WHERE [st].[OrderDate] > '2016-03-18'

SELECT *
FROM [dbo].[Stats_Test] AS [st]
WHERE [st].[Amt1] > 4733.00

All Hell Breaks Loose

The top query that SQL wasn’t fooled by before now has the same insane estimates as the others. Our two bottom queries get missing index requests due to the amount of work the Index Intersection takes.

It’s happening because of the SELECT * query pattern. This will go away if we stick to only selecting columns that are in our Nonclustered Indexes. For example, SELECT ID will result in some pretty sane index seeks occurring, as in the sketch below. The estimated rows are still way up there.
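
Here's what a covered version of the middle query looks like, selecting only columns carried by ix_test1:

-- Covered by [ix_test1] ([OrderDate], [ID]): no lookup, no index intersection.
SELECT [st].[ID]
FROM [dbo].[Stats_Test] AS [st]
WHERE [st].[OrderDate] > '2016-03-18';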

Unfortunately, STATISTICS TIME and IO are not fooled by our statistical tomfoolery.

Womp womp womp

They use reality-based measurements of our query activity. This trick is really only useful for seeing what happens to execution plans. But hey, it’s a lot cheaper, easier, and faster than actually inserting 10 billion rows.

So what?

Like a grandma in a grocery store, SQL Server makes all its decisions based on cost. Whatever is cheapest is the choice. If SQL Server were a person, it would probably wash and dry used aluminum foil, save old bread ties, and use clothespins for the right thing.

I forget what I was going to say. Probably something smart about testing your queries against sets of data commensurate with what you have in production (or larger) so that you don’t get caught flat-footed by perf issues on code releases, or when your company finally starts getting customers. This is one technique to see how SQL will treat your code as you get more rows and pages involved.

Just don’t forget to set things back when you’re done. A regular stats update will take care of that.
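
For the demo table above, that cleanup is just:

-- Undo the inflated counts; a regular statistics update puts reality back.
UPDATE STATISTICS [dbo].[Stats_Test] WITH FULLSCAN;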

Thanks for reading!

Stats Week: Do Query Predicates Affect Histogram Step Creation?

Auto Create Statistics is your friend

It’s not perfect, but 99% of the time I’d rather have imperfect statistics than no statistics. This question struck me as interesting, because the optimizer will totally sniff parameters to compile an initial plan. If you don’t have index statistics, or system statistics already on a column in a WHERE clause, SQL is generally kind enough to create a statistics object for you when the query is compiled.

So I thought to myself: Would SQL create an initial histogram based on the compile-time parameter? It might be nice if it did, since it could potentially get the best possible information about predicate cardinality from a direct hit on a histogram step.

Here’s a quick test that shows that, no, SQL doesn’t give half a care about that. It creates the same histogram no matter what. 1000 rows should do the trick. I’m making both columns NOT NULL here, because I want to make one my PK, and I want to make sure there’s no histogram step for NULL values in the other. I’m not going to index my date column here; I’m going to let SQL generate statistics automatically.

SELECT
    ISNULL([x].[ID], 0) AS [ID] ,
    ISNULL(CAST([x].[DateCol] AS DATE), '1900-01-01') AS [HireDate]
INTO
    [dbo].[AutoStatsTest]
FROM
    ( SELECT TOP 1000
        ROW_NUMBER() OVER ( ORDER BY ( SELECT NULL ) ) ,
        DATEADD(HOUR, [m].[message_id], GETDATE())
      FROM
        [sys].[messages] AS [m] ) [x] ( [ID], [DateCol] );

ALTER TABLE [dbo].[AutoStatsTest] ADD CONSTRAINT [pk_t1_id] PRIMARY KEY CLUSTERED ([ID]);

First, let’s check in on what values we have

I’m going to run one query that will generate a histogram, and it’s guaranteed to return all of the table data. I want to see what SQL comes up with for histogram hits and misses here.

SELECT *
FROM [dbo].[AutoStatsTest] AS [ast]
WHERE [ast].[HireDate] >= '1900-01-01'

We have our histogram, and I’ll use a clunky DBCC command to show it to me; a sketch of that command follows. Below that is a partial screen cap, up to a point of interest.
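
The clunky command is DBCC SHOW_STATISTICS. Since the statistic was auto-created, you have to look up its system-generated name first; the stat name shown here is a hypothetical example:

-- Find the auto-created stat on HireDate (the name will look like _WA_Sys_...).
SELECT [s].[name]
FROM sys.stats AS [s]
JOIN sys.stats_columns AS [sc]
    ON [sc].[object_id] = [s].[object_id] AND [sc].[stats_id] = [s].[stats_id]
JOIN sys.columns AS [c]
    ON [c].[object_id] = [sc].[object_id] AND [c].[column_id] = [sc].[column_id]
WHERE [s].[object_id] = OBJECT_ID('dbo.AutoStatsTest') AND [c].[name] = 'HireDate';

-- Then feed that name in:
DBCC SHOW_STATISTICS('dbo.AutoStatsTest', '_WA_Sys_00000002_151B244E');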

Pay attention to the rectangle.

SQL created a histogram with direct hits on 04/30, and then 05/02. That means it doesn’t have a step for 05/01, but it does postulate that there are 22 rows with a date of 05/01 in the RANGE_ROWS column.

I went ahead and dropped that table and re-created it. Next we’ll run the same query, but we’ll pass in 05/01 as an equality value.

SELECT *
FROM [dbo].[AutoStatsTest] AS [ast]
WHERE [ast].[HireDate] = '2016-05-01'

And, long story short, it creates the exact same histogram as before.

Is this good? Is this bad?

Well, at least it’s reliable. I’m not sure how I feel about it otherwise.

You can try creating filtered indexes or statistics on really important segments of data if you really need SQL to have granular information about them. Otherwise, you’ll have to trust in the secret (and sometimes not-so-secret) sauce behind the cardinality estimator.

Thanks for reading!

Brent says: the more I work with SQL Server, the more I’m filled with optimism about the oddest things. When I read Erik’s idea about the exact histogram step, though, I thought, “Nopetopus.”

[Video] Free Training of the Week: Statistics and Memory Grants

This week is Stats Week here at Brent Ozar Unlimited, so let’s kick things off with a module from How to Think Like the Engine. Why does one query get wildly different execution plans? In this 17-minute video, learn how statistics influence your query plans, discover how to see your own statistics, and understand how stats help build memory grants.

UPDATE: expired!

(If you don’t see the video above, you’re reading this somewhere that doesn’t support video embedding. You’ll need to read the post on our blog.)

This video is part of our How to Think Like the Engine class, and along with the rest of our videos, it’s on sale this month! Use coupon code HighFive for half off our training videos and bundles.

[Video] Office Hours 2016/04/13

This week, Jessica, Richie, and Tara discuss whether you should skip SQL Server 2014 and jump to 2016, our Performance Tuning When You Can’t Fix the Queries class, and whether you should detach a database in order to drop the connections to it. (Wow!)

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Announcing Our New Pre-Con: Performance Tuning When You CAN Fix the Queries

The team resting up before tuning queries

Going to be in the Seattle area in October?

Are your users frustrated because the app is too slow, and you can change the SQL queries – but you’re not sure how? Or which ones? Or where to start?

Take heart – there’s hope. The Brent Ozar Unlimited team does this every week, and we’ll share our proven methodologies to performance tune code, execution plans, Entity Framework, and more.

This is a two-day pre-conference session in Seattle on Monday and Tuesday, October 24-25, 2016. Lunch is included with your admission. (You’re on your own for breakfast and dinner.) Class is held at the Big Picture Theater in comfy seats from 9AM to 4:30PM.

You’ll meet the consultants – Angie, Brent, Doug, Erik, Richie, and Tara – for two days of learning and fun, plus t-shirts, magnetic poetry kits, and more.

To celebrate the launch, it’s $100 off – it’s one heck of a deal at just $299 for 2 solid days of training! Register now – our pre-cons usually sell out. See you in Seattle!

P.S. – there’s t-shirts, magnetic poetry kits, stickers, and swag involved.

Looking for a New Challenge? kCura is Hiring a DBA.

I’m a big kCura Relativity fan – it’s an application that really pushes SQL Server hard, written by people who are a ton of fun to work with.

If you’re looking for a challenge in a really cool environment, check out what they’re looking for:

A Production Database Administrator with deep, demonstrated knowledge of MS SQL administrative tasks and the ability to consult on design, development, and automation improvements. You’ll have a passion for maintaining MS SQL databases that meet or exceed internal and client-contracted production SLAs for availability and performance. The right candidate will ideally have past hands-on experience administering MS SQL databases running in a public cloud environment such as AWS or MS Azure.

Responsibilities

  • Install and configure SQL Server 2012 and higher versions. Configurations built and supported should include scenarios leveraging SQL Server AlwaysOn, Windows failover clustering, and transactional replication.
  • Document complex installation, configuration, and optimization procedures so they can be automated.
  • Provide support 24/7/365 for any troubleshooting or corrective actions related to incidents impacting application availability within the production environments.
  • Take proactive measures to monitor, trend, and tune SQL databases, such as running maintenance jobs (backups, DBCCs, apply indexes/re-indexing, etc.), to meet or exceed baseline stability and performance SLAs on large databases (1 TB+) and large volumes of databases (100+).
  • Create, implement, and maintain SQL DB Health Checks, and have a demonstrated ability to automate SQL health reporting/event notification, and corrective actions.
  • Configure SQL Server monitoring utilities to minimize false alarms, and have a demonstrated ability to monitor/trend SQL environments to determine and implement enhanced monitoring thresholds to prevent incidents and reduce mean time to recovery (MTTR).
  • When performance issues arise, determine the most effective way to increase performance including scaling up or out, server configuration changes, index/query changes, etc.
  • Identify code defects and enhancements and develop a detailed root cause analysis that can be leveraged by the product management and development teams to improve application availability and decrease the total cost of ownership.
  • Ensure databases are being backed up and can be recovered in a manner that meets all BCDR objectives for RPO and RTO.
  • Perform all database management responsibilities in Microsoft Azure for production and non-production workloads.

Qualifications:

  • At least 4 years’ experience working as a Microsoft SQL DBA leveraging versions 2008 R2 or later.
  • Experience working in a 24/7/365 operation.
  • Bachelor’s degree in computer science or information systems.
  • Familiar with basic Azure IaaS capabilities, and some experience designing and building MS SQL databases within Azure or AWS.
  • Microsoft certifications such as MCSE, MCSD, etc.
  • Experience operating in an ISO-certified and/or highly regulated (SSAE, PCI, HIPAA, etc.) hosting operation.
  • Familiar with Dev/Ops concepts, and ideally experience working with a Dev/Ops team focused on implementing and enhancing continuous delivery capabilities.
  • Experience automating SQL Server deployment and configuration through PowerShell, Chef, Puppet, etc.
  • Background designing, building, and managing a search and indexing solution such as Elastic Search, Apache SOLR, etc.
  • Previous Relativity system administration experience.

 

(You should read that “qualifications” list as a perfect candidate, and don’t be dissuaded from applying if you’re not the perfect candidate.)

Interested? Get on over there and apply.

Old and Busted: DBCC Commands in 2016

I hate DBCC Commands

Not what they do, just that the syntax isn’t consistent (do I need quotes around this string or not?), the results are a distraction to get into a usable table, and you need to write absurd loops to perform object-at-a-time data gathering. I’m not talking about running DBCC CHECKDB (necessarily), or turning on Trace Flags, or any cache-clearing commands — you know, things that perform actions — I mean things that spit tabular results at you.

Stuff like this puts the BC in DBCC. It’s a dinosaur.

In SQL Server 2016

DBCC INPUTBUFFER got its own DMV after, like, a million decades. Commands like DBCC SQLPERF, DBCC DBINFO, and DBCC LOGINFO should probably get their own, too. Pinal Dave has a whole list of DBCC commands that you can break your server with here.
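
That DMV is sys.dm_exec_input_buffer, and it joins to other DMVs the way DBCC INPUTBUFFER never could; a quick example:

-- 2016's DMV replacement for DBCC INPUTBUFFER: the last batch for each session.
SELECT [es].[session_id], [ib].[event_info]
FROM sys.dm_exec_sessions AS [es]
CROSS APPLY sys.dm_exec_input_buffer([es].[session_id], NULL) AS [ib]
WHERE [es].[is_user_process] = 1;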

But truly, the most annoying one to me is DBCC SHOW_STATISTICS. It’s insane that there’s no DMV or function to expose histogram information. That’s why I filed this Connect item.

UPDATE: It looks like Greg Low beat me to it by about 6 years. Too bad searching Connect items is so horrible. I urge you to vote for Greg’s item, instead.

Statistics are the intel SQL Server uses to make query plan choices.

Making this information easier to retrieve, aggregate, join to other information, and analyze would put a powerful performance tuning tool into the hands of SQL Server users, and it would help take some of the mystery surrounding statistics away.

Please consider voting for Greg’s Connect item.

Thanks for reading!
