[Video] Office Hours 2017/01/11 (With Transcriptions)

This week, Brent, Erik, Tara, and Richie discuss restoring databases from Enterprise Edition to Standard Edition servers, their favorite way of collecting SQL Server metrics, extended events, career advice, how to tell what SQL Server uses memory for, and much more!

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.
Leave us a review in iTunes

Office Hours Webcast – 2017-01-11

 

Can you restore from Enterprise to Standard Edition?

Brent Ozar: We might as well go ahead and get started because now we have two pretty good questions in there that don’t have anything to do with the Nelson Mandela Effect. Mandy asks, “Can there be any issues restoring a database from an Enterprise Edition server over to a Standard Edition server? I’m on SQL Server 2014.”

Erik Darling: Yes.

Tara Kizer: I’m pretty sure that when we did that to move a production database down to a test environment for like customer service at my last job I believe it was, the restore failed because it was using an Enterprise Edition feature. I think normally though you can do the restores down but this was a specific issue. I forget what the Enterprise feature was though.

Erik Darling: Yeah, it really could be any of them, it doesn’t matter if it’s partitioning or compression or just any one of those things, or TDE, or something that’s Enterprise only. If you’re using any one of them, the restore will fail. There is—I forget if it’s its own DMV or if it’s a column in a DMV—it’s like sku persisted features or something like that.

Brent Ozar: Yeah, persisted sku features.

Erik Darling: Yeah, that’s the one. That will tell you if you’re using anything Enterprising. You should know that. You’re a good DBA. You should be aware of all the features that your developers are implementing.

Brent Ozar: Yeah, but she is such a good DBA she never had to do a restore, so.

Erik Darling: Terrifying.

Tara Kizer: I’ve never really even used Standard Edition, so a lot of times I have no idea what those lesser editions have. I’ve always worked in large corporations that just throw money at SQL Server.

Richie Rump: What are those poor plebs using?

Erik Darling: It’s like, no, it’s all Enterprise.

Tara Kizer: Yeah, what we had to do at my last job, because this was going to be used as a test environment, but it was going to be used like production, so we had to license it and we didn’t want to waste an Enterprise license on that one. So we had to restore to an Enterprise Edition test server, remove the feature, backup, and then restore it to the Standard Edition server.

Brent Ozar: My favorite way to tell when Enterprise features are being used by a database is to use sp_Blitz. If you run sp_Blitz, it will actually check to see if any databases are using Enterprise Edition features and tell you right in there. That will help a lot.
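
If you would rather check the DMV directly, a quick query like this, run inside the database you plan to move, lists any Enterprise-only features in use (the DMV exists in SQL Server 2008 and later):

```sql
-- Run inside the database you plan to restore to Standard Edition.
-- One row per Enterprise-only feature in use; zero rows means the restore is safe.
SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;
```

Features like Compression, Partitioning, and TransparentDataEncryption all show up here, matching the list Erik rattles off.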

 

Have you ever seen hypothetical indexes suppress normal ones?

Brent Ozar: Justin asks—here’s a really bad question—“Have you ever seen hypothetical indexes suppress normal ones?”

Richie Rump: Not the worst question we’ve seen this week.

Brent Ozar: No.

Tara Kizer: It’s a good question. Hypothetical indexes don’t actually get used by anything. They’re just used by the Database Engine Tuning Advisor, right? They were just left behind from that. It didn’t complete successfully, so it didn’t clean up all the stuff that it left behind.

Erik Darling: I wonder if that would ever suppress a missing index request though.

Brent Ozar: Oh, that’s a great question. We should totally try that.

Erik Darling: Yeah, that would be weird, right?

Tara Kizer: That means we have to use DTA.

Erik Darling: Can you create a hypothetical index? I forget.

Brent Ozar: I don’t think so.

Erik Darling: Never mind then, out the window.

Tara Kizer: Too much work.

Erik Darling: Scratch that blog post before it even begins.

Brent Ozar: That’s a great question too though, could you create your own hypothetical index? So then we have to start tracing the DTA.

Erik Darling: I did that once and it was stupid looking.

Brent Ozar: There’s trace flags.

Erik Darling: Yeah, I remember specifically there was one like showplan feature that the only other reference I found to it was on a website that was all in Russian and I didn’t click on it. I was like I don’t want to know.

Richie Rump: [Inaudible]

Brent Ozar: [Inaudible]

 

How do you recommend collecting performance data?

Brent Ozar: Renee asks, “Do you guys collect KPIs, key performance indicators, of the SQL Server? If so, how do you do that? What’s your favorite way to collect SQL Server metrics?” We should ask each of you. Erik, how do you collect SQL Server performance indicators?

Erik Darling: I use sp_BlitzFirst.

Brent Ozar: How does that work?

Erik Darling: You can log in all the tables and you can set it up as an agent job to run every however many minutes or seconds you want. If you’re short a monitoring tool or round-the-clock DBAs who don’t mind hitting F5 or one of those dark man bird things…

Brent Ozar: People who aren’t watching the YouTube video won’t get that.

Erik Darling: You know, it’s a pretty good way to collect metrics across a variety of things, like wait stats and file stats and all sorts of stuff. Then it’s kind of an exercise for you to query and trend that information.
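
As a rough sketch of what Erik describes, an Agent job logging sp_BlitzFirst to tables, the call looks something like this (parameter names from the First Responder Kit; the database and table names here are just examples):

```sql
-- Scheduled via a SQL Server Agent job every few minutes, this logs wait stats,
-- file stats, and perfmon counters to tables you can query and trend later.
EXEC dbo.sp_BlitzFirst
    @OutputDatabaseName          = N'DBAtools',
    @OutputSchemaName            = N'dbo',
    @OutputTableName             = N'BlitzFirst',
    @OutputTableNameFileStats    = N'BlitzFirst_FileStats',
    @OutputTableNameWaitStats    = N'BlitzFirst_WaitStats',
    @OutputTableNamePerfmonStats = N'BlitzFirst_PerfmonStats';
```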

Brent Ozar: Richie, how about you?

Richie Rump: I don’t always collect stats, but when I do, I use sp_Blitz. Even before I started working for Mr. Brent Ozar I used sp_Blitz. So nothing new there. If it ain’t broke, don’t fix it.

Brent Ozar: Okay, Tara, give these guys an intelligent answer.

Tara Kizer: I’m a big fan of monitoring tools that can collect this data for me. If I don’t have a monitoring tool licensed for a certain server, I might start collecting stuff there myself, maybe with perfmon counter logs. But again, I’ve worked at large corporations where we just had monitoring tools collect all the stuff and then we reported on that out to a DBA team.

Brent Ozar: Tara doesn’t get out of bed for less than Enterprise Edition.

Tara Kizer: Sorry.

Brent Ozar: That’s my favorite answer too. We write sp_BlitzFirst to make it easier to go gather stuff but if I was a DBA tomorrow I would immediately go to my manager and say, “I’ve got to have a dashboard. I’ve got to know what’s going on here.” I would never try to roll my own as a DBA. There’s a funny story around me not getting a job because I gave that as an answer in part of the interview.

Tara Kizer: I actually have rolled my own monitoring tool, this is many years ago, in a large corporation but before we started shelling out cash for monitoring tools. It was a little .NET application. I didn’t have access to the production server from my desk or at home and we had to walk to the actual data center, which if I was in the office was fine because it was two buildings over, but it just became ridiculous. So I put a tool out there to then email me some information or maybe put it in a log file and email it to me. Eventually, we did get access. Then having to get a call in the middle of the night was ridiculous because no access.

 

How many miles did you walk this weekend?

Brent Ozar: You know, you say that, you didn’t want to walk to the server but how many miles did you walk this weekend?

Tara Kizer: So I only became active the past—I’m going to say four years. I was pretty sedentary before that and a lot heavier.

Brent Ozar: I see the pictures on Facebook and I’m like dear god, that’s a mountain, how…?

Tara Kizer: Yeah, I’m a little concerned about the two hikes I signed up for for Saturday and Sunday because they’re both hard and I didn’t realize they were the same weekend. I was like, “Oops,” usually I only do one.

Brent Ozar: Wow, tell us about those.

Richie Rump: The real question is what’s in the water bladder? Is water really in the bladder?

Tara Kizer: It’s definitely water because everything else is junk. Everything else breaks the hydration. One is in miles—it’s only like 1500 feet of elevation gain—but it’s boulder scrambling. I just completed a hike on Sunday that involved a lot of boulder scrambling to get to the peak. I’ve never been as sore in my arms from a hike as I was on this one; usually it’s my legs. Saturday’s includes that too. Sunday’s is a cross-country hike through the desert to a peak out there called Willow Peak. I’ve been wanting to do that for a while. You need a capable vehicle to get to the trailhead, so I need to go with groups that have those vehicles.

Brent Ozar: Wow.

Richie Rump: Nope, nope.

Tara Kizer: I think that’s 12 miles. It’s 21 miles and I’ve done that in a day before but it’s just the elevation gain and the type of hike that it is that’s going to make them really hard.

Brent Ozar: Richie, you’re saying no but you started running again. I heard that on the podcast.

Richie Rump: Yes, yes, but I’m not doing 21 miles in the desert so there’s a difference here. It was 60 degrees on Monday here and I didn’t even get out of bed. I was like, “I’m not running. That’s it. I’m done.” It’s got to be hot and it’s got to be humid for me to actually get out and go do it.

Brent Ozar: For those of you that don’t know, Richie lives in south Florida which is where you don’t even have to carry water. All you have to do is open your mouth and keep running and you just immediately hydrate.

Tara Kizer: The highest peak in Florida I think is like 350 for the elevation.

Richie Rump: And that’s at Disney World, right? That’s like Big Thunder Mountain or Space Mountain, or one of those two.

 

Should we use the cloud as a tertiary server?

Brent Ozar: Thomas asks, “What do you guys think of using Azure or Amazon Web Services or Google Compute Engine for a third secondary, aka tertiary, for High Availability? Is there one that you guys would recommend over GCE, Azure, or AWS?” I’ll take that one. I would use the one that your developers want to use because they’re probably wanting to use other features or things that are inside that platform. Some developers have stuff that they like to use in Azure, some have stuff they like to use in AWS, whatever, so I would look bigger and from an enterprise perspective. We never get to pick as DBAs, we’re kind of like low man on the totem pole there. Richie gets to pick because he’s our head developer. He gets to architect all these things. We just, “Where do you want to go, Richie? All right, that’s the one we’re going to go with.”

Richie Rump: Except I don’t get to pick because you got to pick, Brent.

Brent Ozar: I got to pick.

Richie Rump: And what do I do? I’m like, “I’m going to use Azure for this” and just go off and do it.

Brent Ozar: Yep, and then swear later.

 

How do I use the system health XE session?

Brent Ozar: Terry asks, “Do you have a recommended blog or anything on how to interpret and use the system health extended events session? P.S. I have taken down all my Christmas cards except for yours which is staying up forever.” Awesome. We send out Christmas cards to our training buyers. Have any of you guys used the system health extended events session?

Erik Darling: I have only used it to look at deadlocks and I think maybe like occasionally just to—I’ve looked at like the CPU utilization thing but I just haven’t really cared all that much about it.

Brent Ozar: I don’t either.

Tara Kizer: I used it one time and I had to look up how to use it because I can never remember and also what is included in it. But Google has always led me to the answer.

Erik Darling: I would say deadlocks is probably the most common, there’s a bajillion blog posts about that.

Brent Ozar: If you want to learn more about extended events in general, there’s two things: look for Jonathan Kehayias and then also look for SQL Server Central’s Stairways to Extended Events. Both of those are good resources.

 

I have a function that’s fast on one server and slow on another.

Brent Ozar: Frederick says, “I have a function that takes 35 seconds to run on its original server then I moved it to an identical server—same CPU, same memory, same SQL Server version—and it runs there in five seconds. I can’t find any differences in MAXDOP, cost threshold, query plans, whatever. They’re on the same VM server and SAN. I’m out of things to check. What should I do next?”

Tara Kizer: Statistics IO and time. I’d compare the output of those and I would bet that there is a plan difference though.
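
The comparison Tara suggests is just two session settings; the function call below is a hypothetical stand-in for however you invoke yours:

```sql
-- Run this on both servers and diff the output: a big gap in logical reads
-- or CPU time almost always points at a plan difference.
SET STATISTICS IO, TIME ON;
SELECT dbo.YourFunction(42) AS result;  -- hypothetical: however you call the function
SET STATISTICS IO, TIME OFF;
```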

Brent Ozar: Yeah, he didn’t mention a plan difference. So when you go to look at the execution plans, what would you do?

Tara Kizer: I think it does say there aren’t any differences in the query plans.

Brent Ozar: Oh, yeah.

Tara Kizer: But I would bet there are.

Brent Ozar: Yeah, you’re right. I bet you’re right. We’ve got a post talking about you can use a plan comparison tool built into SQL Server Management Studio 2016 to compare two different plans and it will find all the differences between those two. Those are useful.

Tara Kizer: Then also, while both of them are running, run sp_BlitzWho or sp_WhoIsActive and see what it’s waiting on. Is there blocking? Is that what the difference is?

Brent Ozar: Yeah, sp_BlitzFirst has the @Seconds parameter. So you can run it with @Seconds = 30 and it will take a 30-second sample and tell you what happened during that time span. That may help a lot.
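
That sampling call is a one-liner (parameter name from the First Responder Kit):

```sql
-- Samples wait stats, file stats, and perfmon counters over a 30-second window,
-- then reports only what happened during that window.
EXEC dbo.sp_BlitzFirst @Seconds = 30;
```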

 

Why is a backup being blocked?

Brent Ozar: Paul says, “I’m seeing blocking while my transaction log backup is running, and the blocking isn’t picked up by Idera’s monitoring. What in the SQL Server engine could be blocked during a transaction log backup? I didn’t think backups would block anything.” Has anybody else seen this?

Tara Kizer: We need more information I think.

Brent Ozar: There are two.

Tara Kizer: Do you know what it is?

Brent Ozar: One. One is msdb. At the beginning or the end of your log backup, it will hit msdb in order to read or write backup information. I ran into this before when I had a long history of backups and I wasn’t purging it out, so my monitoring tools would actually get blocked. You know what’s funny? A lot of monitoring tools will exclude their own queries, so they won’t show you that they’re part of the problem. If I had a crispy taco to bet, I would bet it on that one. It could theoretically also be system tables, but I would bet it on the msdb tables.
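
If bloated backup history in msdb turns out to be the culprit, the stock pruning procedure takes care of it; the 60-day retention here is just an example:

```sql
-- Delete backup/restore history older than 60 days from msdb.
-- Run this regularly (e.g. a weekly Agent job) so the history tables stay small.
USE msdb;
DECLARE @cutoff datetime = DATEADD(DAY, -60, GETDATE());
EXEC dbo.sp_delete_backuphistory @oldest_date = @cutoff;
```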

 

What should MAXDOP be if I have 80 cores?

Brent Ozar: Oh, goodness, Jonelle. Jonelle I like already because Jonelle has a lot of money. Jonelle says, “Hi. In a transactional workload, what’s the best practice for MAXDOP on a system with 80 cores?” Not 8, not 18, but 80 cores.

Tara Kizer: … one of my clients that claimed that they were throwing hardware at a performance problem and they started with like 4 processors and I think they went up to 8 or something. I was like, “You have 8.”

Brent Ozar: My laptop has 8.

Richie Rump: Mine does too, 8.

Tara Kizer: I know that we say max degree of parallelism shouldn’t go any higher than 8 unless you can prove that higher will help you. I sat in on Gail Shaw’s presentation at PASS for tempdb. I think she covered MAXDOP. I’m pretty sure it was a tempdb session but I went to two of her sessions. Her opinion is MAXDOP should be set to no more than four—or maybe that was a tempdb datafile? Maybe that’s what it was. I know that I’ve seen other people say max degree of parallelism, don’t go any higher than 4 but I know that we say 8. Try them both.

Brent Ozar: Erik, you ran tests on CHECKDB at Dell DBA Days and you ran it with different MAXDOPs. It went faster when you went higher than 8 too, right?

Erik Darling: Well, sort of. On 2016, it went faster up until I hit the number of cores in a socket. So we had, I think it was dual 18s in those?

Brent Ozar: I think that sounds right.

Erik Darling: Yeah, dual 18s or 16s. So we had that, and when I got up to the number of cores in a socket, DBCC CHECKDB went, not like crazy faster, but it did get faster and faster up until I hit the number of cores in a socket. But prior to 2016, I’ve always found that it sort of levels off around MAXDOP 8. I’ve never found that once you get up to like MAXDOP 16, 24, 32, it gets any better. On 2016, there is a difference. Whether that translates to workload queries is a different question. But you can set MAXDOP specifically for DBCC CHECKDB in 2016, so that’s where it shines a bit.
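
The 2016-specific option Erik mentions is part of the DBCC syntax itself; the database name and MAXDOP value below are placeholders:

```sql
-- SQL Server 2016+: cap parallelism for this CHECKDB run only,
-- independent of the server-wide max degree of parallelism setting.
DBCC CHECKDB (N'YourDatabase') WITH MAXDOP = 16, NO_INFOMSGS;
```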

Richie Rump: I have done a little bit of work with an 80 core server. I had the opportunity of working on one of those guys. For the workload that I was running, was just doing a lot of reading on vast amounts of data. One of these tables was like six terabytes or something like that. I was playing with MAXDOP and my sweet spot for that one query was 24. I did it in increments of five and anything after that—or four, or whatever it was…

Tara Kizer: I was going to say five, why would you go in increments of five?

Brent Ozar: That’s our developer, ladies and gentleman.

Erik Darling: Increments of 13.

Richie Rump: But after that it got slower, before that it kept going faster. So I just set that one particular query to 24 and kind of off to the races. I don’t think we set MAXDOP for that particular server but for that particular query, it needed to be set.
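
Pinning a single query to its sweet spot, as Richie describes, is just a query hint; the table here is hypothetical:

```sql
-- Let this one query use up to 24 parallel threads,
-- regardless of the server-wide MAXDOP setting.
SELECT COUNT(*) AS row_count
FROM dbo.HugeTable
OPTION (MAXDOP 24);
```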

Brent Ozar: The reason why I bring up those other kinds of experiences is especially, now most people out there won’t invest in an 80 core server. So you, Jonelle, I’m not just saying this because you have money, you’re probably a smart and attractive person as well, whether you’re a man or a woman, it’s probably either way. Stand near an 80 core server and amazing things happen. But, you may want to have different settings for things like rebuilding indexes or CHECKDB. You may want to be the kind of person who plays around with resource governor to set different MAXDOPs for different workload pools.

Erik Darling: I think if I’m on an 80 core server I’m going to be really worried about seeing CMEMTHREAD waits. I want to be looking into trace flag 8048 and some of the other ones that help out prior to 2016 when you have that many cores. Because I’m guessing if you have an 80 core box you have more than eight cores per socket, and that’s when things start to get a little tricky, when you have those big, wide CPUs.

Brent Ozar: How many cores—the new desktop you just built—how many cores does that thing have?

Erik Darling: 8?

Tara Kizer: 120.

Erik Darling: It’s an i7-6850K, so it’s six cores, twelve threads. So it is hyper-threaded. I didn’t go with the crazy one.

Brent Ozar: So you have more solid state drives than you have cores.

Erik Darling: Yeah, but they’re in a Windows 10 storage space so technically I only have one solid state drive that’s 3.5 terabytes.

Richie Rump: Your priorities are out of whack, sir. I hate to tell you that.

Erik Darling: If I wanted to spend an extra $400 I could have gotten the bigger chip but I didn’t want to do that, at least not this time around.

Brent Ozar: One of the solid state vendors I was just looking at today has a 4 terabyte 2.5 inch SSD out right now. You can buy it on Amazon, it’s $1,300. Insane.

 

Why does my CHECKDB fail?

Brent Ozar: M.M. says, “We have a two terabyte—” Why don’t I read these questions before I start talking? M.M. says, “We have a two terabyte database which fails the CHECKDB before the backup due to my snapshot space. However, there’s over 400 gigs free on disk.” Oh, I feel sorry for you, buddy. “Is there anything on how I should manage CHECKDB or how I get around what that error is?” Have you guys run into this error before?

Tara Kizer: It sounds really familiar but I can’t pinpoint it.

Brent Ozar: Something to do with snapshots on fragmented NTFS volumes…

Erik Darling: Well, yeah, because snapshots are sparse files so they do something weird off the bat. I forget what it is but, oh god, what are those trace flags that help…?

Tara Kizer: Yeah, I was going to say there’s a trace flag.

Erik Darling: 20…

Brent Ozar: I wonder if Bob Pusateri’s CHECKDB thing is public.

Tara Kizer: While you are doing that, I would recommend that M.M. posts this question on Twitter with #SQLHelp and you’ll probably get Paul Randal or someone like that reply to you with the correct answer.

Brent Ozar: Is there anyone like that or is there just Paul Randal? Yes, I love Paul dearly. So, yeah, that’s a great point. It is a known thing that a lot of people have run across and none of us just remember it off the top of our heads.

Erik Darling: Aaron Bertrand also has a post called “Minimizing the Impact of CHECKDB” that talks about some of the trace flags you can use that change the way CHECKDB runs. You might have some luck with one of those. If not, there’s always off-loading CHECKDB: you could back up the database and restore it somewhere else. Since you’re already a diligent DBA taking backups, you might want to try physical only. I don’t know if that would decrease the amount of work that goes into it, but it might be another thing to look at.
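
The lighter-weight check Erik mentions is a WITH option; the database name is a placeholder:

```sql
-- PHYSICAL_ONLY reads every page and checks page-level integrity
-- (checksums, torn pages) but skips the expensive logical checks.
DBCC CHECKDB (N'YourDatabase') WITH PHYSICAL_ONLY, NO_INFOMSGS;
```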

 

If MAXDOP is 8, will SQL Server only use 1 or 8?

Brent Ozar: Humphrey likes his question so much that he has pasted it in three times. All right, Humphrey, you’ve got my attention. He says, “If MAXDOP is set to 8 does that mean that SQL Server will only use either 1 or 8?” As much work as you put into your question, I’m going to give you a real short answer: No. That’s just the MAXDOP, it’s not the MINDOP. If you look at your execution plans you can actually see the number of threads used.

Erik Darling: Only the actual plans though.

Brent Ozar: Yes.

Erik Darling: Not the cached plans, otherwise it would be in BlitzCache and it would be slick there.

Richie Rump: And it would be amazing.

Brent Ozar: It would be hard to query.
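
If you would rather watch a live query than pull its actual plan, one way to see how many threads a session is really using is the tasks DMV (a sketch; the session-id cutoff is a rough heuristic for skipping system sessions):

```sql
-- One row per worker task: parallel queries show many tasks per session.
SELECT session_id, COUNT(*) AS task_count
FROM sys.dm_os_tasks
WHERE session_id > 50          -- rough filter for user sessions
GROUP BY session_id
ORDER BY task_count DESC;
```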

 

Where should I put the partitioning key?

Brent Ozar: Fiag asks, “Should the field in the partition function always be the first place in the primary key for the same table? Basically when I partition does the partition field have to go first in the indexes?”

Erik Darling: I’ve never not done that so I don’t know.

Richie Rump: Well then is it partitioned?

Brent Ozar: Yeah, you can create indexes on top of it and have those not be the first field but, yeah, sadly I think it has to be the first one. I’ve never not done it that way either.
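
For what it’s worth, the documented rule is a bit looser than “first column”: the partitioning column has to be part of the key of any unique index on the table, but it can sit in any key position. A hypothetical sketch, with made-up table and column names:

```sql
-- Hypothetical partitioned table: SaleDate is the partitioning column,
-- but it's the *second* column of the primary key, which is still legal.
CREATE PARTITION FUNCTION pf_Monthly (date)
    AS RANGE RIGHT FOR VALUES ('2017-01-01', '2017-02-01');

CREATE PARTITION SCHEME ps_Monthly
    AS PARTITION pf_Monthly ALL TO ([PRIMARY]);

CREATE TABLE dbo.Sales
(
    SaleId   bigint NOT NULL,
    SaleDate date   NOT NULL,
    Amount   money  NOT NULL,
    CONSTRAINT PK_Sales PRIMARY KEY CLUSTERED (SaleId, SaleDate)
)
ON ps_Monthly (SaleDate);
```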

 

What’s your favorite monitoring tool?

Brent Ozar: Clark says, “Speaking of monitoring tools. Tara mentioned Foglight/Spotlight in the past, or as we used to call it ‘QuestFogSpot.’ Is this the monitoring software she prefers?” Why don’t you care about the rest of us, Clark?

Tara Kizer: Brent used to work at Quest. I really did like Foglight and Spotlight when I used them at two of my jobs. The last job was using SQL Sentry Performance Advisor and I liked it too. I haven’t done much with Idera. Those are the three tools that I think are pretty comparable. But, you know, are there any other tools that I prefer? I just have the most experience with Foglight and Spotlight but SQL Sentry, or SentryOne’s Performance Advisor is a great tool as well.

Erik Darling: I’m having the hardest time adapting to their new name.

Tara Kizer: I know.

Erik Darling: SentryOne. Like SentryOne perf…

Brent Ozar: Sentry 21.

 

Should I learn A or B or C or D or all of the above?

Brent Ozar: Graham asks, “I’m doing some career planning and I wanted to get your opinion on whether I should focus on database development and design or should I focus on business intelligence? Should I know both? I’m already learning Amazon Web Services and Azure Infrastructure Services.”

Tara Kizer: It doesn’t say DBA anywhere in that question.

Brent Ozar: No, yeah. I think he wants to know all the things. Richie, what are your thoughts on that?

Richie Rump: I don’t know, what do you want to do is really the question for me. Those are two pretty different things. You could go into both but eventually you’re going to get a job doing one. So which job do you want? Do you want to do the BI side or do you want to the db design/development side? I think they’re two totally different things. Focus on what you want to do and then go towards that.

Tara Kizer: Some people base their decision purely on the money aspect of things so take a look at our salary survey. I know that there are definitely some people that are listing those as what they are doing right now and see what salaries are averaging for those.

Richie Rump: Yeah, I’ve done both and I’m not doing the database/data intelligence warehouse-y stuff anymore. I’m done.

Brent Ozar: Why not?

Richie Rump: It seemed pretty repetitive to me. You load data in, you clean up a crapload of data, and you take data out. It’s what you’re doing over and over and over as opposed to when you’re doing a lot of database development, there’s a lot of interaction between what’s the business really need and you’re talking with the business doing database design and doing all that stuff. Where typically when you get inserted into a business BI project, all that stuff has really been answered already and you’re just going in and doing it. At least that’s been my experience, so I’m done. I want to write code so that’s kind of where I went.

Brent Ozar: I just never want to deal with reports again as long as I live because it’s always, “Can you move this number to the right? What does this number mean? Can you add this?” “No” I’m done with that.

Erik Darling: “Are you sure this is right?”

Tara Kizer: I’ve had some fun with SSIS and SSRS when it gets into the coding aspects of it. In SSRS you can have .NET stuff in there, you can load in DLLs, so I’ve done some fancier stuff with those utilities. It wasn’t just purely moving a column; I didn’t get into that aspect. Once I was done with it, it got moved over to a developer to take care of those requests. But what are you most interested in? What makes you happy, what makes the day go faster? For me, it’s performance tuning. Database development has performance tuning in it a lot of the time, so that’s where I would go if I had just these two choices: money and what interests me.

Erik Darling: Money and happiness.

Tara Kizer: Money first, which is why I was an on-call DBA for forever.

 

Will the First Responder Kit grow to include development tools?

Brent Ozar: Fiag says, “We use the Blitz Toolkit a lot. Are there any plans to extend this library beyond the database and architect tools to development helper tools?” The cool thing is that it’s totally open source. You can add in whatever you want. I’ve worked with other folks—for example, I’m working with Aaron Bertrand on getting in his replacement for sp_MSforeachdb. If there are other tools that you’d like to build, we can totally talk about that. Start an issue over in the GitHub repository and the group can start discussing it. I’d love to see other stuff in there as well. Will we build it? Probably not. I think the tools that you see in there are the extent of the tools that we’re going to build. We’ll continue to make those better, but I just wouldn’t expect to see new tools in there.

Erik Darling: Even some of the tools that exist we just don’t add much to, like BlitzTrace and BlitzRS, we just don’t do anything with them.

 

How can I tell what SQL Server uses memory for?

Brent Ozar: Brent says, “Is there a way to see how SQL Server is using memory after it’s taken all that’s available, like what parts of the memory SQL Server is using for what?”

Erik Darling: Yes.

Brent Ozar: All right, next question… [Laughter] Where do you look? Is it in DMVs?

Erik Darling: It’s DMVs all the way. I’m going to totally barf on remembering the names of them but I know that there are DMVs where like memory buffers or buffer descriptors or something like that where you can figure out which tables and indexes are actually in memory and all that stuff.

Brent Ozar: Clerks is another one. Clerks describes if it’s used for log buffering or whatever.

Erik Darling: Clock hands.

Brent Ozar: Clock hands, wow, oh man, that’s bonus points for that. If you want a query to see as an example, if you search for Opserver—it’s like observer but with a p—Opserver, it’s an open source thing by Stack Overflow’s dev team. It’s a monitoring tool kind of for SQL Server, it’s like a dashboard. If you go to their GitHub repository, search for the word “clerks.” There’s a DMV query in there that I wrote with Nick Craver and it lists all of the clerks and what they do. So it just gives you what is used inside. I have no idea how it works, I wrote that thing like three years ago.
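
A couple of the DMV queries being danced around here, sketched out (column names per SQL Server 2012 and later):

```sql
-- Top memory consumers by clerk: buffer pool, plan cache, log pool, etc.
SELECT TOP (10)
       type,
       SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY memory_mb DESC;

-- Which databases own the buffer pool pages (8 KB each):
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024   AS buffer_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_mb DESC;
```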

Richie Rump: Yeah, not Amazon. Don’t go to Amazon and search for clerks, but GitHub and Opserver because the other one will get you something else.

Brent Ozar: Which is good as well, but yeah.

Richie Rump: But in a different way, Brent.

Tara Kizer: Not in a good way.

Richie Rump: A little bit different way.

Brent Ozar: Paul Olson follows up and says, “What about BlitzCache, won’t that show what’s in memory?” It only shows the one component, the cached query plans. It doesn’t show, for example, what transaction log records are in RAM, how much Always On Availability Groups is using, all kinds of other stuff.

Brent Ozar: Justin adds, “Clerks was indeed a great movie.” I’ve never seen it. I’m just going to leave it there.

 

How can I tell when I have out-of-date statistics?

Brent Ozar: Fiag says, “Sorry for my ignorance.” You’re forgiven. We all have our problems. All of us happen to also suffer from ignorance so we work pretty well here.

Erik Darling: Massive amounts.

Brent Ozar: “Does Blitz recommend stats for what table should be…?” I think you mean where we need to update statistics.

Erik Darling: Yes, well, not Blitz, but BlitzIndex. I recently added it. It doesn’t run out of the box automatically because there were some performance issues on servers with a large number of statistics, which I think I’ve fixed. It was weird. I’ve been experimenting with it with clients and I haven’t been able to get the statistics part to run slow since I changed it. I had to hint the join types to use hash joins, because there were certain circumstances where SQL Server would just woefully underestimate how many rows were going to come back and be like, “I’m going to nested loop through all this garbage.” So, yeah. BlitzIndex, if you download one of the newer versions of it and you use @SkipStatistics = 0, it will tell you if you have wonky statistics on any of your tables. Tell me if it runs slowly so I can keep messing with things.
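
The call Erik is describing looks like this, with a placeholder database name (check your First Responder Kit version for the exact parameter defaults):

```sql
-- @SkipStatistics = 0 turns on the statistics checks Erik added;
-- it's off by default because of the performance issues he mentions.
EXEC dbo.sp_BlitzIndex
    @DatabaseName   = N'YourDatabase',
    @SkipStatistics = 0;
```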

Tara Kizer: By slow he means like over 20 minutes before I killed it.

Erik Darling: I tested that on so many statistics objects and I could not get it to be slow.

Tara Kizer: I remember that. It was like two clients of mine in a row or something that it happened on.

Erik Darling: 30,000 some odd statistics objects and it’s not…

Brent Ozar: This is exactly where our scripts come from. All of the stuff in the First Responder Kit, we use every day. We go and run it against clients’ servers. So please, when you check in code, make sure it doesn’t suck. I’m not talking to Erik, I’m talking to you, the public.

Richie Rump: Yeah, you’re talking to me. I understand that, Brent. Yeah, you’re talking to me. I get that.

Brent Ozar: You.

How can I tell if a query is coming from a linked server?

Brent Ozar: Curtis asks, “Is there a way to tell if a query is coming from a linked server connection?” I don’t know that.

Tara Kizer: I don’t think so, especially if they’re doing an open query to use it because it’s just going to look like the query is local.

Brent Ozar: It’s going to come from another server name. I think you could see it in sys.dm_exec_connections or sys.d—whatever the connections DMV is.

Tara Kizer: Yeah, the host name would be a server rather than a desktop or a laptop. If you have a naming standard, you could search on that.

Erik Darling: If we’re just like brainstorming ways to tell you could look through the execution plan and see if there’s a remote query operator in there anywhere.

Tara Kizer: Wouldn’t that appear on the source server though, not where the actual object exists?

Erik Darling: Oh, yeah. Then you know what, look at the SQL text and if there are a lot of double quotes…

[Laughter]

Tara Kizer: That has good stuff in there.

Brent Ozar: It looks like a really bad query. We could probably figure that out, too. But the connections DMV angle is where I’d probably go.
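
The connections DMV angle could be sketched roughly like this (a heuristic, not a guarantee: host_name and program_name are reported by the connecting client, and linked server connections typically show the remote SQL Server’s machine name and a program name like “Microsoft SQL Server”):

```sql
-- Look for sessions whose client looks like another SQL Server
-- rather than a user workstation.
SELECT s.session_id,
       s.host_name,           -- the remote server's name for linked server calls
       s.program_name,        -- often 'Microsoft SQL Server' for linked servers
       s.login_name,
       c.client_net_address
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c
    ON c.session_id = s.session_id
WHERE s.is_user_process = 1;
```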

 

Was Clerks 2 better?

Brent Ozar: The last question we’ll take is I guess from Brent. Brent says, “What about Clerks 2?” No, I haven’t seen that either. I don’t know if you guys did? Nope? All right.

Erik Darling: Best part about Clerks 2? Goodbye Horses is on the soundtrack.

Richie Rump: I didn’t think Clerks 2 was bad but I didn’t think it was that great either.

Brent Ozar: What’s your favorite movie that you’ve seen lately?

Richie Rump: Rogue One.

Brent Ozar: That good? I still haven’t seen it yet. I need to get that in the theater while it’s still out. Yeah, I know. I’ve been busy. I’m trying to get a dang apartment over in freaking San Diego. Turns out to be kind of hard.

Richie Rump: Priorities, Brent, priorities.

Brent Ozar: Yeah, right? All right, thanks everybody for hanging out with us this week and we will see you guys next week on Office Hours. Adios.

Erik Darling: Adios.


2 Comments

  • Data Compression was the Enterprise Feature that caused the restore to Standard Edition to fail. 🙂

  • Andrej Kuklin
    January 19, 2017 5:30 am

    Concerning the partitioning question: “Should the field in the partition function always be the first place in the primary key for the same table? Basically when I partition does the partition field have to go first in the Indexes?”

    It is technically not required. For instance, the following works just fine:
    CREATE PARTITION FUNCTION [PFN_dueday](INT) AS RANGE RIGHT FOR VALUES (20161216, 20161217);
    CREATE PARTITION SCHEME [PS_dueday] AS PARTITION [PFN_dueday] TO ([PRIMARY], [PRIMARY], [PRIMARY]);

    CREATE TABLE dbo.RiskEstimation
    (
        RiskScenarioId INT NOT NULL
        ,Dueday INT NOT NULL
        ,Payload VARCHAR(200) NULL
        ,CONSTRAINT PK_RiskEstimation PRIMARY KEY CLUSTERED (RiskScenarioId ASC, Dueday ASC) ON PS_dueday (Dueday)
    );

    It even *may* be a good idea to index in such a way if you have both queries that filter on
    1. Only Dueday
    2. Only RiskScenarioId
    AND you’re sure you can get partition elimination to work for you (this is the greatest IF).

    Then the query
    SELECT * FROM dbo.RiskEstimation WHERE Dueday=20161216
    will at least profit from partition elimination.

    And the query
    SELECT * FROM dbo.RiskEstimation WHERE RiskScenarioId=2
    will touch all partitions, but with a Clustered Index Seek within each one.

    If the index were defined as (Dueday ASC, RiskScenarioId ASC), the second query would have to scan the whole table.
    The risk is that if partition elimination doesn’t work for some reason (and there are many of them), a query with a predicate on the partitioning column (and these should be most of your queries) is suddenly *scanning* all partitions (instead of seeking). So, if you don’t have a significant number of type 2 queries (not filtering on the partitioning column), it’s probably a good idea to stay safe and make the partitioning column the first in the index.
    Creating a separate index just for RiskScenarioId can also be a good solution – with the usual implications for storage space and insert/update slowdown.
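
    The separate-index option could look like this (the index name is made up):

    -- A nonclustered index just for RiskScenarioId. By default it is created
    -- aligned on the table's partition scheme, so
    --   SELECT * FROM dbo.RiskEstimation WHERE RiskScenarioId = 2
    -- can seek within each partition instead of scanning the clustered index.
    CREATE NONCLUSTERED INDEX IX_RiskEstimation_RiskScenarioId
        ON dbo.RiskEstimation (RiskScenarioId);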
