This week, Brent, Richie, Tara, and Erik discuss vendor licensing, rebuilding indexes, analysis services, using trace flag 174, and the team’s PASS Summit impressions and highlights.
Office Hours Webcast – 2016-11-02
How do I license SQL Server in virtualization?
Brent Ozar: Well, we got questions piling in here we might as well ask. Let’s see here, we have a licensing question.
“I need the Microsoft doc for licensing all cores on a VM host with …” This is just a rambling bunch of words. “I need Microsoft documentation for licensing all the cores on a VM host with Enterprise, allowing all SQL VMs on the host to be licensed.”
I think what you’re looking for is the licensing guide. If you search for “Microsoft SQL Server Licensing Guide,” it’s one of the rare pieces of documentation that is wonderful. It has pictures. It’s fantastic. I love the licensing guide. It’s really good. Have any of you guys had the chance to read through the licensing guide?
Tara Kizer: God, no.
Erik Darling: I only read through what I have needed to and it’s been precious little, thankfully. There were some questions about VMs and stuff and I was like, “Okay.”
Brent Ozar: It’s great. It’s got great character development. The plot is wonderful.
Richie Rump: The ending kind of sucked but, you know.
Erik Darling: If the guide doesn’t suit you, there’s like a 5-day licensing class that you can take from Microsoft. You can learn the joys of licensing.
Brent Ozar: There’s a company called Directions on Microsoft. One of the guys who does it is @GetWired on Twitter. They just have a boot camp on licensing. If you think our training is bad, just wait.
Richie Rump: Best 5-days of sleep I’ve ever had.
Brent Ozar: That’s probably what you need to watch before … Are you going to take a nap or anything before the World Series game, or are you just 100% wired?
Richie Rump: I am completely wired. I could barely hold it together. I don’t know what to think. My head is saying they’re going to break your heart, they’re going to lose in spectacular fashion by being up 3 runs, a grand slam in the 9th inning with 2 outs and 2 strikes and Chapman on. That’s what my head is telling me is going to happen, but my heart is saying they’re going to do it tonight. I am this split person. I am a shell of a man. I am a complete and total mess right now. I am wrecked. I am ruined.
Brent Ozar: That’s Wednesday. It’s normal.
Richie Rump: Pretty much. Yeah.
Erik Darling: Is that something in AWS or is that the Cubs? I’m sorry, I didn’t get that.
Richie Rump: Oh. I came into work in Azure today… and they just, whoa…
What should I do with all these Fusion-IO drives?
Brent Ozar: Next question.
“Hi, guys. We see high latency reads and writes currently, so our company purchased Fusion-io drives for a 2-node cluster running system manager in a data warehouse. We want to take advantage of the Fusion-ios. What should we do with them? Would you recommend buffer pool extensions? Right now we have 256 gigs of RAM with no memory issues. Or in-memory OLTP, an attempt to move …” Well, in-memory it doesn’t matter what storage you use there, so we can kind of short-circuit that. “What should I do with my fancy Fusion-io drives?”
Man, wow, that’s kind of cool. If you had a couple of Fusion-io drives, what would you do with them? In a cluster, keep in mind.
Erik Darling: Sorry. I didn’t catch … Are these local drives, or are these …?
Brent Ozar: Yeah.
Richie Rump: Yeah.
Erik Darling: They’re local? Oh, yeah. Stick tempdb on those, then. Maybe stick some log files on them, if you have some logs that you want to make sure feel speedy.
Brent Ozar: Well, but it’s a cluster, though. That’s where it gets complicated. If you’re-
Erik Darling: Oh, right, right. Cluster. Just tempdb then. Skip the log files.
Brent Ozar: If you were going to decide what you’re going to do, and really that’s the only option that you have, are there any symptoms that you would look for before you would say, “Hey, let’s go get Fusion-io drives and use them for tempdb?”
Erik Darling: Yeah, you know. tempdb horribleness, the kind that does not get resolved by adding multiple files or turning on trace flags or doing anything fancy. Looking at the slow reads and writes, that can happen sometimes when you’re piling things in. Or just looking at my workload and saying, “Okay, I use the crap out of tempdb. I use a bunch of temp tables, I have triggers, I do all sorts of horrible things that don’t fit in the memory that I have, so I’m just going to make tempdb as fast as possible.”
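For reference, moving tempdb onto a local flash volume looks something like this. This is a sketch: the `F:` drive letter is an assumption, and the logical file names `tempdev`/`templog` are the defaults, so check yours in `sys.master_files` first. SQL Server 2012 and later failover cluster instances support tempdb on local (non-shared) disk, as long as the same path exists on every node.

```sql
-- Point tempdb's files at the local Fusion-io volume (F: is a placeholder).
-- Create the folder on every cluster node before restarting.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'F:\tempdb\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'F:\tempdb\templog.ldf');
-- The new paths take effect on the next SQL Server service restart.
```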
Brent Ozar: When you guys are looking for where should you invest, hardware or changes in code, indexes, whatever, what kind of places do you go look? What metrics do you go look at? I’ll point to you, Tara.
Tara Kizer: Okay. In my past few jobs we’ve always had a performance test, load test environment and, you know, we had production and we would be able to baseline the systems. We’d be able to do a load for what the load is going to look like 2 years from now, and then be able to determine when we’re running out of hardware. Just the typical things with performance monitor counters, disk, CPU, memory, looking at where SQL Server is waiting on things. This is assuming we’ve already optimized as much as we could in the database. So just having that load test environment where we could do synthetic production loads, and being able to do loads a year from now, 2 years from now, if you have that type of environment and tools.
Brent Ozar: If you don’t have that kind of environment, if you just got yourself a couple of fancy Fusion-io drives, it sounds like you just got the opportunity to go build one of those environments. Your load test environment doesn’t necessarily have to exactly mirror production if you got a nice set of Fusion-io drives.
Erik Darling: Yeah, not many people are throwing Fusion-io into dev, test, and QA.
Brent Ozar: It sounds like you probably have buckets of money, so hey, if you’re able to just get Fusion-io drives for no apparent reason, that’s freaking awesome.
Can I use SQL Server 2016 if the vendor only supports 2014?
“We are looking to upgrade 16 SQL Server 2012 instances. They all hold third-party databases and we’re going for either SQL Server 2014 or 2016. Is it better to install each of these instances as 2016, bring everything to 2016 and run all the databases in 2014 compat mode where necessary, or should we just go with what the vendor supports?”
I think the root of the question is: if a vendor says they only support SQL Server 2014, can I install their databases on 2016 but run them in 2014 compat mode?
Tara Kizer: If it were me I would not be doing that. You’re potentially breaking the license you have with the vendor, but even still, if you have a test environment some of these vendor applications won’t even start up, because they are checking the server version to see what your version is and you don’t get to pass go. If you wanted to go down the route of 2016 with 2014 compatibility mode you’re going to need to test to see whether or not the application will even load, because they might be checking the server version.
If it were me, for vendor applications I go with whatever they say they can support. I don’t want to chance running into an issue in production, because you won’t have a fallback plan. Once you’ve gone to 2016 you can’t restore back to 2014; it’s a very painful process.
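If you do decide to test a vendor database on 2016 in 2014 compatibility mode anyway, this is the switch. The database name below is a placeholder:

```sql
-- 120 = SQL Server 2014 compatibility level (on a 2016 instance)
ALTER DATABASE [VendorApp] SET COMPATIBILITY_LEVEL = 120;

-- Verify what every database is currently running at
SELECT name, compatibility_level FROM sys.databases;
```

Keep in mind that, as Tara says, compat mode only changes optimizer behavior; the instance still reports itself as 2016, so vendor version checks can still fail.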
Brent Ozar: Yeah, that’s a good point.
“Would you guys recommend running once-a-week stats updates even though you rebuild indexes daily?” Specifically, this person says they use Ola Hallengren’s rebuilding of indexes daily. That doesn’t mean you’re rebuilding all of the indexes, does it? What does that mean?
Tara Kizer: I think he’s got the schedule flipped. I would update stats daily and do the rebuild, reorg, weekly, maybe even less frequently, such as monthly. Quarterly, even. Or how about never?
Brent Ozar: Why is that?
Tara Kizer: You’re not doing it for performance reasons. You are getting to update statistics, which is why people think that you get better performance when you rebuild indexes. Just do your update stats, because rebuilding indexes is a really tough way to get updated statistics. Rebuilding indexes and reorganizing is more about overall database health. It does help with storage, compacts it down, but if you’re just talking about performance, update stats is where it’s at.
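Since the question mentions Ola Hallengren’s scripts, a stats-only daily job with his IndexOptimize procedure looks like this. The parameters are from his published documentation; setting all three fragmentation actions to NULL skips rebuilds and reorgs entirely:

```sql
-- Daily statistics-only maintenance: no rebuilds, no reorgs,
-- only update stats that have actually been modified.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = NULL,
    @FragmentationHigh = NULL,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y';
```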
Erik Darling: I think that workloads, at least OLTP ones, don’t typically benefit from rebuilt indexes. If you’re just doing a lot of singleton or short-range seeks on things, you’re not going to see a huge difference if you rebuild indexes. It’s mostly for those behemoth, terabyte-plus data warehouse style workloads, where you’re taking that whole table and you’re joining it to that whole table and then you’re squishing it and aggregating it, with just a few filters in the WHERE clause.
Brent Ozar: Next up, someone says they’re rooting for the Indians just because of the movie “Major League”. I saw, Richie’s making a great face.
Richie Rump: I just hope it ends like the movie “Major League”.
Brent Ozar: Ooh.
Richie Rump: Where they win the championship but they lose the World Series. See, that’s what I want to see.
Tara Kizer: Ah.
Brent Ozar: I think Richie’s probably going to be insufferable either way. If they win or if they lose, we’re going to be paying for it in the company chat room for the next week.
Richie Rump: Oh, yeah. There are just going to be tears shed one way or the other tonight.
What Perfmon counters should I be collecting?
Brent Ozar: Next question. “Can you guys direct me to a good blog on what perfmon counters I should be collecting?”
As it happens, I’m famous for that. If you go to brentozar.com/go/perfmon, I’ve got a video on there that you can go watch, and my list of favorite perfmon counters and what they mean.
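As a quick complement to Performance Monitor, many of the same counters are exposed inside SQL Server through the `sys.dm_os_performance_counters` DMV. A small sketch (the counter values in this view are padded, so `LIKE` is safer than `=`):

```sql
-- Sample two commonly watched counters from inside SQL Server:
-- Page life expectancy (memory pressure) and Batch Requests/sec (throughput).
SELECT RTRIM([object_name])  AS object_name,
       RTRIM(counter_name)   AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE N'Page life expectancy%'
   OR counter_name LIKE N'Batch Requests/sec%';
```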
How do you simulate production loads?
Mandy asks, “Tara, you mentioned having a performance test environment at your previous jobs. What technologies did you use in order to simulate production load on those test boxes?”
Tara Kizer: I don’t remember all the tools that were employed, but I know LoadRunner was one of the tools used in my last 3 jobs. At my last job, a lot of stuff was just written directly in Visual Studio. LoadRunner was the main tool.
Brent Ozar: Did you guys have a dedicated team of people to manage LoadRunner?
Tara Kizer: Yeah. These were not small companies, so yeah, we had dedicated teams for load test environments. They knew how to use LoadRunner, they knew how to code, they knew how to use all these tools.
Brent Ozar: Richie, what about you? What stuff have you seen used in the past?
Richie Rump: For perf, I haven’t been on teams big enough to have perf testing. It’s always been, “Is it working functionally and we’ll get it out, and if there’s a problem, well, we’ll have to fix it as it comes on.” Perf testing is a whole different other ball of wax. Usually you have, like Tara said, dedicated team does all that for you. It’s just a different skillset that your normal devs usually don’t have.
Brent Ozar: LoadRunner is a good example. It’s an expensive piece of software, it’s a different … I don’t expect DBAs to know how to use it, or developers to know how to use it. I don’t know how to use it. When I go into companies, sometimes every now and then I see a team over there in the corner and they’re like, “We’re the load test team.” Because if you think about it, if you have to reproduce production load, what you really have to do is redo that test every time there’s a new deployment of the software. You add new features in, you learn that people are using it in different mixes, so you can’t just build one test and then call it a day, it ends up being a full time job.
Are there compatibility issues with 2016 Analysis Services?
Next question. “We may not have the budget to upgrade all our SQL Servers to 2016 yet, or next year. Do you guys know if 2016 Analysis Services can work with R2’s Analysis Services? Right now we do cube development on one box and aggregation on another.”
I know they changed the models between tabular and whatever the other one’s called. Not tabular?
Erik Darling: Not tabular.
Tara Kizer: We’re not the right people to ask for this question.
Brent Ozar: No. Who I would check with is Chris Webb, with 2 Bs. Chris Webb does a lot of analysis services blogs. Andy Leonard does a lot of integration services blogs. If you search for “Andy Leonard SSIS”, I forget the name of his company that’s out there. He doesn’t do, I don’t think, analysis services, but he can point you to the right person. Who else does analysis services stuff?
Erik Darling: I would just set up a couple of VMs and try out my use case on it.
Brent Ozar: That’s because you live like a crazy man.
Erik Darling: I know. Because if someone tells me, “Yeah, it’ll work just fine,” then I’m like, “Okay, cool,” but then as soon as I start doing something I’m like, “Wait, I’m doing weird things that this person couldn’t possibly know about or couldn’t have possibly accounted for.” That’s why I always like to, you know … If I get that initial okay, “Is Service Pack 3 cool? Yeah? Okay,” but then when I go to install it, “Wait, I’m doing something weird.” I just like to make double sure that it fits my use case as well as general, “Okay, that works,” but who knows what things people have tested and what things they haven’t tested.
Brent Ozar: Development edition is totally free, too. You can go download it, or if you go to SQLserversupdates.com I’ve got a link where you can go grab it from.
What should I do about high memory usage?
Next up, we have high memory usage. I don’t think he’s referring to marijuana, but, “Since increasing from 256 gigs to 512 gigs our monitoring metrics show a big increase and there are no complaints from end users. We upgraded to SQL 2012 and memory is our largest wait.”
I’m going to ask you a follow-up question there before we tackle that. When you say memory is your largest wait, what do you mean by that? What tool are you using to gather that data, and what’s the specific word on the screen that says, “We’re waiting on memory”? I want to see a little bit more about that. I have a hunch, but I don’t want to push anybody under the bus until I see it.
Erik Darling: Also, if you increase the amount of memory you have, you’re probably going to increase the amount of memory that you’re using.
Brent Ozar: And if users aren’t complaining …
Erik Darling: Because servers use every bit that they can.
Brent Ozar: What do SQL servers use memory for?
Erik Darling: Everything.
Tara Kizer: Everything.
Erik Darling: Everything. What doesn’t it use memory for? That’s a better question.
Tara Kizer: I just wonder if they’re looking at Task Manager and just seeing available megabytes. SQL Server went ahead and used all that memory and that’s normal. SQL is going to use that 512 gigabytes, or whatever you allocated to SQL Server. Make sure you also updated your max server memory setting if you had changed it when you were on the lower amount of memory.
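The max server memory change Tara mentions is a one-liner. The value below is just an illustrative assumption (roughly 460 GB for SQL Server out of 512 GB, leaving headroom for the OS); size yours to your own server:

```sql
-- Values are in megabytes. Check the current setting first:
EXEC sp_configure 'show advanced options', 1;  RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';

-- Then raise it to match the new RAM (471040 MB ~= 460 GB, a placeholder):
EXEC sp_configure 'max server memory (MB)', 471040;  RECONFIGURE;
```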
Automate restores to dev (wat)
Brent Ozar: There’s an interesting one that’s not even really a question, but it’s a good topic. “Automate restore to dev.” Rather than playing Alex Trebek and saying, “Please format your answer in the form of a question,” … That actually is a good- If you guys were going to regularly automate restores from production to dev, what tools would you use and why?
Tara Kizer: You know, I’ve got a really old blog on this topic because I’m not sure that I would use that method now, but there’s a script out there. It grabs the latest backup from a folder, pulls it over, I think it even zips it, it was some kind of old utility back then. Restores it on the server and it’s got everything in there. These days, though, if you’re using a SAN, we do SAN snapshots, and then just pop it over and we’re good to go. That was mostly for CHECKDB process, not necessarily for dev.
Brent Ozar: We have a follow-up, he says, “For prod load testing you should automate restores to the dev.” Yeah, but that’s only the data, you still got to have your workload on it. Unless your workload is restores, in which case …
Tara Kizer: Yeah, we did that manually. We did it about twice a month, refreshing our load test environment, but it would be dependent upon the load test team when they needed those refreshes. I should say it was done manually, but it was all automated, they just had to click a job to do it. Basically the same thing: grab the latest backup, pull it over, do the restore. We were using an availability groups environment, so you set all that up and, magic, it was all there.
Brent Ozar: Once you know where your full backups are going it’s pretty easy to pull the most recent file out of that folder.
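The restore step itself is the easy part. A minimal sketch, with the database name, share path, and file locations all placeholders; finding the newest `.bak` in the folder is the job of your scheduling tool (or a query against `msdb.dbo.backupset` on the source server):

```sql
-- Overwrite the dev copy with the latest production full backup.
-- REPLACE overwrites the existing database; MOVE relocates the files
-- to dev's drive layout.
RESTORE DATABASE [AppDB]
FROM DISK = N'\\backupshare\AppDB\AppDB_latest.bak'
WITH MOVE N'AppDB'     TO N'D:\Data\AppDB.mdf',
     MOVE N'AppDB_log' TO N'L:\Logs\AppDB_log.ldf',
     REPLACE, STATS = 10;
```

As Richie notes next, the data may still need scrubbing after the restore lands.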
Richie Rump: It still may not be a good idea to use production data, because production data still might not reach all of the use cases that you need to test, especially on the outer rim, stuff that shouldn’t happen but probably will at some point. That data is probably not going to be in production. As well, there may be some sensitive data in there that you may have to clean, obfuscate and/or remove. [inaudible] data, although it’s good and you test your queries running against it and stuff like that, it probably doesn’t have everything you need if you’re running adequate testing against it.
Tara Kizer: One of my struggles with the load test environment was, they had all these wonderful scripts in LoadRunner, and they said, “This is a production-like load,” but when I would dig into the details and I’d run a trace in a load test environment and run a trace in Production or Extended Event or whatever tool you’re using, and I would see that the stored procedure execution count would be so much higher in the load test environment than it was in production. I was like, “You know, this is not realistic, this is not what we’re seeing in production.” I don’t have an answer for it, it was a really hard thing for them to figure out. How do you make sure that your load test environment has a real production-like load and not just what we think production looks like?
Brent Ozar: I totally saw that with … I was involved in one project where we did load testing, and the load test had thousands of people concurrently editing their profile, but it was the same user over and over again, so we ran into all kinds of … Yes, you’re going to have blocking problems when the same user is updating their profile across 5000 sessions.
Why does an SSIS package run slower as an Agent job?
I got the next question. “We got an SSIS package that takes way longer to run under a SQL Agent job as opposed to running it in BIDS or Visual Studio. What would I look at to determine why a query runs fast in one place and slow in another?”
Erik Darling: We’re at the 30-minute mark, so we should probably bring it up.
Richie Rump: Yeah, look at that. Amazing.
Tara Kizer: I’m surprised it’s that way, it’s usually the opposite. When you run it through BIDS or Visual Studio, it’s got the GUI, it’s generating all kinds of information going across. I haven’t seen it in that order; usually it’s the Agent job that runs faster. I wonder if maybe the SQL Agent job is pointing to a resource that’s further away than when you run it in BIDS. Maybe there’s a remote share that’s on your subnet but it’s further away for the Agent job. I don’t know, I think that there’s probably some kind of remote resource that we’re talking about here.
Why does SolarWinds say I’m waiting on memory?
Brent Ozar: We got a follow-up on the memory question, and it is exactly as I suspected. There is a popular monitoring tool called SolarWinds, and this person says, “We’re using SolarWinds and memory/CPU is listed as the metric.”
By all means what you want to do next is go talk to SolarWinds, talk to their support team, and ask what you should do about that wait. Maybe more specifically, ask exactly what that wait type means. It’s a very common question that we get amongst SolarWinds users. It does not mean that you’re waiting on what you think it does.
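While you wait on the vendor, you can see what SQL Server itself reports it’s waiting on, rather than the monitoring tool’s rollup category, by querying the wait stats DMV directly. A sketch (the excluded list below is just a sample of benign system waits; real filter lists are much longer):

```sql
-- Top waits since the last service restart (or last stats clear).
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
ORDER BY wait_time_ms DESC;
```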
Erik Darling: Also ask why it’s red.
Richie Rump: Brent wouldn’t know that at all, by the way. He just …
Brent Ozar: I only know because people tell me. They are like, “This big huge bar here, it’s red, that must mean something,” and I’m like, “Well I’m red-green colorblind so it doesn’t mean anything to me.”
How was the PASS Summit?
Next question. Someone says, “How was PASS? I really enjoyed your topic.”
Well, thanks! We should actually each talk about the thing we liked the most out of PASS summit, or what it meant to us. Richie, you’re next to me, or Sammy Sosa is next to me on the webcam, so I’ll go with you.
Richie Rump: Sammy Sosa.
Brent Ozar: Sammy Sosa.
Richie Rump: Wow. PASS was different for me. It was the first time I came with a company and not just by myself, so it was totally a blast because we don’t actually get to hang out very much.
Brent Ozar: That’s true.
Richie Rump: People go in the office and see each other and stuff like that. We see each other maybe 3 times a year and that’s it, so getting a lot of face time with everyone I think was really the highlight of my PASS. Kiss up.
Erik Darling: Dear coworkers …
Richie Rump: Remember, it came from me, Erik.
I think the other part for me was, I thought the keynote was good and the sessions were good, but they’re always good. There are always good SQL Saturdays, because this is our community, but it’s outside of the community that I actually had a blast with some of the parties. Playing board games at 1:30 at night was a highlight for me, so that was a blast with some of the fellow PASS folk. That was great. I had a blast. I had a good time. Not a lot of Cub wins but that’s okay, we’ll fix that tonight.
Brent Ozar: What I got out of that is, Richie didn’t really like the conference, he just likes to hang out afterwards and play board games. We’ll remember that for Richie’s career planning for next year. Tara, you’re up next.
Tara Kizer: I really liked hanging out with my coworkers because we don’t get to see each other very often, so that was nice. Great free time.
Richie Rump: [crosstalk] Cubs sessions.
Tara Kizer: Networking, meeting people that we talked to online for a long time and seeing them in person, that was great. The sessions that I did see were good. One of the challenges I had is that when there was a session I wanted to see, there were like 4 sessions at that time that I wanted to see, and then there were entire times where there was nothing I wanted to see. I just wish it was more spread out. I felt like the people had the same problems because I kept talking to people and they’re like, “Yeah, what are you going to see at 3 o’clock,” or whatever time it was, I’m just using that as an example, “Oh, there’s nothing interesting to me.” It was the same thing over and over again.
Brent Ozar: I almost wish they had tracks of, one 400-level track, one BI, just specific tracks where you know that if you’re interested in one topic there’s always going to be a different session available for it. Erik, how about you? This was your first time at PASS.
Erik Darling: Yeah, it was my first time, so I really enjoyed getting to meet people who I had only interacted with on the old internet for a while. That was really cool. Kalen Delaney, Grant Fritchey, Steve Jones. Joe Sack was a big one, I loved talking about … Actually, my favorite part was probably hanging out with the Microsoft people and talking about all the new stuff coming down the pipe that they’re working on, you know? Mostly because it’s stuff that I complained about in blog posts, so it’s, “Maybe you did read that. Cool. Good for you, you weren’t lying to me at all.”
Brent Ozar: It’s amazing where, if you get to stand around in the Microsoft booth you get access to all this NDA-ish type stuff where they’ll talk through and answer questions and, “Does this really help you? Are you interested in this? Tell us about your use cases.”
Erik Darling: Yeah. All sorts of lions and tigers and bears out there, it was great.
Richie Rump: I saw the booth. I saw it.
Brent Ozar: Just walking past it?
Richie Rump: I was walking, I kind of peeked my head and, “Oh look, there it is.”
Erik Darling: He was on his way to play competition Parcheesi or something.
Richie Rump: Vast. Vast, and I was not the Cave this time.
Erik Darling: Okay.
Brent Ozar: I would have to say that, too. I used to think, “I’ve enough access where I can talk to Microsoft whenever I want, or I can go email them or whatever.” That was my favorite part of the summit this year too, just the number of Microsoft big guns who were there, who were really fun to talk to. It’s just amazing how you can… Slava Oks I’ve never met in person. His laptop was there and the guy was just coding, and I’m like, “Oh my god, that’s the clock hands guy.” I read his blog post. I don’t understand any of his blog posts, but I read them. My god.
Erik Darling: Not because his English is bad, because it’s too good.
Brent Ozar: Yeah. I’m not smart enough. It takes me four reads to get through those.
One of the attendees says, “I love Brent’s shirt ‘Surprise, I’m drunk’.” Yes, that-
Erik Darling: Except that doesn’t [crosstalk] no one surprised.
Brent Ozar: That’s true. I got to start shopping for next year.
Richie Rump: Too bad you didn’t wear that on Tuesday morning, Brent.
Brent Ozar: Man, jeez, yeah. Or Monday evening. Holy cow, so we did … I should say, this is why the team really says that they like hanging out with their coworkers. This is the whole deal. Monday night we did an 8-course tasting menu with wine at MistralKitchen. I think we were the only people there, we closed out the restaurant, and then we ended up with, they were like, “Well, what else do we have? Let’s give you guys a whiskey tasting menu.” Then Erik and I did some real damage to our livers at that point.
Erik Darling: Tuesday morning the T-shirt should’ve said, “Surprise, we’re still drunk.”
Brent Ozar: Oh, god, that was rough.
Erik Darling: Yeah.
Does TDE cause problems with AGs and replication?
Brent Ozar: Let’s see here. The next technical question is, “We’re looking for a geo-replication architecture and we wanted to include Always On Availability Groups across multiple sites, plus replication across multiple domains, plus transparent data encryption.” I think that they’re just putting things into a stew at this point.
Richie Rump: I think this a headhunter trying to get Tara. “We have all these things, perhaps you’ve heard of them.”
Brent Ozar: Sounds like a Bingo card. We need little Bingo cards for this.
“Does TDE cause any issues for such an architecture?” Tara’s making the face.
Tara Kizer: I haven’t used TDE with replication or availability groups, so I don’t know what kind of challenges that might have. I know TDE adds performance overhead.
Brent Ozar: Normally the thing I ask with TDE is just that, “Okay, so what’s the goal here?” If you’re just trying to encrypt data at rest there’s a lot of communication that happens around from place to place where TDE is not involved. When I see replication, that means we’re going across from one SQL server to another. I don’t even know if the distribution database is actually encrypted with TDE, I don’t-
Erik Darling: I don’t know if it is, but I don’t think … That whole time the data is going from New York to Dublin it’s not encrypted.
Brent Ozar: That would be a lot-
Erik Darling: I don’t think.
Brent Ozar: Yeah, so I don’t think TDE would cause a problem. I don’t think TDE is going to cause a problem with replication either, because it’s only on disk that stuff is encrypted, so it shouldn’t cause a problem. I would just say, make sure you know the benefit that you’re getting there, especially with TDE with AGs. If you ever rotate your keys with TDE it’s re-encrypting the entire database. That is a ton of log traffic that’s going across from one replica to another, and while the database is re-encrypting … I haven’t looked at this on 2016, but I know in all the prior versions you could not take any backups while the database was re-encrypting, so that’s kind of sketchy when you talk about large amounts of data there. Not that I don’t like TDE. TDE is fine. There’s nothing wrong with it. There’s lots of things wrong with it, but you know where I’m going with that.
Erik Darling: At least in 2016 you can get backup compression with TDE finally.
Brent Ozar: Elaborate – how does that work?
Erik Darling: You have to specify the max transfer size. You just have to set it above 65536 bytes. I think that’s the right number, 65 whatever. You just set it above that. I blogged about it a while back; if you’re really that interested you can do a search for it. I think it was “TDE backup compression” on ye olde Ozar site.
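Put together, the backup statement looks like this (database name and path are placeholders). On SQL Server 2016, `COMPRESSION` alone silently does nothing for a TDE-encrypted database; the explicit `MAXTRANSFERSIZE` above 65536 is what unlocks it:

```sql
-- Compressed backup of a TDE-encrypted database (SQL Server 2016+).
BACKUP DATABASE [EncryptedDB]
TO DISK = N'\\backupshare\EncryptedDB.bak'
WITH COMPRESSION,
     MAXTRANSFERSIZE = 131072;  -- any value > 65536 bytes enables compression with TDE
```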
Brent Ozar: It’s not enough to just enable backup compression, you also have to change this obscure setting, max transfer size, even if you don’t need to change max transfer size. There was a Microsoft-certified master, Bob Pusateri, who did a session at PASS on backup compression and I’d sat through it at our local user group, he’s here in Chicago.
I’m like, “Backup compression? You just turn it on and off. That’s all there is to it. You want something more than that, you’ll get a third party,” yeah. It was great. He had all kinds of, there were trace flags that would output to the Messages tab and show you the effect of different max transfer size and buffer count settings. It was really impressive. If you went to the PASS summit and you got the recordings, or if you’re going to watch the recordings via streaming, I would actually recommend that session a lot for those of you who have databases of a terabyte or more.
Should I use trace flag 174?
We have one other one. “Is there a benefit to using trace flag 174? This increases the plan cache. We are an OLTP shop with tons of ad-hoc queries.”
Well I have an opinion immediately. [inaudible] fragmentation says …
Erik Darling: That is the last thing that I would do for that. First, if you’re not already doing it, I would switch on optimize for ad hoc workloads. Just get a plan stub so you’re not blowing things out with single-use plans. Then after that I would also probably turn on forced parameterization, so that if there is a chance the parameterized plan can get reused then it’ll do that. Especially if it’s all ad-hoc stuff just getting passed in as strings, not parameterized code, SQL is not going to parameterize it because it just sees, “Oh look, new string, I’m going to do something new for this,” and you end up with a billion copies of the same plan over and over again.
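The two settings Erik mentions, in the order he suggests them. The database name in the second statement is a placeholder, and forced parameterization is worth testing before enabling in production, since it changes plan behavior for every query in that database:

```sql
-- 1. Cache only a small plan stub for first-time ad-hoc queries (instance-wide):
EXEC sp_configure 'show advanced options', 1;  RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;  RECONFIGURE;

-- 2. Force literal values to be parameterized so plans can be reused (per database):
ALTER DATABASE [YourDatabase] SET PARAMETERIZATION FORCED;
```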
Brent Ozar: Especially if you have a lot of ad-hoc queries, adding more memory to the plan cache is exact opposite of what I would want to do. I don’t want to give ad-hoc plans more RAM in the cache, especially if you’re an OLTP shop with lots of ad-hoc queries. Those ad-hoc queries are probably relatively easy to compile, especially if they’re reporting type queries. I’d rather have that memory available to cache data. That’s much more where I’d want to go. We don’t get a whole lot of buttons we need to tweak around the plan cache and its size.
Erik Darling: You might even just free the proc cache every once in a while.
Brent Ozar: Dang! Look at you… What’s your logic there?
Erik Darling: Well, I just don’t care about it. It’s probably a bunch of single-use plans hanging around waiting to get aged out. There’s probably nothing all that important in there. If I take a look at my plan cache, or say if I run sp_BlitzCache and I look at the execution count column, and I just see a bunch of single uses, or 1 or 2 uses, I’m just going to clear it out every once in a while and let things naturally refill.
Brent Ozar: Got it.
Erik Darling: I’d rather give that space over to something that could actually benefit me, and caching single-use plans is not it.
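Before clearing anything, you can measure how much cache those single-use ad-hoc plans are actually burning. A sketch of the check Erik describes:

```sql
-- How many single-use ad-hoc plans are cached, and how much memory they hold.
SELECT COUNT(*) AS single_use_plans,
       SUM(CAST(size_in_bytes AS BIGINT)) / 1024 / 1024 AS cache_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
  AND objtype = 'Adhoc';

-- If that's the bulk of your cache, the blunt instrument is:
-- DBCC FREEPROCCACHE;  -- clears the whole plan cache; heed Brent's warning below
```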
Brent Ozar: Do not try this at home, ladies and gentlemen.
Are there any good SQL Server podcasts?
The last question we’ll take is, “Are there any good podcasts relating to SQL Server?”
Well, of course, there’s this one. Then other ones, I’m going to get my iPhone out from right here just to make sure I don’t miss any, because I subscribe to a bunch of them.
My favorite one is probably SQL Server Radio. Comes out of Israel, with Matan Yungman and Guy Glantser. They’re very funny, they take on all kinds of weird little topics. Somewhere around I think an hour in length, and cover a lot of interesting different technical things. It’s almost always Guy and Matan, that’s SQL Server Radio.
There’s SQL Down Under, with Greg Low out of Australia. The problem is he doesn’t update it very often. I mean, he goes months, sometimes even a year or more, without updating it. The past episodes are really good if the technology applies to you. He has access to Microsoft people, so he gets podcast interviews with Microsoft program managers, for example.
There’s also SQL Data Partners with Carlos Chacon and Steve Stedman, where they take a different approach than the other podcasts. They take a presenter on, and basically talk the presenter through a training presentation. They talk to the presenter about what his material is and basically turn the presentation into an interview, so you learn technical things along the way. Steve Stedman is also the guy behind the Database Corruption Challenge. If you are a production DBA who likes playing around with recovering from corruptions, search for Database Corruption Challenge.
Kendra Little has Dear SQL DBA. Dear SQL DBA is available on YouTube and in podcast form, where Kendra takes a question from the audience and goes really deep, like half an hour or more on the same subject, so she goes really far in depth. As opposed to us, we just ramble a lot.
Richie Rump: And her hair is purple, too, so that’s a bonus.
Erik Darling: Also, it was great meeting Kendra for the first time at PASS.
Brent Ozar: Oh, that’s funny. Hi, how are you?
Did I miss any SQL server podcasts? I think that’s it.
Erik Darling: Nope.
Brent Ozar: I ask because we would be remiss if we didn’t say, Richie’s Away From The Keyboard podcast. Richie, talk to the good folks on the audience about what your podcast is about.
Richie Rump: We talk to technologists, but don’t talk about technology. We’re trying to get inside the head of the individual and talk to them about their likes and dislikes, and their hopes and their dreams, what do they think about the future. It’s a good time. What we want to do is, we want to see what they do when they’re away from the keyboard. We have a new episode coming out today after I clock out of work.
Brent Ozar: If you’re listening to Away From The Keyboard for the first time, the current episode that’s on there, Jesse Liberty, get Part 1 as well as Part 2. It’s a really good interview, really compelling guy.
All right, well thanks everybody for hanging out with us this week. We will see you next week at Office Hours. Adios, everybody.