[Video] Office Hours 2018/4/18 (With Transcriptions) with Special Guest Drew Furgiuele


This week Drew Furgiuele joins Brent and Richie to discuss patching, job interview suggestions, using PowerShell to solve various SQL Server issues, roles and responsibilities of a SQL or a database architect, enabling Query Store on production servers, SARGability, moving SSRS off the main production server, tempdb issues, Availability Groups troubleshooting, Drew’s upcoming PowerShell class, and more.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.
Leave us a review in iTunes

Office Hours – 4-18-18

 

Should I apply patches to all the nodes at the same time?

Brent Ozar: Richard says, “When I’m doing monthly Windows updates on a failover cluster, should I update each node and then wait for a few days, then update the next one, or should I just update everything all at exactly the same time?” Drew, how do you guys deal with patching over there?

Drew Furgiuele: You know, we’ve gone through all kinds of different ways of administering patches, at least where I work. We used to do WSUS and we tried System Center; now we use a third-party solution. But if I can stomach the downtime, I like to do it all at once when I can, just because, god forbid, I do have to fail over. We had an instance one time where some patches got applied to a SQL Server, system-level patches and SQL-level patches, which we like to do separately. But for whatever reason, they all went at the same time, and we noticed the failover wasn’t behaving very well. It was taking a long time to start up, or it wouldn’t start up at all, because the system databases got patched to a higher version, even though it was all SQL Server 2008 R2.

So we got into this weird situation where, like, we were running on two different CU levels and it got really dicey. So if we patch in a window where I know I can take the extra little bit of downtime with the failover, that’s what I like to do.

Brent Ozar: I’m one of those where I’ve been burned enough by updates. I’ll usually try to do half of them one weekend, wait and see how they shake out, and then do the other half the next weekend. But that makes your workload higher, not lower. It depends on how much downtime you’re allowed to take, too.

Drew Furgiuele: Right, it’s like you’ve got to have two windows to do it at that point, unless you do the failover ahead of time and depending on how tolerant your apps or your business processes are to that – I mean, I think it really depends on the person.

 

I have an interview tomorrow. Any suggestions?

Brent Ozar: Niraj has an interesting question and it’s awfully open-ended.  Niraj says, “I have an in-person interview tomorrow morning; any suggestions?”

Richie Rump: Show up.

Drew Furgiuele: Yeah, if you’re not early, you’re late is what I would say.

Brent Ozar: Brush your teeth. Don’t wear a suit unless you’re interviewing at a financial company. That advice is dead and gone. I think that’s…

Richie Rump: Good call. I mean, talk to the HR person and say, well what should I wear? What’s the proper attire for your environment?

Brent Ozar: It was good that Richie asked us on ours because I said a leather clown suit and he said formal or informal.

Richie Rump: I’m like, what color…

Drew Furgiuele: I think it also really helps to know where you’re applying – what does the business do, what market are they in – just so you have some base understanding of what you’re getting into.

Brent Ozar: And you can Google for – usually, if you Google for their HR department, they’ll have pictures – for larger companies, they have pictures of the office and you can see what people wear. You can see what kind of environment it is.

 

How should I keep users synced between AG replicas?

Brent Ozar: Next up, Augusto asks, “I would like to know the best practice…” You came to the wrong room, “In order to keep users in an Always On environment synced between the principal and any replicas.”

Drew Furgiuele: Well, there are lots of different ways to do that, right? You can use good old-fashioned T-SQL scripting, or you can set up automation through, of course, PowerShell: look at the user principals that exist on both servers, do a compare, and find out what’s missing. I don’t really have a lot of AGs that I administer, but in all that I’ve done with them, that’s how I’ve approached it: a basic compare of what’s there. And with PowerShell, if you’re able to create custom objects and read in those permissions, you can do things like Compare-Object and see where the differences lie.

Brent Ozar: Instead of dumping them all and redoing them every time.
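For anyone who wants to try the compare approach Drew describes, here’s a minimal PowerShell sketch. It assumes the SqlServer module is installed, the instance names are placeholders, and it only compares login names – not SIDs, passwords, or server roles.

```powershell
# Compare SQL logins between two replicas and report what's missing on the secondary.
# A rough sketch of the Compare-Object approach; instance names are placeholders.
Import-Module SqlServer

$primary   = 'SQLPRIMARY'     # hypothetical primary replica
$secondary = 'SQLSECONDARY'   # hypothetical secondary replica

# Grab logins from both sides, skipping the built-in ## certificate logins
$primaryLogins   = Get-SqlLogin -ServerInstance $primary   | Where-Object { $_.Name -notlike '##*' }
$secondaryLogins = Get-SqlLogin -ServerInstance $secondary | Where-Object { $_.Name -notlike '##*' }

# SideIndicator '<=' means the login exists on the primary but not the secondary
Compare-Object -ReferenceObject $primaryLogins.Name -DifferenceObject $secondaryLogins.Name |
    Where-Object SideIndicator -eq '<=' |
    ForEach-Object { Write-Output "Missing on $($secondary): $($_.InputObject)" }
```

If you’d rather not roll your own, the dbatools project also has a Copy-DbaLogin cmdlet that copies logins, including SIDs and password hashes, between instances.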

 

I’m getting 15-second I/O warning errors…

Brent Ozar: Oh, Pablo says, “I’m getting I/O requests that take longer than 15 seconds.” Notorious – I keep saying I’m going to write a blog post about that. His principal wait in SQL Server is WRITELOG, waiting to write to the transaction log, and his applications are getting timeout expired errors. “Can PowerShell help me look for disk issues?”

Drew Furgiuele: You know, it’s funny…

Brent Ozar: Not really…

Drew Furgiuele: Not really, like, there’s some code out there that can help you look at different wait statistics, but it’s not going to be able to give you a smoking gun, just like any other kind of wait statistics query might do. You can use that to aggregate this stuff over time so you can see if it’s time-based or if you want to collect it and look at it over historical periods of time if you don’t have access to a fancy expensive monitoring tool.

But as far as helping alleviate that stuff, I’m going to leave it to the real experts on this webcast.

Brent Ozar: Unfortunately, they didn’t show up today… I think you nailed it: you know that your top wait is WRITELOG and you’re getting 15-second I/O warnings. I don’t know how much more conclusive you can be from the SQL Server side. Now is when I would turn around and play with CrystalDiskMark, a utility that helps you quickly run a speed test. You run CrystalDiskMark against your laptop, against the SQL Server’s C drive, and against where the data and log files live, and it gives you a quick sanity check: does my storage suck completely, or is it vaguely in line with a consumer-grade laptop?

Most of the time, when I see 15-second I/O warning errors, if I plug a USB thumb drive into my laptop I can get faster throughput from it than the server is getting from its storage. It really happens a lot under virtualization.
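If you do want to gather evidence from the SQL Server side with PowerShell, one low-tech option along the lines Drew mentions is sampling sys.dm_io_virtual_file_stats on a schedule and keeping the output. A rough sketch, assuming the SqlServer module; the instance name and CSV path are placeholders, and the DMV numbers are cumulative since startup, so you compare samples over time.

```powershell
# Sample per-file read/write latency from sys.dm_io_virtual_file_stats and append it
# to a CSV for trending. Instance name and output path are placeholders.
Import-Module SqlServer

$query = @"
SELECT  DB_NAME(vfs.database_id)                              AS database_name,
        mf.physical_name,
        vfs.num_of_reads,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_latency_ms,
        vfs.num_of_writes,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_latency_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
    ON  mf.database_id = vfs.database_id
    AND mf.file_id     = vfs.file_id;
"@

Invoke-Sqlcmd -ServerInstance 'SQLPROD01' -Query $query |
    Select-Object @{ Name = 'sample_time'; Expression = { Get-Date } }, * |
    Export-Csv -Path 'C:\Temp\file_latency.csv' -Append -NoTypeInformation
```

Drop that in a scheduled task every few minutes and you at least have a history to line up against the 15-second warnings in the error log.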

 

How does Drew use PowerShell to manage storage?

Brent Ozar: I’m going to ask a related question though because when we talk about PowerShell and storage – Drew, I’ve heard you talk about what you do with your snapshots in Pure Storage. So talk a little bit about how you use PowerShell with that.

Drew Furgiuele: Yeah, so where I work, we have a ton of different non-production environments and one thing we got in the habit of doing was we have a set schedule where we refresh our non-production environments and before we had access to fancy Pure Storage – free plug for them because they’re great – we were doing traditional restores and it was like, alright, it’s Friday afternoon, we’re going to shut down int and dev and con or whatever. We’re going to do the refreshes, they’re going to run for however many hours to restore those databases and then hopefully when we come in Monday morning, everything worked and we’ll reapply permissions.

So when we switched to the Pure Storage array – they have the concept of volume-based snapshots, and you can set up automation around that because it has a complete PowerShell API tied to it. You can connect to it and set up scripting to say: take a new snap, present it to this host, mount it on this disk, then attach these databases, apply permissions, and run any other post scripts you need to prep your databases for non-production.

And that’s what we do now. Our restores went from taking an entire weekend, worst case, to basic on-demand refreshes that take less than a minute, even for multi-terabyte databases across multiple servers. It’s fantastic. And I know that other storage providers also have that, but I’ve seen Pure do it and they just do it right – big fan.

Brent Ozar: And I think – you kind of have to be a new vendor. You have to come up with all new stuff and go, alright I’m going to have new approaches as to how I’m going to do this and embrace things like PowerShell.
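To make the refresh workflow Drew describes concrete, here’s its rough shape in PowerShell. The storage calls (New-ArraySnapshot, Mount-SnapshotVolume) are hypothetical placeholders, not the actual Pure Storage SDK cmdlets; swap in whatever your array vendor’s module provides. The instance, database, and paths are placeholders too.

```powershell
# Sketch of a snapshot-based non-production refresh: detach the stale copy, present a
# fresh snapshot of the production volume, re-attach, then re-apply permissions.
Import-Module SqlServer

$target = 'SQLDEV01'       # hypothetical non-production instance
$db     = 'BigDatabase'    # hypothetical database name

# 1. Kick everyone out and detach the stale copy
Invoke-Sqlcmd -ServerInstance $target -Query @"
ALTER DATABASE [$db] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC master.dbo.sp_detach_db @dbname = N'$db';
"@

# 2. Take a new snap of the production volume and mount it on the dev host
#    (placeholder functions; substitute your storage vendor's PowerShell cmdlets)
$snap = New-ArraySnapshot -SourceVolume 'prod-sql-data'
Mount-SnapshotVolume -Snapshot $snap -TargetHost $target -DriveLetter 'S'

# 3. Attach the database files from the freshly mounted volume
Invoke-Sqlcmd -ServerInstance $target -Query @"
CREATE DATABASE [$db]
ON (FILENAME = N'S:\Data\$db.mdf'), (FILENAME = N'S:\Logs\${db}_log.ldf')
FOR ATTACH;
"@

# 4. Re-apply non-production permissions and any other post-refresh scripts
Invoke-Sqlcmd -ServerInstance $target -InputFile 'C:\Scripts\apply-dev-permissions.sql'
```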

 

What does a database architect do?

Brent Ozar: Sree asks – and this is a good one for Richie – “What are the roles and responsibilities of a SQL architect and/or a database architect?”

Richie Rump: Ooh, well we really make a lot of coffee, we, I don’t know, have a meeting or two and we go home. I mean, what else do we do?

Brent Ozar: I wouldn’t say anything about documentation.

Richie Rump: What the hell is that? Mainly, database architects talk to the business a lot. They know a lot about the business and how it works, because they need to understand the intricacies of how everything fits together so they can properly model it in the database. So that is, in essence, what a data architect does: understands the business well enough to model it into a database that’s fast, efficient, and does what the business needs it to do. In the days of NoSQL now, that’s changed a little bit, because with a lot of these NoSQL databases you don’t need models anymore; things just kind of get thrown in there.

So you can kind of move very willy-nilly. That has its own set of challenges, but mostly that’s what we do: we understand the gotchas of a database engine and what kind of modeling we need to do. I remember one project where I had one of my guys help with a lot of the architecture, and he did a fantastic job. He really knew his stuff; we pulled out database patterns and we did some other stuff and [thaw fluid] in there, and the database was pristine. It was great – except when we threw the ORM on top of it. It crapped out a lot and made everything super slow, so we had to change the architecture of the database so the ORM would be faster and just work right.

So a lot of times, we do our job as database architects really, really well – sometimes too well, where the application itself doesn’t keep up with what we’re trying to do. So there’s a balance, a give and take, to everything we do as database architects. What if we do this? Well, there’s a tradeoff over there – there’s always that. We’re trying to find that middle ground. I mean, I guess I could go for another hour; are you okay with that? Start pulling books out and…

Brent Ozar: Well I was going to say – like for example, I know we’ve had customers come to us and say, “You have a data architect in the form of Richie; can you just come in and architect a database for us, like tell us what we need?” I’m like, well there’s so much back and forth conversation with the business and understanding what the application is. You want that guy on your team for good, you know. If you bring in a consultant for it, you’re basically writing them checks for the rest of your life because they’re going to build up all this knowledge about your business and you’re going to wish you had that in-house instead of being held hostage by an outsider.

Richie Rump: You mentioned documentation too, and that’s a big part of it as well. From what I understand, a lot of database architects, all they do is come out with the ERD and that’s it. So you have a data model and that’s all you get. But I’ve always found that the best data modelers also do a lot of business documentation, whether that’s business process flows or whatever it is. So all of that is written down, and then you can actually say, hey, here’s my business flow and here’s how it maps into my data flows, and it all kind of goes that way.

Drew Furgiuele: Database architect doesn’t really get thrown around a lot where I work, but we have a lot of people who refer to themselves as data modelers. I guess it kind of works in the same vein. And ironically, the same person that helped model a lot of the stuff that we do is also the person who helped me embrace ORMs. So John Kruger, if you’re listening, you can take credit for that.

Brent Ozar: And that always comes up on the blog. I’ve been thinking about writing a blog post on that for a while too. There are people who bash ORMs all the time. There was a blog post on Hacker News recently that Richie and I were looking at, and people came out in droves, like, ORMs suck, everyone should know exactly how to write T-SQL. I’m like, we don’t have time, you know; you’ve got to ship products, you’ve got to ship features.

Richie Rump: And the other thing is: who’s writing the SQL? When I was a manager, that was my big question – who’s writing the SQL, and do the people who will be writing it actually know SQL? Whether that’s a DBA who came up through administration and doesn’t have a strong SQL language background, or developers who don’t have a strong database background. So who does that leave, really? Who’s writing the SQL that’s tight and gets exactly what it needs?

How many times, Brent, have we gone to clients and they’re missing WHERE clauses? Just the base-level, simple things, and they’re missing all of that. ORMs prevent a lot of that. Now, instead of having to understand SQL, I just have to understand my language and I can focus more on the business stuff. So we’re talking about benefits and tradeoffs there. ORMs aren’t inherently bad; it’s how they’re used that’s bad.

Drew Furgiuele: I mean, it’s like nuclear power; it can be used for good and evil and you don’t want to get any on you.

Brent Ozar: I thought for sure where Richie was going to go was it depends on the quality of the person writing the SQL because I’ve had several check-ins into our own products where I’ve written queries for the wrong language. I’m like, oh this is perfect T-SQL – it doesn’t even work in Postgres.

Drew Furgiuele: I have a whiteboard where I keep track of how many times you break the build and announce it via Twitter.

Richie Rump: Well we have the opposite; how many times Brent didn’t break the build and that’s when it’s announced.

Brent Ozar: I think one. I think I’ve had one build or two builds go through.

Richie Rump: Full disclosure, it really is not Brent’s fault. It’s not…

 

What’s the impact of Query Store on performance?

Brent Ozar: Turge asks, “Do y’all have any experience with the performance impact of enabling Query Store on production servers?” Drew, have y’all turned on Query Store on yours?

Drew Furgiuele: You know, we haven’t – it’s not something we’ve fully embraced yet, because we have such a mish-mash of SQL versions. We have a couple of 2016 instances, and we’re thinking about upgrading some to 2017, but we don’t have any kind of plan to do that right now. I’d like to, but it’s not something we’ve turned on. I know there are a lot of people who have done good case studies about what you should or shouldn’t do. I’m not one of them, but I’d love to turn it on and watch plans evolve over time, because we do have some pretty in-depth stored procedures that run, and I would love to be able to track that stuff.

Brent Ozar: I’ve seen – what Microsoft initially touted was something like a 1% to 3% overhead. And the way I always feel about it is it’s a lot like a car’s dashboard. If you want to know how fast you’re going, you need a speedometer. And the weight of a dashboard and a speedometer and all that slows your car down a little. But if you don’t have a dashboard, you don’t really know how fast you’re going. So people are just like, it feels fast or it feels slow. So if you need to know which queries are performing better or worse, then you want to turn this on. I would just say, make sure that you’re on a recent cumulative update of either 2016 or 2017 because they fixed a lot of bugs in Query Store over the last three, four, five months.

Drew Furgiuele: And part of that too is with the monitoring tools we have, we can actually see degradation in similar statements over time. So it seemed to me like it’d be kind of redundant, but at the same time, I know what it has and what it offers.

Brent Ozar: Y’all use SQL Sentry or what do you use?

Drew Furgiuele: Yeah, we’re Sentry One customers and they have the ability to kind of see different statements. And they put on a nice little chart about how they’re performing over time.

Brent Ozar: And it works with any supported SQL Server version. It’s not like it’s just 2016 or newer.

Drew Furgiuele: Right.

Brent Ozar: That was my feeling with Query Store: it’s wonderful for people who don’t have a monitoring tool already and who are also on 2016 or 2017. If you fall into both of those categories, great, but it’s a smaller audience. Most of the people who needed that kind of monitoring already got it with third-party tools.
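If you do decide to try it, turning Query Store on is one ALTER DATABASE per database. A minimal sketch via Invoke-Sqlcmd; the instance and database names are placeholders, and the option values are just common starting points rather than a recommendation.

```powershell
# Enable Query Store on one database with a modest size cap and cleanup policy.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance 'SQLPROD01' -Query @"
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
ALTER DATABASE [YourDatabase] SET QUERY_STORE
(
    OPERATION_MODE      = READ_WRITE,
    MAX_STORAGE_SIZE_MB = 1024,
    QUERY_CAPTURE_MODE  = AUTO,
    CLEANUP_POLICY      = (STALE_QUERY_THRESHOLD_DAYS = 30)
);
"@
```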

 

How should I set a variable to the beginning of a day?

Brent Ozar: Let’s see – the last one in the queue – if anybody has other questions, get them in now; otherwise we’ll close the webcast after this one and bid you adieu. Colin says, “There are 100 ways to set a datetime variable to the beginning of the day…” Oh, I love that. He has a method in there using CONVERT, flipping it over into the date data type. “This way seems super simple and straightforward to me. What are the downsides?” The big one on that one is SARGability.

SQL Server may look at that function and go, “I have no idea what the hell is going to come out of this.” It also depends on whether you’re seeking on it or just rendering it as report output – all kinds of things. If I’m going to use it as part of a search, like in a WHERE clause, I’m usually going to set the value outside, in my app somewhere, and pass it in, so that when SQL Server builds the execution plan, it sees a parameter with the exact date you’re looking for. Otherwise, if you set a local variable inside your code, you can end up using the density vector, which will give you wacko estimates for how many rows SQL Server is going to get back.
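To illustrate the “set it outside in my app” route: compute the start of the day in the calling code and pass it as a real parameter, so the plan is built against an actual value rather than a local variable’s density vector. A sketch using plain ADO.NET from PowerShell; the connection string, table, and column names are placeholders.

```powershell
# Pass the start-of-day value in as a parameter instead of computing it inside the query.
$startOfDay = (Get-Date).Date    # midnight today, computed in the app layer

$conn = New-Object System.Data.SqlClient.SqlConnection 'Server=SQLPROD01;Database=YourDatabase;Integrated Security=True'
$cmd  = $conn.CreateCommand()
$cmd.CommandText = 'SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= @StartOfDay;'
$null = $cmd.Parameters.Add('@StartOfDay', [System.Data.SqlDbType]::DateTime2)
$cmd.Parameters['@StartOfDay'].Value = $startOfDay

$conn.Open()
$rowCount = $cmd.ExecuteScalar()
$conn.Close()
$rowCount
```

For what it’s worth, Colin’s CONVERT trick is fine on the right-hand side of the comparison (converting GETDATE, not the column); the SARGability trouble really starts when the function gets wrapped around the column itself.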

 

How hard is it to move SSRS?

Brent Ozar: Let’s see – anon says, “How difficult is it to move SSRS off your main production server? Assume I have no help, no third-party tools and can have little or no downtime.” Damn, wow.

Drew Furgiuele: All I know about SSRS is that you want to back up your keys. That’s all I know. That’s the only way you can move stuff: back up your keys from SSRS. I think as long as you do that, you can pretty much do whatever you want.

Brent Ozar: I would say, if anything, put it behind a load balancer too. If you have an F5 or any kind of load balancing tool, start by pointing everyone at the load balancer and then put your existing 2008 server behind it, so that whenever it comes time for the cutover, you can do it with little to no work. Plus later, you can add multiple reporting server VMs if you want high availability too.

Drew Furgiuele: Yeah, and the bigger question is – I don’t think you lose anything going from 2008 R2 to 2014, but I have no idea. That would be the other thing; what’s going to break?

Brent Ozar: And I’d also ask what you’re going to gain too. SSRS isn’t terribly resource intensive. It doesn’t use that much memory. It uses some but it’s not as bad as, like, integration services or analysis services, which are total pigs. Not that they’re bad; they’re great applications. I’m just saying they use a lot of resources. But make sure that there’s a bang for the buck on the other side too, just that you’re going to get something out of it.

Richie Rump: Yeah, I mean, if you’re going to use new hardware on it, you might as well move the main production server over there rather than SSRS. But if you’re moving to like a small VM or something like that, no, don’t do that; that’s no good.

Drew Furgiuele: And ultimately, it’s where the queries run, right? It doesn’t matter where SSRS runs; it’s going to connect to an instance and run a big SELECT *. It doesn’t matter where that happens.

Brent Ozar: Anon adds on, “Our team’s split and the reporting team is now in a different organization.” Ah politics. In that case, it’s their problem. You go, you stand up the server whenever you want and I’ll give you the data.

Brent Ozar: Marci says, “We have always run Reporting Services and Analysis Services on their own servers, and over the years we’ve gone from 2008 to 2014 without anything breaking, and the Report Builder version always improved to the latest, so just stand it up and see.”

Brent Ozar: Brian also adds, “I just restored my Reporting Services databases to the new instance and then fixed the data source pointers prior to going live.” The keys are the other thing to keep in mind there: any kind of encryption keys on SSRS.
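For the key piece specifically, the rskeymgmt utility that ships with Reporting Services handles the backup and restore. A hedged sketch, run on the SSRS servers themselves, with placeholder path and password:

```powershell
# On the old SSRS server: extract (back up) the encryption key to a file.
& rskeymgmt.exe -e -f 'C:\Backup\ssrs-key.snk' -p 'StrongPasswordHere'   # placeholder path and password

# On the new SSRS server, after restoring the ReportServer and ReportServerTempDB
# databases and pointing the new instance at them: apply the backed-up key.
& rskeymgmt.exe -a -f 'C:\Backup\ssrs-key.snk' -p 'StrongPasswordHere'
```

rskeymgmt usually isn’t on the PATH, so you may need to run it from the Tools\Binn folder of the SQL Server version that installed Reporting Services.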

 

How many TempDB files should we use on a VM?

Brent Ozar: Adrian says, “We use VMware and we’re trying to figure out whether we should use one data file per socket for tempdb or one data file per core – like, how many tempdb files should I have?” And he says at the end, “I have super high latency.” So I think he’s hoping more files are going to help him with latency. Drew, when y’all set up new tempdb data files, how many do you use per server?

Drew Furgiuele: Well we’re a VMware shop and I typically go, if my server has eight CPUs or eight VCPUs or less, I tend to go eight files. If it goes any bigger than that, I kind of see diminishing returns, but that’s just my experience. I haven’t really seen any tempdb contention over that. Now granted, like, we don’t have any super powerful VM SQL Servers, although we did just get some new hardware. I think our biggest virtualized CPU in a cluster is probably eight VCPUs per. But even then, if it was bigger, and maybe it is a little bit bigger than that, I still think I only went eight. And I think I saw that advice somewhere.

Brent Ozar: Yeah, that’s in our setup checklist too. I’m fine with that – some people will tell you four, some people will tell you eight; either of those numbers is fine. And in terms of more files helping you with latency, that’s probably not going to happen. I would go check with CrystalDiskMark. Just go run CrystalDiskMark for a quick rule-of-thumb test against your I/O, then run it against your desktop and compare the two. Then prepare to open sadtrombone.com.
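If you do end up adding files, it’s a few ALTER DATABASE statements against tempdb. A minimal sketch via Invoke-Sqlcmd, assuming an instance that currently has a single data file; the instance name, drive path, and sizes are placeholders, and every file is pre-sized the same so the round-robin allocation stays even.

```powershell
# Add tempdb data files, all pre-sized equally. New files are created immediately;
# tempdb is recreated from these settings at every instance restart.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance 'SQLPROD01' -Query @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE    (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE    (NAME = tempdev3, FILENAME = N'T:\TempDB\tempdev3.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE    (NAME = tempdev4, FILENAME = N'T:\TempDB\tempdev4.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
"@
```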

 

Where are Drew’s scripts for his GroupBy presentation?

Brent Ozar: Teschal says, “Drew, do you have the scripts that you presented on GroupBy? I watched but I couldn’t find the scripts.”

Drew Furgiuele: Yeah, so, man, it’s been a while since I presented on GroupBy, but it’s all on my GitHub. It’s a ridiculous last name, so the easiest way to get there (it has my last name in the URL) is to head over to my website, port1433.com. Over on the right-hand side there’s a little GitHub icon, a little Octocat. Those icons could admittedly be a little bigger, but I’m not the designer. If you click on that guy, it will take you over to my repositories, and like I said, any publicly available code that I’ve ever shared is on there. Richie, you should really upgrade IE…

Rocky, Drew’s partner in crime

Brent Ozar: And who is this adorable little creature here with the bowtie on?

Drew Furgiuele: That’s Rocky, who is actually behaving himself right now. Usually, he’s down here all up in my business, but that’s Rocky. He’s my little monster.

Richie Rump: Are you using DotNetNuke for your website?

Drew Furgiuele: Nah, WordPress.

Richie Rump: I’m trying to find that guy who’s always using DotNetNuke. I’ll still look.

 

 

SadTrombone.com actually exists

Brent Ozar: And then Michael Tilley says, “Holy crap, there actually is a sadtrombone.com.” Of course, there is.

Drew Furgiuele: There’s also the auto-play version, which is the one you need to send people.

Brent Ozar: Yes, why does that not work? Google Chrome, it’s trying to plug – I hate Internet Explorer so badly. So anyway, any-who, back over to the PowerPoint; put that back up and go back to the questions list. That teaches me for trying to surf the web on a crappy version of Internet Explorer.

 

Why isn’t my AG working correctly?

Brent Ozar: Mark asks, “I’m adding a database to an existing Availability Group. For some reason, the secondary files are not going to the same drive and location as the primary – they’re going to a default folder. Do you have any idea why this is happening?” Drew, do y’all use Availability Groups?

Drew Furgiuele: We don’t, because we don’t use Enterprise Edition. I know we can use basic AGs now, but we don’t use them. We’re a traditional failover cluster setup right now, although we’re getting to the point where we’re exploring a lot of new DR stuff for this coming year, and we’re going to start moving to some of that with remotely available stuff. But that’s a weird question. And forgive me, but is that – I think you said – the default directory from when you installed SQL, right?

Brent Ozar: Yeah, I’d bet money that you’re restoring the databases without using the WITH MOVE commands to relocate those files – restoring manually, as opposed to going through the GUI or using direct seeding, which will just automatically do it for you. If you do manual restores, you have to specify where you want the data and log files to go. And the folder structure needs to be identical across all of your replicas; otherwise, whenever you add a data file or log file on the primary, it will break if that drive letter or folder doesn’t exist over on the secondaries, and your replication will stop working.

Mark follows up with, “We’re using the GUI to do it automatically and the folder structure is identical.” In that case, I would actually post it – with as many screenshots as you can – on dba.stackexchange.com, because you might have found a bug. I don’t think you did, because I’ve done this a bunch of times myself, but it’s possible, and other people will see it when you post it over on dba.stackexchange.com.

Update: in this week’s 2017 Cumulative Update 6, Microsoft released a patch that might solve this problem, too.
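If it does turn out to be the manual-restore path, the relocation happens with WITH MOVE, or -RelocateFile if you’re scripting it with the SqlServer module. A sketch with placeholder names, paths, and backup location; the logical file names have to match what’s in the backup, and the secondary needs the same drive and folder layout as the primary.

```powershell
# Restore a database onto a secondary replica with relocated files, leave it in
# NORECOVERY, then join it to the Availability Group. All names and paths are placeholders.
Import-Module SqlServer

$dataMove = New-Object Microsoft.SqlServer.Management.Smo.RelocateFile('MyDB_Data', 'D:\Data\MyDB.mdf')
$logMove  = New-Object Microsoft.SqlServer.Management.Smo.RelocateFile('MyDB_Log',  'L:\Logs\MyDB_log.ldf')

Restore-SqlDatabase -ServerInstance 'SQLSECONDARY' -Database 'MyDB' `
    -BackupFile '\\backupshare\MyDB_full.bak' `
    -RelocateFile @($dataMove, $logMove) `
    -NoRecovery

# Apply any log backups with -RestoreAction Log and -NoRecovery, then join the AG:
Invoke-Sqlcmd -ServerInstance 'SQLSECONDARY' -Query 'ALTER DATABASE [MyDB] SET HADR AVAILABILITY GROUP = [MyAG];'
```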

 

Drew – what’s your PowerShell class about?

Brent Ozar: Well, thanks, everybody, for hanging out with us this week. Drew, before we go, we should mention your upcoming training class – tell us a little bit about what you’re teaching and who the target audience is.

Drew Furgiuele: Yeah, so this will be the third go-round of this class. It’s PowerShell for DBAs, hosted by the fine folks at Brent Ozar Unlimited, and it’s targeted at people who want to learn PowerShell and how it can be used to administer things like SQL Server. It’s a two-day class. By signing up and joining, you get access to a VM for both days, and we start out at the base level: we talk about how to do simple things like declaring variables in PowerShell and doing basic scripting actions, and then on day two we move into how you connect to SQL Server, how you can script things out, and how you can automate things. It’s really a lot of fun to present, and people seem to like it, so hopefully, if you have a desire to learn that stuff, you can now learn it with me.

Brent Ozar: Absolutely. Marci says, “Do you have a link for the class?” Yes, if you go to brentozar.com and right on the homepage there’s a list of classes. You can see Drew’s shiny happy face smiling. When’s the next one, like June I think?

Drew Furgiuele: It’s like the last week of June, I think.

Brent Ozar: Sounds right. Alright, well thanks everybody for hanging out with us this week and we will see y’all next week on Office Hours. Adios, everybody.
