[Video] Office Hours 2018/7/18 (With Transcriptions)


This week, Brent, Erik, and Tara discuss their SQL Server specializations, CPU Ready Time, making SSIS changes globally rather than one at a time, creating an Always On AG farm, Azure managed instances, going from log shipping to Availability Groups, tracking who executed queries, moving databases into Availability Groups, an exciting update to SQL Server 2016, and Brent’s upcoming move across the country.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.
Leave us a review in iTunes

Office Hours – 7-18-18

 

How much vCPU Ready Time is too much?

Brent Ozar: Greg asks, “How much vCPU Ready Time is too much?”

Erik Darling: Ah, Greg must be talking about a virtual machine.

Brent Ozar: Oh yeah.

Erik Darling: So if I recall correctly about CPU Ready Time, it’s not something that you want to just look at once. It’s something that you need to look at over time. I forget all the details on it. I admit, I’m a bad virtual DBA sometimes. I forget the ins and outs of every single metric like that. Maybe someone else remembers them.

Brent Ozar: I would just start with SOS_SCHEDULER_YIELD – is SQL Server actually waiting on CPU time, and how long is it waiting on CPU time – because if SQL Server’s not waiting on CPU, I don’t really care that much about vCPU Ready Time. It can be other things inside your guest that are waiting for CPU time. Then what I do is, if I’ve got SOS_SCHEDULER_YIELD waits inside a VM, I’m just going to go to the VM admin and go, hey, do you know what CPU Ready Time is? Can you show me that inside of vSphere? Can we look at it together? Oh, you see those huge spikes in the numbers, does this mean anything to you? And they’re like, oh yeah, we overcommitted that host. We have 42 other guests on it. Just having that discussion starts to help, but the only time I would go to them is if I could first prove it with SOS_SCHEDULER_YIELD waits.
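For reference, a rough sketch of checking that wait yourself – sp_BlitzFirst gives you a lot more context, but the raw numbers live in sys.dm_os_wait_stats (cumulative since the last restart or wait-stats clear):

/* How much time has this instance spent waiting on CPU? */
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms / 1000.0 AS wait_time_seconds,
       signal_wait_time_ms / 1000.0 AS signal_wait_seconds
FROM sys.dm_os_wait_stats
WHERE wait_type = N'SOS_SCHEDULER_YIELD';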

 

How do I make a change in a lot of SSIS packages?

Brent Ozar: Steve asks, “Today, I had to change a data type from Unicode string to decimal…” Holy smokes, “In the advanced editor of an Excel source within an SSIS package. Is there a way I can make a bunch of SSIS changes globally rather than one at a time?” Oh, dude, have I got bad news for you. You are in the wrong webcast here.

Tara Kizer: Well, I might be able to give a clue though. So at my last job, we had a lot of SSIS stuff. And when we were moving everything over to Availability Groups and multi-subnet failovers, we were trying to figure out if there was a way to modify all of the SSIS packages programmatically so it wouldn’t require so much touching, manually making changes to each one – you know, because we were having to swap out OLE DB for ADO.NET. Anyway, the dev DBAs had figured out that BIML would provide a lot of help. So I suspect that BIML helps with this. If you’re using SSIS, you probably know about BIML already. So I would look into BIML and then maybe check out Andy Leonard.

Brent Ozar: Yes, up on the screen now, Andy’s Expert SSIS class actually has stuff on BIML inside of it, the Business Intelligence Markup Language. It lets you generate your SSIS packages programmatically. Of course, it’s kind of a start-from-scratch kind of thing, but if you’re the kind of shop that needs to do that all the time – and I hate giving Steve that answer too because Steve’s like a good friend of ours and is with us all the time and, you know…

Erik Darling: We just don’t use that much SSIS ourselves.

Brent Ozar: It’s not that we have any negative feelings towards it. I’m totally fine with it. It’s just one of those things where as you get to become a database professional, you start specializing in things. So I should ask that too. So as we were going through our careers, what did y’all decide to specialize in and what called to you inside of the SQL Server engine? Erik, what called to you as you were learning more about SQL Server?

Erik Darling: Oh, I mean, obviously the performance stuff; anything involving query plans, optimizer weirdness. That just popped right out at me immediately. At first, I thought I was going to be like, oh well, I like ETL. I like getting this crazy file and then doing stuff to it, and that magically goes into SQL Server, and then people can do whatever they want with it. But that really got to a point where, like, I didn’t want to start learning other languages, and every time I opened up SSIS I cried. So when Brent says we don’t really have negative feelings towards SSIS, I’m going to say, like, the first time I dipped my toe in that water, I was like, no. There were things, drawings and arrows, that I just couldn’t wrap my head around. I never invested a lot of time in it. And then as things went on and I got past the ETL hump and had a stable enough process to deal with that, I started working on more reports and stuff, and it would always come down to, like, well, why is the report slow today? And other big questions like that got me more into the reporting end of things.

Brent Ozar: Tara, how about you?

Tara Kizer: Well, I mean, my story is a little unique because I was just placed into a DBA role and I was just happy to get a full-time job in IT right out of college. And I really enjoyed what I was doing, but my focus has always been performance tuning and then, a few years later, HADR topics. Those are the areas that I really enjoy. They aren’t necessarily what got me into it, because I was just placed in the job.

Brent Ozar: Yeah, I’d say to anybody watching, there are so many good careers in SQL Server. There are so many things that you could specialize in. Whatever you choose to specialize in, the deeper you know it, the better off you’ll be.

 

Should I stack instances and run AGs on them?

Brent Ozar: Next up, Joseph asks, “Are there any good suggestions for resources on creating an Always On Availability Group farm; as in lots of instances on two servers running SQL Server 2014 Enterprise?” And I think by instances, he just means a bunch of AG listeners. I don’t think he means a bunch of SQL Servers. I’m an optimist.

Tara Kizer: No, he used that word, instance, and he means instance installations. My answer is stop. Do not keep going with this. With two servers, I would have no more than two instances – one instance per server.

Brent Ozar: And if you mean listeners then we feel differently, but yeah, in terms of instances, we have kind of passionate feelings against them. Oh goodness, Joseph, oh, Joseph follows up with his Availability Group thing and he says his boss is requiring it as multiple instances, not listeners. Oh, Joseph.

Erik Darling: Yeah, that’s just going to crash and burn. I wouldn’t do that at all ever to anyone. I don’t care; no. Like sure, if you want me to technically implement this, yes, but I’m not supporting it because as soon as anyone does anything on this, it is going to be awful. There are a lot of words I left out of that sentence.

Tara Kizer: Virtualize – just create smaller VMs for each instance. I get that you’re using Enterprise Edition, so you just want to stack everything onto it, but just create smaller VMs and split your licenses across them.

Erik Darling: Yeah, like for Enterprise Edition, if you license at the host level, you can spin up as many Enterprise instances as you want on there. So you’re in a much better spot when you do that, both from resource partitioning, you know, licensing, all sorts of stuff. I would much rather go with lots of VMs than two monolithic instances of arguing Availability Groups.

Brent Ozar: Nice flexibility on patching, on outage windows, you can do different ones for each VM instead of trying to patch an entire cluster at once.

Erik Darling: You can put different VMs on different hosts, set up anti-affinity rules so that they aren’t stacked up too much on one host over another. I mean, putting everything on one box is just not going to be a happy place for anyone or anything anywhere – any…

Brent Ozar: Joseph follows up with, “Okay, thanks, you guys.” And I get the feeling it’s a really short response as in, okay, yeah, whatever, I’m screwed. So here’s what you do, Joseph. We’ve played this game so many times with managers. Here’s how you run it: you email help@brentozar.com, copy your boss, and ask it in the most politically polite way possible, and we’ll remember who you are from Office Hours and we’ll give you all of the answers. So we go, well, we would never do that in production; here’s why… Never is probably a strong word, but I would do it if there was a million dollars on the other end of the line, sure, as long as I didn’t have to support it after.

Tara Kizer: And just be aware, our replies may not be politically correct though.

Brent Ozar: What are you, effing stupid? What kind of… And if you don’t like your manager, just send us a separate email first going, I don’t really like this guy, and then we’ll really just lay on the F-bombs and all kinds…

Brent Ozar: I have a folder of GIFs just for bosses.

 

Can I run SSRS on Azure Managed Instances?

Brent Ozar: Bill says, “Do y’all know if Azure managed instances can run reporting services or do you still have to run reporting services in your own Azure VMs?”

Erik Darling: Oh, I don’t know.

Brent Ozar: I do and you can’t.

Erik Darling: Alright, well…

Brent Ozar: SSRS is in this weird lifecycle thing right now where it’s kind of hovering around and they’re like, it has a separate installer and you can do it on Linux or whatever, but they don’t really have a cloud story for it right now. It’s not that it’s a bad product, it’s just kind of in a weird spot right now. They just announced that Power BI – the Power BI Report Server and the stuff up in the cloud – now starts to have SSRS functionality. So I’m like, man, this is just a weird time to be in SSRS.

Erik Darling: Yeah, everything about SSRS sounds like end of life.

Brent Ozar: A friend of mine said – working for a company – SSRS is like the store-brand spaghetti sauce. You get it because it’s cheap and it’s on sale and it works, but it’s not like when family comes over or other people come over you really want to serve them the store-brand spaghetti sauce. You want to put your own tomatoes in there, something like that, to make it taste better.

 

What should I look out for when changing fill factor from 10% to 80%?

Brent Ozar: Mike asks, “When I’m changing the fill factor on a whole lot of indexes from 10% up to 80%, what should we consider, e.g. transaction log size, plan cache, et cetera?”

Tara Kizer: I want to know how you got to 10%. Was that just a mistake and it was supposed to be 90%? We’ve had clients in this situation before. It’s like, do you know that your pages are mostly empty?

Erik Darling: Yeah, that’s tough. I don’t think I’d be terribly worried about the plan cache. I think making that change is far more important performance-wise than preserving the plan cache. Transaction log size is going to be interesting, especially if you have any HA or DR technologies in there like Availability Groups or mirroring or log shipping. I’d also be really mindful of locking. You may not be able to get this all done in one go. If you just have a standalone instance, this might be one of those “we’re going to switch into simple recovery during a maintenance window” situations: flip that stuff over, do your changes, and then flip back to full. That’s the first thing I would start looking at.

Tara Kizer: And if you’re on Enterprise Edition, which a lot of people aren’t, but if you are, use the ONLINE = ON option so that the existing index stays online. That’s if you care for production to continue while this maintenance window is occurring.

Erik Darling: I would want to time that too, on a dev instance, just so I know what I’m in for.

Tara Kizer: And then just make sure you have enough free space in the transaction log for the largest index. So if your largest index is say 100GB, that you have at least 100GB transaction log file size. I’d want to make sure I’d have like 150GB or so, but it’s around 100% of the size of your largest index.

Erik Darling: Yeah, I’d want to think about sorting in tempdb too on that, while we’re talking about options.

Brent Ozar: I like that. I was going to say too, the thing that always gets me is even if I’m not allowed to put them in simple recovery model – I said model, Tara…

Tara Kizer: You did, yay…

Brent Ozar: If they still have to be in full recovery model, then you’ve got to do transaction log backups frequently…

Tara Kizer: Every minute.
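For reference, a rough sketch of the rebuild itself with the options discussed above – the table and index names are hypothetical, and ONLINE = ON assumes Enterprise Edition:

/* One index at a time, sized against your log space and maintenance window. */
ALTER INDEX IX_Example ON dbo.Example
REBUILD WITH (FILLFACTOR = 80, ONLINE = ON, SORT_IN_TEMPDB = ON);

/* Check transaction log space before and after each rebuild. */
DBCC SQLPERF (LOGSPACE);

/* If you flip a standalone instance to simple recovery for the window, remember
   that breaks the log backup chain – take a full or differential backup after
   switching back to full. */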

 

Should we use log shipping or AGs?

Brent Ozar: Bill says, “Okay, forget my last really long question…”

Tara Kizer: I liked his question too. I was going to summarize it because I did read the whole thing.

Brent Ozar: Really? Do you want to summarize it and answer it?

Tara Kizer: Sure, so they were using log shipping and didn’t have a whole lot of issues. The reason was a warm standby; they rarely had a problem. They were running SQL Server 2005, they moved over to Availability Groups on SQL Server 2012 and started having some issues. There were some hiccups in the application when failovers would occur, and he just wants to know, should we move back to where we weren’t having issues? Is there a learning curve to Availability Groups? Sure, there’s a learning curve to Availability Groups, but you can’t really compare the two environments that they had, because log shipping has no automatic failovers. What is probably happening is that, during either the manual or automatic failovers, the databases go through crash recovery. So the databases are not available while this occurs.

The same thing would have happened with log shipping, but it would have been a very manual step, it would have taken longer, and your applications would have had hiccups too. You went from a disaster recovery technology to a high availability technology. You can’t really compare your two environments; you’re not using them the same way. Both of them are susceptible to latency. His information indicated that the secondary gets a lot out of sync with Availability Groups; the same thing must happen with log shipping, you just weren’t monitoring it, I bet. And it would probably be even worse there, because there’s a delay: you’re waiting for the log backup to occur, the copy to occur, the restore to occur, whereas with Availability Groups it’s always happening asynchronously. It’s happening at a much lower level than the file level that log shipping works at.

So log shipping would have had way more latency than Availability Groups, at least from what I’ve seen. As far as learning curves, sure, big learning curve. We do recommend training, at least for DBAs. You don’t have to be a DBA to support this, but clustering knowledge is very, very helpful, and make sure you have a staging environment. I mean, I like Availability Groups; it’s just, what are you trying to achieve? Do you just need a warm standby? Use Availability Groups and have it be asynchronous, and then it kind of acts like log shipping and there are going to be manual steps to do those failovers. But if you’re wanting high availability – you know, to reboot a node, or if you’re patching or whatever, and just a quick downtime for the application – the hiccups are going to be normal. That’s what occurs when you do a failover, and the same thing with log shipping.
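For reference, a minimal query for the kind of monitoring Tara describes – how far behind each Availability Group secondary is right now (queue sizes are reported in KB):

SELECT ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       drs.synchronization_state_desc,
       drs.log_send_queue_size,
       drs.redo_queue_size
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
  ON ar.replica_id = drs.replica_id;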

Erik Darling: And if you feel like you’ve bit off more than you can chew with the availability groups, you can always switch to me and Brent’s favorite setup, which is the failover cluster plus log shipping. Then you get HA from the failover cluster and you get DR with the log shipping and you don’t get all of the headaches of the AG.

Brent Ozar: Classic two-piece suit of high availability and disaster recovery.

 

How can I tell who ran a query?

Brent Ozar: Pablo says, “When I look at the plan cache, is there a way I can tell who executed those queries without starting an audit of some kind?”

Erik Darling: I’m going to answer in Spanish; no. Unfortunately, the plan cache doesn’t keep that kind of information around. You would have to either audit it or run sp_WhoIsActive or sp_BlitzWho – log that to a table, get a monitoring tool, one of those things – to make sure you collect that information when queries run. You could probably do it with extended events, but I would be hesitant to collect that stuff with extended events, just because usually when you set up that kind of session, you’re looking at collecting the SQL text or the query plan or something like that, and collecting that stuff with extended events is just a nasty, angry, just-woke-up-from-hibernation bear. It just wants to treat you like a salmon in a warm stream.

Brent Ozar: The other thing I’d ask is why you want to do it. If you’re doing it for audit purposes – like you want to know who broke something – and you’re looking for inserts, updates, and deletes, you could use transaction log reader utilities. Those are not easy or straightforward; they’re kind of a pain in the rear to use. If you have control over the query, like over the stored procedure, I love putting in a line to log it to a table. Just, like, inside the stored procedure, put in an insert into a table with the person who called it, the time they called it, et cetera.

Erik Darling: Yeah, if it’s write queries, you can do that with a trigger. If it’s just read queries, you’re in a little bit of a different place.
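For reference, a minimal sketch of the logging line Brent describes, assuming a hypothetical dbo.QueryLog table created just for this purpose:

/* Hypothetical audit table – one row per stored procedure call. */
CREATE TABLE dbo.QueryLog
(
    LogId      BIGINT IDENTITY(1,1) PRIMARY KEY,
    ProcName   SYSNAME NULL,
    LoginName  SYSNAME NULL,
    HostName   NVARCHAR(128) NULL,
    ExecutedAt DATETIME2 NOT NULL DEFAULT SYSDATETIME()
);

/* Then, inside each stored procedure you control, add one line near the top: */
INSERT INTO dbo.QueryLog (ProcName, LoginName, HostName)
VALUES (OBJECT_NAME(@@PROCID), SUSER_SNAME(), HOST_NAME());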

 

I’m getting this error when I run a backup…

Brent Ozar: Drew says, “I’m having some issues with SQL Server backups.” And then he goes on for four paragraphs. Okay, hold on a second. Let me see if I can spin this into a new window. Holy cow, Drew. He says, “I have an error…” Oh good, oh – so I have to read this out loud for posterity now because it’s interesting. Drew says, “I’m having some issues with SQL Server backups which are being written to a share on an AWS appliance from an EC2 SQL Server.” This is like a Rube Goldberg OK Go music video going on here. “I’m consistently getting the same error in the SQL Server event log for two of my SQL Server instances saying that they failed to write to a backup device, i.e. an unexpected network error occurred.” Welcome to the cloud. You should expect writes to fail from time to time in the cloud, especially going across a network share.

What I would probably do is – and this is a terrible thing – I would try to find the most reliable file path that you can, whether it’s a Windows server or whether you do it locally, whatever. But then after the writes finish, as soon as they finish, you sync the file somewhere else. And I would also say, just to be paranoid, do your log backups pretty much continuously and get them off the box as quickly as possible, even while you’re doing the fulls, because if your full backup fails two hours in – you’re saying two hours into your backup it fails – you’re going to miss your RPO and RTO by a mile.
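For reference, a rough sketch of that pattern – the database name and paths are hypothetical; the copy to the share happens outside SQL Server (a robocopy step in the Agent job, for example) once the write has finished:

/* Back up to the most reliable local path you have, with checksums so corruption shows up early. */
BACKUP DATABASE YourDatabase
TO DISK = N'D:\Backups\YourDatabase_Full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;

/* Keep log backups running frequently, even while the full is going, and sweep them off the box right away. */
BACKUP LOG YourDatabase
TO DISK = N'D:\Backups\YourDatabase_Log.trn'
WITH COMPRESSION, CHECKSUM;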

 

Can I use log shipping to seed an AG?

Brent Ozar: Next up we have Katie. Katie says, “We’re looking to move some databases into an Availability Group.”

Tara Kizer: I like these people.

Brent Ozar: Right, yeah…

Tara Kizer: It’s good to have no replication questions, all Availability Groups.

Brent Ozar: Don’t jinx us now. All the replication questions happen when… Katie says, “We were hoping we could speed things up by using log shipping and then adding the databases into the Availability group. Does this seem logical or are there better methods?”

Tara Kizer: Yeah, definitely, although I think it just adds too much complexity having to set up yet another tool just to get Availability Groups synced, so I just do it manually. And I just went through this exercise on Monday with a client – a client hired us for consulting hours rather than our critical care – and so I was helping them set up – it was database mirroring, but same thing – starting with the full backups from last night and rolling through all the transaction logs, so I have a handy script to do that. You can use log shipping, but there are scripts out there to roll through the logs. We also have the sp_restore – whatever the name of that stored procedure is in the First Responder Kit – that can help with that.

And as long as your files are named in such a way that, sorted alphabetically in, say, Windows Explorer, they’re in the right order, that script will work. It assumes the Ola Hallengren naming convention, but any convention works, as long as once sorted alphabetically the files are in the right order. I had another script in my toolbox; I was using that, and this client had the weekday name in the file name, so Monday was coming before Sunday, and I had to manually move commands around, which was annoying. But sure, you could use log shipping; I just think it complicates things by having to have another tool in there.
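Whichever tool rolls the logs for you, the underlying pattern is the same – a rough sketch with hypothetical names: restore the full (and a differential if you take them), then every log backup in order, all WITH NORECOVERY so the database can join the Availability Group or mirroring session afterward:

RESTORE DATABASE YourDatabase
FROM DISK = N'\\backupshare\YourDatabase_FULL.bak'
WITH NORECOVERY;

RESTORE DATABASE YourDatabase
FROM DISK = N'\\backupshare\YourDatabase_DIFF.bak'
WITH NORECOVERY;

RESTORE LOG YourDatabase
FROM DISK = N'\\backupshare\YourDatabase_LOG_001.trn'
WITH NORECOVERY;
/* ...repeat for each log backup in sequence, and leave the database in NORECOVERY. */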

Brent Ozar: And if you’re on 2017, something kind of cool came out this week. So I’m going to go to sqlserverupdates.com, which is where we post the most recent updates. No, I lie to you, I said 2017; it’s 2016. So in 2016, there is a brand new improvement in the latest CUs where you can automatically seed replicas. You can change the settings so that as you add databases to an existing Availability Group, SQL Server will automatically do direct seeding to the other replicas. There’s a gotcha that I’m remembering as I say it out loud: it was distributed Availability Groups only, which makes me want to give it the finger, but yeah, whatever.
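For reference, the basic automatic seeding syntax looks like this – the Availability Group and server names are hypothetical, and check the documentation for your exact build, since the behavior Brent mentions has shifted between versions and CUs:

/* On the primary: ask SQL Server to seed this replica automatically. */
ALTER AVAILABILITY GROUP [YourAG]
MODIFY REPLICA ON N'SecondaryServer'
WITH (SEEDING_MODE = AUTOMATIC);

/* On each secondary: allow the Availability Group to create the seeded databases. */
ALTER AVAILABILITY GROUP [YourAG] GRANT CREATE ANY DATABASE;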

Erik Darling: Katie brings up another point. She says she has a couple of terabytes of data and she’s trying to minimize downtime. So when I’ve had to do this in the past, when I did large data migrations, I set up mirroring because it was just, like, an easy one-way ticket. I put it in asynchronous mode up until I was ready to go live, flipped it to synchronous, and then did the failover. I had a pretty easy time with it. I know Tara doesn’t like it because it’s too complicated but…

Tara Kizer: No, database mirroring I will use. Log shipping is just bleurgh… Not a fan. I mean, the situation still exists: you still have to get it to apply the last transaction log, so that’s the challenge, especially when you’re talking about a couple of terabytes of data. Now, if we’re talking about, say, one or two databases, I’m probably going to switch my method and not use full backups, so that I can get the data over quicker via SAN snapshots, restore that and then apply. I mean, you could do that, right, with NORECOVERY and applying a transaction log chain, so I’d probably be looking into other methods if I’m talking about a lot of data. But if it’s a lot of databases, then yeah, that’s not really going to help.

Erik Darling: For the really big ones, I had the easiest time by using a full and a diff and then going over there. And the full and the diff combo usually got me to a close enough LSN where mirroring was like, cool.

Tara Kizer: I had a 7TB database that I would have to mirror, and occasionally there would be issues and we’d have to start it over again. We went from San Diego to Las Vegas for primary and disaster recovery, and the network wasn’t fast enough to transfer the backup file. Even with compression, the transfer took too long and the backup retention wasn’t long enough. By the time I was ready to start applying transaction logs, they were gone. They’d already been deleted from production, and I couldn’t change that for legal reasons. And so, we occasionally had some employees traveling to Las Vegas, and we’d send a tape. We’d fly a tape with them, they would hand deliver it, and that would allow us to catch up. It was funny; flying is faster.

Brent Ozar: Katie follows up that she has four nodes in her Availability Group, so she wanted to do log shipping to get all four of them seeded in quick succession, versus mirroring, which restricts her to only one node; makes sense.

 

How can I do USE in Azure SQL DB?

Brent Ozar: Bill says, “Since you can’t use the USE command in Azure SQL DB, how do you modify scripts that get a list of databases and look at something in each database?” Well, Bill, as soon as you figure that out, let us know, because that’s one of the big reasons why things like sp_Blitz don’t work in Azure SQL DB.

Erik Darling: Sadness…

Brent Ozar: I wish that we had that capability. There are two things that really kill me about Azure SQL DB: the inability to use the USE command and the inability to do three-part naming – you know, as in database.schema.table. Those two things are just so huge. And it’s not that we don’t want to get these scripts to be compatible, we just don’t have clients using Azure SQL DB. So every time someone comes in from the community and they’re like, hey, can you make this work in Azure SQL DB, I’m like, you’re using Azure SQL DB, you say? Why don’t you make them work? Well, that’s too hard. Exactly…

Erik Darling: It’s not any easier for us. We have access to the same stuff you do.

Brent Ozar: We are big fans of getting it to work on managed instances. As soon as managed instances came out, we immediately went through all the scripts to make sure that they work, but that is much easier.

Erik Darling: Yeah, well those just don’t have the limitations. Like you can have the use command and you can cycle through all the databases on a managed instance. If we could do that everywhere, we would just do that everywhere.

Brent Ozar: Piece of cake. Well thanks, everybody, for hanging out with us this week at Office Hours and we will see y’all next week. Adios.

Register here to attend the next live Office Hours podcast taping on Wednesday.

