[Video] Office Hours 2016/08/03 (With Transcriptions)

This week, Richie, Erik, and Tara discuss availability groups, sharding databases, using DNS entries to mask DB server names, and Erik’s passion for internet buttons.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Office Hours Webcast – 2016-08-03

 

How can you create test data?

Erik Darling: Okay, there’s a question from Joseph. “Are there any products or tools that you like to use to create volume data for performance tuning queries? I’ve tried a couple but I’m not satisfied. I’d like something I can kind of point at a table and it’s going to then figure out what dependencies exist and create appropriate data.”

Richie Rump: Sounds like magic. Wizardry.

Erik Darling: So I’m not sure which tools you’ve already tried. The only one that I’ve messed with, and it was sort of a beta version, was a Redgate tool that created a bunch of data. I was okay with that. If you don’t do that, if that doesn’t meet your needs, then you might have to home grow something because I’m not sure of anything that you can just point at a table and create a bunch of dependencies like that. Anyone else? You guys ever do anything?

Tara Kizer: No, I’ve always worked at large organizations where we have a performance team and they use tools like LoadRunner. They have tools. It’s all magic to me.

Richie Rump: Yeah.

Erik Darling: Like, “How did you do that?”

Tara Kizer: Yeah, I just come in after the database is ready and we start a load test.

Erik Darling: There you go.

[Richie starts speaking]

Erik Darling: Oh, go ahead, Richie.

Richie Rump: No, it’s just a lot of the perfs that I’ve done is actually, “Hey, production is slow,” right? Everyone is going crazy.

Erik Darling: That one’s fast.

Richie Rump: That’s right.

Erik Darling: Now it’s slow. Now it’s fast.

 

How do you set up distributed availability groups?

Erik Darling: Tara, this one is right up your alley. Tommy asks, “Do you have any experience with setting up distributed availability groups? What are the requirements and best way to use them?”

Tara Kizer: What is meant by distributed though? I’m not familiar with that.

Erik Darling: Oh, I believe he means… oh, I think I read something about this. Give me a second because I have to take my internet buttons off my fingers. So SQL Server 2016, distributed availability groups… All right, I will paste that link over to you if you want to give that a look. I'm going to paste this into chat since I don't think any of us have a particularly good answer on that just yet, because it's maybe some new magic that we haven't played much with. Tara would have the best chance of knowing some magic on that one.

Erik Darling: Let’s see, Joseph says, “There was some great music in the 80s, it just gets lost under the hair and makeup.” Yes. So did my grandmother. We still haven’t found her. It’s just a pile of hair and makeup.

Tara Kizer: Oh, so you have two availability groups. I haven’t done it that way. The availability groups that I’ve used have been for HA reasons and DR reasons so it was a multi-subnet configuration. We had one availability group stretching across two sites. Three replicas at the primary site, two replicas at the DR site. So I haven’t used this methodology before. Yeah, this is new for 2016 and I’ve only used it for 2012 and 2014. So I’m not sure what the advantage is of using a distributed availability group as compared to just say a multi-subnet configuration with one availability group.

Erik Darling: Maybe that's something we could toodle around with at DBA Days when we have all that hardware at our disposal. I mean, none of it's really spread out geographically, so I don't think we'd get any of the funkiness that comes along with that, but we could try to do it.

Tara Kizer: I’ll definitely look more into it.

Erik Darling: Cool.
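
For anyone who wants to poke at this before we do: since distributed availability groups are new in SQL Server 2016 and none of us have played with them yet, here's roughly what the setup looks like based on the 2016 documentation. It's a sketch only; the availability group names, listener URLs, and modes below are placeholders, not recommendations.

-- Run on the primary replica of the first, already existing availability group.
-- 'ag1', 'ag2', and the listener URLs are hypothetical placeholders.
CREATE AVAILABILITY GROUP [distributedag]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'ag1' WITH
      (
         LISTENER_URL = 'tcp://ag1-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      ),
      'ag2' WITH
      (
         LISTENER_URL = 'tcp://ag2-listener.contoso.com:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      );
GO

-- Then, on the primary replica of the second availability group, join it:
ALTER AVAILABILITY GROUP [distributedag]
   JOIN
   AVAILABILITY GROUP ON
      'ag1' WITH (LISTENER_URL = 'tcp://ag1-listener.contoso.com:5022',
                  AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                  FAILOVER_MODE = MANUAL, SEEDING_MODE = AUTOMATIC),
      'ag2' WITH (LISTENER_URL = 'tcp://ag2-listener.contoso.com:5022',
                  AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                  FAILOVER_MODE = MANUAL, SEEDING_MODE = AUTOMATIC);
GO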

 

How does licensing work for upgrades?

Erik Darling: All right. Joseph has a follow-up question. We're going back to the upgrade path from 2005 to 2016. That double hop shouldn't be a licensing issue. I think they give you downgrade rights to back-level versions with your latest license, right?

Tara Kizer: It depends if you have Software Assurance.

Erik Darling: Yeah. You have to pay that SA tax, buddy.

Tara Kizer: Yeah, just to add to that. Microsoft does want you to upgrade so if you don’t have the 2012 bits to get it installed, I would contact whoever sold you the license and see if you can get that because they really want you to upgrade off 2005. So I’m sure that they could help you here.

Erik Darling: Sure enough.

 

When I call queries across servers, where is the work done?

Erik Darling: Ian asks, “If I call a sproc from server A that is on server B, where will it be processed? In other words, where will the memory and CPU and disk be hit?”

Tara Kizer: It depends how you're doing it. I believe if you use OPENQUERY, that will actually run everything on the remote server. So we'd have to see how this is being called. So where does the stored procedure… oh, it's on server B.

Erik Darling: Yeah, so it changes a little bit. It depends on where the data is. So if you're using linked servers, if you're using distributed queries, Conor Cunningham has a great talk about this on the SQLBits website, but I'm also going to answer it a little bit. It really depends on where the data is, and also on where SQL Server decides the best place to have the data is, because it can actually push some results around from one server to the other. If there's a smaller table on server A, it can actually bring that data over to server B and do a join there. Conor is a really smart guy. He's one of the architects of SQL Server. He's been at Microsoft forever and ever. He knows absolutely everything about this, so you will probably get all the information and more that you need from that talk. I highly recommend it.
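
To make the two calling styles concrete, here's a quick sketch with hypothetical server, database, table, and procedure names. With four-part naming, the local server drives the plan and the optimizer decides whether to pull remote rows over or push local rows across; with OPENQUERY, the whole statement is shipped to the remote server and only the results come back.

-- Execute a stored procedure on Server B via a linked server (four-part name).
-- Requires the linked server to have RPC Out enabled.
EXEC [ServerB].SalesDB.dbo.usp_GetOrders @CustomerId = 42;

-- A distributed query: the optimizer decides whether to pull remote rows
-- to Server A or ship the smaller local table's rows over to Server B.
SELECT o.OrderId, c.CustomerName
FROM [ServerB].SalesDB.dbo.Orders AS o
JOIN dbo.Customers AS c
  ON c.CustomerId = o.CustomerId;

-- OPENQUERY sends the quoted statement to Server B and runs it there;
-- only the result set travels back across the wire.
SELECT *
FROM OPENQUERY([ServerB], 'SELECT OrderId, Amount
                           FROM SalesDB.dbo.Orders
                           WHERE CustomerId = 42');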

 

Does sharding data across SQL Servers help blocking problems?

Erik Darling: Lee asks, “What about the idea of sharding a 1.5 terabyte database into multiple databases on separate SQL instances in order to help performance issues related to blocking? I have a couple of developers who are pushing this idea on me despite my strong resistance. They believe it will help performance because it will be less data for SQL Server to traverse during queries. I feel the complexity simply is not worth it and that sharding isn’t the silver bullet in this situation.” Any initial thoughts on that?

Tara Kizer: I don't like the sound of it. I mean, I agree with Lee that this is going to be complex. How are you going to keep them all in sync, or are these completely separate databases that you don't have to keep in sync? What performance issues are you having now? You're likely missing indexes or have bad queries; there's something wrong. A 1.5 terabyte database is large, but it's not that large. I've seen bigger databases than that.

Richie Rump: Yeah, it's not large. And think about it: are you going to have to pull the data back together and process it, right? So now you're going to multiple servers, multiple databases. Then I've got to put it into one place, then I've got to actually get the results I want out, and that is a pain. It is terrible.

Erik Darling: That is a really painful process. So there are some things you can try instead, if you want to talk to those little hamster wheel developers you have trying to get you to shard everything like it's Hadoop. What I would suggest is: if you're having performance issues related to blocking, try an optimistic isolation level, like snapshot isolation or read committed snapshot isolation. Also, make sure that your indexes are effective. Make sure there are no missing indexes on that table. Make sure that there are no crazy indexes on that table. If you have a lot of duplicate or borderline-duplicate indexes, or some overly wide indexes, a lot of the queries that modify data can hold some pretty long locks on things and keep your other queries from doing stuff for a while. Don't use NOLOCK, it's not going to help you. If you find that you really do need to partition things out, I would try the partitioned view idea before I would go to actual partitioning. That's just where you create tables for a segment of data and you create a view that queries them all together. If you have the right constraints, by date or some other identifying characteristic, you can actually get pretty good elimination of certain tables when you hit that view. So there's a lot of stuff you can do to help before you go crazy separating a database out across a whole bunch of different servers.
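
Two rough sketches of those suggestions, with hypothetical database, table, and column names. First, turning on read committed snapshot isolation; second, a partitioned view where trusted CHECK constraints on the date column let SQL Server skip member tables a query can't need.

-- Readers stop blocking writers once RCSI is on. This needs exclusive access
-- to the database to take effect, hence the ROLLBACK IMMEDIATE.
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
GO

-- Partitioned view sketch: one table per year, each with a CHECK constraint
-- on the date column, glued together with UNION ALL.
CREATE TABLE dbo.Sales2015
(
    SaleId   BIGINT         NOT NULL,
    SaleDate DATE           NOT NULL
        CONSTRAINT CK_Sales2015_SaleDate
        CHECK (SaleDate >= '20150101' AND SaleDate < '20160101'),
    Amount   DECIMAL(18, 2) NOT NULL,
    CONSTRAINT PK_Sales2015 PRIMARY KEY CLUSTERED (SaleDate, SaleId)
);

CREATE TABLE dbo.Sales2016
(
    SaleId   BIGINT         NOT NULL,
    SaleDate DATE           NOT NULL
        CONSTRAINT CK_Sales2016_SaleDate
        CHECK (SaleDate >= '20160101' AND SaleDate < '20170101'),
    Amount   DECIMAL(18, 2) NOT NULL,
    CONSTRAINT PK_Sales2016 PRIMARY KEY CLUSTERED (SaleDate, SaleId)
);
GO

CREATE VIEW dbo.Sales
AS
SELECT SaleId, SaleDate, Amount FROM dbo.Sales2015
UNION ALL
SELECT SaleId, SaleDate, Amount FROM dbo.Sales2016;
GO

-- With trusted CHECK constraints, this query should only touch dbo.Sales2016.
SELECT SUM(Amount)
FROM dbo.Sales
WHERE SaleDate >= '20160701' AND SaleDate < '20160801';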

Richie Rump: Yeah, I’ve always just used straight partitioning but when you do that you have to always query by the partitioning key at that point.

Erik Darling: Yeah, you also have to align all your indexes to the partition and… crazy stuff goes on.

Richie Rump: Yeah.

Erik Darling: This button does not partition tables.

Richie Rump: But again, 1.5 terabytes, these days, isn’t really a ton of data. I feel it’s more the norm than anything else.

Erik Darling: Yeah, we deal with some pretty big databases day to day. Just today, I was working on a database that was 450 gigs. Back when I had a real job, the biggest database I ever had was around 11 terabytes. So there's a lot of stuff you can do before you have to get all hamster-y about trying to shard data, because SQL Server wasn't really built for sharding like that.

Richie Rump: So what was everyone’s largest database that they’ve dealt with?

Tara Kizer: The one that I was primary DBA on was 7 terabytes at one point, but across the SQL Server DBA team, the largest one at the time was, I believe, 16 terabytes. I just wasn't primary on that one.

Erik Darling: Did you ever have to do anything with that?

Tara Kizer: You know, that was a third-party product, so they did have partitioning set up. They definitely had performance in mind and set it up properly. It was a patent database.

Richie Rump: Yeah, mine was 40 terabytes.

Tara Kizer: Oh, wow.

Richie Rump: Yeah, it was fun. That had some really unique problems to it that you just had to think outside the box to fix them.
Erik Darling: So how much of that was like normal data and how much of that was just like crazy LOB columns?

Richie Rump: It was all normal data. It was all point-of-sale data coming in from 35,000 stores across the U.S. There were just massive amounts of data coming in on a daily basis. The good thing is that they only needed the data for the current day or month, and the rest of it essentially wasn't queried very often.

Erik Darling: Oh, man.

Richie Rump: We had a lot of rollups and data warehouses and big Teradata implementation and stuff like that. But, yeah, everything was stored in SQL Server.

Erik Darling: So Lee actually follows up and says, “We are using RCSI and have been for a while. I agree sharding is overly complex to solve blocking issues.” So if you’re using RCSI and you still have blocking issues, give us a call, because that sounds like an interesting problem.

Richie Rump: Erik will look at your database.

Erik Darling: Yeah, I want to stare at that thing for a while and just see what’s happening.

 

How many databases can I fit on one VM?

Erik Darling: Anker asks, "We use a VM environment for our SQL 2014. I want to know the thresholds where I can say that this VM is done with new databases and it's time to spin a new VM up. Everyone says CPU, disk, memory, but how much? Do you have any SQL which can tell this? Thanks." So guys, what thresholds do you look at for resource consumption before you say it's time to get a new server going and put some other stuff on there? Another question would be, how many resources do you give each server and expect to see consumed by your workload?

Tara Kizer: It’s really hard to answer. If I’m looking at new hardware, trying to decide what size hardware to buy for a new database, I’m thinking about what this database is going to need and how much the company can afford. So I like to go all out as far as hardware goes. That way you can be on that hardware for maybe a couple more years than if you went cheaper. But as far as a SQL query, there really isn’t one. If you’re on older hardware and your CPU is up and maybe if you don’t have enough memory, look at your wait stats, have you tuned everything that you can? If you have, you may need to upgrade to newer hardware.

Erik Darling: So you get a button that spins up a new VM.

Tara Kizer: Yeah.

Richie Rump: That’s right.

Erik Darling: This one shuts the old ones down. What I would do is, like Tara said, start looking at your wait stats to see if you've hit a bottleneck in one of those areas. Or run sp_BlitzFirst. You can run that over spans of time: 30 seconds, a minute, five minutes, whatever. Just see what your wait stats are. See what your bottlenecks are during your load times, because not every server is going to be the same, right? As far as just one single query that does it, you're not really going to get that. There are queries out there that examine wait stats and aggregate them over time, but you really want to know what you're waiting on when things are busy, right? You don't care what things are waiting on when you're doing maintenance. You don't care what things are waiting on when you're doing backups and DBCC CHECKDB or index stuff, because that's not customer facing. I would run BlitzFirst, get a good idea of what my bottlenecks are, and try to tune from there. BlitzFirst will tell you if your CPU utilization is really high. With a VM, you typically want to be using a little bit more of your resources than you would with a physical box, because VM resources that sit around idle can actually be a performance drag. I believe David Klee has a good blog post on that somewhere; I don't have it handy though. I would just make sure that my CPUs aren't constantly pegged at 100 percent, that I have enough memory, and that I'm not constantly reading from disk and racking up a lot of page IO latch waits, stuff like that. So I would want to measure each server differently and find out whatever bottleneck I'm hitting.
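
A minimal sketch of that, assuming sp_BlitzFirst from the First Responder Kit is installed on the server:

-- Sample waits and other diagnostics over a 60-second window
-- while the workload is actually busy.
EXEC dbo.sp_BlitzFirst @Seconds = 60, @ExpertMode = 1;

-- Or look at what the server has been waiting on since it started up.
EXEC dbo.sp_BlitzFirst @SinceStartup = 1;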

Tara Kizer: Just a quick correction. I said SP3 was released a couple weeks ago—I really meant SP2. I got my numbers mixed up for 2014.

Erik Darling: SP3 is 2012.

Tara Kizer: Yeah, exactly.

Erik Darling: This button downloads SP3 for 2012 and this one downloads SP2 for 2014. God, that’s confusing.

 

What are symlinks and how do they work?

Erik Darling: Monica asks—is this an Excel question? “What do you know about symlinks? We are running out of space on our SAN causing our backups to intermittently fail until we can get the space added. One of the suggestions was to move a database to another server and use a symlink.”

Tara Kizer: I don’t know what that is.

Erik Darling: Me either. Is that a SAN thing? Because that doesn’t sound like a SQL Server thing.

Richie Rump: No, it’s a symbolic link. Think of it as like a super…

Erik Darling: A pointer.

Richie Rump: Yeah, pointer. A shortcut, right?

Tara Kizer: Does SQL Server support that? I mean, is it a SAN technology that’s going to redirect it somewhere else?

Richie Rump: From the OS perspective, it thinks the file is actually there, but it points somewhere else. I've never tried it with SQL Server. It seems like it would be slow, I don't know, but it'd be interesting. Go ahead and give it a try and see what happens.

Erik Darling: Yeah, I don’t know, I would be very concerned about… Okay, so you want to move a database to a symlink. No. It sounds like a horrible idea. Because where would the symlink be? It sounds like it would be physically further away, right?

Tara Kizer: Is it another SAN? I just wonder if this increases your risk of database corruption.

Erik Darling: Yeah, I would be concerned about a lot of things.

Tara Kizer: I’ve never heard of it being used for SQL Server.

Erik Darling: Yeah, I would rather point my backups to a symlink than anything else.

Richie Rump: Well now with SQL Server on Linux, you may get your chance of symlinking all day long with SQL Server.

Erik Darling: I always wanted SQL Server on Linux. I’m so excited. That was like the one thing missing in my life. I was like, wife, kid, good job… SQL Server on Linux, where are you? Because Windows Core wasn’t good enough. I need Linux. I want to bash everything. Run my cron jobs to defragment all my indexes. Dream of mine. It’s a dream we all live in.

 

Should I use DNS CNAMES to mask server moves?

Erik Darling: Gregory asks, “Have you ever used DNS entries to mask your db server names to avoid connection string changes when you change or move hardware? Kicking the idea around. Thoughts?”

Tara Kizer: All the time. I've used it a lot, going back to SQL Server 2000. I actually prefer it. I think right out of the gate you should be using DNS entries and not the actual SQL Server name, because over the years you're going to be upgrading and moving hardware.

Erik Darling: So are you talking about like a CNAME alias in DNS or something else?

Tara Kizer: A CNAME alias.

Erik Darling: That was his question too. So tell us more about it because we don’t have any other questions. So if you want to expound on this…

Tara Kizer: I can't expound on the actual DNS portion of it, but yeah, we used it a lot. As a matter of fact, with availability groups, if your applications are running older database drivers that don't support the multi-subnet configuration, you can set up the DNS alias so that it redirects to itself; I believe that's what we did at my last job. It was really fancy. It was a fancy DNS thing. I wasn't the one who set it up like that, but a sysadmin knew what to do. But yeah, I've probably used at least 100 DNS aliases over the years.

Erik Darling: Nice.

Tara Kizer: I much prefer it.

Erik Darling: Yeah. So one thing that the CNAME alias is very common with is log shipping. That’s because log shipping doesn’t support any of the connection string fanciness that mirroring and availability groups do. So with log shipping, if you failover and you expect your application to just say, “I need the server,” then you need a CNAME alias so that log shipping can gracefully—or that your application can gracefully handle the log shipping failover to some other server out in the great wild west. This button has nothing to do with CNAME aliases or DNS. Nothing to do with it.

Tara Kizer: You like your buttons.

Erik Darling: This button might end the webcast if someone doesn’t start asking a question or if Richie doesn’t have something really interesting to talk about.

Tara Kizer: We have eight more minutes.

Richie Rump: I think it's just going to be funny to see you setting all that up in AWS. I think it's going to be kind of a hoot.

Erik Darling: Oh no, I’m going to make you write the code.

Richie Rump: Oh dang it.

Erik Darling: Yeah.

Tara Kizer: I thought it was already set up.

Richie Rump: Oh man.

Erik Darling: No, because they're going to do all sorts of different things. Like, I want one click to do one thing and two clicks to do another thing. Three clicks is going to turn the TV on in the middle of the night and freak everyone out. I think you could actually have a whole Scooby-Doo episode of some dude just hanging out with IoT buttons. Just like: Ghost. Martians. Stair creaking. Chandelier drops. Just all sorts of stuff. Like pillowcase on coat hanger down hallway. That's what I would do.

Richie Rump: “If it wasn’t for you and you pesky kids…”

Erik Darling: You pull the mask off and it’s Jeff Bezos like, “Grrr!”

Richie Rump: “I would have stole that money from that old woman.”

Erik Darling: It's a good idea. I think you could make that work. You could have a whole genre of mysteries built around IoT buttons. By the way, I don't get any kickback for showing these things. I just enjoy the way they feel in my hands. They're like little maracas.

Richie Rump: You know, they only have like 1,000 clicks. So you don’t want to be wasting those clicks, man.

Erik Darling: I’m not even clicking. I just did the clicking action. It’s like practice clicking.
Erik Darling: Mike asks…

Tara Kizer: This is a good one.

 

What do you think of Ola Hallengren’s maintenance scripts?

Erik Darling: "What do you think of Ola Hallengren's Maintenance Solution scripts and jobs?" We love them.

Tara Kizer: We recommend them to all of our clients. A small minority of our clients are already using them. So yeah, we definitely like them.

Erik Darling: They handle all of those grody broom-and-dustpan tasks that no DBA actually wants to do: backups, DBCC CHECKDB, and index and statistics maintenance. Hooray. And it's all written out. It's very well coded, much better than you could do on your own. It's kept up to date, totally free, and all sorts of bug fixes and stuff come out with new versions. So totally recommend them. They are awesome.

Tara Kizer: Just to give you a number: at my job three jobs ago, we had it on about 700 servers, so it's probably on thousands of servers around the world.
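
If you haven't seen them, here's a minimal sketch of what the calls look like once the Maintenance Solution from ola.hallengren.com is installed. The backup share path and cleanup window below are placeholders, not recommendations.

-- Full backups of all user databases, verified, deleted after 72 hours.
EXECUTE master.dbo.DatabaseBackup
    @Databases   = 'USER_DATABASES',
    @Directory   = N'\\BackupShare\SQL',  -- hypothetical backup share
    @BackupType  = 'FULL',
    @Verify      = 'Y',
    @CleanupTime = 72;

-- Integrity checks.
EXECUTE master.dbo.DatabaseIntegrityCheck
    @Databases     = 'USER_DATABASES',
    @CheckCommands = 'CHECKDB';

-- Index and statistics maintenance with the solution's default thresholds.
EXECUTE master.dbo.IndexOptimize
    @Databases        = 'USER_DATABASES',
    @UpdateStatistics = 'ALL';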

 

What does SOS_SCHEDULER_YIELD mean?

Erik Darling: Gregory asks, “You know anything about SOS_SCHEDULER_YIELD waits? I get CPU queue length waits on my VM db server sometimes.”

Tara Kizer: All right, so sometimes, but what are your overall wait stats since SQL Server has been online? Are you just seeing SOS_SCHEDULER_YIELD sometimes, or is that your top wait? Give us some more information.

Erik Darling: If it's only SOS_SCHEDULER_YIELD, that's a little weird, because usually that comes along with CXPACKET. There's a lot of CXPACKET. SOS_SCHEDULER_YIELD is of course when something gets on a worker to start running and then has to stop and voluntarily yield the scheduler. Each task gets a four-millisecond quantum to get everything it needs and keep on going. If it doesn't finish in that time, it goes to the back of the queue and waits its turn patiently, like a good Englishman, to go back and run again. So we would need some other wait stats along with SOS_SCHEDULER_YIELD to give you a better diagnosis.
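
For the "give us more information" part, here's a quick sketch of pulling cumulative waits from sys.dm_os_wait_stats, filtering out some of the usual harmless background waits so SOS_SCHEDULER_YIELD shows up in context:

SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms / 1000.0        AS wait_time_seconds,
       signal_wait_time_ms / 1000.0 AS signal_wait_seconds
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN
      (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TO_FLUSH',
       N'BROKER_EVENTHANDLER', N'XE_TIMER_EVENT', N'XE_DISPATCHER_WAIT',
       N'CHECKPOINT_QUEUE', N'REQUEST_FOR_DEADLOCK_SEARCH',
       N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP', N'DIRTY_PAGE_POLL',
       N'HADR_FILESTREAM_IOMGR_IOCOMPLETION', N'LOGMGR_QUEUE')
ORDER BY wait_time_ms DESC;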

 

Do we need to do CHECKDB on multiple copies of the same database?

Erik Darling: Mikal asks, I believe it’s Mikal [pronounces name like Michael], it’s M-I-K-A-L. If I said that wrong, I’m sorry. “We do a full DBCC on a restored production database weekly. Do we also need to do it on the actual production database?” No.

Tara Kizer: You do not.

Erik Darling: You can if you want, but…

Tara Kizer: Yeah, on these VLDBs you’re going to want to do that. You’re going to want to have a SAN snapshot or something that gets the database to another server and run CHECKDB there because you just can’t possibly run CHECKDB on very large databases.

Erik Darling: Yeah, so one thing I used to do was offload the full DBCC check to another server, but I would periodically do physical-only checks on my actual production. Just because they're quicker and a little bit more lightweight. They're still horrible, but they're a little bit easier to stomach than the full run. So you could still run it on your primary once a month or once a week, but offloading is totally a legit strategy for the full DBCC check, as long as it's restored from a clean, full backup every single time.
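
A minimal sketch of that split, with a hypothetical database name:

-- On production, periodically run the cheaper physical-only check
-- (skips the logical checks):
DBCC CHECKDB (N'YourDatabase') WITH PHYSICAL_ONLY, NO_INFOMSGS;

-- On the offload server, after restoring the latest clean full backup,
-- run the full check:
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;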

 

Are there any gotchas with storing unstructured data in a relational database?

Erik Darling: We have a couple minutes left and a couple questions left. Joseph asks, drumroll, "Do you have much experience with people storing unstructured data in the database, CLOB, BLOB or XML data, querying, retrieving, etc.? Does it perform well or does it have issues/gotchas? How does it compare to something like Hadoop?" Don't know about Hadoop, do query a lot of XML.

Tara Kizer: And XML parsing. I've seen some clients that do a lot of XML parsing. Unfortunately, it's just slow in SQL Server, so I have a hard time telling them anything other than: move all of that XML parsing into the application and don't do that work in SQL Server.

Richie Rump: Yeah, don’t do it in SQL Server.

Erik Darling: But if you have to do it in SQL Server, there are some things you can do to help it. One thing that I usually suggest is, if you're constantly depending on something like an exist() query against your XML to give you an answer, make a computed column out of that and just query the computed column. XML indexes are also wonderful things. They can really help if you are looking for a few consistent paths, so check out the rules around XML indexes and see if they could be useful to you. One really big downside to XML stuff in SQL Server is that it always runs single-threaded and serializes that part of the plan, so you can't get any parallel processing of XML data. That's one reason why it's always going to be a little bit slower and why you shouldn't do it in SQL Server. But if you've got to, check out XML indexes and computed columns based on certain XML queries, because they're usually better options than doing all that processing on the fly.
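
A rough sketch of both ideas, using a hypothetical Orders table and element names. One wrinkle: SQL Server won't let you call XML data type methods directly in a computed column definition, so the exist() call has to be wrapped in a scalar function first.

-- Wrap the XML exist() call in a function so it can back a computed column.
CREATE FUNCTION dbo.OrderHasRushShipping (@OrderXml XML)
RETURNS BIT
WITH SCHEMABINDING
AS
BEGIN
    RETURN @OrderXml.exist('/Order/RushShipping');
END;
GO

-- Add the computed column, then query it instead of shredding XML on the fly.
ALTER TABLE dbo.Orders
    ADD HasRushShipping AS dbo.OrderHasRushShipping(OrderXml);
GO

SELECT OrderId
FROM dbo.Orders
WHERE HasRushShipping = 1;
GO

-- A primary XML index can help when you query a few consistent paths.
-- It requires a clustered primary key on the table.
CREATE PRIMARY XML INDEX PXML_Orders_OrderXml
    ON dbo.Orders (OrderXml);
GO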

Richie Rump: We can just put everything in JSON. That’s faster in SQL Server, right?

Erik Darling: Yeah, totally! That’s the best way to do things! That’s the way everyone should… No.

Richie Rump: Or maybe it’s the exact same code.

Erik Darling: Yeah, get the BLOBs out of the database.

Richie Rump: Agreed.

Erik Darling: With that, I think I’ve had enough for one day. How about you guys?

Tara Kizer: Sure.

Erik Darling: All right, we are at time. It’s been lovely chatting with you all. We will see you—well no, we won’t see you next week because next week we’ll be at DBA Days.

Richie Rump: So we will see you next week.

Erik Darling: So we’ll see you in different ways at DBA Days.

Tara Kizer: Make sure you register, sign up.

Erik Darling: Yeah. We’re going to break all sorts of stuff. Adios.

Tara Kizer: Bye.
