This week Tara, Erik, and Richie discuss virtualization, replication & mirroring, switching SQL service logins to an AD account, temporal tables, tempdb issues, migrating from SQL Server 2005, and Richie’s outdoor adventures in “Breath of the Wild.”
You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.
Enjoy the Podcast?
Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.
Leave us a review in iTunes
Office Hours – 12/13/17
Should I add hardware when doing a P2V?
Erik Darling: So Thomas is asking a very good question about migration – about doing a P2V, the old physical-to-virtual move. “I’m on five-year-old hardware and I’m virtualizing the server; should I add more resources, or would the age of the physical boxes be offset by the newer hardware of the VM host?” If you guys were looking to figure out whether your virtualized hardware was adequate, what would you look at?
Tara Kizer: Are you asking us, Richie and me? Well, the companies I’ve worked for, we have a performance test team that can run synthetic production loads and see if we have adequate hardware; so that’s always where we would start to answer that question. We have load testing tools that can tell us if we’re going to do okay or not.
Erik Darling: Alright, well pretend that someone doesn’t work for a billion dollar a year chip manufacturer. Where might you start [crosstalk] this stuff?
Tara Kizer: What about you?
Erik Darling: That’s a good question. So where I would start – you were already kind of doing your own form of load testing here. If you just restore whatever production databases you have on there and you do your regular maintenance work – like, you know, you run CHECKDB, you do some full backups, or god forbid you run an index rebuild – those are all pretty good ways to test whether the hardware is enough of an improvement. Because hopefully you’ve been doing that stuff on your old server too, and you can figure out if those basic tasks finish in about the same time, or hopefully faster, than they do on the old hardware. If you’re breaking about even, the chances are your queries will break about even too.
Tara Kizer: That sounds about right.
Erik Darling: As always, measure your wait stats. If you have a way to run any kind of fake workload on there, go ahead and do that. SQLQueryStress is a pretty good tool for just running a whole bunch of queries over and over again. OStress is another good tool – part of the RML Utilities package that Microsoft publishes. Those are two pretty good ways to spin up some fake workloads on a new server. That’s how I usually do it. I actually have – do I have a blog post? I think I do.
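If you want a concrete starting point for that wait stats comparison, here’s a minimal sketch – run it on the old and the new server after similar workloads and compare what floats to the top:

```sql
-- Top waits since the last restart (or since the stats were cleared).
-- Compare old server vs. new server after running a similar workload.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TO_FLUSH', N'WAITFOR')  -- trim a few idle waits
ORDER BY wait_time_ms DESC;
```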
So back when we wrote white papers for Google…
Richie Rump: You have all the blog posts; what are you talking about? “Do I have a blog post?” Well yeah, sir, you have all of them.
Tara Kizer: Thanks, Brent.
Richie Rump: Great post, Brent.
Erik Darling: Why didn’t that work?
Tara Kizer: [crosstalk] The “Thanks, Brent” thing, while he’s looking – that used to drive me a little bit crazy on my blog posts, but I’m getting better at accepting that everyone thinks that it’s always Brent writing the blog posts. I only do like four per year, so I don’t have to get upset that often.
(Brent adds: it’s an ongoing thing around here that you, dear reader, don’t read the author names on the posts, and you assume that I write everything. If you really wanna make a blogger happy, use their name in the comment, like “Great post, Tara” – assuming of course that you use the right name. In fact, you should probably just assume that it was NOT me, hahaha.)
Erik Darling: It’s a good way to avoid getting upset. Blogging, in general, is just an upsetting process. So a while back, when we were writing white papers for Google about how to test different stuff in the cloud, Tara did stuff on AGs and DR, and I did stuff on performance tuning – and I wrote a blog post about how to use OStress and Agent jobs to run all different stored procedures in different ways. So if you want to go take a look at that, I stuck it in the chat comments so you can go look and figure out how to run all sorts of crazy stuff on your server and give that hardware a good kick in the nards.
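If you just want the flavor of that approach without the full post, here’s a minimal sketch of the kind of loop you could drop into a few competing Agent job steps – the procedure name and counts are made up for illustration (OStress does the same thing with less typing):

```sql
-- Hypothetical stress loop: hammer one stored procedure repeatedly.
-- Run several variations of this in separate Agent jobs to fake
-- concurrent users hitting the new server.
DECLARE @i int = 1;
WHILE @i <= 1000
BEGIN
    EXEC dbo.usp_SomeBusyProcedure @CustomerId = 42;  -- made-up proc name
    SET @i += 1;
END;
```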
Tara Kizer: What was that, Richie? Was that a cat? It went by so fast…
Richie Rump: Yeah, it jumped from here, it snuck underneath, like under the camera, and then just started jiggling my monitor; and I’m like get out.
Erik Darling: Enough about Richie’s cats – boring. The next thing I know, you’ll be taking pictures of your food; nightmares.
Richie Rump: That’s Brent, which he’s doing right now.
Erik Darling: That’s true. He is currently taking pictures of his fancy breakfast.
Richie Rump: Rosé all day.
How fast does my bandwidth need to be for AGs?
Erik Darling: Shree is asking a very good question for Tara. “Since we talked about network speed, what’s the expected good bandwidth between two data centers in different cities when using Always On availability groups?”
Tara Kizer: So the answer depends more on whether you’re going to go synchronous or asynchronous – and this also pertains to database mirroring; it’s the same basic thing, synchronous versus asynchronous. If the data center is in another city – not within a couple of miles – I do not recommend using a synchronous replica or synchronous mirroring, because even 15 milliseconds of network latency can degrade performance enough for users to notice. I’ve accidentally left a database mirroring session in synchronous mode after a failover – we normally used async; for a failover, you switch to sync and then fail over so you don’t have any data loss – and I forgot to switch it back to async afterwards. It took a few days to realize what was going on, like, “I don’t know why it’s slower there,” because I don’t know that we had great wait stats on that stuff back in that version; it’s been quite a few years.
But to answer your question, if you were to go with a synchronous replica across two cities, you’d better have a very small load. If you have a busy system, you’re not going to be able to do it. So I generally recommend synchronous at the same data center – next rack over, same subnet, everything – and then asynchronous to another city as a disaster recovery site. As far as the question about network speed goes, if it’s asynchronous, what matters is that you don’t get too much latency. So monitor latency – generally speaking, 15 milliseconds or better would be my target.
Erik Darling: I’m with you on that. 15 to 20 is a pretty good target for latency from one site to another. One tool that you can use that’s absolutely free to test that sort of thing – you know, we always tell folks about using CrystalDiskMark to test their disks, but one good way to test your network is a free tool called iperf. Iperf.exe – it lets you set up basically a ping-flooding server on one end and a client on the other end, and you can just send data back and forth and it will give you the network speed, how much data was sent across, and how much loss there was. So it’s a little bit more involved than just a ping or a traceroute or something, because it does send data across, but it’s a very good way to measure network speeds. I’ve used it a couple of times when kicking around hardware for clients.
Tara Kizer: Never heard of it, cool.
Erik Darling: I think David Klee might even have a blog post where he shows you how to use it. So go out there and try to find that.
Tara Kizer: I did have a recent client that was using synchronous to another city – it was about 500 miles – and the wait stats showed that they weren’t having an issue, but they were going to be growing and growing and growing. So I was like, “Well, you might not have an issue now, but you may in the future.” So I did give them some HADR talk.
Richie Rump: So all I heard from that question was, “What is the airspeed velocity of an unladen swallow?”
Tara Kizer: Alright, fantastic.
Erik Darling: Nerd alert.
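For the “monitor latency” piece of Tara’s answer, here’s a minimal sketch against the availability group DMVs (this is AG-specific; mirroring has its own counters). The queue and rate columns are in KB and KB/sec:

```sql
-- How far behind is each replica? Watch the send and redo queues.
SELECT ar.replica_server_name,
       drs.database_id,
       drs.log_send_queue_size,   -- KB waiting to be sent to the replica
       drs.log_send_rate,         -- KB/sec being sent
       drs.redo_queue_size,       -- KB received but not yet redone
       drs.redo_rate              -- KB/sec being redone
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
    ON ar.replica_id = drs.replica_id;
```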
If I switch all my SQL Server’s logins…
Tara Kizer: James asks a security question: “If you had to switch all your SQL Server services’ logins to an AD account, how would you check that it doesn’t wreck any of your applications?” Well, you need to do this in a test environment first. For the service accounts, all that really needs to happen is to make sure that SQL Server can come back online and all the features that you’re using still work. If you’re changing the Agent account, make sure that jobs are still running. As far as your applications go, no – but start in a test environment so that you don’t have any issues during the production maintenance window.
Richie Rump: You know, one of the cool things I’ve been playing with this week is creating a cluster in RDS and doing it all from script – scripting all the logins, scripting all the stuff, scripting all the instances. And that has been super cool: being able to bring up a database and tear it down, bring up the cluster and then add all these instances to it, then start removing pieces from the cluster. Like, what if you removed the writer, or what if you removed one of the readers, and you just start swapping things in and out? Oh man, that crap is powerful, and it’s all from script. You can do it all right from script; there’s no need to go to the window or the console or anything. It’s just, do it, and then come in and out. Scary as hell if you’re talking about production instances, but man, I was having a blast yesterday trying to figure all that crap out.
Tara Kizer: Are you being sarcastic with blast?
Richie Rump: No, it was fun. Trying to figure out what the correct syntax is in a script, that was rough. But once I got everything working and I was able to start throwing instances out there, it’s just a matter of, “Hey, I’m going to remove this,” and it’s gone. Or, “Hey, I’m going to go ahead and add another one,” and it was just there. Now, all of a sudden, you can start checking in your VPCs, all your networking stuff, all your database stuff, and that’s all a script that goes into GitHub – and if you ever need to redeploy to a test environment, it’s just there: “Guys, run this script,” and off it goes, and it’s a completely different environment. Oh man, I probably need to write a post about that. That was a lot of fun.
Tara Kizer: Nesta is asking if the script that you wrote was PowerShell – which I actually teased Richie about earlier in our company chat room… I know the answer.
Richie Rump: No PowerShell… No, what I was doing is using CloudFormation, which is AWS’s way to use JSON or YAML to generate pieces of the cloud. So pretty much anything that you want to create in AWS, you can create with a CloudFormation script. And I was using the Serverless project as a container for that, because I understand it and it’s easy for me to put environment variables everywhere and not hardcode usernames and passwords and stuff like that into it. I’m sure you could do that in plain CloudFormation; I just haven’t bothered to. And the rest of the app that I was working with was in Serverless, so it was just a matter of a one-line command; it goes off and does everything that you need it to do.
Tara Kizer: All on one line?
Richie Rump: All on one line.
Tara Kizer: I guess that’s how it was when I was writing the Google white paper, creating a VM and setting up an availability group there. There was a lot in one line for sure, and figuring out the syntax was problematic. To me, it actually felt like PowerShell.
Richie Rump: Well, it’s kind of poorly documented, because there was one issue I ran into where I was declaring the password and user in the instance, and I was also declaring it in the cluster, and it gave me this bogus error just saying, “Hey, you want to declare that in the cluster.” It didn’t tell me to remove it from the instance, so it took a little digging for me to figure out, like, oh, you only want it in the cluster; you don’t want it in the instance. As soon as I removed a whole bunch of things, things started working. It was like, come on, you guys can write error messages a little better than this. It could have said, “Please remove this from the instance because you declared it in the cluster.” But no, it was this really bogus, crazy error – you really didn’t know what was going on with it. I should probably blog that error too, because I screenshotted that one. I said, “This is a good one, this is crazy.”
Is OLE Automation a security risk?
Tara Kizer: Alright, Dee asks, “What are your thoughts on having OLE Automation enabled?” It’s a security concern. I don’t know if you have any security audits, but that’s one of the items they want you to have disabled. It’s disabled by default – same thing as xp_cmdshell; it’s a security risk. Whether or not you need it may be another story – you can sometimes get around security audits by disabling and enabling it as needed – but I don’t ever enable it on the servers I’ve supported. If you need it, then you need it, but what are you doing that you need that type of thing? Why do you need that enabled on SQL Server? Why not do that type of task somewhere else?
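For reference, the disable-and-enable-as-needed dance Tara describes is just sp_configure – a minimal sketch:

```sql
-- Enable OLE Automation just for the task that needs it...
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'Ole Automation Procedures', 1;
RECONFIGURE;

-- ...then turn it back off so the next security audit stays quiet.
EXEC sys.sp_configure N'Ole Automation Procedures', 0;
RECONFIGURE;
```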
Should I encrypt pre-2016 servers?
Tara Kizer: James asks, “Is it worth encrypting pre-2016 servers, or should I upgrade to 2016 and then encrypt?” I don’t know that there’s any benefit to encrypting beforehand, except maybe your upgrade maintenance window will be faster because it doesn’t also have to encrypt. I don’t know.
Richie Rump: I kind of see this as more of a business requirement; it’s kind of outside the technical realm. Whether you’re pre-2016 or on 2016, if you have the requirement to encrypt, then you should encrypt at the version you’re on and worry about upgrading later – it’s not that one is better than the other. Any supported version of SQL Server is going to be able to handle the encryption. If you need to do it, then do it. Of course, those requirements need to be prioritized by your management: hey, what’s more important, us upgrading or us doing the encryption? And meanwhile, you’ve got to let them know that there’s no technical difference between the two; there’s no benefit to one versus the other. I would definitely do it in a test environment first and get familiar with the encryption and the certificates. It does get a little wonky there because it’s very different, but that’s a business thing, and it doesn’t really depend on anything technical in SQL Server.
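The question doesn’t say which flavor of encryption, so assuming Transparent Data Encryption (around since SQL Server 2008, and Enterprise-only at the time of this episode), here’s a minimal sketch of the certificate wonkiness Richie mentions – every name and password below is made up:

```sql
USE master;
-- The master key and certificate live in master. BACK THE CERTIFICATE UP,
-- or you will not be able to restore this database anywhere else.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'ChangeMe!StrongPassword1';
CREATE CERTIFICATE TDECert WITH SUBJECT = N'TDE certificate';
BACKUP CERTIFICATE TDECert
    TO FILE = N'C:\Backup\TDECert.cer'
    WITH PRIVATE KEY (FILE = N'C:\Backup\TDECert.pvk',
                      ENCRYPTION BY PASSWORD = N'AnotherStrongPassword1');

USE YourDatabase;  -- made-up database name
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
ALTER DATABASE YourDatabase SET ENCRYPTION ON;
```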
Erik Darling: Is my audio any better?
Richie Rump: Yes, you’re here.
Tara Kizer: So far so good.
Richie Rump: It’s the voice in my head, can you hear that?
Erik Darling: God, is that you?
What authors do you read?
Tara Kizer: Thomas says, “I see John Grisham and a dictionary. Who are some of the authors behind you, Tara?” The funny thing is, this actually is not my office. I’m at my parents’ house. I’ve moved into a new house – it’s brand new construction – and we have no internet. It’ll have been two weeks tomorrow, and still no internet. I’ve been working at the library, I’ve been working at Starbucks, and I’m so annoyed with the public at this point. I wanted to attend Office Hours, so I went over to my mom’s house, which isn’t too far away. My mom’s a big John Grisham fan, and I’ve definitely read John Grisham. Dean Koontz is another one that’s in the background. You can probably see – or maybe I’ve zoomed out too much – there are childhood pictures on the wall back there; probably some embarrassing ones, so don’t zoom in.
Richie Rump: So I did read Murder on the Orient Express about a month ago when I was on jury duty, sitting there waiting the entire day. And we saw the movie adaptation last night, and it wasn’t bad. I was expecting meh, but it wasn’t bad. There were differences, but I was like, okay, I can live with those differences. The guy who played Aaron Burr in Hamilton was in the movie, and I was like, “Aaron Burr did it; he did it.” I was screaming it because there were only four people in the theater: “Aaron Burr did it.”
Are tempura tables delicious?
Erik Darling: So what questions do you want me to get on?
Tara Kizer: Temporal tables – I can answer temp table questions, but temporal tables… The question is, “Can you tell us everything you know about temporal tables?” and our answer is, “Well, I know nothing.” So moving on…
Erik Darling: I mean, I like them more than like change tracking and change data capture, but I haven’t used them a ton. I have a couple of blog posts about them. There are some blog posts out in the world about them. They’re probably more informative than anything I’m going to tell you here in the next five minutes. But I do like them and I like the idea of them, I do think it’s a much better way than having devs try and roll their own sort of system of keeping track of data changes over time. It really does depend on what your use case for them is going to be though.
Richie Rump: Okay, so think of me as a SQL Server noob for the new stuff. Tell me about temporal tables and what they can do, please.
Erik Darling: Why do you always ask the noob questions? So temporal tables…
Richie Rump: Because I’ve been using Postgres, man, come on now.
Erik Darling: So temporal tables are like having a shadow copy of your table. You have the main copy of your table, and then you have this shadow copy that tracks all of the changes to data made in the base table. So you can actually query backwards in time, look at how your data has changed over time, and, if you needed to, go back and pull data out and fix data with what’s in there.
So I don’t know if it’s a really sneaky way for Microsoft to not have to put object level restore back into the product, but temporal tables are a really cool way to – it’s almost like Oracle Flashback where you can – like, “Oh no, we need the table as of now, let’s just query how it used to be.” So it’s very neat for that stuff.
Tara Kizer: So if you were to use it for auditing data changes, do you just query that table, or do you then move it to an auditing table – a historical table?
Erik Darling: No, you just query the table. There are some keywords for the dates that you want to look at – you can do AS OF, or dates BETWEEN, and stuff like that. So you can see how data looked on a certain date, or between certain dates, and you can see how particular data has changed over time.
Richie Rump: So then if I delete data, that will be there as well, or not be there?
Erik Darling: Yes – well, deleted data will show as deleted…
Tara Kizer: And then you could purge the data so it’s not massive, if you wanted to.
Erik Darling: I believe you could set retention policies on it, so you can decide how much data you want to keep in there.
Tara Kizer: What version does this start in?
Erik Darling: 2016.
Tara Kizer: I had a feeling it was newer. I had a recent client who was doing auditing via triggers, and the way that they implemented it was bad for scaling reasons. I’m kind of okay with it if you just quickly dump the data into another table as-is, you know, the before and after. But they were doing some odd things. They weren’t on 2016, but maybe that’s something I could include in the future: “When you get to 2016, look into this instead.”
Erik Darling: If you ever get to 2016, you lazy, lazy people.
Richie Rump: We’re almost in 2018, come on now, seriously.
Erik Darling: SQL Server 2018 is on the way. Here you are… This week, I’m looking at two servers, and the clients are running 2008 and 2008 R2. It’s like, why? Those are nine and seven years old at this point. Why can’t anyone just upgrade? What is the hold-up? What is the issue? Which I’m sure is a question that Microsoft asks as well.
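Before the next question, here’s a minimal sketch of the temporal table syntax Erik described – table and column names are made up. This needs SQL Server 2016 or later; the retention policy Erik mentioned arrived as HISTORY_RETENTION_PERIOD in SQL Server 2017 and Azure SQL Database:

```sql
-- System-versioned (temporal) table: SQL Server keeps a shadow
-- history table of every change automatically.
CREATE TABLE dbo.Products
(
    ProductId int           NOT NULL PRIMARY KEY CLUSTERED,
    Name      nvarchar(100) NOT NULL,
    Price     money         NOT NULL,
    ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductsHistory));

-- How did the data look at a point in time?
SELECT * FROM dbo.Products
    FOR SYSTEM_TIME AS OF '2017-12-01T00:00:00';

-- Every version of one row between two dates (deleted rows show up too):
SELECT * FROM dbo.Products
    FOR SYSTEM_TIME BETWEEN '2017-11-01' AND '2017-12-13'
WHERE ProductId = 42;
```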
Is trace flag 834 a good thing?
Tara Kizer: Adam asks, “What are the concerns and gotchas to watch for when enabling trace flag 834, which is large page allocations for the buffer pool?” I don’t know; I’ve never implemented that trace flag. How about you, Erik?
Erik Darling: I mean, I’ve just never seen it make a difference. It’s like one of those old DBA fables, where it’s just like, “Oh all we need is the large pages and everything will be cool.” But I’ve just never seen it actually fix a problem and I’ve never seen it like even kind of improve a problem in a way where it was like, “Okay, we made it past this benchmark so now we can really focus on the stuff that will fix problems.” I’m sure that someone wrote a really interesting blog post about how cool it is for one particular thing, but often in the real world, I’ve just never seen it do much of anything good.
Tara Kizer: The trace flags that I go with are the ones that are recommended by SQL Server experts, the known people in the community that are saying, “These trace flags, you should just go ahead and enable everywhere.” And then, you know, specific issues that I’m encountering, maybe I need a database mirroring or replication trace flag so they can ignore each other and not have too much latency. So I don’t go ahead and implement trace flags just because.
Erik Darling: I guess, you know, if I was going to voice a concern about it, my concern would be: have you tested it in a way that you’ve seen it improve upon a problem? Did you enable it in a development environment – are you sure that you have a problem it’s going to solve? You’ve watched it solve a problem somewhere else, and now you’re going to roll it out in production? That’s my concern: that you’re button-mashing and not actually fixing a problem.
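If you do want to test it rather than button-mash, here are a couple of sanity checks. Keep in mind that trace flag 834 only takes effect as a startup parameter (-T834) and requires the Lock Pages in Memory right for the service account:

```sql
-- Which trace flags are active right now?
DBCC TRACESTATUS(-1);

-- After a restart with -T834, confirm large pages actually kicked in.
-- (sql_memory_model_desc shows up in SQL Server 2016 SP1 and later;
-- look for LARGE_PAGES rather than CONVENTIONAL or LOCK_PAGES.)
SELECT sql_memory_model, sql_memory_model_desc
FROM sys.dm_os_sys_info;
```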
Is it bad when TempDB runs out of space?
Tara Kizer: Alright, [Rod] has a maintenance job that errored because tempdb ran out of space – the primary filegroup is full. Any reason for concern? Well, you may need more tempdb space in order for that job to finish. And maybe it’s not just that job; maybe it’s the other load that’s occurring plus the job running at the same time. Your queries need space in tempdb to do work. Maybe you’re using temporary tables, maybe it’s a sort, maybe you’re spilling to disk – who knows what. There are lots of reasons why tempdb gets used, and tempdb gets used a lot on SQL Server; even if you’re not creating temporary tables, it still gets used. And if you got that error, is this job important? Does it need to complete successfully? If it does and it’s continuing to fail, look at the query. Is there anything weird about it that maybe you can make more efficient so it uses less tempdb space? Or maybe you just need to add more storage so that tempdb can grow and you don’t hit that error.
I’ve said this several times in Office Hours, but I have literally supported a server that had 500GB of tempdb data files. There were eight files, but they totaled 500GB. And I’ve got some clients who are like, “Oh my god, I’ve got an 8GB to 20GB tempdb.” That’s nothing; that costs you a quarter. So I don’t worry about tempdb space. I want the queries and jobs to complete successfully. I will go back and look at the queries that may be causing it to grow past some number – there are ways to make better use of tempdb.
Erik Darling: Yeah, I think I would want to look at what the running job is. I know that DBCC CHECKDB has the tendency to use a whole heck of a lot of tempdb space. If you’re rebuilding indexes with SORT_IN_TEMPDB on, you’re going to be using an awful lot of tempdb. And then there’s all the sorting that goes on for statistics updates – if you’re doing statistics updates on rather large tables with a full scan, those sorts can actually go off to tempdb, and I’ve seen them fill tempdb up even on servers that have tempdb sized like what Tara’s talking about, with 500GB allotted to it. So, all the stuff she said, plus take a look at what the jobs are doing. As always, we will not recommend that you just keep rebuilding indexes blindly and hope that fixes something. Make sure that the indexes you’re rebuilding actually need the rebuild to happen and that you’re fixing more than you’re hurting by doing it.
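If you want to see what’s actually eating tempdb while the job runs, here’s a minimal sketch (page counts times 8 give KB):

```sql
-- What kind of objects are filling tempdb right now?
SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,     -- temp tables, table variables
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb, -- sorts, spills, hash joins
       SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
       SUM(unallocated_extent_page_count)       * 8 AS free_space_kb
FROM tempdb.sys.dm_db_file_space_usage;
```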
Any gotchas with upgrading SQL Server 2005?
Tara Kizer: Alright, one last question from James. He’s got a brand new job, and they have a SQL Server 2005 server. It’s not his fault, he wants to make that clear; this is a new job, he’s inherited it. I don’t want to say how old he was in 2005. “Are there any gotchas I need to watch out for when I upgrade?” I mean, what are you upgrading to? I don’t think you can even go straight from 2005 to 2016; you need a hop. So you’re going to need a 2012 box, and I wouldn’t recommend just stopping at 2012. We’re in 2017, so I’d probably go for 2016 or 2017 – I don’t care which of the two, really, unless you’re going for some feature – but you’re going to need to do a hop, probably to 2012 and then over to the new version.
Now, you can do that all in the same maintenance window, because it’s just going to be a backup and restore, then another backup and restore – and you don’t have to do full backups each time; you can do this through log shipping and just keep sending logs over to the newest server. As far as gotchas, I would recommend running the Upgrade Advisor tools. If you’re using stored procedures, it can analyze your code and let you know if you’ve got anything that may break in the newer version – but obviously, you’re going to have to test this in a test environment. In companies I’ve worked for, we’ve done full regression tests; it can take three months of testing just for a SQL Server upgrade. Obviously, they try to get application bug fixes and features in as well during that time, but do a full test cycle. The Upgrade Advisor can be helpful for that; I’m pretty sure it still exists in newer versions.
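For reference, a minimal sketch of the two-hop backup and restore Tara describes, with made-up database names and paths – a real cutover would lean on log shipping or differential/log backups to shrink the window:

```sql
-- On the SQL Server 2005 box:
BACKUP DATABASE YourDb TO DISK = N'\\share\YourDb_2005.bak';

-- On the intermediate 2012 box (add MOVE clauses if the drive layout differs):
RESTORE DATABASE YourDb FROM DISK = N'\\share\YourDb_2005.bak'
    WITH RECOVERY;  -- must recover (and upgrade) before backing up again
BACKUP DATABASE YourDb TO DISK = N'\\share\YourDb_2012.bak';

-- On the 2016/2017 box:
RESTORE DATABASE YourDb FROM DISK = N'\\share\YourDb_2012.bak'
    WITH RECOVERY;
```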
Erik Darling: Yeah, the other thing I would just be aware of is the new cardinality estimator. So if you’re moving up to 2014 or beyond, the new CE can really help queries, really hurt queries, or be a strange mix of the two. So before you go and set that database to the new compatibility level, make sure that you do some testing on how your queries react to it.
Richie Rump: Or just roll it back to the old compatibility level until you’ve had time to test.
Erik Darling: Yeah, whatever, Richie. Thanks for being thoughtful.
Richie Rump: I’ll stop, dude. All my thoughts right now are on the Last Jedi, so…
Erik Darling: You’re in a very Jedi state of mind being that thoughtful about things, rolling stuff backwards and forwards and testing. Look at you. I bet you’d unit test that, wouldn’t you?
Richie Rump: I absolutely would.
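A minimal sketch of Richie’s rollback option, with a made-up database name – the DATABASE SCOPED CONFIGURATION flavor needs SQL Server 2016 or later:

```sql
-- See where the database sits today:
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'YourDatabase';

-- Option 1: keep the old cardinality estimator by staying on an older
-- compatibility level until you've tested (the new CE arrives at 120):
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 110;  -- 2012-era CE

-- Option 2 (2016+): take the new compatibility level, but flip the
-- legacy CE switch per database:
USE YourDatabase;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
```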
DBAtools.io plug
Tara Kizer: One last comment from Thomas, just because it’s based on James’ question: “dbatools for migration, for the win.” dbatools is a collection of PowerShell scripts; I highly recommend checking out the website and seeing if there are any scripts that could help you do your job. They do have migration scripts, so that is helpful. But as a long-time DBA, I’m so used to my own backup-and-restore process – doing a differential and chaining together transaction logs – that it’s super easy; I don’t need a script. I have custom T-SQL scripts for it anyway, so I wouldn’t bother with PowerShell for that task, because I’ve been around so long.
Erik Darling: Maybe if I had to transport a bunch of settings and users and all that other stuff, the PowerShell command would make sense. But just for moving the actual data over, I’m going to want something that reduces the cutover time, not something that is just part of the cutover time. Like if it’s just doing a straight backup and restore for a smaller database, not a big deal. As soon as you start getting up into like 50GB, 100GB, 200GB plus databases, you’re really going to want to do mirroring or log shipping or something else that gives the ability to do a faster cutover to the new server.
Richie Rump: Yeah, and if you’re not familiar with PowerShell, I wouldn’t start with a migration trying to get familiar with it.
Tara Kizer: Definitely in a test environment first for it. Troubleshooting those errors is horrible.
Erik Darling: Yeah. I think we have reached about the limit of our time here. We went a little bit over because we talked about Zelda too much; I hope you’ll forgive us – we tried to make it up on the back end. Thank you, everyone, for joining us. I apologize for my potato-y audio and video; I’m going to call Time Warner and yell at them again. Tara and Richie, do you want to say anything?
Tara Kizer: Nope, bye.
Richie Rump: Go climb something…
Tara Kizer: Bye.
David Klee, “Testing network throughput with iperf” (Convergence of Data and Infrastructure): https://www.davidklee.net/2013/11/11/testing-network-throughput-with-iperf/