[Video] Office Hours 2016/09/07 (With Transcriptions)

This week, Brent, Richie, Erik, and Tara discuss parameter sniffing, database backups, referential integrity in databases, clustering, creating and using indexes, in-place upgrades, benchmarking tools, and more.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.
Leave us a review in iTunes

Office Hours Webcast – 2016-09-07

 

What’s parameter sniffing?

Brent Ozar: J.H. says, “Would the following two queries with the same execution plans but with different costs be considered parameter sniffing?” He’s got two different queries with just different where clauses, specifying different values. Totally. Imagine for a second that one has more data than the other, one has a million rows and the other has just five rows. So they could totally end up—I’m fresh off of parameter sniffing, I’m giving our parameter sniffing class as we speak during our week-long performance class so I immediately jump in. I’m like, “I should answer that, no one else knows.” Everyone else on the team knows.

Erik Darling: No, that’s like textbook parameter sniffing to me, right? When just one thing changes in the where clause. Like given that query, it’s probably even just getting a trivial plan so SQL is just like, “Oh, I got this thing… go do something…”

Brent Ozar: What’s your favorite resource on parameter sniffing, if you wanted to go learn more about it or go tell someone to learn?

Erik Darling: Ours. I would look it up on our site.

Brent Ozar: That’s not fair.

Erik Darling: Yes it is. It’s totally fair. This is our webcast.

Brent Ozar: We need a shortcode on that.

Richie Rump: [inaudible 00:01:05]

Brent Ozar: We need a shortcode for parameter sniffing on our site come to think of it.

Erik Darling: You know, that’s actually a good question because I feel like I’ve learned so much in small pieces over the years. I couldn’t point to one thing that I’m like, “Oh, that’s where I figured it all out.” It’s been like just little chunks and bits of, “Oh, that’s…oh…that’s… and I fix it with what…”

Tara Kizer: J.H. follows up and says, “Slow query has much less rows. Kind of weird.” I would say you could try out the OPTIMIZE FOR option if you’re on SQL 2008 or greater. Try optimizing for whichever value has the faster speed.

Brent Ozar: Yeah, know that it’s technical debt. When you do that, you’re hardcoding it into your query.

Erik Darling: I would actually try recompiling first, just to see if you get a different plan for one or the other.
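In a stored procedure, Tara’s OPTIMIZE FOR suggestion and Erik’s recompile test look roughly like this (the table, column, and procedure names here are made up for illustration):

    -- Tara's approach: pin the plan to a representative value.
    CREATE PROCEDURE dbo.GetSalesByCustomer
        @CustomerID int
    AS
    SELECT SalesAmount
    FROM dbo.Sales
    WHERE CustomerID = @CustomerID
    OPTION (OPTIMIZE FOR (@CustomerID = 12345));
    GO

    -- Erik's test: swap the hint for OPTION (RECOMPILE) to compile a fresh plan per execution.

Both hints trade something away: OPTIMIZE FOR hardcodes an assumption about your data (the technical debt Brent mentions), and RECOMPILE spends CPU compiling every time.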

Tara Kizer: I would argue that they don’t have the same execution plans. They may be the same shape and have the same operators, but they are not the same if the costs are different.

Brent Ozar: Yeah, agreed. Especially, it just may just be larger amounts of data that’s moving from one place to another but your giveaway there is that the costs are different.

Tara Kizer: Yeah, the arrows in between the operators are going to have different sizes or whatever it is.

Brent Ozar: If you have access to SQL Server Management Studio 2016 (Management Studio 2016 is totally free to download), there’s a new compare showplan feature where it will show you just the things that are different between two execution plans. It’s not perfect. It’s got a lot of gotchas and drawbacks, but if you google for SQL Server Management Studio 2016 showplan or compare showplan, a bunch of folks have written up how to go about using it.

Erik Darling: You know what, actually, given his circumstances and the relatively small number of questions that we have, we can spend some extra time on it. I would actually start looking at statistics at this point. If the slow query is touching a much smaller number of rows but it’s much slower, I think something weird might be happening with the histogram, like maybe it’s not picking up on the skew or something. I would take a look at that and check when stats last got updated. Or just look at the histogram and see if that value falls off a histogram step and SQL is just guessing a bad number of rows for it.
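To check what Erik describes, when statistics were last updated and what the histogram thinks about that value, something like this works (the table and statistic names are placeholders):

    -- When were the statistics on this table last updated?
    SELECT s.name AS stat_name,
           STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM sys.stats AS s
    WHERE s.object_id = OBJECT_ID('dbo.Sales');

    -- Inspect the histogram steps for one statistic:
    DBCC SHOW_STATISTICS ('dbo.Sales', 'IX_Sales_CustomerID') WITH HISTOGRAM;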

Brent Ozar: The other thing we bring up a lot is Erland Sommarskog’s epic post “Slow in the App, Fast in SSMS.” So if you search for “Slow in the App, Fast in SSMS,” Erland has like this 50-page long treatise on parameter sniffing. It gives all kinds of different causes and all kinds of different fixes too.

Richie Rump: Wow. That only took four minutes to bring up that post. That must be a record.

Brent Ozar: I was going slow. I was like, “I shouldn’t… I shouldn’t… Not introduce this immediately.”

Richie Rump: Pull the trigger.

Brent Ozar: I can’t believe I don’t have a parameter sniffing shortcode for that. Like BrentOzar.com/go/ — it should probably be Erland, because I just reference that guy so much for that one post.

Erik Darling: There are so many great treatises, like biblical, like “In the beginning, there was SQL Server 2000.”

Brent Ozar: I did an entire week of newsletters just of Erland Sommarskog’s posts at one point. One of our Monday newsletters was nothing but Erland’s posts. Just wonderful, the gift that keeps on giving. Which reminds me that I need to set up a watch-that-page alert for whenever his posts change.

Erik Darling: It changes.

Brent Ozar: Yeah.

Erik Darling: Because he is nice enough to keep them updated.

Brent Ozar: He is.

 

Should I run my backups as an SSIS job?

Brent Ozar: Frederick asks, “Can you guys think of any reasons to run your database backups through an SSIS job? I took over servers from another DBA and it seems he was obsessed with SSIS. I want to replace all these backup jobs.” Tara, did you do backups through SSIS? Because I know you were an SSIS and RS fan.

Tara Kizer: Of course not, I didn’t do database backups through SSIS. But to answer the question directly: are there any reasons to run them through SSIS? No. But is there any reason to change it? Not really, if it’s working. But if you want to change it and the guy is no longer around and you’re not going to hurt his feelings, change it. Use a custom solution. I mean, it’s working though.

Erik Darling: Yeah, one reason that I can think of to do it is if I had a centralized SSIS server that I wanted to run jobs against a bunch of other servers from. So if I wanted to not manage backup jobs on each and every server and I just wanted to have one central server that I could reach out to other stuff with, I might do it that way then, but I also have never opened SSIS, so.

Tara Kizer: Yeah, we did have a central SSIS server to run everything on all the AGs and stuff, but the actual backups were on the actual SQL instance because you want to be able to back up system databases. You can still do that through SSIS, but we just had the other SSIS things on the central server.

Richie Rump: It just smells like bad idea jeans. You know?

Brent Ozar: It does smell like bad idea jeans, yeah.

Erik Darling: It smells like someone was most comfortable with SSIS and just used that hammer for every nail.

Richie Rump: Such an awkward hammer, oh my gosh.

Erik Darling: Well, it’s a graphical hammer, where you can put things where you want them and then point arrows to other things.

Richie Rump: There’s no leverage in a hammer.

Erik Darling: It’s actually a two-dimensional hammer.

[Laughter]

Brent Ozar: To be fair, we like SSIS. Nothing wrong with SSIS. Just use it for what it’s meant for; backups are kind of an oddball thing for it. I tried to come up with reasons why someone would want to do that and I’m like, “I’ve got nothing.”
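If you do replace those SSIS packages, the replacement can be as plain as a native backup command in an Agent job step; this is just a sketch with made-up database and share names (a maintenance solution such as Ola Hallengren’s scripts is another common route):

    BACKUP DATABASE [YourDatabase]
    TO DISK = N'\\backupshare\sql\YourDatabase_FULL.bak'
    WITH COMPRESSION, CHECKSUM, INIT;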

 

Why are my deletes taking five seconds per record?

Brent Ozar: The Unknown Unknown asks, “I’m deleting records from a table and it takes five seconds per record.” Five seconds per record. We will now pause and simulate how long it takes this man to delete one record. One… Two… Three… Four… Five… He says, “There are no triggers in the database. What else should I look for?”

Tara Kizer: See what the query is waiting on. Do sp_whoisactive, sp_who, sysprocesses, whatever your favorite tool is to take a look at current activity. See what that’s waiting for. Probably blocking, I’m assuming?

Brent Ozar: Oh, I like that. Richie, what did you think?

Richie Rump: I’d check the indexes, maybe it’s trying to do a lookup on a foreign key somewhere.

Tara Kizer: That’s true, yeah.

Brent Ozar: Oh, I like that too. Erik, how about you?

Erik Darling: I would not look at index fragmentation. Let’s see, I think I’m with Tara on this one with the blocking.

Brent Ozar: Yeah I love sp_whoisactive for that.

Erik Darling: It might be a heap being weird, like it’s going through forwarded fetches or whatever to find what it needs.

Brent Ozar: Oh, a heap with large amounts of deallocated space, if you have to scan the whole heap, that could do it too. Oh, this would make a great blog post. So Unknown Unknown, I’ll actually just bang out a blog post about all the places I would go look next, but I think the start of all our troubleshooting would just be to run sp_WhoIsActive to see what that wait type is for that query while it’s running.
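A minimal version of that first step, assuming Adam Machanic’s sp_WhoIsActive is installed on the server:

    EXEC sp_WhoIsActive;                 -- current sessions, wait types, and blocking
    EXEC sp_WhoIsActive @get_plans = 1;  -- also pull the plans for the running statements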

Erik Darling: I’d also kind of be opposed to deleting things one record at a time. Like if the mechanism you’re using to delete one record at a time is slow, like if it’s a cursor or a weird while loop that does some other stuff.

Brent Ozar: Or the foreign key referential integrity thing, if it’s even deallocating heaps, I mean that one at a time would kill you there too.

Erik Darling: Yeah, a lot of stuff.

Richie Rump: Yeah, one of the things that I did when I used to go into clients, first thing I’d do is, “Oh look, you don’t have any indexes on your foreign keys.” I just had a script that automatically populated all of them. And all of a sudden it’s, “You’re a genius. What did you do?” “Oh, I don’t know, some secret sauce or something I can’t divulge,” blah blah blah. But yeah, it’s the easiest thing to do. Put indexes on your foreign keys and all of a sudden your updates and your deletes are faster too.
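The kind of script Richie describes starts with finding foreign keys that have no index leading on the referencing column; here is a rough sketch (it only checks the first key column of each foreign key, so treat it as a starting point, not a finished tool):

    SELECT fk.name AS foreign_key,
           OBJECT_NAME(fk.parent_object_id) AS table_name,
           c.name AS column_name
    FROM sys.foreign_keys AS fk
    JOIN sys.foreign_key_columns AS fkc
      ON fkc.constraint_object_id = fk.object_id
     AND fkc.constraint_column_id = 1
    JOIN sys.columns AS c
      ON c.object_id = fkc.parent_object_id
     AND c.column_id = fkc.parent_column_id
    WHERE NOT EXISTS
        ( SELECT 1
          FROM sys.index_columns AS ic
          WHERE ic.object_id = fkc.parent_object_id
            AND ic.column_id = fkc.parent_column_id
            AND ic.key_ordinal = 1 );   -- no index leads on the foreign key column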

 

Should I enforce referential integrity in the database?

Brent Ozar: This makes for an interesting question. How do you guys feel about referential integrity enforced in the database? So like do you like having foreign keys? It’s so interesting because Tara goes thumbs up. Richie goes thumbs down, or thumb sideways. Erik, how about you? Any opinion?

Erik Darling: Absolutely. I like having referential integrity enforced in the database for perf reasons, aside from when things get weird. If you do have foreign keys and you index them properly, you can get join elimination and some other nice stuff out of execution plans. But there are some issues when you have cascading actions attached to your foreign keys, especially if you’re altering large amounts of data. Under the blankets, what SQL does in order to enforce the cascading actions is put the statement into the serializable isolation level, which can cause some pretty nasty blocking situations if you’re not careful.
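For reference, the cascading behavior Erik warns about is something you opt into when you declare the key; a sketch with made-up table names:

    ALTER TABLE dbo.OrderLines
      ADD CONSTRAINT FK_OrderLines_Orders
      FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID)
      ON DELETE CASCADE;   -- deleting an order silently deletes its lines too

Leaving off the ON DELETE CASCADE clause gives you the integrity check without the cascading work.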

Brent Ozar: Tara, you immediately put your thumb up, like way up. So why is it—you’re really excited about it. What do you like about it?

Tara Kizer: As a production DBA, you want to protect the data. You can’t rely on the developers to do the referential integrity, to follow the business rules in the application. So you want it on both sides really.

Brent Ozar: Then it’s funny that our developer, Richie, immediately put his thumb down.

Tara Kizer: Of course. This is why DBAs don’t like developers.

Richie Rump: I thought we were going directly towards the cascading, which I hate. I mean, it’s just really annoying. Referential integrity, yes, absolutely. Cascading, thumbs down, bad. I’ve had arguments with architects for some people I used to consult for about having referential integrity inside the database because, “Oh, the app will do it. It will always do it.” “You’ll never load data into the database directly, with like a bulk load?” “Well, occasionally.” “Well, don’t you think it’d be nice to have the referential integrity there to protect the data?” “Well, maybe.” Yeah. Okay, yes. And the answer is you need it in both.

Erik Darling: One reason why I’m really in favor of doing it via foreign keys and stuff is because people will often fall back on triggers as a second option to enforce it and that just ends up with all sorts of bad news stuff happening.

Brent Ozar: Yeah. I like it because I don’t usually see it as a big performance drawback. Why would you not want data reliability at hardly any performance impact?

Erik Darling: It’s just a few asserts in your execution plan, what’s the big deal?

Brent Ozar: Just a few asserts. Besides all those table scans that you’re doing for other reasons, this is the least of your problems.

 

Is VMware HA as good as SQL clustering?

Brent Ozar: J.H. says, “Our VM SAN admin is trying to persuade our DBA team to move away from Windows clusters. He said that his VMware technology provides high availability. I think we still want clustering. What do you guys think?”

Erik Darling: Prove it.

Tara Kizer: You can have both. Do it on the VM side as well as on the Windows side. You want to have a failover cluster or some other high availability solution in place for Windows, database mirroring, availability groups, whatever, so that when you do Windows patching you don’t have a lot of downtime waiting for the VM to reboot. You just have the 30-second failover, for instance. The technologies work in conjunction with each other. I mean, they’re completely separate, but you can do both.

Brent Ozar: The more protection the better, right?

Tara Kizer: Yeah.

Brent Ozar: The problem with the VM as high availability is that when somebody screws up a Windows patch or a SQL patch or drops a table or trashes a system object, all of a sudden I can be totally screwed. Somebody filling up the C drive is the other classic example. So again, there’s nice high availability in clusters.

Erik Darling: Another thing that a lot of people don’t think through when they do that is that you then have to set up some VM anti-affinity rules so that your nodes don’t end up on the same host at the same time, because that kind of ruins your high availability. Because if that one host goes down and both servers are on it, goodbye.

Brent Ozar: High unavailability.

Erik Darling: Yes, you’re highly unavailable.

 

Can I use statistics to figure out where I need indexes?

Brent Ozar: Brent asks—not me—but another Brent asks, “Is there a way to use statistics to see where I should put indexes? So if SQL Server has automatically created several [inaudible 00:12:36] stats, should I create indexes on those same fields?” Erik got excited there.

Tara Kizer: Take it away, Erik.

Erik Darling: I did, because that’s really cool, because SQL does create statistics on columns if it finds them interesting. Whether you need an index there is a good question, though, because you wouldn’t just go and create a bunch of single-column indexes on every column that SQL has created a statistics object for. And you wouldn’t want to create one wide index across all of the columns that SQL has created statistics objects for, because SQL only builds a histogram on the left-most column of an index. Obviously, your where clauses or your joins may reference more columns than whatever that outermost column of the histogram covers. So I wouldn’t just haul off and be like, “Yeah, let’s just create an index on everything that SQL has a statistics object for.” I would go back and start looking at my queries. I would start looking at my execution plans, either for straight-up missing index recommendations or for some obvious index tuning opportunities, like if you have any big sorts or any hash joins on small amounts of data, stuff like that.
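To see which column statistics SQL Server has auto-created, the ones the question is asking about, something like this works:

    SELECT OBJECT_NAME(s.object_id) AS table_name,
           s.name AS stat_name,
           c.name AS column_name
    FROM sys.stats AS s
    JOIN sys.stats_columns AS sc
      ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
    JOIN sys.columns AS c
      ON c.object_id = sc.object_id AND c.column_id = sc.column_id
    WHERE s.auto_created = 1
    ORDER BY table_name, stat_name;

As Erik says, treat the list as a hint about which columns get filtered on, not as a set of indexes to go create.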

Richie Rump: Yeah, my favorite tool to use would be sp_BlitzCache, take a look at that and see what’s being slow in cache.

Erik Darling: Or sp_BlitzIndex in mode three will get you all your missing index requests.

Tara Kizer: Is it mode three? It’s mode four.

Erik Darling: Three for just missing indexes.

Tara Kizer: Oh.

Brent Ozar: Four gets you everything. Three only gets you the missing indexes. It’s funny, we all use them in different ways. I’m like, yeah, I never use mode three.

Tara Kizer: I’ve used mode three but it doesn’t show any output. So maybe it was a different version. Well, there’s missing indexes. Maybe it was a bad version.

Brent Ozar: Which we have a lot of. Speaking of which, that also reminds me I’ve got to fix something—I was doing a demo today and I realized that sp_BlitzIndex now, with @GetAllDatabases = 1, isn’t sorting the indexes in impact order. They’re sorted backwards by impact order. I’m like, oh man, I’ve got to go look at that.
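A quick sketch of the calls being discussed, assuming the First Responder Kit procedures are installed:

    EXEC sp_BlitzCache @SortOrder = 'reads';   -- Richie's suggestion: the most expensive plans in cache
    EXEC sp_BlitzIndex @Mode = 3;              -- missing index requests only
    EXEC sp_BlitzIndex @Mode = 4;              -- the full diagnostic detail
    EXEC sp_BlitzIndex @GetAllDatabases = 1;   -- the server-wide view Brent mentions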

 

Why do your First Responder Kit stored procedures start with sp_?

Brent Ozar: Unknown Unknown asks in a related question, “Why do your first responder…”

Erik Darling: Sirhan Sirhan

[Laughter]

Brent Ozar: Madonna Madonna asks, “Why do your First Responder Kit procedure names start with the sp_ prefix? Doesn’t Microsoft still advise avoiding this special prefix?” Okay, so I’m answering this because this is one of my personal hot buttons and Aaron Bertrand can go suck it. So Aaron Bertrand is one of the other big guys in the SQL Server community who is like, “Don’t name your stored procedures sp_ whatever, you know, a row and a record are different, fields and columns are different.” So there are edge cases where the call to a stored procedure can incur some extra penalty if SQL Server has to do name resolution to find the stored procedure, because it’s going to look in master when stored procedures start with sp_. Guess where people put the Blitz stored procedures? In master. Guess where they want to run them from? Every database. They want to be able to run, for example, sp_BlitzIndex from any database without prefixing anything. Here’s the deal. I don’t care if all your stored procedures begin with sp_. If the time it takes to resolve a name is the biggest problem in your environment, you deserve a Medal of Honor and a big slab of chocolate cake. That is not any kind of significant performance impact for any environment that I’ve ever seen. Meanwhile, I’ve seen people going, “Make sure your stored procedures are named correctly” and the code inside the stored procedure is so heinous I need to take a shower after I look at it. So, focus less on the names and more on the content. Now, having said that, you probably shouldn’t look at the content of our stored procedures either because that’s heinous too.
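The name resolution behavior Brent is describing is easy to see with a throwaway procedure; the procedure and database names below are hypothetical:

    USE master;
    GO
    CREATE PROCEDURE dbo.sp_HelloOfficeHours
    AS
    SELECT 'hello from master' AS greeting;
    GO
    USE SomeUserDatabase;       -- any other database on the instance
    GO
    EXEC sp_HelloOfficeHours;   -- resolves to the copy in master, no master. prefix needed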

Erik Darling: And it’s a little bit awkward for Aaron to say that considering he wrote sp_foreachdb.

Brent Ozar: That’s true. Yeah.

Erik Darling: Yeah, a little awkward. So sp_foreachdb is really cool. It’s sort of Aaron’s replacement for the built-in stored procedure MSforeachdb, which loops through your databases and has a bunch of known problems, like skipping databases and not being able to figure things out. So Aaron rewrote this cool thing called sp_foreachdb. It has a bunch of awesome flags where you can skip over things and put in name patterns and skip system databases and all good stuff. I used to use it all the time back when I was doing Relativity stuff and I had to deploy indexes across all the databases. Because it was a lot of work to make sure that there were actually columns that matched the names in all those… God, I’m going to go cry after this.

[Laughter]

Brent Ozar: Bad times episode.

 

Where do sp_BlitzIndex’s missing index recommendations come from?

Brent Ozar: Nate asks, “sp_BlitzIndex has recommended some high value missing indexes that don’t seem right, especially ones that include almost every column in the table. Why is that and where are those indexes coming from?”

Erik Darling: Anyone? All right, fine.

Brent Ozar: You worked on it recently, so.

Erik Darling: They come from missing indexes. I know, it was probably me, I may have forgotten to put descending in for that…

Brent Ozar: No, I tested it too and it looked good to me.

Erik Darling: Oh, crap. All right, it’s something else then, woo hoo. So missing index requests come from a part of the query optimization process called index matching. What index matching does is SQL Server takes a query and says, “If I needed the perfect index for this query, I would go and look for this one.” It sort of throws the dice, and if it doesn’t find exactly the index that it wants, and it feels that that index could reduce the query cost by some internal-algorithm percent or amount, then it logs that opportunity in the missing index DMVs. sp_BlitzIndex only looks at the missing index DMVs. We try to do some smart filtering on what would actually improve performance, but we can’t know everything about your server just from the missing index DMVs. The reason that you’re coming up with these indexes that are keyed on one or two columns but include every other column in the table is that someone is either running select * queries or you have EF developers who are not trimming down the columns that they actually need from their EF queries. So you really are selecting every single column in the table, and your where clause is only keyed on perhaps an ID column, or maybe an ID and a date column or something. So that’s why it has those two key columns there, but the only way for SQL to make an efficient index to resolve that query is to include every single other column, because it’s dragging back every single column and it may not want to do key lookups for all of them, even if it’s just keyed on a couple for the where clause or join. That’s all I got to say about that.
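The raw DMVs Erik is describing can be queried directly; a simplified version of the kind of thing sp_BlitzIndex reads looks roughly like this:

    SELECT d.statement AS table_name,
           d.equality_columns, d.inequality_columns, d.included_columns,
           gs.user_seeks, gs.avg_total_user_cost, gs.avg_user_impact
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
      ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS gs
      ON gs.group_handle = g.index_group_handle
    ORDER BY gs.user_seeks * gs.avg_total_user_cost * gs.avg_user_impact DESC;

The included_columns list is where the “every column in the table” requests show up when the underlying queries are doing select *.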

Richie Rump: In defense of Entity Framework folks, sometimes we do need to get all the data from a table.

Erik Darling: That’s fine, create an index for it.

Richie Rump: But not all the time, people, it’s ridiculous. Sounds like a post I’ve got coming out soon.

Brent Ozar: There is a post. Richie has a post coming out about how to change Entity Framework so that it doesn’t select all the columns. It turns out it’s easy. So easy a caveman could do it. I was impressed. I’m a caveman.

Erik Darling: I still couldn’t do it. I’m still partially frozen, so.

Brent Ozar: Just a simple unfrozen caveman.

 

Are there any gotchas with failing over an Always On Availability Group to DR?

Brent Ozar: Temple asks, “I have a four-node cluster running SQL 2012 with Always On Availability Groups configured. I have one node and a fileshare off at another site. I want to fail over to that other site and test our DR plan.” Any gotchas from our experience?

Tara Kizer: Oh, I got this one, I got this one.

Erik Darling: Take it away.

Tara Kizer: You only have two resources at that other site. So if you lose the network connection between the two sites, your cluster is likely going to go down, and your production databases will be down. So you’re going to need a third resource, so maybe have another node. Having one node out there is not the greatest. What happens if something happens to that node? So maybe two nodes out there plus your fileshare witness. You don’t want just two servers out there holding quorum votes.

Brent Ozar: That would be awkward.

Tara Kizer: Yeah, when I first set up an availability group in production we had six nodes, three at the DR site, three at the primary site. This was on SQL 2012. One of the servers at the DR site was a cold standby; you couldn’t have that many replicas, but we needed to have identical hardware at both sites. That was a requirement that we had with our customers in case we had a failover. So we did have a network glitch between the two sites. That is going to happen at times. Before that, we’d always used failover clustering, and the Windows admins had taken care of quorum and votes. With availability groups, that falls on both teams, basically. I didn’t know about votes, and suddenly production was down when we lost the connection between the two sites. That is when I learned about cluster votes. We had to disable the votes on the DR site, and then we had node majority at the primary site.

Erik Darling: Which version of Windows was that with?

Tara Kizer: Yeah, that would have been Windows 2008 R2. With Windows 2012 R2 you do have dynamic quorum and it can manage that for you, and a fileshare witness is recommended and, I believe, the default on that version of Windows. Still, you should understand quorum and votes even when using 2012 R2.
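One quick way to eyeball the votes from the SQL Server side, assuming an availability group is involved, is the cluster DMV:

    SELECT member_name, member_type_desc, member_state_desc, number_of_quorum_votes
    FROM sys.dm_hadr_cluster_members;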

Brent Ozar: Oh yeah, because if you lose network connectivity between the two data centers and too many nodes fail all at once, even 2012 R2 can’t recover in time. It’s going to think that there was a split-brain scenario and yank the plug.

Erik Darling: But you can do forced quorum resiliency and you can bring one side up, right? Isn’t that the way that works?

Brent Ozar: Well, you have to do it, and then you have to restart the cluster again because you can’t start an availability group when you did a forced quorum. I have an hour-long session during the senior DBA class where I walk through this happening and show people screenshots, and their eyes just kept getting bigger and bigger. Like, you should do this before you have to do it live. That, of course, is only one of the gotchas there. There can be all kinds of other gotchas. Make sure you enable your backup jobs over on the other side. I’m a huge fan of, the instant that I fail over, if I don’t already have full backups on the other side, I need to take a full backup as quickly as possible. If I have fulls but no logs on the other side, I need to take a differential as quickly as possible. Make sure that your log backup jobs are up and running over there. Your user accounts can be different on both sides. You need to sync SQL Agent jobs, all kinds of things you’ve got to sync across there.
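Part of that checklist, confirming what backups actually exist on the new primary, can be eyeballed in msdb; a rough sketch:

    SELECT database_name,
           MAX(CASE WHEN type = 'D' THEN backup_finish_date END) AS last_full,
           MAX(CASE WHEN type = 'I' THEN backup_finish_date END) AS last_diff,
           MAX(CASE WHEN type = 'L' THEN backup_finish_date END) AS last_log
    FROM msdb.dbo.backupset
    GROUP BY database_name
    ORDER BY database_name;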

 

Should I set MAXDOP = 1 if I’m running 300,000 queries per second?

Brent Ozar: Unknown asks, “Could there be some performance improvements combining NUMA with MAXDOP equals 1? Sorry if my question is vague.” It is. “But I heard this a few days ago from the DBA of a high throughput system, 300,000 transactions per second, and I wanted to ask you.” So, I’m not saying 300,000 transactions a second is impossible, it’s doable, but that person is probably doing some very specialized tricks. I would ask for more information about what the system is like and where that guidance comes from, because there are people out there—dagnabbit—what’s the guy in the suit? Thomas Kejser. Thomas Kejser is a classic example of the guys who do this. Who’s the other guy in New York?

Erik Darling: Thomas Grohser.

Brent Ozar: Thomas Grohser, yes. These guys have great stories, they’re so cool to listen to, but what they’re doing is very different from what the rest of us great unwashed out in the gutters are doing.

Tara Kizer: With 300,000 transactions per second, if you looked at batch requests per second, it’s probably much lower. Usually transactions per second is a very inflated number compared to batch requests per second.

Brent Ozar: Well, and, I have so many thoughts on that. If they are doing 300,000 per second, you probably do only want MAXDOP 1; ain’t nothing going parallel when you’re doing 300,000 queries per second.

Erik Darling: Yeah, at that point, you’re doing very narrow key lookups, you’re doing one-row-at-a-time stuff. You’re not doing reporting queries. Everything is very properly indexed. Your isolation levels are on point and changed within the stored procedure. You’re doing everything right.

Brent Ozar: Well, I would argue you’re doing one thing wrong, which is you’re doing 300,000 queries per second against a single box. You should shard that out and spread the load around, but that’s cool.
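For reference, the instance-wide knob being discussed looks like this; as always, test the change rather than taking this as a recommendation:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;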

 

When is SQL Server 2018 coming out?

Brent Ozar: J.H. says, “I saw your recent email on SQL 2018 features. Any idea when that’s coming out?” Should we go migrate now to SQL 2016, or is the community not at all confident yet? J.H., that was a joke. If you read those features carefully, for example “scented error messages,” your server is pretty safe. Those are not coming out any time soon.

Richie Rump: No, but if it was?

Brent Ozar: What if it was? Then I would hold out for the edible result sets. That alone would be worth the upgrade.

 

The Upgrade Advisor says it’s okay, so should I?

Brent Ozar: Person with a very tough to pronounce first name, whose first initial starts with “s” says, “I have a SQL 2012 to 2016 upgrade I want to do. I want to do it in-place. I ran the upgrade advisor. I only got a few warnings, so should I go ahead and upgrade this thing in-place?” What do you guys feel about in-place upgrades?

Tara Kizer: I didn’t even used to do in-place upgrades in the test environment. Just too much disaster can happen. You don’t have a fallback plan if you do an in-place upgrade; just imagine trying to undo it if it doesn’t work. I’ve done side-by-side upgrades when each instance exists on the same physical server or VM, because that’s still considered side-by-side when the instances are next to each other, you know, backup and restore, detach and attach, whatever it is. I usually prefer, though, that the servers be completely different. But an in-place upgrade I think is risky, even in a test environment.
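The side-by-side move Tara describes usually boils down to backup and restore onto the new instance; a sketch with made-up names and paths (the MOVE clauses have to match the database’s actual logical file names):

    -- On the old SQL 2012 instance:
    BACKUP DATABASE [YourDatabase]
    TO DISK = N'\\share\migrate\YourDatabase.bak'
    WITH COMPRESSION, CHECKSUM;

    -- On the new SQL 2016 instance:
    RESTORE DATABASE [YourDatabase]
    FROM DISK = N'\\share\migrate\YourDatabase.bak'
    WITH MOVE 'YourDatabase'     TO 'D:\Data\YourDatabase.mdf',
         MOVE 'YourDatabase_log' TO 'L:\Logs\YourDatabase_log.ldf';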

Brent Ozar: Yeah, I have the same exact feeling on it.

 

What precautions should I take on a web-facing SQL Server?

Brent Ozar: Lori says, “A developer is requesting a separate SQL Server to house the data for an internet-facing application because he’s concerned about security if his data shares stuff with the payroll application data. As a DBA, what should I do about setting up separate security precautions about this kind of application?”

Tara Kizer: I think the developer is right. I mean, I think that you should separate this out. That’s what I’ve always done. It usually has special firewall rules in place to really lock it down, and your internal stuff is going to be on the intranet. Yeah, I would not put these on the same server if I were the DBA. If you are going to put it on the same server, as far as security goes, make sure no one has sysadmin. Pay attention to what kind of server roles users have. I don’t even like db_owner for applications. I like just the minimum amount of permissions for each application account. I like stored procedures in that sense, so it’s just exec.
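Tara’s “just exec” approach can be sketched like this, with made-up role and user names:

    CREATE ROLE app_exec_only;
    GRANT EXECUTE ON SCHEMA::dbo TO app_exec_only;   -- stored procedures only, no direct table access
    ALTER ROLE app_exec_only ADD MEMBER [WebAppUser];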

Brent Ozar: I would also explain what licensing looks like because on an internet-facing server you are licensing by the core. There’s no CAL licensing available on that, so it’s the real deal.

 

Followup questions

Brent Ozar: Unknown follows up about his 300,000 queries per second. He says, “Please don’t say the company name, but the company name is” blank. I’ve heard that company name before and I think they are doing SQL Server; I vaguely remember hearing this. I wouldn’t be surprised if they were doing 300,000 either batch requests a second or transactions a second, but you would want to shard that out. You mentioned that you’re taking an interview. This is one of those ways to strap a rocket to your back and learn a ton, if you want to learn a lot about how high-concurrency stuff works. Just make sure you strap it to your back and not your rear end, because if you go into a shop where they’ve done something like that and you’re on call, you can have a really ugly [inaudible 00:28:04].

Erik Darling: That’s weird, I’ve never heard of Gabby’s Goldfish before.

[Laughter]

Richie Rump: I love them.

Brent Ozar: Contoso. Interviewed at Contoso.

Brent Ozar: The in-place upgrade guy follows up and he says, “This is SharePoint.” Again, yeah, we wouldn’t do in-place upgrades.

Tara Kizer: No, doesn’t matter.

Erik Darling: Makes no difference.

Tara Kizer: Could be AdventureWorks, I’m still not doing it.

Brent Ozar: Yep, no thank you.

 

Do you have a favorite hardware benchmarking tool?

Brent Ozar: Unknown says, “Do you have a favorite hardware performance benchmark tool?” If you’re going to go benchmark new SQL Servers, do you guys use anything for benchmarking? Have any of you guys ever done benchmarking? I don’t know that I have.

Erik Darling: Yes, minimal.

Tara Kizer: Yeah.

Richie Rump: I’m in the cloud now so I don’t do any of that stuff.

Brent Ozar: In the cloud, there are no benchmarks.

[Crosstalk]

Tara Kizer: I can’t remember, is it SQLIOStress? Or something like that…? Yeah, we did that. But we also had a whole performance load test team. They would just hammer the server when new hardware came in.

Brent Ozar: That’s kind of awesome. Wow. What kind of job would that be?

Erik Darling: That would be fun as hell. That would be like us at DBA Days all the time.

Brent Ozar: Oh my god, that would be so much fun.

Tara Kizer: It requires very special, like, developer knowledge, because they are writing scripts and doing programming and stuff like that. So it’s really not a DBA-type role. I liked working with them though. I got to find all these cool things.

Richie Rump: Yeah, nobody wants to be a developer. They suck.

Brent Ozar: I was going to say, no, she’s saying that because she’s seen our code.

Tara Kizer: I’m just saying from a DBA perspective going into performance load test team might be challenging if you don’t have a developer background.

Brent Ozar: Every time we bring out a new version of sp_Blitz or sp_BlitzIndex, she hears us complaining about our lack of unit testing. “Is anything broken?” “God, who knows.”

Erik Darling: Works on most of these versions, in most scenarios, but I don’t know.

Brent Ozar: Yeah, that’s about it. Well, thanks everybody for hanging out with us this week. We will see you guys on the next Office Hours. Adios everybody.

Erik Darling: Bye-bye.


Comments

  • With regard to in-place upgrades, would you still feel it’s a bad idea in a VMWare environment where we take a VM snapshot before doing the upgrade? If things go badly, we would just restore from the snapshot. We take snapshots before applying service packs or CUs, just in case. We allow time in the maintenance window for doing that restore. Wouldn’t this work for version upgrades too?

    • Mark – wouldn’t you want to do testing before the upgrade?

      I would. Especially with all the changes to the cardinality estimator (CE) in 2014/2016. Build a new SQL Server and test the bejeezus out of it before you go live. You can’t do that with a snapshot.

      • In theory, that’s what a test system is for. In reality, you generally don’t have the same number or speed of cores or amount of RAM in test than you do in production, so you make a very good point.

        But in situations where you have very similar databases (for example, the same SaaS setup) for different customers on different VMs and you’ve already vetted it out for a couple of them, an in-place upgrade for the rest should be relatively risk-free with the snapshot.

        • Mark – I would never use the term “relatively risk-free” in combination with an upgrade, but that’s just because I’ve been burned over and over.

          Plus, you’re dealing with downtime during the upgrade. Why not build a new box and avoid the downtime altogether, just have a 60-second outage to fail over log shipping or mirroring?

