[Video] Office Hours 2017/03/01 (With Transcriptions)

This week, Brent and Tara discuss Always On failure scenarios, partitioning without consulting vendors, multi-subnet clustering with AGs, licensing, long running queries, replication, improving index maintenance, avoiding deadlocks, index space usage, and much more.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.
Leave us a review in iTunes

Office Hours Webcast – 2017-03-01

 

Should you mess with AG preferred/possible owners?

Brent Ozar: We’ll go ahead and start tackling some of the questions here. Richard Seltzer asks, “I’ve read that you shouldn’t mess with preferred and possible owners for availability group roles in Failover Cluster Manager since SQL Server dynamically sets those. But what about the failover and failback settings like max number of failovers per period, prevent and allow failback? Can I set those other than the default? If so, what should I set them to?” Have you ever touched those?

Tara Kizer: I don’t really touch anything in Failover Cluster Manager if it’s an availability group. All of my config goes into the availability group through Management Studio or T-SQL commands. In Failover Cluster Manager, I can’t really think of anything I do except checking the status of, say, the listener, that type of thing. Just looking for red/green type stuff.

Brent Ozar: I get where you’re going with the prevent/allow failback stuff, Richard, but yeah, I would never touch those either. Failbacks aren’t going to happen automatically anyway. Once an AG is running on a node, it’s just going to keep running on that node. It’s not going to come back, by default, as soon as the primary owner comes back up. I can’t think of a scenario where I’d want to touch those. That’s a really good question for Stack Exchange though. Go to dba.stackexchange.com and post it, because I bet people do have reasons for changing those settings. I just can’t think of any where I’d want to change away from the defaults.

 

How should I configure a 22-core processor?

Brent Ozar: Kyle says, “We have a server with one specific Xeon processor. It has 22 cores and 44 threads.” He has hyperthreading turned on. Dude, I don’t know if you’re trying to set some kind of licensing record on how much you pay for a single processor, but that is a lot of freaking money. “This also means that there are 11 cores per NUMA node. sp_Blitz reports that node zero has 11 cores. This is a really bad NUMA configuration. This is how the chips should be [inaudible]. Should we be concerned?” I would not run SQL Server on that. At Enterprise Edition, that’s $7,000 a core. Standard Edition, it’s $2,000 a core. Either way, that is a ton of licensing. If you really need that many CPU cores, you probably need more than one socket. I’d rather have multiple sockets broken up that way. I would want to know more about what the chip choice was with that.
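
If you want to see the layout yourself, here’s a quick sketch of roughly the kind of thing sp_Blitz checks: counting visible online schedulers per NUMA node.

-- How many schedulers (cores SQL Server can use) sit on each NUMA node?
SELECT parent_node_id AS numa_node,
       COUNT(*)       AS schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'
GROUP BY parent_node_id
ORDER BY parent_node_id;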

 

What’s faster: triggers or doing the work separately?

Brent Ozar: Next up, J.H. says, “Would performance be the same if I created after-insert triggers dumping into a history table, as opposed to having my T-SQL insert statement process that?” He says, “I’m doing it like 100 times a day. What’s the best way to capture changes: with a trigger or by changing my code?” J.H., my thing is if it’s only 100 inserts per day, it shouldn’t be that much of a bottleneck. That’s only four or five times an hour. It shouldn’t be too bad. If it’s only run that often, I’m kind of okay with it being in a trigger, just because that way there’s only one place to change the logic and you’re done. If it’s 100 times a second, then things start to get more interesting. But at 100 times a day, don’t worry about it.
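
For reference, here’s a minimal sketch of the trigger approach J.H. describes; the table and column names are hypothetical.

-- Hypothetical tables: dbo.Orders is the live table,
-- dbo.OrdersHistory is the audit/history table.
CREATE TRIGGER dbo.trg_Orders_AfterInsert
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- "inserted" holds the just-inserted rows, so multi-row
    -- inserts are handled in one set-based statement.
    INSERT INTO dbo.OrdersHistory (OrderId, OrderDate, LoggedAt)
    SELECT OrderId, OrderDate, SYSDATETIME()
    FROM inserted;
END;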

 

Where can I learn about Availability Group failures?

Brent Ozar: Brian says, “I haven’t made it through the Always On Availability Groups in the HA/DR module on your site.” Well, get to work, Brian. Come on. What are you doing? “I wanted to ask if you could recommend a site that would have a compilation of various Always On failure scenarios.” Oh, what a great question. Stack Exchange. Go to dba.stackexchange.com. Search for AlwaysOn—one word, no spaces—grammatically incorrect but that’s how they tag it over there. You will find a spectacular number of failure scenarios there.

Tara Kizer: I attended a session at PASS 2014. I forget who it was, but someone from Microsoft put it on, and it went over the most common issues, what the resolutions were, and how to monitor. It was a really nice session. I think he said it was supposed to be a half-day session because there are just so many problems, but he condensed it to the most common ones and made an hour session of it.

Brent Ozar: Man, if any of you ever get the chance to go to major conferences and you get the chance to see someone at Microsoft present about Always On Availability Groups, I would totally do it. They have a lot of really good presenters, and a lot of really good presentations too.

 

What’s a cheap place to play with the cloud?

Brent Ozar: Brian asks another question. “Is there a good, cheap place for developers to work with cloud-based Windows virtualization, such as Google Cloud or Azure?” The catch with machines up in the cloud is that there’s no difference between production and development; it’s essentially the same price to the provider whether you’re using it for production work or development. So when you talk about cheap stuff in the cloud, that gets a little bit trickier. If you just want to experiment with someone else’s VM, search for the Microsoft Virtual Labs. Microsoft has a bunch of VMs that you can spin up and use for—I want to say it’s like two hours at a time. They’re free. You can’t copy/paste. There are some other limitations, yada yada yada.

 

Can I partition a 3rd party vendor’s database?

Brent Ozar: Guillermo asks, “Have you ever partitioned a table in a third-party vendor app without consulting the vendor about it first? Is it possible to implement partitioning without breaking the app?” Tara, is that something that you would go do?

Tara Kizer: Never. The most I do in vendor databases is maybe create indexes, update statistics, that type of stuff. But yeah, I don’t touch things, because a lot of times it’s going to violate your contract, and if they see that you’ve messed with things, that could cause some problems for you. What are you trying to solve, though? Is it for performance, or are you just trying to get rid of data more easily, faster? I wonder what you’re trying to solve here.

Brent Ozar: You can totally break apps by doing that, absolutely. Man, I’ve done stuff like that. Even just by adding an index, I broke somebody’s app once, because they did a deployment script where they were renaming indexes, and they chose an index name that happened to be the same as what I was using…

Tara Kizer: Oh, geez.

Brent Ozar: And the script broke. It was a bad day.

 

How do you feel about multi-subnet AGs?

Brent Ozar: Robert asks, “How do you feel…” Wonderful. “…about multi-subnet clustering with AGs? I have one machine in one location and another machine in another subnet.” Tara, how do you feel about that?

Tara Kizer: Love it.

Brent Ozar: Love it, yes.

Tara Kizer: I love everything about availability groups. At the companies I’ve worked with, they’ve all had disaster recovery requirements, so when we deployed availability groups, all of our availability groups were multi-subnet configurations. The only issue is that legacy applications maybe can’t use the MultiSubnetFailover=True connection parameter. So you want to make sure you have database drivers that can; otherwise you might have issues connecting at times, where you get a 50 percent failure rate. There are some ways to work around it. My last client was able to work around that issue because they had a lot—I think like hundreds of legacy applications. One of the workarounds did work for them. As I was talking to them, I said, “None of these workarounds worked at my previous companies.” They’re like, “Oh, it worked for us.” I was like, “Oh, finally a success on one of those workarounds.” We tried them all.
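
For reference, the parameter Tara mentions goes in the application’s connection string. A typical .NET SqlClient example might look like this; the listener and database names are hypothetical:

Server=tcp:MyAgListener,1433;Database=SalesDb;Integrated Security=SSPI;MultiSubnetFailover=True;

With MultiSubnetFailover=True, the driver tries the listener’s IP addresses in parallel instead of one at a time, which is what avoids the intermittent connection failures on older drivers.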

Brent Ozar: He says, “Is the main pain point for it the network setup and possible latency between the sites if it’s not done right?” What would you describe as like pain points that you would worry about?

Tara Kizer: I don’t really have any pain points with the multi-subnet configuration, as long as your application can use that connection parameter. It’s only going to connect to the active site anyway. You’ll have two IP addresses and only one of those IP addresses will be active, so there isn’t a performance issue between the two. However, if you set up your replicas to be synchronous across to your other site, that’s where performance can certainly have problems due to the network latency. But that’s not really about the multi-subnet configuration question. It’s related, but not the same issue.

Brent Ozar: Did you guys use synchronous replication between multiple subnets or did you just use it for async, like DR type stuff?

Tara Kizer: Almost entirely, it was synchronous at the primary site and then asynchronous to another site. However, there was one application where we were able to get away with synchronous between the primary and the DR site, which were probably 200 to 300 miles apart. That’s because that application had such low throughput it didn’t matter: very, very small transactions. And the network latency, whatever it was, 15 milliseconds, just wasn’t noticeable enough. We wanted to limit how many replicas we had, so it made sense for that specific database to be synchronous across sites. Other than that, synchronous across sites is terrible. You want to stay away from that if performance matters.

Brent Ozar: Yes.

 

Can I use Developer Edition to verify my backups?

Brent Ozar: Reece says, “We’re finally setting up backup and restores to a new database server for verifying them. We should have done this a long time ago. If the database is production, is it legal to restore them on a Developer Edition for the purpose of verification only, or do we need to buy another Standard Edition license to be compliant?” This is tricky. What I would say is you should have a development server that you restore to every day, that your developers can go run queries and tests against. That way, you’ve accomplished the goal you’re looking to accomplish and it’s still just a Developer Edition server.

Tara Kizer: I was going to ask that question because I’ve been asking questions about this very thing this week. I was wondering because I had systems where I restored production to a development database, because the developers needed access to production data and it could be older. So they developed against it and we ran CHECKDB there. I was like, “Was I supposed to license that with production licenses?” We did have MSDN licenses for those developers, so they were covered there.

Brent Ozar: The trick is CHECKDB. As soon as you offload CHECKDB, if you’re not doing it in production, and you are doing it on this other server, then suddenly that’s magically much more like offloading production workload. That’s a little trickier.
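
For what it’s worth, the offload pattern looks something like this; the database name, logical file names, and paths here are all hypothetical:

-- Restore last night's full backup onto the non-production server.
RESTORE DATABASE SalesCheck
FROM DISK = N'\\backupserver\backups\Sales_Full.bak'
WITH MOVE N'Sales'     TO N'D:\Data\SalesCheck.mdf',
     MOVE N'Sales_log' TO N'L:\Log\SalesCheck.ldf',
     REPLACE, STATS = 10;

-- Then run the corruption check there instead of on production.
DBCC CHECKDB (SalesCheck) WITH NO_INFOMSGS, ALL_ERRORMSGS;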

 

Should I change auto create statistics for tempdb?

Brent Ozar: Tom says, “Is it a good idea to set auto create statistics to on for tempdb? I realize that it’s already defaulted to on for model, and we think tempdb gets these settings from model, since it’s recreated from model each time SQL Server restarts.”

Tara Kizer: Yeah, I’ve never even changed a tempdb setting like that. You noticed it on model; why aren’t you able to check it on tempdb? Just right-click on the database and see if it’s on.

Brent Ozar: Yeah, it’s already on. It should be on. And is it a good idea to leave it on? Yes. Yeah.
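
Checking takes one query; tempdb should show the same settings it inherited from model:

SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases
WHERE name IN (N'model', N'tempdb');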

 

How do I identify long-running queries in a data warehouse?

Brent Ozar: Bobby says, “We have a data warehouse on SQL Server 2008 R2 and [inaudible] are allowed to run ad hoc queries against it. When a query has been running for a long time, meaning hours, we can usually tune it. However, we don’t have a systematic approach to identifying long-running queries. Is there a low overhead way to gather the most long-running queries that users are running?”

Tara Kizer: Have you heard of us? We have the Blitzes. Use sp_BlitzCache to figure out what your long-running queries are. You can sort by average duration, duration, average CPU, CPU; there are all sorts of things you can sort the plan cache by, and sp_BlitzCache can help you do that work. It gives you all sorts of neat warnings that have been worked into it, plus the execution plan; there’s a wealth of data returned from sp_BlitzCache. Use that. Also, use sp_BlitzIndex for help with adding high-value missing indexes that you maybe didn’t uncover by looking at execution plans.
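
A couple of typical calls, as a sketch; the database name is hypothetical, and the available sort orders depend on your version of the scripts:

-- Top cached plans by average duration; other sort orders include
-- 'duration', 'cpu', 'avg cpu', 'reads', and 'executions'.
EXEC sp_BlitzCache @SortOrder = 'avg duration', @Top = 20;

-- Index warnings and high-value missing indexes for one database.
EXEC sp_BlitzIndex @DatabaseName = N'YourDatabase';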

Brent Ozar: You don’t have to install anything other than the stored procedure. It just works immediately. It makes you look like a rock star consultant. More like her, less like me.

Tara Kizer: And they’re very lightweight. On the servers we run them on, you won’t even notice a hit while these things are running. They may run for several minutes depending on how big your plan cache is and how complex your execution plans are, but it’s not going to add a noticeable load to your system while it’s running.

Brent Ozar: Ben says, “Tara sounds so proud, like it’s her new Blitz baby.”

Tara Kizer: Oh no, god, no. That’s not me.

Brent Ozar: That would be Erik Darling.

Tara Kizer: I modified code for sp_Blitz one time, and it’s probably been about three or four months since. I don’t know if I ever want to do that again.

Brent Ozar: I am constantly amazed that people contribute code because sometimes I’ll go into a stored proc I haven’t seen for a while and I’m like, “Oh, god.”

Tara Kizer: You don’t even know where to go and you’re trying to follow the logic and…

Brent Ozar: It’s huge.

Tara Kizer: Then GitHub is just tough also.

Brent Ozar: Yes, yes. GitHub, it took me years to get used to GitHub. Jeremiah was always pushing that on me. I’m like, “No. It’s evil. It’s awful.”

 

What’s better: AG replication or transactional replication?

Brent Ozar: Mike asks a question that I think is custom written for Tara here. “We have heard that Always On replication is much better for replication than regular SQL replication. Is that the case and why?”

Tara Kizer: I’m confused by the question. Always On replication?

Brent Ozar: Well, when you choose between Always On and say transactional replication, which is better under which circumstances?

Tara Kizer: So they’re asking about a readable secondary as compared to using transactional replication? Okay. I prefer availability groups for readable secondaries. I supported transactional replication for years and years to offload select queries to another system to avoid blocking issues; even with RCSI in place we were having issues. So we offloaded work, and when availability groups got announced and I realized it was going to be HA/DR plus readable secondaries, I was like, “Yes, finally.” My whole goal was to get rid of transactional replication. It was nice to also be able to consolidate the number of features we were using down to just one. Whether you’ll use transactional replication or a readable secondary in an availability group depends on your reporting database. If it can be identical to your OLTP database, you can use availability groups. If you need additional indexes on your reporting database, you’ll have to add those to your OLTP database, because the secondary is an exact copy of your database. With transactional replication, you can add objects, you can have different indexes. It could be hosting the database for another application and you’re just replicating the extra tables over to it. So it just depends on what that reporting database is going to look like. Can it be a copy of your production database, or is it going to be different?

 

How do I prove I have a storage network problem?

Brent Ozar: Graham says, “We have SSDs that host the data and log files for our various servers. I think the SSD drives are overwhelming the 10 gig NICs on some of our servers. We’re experiencing query timeouts. But the server team isn’t convinced that it’s the NICs. What should I look at?” I’d just run a load test straight from the SQL Server against those solid state drives. Run a load test and have your network team look at network utilization during that time. It should be very easy for them to monitor switch port utilization. Then if they say, “Oh, there’s more headroom, you could get more through it,” have them do more things, like give you more solid state drives to test against. Then you should be able to see that number go up. Generally speaking, I don’t see switch port utilization monitoring done full time. It’s something that people only drop in periodically to look at. I haven’t seen a lot of good monitoring apps that can trend that over time.

Tara Kizer: Just to add, he’s mentioning experiencing query timeouts and then going into all this SSD and network stuff. I was like, that’s not where I would be starting. I would be capturing activity via sp_WhoIsActive on a regular basis and then going back and looking at what queries did time out, maybe the ones that took 30 seconds, 31 seconds, whatever your application query timeout is, and seeing what those queries were waiting on. Start logging activity every 30 seconds, every 60 seconds, maybe even more frequently so you can catch more. Just be careful with how much you’re going to be collecting and the retention of that. But I wouldn’t be looking at the hardware first. I would be looking at what the queries are waiting on. That’s going to tell me where to go next.
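
A sketch of the logging pattern Tara describes, using sp_WhoIsActive’s built-in table support; the log table name is hypothetical:

-- One-time setup: have sp_WhoIsActive generate its own table definition.
DECLARE @schema VARCHAR(MAX);
EXEC sp_WhoIsActive @return_schema = 1, @schema = @schema OUTPUT;
SET @schema = REPLACE(@schema, '<table_name>', 'dbo.WhoIsActiveLog');
EXEC (@schema);

-- Then schedule this in an Agent job every 30-60 seconds.
EXEC sp_WhoIsActive @destination_table = 'dbo.WhoIsActiveLog';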

 

More about 3rd party database partitioning…

Brent Ozar: Guillermo asks a follow up to his partitioning question. He says, “Regarding third-party apps and partitioning their tables. The reason I asked is to improve index maintenance on very large tables.” So if you wanted to improve index maintenance, what would you do?

Tara Kizer: I would disable the jobs.

[Laughter]

Tara Kizer: I’d get rid of it. I would really only be looking at update statistics: does table partitioning help with that, and how long is it taking? As far as index maintenance, what are you trying to improve? Are you trying to run it daily and have less locking? I just don’t think index maintenance is that important. I’m of the opinion that you maybe run it monthly or quarterly. Run update statistics daily, but the other stuff less frequently; it just isn’t really needed.

Brent Ozar: It’s one of those things where I notice people forget—you also have to know it all goes into your transaction log. You’re scrambling all this data, doing all these writes. Even if you’re only doing a portion of the table, you’re still writing a ton of data there. If the table is big enough to be partitioned, you probably have years’ worth of history inside there. How much do you really want to go scrambling that data all the time?

 

More about triggers…

Brent Ozar: J.H. asks, “An add-on to my earlier trigger question. Would performance be the same if the inserts were much higher, like 100 times a second, for triggers versus inserts?” This is where you just have to look at, for example, the logic that goes into the insert, and the transaction volume of other things hitting the same table at the same time. If my application is able to do the change in one statement and write a log entry to a logging history table, that logging history table may not even be in the same database or on the same server. I may have a really crappy history server where I just dump things like logs that I’m not as urgent about. I have one client, for example, that tries to get 50,000 inserts per second into a table. So they do their inserts in one table and log any changes they make over in a totally separate environment. It’s not even SQL Server.
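
One way to do the insert and the history write in a single statement, as Brent describes, is the OUTPUT clause. A minimal sketch with hypothetical names:

DECLARE @OrderId INT = 42, @OrderDate DATETIME2 = SYSDATETIME();

-- One statement inserts the row and writes the history record.
INSERT INTO dbo.Sales (OrderId, OrderDate)
OUTPUT inserted.OrderId, inserted.OrderDate, SYSDATETIME()
INTO dbo.SalesHistory (OrderId, OrderDate, LoggedAt)
VALUES (@OrderId, @OrderDate);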

 

How do I avoid deadlocks?

Brent Ozar: Perkash says, “How do I avoid deadlocks due to a clustered index on a table? I have multiple sessions that are inserting records at the same time and I’m seeing deadlocks.” That’s kind of a trick question there.

Tara Kizer: Maybe the clustered index key needs to be changed so that you don’t have the hotspot? Because an identity column can have a hotspot. I know we don’t really concern ourselves with hotspots as much anymore—back on SQL Server 6.5, the hotspot was the thing we always talked about. But do we talk about that much anymore? I wonder what the key is on.

Brent Ozar: Yeah, well I would want to see the deadlock graphs. I have a hunch…

Tara Kizer: I don’t want to see that.

Brent Ozar: That’s a good point.

Tara Kizer: I can’t read those darn things.

Brent Ozar: Neither of us really want to see it.

Tara Kizer: Send it to Jonathan Kehayias. He loves looking at those things.

Brent Ozar: Yes. Post it at—I want to say it’s sqlserverperformance.com, or is it sqlperformance.com? Kehayias answers questions on there. You can include the deadlock graphs. I have a very strong hunch that there are other tables or other indexes involved in that, not just the clustered index.
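
If you want to pull recent deadlock graphs yourself before sending them anywhere, the system_health Extended Events session captures them by default. A minimal sketch, assuming SQL Server 2012 or later (where system_health writes to .xel files):

-- Pull captured deadlock graphs out of the default system_health session.
SELECT CONVERT(XML, event_data).query('(event/data/value/deadlock)[1]')
           AS deadlock_graph
FROM sys.fn_xe_file_target_read_file(N'system_health*.xel',
                                     NULL, NULL, NULL)
WHERE object_name = N'xml_deadlock_report';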

 

How do I learn more about resource semaphore waits?

Brent Ozar: Ben says, “I’m looking for good ways to investigate resource semaphore waits.” Also, he wants to know if there’s a way to push something through when that is the wait type. Tara, have you played around with this wait type and if you’re facing it, where do you go look to identify more?

Tara Kizer: I have to look it up every single time because I can’t remember. I haven’t experienced it in a while, and all these poison waits that we see with our clients, I’ve hardly ever experienced myself. I don’t have these quite memorized. I know that this one is the severe memory pressure one, right?

Brent Ozar: Yeah.

Tara Kizer: This is a really bad memory pressure issue. Pushing something through? I wonder if maybe he’s referring to—when resource semaphore occurs, is it like the denial of service that you see with threadpool?

Brent Ozar: It’s a lot like it except it’s memory instead of worker threads.

Tara Kizer: Okay. What errors do they start getting?

Brent Ozar: You get no errors at all. The queries can still pile up on worker threads, but you do see resource semaphore as a wait type. What you need to do is, and Erik actually improved sp_BlitzCache because we started running into this a lot at one point in time: with sp_BlitzCache, there is a sort by memory grant. So you can sort by memory grant to see which queries are being granted the most memory, and then go tune them so they don’t ask for ungodly amounts of RAM. Maybe it’s query tuning, maybe it’s index tuning; it’s different in every case, but that’s how you go about tuning those queries. Is there a way to push something through? Yeah, make it use less memory. Change the plan so that it only needs a tiny amount of memory to get through instead of gigabytes.
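
The call looks something like this; the ‘memory grant’ sort order exists in recent versions of sp_BlitzCache and needs a SQL Server build that exposes grant information in the plan cache:

-- Which cached queries are asking for the biggest memory grants?
EXEC sp_BlitzCache @SortOrder = 'memory grant', @Top = 10;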

 

How do I find out what licensing I have?

Brent Ozar: Perkash asks, “How do I find out what license type and the number of licenses I have on an existing instance?” We shall play the sad trombone for him. There is no way of knowing. You have to go to the accounting department. Sadly, SQL Server didn’t used to have a key that you had to put in. Starting with 2016, they ask for the key. But before that, you could install Enterprise Edition without being licensed for it. That’s why we hired Tara, to get her out of a program where she had put in thousands of instances of Enterprise Edition without paying. We had to get her out. Stop installing that!
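
The closest you can get from the instance itself is what edition is installed, which is not the same thing as what you’re licensed for:

SELECT SERVERPROPERTY('Edition')        AS installed_edition,
       SERVERPROPERTY('ProductVersion') AS product_version;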

 

Should I index foreign keys by default?

Brent Ozar: Let’s see here. David says, “Hi. First time with you live.” As opposed to when we’re dead. He’s enjoying the catch-up shows. “Here’s my question. Years ago I created a script that added indexes for foreign keys.” That’s kind of cool.

Tara Kizer: I love it. I like it so far.

Brent Ozar: Yeah, you should start a blog and share that, because people would get excited about that; lots of people like doing that. He says, “Now, after running sp_BlitzIndex I find most of these are not used. So before I delete them, what’s your opinion on creating indexes for foreign keys in order to aid joins?”

Tara Kizer: When I’m creating a new database and it’s ready to go into production, I want all my foreign keys to be indexed, just as a starting point, because those are usually the columns we’re going to be joining on. Now if they end up not being used, I would wait. Write down when your last server restart was, wait a couple of weeks, four weeks; you need a lot of workload to occur. Then look at the unused indexes and make decisions based upon that. I always tell this to my clients: I dropped an index on a Tuesday about ten years ago on a 7-terabyte mission-critical database, and on a Saturday I got called by our NOC. CPU had been at 100 percent for about an hour before they called me. I logged in; sure enough, CPU was still high. I ran a query to check what was using the most CPU, and the query that was doing it had a WHERE clause that exactly matched the index I had dropped. I had dropped that index two weeks after we had last restarted the server. We did the Microsoft security patches monthly. It turns out there was a query that would run on the first of every month, and the first of the month was not included in my workload window when I checked that unused-index report. We ended up moving that query to another system anyway because it didn’t need to run on production, on real-time data. But it’s really important when you look at unused indexes that you know how long your system has been up and what your workload is. Do you have weekly queries? Monthly queries? Make sure you have some uptime.
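
A sketch of the checks Tara describes: confirm how long the usage counters have been accumulating, then compare reads versus writes per index.

-- Usage stats reset at restart, so know your window first.
SELECT sqlserver_start_time FROM sys.dm_os_sys_info;

-- Reads vs. writes per index in the current database since that restart.
SELECT OBJECT_NAME(s.object_id)                     AS table_name,
       i.name                                       AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates                               AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY reads;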

Brent Ozar: I think we’d all agree it’s a good starter practice. Like when you don’t know anything about a database, it’s a nice place to start.

 

Any upcoming deals on training?

Brent Ozar: Guillermo asks, “Any upcoming deals on Brent Ozar training?” We tend to run a few sales a year. One of them is our Black Friday sale in November. Another is the company’s anniversary sale in May, so in May we’ll run a deal. I don’t know what the deal is yet. For years it went: at our first anniversary we gave people 10 percent off; two years, 20 percent; three years, 30 percent; on up to five years, last year, which was 50 percent off. I’m just guessing it will probably be 60 percent off at our sixth birthday in May. It will only be on the online stuff though, not the in-person training.

 

Why does a table get bigger when I rebuild it?

Brent Ozar: Deborah says, “Can you help me understand index space usage?” Yes. “I have a table with a primary key. SSMS indicates that the index space is a certain size. So I add two more indexes and the index space is higher. Next I rebuild something in the index…” Oh, so let me boil your question down and say it a different way: why, when I rebuild an index, would it get larger?

Tara Kizer: Fill factor is the only thing I can think of. So you’ve got a lower fill factor.

Brent Ozar: Yes, yes.

Tara Kizer: So maybe your fill factor server setting has been changed to a lower value and you weren’t expecting that. Or maybe you’re using a fill factor lower than 100 percent on the index. So look at your fill factor for that index.

Brent Ozar: And where do they find fill factor?

Tara Kizer: For the index, just go to the properties of that index. Oh, and for the server setting, right-click on the server in SSMS; I think it’s under Database Settings in there.
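
In T-SQL terms, the same checks look like this; the table name is hypothetical:

-- Per-index fill factor; 0 means the default (fill pages 100 percent).
SELECT name AS index_name, fill_factor
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'dbo.YourTable');

-- The server-wide default fill factor setting.
SELECT value_in_use
FROM sys.configurations
WHERE name = N'fill factor (%)';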

Brent Ozar: [Mumbles name]—I believe I’m saying his name completely incorrectly—keeps sending in messages about how to see licensing. Unfortunately, that’s just the number of sockets and cores you have, not how many you’re licensed for. That’s the problem there. I can go set up an 80 core server and yes I have 80 cores. That doesn’t mean I magically made $560,000 worth of licensing appear from the darkness.

 

Is it possible to have too many statistics?

Brent Ozar: Tom says, “Is it possible to have too many statistics?” Yes. So what are the problems when you have a boatload of statistics?

Tara Kizer: I don’t know. Keeping them up to date?

Brent Ozar: Yes. SQL Server has—Tara has a great post on this. If you search for “statistics explosion Kendra Little.”

Tara Kizer: I do not. I was going to say, you said Tara. I’m like, “I do not.”

Brent Ozar: Oh, Kendra.

Tara Kizer: No such thing.

Brent Ozar: Yeah, so Kendra years ago wrote a blog post about the secret I/O explosion. If you have ten statistics on a table, SQL Server will scan the freaking table ten times in order to update stats.

Tara Kizer: Oh, I did not know that.

Brent Ozar: I know, me neither. Because I’m like, who creates that many statistics on a table?
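
If you’re curious whether you have a table like that, counting statistics objects per table is a quick query against sys.stats:

-- Tables with the most statistics objects
-- (auto-created stats have names starting with _WA_Sys).
SELECT OBJECT_NAME(s.object_id) AS table_name,
       COUNT(*)                 AS stats_count
FROM sys.stats AS s
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
GROUP BY s.object_id
ORDER BY stats_count DESC;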

 

Do I need a heartbeat network in my cluster?

Brent Ozar: Robert says, “I’m setting up a two node cluster on Win 2012 R2. Do I need a heartbeat network or does the virtual cluster adapter take care of this?” When you were building clusters, did you build a separate heartbeat network?

Tara Kizer: I let my sysadmin take care of that kind of stuff. I worked in organizations that had Windows teams, Unix teams, SQL Server teams. We all had our role. I did the SQL Server installation after the sysadmin team had configured the cluster and done that stuff. I know we had heartbeat IPs, I don’t know what the virtual cluster adapter is, so.

Brent Ozar: You find that the bigger the company is, the more often the Windows team will build clusters and then hand off the working cluster to the SQL Server team, with the understanding being that if the cluster service doesn’t start, it’s a Windows problem and the Windows team troubleshoots it. It’s hard for a SQL Server DBA to be good across the entire stack. I loved working in tiny companies where I got to do everything, but then I did a crappy job at everything.

Tara Kizer: Yeah, right.

Brent Ozar: That said, I would never do a heartbeat adapter these days. In the days of 10 gig Ethernet, you just don’t need separate throughput to dedicate to it. There are edge cases where you do: if you’re doing iSCSI and regular network traffic over the same stupid 10 gig adapter, you can run into saturation. But that’s fairly unusual.

 

How can I get my coworkers to write good T-SQL?

Brent Ozar: Colin says, “I work with morons…” You haven’t come to work here, yet. “…who constantly write cursors and triggers. What can I do to get my coworkers to write good T-SQL?” I’ll rephrase this differently. Are cursors and triggers bad?

Tara Kizer: They can be. It just depends. Is the database designed in such a bad way that cursors and triggers are needed? Triggers can be fine; it depends on how you’re using them. If you’re using them to protect the integrity of the database by putting business logic into them, that’s a really valid use. If you’re using triggers to do auditing, that’s good too, but it adds performance overhead. If you’re using triggers to do all sorts of other work and just adding time to your transactions, I don’t like that. Cursors, it just depends on what’s needed to complete the task. Maybe they’re constantly having to write cursors because that’s what the database design requires; that means the database design needs to change. But there are things that require cursors. Can the queries be rewritten to be set-based instead? Work with them and show them that a cursor can be rewritten, and maybe they’ll learn.
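
For example, here’s the kind of rewrite Tara means, with hypothetical table and column names: a row-by-row cursor and the single set-based statement that replaces it.

-- Cursor version: one UPDATE per row.
DECLARE @id INT;
DECLARE order_cur CURSOR FOR
    SELECT OrderId FROM dbo.Orders WHERE Status = N'New';
OPEN order_cur;
FETCH NEXT FROM order_cur INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = N'Processed' WHERE OrderId = @id;
    FETCH NEXT FROM order_cur INTO @id;
END;
CLOSE order_cur;
DEALLOCATE order_cur;

-- Set-based version: the same work in one statement.
UPDATE dbo.Orders SET Status = N'Processed' WHERE Status = N'New';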

Brent Ozar: What I seem to find is that when someone resorts to one certain coding pattern, it’s because it’s what they know. It’s just the way they’ve always seen it. So training usually makes a bigger difference. The other thing is sometimes they have a manager with a gun to their head saying, “You have to ship 15 features today. It’s got to get out.” So they’re just trying to find the fastest, dirtiest way to ship it. No one wants to write crappy code, other than me; I thoroughly enjoy it every time I get to do demos. The key phrase is that you work with morons. So you basically have two choices: either get the morons trained or go somewhere where you’re not working with morons. Bad news: there are morons everywhere.

Tara Kizer: Everywhere, yeah.

 

Should I disable an index or drop it?

Brent Ozar: John says, “Related to the unused index question. Is it safer to disable an index instead of dropping it?”

Tara Kizer: Sure. Disabling it does remove the space that it’s using; all it does is keep the index definition in place, so that if you decide to rebuild the index, it’s back in place again. You don’t have to go through email, your scripts, or your source control to figure out what that index definition is. I’ve actually never disabled indexes, though. I just drop them when I don’t need them, because I know all of this stuff has been in source control; that’s what we did at these companies.

Brent Ozar: I like disabling indexes for only one purpose, and this is the only time I’ve ever done it: when I had a third-party vendor that said, “You can’t touch our indexes.” I could disable their indexes and they didn’t know they were disabled, because they still look like regular indexes in Object Explorer. Other than that, I drop them too. If I need to get rid of them, just drop them. I feel differently about tables. If I need to drop a table, I will rename it with something like “to be deleted” at the end of the table name, with a date.
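
The mechanics, with hypothetical names:

-- Disabling drops the index's data pages but keeps its definition.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders DISABLE;

-- If it turns out to be needed, a rebuild brings it back;
-- no digging the CREATE INDEX script out of source control.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;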

 

Does auto update statistics cause performance problems?

Brent Ozar: I’ll take one last question from Nestor. Nestor asks, “Have you ever encountered a performance problem due to auto update statistics being set to on?”

Tara Kizer: I actually have. I didn’t disable auto update statistics, though. What I did instead is make sure that my statistics were being updated with our own code, daily, multiple times per day, so that auto update statistics wouldn’t kick in. I wanted it still enabled so that if we weren’t hitting a new table with our code, auto update would at least kick in for that new table. But I believe the issue was that auto update statistics, like sp_UpdateStats, uses a weird sample size. In our opinion, through testing, our code made a better decision about what the sample size should be. So we wanted our custom code to be what usually ran.
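
A sketch of the kind of scheduled statistics update Tara describes; the table names are hypothetical:

-- Explicit sample size, run from a scheduled job several times a day.
UPDATE STATISTICS dbo.Orders WITH SAMPLE 25 PERCENT;

-- Or a full scan where the table is small enough to afford it.
UPDATE STATISTICS dbo.OrderStatus WITH FULLSCAN;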

Brent Ozar: All right. Thanks, everybody, for hanging out with us this week. We will see you guys next week at Office Hours. Adios, everybody.


6 Comments

  • Alex Friedman
    March 12, 2017 10:46 am

    Ahahaha the vendor said you can’t touch the indexes but they didn’t notice the indexes were disabled? That’s brilliant.

  • In my rush to type my question before the list got too big, I mistyped that there was one 11-core proc. Yes, there are 2 sockets of 11 cores. Does that change your answer, other than wondering about my licensing costs?

    • Kyle – that’s a heck of a lot of money to light on fire for licensing. If your CPUs aren’t heavily used, it’s going to make much more sense to pick different processors instead of lighting 2 sockets * 11 cores * $2k = $44k on fire.

      • Let’s say my wealthy step-uncle named Bill gifted me the licenses, and/or we have a big party every year called Burning of the Benjamins. My question had to do with sp_Blitz complaining “Node 0 has 11 cores assigned to it. This is a really bad NUMA configuration.” and so on through Node 7. My understanding is that the chip would be seen as a 22-core proc (an even number) by SQL Server. Is this actually a “bad” NUMA configuration performance-wise?

  • Hmm… We have been curious about the Preferred and Possible Owner stuff. I just posted a question on it at http://dba.stackexchange.com/q/167289/199 and would love to hear opinions!

