[Video] Office Hours 2016/11/16 (With Transcriptions)

This week Richie, Tara, and Erik discuss replication, mirroring, memory grants, sp_BlitzCache, failover cluster instances, VLFs, restoring databases, qualifying schema and object names, multiple instances, and table variables.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes, Stitcher or RSS.
Leave us a review in iTunes

Office Hours Webcast – 2016-11-16

Erik Darling: “Should mirroring be removed where the mirrored database server will be offline for several days?” Yeah, because your transaction log isn’t going to be able to truncate for those several days. So if you take log backups, it’s not going to help because mirroring is going to be like, “Hey, I need to be able to send these transactions else-whistle before the transaction log can be cleared.” So if it’s going to be down for several days, yeah, I’d probably want to remove mirroring. Unless you can afford to have your log file balloon out to gigantic sizes.
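For reference, a quick way to confirm it's the mirror, and not a missing log backup, that's keeping the log from clearing is the log_reuse_wait_desc column. A minimal sketch, with a hypothetical database name:

    -- DATABASE_MIRRORING means the mirror is holding the log;
    -- LOG_BACKUP means you just need to take a log backup.
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourDatabase';  -- hypothetical name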

 

Erik Darling: “Does this new version support online index rebuilds yet?” You know, it doesn’t appear to—at least Aaron Bertrand’s roundup said that it doesn’t. I don’t know if anyone else saw that it did somewhere else. I haven’t tried it yet, and I’d have a puzzler of a time trying to use Standard Edition because all I have is Developer Edition, which is Enterprise anyway.

Richie Rump: Come on, can’t you burn two hours of your valuable time to figure this out? Come on, man.

Erik Darling: I could, but then I wouldn’t be able to release cool things in sp_BlitzCache for people so that they can know when they’re using table variables and heaps and functions that ruin their performance.

 

 

Erik Darling: “Right after a VSS snapshot is taken of all server volumes, the vendor then backs up the databases on the SQL Server instance with a destination of something like file equals one [inaudible]… this breaks the log chain. Contacting vendor rep, but why is this happening?” I don’t know. Ask the vendor. That’s dumb that they do that, break the log chain. Couldn’t tell you. If I could ever see inside the mind of most vendors who work with SQL Server, I would probably just go mad.
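If you want to see for yourself what the vendor tool is actually taking, the backup history in msdb usually tells the story. A rough sketch, with a hypothetical database name:

    -- What backups were taken, and to where. An out-of-band log backup
    -- (type = 'L') sent to the vendor's virtual device is what breaks
    -- your log chain; is_snapshot = 1 marks VSS snapshot backups.
    SELECT bs.database_name, bs.backup_start_date, bs.type,
           bs.is_copy_only, bs.is_snapshot, bmf.physical_device_name
    FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf
        ON bs.media_set_id = bmf.media_set_id
    WHERE bs.database_name = N'YourDatabase'  -- hypothetical name
    ORDER BY bs.backup_start_date DESC;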

 

Erik Darling: “Recommended MAXDOP on a four-core SQL Server to start. They have CPU count equals four and hyperthread ratio of one.” If you have a four-core server, a single socket four-core server, I would not change MAXDOP. I would not set it to anything. It’s not really going to do you much good. I would still want to change cost threshold for parallelism though. You might want to be a little extra aggressive with that and make sure that if something goes parallel it’s because it really needed it. You might even find 50 is a bit low because parallel queries, they use extra threads, they use extra resources. If you have a single socket four-core server, I’m willing to bet you also don’t have much RAM either. So you could be in for a load of performance hurt if too much stuff starts going parallel. You could start taking thread pool. You could start seeing all sorts of resource semaphore waits. So I would mind my beeswax when it comes to parallelism there. I wouldn’t totally disable it. I wouldn’t set MAXDOP to one. I would leave it at zero and just set cost threshold high enough so that things aren’t just wanting to be going parallel.
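What that works out to in practice, roughly (the 50 is only a starting point, not a recommendation for your workload):

    -- Leave max degree of parallelism at its default of 0 on a single-socket,
    -- four-core box; just raise cost threshold so only expensive queries go parallel.
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure N'cost threshold for parallelism', 50;  -- starting point; tune upward if needed
    RECONFIGURE;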

 

Erik Darling: “Blame Paul Randal since he kills cats for every shrunk database apparently.”

Richie Rump: It’s true.

Erik Darling: Yes. That is true. Slowly and savagely. It’s nasty really. I don’t know how he lives with himself.

Richie Rump: Just don’t shrink a database and you won’t have to worry about that anymore.

Erik Darling: Yeah, exactly. It’s like all the weird scenes out of The Cell where it’s just like intestines and weird…

Richie Rump: I’m not familiar with that and I will continue to be so.

Erik Darling: You never saw The Cell?

Richie Rump: No.

Erik Darling: It has JLo and Vince Vaughn in it. How have you not seen The Cell? She like plays a psychiatrist who goes inside a serial killer’s mind to help find some girl who he is like holding hostage.

Richie Rump: That’s probably why.

Erik Darling: Yeah, all right. Sorry you have no taste in movies.

Richie Rump: Did you just drop a 90s film reference? Like a really obscure 90s film reference?

Erik Darling: It’s not even obscure. There’s a sequel to it. It’s not obscure. It’s totally—it’s got Vincent D’Onofrio in it.

Richie Rump: Yeah, like they made a sequel to Anaconda. Whatever.

Erik Darling: I don’t know. It’s true. Did they make a sequel to that shark movie with Samuel L. Jackson and LL Cool J?

Richie Rump: Deep Blue Sea.

Erik Darling: Yeah, was there a sequel to that?

Richie Rump: I would assume that SyFy probably did something like that.

Erik Darling: Yeah, probably. That’s what turned into Sharknado probably.

 

Erik Darling: Let’s see here. Someone is using Windows “2012 R2, SQL Server 2012,” also R2. There is no such thing as SQL Server 2012 R2. There is only 2012. “How much RAM for OS, how much for SQL? 32 gigs on the server.” I would probably give 8 gigs back to Windows so that it can do things. That’s it.
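That advice translates to roughly this, assuming 32 GB of physical RAM and nothing else heavyweight running on the box:

    -- Cap SQL Server at about 24 GB, leaving roughly 8 GB for Windows.
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure N'max server memory (MB)', 24576;
    RECONFIGURE;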

 

Tara Kizer: Can you guys hear me?

Erik Darling: Yes.

Tara Kizer: Finally.

Richie Rump: You’re still a little faint but we can hear you.

Tara Kizer: All right. I’m as high as it can go. It doesn’t happen on WebEx.

Richie Rump: Hear that everyone? She’s as high as she can go.

Erik Darling: Rocket man. So Tara, there was a replication question.

Tara Kizer: Yay.

Erik Darling: Yeah, I know. It’s a good one, maybe. “Does the snapshot folder for replication need to be on its own drive?”

Tara Kizer: It does not but you may run into a problem with disk space. We had all of our stuff separated into mount points and the default location for the snapshot folder ends up being put with the system databases, maybe not tempdb, but the other stuff. You know, also where the error log is. In my environments, that mount point was always created small since everything there is usually small, so it caused replication to fail at one point when we were doing a snapshot. We just didn’t have enough disk space for it. So we ended up moving it to its own mount point so that it could not impact the system stuff also. So not required but it might be recommended, depending upon the size of your snapshots.

Erik Darling: All right then. Well, hope that answers your question, replication person. Your life is hard enough with replication in it.

Erik Darling: “I have someone hogging the cache and killing page life. I am aware of a couple DMVs that can point out possible offenders but could it be possible that the offending query doesn’t show up in these DMVs?” Geez, that’s a weird one. So if you could clarify a little bit. When you say that you have “someone hogging the cache and killing page life,” I assume you mean that their query is asking for a memory grant, or asking for memory, that is draining the buffer cache, the data cache, from SQL Server. If you’re on a decent version of SQL, like 2012, I think SP2 or SP3, or 2014 SP2, or 2016, you have a lot of good information about memory grants. Prior to that you don’t have a lot. What you could try doing, if you’re on a more modern version of SQL Server, is run BlitzCache and order by memory grant or average memory grant. You can see if any queries are asking for great big memory grants and if they have unused memory grants. That’s something I added a couple of months back. Prior to that, there’s not really a lot of good information on that. It would kind of be hunting and pecking to try to see how much memory you’ve got assigned to a query. Anyone else? Want to chime in on that?

Tara Kizer: No.

Erik Darling: No? All right. You’re fine.

Tara Kizer: You got it.
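On the newer builds Erik mentions, the two places to look are the live memory grant DMV and the newer sp_BlitzCache sort orders. A rough sketch (the sort order assumes a reasonably recent First Responder Kit build):

    -- Queries currently holding or waiting on memory grants.
    SELECT session_id, requested_memory_kb, granted_memory_kb,
           used_memory_kb, query_cost
    FROM sys.dm_exec_query_memory_grants
    ORDER BY requested_memory_kb DESC;

    -- Plan cache sorted by the biggest memory grants.
    EXEC dbo.sp_BlitzCache @SortOrder = 'memory grant';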

 

Erik Darling: “While you are on MAXDOP: Windows 2008 R2, 8 procs, cost threshold 50, MAXDOP 4. Why do I have nine ECIDs for one SPID?” Was anyone paying attention when I did the parallelism [inaudible] at the precon?

Richie Rump: No.

Erik Darling: No.

Tara Kizer: I was tired.

Erik Darling: Everyone was tired that day. So the reason you have nine is because parallelism doesn’t just control cores, it also controls threads. So with MAXDOP 4 you can have up to four threads per parallel branch. So if you have two—let’s just call them clustered index scans—happening in the same parallel plan and they’re each using four threads, you will see four threads apiece and then one thread coordinating. So that’s probably why you see nine. That’s what I have to say about that.
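To see those execution contexts yourself, the task DMV breaks a session out one row per thread. A quick sketch, with a hypothetical SPID:

    -- ECID 0 is the coordinator; the rest are the parallel worker threads.
    SELECT session_id, exec_context_id, task_state, scheduler_id
    FROM sys.dm_os_tasks
    WHERE session_id = 53  -- hypothetical SPID
    ORDER BY exec_context_id;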

 

Erik Darling: So, “sp_BlitzCache?” That’s someone’s question. Have you no decency, sir? Have you no decency?

Richie Rump: So, sp_BlitzCache. Go!

Erik Darling: Oh my word. How do you end up in Office Hours and not know about BlitzCache? How does that happen?

Richie Rump: That is what we like to call in the business a softball.

Erik Darling: Man. All right. Well, I’m going to send that link to the First Responder Kit out and you can go and you can look at all the awesome and interesting things that we have in there. There’s Blitz, there’s BlitzIndex, there’s BlitzCache, and there’s BlitzFirst, and now it also includes BlitzWho because we separated out some of the code from BlitzFirst into its own stored procedure. BlitzyMcBlitzFace is not in there, yet.

Richie Rump: Damn it.

Erik Darling: We’re working on it. We’re trying to figure out how to get it to just return weird ASCII faces but I’m just not good enough at using replicate to do that.

Richie Rump: And I’m not going to help you.

Erik Darling: No, no one is helping. I keep emailing Kendra and Jeremiah like, “Dude, you’ve got to help me with BlitzyMcBlitzFace” but they’re like, “No. We have lives now.” I’m like, okay, fine.

 

Erik Darling: All right. Tara, here’s one for you. You’re going to love this. “I have a two-node failover cluster, SQL Server 2014 Enterprise and Windows 2012 with a default instance. I want to add named instances to the same machines. Do I need a SQL cluster and separate drives for each new instance?”

Tara Kizer: You need to install a new failover cluster instance. So you install SQL Server and installation is going to recognize that you’re on a cluster and it’s going to install a new instance using the cluster—go through the wizard with the cluster stuff. Then you do need separate drives for each new instance. I’m not sure that you want to put more than one more instance on a two-node failover cluster though. That wouldn’t be recommended, really only one is recommended for a two-node failover cluster, but maybe two. I wouldn’t go for three though or any higher.

 

Erik Darling: “Is pull better than push for replication performance?”

Tara Kizer: I don’t know the answer to that. I’ve never used a pull subscription, always push. I think that most people who use transactional replication use push. That’s the default. I have no idea about performance though.

Erik Darling: All right. Good stuff. I don’t know the answer either, clearly. No replication going on here.

 

Erik Darling: We have a question about BlitzCache. “SQL Server 2014 Standard. When I run BlitzCache during the day I’m seeing that everything in there has been created within the last minute. Am I right to assume from that that we are constantly recompiling our code?” No. If you are constantly recompiling your code there would be nothing in there. If you’re constantly compiling your code, you’ll have a lot of single use plans in there. So where you’re asking about temp tables, that will cause a statement level recompilation, that won’t cause the whole thing to recompile. And that will only happen when the temp table changes enough to trigger a recompilation. So it’s typically like six uses or a lot of modifications within the temp table. There’s a great post by Paul White called “Temp Tables and Stored Procedures.” I’m going to grab that link for you in case you don’t already have it. I’m going to drop that into the Q and A section so you can read that. Since you’re on 2014, one thing you can do to help stave off recompilations based on temp tables is if you’re adding indexes to them you can add indexes as part of the inline syntax.
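The inline index syntax he mentions (new in SQL Server 2014) looks roughly like this:

    -- Declaring the index with the table avoids a separate CREATE INDEX
    -- on the temp table, which is one of the schema changes that can
    -- contribute to recompiles.
    CREATE TABLE #t
    (
        id   INT NOT NULL,
        name VARCHAR(100) NOT NULL,
        INDEX ix_id CLUSTERED (id)
    );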
Erik Darling: Wow, another replication question. Tara, you ready?

Tara Kizer: I actually might have some information on that one.

Erik Darling: All right, all right. I hope I’m reading the right one then. “Distribution database. 8 million rows. Retention set to four days due to Informatica PowerExchange and specific requirement. Can we set allow anonymous and immediate sync in publication? Is it good practice?”

Tara Kizer: Unfortunately, I don’t have the answer to the actual question but I do have a comment on your retention of four days. I have worked at a job where we did use Informatica PowerExchange. I’m very familiar with this issue. It causes blocking in the distribution database. Informatica reads the distribution database directly rather than being an official subscriber in replication. We had to set our retention down to, I think it was 12 hours. So the Informatica team that we had said that—I think the minimum that they were saying was two days. Actually, I think they wanted four days but we had agreed to two days. We ended up having to go down to 12 hours due to how busy our system was, how much data was—you know, inserts, updates, and deletes on the publisher. So you do not have to keep yours to four days. We definitely did not use it. If you ever need to do a resync, you’re going to have to do a snapshot obviously, but how often does that happen? And with Informatica Power Exchange, they create the publication so if they’re not setting allow anonymous and immediate sync this is really a question for Informatica rather than for us. Contact them to see what they allow for the publication properties.

Erik Darling: Curse you, Informatica.

Tara Kizer: I did not like supporting that environment.

Erik Darling: I don’t blame you. From what I hear, that’s a tough one. Informatica is like difficult for like really smart people. I can only imagine how lost I would be if I had to deal with that.

Richie Rump: Yeah, I didn’t like Informatica at all. It was no fun.

Erik Darling: It’s just weird because it’s a widely touted ETL tool. People are like, “Yeah, Informatica. Get all your data…” blah, blah, blah. I’m like, okay. I will write my own damn BCP. But I’m dumb, so don’t always do what I do.

 

Erik Darling: Eric asks… Eric, you spell your name wrong, first of all. “Someone has been trashing tempdb on a dev server.” So my first question is, who cares? “How can I catch someone after the fact that has been killing tempdb?” Well, you can’t. Not really. You can check the plan cache and see if anything in there is weird but there’s not really a way to sort by tempdb usage after the fact. I don’t know guys, what do you think?

Tara Kizer: After the fact? I don’t really think so but if this is a recurring problem start logging the data. Start logging sp_WhoIsActive maybe every 30 seconds, maybe more frequently temporarily. At my last job, we had a 500 gigabyte tempdb database and I think our mount point had like 800 gigabytes. This was for business analytics type stuff. Crazy, crazy queries pulling massive amounts of data. We got an alert that our tempdb mount point was running out of space. I took a look and sure enough it had grown past our half terabyte, up to 800 gigs or whatever it was set to. All I had to do was go into the logged sp_WhoIsActive data and look at the past couple hours, and you can see under—I forget what the column name is but it’s like tempdb allocations, I think. Immediately I could see a query that had a massive amount of tempdb allocations. It just jumped right out. I did an order by that column and it was really easy to see. So I would highly recommend that in any production environment you do log sp_WhoIsActive every 30 seconds, every minute let’s say, and keep maybe ten days of data. Then if you need to do this like in a dev environment, you know, just do this temporarily or just don’t log the data very frequently.

Erik Darling: Again, this doesn’t help you after the fact, but if you want to bust someone for doing it, I wrote a blog post a while back about tracking tempdb growth with extended events. So if you want to fire that on up, you can see if you catch anything interesting happening. I’m willing to bet that someone is just like doing something dumb like rebuilding indexes and sorting in tempdb. That’s always fun.
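Tara’s logging approach might look something like this. A sketch that assumes sp_WhoIsActive is installed; the table name and schedule are hypothetical:

    -- One-time setup: have sp_WhoIsActive hand back the CREATE TABLE it needs.
    DECLARE @schema NVARCHAR(MAX);
    EXEC dbo.sp_WhoIsActive @get_plans = 1, @return_schema = 1, @schema = @schema OUTPUT;
    SELECT @schema;  -- swap in your table name and run the generated CREATE TABLE

    -- Then, in an Agent job running every 30-60 seconds; the logged
    -- tempdb_allocations column is the one to sort by later.
    EXEC dbo.sp_WhoIsActive @get_plans = 1,
        @destination_table = N'dbo.WhoIsActiveLog';  -- hypothetical table name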

 

Erik Darling: There’s an Entity Framework question, Richie.

Richie Rump: Uh-oh.

Erik Darling: Let’s see, “I am seeing Entity Framework running ‘set showplan all’ on and then seeing ‘showplan permission denied’ in database in traces for user errors. Also seeing ‘must declare the scalar variable at 469’ errors in Entity Framework. Are these settings I can ask the devs to change or is someone…?”

Richie Rump: I have not seen anything that sets showplan on. I would assume that it’s somewhere in the code somewhere. That would be my first guess. You really shouldn’t need showplan, right? I mean, unless we’re actually doing something in the application with our execution plans, then maybe you’d need it. But yeah, I’d dig into that just a little bit and take a look at the actual code. Not the code that’s going into SQL Server but the code from Entity Framework and your data access layer. I’m assuming someone probably copied something somewhere and it’s explicitly putting that line into probably the Entity Framework pipeline to go to SQL Server. So that would be my gut check because I don’t know of anything in Entity Framework that does showplan.

Erik Darling: No, especially showplan all. I would be careful if someone is running like profiler or DTA on the server. They might be doing something kind of weird and trying to like gather stuff while it’s running. They might be profiling the code in some way.

 

Erik Darling: Let’s scroll on down. “What version of Windows 2016 Server should be used for SQL Server 2016 Enterprise?” Ugh.

Tara Kizer: “What ver–?” I don’t understand.

Erik Darling: I guess because you have like Standard and Enterprise and Core and all that stuff. I’m pretty agnostic on that one. I don’t really have a lot of opinion on that.

Tara Kizer: I’ve used—not for 2016—Standard and Enterprise and a lot of times we would use SQL Server Enterprise with Windows Standard. We just didn’t need the Enterprise features of Windows. But you need to compare the editions for Windows 2016 Server and see what you require for your business. One of the things is going to be how much memory do you need on that server.

Erik Darling: Good point on that one.

 

Erik Darling: VLFs. “On a scale of 1 to 5, with 5 being how much attention we should pay to it…” That is not how you scale a question. You should have a most and least in that. “What is your opinion, is there a way to measure its benefit?”

Tara Kizer: So VLFs are a problem for database recovery. When you restart SQL Server, it goes through crash recovery and if you have a lot of VLFs on your database, that database may not come up for a while. I’ve had a database that had 75,000 VLFs in production, and it was a mission-critical database. I didn’t know about VLFs, this was probably about ten years ago at this point, but the database took 45 minutes to come online. After contacting Microsoft, I learned about VLFs, so we had to reduce them. Once you fix your autogrowth, you’re not going to encounter the VLF issue anymore. It’s a problem with restoring a database, and for most people, it’s going to be what happens to the database when you restart the SQL instance as it goes through crash recovery. I’ve seen some comments about it possibly affecting performance but I don’t know how to even check that. I’ve never been told that performance issues were due to high VLFs. But as a result of the 45-minute outage we had, I then set up monitoring for VLFs. Run a VLF script maybe once a week across all your servers and databases and then report back on anything that’s showing high numbers. Once you fix it though, it should not be encountered again on that same database.

Erik Darling: I once had a database with a 500 gig log file that had 5,000 VLFs in it and it took 21 hours to recover.

Tara Kizer: Oh, wow.

Erik Darling: Thankfully this wasn’t like a production outage 21 hours. This was like me restoring it to offload DBCC CHECKDB. But as soon as my script went into the restore portion, it sat there for 21 hours. Just an amazing amount of time. So it’s totally something to pay attention to if you restore a database and you see that it takes a long time. I’d probably pay more attention to it then or if you’re like planning on doing some setting up for log shipping or mirroring it might be something to look into. But overall, I think Tara is right. Check in on it like once a week or once a month. Again, once you fix it the first time, it’s hard for it to get out of hand again.
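A quick way to count VLFs on one database is DBCC LOGINFO, which returns one row per VLF. A rough sketch (on 2008 R2 and older, drop the RecoveryUnitId column):

    CREATE TABLE #vlf
    (
        RecoveryUnitId INT, FileId INT, FileSize BIGINT, StartOffset BIGINT,
        FSeqNo BIGINT, [Status] BIGINT, Parity BIGINT, CreateLSN NUMERIC(38, 0)
    );
    INSERT #vlf
    EXEC (N'DBCC LOGINFO WITH NO_INFOMSGS;');
    SELECT COUNT(*) AS vlf_count FROM #vlf;  -- each row is one virtual log file
    DROP TABLE #vlf;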

 

Erik Darling: “SQL 2012–” but this really can be asked for any question. I don’t think this is just for SQL Server 2012. “For app developers, do you recommend always using fully qualified object names? Example, database, schema, object, most specifically schema.” Yes, yeah. I would qualify schema and object.

Tara Kizer: Schema and object, yeah, but not the database really.

Erik Darling: Yeah, not database.

Tara Kizer: Unless you need to cross over to another database. You’re going to get the database from whatever database you connect to in your connection string, but if you need to cross databases, then that’s when you use a three-part name.

Erik Darling: Yep.
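In other words, two-part names everywhere, and a three-part name only when you genuinely cross databases. For example (the table and database names here are made up):

    -- Two-part name: schema plus object, the normal case.
    SELECT o.OrderId
    FROM dbo.Orders AS o;

    -- Three-part name: only when reaching into another database.
    SELECT a.AccountId
    FROM OtherDatabase.dbo.Accounts AS a;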

Richie Rump: It depends on how you architect the database though, right? I mean, if you’re not using schemas, then what does it really matter?

Tara Kizer: Well, yeah. I like the best practice of always using dbo. if you’re not using schemas.

Richie Rump: Yeah, I got into an argument at like a code camp or something with some Oracle guy that was telling me that I should always qualify with schema and all this other stuff. I’m like, “Look, what does it matter unless I’m actually using schemas? If I’m not using schemas, who really cares?”

Tara Kizer: Yeah.

Richie Rump: He huffed off in his Oracle, huffy, I’m-better-than-you way.

Erik Darling: Well, in Oracle everything is a schema, right?

Richie Rump: That’s right.

Erik Darling: You don’t have as much database separation as you do within SQL Server. In SQL Server, you have an instance and you create databases. In Oracle, you have a database and that’s your instance. If you want another database, you have to install another instance. So within a database, you may create lots of different schemas to separate user workloads out, but you still sort of end up with just this one monolithic pile of garbage. Not that I’m saying Oracle is garbage, it’s just tough to maintain all that.

 

Erik Darling: Obvious follow up on the cluster question. “Why aren’t multiple instances recommended?” For the same reason you don’t recommend multiple instances anywhere. It is a pain in the butt, the entire butt, to troubleshoot performance stuff when you have stacked instances. Anything else you guys want to add on that?

Tara Kizer: I don’t like to stack instances on a single server. So let’s say it’s a two-node cluster. If you have two instances and they were both running on the same node, that’s the same thing as stacked instances anyway. But now let’s take a look at a two-node cluster where we make it active-active. I know that that terminology isn’t used anymore—you know, isn’t the right terminology—but it is still used. So you’ve got an instance running on each node. I actually like that environment because we’re using the hardware. Microsoft highly recommends that you do not do this, that you run active-passive instead, so only one instance on a two-node cluster. The caveat is if you have an outage and you lose a node, both instances are now running on the same server, and can that hardware support the load of both instances? I always tell people, well, so what if we have an outage. The companies that I’ve worked for, we had these expensive contracts with the vendors to replace hardware within like four hours. So it’s not like we’re going to be running on the same node for two days. We’re going to have it replaced and fixed in probably less than an hour. But let’s just say it’s a major hardware issue, it’s going to be within four hours or so. So, I don’t care about that. I like to use all my hardware that I can.

Erik Darling: Cool.

 

Erik Darling: I think the last question that we have time for, “Is there ever an instance where a table variable is better than a temp table?” Do you guys have anything?

Tara Kizer: As far as better, I mean the only time is if you want to persist the data. If a transaction gets rolled back, the data gets rolled back in the temp table, but if you put it into a table variable, that data would still persist. It survives the rollback of the transaction, but I just use temp tables everywhere until I have a tool that won’t allow me to. For instance, SSIS, and I can’t remember the specific issue, but I had a stored procedure that was using a temp table and it would not work—there was some kind of bug in SSIS and I had to switch that specific code to use a table variable to get it to work, and it killed me to do this because I hate them for performance reasons, but it was the only way we could get the package to work. It’s a known bug apparently.
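That rollback behavior is easy to demo in a few lines:

    DECLARE @tv TABLE (id INT);
    CREATE TABLE #tt (id INT);

    BEGIN TRAN;
        INSERT @tv VALUES (1);
        INSERT #tt VALUES (1);
    ROLLBACK;

    SELECT COUNT(*) AS table_variable_rows FROM @tv;  -- 1: survives the rollback
    SELECT COUNT(*) AS temp_table_rows FROM #tt;      -- 0: rolled back
    DROP TABLE #tt;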

Erik Darling: So for me, there is one other sort of interesting side to table variables, and that is that they will never cause a recompilation. SQL doesn’t generate statistics for table variables, so if you use them in a stored procedure you will always, each and every single time, get the same heinous cardinality estimate, but you will never trigger a recompilation. The tricky, sucky thing is that the only way to get an accurate cardinality estimate out of a table variable is to use a recompile hint with it. So it’s just like, you have this feature, it won’t cause a recompile, but if you want good performance, you have to recompile. So my use of table variables is pretty much limited to code and tasks where performance does not matter. A perfect example is Ola Hallengren’s scripts: he sticks everything into table variables and runs off those, and they don’t use any temp tables, just because performance doesn’t matter. He’s just iterating over one object at a time anyway. So that’s the kind of stuff where I do the table variable, maybe, but not even then guaranteed. That brings us to 12:45. Welcome back, Tara. It was lovely having you here.

Tara Kizer: Yay.

Erik Darling: Answering these replication and clustering questions.

Tara Kizer: There were a lot today.

Erik Darling: There were a lot last time too and I just didn’t read them.

Tara Kizer: Sorry.

Richie Rump: Yeah, he just walked right over them. Just like blah, blah, blah.

Erik Darling: Like nope, nope, nope, nope, not happening. All right. Cool. See you guys next week. Thanks for coming. Goodbye.

Tara Kizer: Bye.

 


