Blog

SQL Interview Question: “Talk me through this screenshot.”

After writing about “For Technical Interviews, Don’t Ask Questions, Show Screenshots”, lots of folks asked what kinds of screenshots I’d show. Here’s this week’s example.

I show each screenshot on a projector (or shared desktop) to the candidate and say:

  1. What’s this screen from?
  2. What does the screen mean?
  3. If it were a server you inherited from someone else, would there be any actions you’d take?
  4. What questions might you want to ask before you take those actions?
  5. Would there be any drawbacks to your actions?
  6. What would be the benefits of your actions?
Rorschach test

After a few days, I’ll follow up with my own thoughts.


UPDATE 2016/05/20 – Great thoughts, everybody. This one was fun because it stems from real-life scenarios I’ve seen several times. You wouldn’t believe how long it takes folks to recognize this screen in real-life interviews – often it takes DBAs tens of seconds to realize they’re looking at TempDB. (They often start by talking about some other database because the file name tempdev is so deceiving.)

The DBA heard that they were supposed to create a file for each core, but they misunderstood the difference between cores and processors. The server had 2 processors, each with 4 cores – but they created 2 data files originally.

They had a super-fast SSD attached to the SQL Server as E:, and it’s got a relatively limited amount of space – say 128GB – so they started with small file sizes and let them autogrow.

At some point, the SSD ran out of space, so the DBA added another emergency overflow file on a slower drive (M:). Maybe they shrank it back manually, or maybe they have a job to shrink it – in either case, I get a little suspicious when I see small file sizes because there’s probably shrinking going on.

I got a chuckle out of the answer about the server being a dev box because the database file is named tempdev – even though I see a ton of servers, the default “tempdev” makes me pause every time because it was such an odd file name choice by Microsoft. Funny how everybody’s just as bad at naming conventions as I am.

So to answer the questions:

3. Would I take actions? I’d check to see if there are shrink jobs set up on TempDB, and if so, I’d start by disabling those. I might consider adding more TempDB data files, although if it only had one data file, I’d be a little more careful because it can have a funny side effect.

4. What questions would I ask? What wait types is this server facing? Is the E drive actually a good spot for TempDB? How are the file stats looking on that drive? Have we had a history of running out of space here? How big are the user databases? Are we sorting indexes in TempDB?

5. Any drawbacks? If TempDB is getting regularly hammered, and it runs out of space and needs the overflow file, I might not know it due to the shrinks. I’d start by disabling the shrink jobs so that I can see if this thing grows, and what it ends up growing to. That’ll help me plan for capacity.

6. Benefits to my actions? Some folks mentioned adding files or pre-growing files can make it faster for end users, but be really careful there. Anytime you say something will be faster, then as an interviewer, I’m going to challenge you to define what you would measure, and how it would change. If you don’t have metrics at the ready, then I’m going to suspect cargo cult programming.

[Video] Office Hours 2016 2016/05/11

This week, Brent, Angie, Erik, Jessica, Richie, and Tara discuss backups, failover events, tempdb errors, errors, other errors… oh, did we mention errors?

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


How do you feel about third party backup software?

Jessica Connors: We have a question from Marcy. She says, “Any experience/thoughts on backup software? Do you know if doing a local backup will invalidate the restore chain up to the time of the last full?”

Erik Darling: Yes, unless you do copy only.

Brent Ozar: So copy only, what kinds of backups does that affect? Like if you take a full backup with copy only what does it do?

Erik Darling: It takes a full backup of the database without changing the differential base LSN. So you can take a copy of it – those are really good for restoring to refresh dev databases or doing other stuff like that.

Brent Ozar: It’s particularly important for differentials if you’re the kind of shop that’s taking differentials. If all you’re doing is transaction log backups, it doesn’t matter – transaction log backups can hook up to any full. It just really matters when you’re doing differentials.

Erik Darling: Yep.
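
For reference, here’s a minimal sketch of a copy-only full backup – the database name and path are made up:

-- Hypothetical database name and path. A copy-only full backup doesn't
-- reset the differential base, so the existing chain of fulls, diffs,
-- and log backups is unaffected.
BACKUP DATABASE [YourDatabase]
TO DISK = N'E:\Backups\YourDatabase_CopyOnly.bak'
WITH COPY_ONLY, COMPRESSION, STATS = 10;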

Brent Ozar: Often when people ask this, “What kind of backup software are you using?” they’re trying to mix native backups with something like Backup Exec or Veritas. When you’re doing your own backups alongside those, you want copy only – and often those third-party options don’t have that config setting.

Erik Darling: Or a lot of people if they’re doing something like log shipping won’t be able to use the jobs inside there or like they’ll have their backup software still timed. I’m like, “No.”

Brent Ozar: Yeah, oh, that’s a good point, too.

Erik Darling: That won’t work.

Brent Ozar: If you’re doing log shipping, don’t use the Backup Exec or Veritas log backup jobs. I don’t know that we even need to tell them that because within five seconds of doing it, you’re going to figure out it’s broken.

Erik Darling: Unless you’re not learning or anything.

Brent Ozar: Well that’s true. That’s true.

Jessica Connors: Hey, Brent. Your email is popping up on the slides.

Brent Ozar: …Twitter. Like how does that happen?

Jessica Connors: Uh-oh.

Brent Ozar: Now that people know it, they’re going to be tweeting like crazy and having it show up on the webcast.

Erik Darling: I’m famous.
[Laughter]

 

What’s your favorite kind of backup software?

Jessica Connors: Do we prefer any certain kind of backups? Like on SQL Critical Care, like the intro or our sales calls, they say they’re either using SQL native backups, they’re not doing transaction logs, or they’re using [inaudible 00:01:58] third-party software. Is there one that’s standard? Better than the other?

Brent Ozar: All right, we should go around through alphabetical order. Angie, what’s your favorite kind of backup?

Angie Walker: Ola Hallengren.

Brent Ozar: Let’s see here. The other Brent Ozar in the black shirt.

[Laughter]

Erik Darling: I like Dell LiteSpeed.

Brent Ozar: Why do you like it?

Erik Darling: Because it has cool stuff other than backups. You can read transaction logs with it which I love and you can do object-level restores which I love.

Brent Ozar: Yeah.

Erik Darling: DBA-friendly stuff.

Brent Ozar: I would have to say as the other Brent Ozar, I agree with you there. The ability to pluck objects out of a restore, that’s freaking phenomenal. To be fair, IDERA SQL Safe and Redgate SQL Backup have those same capabilities as well. Richie, how about you? What’s your favorite kind of backup?

Richie Rump: A DBA that does them.

Brent Ozar: Yeah, right. Or Azure? That was your chance to say Azure.

Richie Rump: I said that last time. I said that last time.

Erik Darling: SAN snapshots.

Brent Ozar: Tara, how about you?

Tara Kizer: I used LiteSpeed back when it was Quest. We converted over to Redgate SQL Backup just because of the cost reasons. They were basically the same product but Redgate’s solution was just so much cheaper and they threw in SQL Bundle for the whole DBA team at a job a few years ago. But native backups with compression. As long as your backups are compressed, that’s what I’m happy about. Then having full backup, possibly differentials and transaction logs, very frequent transaction log backups.

Brent Ozar: I like it.

 

Should you run CHECKDB on a log shipped secondary?

Jessica Connors: All right. Let’s move on to a question from Heather. She says, “Do you need to run DBCC CHECKDB on a log shipped secondary that is always in restoring state?”

Tara Kizer: So you can’t, right? I mean because you’re not able to run commands on it.

Brent Ozar: Can you take a snapshot on a database that’s in restoring?

Tara Kizer: No.

Brent Ozar: I didn’t think so.

Erik Darling: No. You could bring it into standby and do that but…

Tara Kizer: You can bring it into standby mode, but every single time log shipping has to do the restore – which is generally every 15 minutes or more frequently – is your DBCC CHECKDB command going to complete in that window? It’s not going to on larger databases. So, you’re screwed there.

Erik Darling: Well log shipping won’t pick back up until you take it out of standby. So it will just accumulate logs.

Brent Ozar: Oh, no. It will keep right on going. It will just go to the next log.

Tara Kizer: You’d have to disable the jobs for it to stop.

Brent Ozar: Especially if you’re one of those guys who sets up log shipping to kill all the user connections whenever it’s time to do a restore.

Erik Darling: I’m not one of those guys.

Brent Ozar: Nice.
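
For context, here’s a rough sketch of the standby approach Tara and Erik described – database name and paths are placeholders, and the log shipping restore job would have to wait (or be disabled) while this runs:

-- Restoring a log backup WITH STANDBY leaves the secondary read-only,
-- so you could run CHECKDB against it between restores.
RESTORE LOG [YourDatabase]
FROM DISK = N'M:\LogShipping\YourDatabase_201605.trn'
WITH STANDBY = N'M:\LogShipping\YourDatabase_undo.dat';

DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;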

 

Why don’t my Extended Events work when I fail over?

Jessica Connors: All right. Nick asks, “Is it possible to hook into failover events? My extended events are never on after it fails over. It doesn’t happen often but it’s a pain to lose all my query [inaudible 00:04:35] events.”

Brent Ozar: Oh, wow. I bet you’re using AlwaysOn Availability Groups. So why wouldn’t you just run it all the time? If it’s an extended events session why wouldn’t you run it on every server?

Erik Darling: Oh, I bet he doesn’t have them set to automatically start when SQL starts.

Brent Ozar: When the server starts?

Erik Darling: Yeah.

Brent Ozar: Oh. Okay. So if it’s a failover cluster, you probably want it to just start every time the SQL Server starts up. If it’s an availability group and you’re failing the AG around from node to node, I would have it run with server startup there too, because people could start running queries on the secondary. If you want to, Nick, follow up with more about your question and your scenario – that’s curious, I’d like to hear more about that. If it’s just extended events, I don’t know that I want to hear more about it, but I probably should.
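
For reference, a minimal sketch of the setting Erik is describing – session name, event, and file path are made up:

-- STARTUP_STATE = ON makes the session start whenever SQL Server starts,
-- so it comes back automatically after a cluster or AG failover.
-- In practice you'd add a WHERE predicate so it doesn't capture everything.
CREATE EVENT SESSION [TrackQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file (SET filename = N'E:\XE\TrackQueries.xel')
WITH (STARTUP_STATE = ON);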

 

How do you know when Resource Governor is causing a bottleneck?

Jessica Connors: Question from Dennis. He says, “How do you know when the Resource Governor is causing a wait type?”

Brent Ozar: Oh, that’s a good question. Oh, without hitting the Resource Governor DMVs, I don’t know if you could. There’s Resource Governor DMVs where you could look at the pool.

Erik Darling: What wait type specifically?

Brent Ozar: SOS_SCHEDULER_YIELD, so if you’re banging up against CPU. If you have a CPU limit [inaudible 00:05:47] latch, now if you’re dealing with the new storage waits. That is such a good question. I’m actually going to look at that in the transcripts because that’s a great idea for a blog post – I’d want it to hit first in Ask Brent. I would want to know during a five-second sample that queries are being throttled by Resource Governor, and I don’t know how to do that offhand. If any of the listeners watching know, feel free to post an answer in the chat, because we would all love to see that and then you’ll save us from doing work. We don’t really want to do work today. We’d rather surf right now.
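
The DMVs Brent mentions do exist; here’s a hedged starting point (cumulative counters since the statistics start time, not a five-second sample):

-- Queued requests and CPU limit violations per workload group hint at
-- active throttling; the pools show cumulative CPU use.
SELECT name, statistics_start_time,
       total_request_count, total_queued_request_count,
       total_cpu_limit_violation_count
FROM sys.dm_resource_governor_workload_groups;

SELECT name, total_cpu_usage_ms
FROM sys.dm_resource_governor_resource_pools;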

Angie Walker: Hey, I have to work today after this.

Richie Rump: I haven’t been able to work today. So I would like to do some work today, how about that?

Brent Ozar: Oh, I’m scheduled in meetings with all of you. I guess that’s actually work.

Angie Walker: For you.

Brent Ozar: Yes, for me.

 

Are there any bugs with SQL Server 2014 SP1 CU6?

Jessica Connors: Let’s see, do we know of any issues or bugs with SQL 2014 SP1 CU6? We have our buddy Scott – he is about ready to deploy to production after lab testing.

Tara Kizer: That’s very specific.

Brent Ozar: There’s one with Hekaton. If you’re using Hekaton, there’s an issue with backups. I don’t know that the hotfix is available to the public yet. Right now they’re saying if you have Hekaton and you’d like backups – because, you know, they’re kind of nice to have – then you want to stick with CU3, I believe it is. There’s a regression that came in with 4, 5, and 6. That’s the only one that I’ve heard of though.

Angie Walker: I used SP1 CU6 at my last place. We rolled it out. We did dev and staging, all our environments. We didn’t have any problems but it’s going to [inaudible 00:07:28] vary everywhere you go. So if you tested it in your environment…

Brent Ozar: I call BS on that because this came out April 19th. You’ve been working with us for a while.

Angie Walker: Oh, no, no, no, maybe it was…

Brent Ozar: Are you holding out two jobs? Are you working somewhere else on the side?

Erik Darling: Moonlighting.

Angie Walker: Puppy databases.

[Laughter]

Angie Walker: I think I missed the SP1 part. We were just running CU6. I think RTM CU6.

Brent Ozar: There you go.

Angie Walker: Never mind.

 

How do I move a lot of data into the cloud?

Jessica Connors: Phyllis says, “I’m a developer and have restored a backup to my local machine. I need to remove most of the data so it can be easily copied up to Azure for testing. Most of the data is in three tables. These three tables have between 500 million and 800 million records I need to delete. What is the fastest way to delete this amount of data?”

Erik Darling: Don’t do it.

Jessica Connors: “I have a bunch of foreign keys in the mix so I don’t think I can just copy the data to keep, truncate, and move it back.”

Erik Darling: You don’t have to do any of that. You don’t have to do a lick of work. You can script out your database with statistics only and you can put that anywhere. You can actually just run that as a script and it will create all your tables and it will create the associated statistics. Then every query you run, SQL will act like it’s hitting those tables because the statistics will feed the optimizer certain information. I think there’s a blog post out there somewhere about it that I’ll dig up.

Richie Rump: Right. But if in this, so if you’re moving it up to Azure for testing, you’re probably doing some app testing. How would that work with that, Brent Two?

Brent Ozar: It’s going to fail anyway. It’s Azure.

Richie Rump: Oh.

Brent Ozar: I kid. So to elaborate on Erik’s answer, because not everybody is going to get this: you have to go into Tools, Options in SSMS. Scripting the statistics is not on by default. So go into Tools, Options, and there’s a set of scripting options. Whenever you go to script out a database, you need to include statistics and things like partitions and partition functions – there’s a bunch of things that aren’t included by default. Then after you change those options, you can right-click on the database and go Script, and there’s a little wizard for it. So Richie brings up a great question – if there are some parts of some tables that you want to keep, you can script out those tables. You can script them as inserts, with the data inside there. Or, if you’re a developer, dude, you know how to get data out. You go through and select the data out and you go insert it in somewhere else. So create the schema up in Azure first, because [inaudible 00:09:53] that’s going to fail – you’re going to have some kind of objects that aren’t supported in Azure SQL DB. You go figure out what those are, and then after you fix those, you go insert the data up there, just the parts you need.

Tara Kizer: If you have queries that can determine which parts you need, you could just BCP the data out using those queries, using a view, or a query, and then BCP that data into Azure.

Brent Ozar: Yeah, it’s going to be fast.

Tara Kizer: Or SSIS, you know, import/export wizard, whatever.

Richie Rump: That’s what I would normally do is if I need to copy from one place to another, it would just be a query or something, dump into a clean schema and then away we go.

Jessica Connors: All right, Nick R., extended events, Brent.

Brent Ozar: Oh.

 

How do I capture queries that last longer than 5 seconds?

Jessica Connors: He said, “It’s actually not an AlwaysOn Availability Group. It’s a Windows cluster.” He wasn’t aware of the setting to automatically start extended events when SQL Server starts. He is looking at any query that runs longer than five seconds, doesn’t have a DBA, so he uses the extended events to hit anyone with a stick. [Inaudible 00:10:47]

Brent Ozar: I like it.

Erik Darling: Five seconds seems a little bit low to me unless there’s something really cool about your environment where you have like an SLA of under ten seconds. But any query over five seconds seems a little punch happy to me.

Tara Kizer: I used to support a system that had an SLA of three hundred milliseconds. It was a big deal. So five seconds, someone would have been at my door.

Brent Ozar: So Tara… she stood there with a kill command ready to fire.

[Laughter]

Brent Ozar: We should also say too, so now you learned something – you have the session setting for extended events to turn it on at startup. While we’re talking extended events, we actually like extended events. Be aware that the more stuff you capture, the more you can cause a performance problem. So if you get things like the query plan, and sometimes if you get the full text of the query, you can cause incredible slowdowns. So just make sure that you’re gathering as little data as possible in order to get your stick on.
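
A hedged sketch of the kind of lightweight session Nick describes – the name and path are made up, duration is in microseconds, and the sql_text action is the most expensive thing captured here:

CREATE EVENT SESSION [QueriesOver5Sec] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text)
    WHERE duration >= 5000000 )   -- 5 seconds, expressed in microseconds
ADD TARGET package0.event_file (SET filename = N'E:\XE\QueriesOver5Sec.xel')
WITH (STARTUP_STATE = ON);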

Tara Kizer: We did our query performance monitoring outside of SQL Server – instead, the application logged all this information and we used Splunk to churn through the data, and it would send alerts if performance was poor. So we didn’t have to add any overhead to SQL Server.

Brent Ozar: Did you just have like your own in-house app that would start logging whenever you called SQL Server…?

Tara Kizer: I don’t know how the developers did it but it was all developer magic to me. Then we had Splunk.

Brent Ozar: Smart people like Richie. I would say too, if you’re interested in that and you’re a developer, check out the MVC MiniProfiler. The MVC MiniProfiler, you don’t have to be using the MVC design pattern but the MVC MiniProfiler will log whenever you write queries. It will log what queries they are. You can log it to, I want to say, not just SQL Server but Redis and other caching type layers. So you can then analyze which queries are taking the longest in your application. It was totally open source, totally free, it’s by the guys behind Stack Overflow and it’s what they use in order to monitor their query…

 

Have you ever seen this TempDB error….

Jessica Connors: A question from Justin. He’s wondering if we have ever seen this error in tempdb. Have you seen a case where a query causes this error in tempdb? The error is “property size not available.”

Brent Ozar: I have when running SSMS 2016 against an older SQL Server instance – older being like 2014 or things like that. It has something to do with it expecting in-memory objects in there. So make sure you’re on the latest SSMS 2016 or release candidate or whatever they’re calling it. Or just try with SSMS 2014.

Jessica Connors: Let’s go back to Nick. He says, “Is there a way to know if my extended events are taking too many resources? I don’t log too much but Brent mentioned the query text which I do log.”

Brent Ozar: There’s a bunch of wait types. If you go to brentozar.com/askbrent, Ask Brent will give you your wait types on a SQL Server, and if your extended events wait types show up big on the list, it can be an issue. It doesn’t mean that it is – you may just be tracing a bunch of stuff, and that doesn’t mean it’s a bottleneck. We talk more about the wait types in Ask Brent’s documentation. What I would say too is, when you set up your extended events session, log asynchronously off somewhere else to a file and allow multiple event loss. There are settings to say, “Don’t lose an event no matter what it is.” You don’t really need that. Allow multiple event loss in case the SQL Server is under pressure. That way you increase the likelihood that you’re not going to slow the server down.
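
One quick sanity check, as a sketch: sys.dm_xe_sessions keeps counters of dropped events and buffers, which at least tells you whether a session is struggling to keep up with the workload it’s tracing.

SELECT name, total_buffer_size,
       dropped_event_count, dropped_buffer_count
FROM sys.dm_xe_sessions;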

 

How do I know if my network is my backup bottleneck?

Jessica Connors: Let’s go back to backups. Jonathan says, “In my quest to make our backups faster, I found that specifying the buffer count and max transfer size values has increased speed immensely, while striping the backup to multiple files has had no effect. My server is not under significant CPU or disk pressure at this time. Is network bandwidth my limiting factor?”

Tara Kizer: You can test it by backing up to the nul device to see what your throughput should be without any external factors such as the network. So do the backup command, do it to the nul device instead and that will tell you what your system can support. Then you compare that to what your backup time is if it’s going to a NAS device and you know what the network is doing at that point. But buffer count and max transfer size can make significant improvements in your backup times. There are specific values you can pass it if your backups are to a NAS or to a SAN or a local drive. I don’t have those numbers memorized but I think there’s some blog articles out there that can tell you what the optimal values are for those two for wherever you’re backing up, wherever you’re sending your backups to.
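
A sketch of the throughput test Tara describes – the database name is a placeholder, and the BUFFERCOUNT and MAXTRANSFERSIZE values are just examples to tune, not recommendations:

-- Backing up to the NUL device takes the network and target disk out of
-- the picture; COPY_ONLY keeps the test from disturbing the real backup chain.
BACKUP DATABASE [YourDatabase]
TO DISK = N'NUL'
WITH COPY_ONLY, COMPRESSION,
     BUFFERCOUNT = 50,
     MAXTRANSFERSIZE = 4194304,   -- 4MB, the maximum
     STATS = 10;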

Erik Darling: Another limiting factor might be the disk you’re backing up to. If you’re only backing up to one disk, you could just be saturating that disk with the backup. So it might be a bottleneck there. Another thing to be careful of is if you’re altering buffer count and max transfer size and you’re also using compression, compression has a [siren wails in background]. Whoa.

[Laughter]

Excuse me.

Tara Kizer: New York City.

Erik Darling: Crime free. When you use compression, there’s three times the memory penalty. So if you’re altering buffer count and max transfer size, you want to watch your memory usage during backups because there are three different streams – the read, compress, and write streams. So there’s three times the memory overhead of a regular backup. Just be careful with that. Don’t set your values up too high, you could crash your server.

 

Are there issues with the latest Visual Studio database projects?

Jessica Connors: Let’s move on to a question from Sheldon. “Do you know of any issues with upgrading to the latest versions of SQL database projects in Visual Studio that might impact releases?”

Brent Ozar: Richie, any word on that? I don’t think any of us use it.

Erik Darling: I’m trying to Brady Bunch point at Richie, but it’s not working.

[Laughter]

Richie Rump: Yeah, I know nothing, especially as of the last couple Visual Studio versions – they’ve done a pretty good job about not breaking anything, even in the solution. Because before, we used to have different solutions but we could link to the same files underneath, because the different Visual Studio versions had different stuff inside of them. But now, especially starting in Visual Studio 2013 and especially now in 2015, everyone on those lower versions can use it. So I would test it out, upgrade it, have a couple people look into it, but yeah, you shouldn’t have any problems with it at all. Thank you for using it because not enough people use database projects. I think it’s a neat little tool there.

Tara Kizer: We used it at my last job extensively. It has been five months, so obviously as far as latest versions go, service packs, we went beyond the latest versions from when I left at least. But we didn’t have any issues with the various versions. As long as you are running the right Visual Studio version or lower for the SQL Server version, you’re okay. If you are trying to use a higher version of Visual Studio than SQL Server – I forget what the versions were, but Visual Studio 2013 you would use for SQL Server 2014, and if you have SQL 2012, you would use Visual Studio 2012. But you could also use VS 2010, limited.

Richie Rump: Yeah, don’t use 2010. Let’s not go back there.

Tara Kizer: We had to have it installed as well as Visual Studio 2008 because we had some SQL Server 2008 R2 things that we still were supporting so we had multiple versions of Visual Studio on our desktops.

Richie Rump: I’m glad I’m out of that game and not having four versions of Visual Studio anymore.

Jessica Connors: All right, another error somebody is seeing.

Erik Darling: Yay.

 

How do I know if I have a memory problem on a 296GB RAM box?

Jessica Connors: It says, “There is insufficient memory available in the buffer pool during very busy times. It’s a 296GB RAM box. Is this because SQL tries to allocate a minimal amount of RAM for a query?”

Brent Ozar: You know what you want to do is run sp_Blitz. sp_Blitz will tell you the number of times you’ve had forced memory grants. Forced memory grants are when SQL Server says, “Look, I know you want a gig of RAM to run the [inaudible 00:18:51]. It’s $3.95 and 14KB of RAM.” SQL Server tracks this in the DMVs. You can see the number of times it’s happened since startup. It doesn’t tell you when they’ve happened, it just tells you that they’ve happened. So run sp_Blitz and it will tell you all kinds of things about forced memory grants. It’s just a nice thing to run in terms of a health check too. It will tell you things like if you have suspiciously high free memory, which can indicate that queries are getting a large memory grant and then releasing it. Lots of neat memory troubleshooting things that we’ve added across the last five or ten releases of that.

Jessica Connors: People are just copying and pasting their error messages now. You can’t do that.

[Laughter]

Erik Darling: Jessica has to read this stuff, man.

Brent Ozar: “Have you ever seen a rash this bad?”

Angie Walker: Thank god they didn’t send us pictures.

[Laughter]

 

How do you deploy code with zero downtime?

Jessica Connors: David has an actual question. He says, “Do you know a way to apply a new app release with zero downtime?”

Tara Kizer: Yeah, just make sure your code is backwards compatible. If you’re going to be adding columns, make sure your application is not using SELECT *. You’re altering stored procedures; if you’re adding new stored procedures, those get added before you change your code. Yeah, you could definitely do it with zero downtime. We did it all the time. We had releases every two weeks for the e-commerce website and that was with zero downtime. They used farms of virtual servers for the web tier and all that stuff, and then they just made sure all the stored procedure code and all the schema changes were always backwards compatible. So no matter what version of the application they ended up using after the deployment was done, it still worked. We didn’t have to roll back the database changes.

Erik Darling: Another thing you can do is only add new features. Don’t fix anything old.

[Laughter]

Brent Ozar: The trendy term for developers is called additive changes, that you’re only adding things, you’re never taking things away. If you want to see how Stack Overflow does it, Nick Craver, their site-reliability engineer, wrote a blog post called “Stack Overflow: How We Do Deployment – 2016 Edition.” He goes into insane details about how they do deployments with near zero downtime. Another site if you’re interested in this kind of thing is highscalability.com. High Scalability profiles a lot of websites and how they do database and IIS and Linux-type deployments. There’s a lot of spam in there, there’s a lot of noise. But they’ve got some good signal from time to time, like how Etsy does deployments.

Richie Rump: Yeah, it’s definitely a practice. You’re not going to just like jump into it and say all of a sudden “I’m doing it.” It takes a lot of work to change the way you do development in order to get zero downtime. It’s definitely an effort where all levels need to be bought into it.

 

Where can I learn more about columnstore indexes?

Jessica Connors: Our friend Nick is back. He says, “Any great resources on understanding columnstore indexes? Trying to wrap my head around them but I can’t figure out when to use them over row storing and how to set them up.”

Erik Darling: Niko Neugebauer, I believe that’s how you pronounce that last name. I’ll get a link to it but he’s done like an 80 bazillion-part series on columnstore indexes which answers more questions than you may even possibly have. He started like when they first dropped and he’s sort of cataloged things through until now. So there’s a lot of good information in there.

Brent Ozar: His website is nikoport.com, N-I-K-O-P-O-R-T dot com. He’s from Portugal. Niko is his first name, so that’s where that URL comes from, and there are 80-some parts in that blog post series. Where it’s especially perfect is data warehouses, where you have a fact table that’s really wide, so it’s got lots of columns in it, and really deep, so it’s got lots of rows in it. You can never predict what users are going to filter on or what they’re going to sort by, and the table is highly compressible because it has the same data in it over and over again – like sale dates, which compress really well. Quantities, those just compress really well as well. So it’s not unusual to see like an 80, 90, 95 percent compression rate with columnstore tables, but it is very specifically for data warehouses. It is not for OLTP.

Erik Darling: And it does a lot of really great things for like aggregate queries too.

Brent Ozar: Yeah.

Erik Darling: So it’s really, whiz bang on that.
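
As a rough illustration (the fact table here is made up), this is the data warehouse pattern Brent describes – a wide, deep, compressible table that gets scanned and aggregated:

-- SQL Server 2014+: a clustered columnstore index compresses the whole table.
CREATE CLUSTERED COLUMNSTORE INDEX [ccx_FactSales] ON [dbo].[FactSales];

-- Typical aggregate query that benefits from batch mode and segment elimination.
SELECT [SaleDate], SUM([Quantity]) AS [TotalQuantity]
FROM [dbo].[FactSales]
GROUP BY [SaleDate];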

 

How do you make Reporting Services highly available?

Jessica Connors: Let’s talk about HA. Monica had asked, “What are the best options for SSRS HA? We are most likely moving to an HA DR solution using a failover cluster with log shipping. Right now we run SSRS on our main production server.”

Angie Walker: Where’s Doug?

[Laughter]

Brent Ozar: I sat in his [inaudible 00:23:36] session so I know this answer. What you do for SSRS is you run it in a bunch of virtual machines. You run it in a whole bunch of virtual machines that are behind a load balancer and if any one of them trips and falls over, you just don’t care as long as there’s others that are all pointed to the same report server DB. So if you want to patch one, you can, totally not a problem. SSRS, relatively lightweight resource requirements so you don’t need physical boxes for it in most cases. You can just get by with relatively small VMs, but a lot of them.

Jessica Connors: Michael keeps copying and pasting that error.

[Laughter]

Angie Walker: Jessica is not reading it, sorry.

 

Why can’t I connect to my server?

Brent Ozar: Connectivity issues. So Michael asked a connectivity issue thing. We’ll be honest, it’s way faster – there’s a slide on here that says, “For multi-paragraph questions, go ahead and ask those at dba.stackexchange.com.” We’ve got that up on the screen right now. Go ahead and put dba.stackexchange.com in your browser whenever you’ve got multiple paragraphs involved in your question or error message. It’s a wonderful site – really love it a lot, because people other than us answer the questions. That’s why I’m one of its biggest fans.

Erik Darling: Yeah, I mean, just generally looking at that, I would just say make sure that SQL Browser is turned on.

Tara Kizer: Browser, and then see if you can telnet to the port from the client machine – do a telnet session and telnet to the SQL Server with the port. If it returns a blank screen, it means your connectivity is fine. If you get an error, you’ve got something blocking the access – network, firewalls, something.

Erik Darling: Something is amok.

 

Are there any drawbacks with trace flags 1204 and 1222?

Jessica Connors: Let’s do one more. Marcy asks, “I know I can try this on a pre-production server just wondering if you do or do not recommend setting trace flags 1204 and 1222 to get additional deadlock information?”

Erik Darling: Yes. But if you are on a newer version of SQL, you can get really great information from the extended events session. There are queries out there to do that if you feel like [inaudible 00:25:36 doggeling].

Brent Ozar: Yeah, really good. So if you Google for “extended events deadlock session” there’s one blog post that’s notoriously great for this. Read the comments of the blog post. I can’t remember the author’s name, but if the webpage is black, keep reading through all the comments – there’s lots of improvements to the query inside the comments.
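
For reference, here’s a hedged sketch of both options – the trace flags write deadlock details to the error log, and on 2008 and up the built-in system_health session already captures deadlock graphs you can query back out:

-- Option 1: old-school trace flags, enabled globally.
DBCC TRACEON (1204, 1222, -1);

-- Option 2: pull captured deadlock graphs from system_health's ring buffer target.
SELECT x.ev.query('(data/value/deadlock)[1]') AS deadlock_graph
FROM ( SELECT CAST(st.target_data AS XML) AS TargetData
       FROM sys.dm_xe_session_targets AS st
       JOIN sys.dm_xe_sessions AS s
         ON s.[address] = st.event_session_address
       WHERE s.name = N'system_health'
         AND st.target_name = N'ring_buffer' ) AS src
CROSS APPLY src.TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x (ev);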

Jessica Connors: Cool. Well it’s 11:45 here in Chicago. So…

Erik Darling: 12:45 New York time.

Brent Ozar: Time to start drinking, whoohoo.

Erik Darling: Water.

Brent Ozar: All right, bye everybody. See you all next week.

All: Bye.

What I Look For When I’m Hiring Database Professionals

Matan, Guy, and I recording the podcast

On today’s episode of the SQL Server Radio podcast, I talk with Guy Glantser and Matan Yungman about what we look for when we’re hiring.

In the broadest sense, don’t think junior or senior:

  • I’m hiring someone for what they already know, or
  • I’m hiring someone for their capability to learn

(In reality, it’s usually a blend of both, but just think big picture for now.)

If I’m hiring you for what you already know, then I’ve got a list of skills, and I want to see your proficiency in those skills. If one of those skills includes communication, then I’m going to judge you based on how you communicate your mastery of the other skills. For example, I might be looking at your blog posts, presentations, or webcasts about the topics you’re great at.

If I’m hiring you for your capability to learn, then I want to see what you’ve been interested in learning in the past, and how you’ve gone about learning those topics. It doesn’t have to be technical, either – maybe you were interested in perfecting an Eggs Benedict recipe. Show me what resources you used, your preferred style of learning, what lessons you picked up along the way, and how you would recommend that I learn that same thing as fast as possible.

To hear more about my philosophies on that, and hear how Guy and Matan approach hiring for their own companies, check out the half-hour SQLServerRadio podcast.

Implicit vs. Explicit Conversion

Everyone knows Implicit Conversion is bad

It can ruin SARGability, defeat index usage, and burn up your CPU like it needs some Valtrex. But what about explicit conversion? Is there any overhead? Turns out, SQL is just as happy with explicit conversion as it is with passing in the correct datatype in the first place.

Here’s a short demo:

SET NOCOUNT ON;	

SELECT
    ISNULL([x].[ID], 0) AS [ID] ,
    ISNULL(CAST([x].[TextColumn] AS VARCHAR(10)), 'A') AS [TextColumn]
INTO
    [dbo].[Conversion]
FROM
    ( SELECT TOP 1000000
        ROW_NUMBER() OVER ( ORDER BY ( SELECT NULL ) ) ,
        REPLICATE('A', ABS(CONVERT(INT, CHECKSUM(NEWID()))) % 10 + 1)
      FROM
        [sys].[messages] AS [m] ) [x] ( [ID], [TextColumn] );

ALTER TABLE [dbo].[Conversion] ADD CONSTRAINT [pk_conversion_id] PRIMARY KEY CLUSTERED ([ID]);

CREATE NONCLUSTERED INDEX [ix_text] ON [dbo].[Conversion] ([TextColumn])

One table, one million rows, two columns! Just like real life! Let’s throw some queries at it. The first one will use the wrong datatype, the second one will cast the wrong datatype as the right datatype, and the third one is our control query. It uses the right datatype.

SET STATISTICS TIME, IO ON 

DECLARE @txt NVARCHAR(10) = N'A',
@id	INT

SELECT @id = [c1].[ID]
FROM [dbo].[Conversion] AS [c1]
WHERE [c1].[TextColumn] = @txt
GO 

DECLARE @txt NVARCHAR(10) = N'A',
@id	INT

SELECT @id = [c1].[ID]
FROM [dbo].[Conversion] AS [c1]
WHERE [c1].[TextColumn] = CAST(@txt AS VARCHAR(10) )
GO 

DECLARE @txt VARCHAR(10) = 'A',
@id	INT

SELECT @id = [c1].[ID]
FROM [dbo].[Conversion] AS [c1]
WHERE [c1].[TextColumn] = @txt 
GO

The results shouldn’t surprise most of you. From statistics time and I/O, the first query is El Stinko. The second two were within 1ms of each other, and the reads were always the same over every execution. Very little CPU, far fewer reads.

Query 1:
Table 'Conversion'. Scan count 1, logical reads 738, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 47 ms,  elapsed time = 47 ms.

Query 2:
Table 'Conversion'. Scan count 1, logical reads 63, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 4 ms.

Query 3:
Table 'Conversion'. Scan count 1, logical reads 63, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 5 ms.

So there you go

Explicit conversion of parameter datatypes doesn’t carry any horrible overhead. Is it easier to just pass in the correct datatype? Yeah, probably, but you might be in a position where you can’t control the parameter datatype that’s incoming, but you can CAST or CONVERT it where it touches data.

Thanks for reading!

Brent says: the key here is that we’re taking an incoming NVARCHAR variable, and casting it in our query to be VARCHAR to match the table definition. This only works if you can guarantee that the app isn’t going to pass in unicode – but in most situations, that’s true, because the same app is also responsible for inserting/updating data in this same table, so it’s already used to working with VARCHAR data. Also, just to be clear, Erik’s talking about casting the variable – NOT every row in the table. That part still blows.

New Online Classes, Including Performance Tuning When You Can’t Fix the Queries

Every time we announce a new SQLSaturday Pre-Con, the number one question I get is, “When are you going to teach Performance Tuning When You Can’t Fix the Queries in my city?”

Since I can’t get to all cities, let’s do this online!

The hip bone’s connected to the … wait, this is the wrong set of slides.

Performance Tuning When You Can’t Fix the Queries – Friday, May 20th Online, $299 – Your users are frustrated because the app is too slow, but you can’t change the queries. Maybe it’s a third party app, or maybe you’re using generated code, or maybe you’re just not allowed to change it. Learn more.

SQL Server Performance Tuning – Tuesday-Friday, July 5-9 Online, $3,995 – You’re a developer or DBA who needs to speed up a database server that you don’t fully understand – but that’s about to change in a class of learning and fun with Brent. Learn more.

The Senior DBA Class of 2016 – Tuesday-Friday, July 26-29 Online, $3,995 – You’re a SQL Server DBA who is ready to advance to the next level in your career but aren’t sure how to fully master your environment and drive the right architectural changes. Learn more.

These online training classes run from 9AM to 5PM Eastern time using GoToWebinar, live with Brent Ozar, with a one-hour lunch break. Audio can come through either your computer audio (headset recommended), or by dialing into a US phone number. For best performance, test your computer’s compatibility before the meeting to make sure you’re not blocked by a corporate firewall. After purchasing your ticket, you’ll receive an email with your personal GoToWebinar link – don’t share this between users, because only one person will be able to log into the meeting with it. During the session, you can ask questions via text chat, or at the end during the audio Q&A as well.

Or if you’d like to join me at an in-person event, check out our upcoming classes.

Why monitoring SQL Server is more important than ever

Moving parts

SQL Server keeps on growing. With every new edition, you get more features, feature enhancements, and uh, “feature enhancements”. As I’m writing this, SQL Server 2005 is less than a week away from support ending, and SQL Server 2016 is up to RC2. Brent’s retrospective post got me thinking a bit.

We went from Log Shipping, to Log Shipping and Mirroring, to Log Shipping and Mirroring and FCIs (yeah, I know, but Clustering 2005 was a horror show), to Log Shipping and Mirroring and FCIs and AGs, and Microsoft now keeps finding ways to add Replicas and whatnot to AGs. Simple up/down monitoring on these isn’t enough.

Dumbfish

You need to make sure your servers are keeping up on about half a dozen different levels. Network, disks (even more if you’re on a SAN), CPU, memory, etc. If you’re virtualized as well, you have a whole extra layer of nonsense to involve in your troubleshooting.

And this is just for you infrastructure guys and gals.

For those of you in the perf tuning coven, you have to know exactly what happened and when. Or what’s killing you now.

Tiny bubbles

SQL Server has a pretty limited memory when it comes to these things. Prior to 2016 – with the advent of Query Store, and a ‘bug fix‘ to stop clearing out some index DMV usage data – your plan cache and index DMVs may not have all that much actionable or historical information in them.

And none of them keep a running log of what happened and when. Unless you have a team of highly specialized, highly paid barely cognizant familiars mashing F5 in 30 second intervals 24/7 to capture workload metrics and details, you’re not going to be able to do any really meaningful forensics on a performance hiccup or outage. Especially if some wiseguy decides the only thing that will fix it is rebooting SQL.

Monitoring is fundamental

If you have a DBA, you (hopefully) have someone who at least knows where to look during an emergency. If you don’t, it becomes even more vital to use a monitoring tool that’s looking at the right things, so you have the best set of information to work with.

There’s a learning curve on any tool, but it’s generally a lot less steep than learning how to log a Trace or Extended Events session (probably a whole mess of Extended Events sessions) to tables, and all the pertinent system DMVs, and blah blah blah. You’re already sweating and/or crying.

Because you know what’s next.

Visualizing all that data.

Time and Money

You don’t have time to do all that. You have too many servers to do all that. You need it all in one place.

SQL Sentry, Dell, and Idera all have mature monitoring tools with lots of neat features. All of them have free trials. Just make sure you only use one at a time, and that you don’t stick the monitoring database on your production instance.

The bigger SQL gets, the more you need to keep an eye on. Monitoring just makes sense when uptime and performance are important.

Thanks for reading!

[Video] Office Hours 2016 2016/05/04

This week, Richie, Erik, Angie, and Tara discuss deadlocks, replication, SQL Server 2016 features, and more.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:


Transcript:

Does NVARCHAR(255) Cost More Than NVARCHAR(30)?

Angie Walker: All right. So Steve has an nvarchar(255) column that he knows will never have more than 30 characters. Will it take extra space to store this column and will there be a performance penalty?

Tara Kizer: It’s already using double since you’re using nvarchar and not just varchar – it’s Unicode, so it’s going to be using 60 bytes instead of 30. But is declaring it 255 going to use more space? No. I’ve seen developers want to do this where they want to have every single column standardized to the same data types and size. I don’t understand that reasoning except for laziness. It doesn’t take long to figure out what each column should be. I mean what happens if it ever…
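
A quick way to see Tara’s point about variable-length storage, as a sketch (actual on-page storage adds a couple of bytes of row overhead per variable column):

SELECT DATALENGTH(CAST('ABC' AS VARCHAR(255)))   AS varchar_bytes,   -- 3
       DATALENGTH(CAST(N'ABC' AS NVARCHAR(255))) AS nvarchar_bytes;  -- 6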

Erik Darling: My only issue with it from a development standpoint is that SQL error messages when you try to insert too much data into a column suck. It’s like “string or binary data would be truncated.” And you’re like, oh, I’ll just go figure out which column that was, because the error message doesn’t tell you. Nothing tells you. So you have to go back and guess a million times. So I understand why developers are like, “Let’s just make this a 255 because it’s a variable column anyway. It’s not going to mess anything up.” But, you know, figuring out the right length should be a priority at some point, because you may have inserted incorrect data into a column if you give it a longer length than it will ever be, or more than it could ever reasonably be. But I understand why developers want to do it, because tracking down those errors sucks.

 

Why Is Perfmon Wrong or Missing Counters?

Angie Walker: Okay, Jason says their performance monitoring tool is showing that OS CPU is around 25 percent. The instance CPU that they’re getting from a DMV is running around 80 to 90 percent. “It has been suggested our counters are messed up and to unload and reload these counters. Any experience on doing that?”

Tara Kizer: I have many, many years ago. I don’t remember what it was but back in the day there were lots of issues with performance counters. I just don’t remember what you have to do. You could just restart SQL Server or the box to possibly fix this issue. That was one of the solutions back in the day. But yeah, you’ve definitely got something messed up here because the ring buffers should show the same CPU utilization as what Performance Monitor or Task Manager is showing.

Erik Darling: Yeah, I’ve had to reset them a couple times. Actually, when you clone a machine and turn it into a VM, a lot of performance counters get screwed up. Like a lot of them just don’t even show up anymore. So I had to mess with loading and reloading them. If I remember correctly, they were pretty simple DOS commands.

Tara Kizer: Yeah, yeah.

Erik Darling: But they weren’t like… you know.

Tara Kizer: Yeah, I don’t remember what they are.

Erik Darling: I lost my notes on that, sorry.

Richie Rump: Sounds like a blog post, Erik.

Erik Darling: You know, Richie, your blogging has been pretty light lately. If you want to take that, it’s all you.

Richie Rump: Yes, yes.

Erik Darling: I leave that in your capable hands.

Richie Rump: But I’ve been working, so, there’s that.

Erik Darling: Yeah, see all those pictures from Disneyland, hard worker.

Richie Rump: Disneyworld.

Erik Darling: Whatever.

Richie Rump: Disneyland is a different place.

Angie Walker: Folks, if you want your questions answered. You have to type them.

Erik Darling: I see one here about deadlocks.

Angie Walker: I saw that one.

Richie Rump: That movie was great. I loved that movie.

Angie Walker: The difference between Jessica and I reading it, I know when you guys aren’t going to want to answer some of these.

Tara Kizer: You can go ahead and ask it.

 

Do File Growths Cause Deadlocks?

Angie Walker: All right. So for deadlocks, from Adeels Webb: “When there is a file growth we see a deadlock and the object identified is one of the indexes. Is there a way to debug this related to storage or IO?”

Tara Kizer: I would be looking definitely at your IO. I mean, how long is that growth taking? And if it’s on the data files, do you have Instant File Initialization set up? Because if it’s not set up, then it has to zero out the file and that can take a while depending on how slow your storage is. The log file can’t use Instant File Initialization, but the data files can. So take a look at the perform volume maintenance tasks privilege inside the local security policy on your box and see if the SQL service account has been granted it. If it’s not, you should add it and restart SQL.

Erik Darling: Or in your maintenance window. Not like…

Tara Kizer: Yes, not now.

Erik Darling: Not like right now. Don’t tell your boss we told you to do it right now.

Tara Kizer: I would also be looking at Performance Monitor counters. Look at the Logical Disk counters – the average disk seconds per read and per write – and if your values are over, say, 20 milliseconds, you potentially have a storage, an IO slowness issue. The values are going to be in decimal though, so .020 I believe is 20 milliseconds.

Angie Walker: Whatever she said, I don’t…

Erik Darling: Sounds good to me.
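
Another angle on the same question, as a sketch: SQL Server tracks cumulative per-file latency itself, no Perfmon required.

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_write_ms DESC;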

 

How Do You Pronounce VARCHAR?

Angie Walker: Yeah. Here’s a good one from Greg. “Is it ‘var’ ‘car’ or ‘var’ ‘char?’ What do you guys say?”

Tara Kizer: I actually say “var” “char” but since you guys had already said “var” “car” I went with it.

Richie Rump: Wow.

Erik Darling: I always say “var” “car” because the variable is character.

Tara Kizer: Yeah, exactly. Yeah, I do say “var” “char” but I see why people say “var” “car.” I think most people do say “var” “car.” I’ve worked with a lot of developers and DBAs in the past 20 years and most people say “var” “car.”

Erik Darling: In Boston, it’s [speaking with a Boston accent] var car though.

Angie Walker: I also recently heard that it should be “vare care” because variable characters. So like you were saying, Erik, but I was like that sounds funny. Like it sounds like a Care Bear character maybe.

Tara Kizer: I think the real question though is do you say Sequel or S-Q-L? That’s an important one.

Erik Darling: Sequel, I don’t have time for S-Q-L.

Tara Kizer: Microsoft is the one who created the product and they specifically said on the Wikipedia page it was pronounced Sequel and not S-Q-L. People from different languages had a problem with it because when they see the letters S, Q, L, it does not pronounce “sequel” to them. So Microsoft changed the Wikipedia page to say it can be pronounced either way. But the original product was pronounced “Sequel” Server.

Angie Walker: Interesting.

Tara Kizer: There was this big debate on the Wikipedia page, I don’t know, like ten years ago or so.

Erik Darling: Which brings us to another interesting question, is it “wi-ki” or “wee-ki?”

Angie Walker: Wiki. What about “DAY-ta” or “DA-ta?”

Tara Kizer: “DAY-ta.” “DA-ta” drives me crazy.

Richie Rump: Captain Picard called him Data. So it’s “DAY-ta.” Captain Picard is always right.

Angie Walker: I like “DAY-ta” too. It also sounds kind of funny “DA-ta” base because of the off “a” is…

 

Are There Any Alternatives to Transparent Data Encryption?

Angie Walker: All right. Adeels has another question. “What do you guys think about TDE with mirroring or replication, and are there any alternatives to TDE?”

Erik Darling: Not within SQL Server. TDE is what you get and that’s an Enterprise-only feature. If you use it alongside any other feature, it’s going to be interesting because TDE breaks a lot of stuff. TDE encrypts TempDB and it also breaks instant file initialization, because you have to write out a bunch of junk. So it’s totally fine to use the features together, just be aware of how they operate next to each other.

 

What Happens If I Run Standard Edition on 20 Cores?

Angie Walker: Sean says they have a server with 20 cores but they’re only running Standard Edition that only supports 16 cores. Is there a negative performance impact with this configuration?

Erik Darling: There is if you have to cross NUMA nodes for some things.

Angie Walker: So when would you see that scenario? Do they have to have a specific number of NUMA nodes for it to come into play, or do they already have too many NUMA nodes?

Erik Darling: So is it, how many CPUs? I get that it’s 20 cores but how many…?

Angie Walker: So we have two CPUs.

Erik Darling: Oh, wait, wait, wait. Hold on, yeah… VM 1 uses 20 cores…

Angie Walker: No, the one below that.

Erik Darling: Oh.

Angie Walker: The one from Sean. Yeah, I didn’t read the giant one.

Erik Darling: It depends, Sean. Two CPUs, so no, probably not. But it’s not ideal. It’s not something that I’d aim for. It sounds like someone bought the dual ten-core CPUs thinking that they were going to be really fast and awesome, but they probably have a really low clock speed or something, which is what dual ten-core CPUs are. So for something like Standard Edition, we usually recommend getting dual two-, four-, six-, or eight-core CPUs, whatever has the highest clock speed. Because you can go up to 16 cores and get the fastest processors to push your workload through. Dual ten-core chips usually have a much lower clock speed and kind of stink.
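
A hedged way to check whether the Standard Edition core limit is leaving schedulers offline, and how they are spread across NUMA nodes:

SELECT parent_node_id, [status], COUNT(*) AS scheduler_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255   -- regular schedulers only
GROUP BY parent_node_id, [status]
ORDER BY parent_node_id, [status];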

Angie Walker: Yeah. I’m just going to follow up, Dennis, sorry, sad face, that license question. That’s a big license question. We’re not Microsoft. We don’t charge you so I would talk to your Microsoft rep or your software vendor rep. Ask them how you’re going to get charged. Someone else, sorry. But there was a question.

 

How Should I Document SQL Server and Scripts?

Angie Walker: Oh, Richie, I think I want to hear from you on Brandon’s question. “Do you recommend any tools for documenting changes in SQL scripts? How about documenting SQL Server? Right now they use Excel and a lot of worksheet tabs to document their SQL Servers.”

Richie Rump: So I don’t have any recommendations for any tools to do this. We always had, at least in every organization I’ve been in, pretty rigorous change control. So all the scripts would not only be put into a version control system like Git, but they would also go down to test, preproduction, and then finally a production area. So we usually didn’t have that big of a problem, because all the changes were being tested as they went down the chain, the pipeline. As far as documenting SQL Server, I know there’s a couple products out there. I would just try them out and see how they work for you.

Erik Darling: Yeah, Red Gate has a tool called SQL Doc that works all right.

Richie Rump: Yeah, I’ve written kind of my own tools as I kind of saw fit. But we didn’t have any really big documentation requirements either. So it depends on your requirements and your budget and how much you’re willing to put into it.

Erik Darling: I bet someone out there who really likes PowerShell and really wants to tell you all about PowerShell has written something that would document SQL Servers.

Richie Rump: Yeah, there’s a guy here in Florida, a couple hours up the road here. He spent many years working on a PowerShell documenter, so that’s probably a good one to check out.

Erik Darling: There we go.

Richie Rump: Kendal Van Dyke. Kendal Van Dyke’s, what’s that? SQL Power Documenter or something like that?

Erik Darling: Power Doc is it?

Richie Rump: Power Doc, that may be it.

Erik Darling: Maybe.

Richie Rump: Because everything was being…

Erik Darling: There’s someone walking by your window.

Richie Rump: Dude, it’s the mailman. Oh my gosh.

Angie Walker: He’s already that way.

Richie Rump: He’s at my door. So…

Angie Walker: Wait your dogs are going to start barking.

Erik Darling: There he goes again.

Angie Walker: Okay.

Erik Darling: All right, who’s next?

 

Can I Monitor for Changing Execution Plans in 2008R2 and 2014?

Angie Walker: So, Jason. He wants to know if there’s a way to monitor when SQL decides to change plans or use a bad plan. He knows in 2016 they’re introducing Query Store but what can he do for 2008 R2 or 2014?

Tara Kizer: I’ll tell you what I implemented at the job I was at for 12 years. We had a very, very critical system. We’d have severe performance issues if a bad plan would happen for a critical stored procedure. Every single time, I’d just recompile the stored procedure and the entire system would start performing better, because the bad plan would cause really high CPU. So what I did is I used the ring buffers DMV that was mentioned in an earlier question, and I wrote a stored procedure to query that and to monitor CPU utilization, because I knew that CPU utilization would remain at say 30 percent during the day when this issue didn’t occur. But it would go above 60 percent, 80, 90 percent. So I would monitor CPU utilization and then check the number over like three minutes. If it’s at a high number across several samples, I would then look at the plan cache – what was using the most CPU in the plan cache – and then recompile that object. Then it would wait a minute and check to see if it improved. If it didn’t improve, it would then recompile the next one at the top of the CPU list. That meant it didn’t have to wake me up in the middle of the night, and I didn’t have to manually recompile stored procedures. So you do have that option: the ring buffers DMV gives you CPU utilization and you just write code to do this work for you.
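
For reference, a sketch of the ring buffer query Tara is describing – it reports SQL Server’s process CPU and the system idle percentage from the scheduler monitor records:

WITH rb AS (
    SELECT CONVERT(XML, record) AS record, [timestamp]
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
      AND record LIKE N'%<SystemHealth>%'
)
SELECT TOP (10)
       DATEADD(ms, rb.[timestamp] - osi.ms_ticks, SYSDATETIME()) AS sample_time,
       rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
       rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_idle_pct
FROM rb
CROSS JOIN sys.dm_os_sys_info AS osi
ORDER BY rb.[timestamp] DESC;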

Erik Darling: If monitoring the ring buffers is difficult or beyond your grasp, sp_BlitzCache can help a lot with that. So what you’ll see when you run sp_BlitzCache by CPU: you may see a line for your stored procedure, and then you may see a separate line for a statement from that stored procedure that has higher average CPU, or reads, or max worker time or something like that. Just sort of different averages that make you think, “Okay, this statement may have gotten the wrong plan, or something is amuck, because this stored procedure has these numbers but the statement has these numbers.” So you could see some differences there. If you’re feeling really fancy, in 2014 you can also use Extended Events to capture when plans recompile. I wouldn’t grab query plans along with it, maybe, because that’s a dodgy enterprise in Extended Events, but it certainly is an option.
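That sorted run looks like this, assuming the First Responder Kit’s sp_BlitzCache is installed:

/* Show the plan cache sorted by total CPU: */
EXEC dbo.sp_BlitzCache @SortOrder = 'cpu';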

Tara Kizer: But maybe store the results of BlitzCache into a table, and then compare the average CPU and average reads. If it’s off by a certain percentage, then you possibly know that a plan difference is enough to have caused an issue and that you have a bad plan.

Erik Darling: Yep. There are some really interesting parameters in sp_BlitzCache that I’ve never used. Like, you can set up variances for the difference between those things, and if it’s over a certain amount it will warn you about it. But usually there will just be sort of a general warning for parameter sniffing where it will tell you all about that.
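If you want to do what Tara describes and keep a history to compare against, sp_BlitzCache can write its results to a table. A minimal sketch; the database and table names here are hypothetical, and check the current First Responder Kit documentation for the exact parameter names:

/* Log each run to a table so you can compare averages over time: */
EXEC dbo.sp_BlitzCache
     @SortOrder = 'cpu',
     @OutputDatabaseName = 'DBAtools',       /* hypothetical database */
     @OutputSchemaName = 'dbo',
     @OutputTableName = 'BlitzCacheResults'; /* hypothetical table   */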

 

Does Replication Work in Amazon EC2?

Angie Walker: Good information guys. So Mike says that they’re addicted to replication, sorry, Mike. There might be pills for that now. But they’re moving to AWS EC2. “Any comments on replication in the EC2 section of the cloud?”

Tara Kizer: I don’t have any specific experience with this, but I would probably just be concerned about latency between the publisher and the subscriber. Or I should say between the publisher and the distributor, and the distributor and the subscriber, because the publisher doesn’t connect directly to the subscriber; it goes through the distributor. Make sure that it’s flowing nicely, because any kind of backlog on the distributor, subscriber, or publisher can cause you to take production down if you run out of log space where all the replication log records are being stored in the publisher’s log file.
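One quick way to spot that kind of backlog from the publisher side is to check whether replication is what’s keeping the log from clearing. A minimal sketch; the database name is hypothetical:

/* REPLICATION here means the log reader hasn't harvested those records yet: */
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyPublishedDatabase';  /* hypothetical name */

/* How full are the transaction logs across the instance? */
DBCC SQLPERF (LOGSPACE);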

Erik Darling: Yeah, one thing about EC2 instances is that you have to pay for a pretty large box before you get over the initial networking bandwidth of 125 megs a second, I think. So if you’re really pushing a lot of data across, I would pay a lot of attention to the type of box, and monitor how much network bandwidth you utilize and all the other stuff, and any latency. I would really want to keep a close eye on the latency, whether it’s network or just between all the boxes.

 

Does Brent Still Work Here?

Angie Walker: I saw that Richie. Did anybody read the permanently storing objects in TempDB blog?

Erik Darling: No, I only read my own blog posts, sorry.

Tara Kizer: Who wrote it? Was it one of ours?

Angie Walker: It went live today.

Tara Kizer: Okay, then I didn’t read it.

Erik Darling: Brent wrote it.

Angie Walker: I didn’t get to it yet. I’m backlogged on all of Erik’s while I was gone.

Erik Darling: Not “me” Brent. Not Erik. Real Brent wrote it.

Angie Walker: The real Brent wrote it.

Richie Rump: He still works here?

Angie Walker: Yeah. Well… define work.

Erik Darling: He shows up in chat once in a while.

Angie Walker: Oh no, that’s Erica.

Richie Rump: Yeah, that was Erica.

Tara Kizer: Oh, I did read this before it got published. Is there a specific question on it?

Angie Walker: Just what do we think about it.

Tara Kizer: Oh, I mean, I don’t think that TempDB should be a place where you store objects permanently. If you need to store objects permanently, setup a database for it.

Erik Darling: I think, great post, Brent.

Angie Walker: Or Brett.

Erik Darling: Great post, Brenda.

 

What’s Your Favorite Missing Feature in SQL Server 2016?

Angie Walker: All right. Well since we really have no more questions and seven, eight minutes left, we’ll follow up with Brandon’s “if there are no questions” question. “Does anybody have a favorite feature from SQL 2016 or is there anything that you wish made it into 2016 that didn’t?”

Tara Kizer: I think we can probably all agree on Query Store. We’re all looking forward to using that, but it appears to be an Enterprise Edition-only feature, which…

Erik Darling: No, they changed it.

Tara Kizer: They changed it again?

Erik Darling: They changed it.

Richie Rump: Yeah, they announced it. They changed it and they changed it back.

Tara Kizer: They changed it again? Okay.

Angie Walker: So it’s for everybody now?

Erik Darling: Real ding dongs.

Tara Kizer: The product hasn’t RTM’d yet. That’s June 1st, so they could change their mind again.

Angie Walker: That’s coming close though. Less than 30 days.

Tara Kizer: Yeah.

Erik Darling: So what I’m consistently mad at Microsoft about is their restore stuff. Microsoft spends a lot of time and money investing in, like, Oracle-competitive checkboxes. But we still have the same clunky, all-or-nothing restores. [Inaudible 00:16:54], right? It’s like, if you want to restore a table, you have to restore the entire file. There are third-party tools that can do object-level restores, you know? Like Dell LiteSpeed and probably some other backup software. Tara, does Red Gate do that object-level restore stuff?

Tara Kizer: I know it used to.

Erik Darling: Okay, so, maybe it still does. But I get continuously annoyed. Especially because Microsoft has embraced this, you know, “We’re going to support you using petabytes and petabytes of data.” But if you take a backup of that and you have to restore a table because some ding dong broke 50,000 rows in part of a table, then you still have to restore your entire database. There’s no object-level restore natively in SQL Server. There’s no way to natively read through a log file in SQL Server without memorizing those crazy fn_dblog and fn_dump_dblog commands where you have to pass in DEFAULT 64 times. There’s no good, intuitive way to figure out when something bad happened and restore to that point. Oracle offers stuff like Flashback where you can flash back a table to a point in time. You can flash back an entire database to a point in time. You can do all this stuff and get really easy, really restorable data. You just get all your stuff back really easily. I think it’s sort of obscene that Microsoft is still making you restore a 5TB database just to get one table back.
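For the curious, the log reader Erik is grumbling about looks roughly like this. It’s undocumented and unsupported, so treat it as a sketch:

/* Read the active portion of the current database's transaction log.
   Both parameters are optional start/end LSNs; NULL means no limit. */
SELECT TOP (100)
       [Current LSN], [Operation], [Context], [Transaction ID]
FROM fn_dblog(NULL, NULL);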

Richie Rump: And that’s Erik’s favorite 2016 feature that’s not there.

Angie Walker: Yeah. What about you, Richie, since there are still no questions, is there anything from a developer’s side of things that you wish there was? Or you don’t really care about?

Richie Rump: It’s not like 2012. In 2012 we got a lot of good goodies. In 2014, there was nothing for us, and then in 2016, it’s Query Store, right? A lot of people talk about the JSON stuff. I am not thrilled with it. I haven’t really played with it too much, but it’s just going to make things easier to go in and out. But I don’t like the XML data type or the XML stuff in SQL Server, so why should I like the JSON stuff in SQL Server? It just doesn’t feel right. It’s just something else that we as developers can screw up. So that’s probably one thing I’ll be keeping an eye on over the next few months: the JSON support, and how we could use some of that responsibly, and not in the way we’ve seen some XML data types go awry.
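For context, 2016’s JSON support is a set of functions and a FOR JSON clause rather than a native data type (the JSON itself lives in NVARCHAR columns). A minimal sketch:

/* Shape relational rows as JSON: */
SELECT name, database_id
FROM sys.databases
FOR JSON PATH;

/* Shred a JSON string back into rows (needs compatibility level 130): */
SELECT j.name
FROM OPENJSON(N'[{"name":"master"},{"name":"tempdb"}]')
     WITH (name NVARCHAR(128) '$.name') AS j;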

Erik Darling: Yeah, like, what developer bones got thrown in 2016? DROP IF EXISTS and the string splitter. That was it. There have been no further improvements to windowing functions, like making using RANGE instead of ROWS not be horrible. No further improvements to T-SQL to make it more ANSI compliant or to add in more of the ANSI standard stuff. So it’s pretty underwhelming to me, at least from a development standpoint.
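Those two 2016 developer bones look like this:

/* DROP ... IF EXISTS - no more OBJECT_ID checks before dropping: */
DROP TABLE IF EXISTS dbo.myTempTable;

/* STRING_SPLIT - a built-in string splitter, returns a column named value: */
SELECT value
FROM STRING_SPLIT(N'beep,boop,bop', N',');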

Richie Rump: Yeah, and if we as a community aren’t screaming about it, then it’s going to be low on the totem pole. So I think there was a fair number of people screaming about the JSON stuff because practically every other database has JSON support.

Erik Darling: Practically every other database has a way to concatenate comma-delimited strings without using FOR XML PATH in some convoluted voodoo language, too.
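The convoluted voodoo in question, for anyone lucky enough not to have memorized it:

/* Comma-delimited list of database names, pre-STRING_AGG style: */
SELECT STUFF(
       (SELECT N', ' + name
        FROM sys.databases
        ORDER BY name
        FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'),
       1, 2, N'');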

Richie Rump: You can always just use .NET for that, dude, come on.

Erik Darling: Yeah, I have .NET, Richie. Me.

Richie Rump: That’s why I’m here, right? That’s why I’m here. No other reason just not to write .NET.

Erik Darling: Just to make my string concatenating life easier.

Richie Rump: That’s right.

 

How Do You Verify Your Backups?

Angie Walker: We finally got a new question. It’s from Adeels again. He says he understands that restoring and verifying backups is the way to go. So good for knowing it. But he says it’s not always physically possible. Is doing RESTORE VERIFYONLY good enough or do you have another recommendation?

Erik Darling: Good enough for what?

Angie Walker: I think he’s trying to say if he can’t test his backups by restoring them somewhere else, is it okay to just do RESTORE VERIFYONLY and say that your backup is good and not corrupted or something?

Erik Darling: I mean, all that does is test the header and make sure that it’s a usable backup file. It doesn’t actually test the contents of it for anything. So it’s fairly reasonable to assume that you can restore that backup, in the sense that the header and the format of the backup file are correct, but the data within it could still be bonkers.

Angie Walker: So still run your DBCC CheckDB, right?

Erik Darling: Run your DBCC CHECKDB, turn your page verification on, make sure your backup checksums are on. Lots of stuff to do there. Make sure that you’re getting alerts for your 823, 824, and 825 errors. Other things.
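A minimal sketch of the backup checksum piece; the database name and paths are hypothetical:

/* Write page checksums into the backup, and fail if an existing page checksum is already bad: */
BACKUP DATABASE MyDatabase
    TO DISK = N'D:\Backups\MyDatabase.bak'
    WITH CHECKSUM;

/* Re-check those checksums without doing a full restore: */
RESTORE VERIFYONLY
    FROM DISK = N'D:\Backups\MyDatabase.bak'
    WITH CHECKSUM;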

Angie Walker: We have that on the web.

Erik Darling: We do.

Angie Walker: On the blog.

Erik Darling: If you go to BrentOzar.com/go/alerts, we have that all set up for you.

Angie Walker: Yeah, some good stuff out there.

Erik Darling: At least the alerts end.

Angie Walker: We’ll tell you how to set them up.

Erik Darling: Yeah, basically.

 

What Tool Should I Use to Read Execution Plans?

Angie Walker: Sean wants to know if there’s a better program that’s free to analyze execution plans.

Erik Darling: Where have you been? SQL Sentry Plan Explorer.

Tara Kizer: That’s what we use. That’s what we use here. It’s what we used at previous jobs too.

Erik Darling: What are you using, Toad? I don’t know.

Richie Rump: That’s still a thing?

Erik Darling: I guess, yeah. I mean there’s still a Toad World website. I don’t know. Maybe someone with MySQL uses it.

Richie Rump: Maybe some of those Oracle guys still use it because that’s when I used it.

Erik Darling: Everyone uses SQL Developer with Oracle. The fancy pants one.

Richie Rump: Not in the 90s, man.

Erik Darling: The 90s are over, Richie. Sorry.

Richie Rump: No.

Erik Darling: Sorry.

Richie Rump: Next thing you know, Nirvana broke up, right?

Erik Darling: No, they’re still together. Don’t look at MTV.

Richie Rump: Whoa, man.

Angie Walker: Just go back and watch I Love the 90s on VH1 on demand or something.

Erik Darling: Kurt Loder will be there. All your friends will be there.

Richie Rump: Daisy Fuentes.

Angie Walker: On that note, folks…

Tara Kizer: I don’t think Angie is old enough for these references.

Angie Walker: Hey, I used to watch I Love the 90s. On that note, we’re going to have to end this episode of Office Hours. Thanks for watching, listening, or reading on the blog. See you all next week.

Erik Darling: Bye.

RAM and Sympathy

With the release date for 2016 finally announced

Everyone can start gearing up to gaze upon its far shores from the 2008R2 instance they can’t or won’t upgrade for various reasons. I’m excited for a lot of the improvements and enhancements coming along, and generally hope I’m wrong about customer adoption.

One annoyance with the new release is the increase in CPU capacity for Standard Edition, with no increase in RAM capacity. You can now have up to 24 cores on your Standard Edition box. Yep, another $16k in licensing! And they’ll all be reading data from disk. Don’t kid yourself about Buffer Pool Extensions saving the day; nothing is going to beat having your data cached in memory. How many people on Standard Edition have CPU bound workloads?

Alright, now set MAXDOP and Cost Threshold to the right values. Anyone left?

Alright, check your missing index requests. Anyone left?
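For reference, the MAXDOP and Cost Threshold settings above are instance-level options. A minimal sketch, with placeholder values you’d tune for your own workload:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max degree of parallelism', 8;        /* placeholder value */
EXEC sys.sp_configure 'cost threshold for parallelism', 50;  /* placeholder value */
RECONFIGURE;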

But Enterprise needs to be different

It’s already different. It already has a ton of features, including a plethora that smaller shops can’t or won’t ever touch. Full blown AGs, Hekaton, Page/Row Compression, ColumnStore, Online Index Create/Rebuild, Encryption, really, the list goes on and on. And c’mon, the HA/DR parts are what define Enterprise software to me.

24 cores and nothing on.

Having a fast ship is way different from having a ship that’s hard to sink.

So what’s the solution?

Microsoft needs to make money. I get it. There’s no such thing as a free etc. But do they really need to make Enterprise licensing money off of people who will never use a single Enterprise feature? Should a small shop with a lot of data really have to make a $5000 jump per core just to cache another 128-256GB of data? That seems unreasonable to me. RAM is cheap. Licensing is not.

I wouldn’t suggest à la carte pricing, because licensing is already complicated enough. What could make sense is offering higher memory limits to shops with Software Assurance. Say up to 512GB on Standard Edition. That way, Microsoft can still manage to keep the lights on, and smaller shops that don’t need all the pizzazz and razzmatazz of Enterprise Edition can still hope to cache a reasonable amount of their data.

If Microsoft doesn’t start keeping up with customer reality, customers may start seeking cheaper and less restrictive solutions.

Thanks for reading!

Brent says: Adding 8 more cores to Standard Edition answers a question no one was asking. It’s almost like raising the number of available indexes per table to 2,000 – hardly anybody’s going to actually do that, and the ones who do are usually ill-advised. (Don’t get me wrong – there’s some good stuff in 2016 Standard – but this ain’t one of ’em.)

Creating Tables and Stored Procedures in TempDB – Permanently

No, not #tables – actual tables. Here’s how:

USE tempdb;
GO
/* This one is only available during my session: */
CREATE TABLE #myTempTable (ID INT IDENTITY(1,1), Stuffing VARCHAR(100));
GO


/* This one is global, meaning it's available to other sessions: */
CREATE TABLE ##myTempTable (ID INT IDENTITY(1,1), Stuffing VARCHAR(100));
GO
/* You can create both of those at the same time. They're different. */


/* This one is just like a user table, but in TempDB: */
CREATE TABLE dbo.myTempTable (ID INT IDENTITY(1,1), Stuffing VARCHAR(100));
GO

The first one disappears when my session is over, but the latter two persist until the SQL Server is restarted.

Why would you ever do the latter two? Say you need to share data between sessions or between different applications, or you need staging tables for a data warehouse, or you want faster tables that live on local SSDs in a cluster (as opposed to slower shared storage), or you wanna build a really crappy caching tier.

If you use global temp tables or user-space tables, though, you have to check for duplicates before creating your tables. Local temp tables are just all yours, and you can have a thousand users with the exact same-name local temp tables.
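A minimal sketch of that duplicate check, reusing the table definition from above:

/* Global temp tables and TempDB user tables are shared across sessions,
   so check whether someone else already created them.
   (Assumes you're still USEing tempdb, as in the examples above.) */
IF OBJECT_ID('tempdb..##myTempTable') IS NULL
    CREATE TABLE ##myTempTable (ID INT IDENTITY(1,1), Stuffing VARCHAR(100));

IF OBJECT_ID('tempdb.dbo.myTempTable') IS NULL
    CREATE TABLE dbo.myTempTable (ID INT IDENTITY(1,1), Stuffing VARCHAR(100));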

Next up, the ever-so-slightly different magic of temporary stored procedures:

USE tempdb;
GO

/* This one is only available during my session: */
CREATE PROC #usp_myTempWorker AS
  SELECT * FROM sys.databases;
GO

/* This one is global, meaning it's available to other sessions,
    but ONLY as long as my session is available: */
CREATE PROC ##usp_myTempWorker AS
  SELECT * FROM sys.databases;
GO

/* This one is just like a user stored proc, but in TempDB: */
CREATE PROC dbo.usp_myTempWorker AS
  SELECT * FROM sys.databases;
GO

Here, the first TWO disappear when my session is over, and only the latter one sticks around. Diabolical. So the ## temp stored proc doesn’t really help me here because I can never tell when the creator’s session is going to finish. (Not God. His session keeps right on going.)

So why would you ever create stored procedures – temporary or user – in TempDB? You might not have permissions in the user databases, or might not be technically allowed to change things, or maybe you’ve got monitoring queries that you want to hide, or you want to create procs temporarily to check parameter sniffing issues.

All of the above will disappear when the SQL Server is restarted – or will they? Not if you create them permanently in the model database, which is the source of TempDB’s creation when SQL Server restarts:

USE model;
GO
CREATE PROC dbo.usp_myTempWorker AS
  SELECT * FROM sys.databases;
GO
CREATE TABLE dbo.myTempTable (ID INT IDENTITY(1,1), Stuffing VARCHAR(100));
GO

/* Now restart your SQL Server, and check in TempDB */
USE tempdb;
GO
EXEC dbo.usp_myTempWorker;
GO
SELECT * FROM dbo.myTempTable;
GO

Why would you ever wanna do this? Well, say you need to make sure that, uh, in case … look, I’m just an idea man. Somebody, somewhere, is looking for a really bad idea. That’s what I’m here for.

Where Clauses and Empty Tables

Sometimes SQL is the presentation layer

And when it is, you end up doing a lot of concatenating. This isn’t about performance, or trying to talk you out of SQL as the presentation layer, this is just something you should keep in mind. SQL is a confusing language when you’re just starting out. Heck, sometimes it’s even confusing when you’ve been doing it for a long time.

Let’s say you have a website that stores files, and when a user logs in you use a temp table to track session actions as a sort of audit trail, which you dump out into a larger table when they log out. Your audit only cares about folders they have files stored in, not empty ones.

Here’s a couple tables to get us going.

IF OBJECT_ID('tempdb..#aggy') IS NOT NULL
DROP TABLE #aggy;

/* 100 fake file rows, one per day going back 100 days: */
WITH x1 AS (
SELECT TOP (100)
ROW_NUMBER() OVER (ORDER BY(SELECT NULL)) AS ID
FROM sys.[messages] AS [m], sys.[messages] AS [m2])
SELECT ID, 
    DATEADD(DAY, [x1].[ID] * -1, CAST(GETDATE() AS DATE ) ) [CreateDate],
    'C:\temp\' + CAST(HASHBYTES('MD5', NCHAR([x1].[ID])) AS VARCHAR(32)) + '.gif' [Path]
INTO #aggy
FROM [x1];

IF OBJECT_ID('tempdb..#usersessioninfo') IS NOT NULL
DROP TABLE #usersessioninfo;

/* The session audit trail table: */
CREATE TABLE #usersessioninfo 
(LastActionID INT IDENTITY(1,1), UserID INT, UserMessage VARCHAR(100), MessageDetails VARCHAR(100));

And then we’ll stick some data into our session table like this.

INSERT [#usersessioninfo]
( [UserID] , [UserMessage] , [MessageDetails] )
SELECT 
@@SPID AS [UserID],
'Welcome to your folder!' AS [UserMessage],
'You have stored #' +
CAST(COUNT(*) AS VARCHAR(100)) +
' files in the last 30 days, starting on ' + 
CAST(MIN([a].[CreateDate]) AS VARCHAR(20)) + 
' ending on ' +
CAST(MAX([a].[CreateDate]) AS VARCHAR(20)) +
'.' AS [MessageDetails]
FROM [#aggy] AS [a]
WHERE [a].[CreateDate] >= GETDATE() -30

Everything looks great!

Select max blah blah blah

But if your table is empty…

You may find yourself with a bunch of junk you don’t care about! Empty folders. Contrived examples. Logic problems. Stay in school.

TRUNCATE TABLE [#aggy]

INSERT [#usersessioninfo]
( [UserID] , [UserMessage] , [MessageDetails] )
SELECT 
@@SPID AS [UserID],
'Welcome to your folder!' AS [UserMessage],
'You have stored #' +
CAST(COUNT(*) AS VARCHAR(100)) +
' files in the last 30 days, starting on ' + 
CAST(MIN([a].[CreateDate]) AS VARCHAR(20)) + 
' ending on ' +
CAST(MAX([a].[CreateDate]) AS VARCHAR(20)) +
'.' AS [MessageDetails]
FROM [#aggy] AS [a]
WHERE [a].[CreateDate] >= GETDATE() -30

What do you think is going to happen? We truncated the table, so there’s nothing in there. Our WHERE clause should just skip everything because there are no dates to qualify.

NULLs be here!

Darn. Dang. Gosh be hecked. These are words I really say when writing SQL.

That obviously didn’t work! You’re gonna need to do something a little different.
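The reason: an aggregate query with no GROUP BY always returns exactly one row, so against the empty table COUNT(*) comes back as 0, MIN and MAX come back as NULL, and concatenating NULL into the message turns the whole string NULL – but the row still gets inserted. You can see it with the aggregates alone:

SELECT COUNT(*)              AS [Files],
       MIN([a].[CreateDate]) AS [FirstDate],
       MAX([a].[CreateDate]) AS [LastDate]
FROM [#aggy] AS [a]
WHERE [a].[CreateDate] >= GETDATE() - 30;
/* Returns a single row: 0, NULL, NULL */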

Having having bo baving banana fana fo faving

One of the first things I was ever really proud of was using the HAVING clause to show my boss duplicate records. This was quickly diminished by him asking me to then remove duplicates based on complicated logic.

HAVING is also pretty cool because it’s processed after the WHERE clause and after aggregation, so rows that make it past the WHERE can still be filtered out later on down the line. For our purposes, it will keep anything from being inserted, because our COUNT is a big fat 0. Zero. Zer-roh.

INSERT [#usersessioninfo]
( [UserID] , [UserMessage] , [MessageDetails] )
SELECT 
@@SPID AS [UserID],
'Welcome to your folder!',
'You have # ' +
CAST(COUNT(*) AS VARCHAR(100)) +
' files, starting on ' + 
CAST(MIN([a].[CreateDate]) AS VARCHAR(20)) + 
' ending on ' +
CAST(MAX([a].[CreateDate]) AS VARCHAR(20)) +
' in the last 30 days.'
FROM [#aggy] AS [a]
WHERE [a].[CreateDate] >= GETDATE() -30
HAVING COUNT(*) > 0

This inserts 0 rows, which is what we wanted. No longer auditing empty folders! Hooray! Everybody dance drink now!

Mom will be so proud

Not only did you stay out of jail, but you wrote some SQL that worked correctly.

Thanks for reading!
