This week, Brent, Richie, Doug, and Tara discuss growing databases, the most useful SQL certifications, replication issues, group discounts, backup software, and more.
Office Hours Webcast – 2016-06-22
Brent Ozar: All right, we might as well get started here. Let’s see, we’ve got all kinds of questions pouring in today. We’ve got 75 folks in here, let’s go see what they’re asking questions about. Going through… oh, Scott asks an interesting question. Scott says, “How would you handle a new manager that insists his databases should not grow?”
Doug Lane: Insist his business should not grow.
Brent Ozar: I like it.
Richie Rump: Monster.com?
Brent Ozar: Yeah, you’re never allowed to sell anything. You’re never allowed to add new data. What I would do is make sure that he understands that update statements don’t make the database grow. So if he updates your salary, it doesn’t take additional space in the database, it just changes an existing number.
Tara Kizer: I wonder if there’s more to the question though. Is the manager SQL Server savvy and is saying preallocate it big enough that it never has to auto grow? He is probably not a good manager though, doesn’t understand SQL Server technology or databases in general.
Doug Lane: We’re quick to judge in these parts.
Brent Ozar: Scott says, “No, he is not technical.”
Tara Kizer: And he’s saying that SQL databases should not grow. That is just odd.
Richie Rump: If he’s not technical, why is he giving technical advice? That doesn’t make any sense.
Brent Ozar: It’s probably a budgeting thing. He’s like, “Look, I’ve got to keep my budgets the same number. It’s very important.” Scott says, “He is very used to monitoring the database.” What you do is—I had a manager—I shouldn’t say—okay, well, I’ve started down the road so I just might as well. So I had a manager once, not in my department but in another department, who was asking for anal things like that. Like, “Oh, I want to make sure CPU never goes above five percent.” So what we did was we hooked up his monitoring tool to point at a server no one used and we just called it SQL whatever and told him that’s where his stuff lived. He was totally happy. He was completely happy. Thought we were amazing. Yep, that’s my job.
Brent Ozar: All right, so Gordon asks a related question. Gordon says, “I’m not really sure what to set autogrowth to on my very large databases.” Usually when people say VLDB they mean like a terabyte or above. He says, “Yes, I should be growing manually during a suitable maintenance window, but I also have an autogrowth value set as well, just in case. Is one gig a good number, or what should I do for a one terabyte database?”
Tara Kizer: I used one gigabyte on the larger databases, sometimes even maybe a little bit bigger. As long as you have instant file initialization for the data files, they don’t have to be zeroed out, so the growth isn’t a slow growth. Log files may be a little bit smaller. I did a lot of times [inaudible 00:02:25]. Sometimes I did one gigabyte on larger databases where I knew the patterns and it was going to use a larger file for certain things. But I tried to preallocate those.
Doug Lane: And IFI is something that comes on by default if you want it to in SQL 2016.
Tara Kizer: Oh really? I didn’t know that.
Brent Ozar: There’s a checkbox in the install.
Doug Lane: There’s a little checkbox that says, “I want IFI to work with this installation.”
Brent Ozar: Microsoft calls these things “delighters.” They’re trying to add delighters into the product. I’m like, “I am delighted! That’s actually wonderful.”
Richie Rump: It’s just faster.
Brent Ozar: It is faster. It’s just faster. And they’re right. I like them.
Doug Lane: It works.
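For reference, Tara’s advice here—fixed-size growth increments rather than percentages—can be sketched in T-SQL. The database and file names below are made up for illustration:

```sql
-- Set fixed-size autogrowth increments (names are hypothetical):
ALTER DATABASE BigDB
    MODIFY FILE (NAME = BigDB_Data, FILEGROWTH = 1GB);   -- data file: 1 GB growths

ALTER DATABASE BigDB
    MODIFY FILE (NAME = BigDB_Log, FILEGROWTH = 512MB);  -- log file: smaller increment

-- Check the current growth settings (growth is stored in 8 KB pages
-- when is_percent_growth = 0):
SELECT name, type_desc,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' percent'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM BigDB.sys.database_files;
```

Remember that instant file initialization only skips zeroing for data file growths—log file growths always get zeroed, which is one reason to keep the log increment smaller.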
Brent Ozar: I have an interesting question from Wes. Wes asks, “What are the most useful SQL Server certifications?” So we’ll go through and ask these folks for their opinion. Richie, we’ll get started with you because you’re on the left on my monitor. What do you think the most useful SQL Server certifications are?
Richie Rump: The one you have. That’s it. That’s the only useful one.
Brent Ozar: The A+?
Richie Rump: Yeah. Certified Scrum Master. No, the MCM, right? I mean that’s by far the most useful one you have. I mean as soon as you get it, you’re recognized as an expert anywhere.
Brent Ozar: You say that but nobody still believes that I actually passed that test, for rightful reasons. I wrote a check, it was a really large check. Then I brought another bag of $20s and I gave that to the instructor and off we went. Tara, how about you?
Tara Kizer: I’m against SQL Server certifications. A while ago they had all these practice tests online and I am a terrible test taker. I felt like at the time I was really good at what I did, you know, SQL Server, DBA for a long time, and I could not pass the test. So I feel like it’s for people that don’t have experience that are just trying to get their foot in the door. I already had experience, I don’t know that certifications are required at any job when you have as many years of experience as I do but I could not pass the test. I also wasn’t willing to study for these tests. Some of the stuff is just useless information I didn’t need to know. So why add that stuff to my brain?
Brent Ozar: Doug, how about you?
Doug Lane: It depends on how you define useful because is it useful in the sense that it will get you a job or is it useful in the sense that it will make you better at your job? Certifications will tell you what you don’t know as you test for them but apart from their value as actually holding the certification, there’s very little value to it. It’s the kind of thing where you decide if you want it on your resume or not. In most cases, it won’t matter. Again, apart from exposing blind spots in what Microsoft thinks you should know about SQL Server, it’s really not going to help you that much.
Brent Ozar: It does teach you a lot—go ahead.
Richie Rump: As a former hiring manager of both developers and data folks, I never looked at certifications at all. It didn’t help you; it didn’t hurt you. It just never came into play because it’s just a test. It’s not exactly how you work, like Tara said, it’s just a test.
Tara Kizer: I had one job where, if you did do the certifications, it was something to put on your review—something that you had worked towards. So it was a review cycle thing, a possible extra bonus or promotion, but it was just a bullet point on the review. You had all the other stuff on your review as well.
Brent Ozar: For the record, we don’t bonus our employees that way. If you want to take a test, that’s cool. We’ll pay for it. We also pay whether you pass or fail—it doesn’t matter—because I know from taking them too. I walk in, I look at the test, I’m like, “They want you to know what? XML, what?”
Tara Kizer: And there will be more than one right answer. They want the most perfect answer and it’s like, well, there’s three of them here that could be the right answer.
Brent Ozar: Yeah.
Richie Rump: PMP was crazy like that. I mean it was, “Oh look, they’re all right. But what’s righter-er-er?”
Brent Ozar: PMP, Project Management Professional?
Richie Rump: Professional, yep.
Brent Ozar: There we go.
Brent Ozar: Nate Johnson says, “It may be a waste of 15 minutes of company time but I do enjoy these pregame antics.” For those of you who just listen to the podcast, you miss out on when you come in and join live we just shoot the poop—there goes a bunch of old jokes but I’m just going to keep right on, I’m not going down there.
Brent Ozar: Tishal asks, “Is it possible to see the size of the plan cache using T-SQL?” The answer is yes, and none of us knows it from memory. There is a DMV for it—if you Google for it, you’ll find it. In the show notes, we’ll go find that and track it down for you.
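The DMV Brent is thinking of is likely one of these—a quick sketch, not a script mentioned on the show:

```sql
-- Total up the plan cache from the cached-plans DMV:
SELECT COUNT(*) AS cached_plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS cache_size_mb
FROM sys.dm_exec_cached_plans;

-- Or break it down by cache store using the memory clerks DMV
-- (ad hoc SQL, stored procedures, and bound trees, respectively):
SELECT type, SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
WHERE type IN ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP', 'CACHESTORE_PHDR')
GROUP BY type;
```

Note that `pages_kb` applies to SQL Server 2012 and later; older versions split it into single-page and multi-page columns.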
Brent Ozar: David asks, “Replication question.” And then he never types it in. Oh, no, he does later. “We use replication extensively. In this scenario…” What do you mean, your scenario? Is this like a game show? Are you trying to test us? “We have a bunch of reference tables that hardly ever change replicated out to dozens of locations. Should we use transactional replication or snapshot replication, or would an ETL process that just refreshes them once a day be fine?”
Doug Lane: What are you the most comfortable managing?
Brent Ozar: Oh, look at that Doug. Go on, elaborate.
Richie Rump: Welcome back.
Doug Lane: If you feel really good about setting up some sort of SSIS package to do this, then by all means do and get away from replication. But this is the kind of thing where it really comes down to a comfort level. Replication will never be your best friend. It’s just too finicky and complicated and aggravating to work with. But it can get the job done.
Brent Ozar: When you say finicky and complicated and aggravating to work with, that describes most of my best friends so I’m not sure what you mean by… yeah, Richie is pointing at himself.
Tara Kizer: I had a scenario like this for reference tables. We actually did not replicate them. The only time that these tables changed was during deployment. So if we needed them on the subscriber database, we just deployed to both the publisher and the subscriber for those tables. That way we didn’t have to add them to the publication. There’s not really any overhead as far as transactional or snapshot except when it has changes coming through. But why have them in there if they hardly ever change and it’s part of a deployment process?
Brent Ozar: James asks, “What’s the best practice for setting minimum server memory? There’s a lot of guides out there on how you set max server memory but what should I set min server memory to?”
Tara Kizer: We took the max server memory, best practice, four gigabytes or ten percent whichever is greater, then we divided it by two. That was our min server memory. That was our standard across all servers.
Brent Ozar: I like that. I think in our setup guide it doesn’t even give advice on it because we never—if that’s your biggest problem, you’re in really good shape. I love that you’re asking this question because that’s a really good, detail-oriented question.
Tara Kizer: We had the standard because we were mostly a clustered environment. We had, I don’t even know how many clusters, maybe 100 clusters or so, a lot of them were active active, not a lot of them, but some of them were active active and you want to make sure that when the failover occurs and you’re running on one node that the SQL instances—the one that has to failover can get memory. We would set max down also in the active active environment.
Doug Lane: It also kind of depends on how much you’re piling on that server, because if it’s your Swiss Army knife server, you’re probably going to have trouble if you’re trying to run Exchange and other stuff on it, but you know [inaudible 00:09:37]. If you’ve got all the BI stack running on it too, then you want to make sure that under no circumstances can other stuff steal away from SQL Server to the point where your database engine is actually beginning to starve a little bit. So keep in mind whatever else is on that box. If you really just have a dedicated SQL Server database engine box, then yeah, it’s not going to be as big of a deal, because it will take whatever it needs and there really won’t be competition for that memory in terms of it getting stolen away.
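Tara’s rule of thumb—min server memory at half of max server memory—looks like this in T-SQL. The numbers below are hypothetical; plug in your own max value:

```sql
-- Example: max server memory already set to 26 GB, so min = 13 GB.
-- Both options are advanced, hence the first sp_configure call.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 26624;  -- 26 GB
EXEC sp_configure 'min server memory (MB)', 13312;  -- half of max
RECONFIGURE;
```

Min server memory is a floor SQL Server won’t release below once reached, which is why it matters most on shared boxes and active/active clusters like the ones Tara describes.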
Brent Ozar: Mandy asks, “We’ve got SQL Server 2014 and our tempdb files are on local solid-state drives. Recently we’ve started to see higher and higher IO waits on those tempdb files, upwards of 800 milliseconds. I’m new to solid-state—is this normal or is this an indication of a problem?” That’s a good question. My guess is, depending on how old the solid-state drives are, their write speed actually degrades over time. It can get worse over time. The other thing that’s tricky is, depending on how many files you have, if you spider out tempdb to say one file per core and you’ve got 24 cores, solid-state may not be able to handle that workload as well. So generally speaking, we aim for either four or eight tempdb files when we first configure a server. This is one of those instances where more can actually harm you rather than having fewer, but I would just check to see. You can run CrystalDiskMark against those solid-state drives and see if write speed has degraded since they were new. It’s certainly not normal though.
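One way to measure those tempdb stalls from inside SQL Server, before reaching for CrystalDiskMark—a generic sketch, not a script mentioned on the show:

```sql
-- Average write stall per tempdb file, cumulative since the last restart:
SELECT mf.name AS file_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes
       END AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;
```

Because the numbers are cumulative, a burst of slow IO after a long uptime can hide in the average—sample twice a few minutes apart and diff the columns to see what’s happening right now.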
Brent Ozar: Wes asks, “Are any of you speaking at the PASS Summit?” Well, all of us will be speaking, we’re all going to be standing around the hallway talking to all of our friends. Are we going to be presenting? That we don’t know yet. That announcement comes out today. So we’ll find out later today. I keep looking over at Twitter to see whether or not it’s come out and it hasn’t come out. So as soon as it comes out, we’ll say something.
Brent Ozar: Wes says—and I have no idea what this is in reference to—“Use Walmart as a precedent.”
Richie Rump: Enough said. I don’t think we need to say anything more about that.
Doug Lane: For the “adios pantalones” shirt.
Brent Ozar: That’s probably true.
Brent Ozar: Next up, Tim says, “I’m fighting for only using stored procs. I don’t want to use inline SQL even for simple queries. My developers are fighting against this and they want to use things like Entity Framework. Am I wrong for pushing hard for only using stored procs?”
Tara Kizer: I have a lot of experience on this topic. I was very, very pro stored procedures for the longest, longest time. Slowly, as developers changed, they wanted to use prepared statements, parameterized queries from the applications, and we didn’t want to stop them from the rapid development that they were doing, so we did allow that. Once we realized that the performance was the same between stored procedures and prepared statements, parameterized queries, it became okay from a performance standpoint. However, from a security standpoint, you’re having to give access to the tables rather than just to the stored procedures. So that was just something that we had to think about. But as far as Entity Framework goes, I know Richie is very pro Entity Framework. Entity Framework, and what’s the other one? NHibernate. There are some bad things that they do that can really, really harm performance. So it’s something that you have to watch out for. They use nvarchar as their datatype for string variables, and if your database is using varchar, you’re going to have a table scan on those when you do a comparison in the where clause, and you’ll be able to tell in the execution plan. It will say, “An implicit conversion occurred.” You’ll see that it said nvarchar and you’ll be like, “Whoa, why?” Your table is using varchar. It’s because the application specifies nvarchar. It’s something that you can override, but if you’re not overriding it, this is what they’re going to do.
Richie Rump: So this just in, that is not a bug. That is a problem with the developer’s code. They didn’t specify that the column was a varchar, so because .NET uses Unicode as its string type, it automatically assumes everything is nvarchar. So there’s a way that we could go in and say, “Hey, this column is varchar.” If you don’t do that, that will cause the implicit conversions. That’s only if you’re using code first. If you’re using the designer, the designer does the right thing and doesn’t put the N in front of it, so it doesn’t send it as nvarchar and you don’t get that implicit conversion. So that’s only for code first, and if the developers aren’t really doing the right things when they’re doing their mappings in code. And just because I have Julie Lerman’s phone number doesn’t mean that I’m pro Entity Framework.
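A minimal T-SQL repro of the problem Tara and Richie are describing—the table and data here are made up:

```sql
-- A varchar column with an index on it:
CREATE TABLE dbo.Customers (LastName varchar(50) NOT NULL);
CREATE INDEX IX_LastName ON dbo.Customers (LastName);

-- EF code-first sends string parameters as nvarchar by default:
DECLARE @p nvarchar(50) = N'Ozar';
SELECT LastName FROM dbo.Customers WHERE LastName = @p;
-- The plan shows CONVERT_IMPLICIT on the varchar column, which can turn
-- an index seek into a scan (exact behavior depends on the collation).

-- Matching the parameter type to the column avoids the conversion:
DECLARE @p2 varchar(50) = 'Ozar';
SELECT LastName FROM dbo.Customers WHERE LastName = @p2;
```

In EF code-first, the fix Richie mentions is mapping the property as non-Unicode (for example, `.IsUnicode(false)` in the fluent API) so the generated parameter is varchar.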
Tara Kizer: You’re pro because you speak about it. You present on the topic.
Richie Rump: Oh, okay. So if you go to the pre-con, you’ll hear me talk more about it, but it’s more of a balanced view—we’re not going to be able to tell developers not to use it. Microsoft is saying to use it. So if you’re saying that, then you’re saying, “Don’t do what Microsoft says,” and that’s a much bigger uphill battle than you probably want to face as a DBA. So the general rule of thumb is: for most things, it’s okay. But for complex things—if it’s going to be complex in the SQL, it’s going to be complex in the LINQ, and now there are two hops it’s got to go through to figure out what the actual plan is. One, it’s got to change that LINQ into a SQL statement, and then it’s got to change that SQL statement into a plan. That’s probably going to be 50 pages long, which nobody ever wants. So at that point, cut your losses, do a stored procedure, and everybody is okay. But there’s a big difference between when we have to write SQL as developers, when we’re typically not very good at it, as opposed to, “Oh here, let me just do context.tablename.get” and then it just does it for us. So there’s a speed issue here for development, and there are usually a lot more developers than there are of you. So unless you want to stay up all night writing SQL statements…
Brent Ozar: Is that a threat?
Richie Rump: Yeah. You guys get paid more than us so I don’t understand what that is either.
Brent Ozar: John says, “I just saw your announcement about pre-con training in Seattle. Do you guys offer group discounts?” We do actually. If you’re going to get five or more seats, shoot us an email at email@example.com or just go to brentozar.com and click on contact up at the top. Then you can contact us, tell us which class you want to go to and we can for five or more seats we give group discounts there.
Brent Ozar: Gordon asks, “If I’ve got an Azure VM that replicates data, I want to send it down to an on-premises database. It’s a VM, it’s not using Azure SQL database, what are my HA and DR options?”
Tara Kizer: That’s unusual to go in that direction. I don’t have an answer but I’ve never heard of anyone doing that.
Brent Ozar: You should be able to use replication, log shipping, AlwaysOn Availability Groups—anything that you can use on-premises you can use up in Azure VMs. I’ve got to be really careful when I say that. The hard part of course is getting the VPN connection between Azure and your on-premises stuff. That’s where things get to be a bit of a pain in the rear.
Brent Ozar: Jennifer asks, “Is the MCM still available?” No. They broke the mold after I got mine, thank goodness.
Tara Kizer: Yeah, 2008 R2 was the last version right? I mean it was the only version really. It’s been a while.
Brent Ozar: Whoo hoo.
Brent Ozar: Kyle Johnson asks—I try not to use last names, but Kyle, that’s a generic enough last name and your question is so cool, it doesn’t matter. It’s totally okay. You shouldn’t be ashamed of this question. It’s a good question. I’m not just saying that to get you a gold star on the fridge. He says, “I was on a webinar yesterday where you covered sp_BlitzIndex. Are you aware of any scripts or [inaudible 00:17:06] columnstore indexes? Or is there anything I would look at in order to learn whether or not columnstore is a good fit for me?” Everyone is silent. So there’s your answer. The closest I would go to is Nikoport, N-I-K-O-P-O-R-T.com—Niko Neugebauer, he’s from Portugal, he’s got a really hard-to-pronounce last name—he has lots of information about columnstore indexes. He’s like Mr. Columnstore.
Brent Ozar: Tim says, “I’ve just inherited a data warehouse project.” Well, you have really crappy relatives, Tim. “With five weekly updated data marts. The largest table is close to 300 million rows and it’s approaching a terabyte. My loads are taking longer than usual. What’s the best way to diagnose performance tuning on stuff like this?” So, a data warehouse that’s got a bunch of data marts, tables approaching a terabyte, and my loads are taking longer. Where should I look for performance tuning?
Tara Kizer: What is it waiting on?
Brent Ozar: What is it waiting on? And how do you find that out?
Tara Kizer: You run a query, a saved script. I don’t have the DMVs memorized. I mean, you could run sp_WhoIsActive I guess. I assume that would work in this environment and while it’s running, see what it’s waiting on. You know, is it a disk issue? Something else?
Brent Ozar: My favorite of course, because I’m me, is sp_BlitzFirst. If you run it with the SinceStartup equals one parameter, it will tell you what your wait stats have been since startup.
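That call looks like this, along with the underlying DMV it reads—the wait-type exclusion list below is just an illustrative sample, not the script’s actual filter:

```sql
-- sp_BlitzFirst's since-startup mode, as Brent describes it:
EXEC sp_BlitzFirst @SinceStartup = 1;

-- Or query the wait stats DMV directly (cumulative since startup):
SELECT TOP 10
       wait_type,
       wait_time_ms / 1000 AS wait_time_sec,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'BROKER_TASK_STOP')  -- sample of benign waits to skip
ORDER BY wait_time_ms DESC;
```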
Richie Rump: The script formerly known as sp_BlitzFirst.
Brent Ozar: Yeah, we’re running a contest now to figure out a new name for it because we just open sourced a whole bunch of our scripts and I don’t want to have it called sp_BlitzFirst anymore because the strangers are going to start checking in code and I don’t want that answer to be reflecting on me. “I asked Brent and it said I was a moron.”
Richie Rump: That’s already in there.
Brent Ozar: Yeah, it’s in right at the end.
Brent Ozar: Ankit asks, “How do I troubleshoot SQL recompilations increasing on SQL Server?” So there’s a Perfmon counter, recompilations per second, and recompilations are increasing. What should he go look at?
Doug Lane: Did anyone recently add option recompile?
Brent Ozar: Oh, okay. I like that.
Doug Lane: Like someone may have tried to solve a parameter sniffing problem on a frequently run query. Just one idea.
Brent Ozar: Some yo-yo could be running update stats continuously. You can capture a profiler trace—this is horrible—you could capture a profiler trace and capture compilation events and that will tell you which SQL is doing it. How else would I troubleshoot that? Oh, no, you know I’m surprised Tara didn’t ask the question—it’s the question that we usually ask, “What’s the problem that you’re trying to solve? What led you to recompilations a second as being an issue that you want to track down?” That’s an interesting thing.
Tara Kizer: I wonder if it’s due to they say that your recompilations should be like 10 percent or under of your compilations. I wonder if that’s something that they’re monitoring, maybe that’s increased. Or maybe it’s a counter that they’re already tracking and the number has gone up.
Brent Ozar: Yeah, I don’t think any of us—have any of you guys run into a situation where that was the problem, recompilations a second?
Tara Kizer: That’s a Perfmon counter that I always pull up along with the compilations and I just take a quick peek at it and then I delete those two counters from the screen. A very, very quick peek.
Brent Ozar: Yes, yeah.
Tara Kizer: Of course, if recompilations are occurring more frequently, you probably have more CPU hits. So if you see that your CPU has risen, maybe it is something to look into.
Brent Ozar: If you put a gun to my head and said, “Make up a problem where recompilations a second is the big issue.” Like if I had a table that was continuously being truncated and then repopulated, truncated, and repopulated where the stats are changing fast enough that was causing compilations, even then I don’t think it’s going to be too many recompilations a second. So that’s a really good question.
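The counters Tara describes pulling up in Perfmon are also exposed inside SQL Server—a quick sketch:

```sql
-- Compilations and recompilations from the performance counters DMV.
-- Despite the "/sec" names, these are cumulative since startup:
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%SQL Statistics%'
  AND counter_name IN ('SQL Compilations/sec', 'SQL Re-Compilations/sec');
-- Sample twice a few minutes apart and diff the values to get a real
-- per-second rate; Tara's rule of thumb was recompiles staying under
-- roughly 10 percent of compiles.
```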
Brent Ozar: Tim asks a great question. This isn’t the other Tim, who also asked a great question as well, it’s a different Tim. “Is performance tuning approached differently from a transactional system versus an analytical system? When you approach an online system versus a reporting system do you do troubleshooting for performance any differently?”
Tara Kizer: I don’t as far as reporting but I’ve never really supported a true OLAP environment.
Doug Lane: Yeah. There are so many options that don’t apply to a regular OLTP environment that do apply to OLAP—specifically talking about cubes, because there are all different kinds of models that you can do. You can do hybrid. You can do ROLAP or MOLAP. All different ways of kind of choosing how much of that data you want to compile ahead of time. So the troubleshooting process would be very different if you’re talking about SSAS cubes, for example. If you’re talking about the source data, usually people don’t care about the underlying data because that ends up in some other final format, like a cube. So I guess if I were to look at a database that was just straight—what would that be, it’s been a while—ROLAP, I think, where you get it right out of the database, then I suppose I would use some of the same troubleshooting steps, like looking at wait stats, looking at long-running queries, and so on and so forth. But if you’re talking about troubleshooting a cube, that’s a whole different bag from OLTP.
Brent Ozar: Adam asks—not that your question—I just said “Adam asks.” It’s not that your question is bad. I didn’t say it was a good question. It’s still a good question. I can’t say “good question” every time or else people won’t take me seriously. “How would you approach doing replication in AGs? If I have the publisher in an Availability Group, do I have to reconfigure replication again when I fail over to my DR replica?”
Tara Kizer: So the distributor isn’t supported in an AG. So if the DR environment has its own distributor, the answer is yes, you do. Hopefully you’ve scripted it, and hopefully when you have done a failover to DR it wasn’t an automatic event. Usually, because DR is so far apart, you can’t have automatic failovers occur. So if it was a manual DR failover, hopefully you were in a downtime window and all the applications were down. You made sure that there was no data left behind, you know, that hadn’t been fully sent to the subscriber. If that’s the case, you just need to run your scripts to start up replication again right where you left off. You don’t have to reinitialize. This is a topic that I’ve done quite a bit—failover to DR using replication, AGs, pretty much every technology.
Brent Ozar: And we have a bad question from Nate. I’m not going to lie, this question is bad, Nate. You shouldn’t feel bad, but it’s a bad question. He says, “Is a self-referencing linked server as slow as a real linked server? And is it generally a bad idea or not?” How’s that work guys?
Tara Kizer: What problem are you trying to solve? Why is it referencing itself?
Brent Ozar: I’ve seen people do this and it wasn’t a good idea then either, but I’m just going to repeat what they did. So they had linked servers inside their code so that they could have the same code whenever they moved it from server to server. Then sometimes they would have reporting servers where they changed the linked server to point somewhere else. They thought that somehow doing linked server queries was going to abstract that away. Like they could move some of the tables at some point to another server. So for those of you who are only listening to the audio version and not seeing our faces, none of our faces are happy at this point. We’re all sad. Sadly, it is as slow as a regular linked server. SQL Server doesn’t know that it isn’t really a remote server.
Brent Ozar: Let’s see here, what’s up next? All kinds of questions here. Nate says—the context: he has a replicated view that points at two servers. Because this is kind of a multi-paragraph thing and he’s got a few things inside there, what you should do is post this on dba.stackexchange.com. Post as much detail as you can and talk about what the problem is you’re trying to solve. Generally when we talk about doing reporting servers, we’d rather scale up than scale out to multiple servers. You kind of see why here. Managing multiples is kind of a pain in the rear.
Tara Kizer: I think that the answer should be just don’t use linked servers though. If you need to be able to contact another server, do that through the application, not within SQL Server. Programming languages can handle this, joining two different datasets together.
Brent Ozar: Yeah. Conor Cunningham has a great talk at SQLBits when he talks about the difficulties of distributed queries. It’s pretty bad.
Brent Ozar: Nate also asks—Nate redeems himself by asking another question. Nate says, “Finally, a backup software question. What do you guys like/prefer in terms of backup software? There’s a bunch of different versions out there. Whose tools should I buy? Are they all compatible with Ola scripts?” I think Ola scripts work with everything at this point, like Idera, Redgate, and LiteSpeed. In terms of who we prefer, we’re kind of vendor agnostic since we don’t have to manage anybody’s backup software. But just in terms of experience, we’ll go through and ask. Richie, have you ever used third party backup software, and how was your experience, and which ones were they?
Richie Rump: I’ve never used backup software.
Brent Ozar: All right, Richie doesn’t use backup. He just puts things in GitHub and lets the rest of the world back up his source code.
Richie Rump: I let the DBAs handle that.
Brent Ozar: Tara, how about you?
Tara Kizer: I have a long answer. I’ve been using SQL Server a long time and backup compression didn’t exist in older versions, so yes, we started off with Quest LiteSpeed, which worked really, really well. It was fairly expensive. We wanted to get Redgate’s SQL Toolbelt, and they gave us a deal on the backup software, so we completely replaced all of our LiteSpeed licenses—which we had already paid for, it’s not like we got a refund on those—and we put Redgate out there instead. The reason why we did that is because all new incoming servers were going to use the Redgate software instead, so it made sense to have one tool rather than multiples. But we did a ton of testing on both of them, and they pretty much produced the same compression ratio, the same file size, the same restore time. I mean, the difference was so minor. One was just cheaper than the other.
Brent Ozar: Yeah, everything is negotiable. Back like ten years ago, there might have been differences, today, not so much. Doug, how about you? Have you used any third party backup tools?
Doug Lane: I yield my time to the consultant from California.
Brent Ozar: Nice. I’ve used all of them as well. They’re all good. Anything is better than trying to roll your own.
Tara Kizer: And, yes, I mean they definitely are compatible with Ola, especially the two that he’s listed. I know you said that they probably are, these two specifically are.
Brent Ozar: Yeah, absolutely. Well that wraps up our time for today. Thanks everybody for coming and hanging out with us. We will see you guys next week. Adios, everybody.
Tara Kizer: Bye.
Doug Lane: Bye-bye.