
Updated the StackOverflow SQL Server Database Torrent to 2016-03

The StackOverflow XML Data Dump was recently updated with 2016-03 data, so I’ve updated our torrent of the SQL Server database version of the Stack Overflow data dump.

Fun facts about the database and its real-world-ness:

  • 95GB in size
  • 29,499,660 posts spanning 2008-07-31 to 2016-03-06
  • 5,277,831 users spanning ages from -972 to 96 (just like real world data, you can’t trust it)
  • 46,306,538 comments (227 of which have the F-bomb)
  • Every table has a clustered key on an Id identity field, and has relationships to other tables’ Ids (again, much more real-world-ish)
  • Lots of lumpy data distribution and sizes, making it fun for parameter sniffing demos
  • Case-sensitive collation (because if you’re going to share scripts online, you want to get used to testing them on case sensitive servers – this stuff exists out in the real world)
  • 1,305% cooler than AdventureWorks

Here’s how I built the torrent:

128GB USB3 flash drives with the StackOverflow database that we use in our training classes

In our AWS lab, we have an m4.large (2 cores, 8GB RAM) VM with SQL Server 2005. We use that for testing behaviors – even though 2005 isn’t supported anymore, sometimes it’s helpful to hop in and see how things used to work.

I still use 2005 to create the dump because I want the widest possible number of folks to be able to use it. (This is the same reason I don’t make the database smaller with table compression – that’s an Enterprise Edition feature, and not everybody can use that.) You can attach this database to a SQL 2005, 2008, 2008R2, 2012, or 2014 instance and it’s immediately usable. Keep in mind, though, that it attaches at a 2005 or similar compatibility level. If you want 2014’s new cardinality estimator, you’ll need to set your compat level to 2014 after you attach the database.
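For reference, here's a minimal sketch, assuming you attached the database under the name StackOverflow – it keeps its old compatibility level until you raise it yourself:

    -- Check the current compatibility level (it attaches at the old one)
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name = 'StackOverflow';

    -- 120 = SQL Server 2014, which enables the new cardinality estimator
    ALTER DATABASE StackOverflow SET COMPATIBILITY_LEVEL = 120;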

I downloaded the Stack Exchange data dump on that 2005 VM. It’s a little confusing because the Archive.org page says it was uploaded on 1/21/2014, but that’s just the first date the file was published. The top update date of March 1, 2016 is the current version you’ll get if you use the download links at the top right of the page.

To make the import run faster, I shut the VM down, then changed its instance type to the largest supported m4 – an m4.10xlarge with 40 cores and 160GB RAM for $4.91/hour – and booted it back up. (Don’t forget to revisit your SQL Server’s max memory, MAXDOP, and TempDB settings when you make changes like this.)
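For instance, here's the kind of revisit I mean – a sketch with example values only for a 40-core, 160GB box; your numbers will differ:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Leave the OS some headroom; 150GB of 160GB is just an example
    EXEC sp_configure 'max server memory (MB)', 153600;
    RECONFIGURE;

    -- One plausible starting point for 40 cores; tune for your workload
    EXEC sp_configure 'max degree of parallelism', 8;
    RECONFIGURE;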

I created an empty StackOverflow database, then fired up the Stack Overflow Data Dump Importer (SODDI), an open source tool that reads the XML data dump files and does batch inserts into a SQL Server database. I pasted in a connection string pointing to my SQL Server – ConnectionStrings.com makes this easy – and off it went:

SODDI importing the StackOverflow XML dump

The import finished in about 25 minutes, although it turns out the extra cores didn’t really help here – SODDI is single-threaded per import file:

Using a few threads while we import a few files

After SODDI finished, I stopped the SQL Server service so I could access the ~95GB data and log files directly, then compressed them with 7-Zip set to ultra compression across 32 cores – and this time, the CPU usage told a little different story:

Whammo, lots of active cores and 16+GB memory used

After creating the 7z file, I shut down the EC2 VM and adjusted it back down to m4.large. I created a torrent with uTorrent, then hopped over to my Whatbox. Whatbox sells seedboxes – virtual machines that stay online and seed your torrent for you. They’re relatively inexpensive – around $10-$30/mo depending on the plan – and I just go for unlimited traffic to make sure the database is always available.

To double-check my work, I fired up my home BitTorrent client, downloaded the torrent, extracted it, and attached the database in my home lab. Presto, working 95GB StackOverflow database.

Now, you can go grab our torrent of the SQL Server database version of the Stack Overflow data dump. Enjoy!

My Favorite Database Disaster Stories

The statute of limitations has passed, so this week on SQL Server Radio, I get together with Guy Glantser and Matan Yungman to talk about our favorite oops moments.

I talked about my very first database disaster ever – done back when I was in my 20s and working for a photo studio, using Xenix, long before I ever thought I wanted to be a database administrator. (Yes, kids, Microsoft had their own Unix thirty years ago, and suddenly I feel really, really old. No, I wasn’t using it thirty years ago.)

This episode was so much fun because we recorded it in-person, together, gathered around a table in Tel Aviv when I was there for SQLSaturday Israel 2016. I really love talking to these guys, and I think you can hear how fun the chemistry is on the podcast.

Head on over and listen to our disaster stories, and when you’re done, check out my classic post 9 Ways to Lose Your Data.

When Should You Hire a Consultant for Amazon RDS?

Powered By Somebody Else’s Database on Somebody Else’s Computer

You’re hosting your SQL Server databases in Amazon RDS, and performance has been getting slower over time. You’re not sure if it’s storage IOPs, instance size, SQL Server configuration, queries, or indexes. What’s the easiest way to find out?

Ask a few questions:

Are you using SQL Server Enterprise Edition? The smallest EE server, a db.r3.2xlarge, costs about $4,241 per month on demand (which isn’t the cheapest way to buy, of course). The next size up doubles in cost – which means if the performance tuning efforts could drop you down by just one instance size, a consulting engagement would pay for itself well within two months.

Are you using mirroring for multi-AZ protection? If so, the delays required for synchronous writes between availability zones may be your biggest bottleneck for inserts, updates, and deletes. Check your wait types with sp_AskBrent, and if the top ones are database mirroring, then the data changes aren’t likely to get faster with hardware tuning. Increased IOPs might help – but it takes deeper digging to get to that conclusion. It’s time to look at reducing your change rate in the database. If, on the other hand, your biggest bottleneck consists of select queries, consulting can help.
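A minimal sketch of that wait-type check (sp_AskBrent samples server activity over a window you specify):

    -- Sample waits over 30 seconds; DBMIRROR% waits at the top suggest
    -- synchronous mirroring is throttling your writes
    EXEC sp_AskBrent @Seconds = 30, @ExpertMode = 1;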

Are you locked into a long-reserved instance? You can sell reserved EC2 instances on the secondary market, but you can’t sell RDS instances as of this writing. If you’re having performance problems on it, this is definitely a time to call for consulting help fast. You want to avoid dumping the smaller instance and jumping into another commitment if a growing customer base or slowing code base could mean yet another instance type change.

Or are you running a single Standard Edition instance in just one AZ? Try standing up another RDS instance – but this time with the largest Standard Edition instance type you can get, around $12/hour as of this writing. Run the same types of queries against it, and within a couple hundred bucks of experimentation, you can get an idea of whether or not hardware will be a cheap enough solution. Granted – your time isn’t free – but it’s cheaper than a consulting engagement.

These questions help you figure out when it’s just cheaper to throw more virtual hardware at it.

[Video] Office Hours 2016/04/20 – Now With Transcriptions

This week, Brent, Erik, Jessica, Richie, and Tara discuss database modeling tools, how to learn about database corruption, the new cardinality estimator, and the only question that will be on our certification exams.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

What ERD Tool Should I Use?

Jessica Connors: All right, question from Lee. Let’s jump in there. He says, “We have a system where the vendor went belly up. Now I am tasked with getting an ERD for the database so we can move it to a new system. What tools, free if possible, would you suggest?”

Richie Rump: Oh, uh-

Tara Kizer: I mean, Management Studio has it built in, the ERD, but anytime I’ve used ERDs, I’ve always used Erwin or whatever.

Richie Rump: Technically, that’s not an ERD. That’s just a diagram, right?

Tara Kizer: Yeah.

Richie Rump: An ERD is an all-encompassing tool – it’s essentially like a CASE tool. Did I just say CASE? Yeah. You can generate databases from it. You can reverse engineer it. I think Visio can still do it for free in a pinch, which I’ve done, but my favorites are Embarcadero ER Studio and Erwin, still. They both have their pluses and minuses. Both of them have the same minus, which is a very large price tag. I have actually purchased my own copy of ER Studio because I do data design a lot, and I’m kind of crazy that way.

Brent Ozar: I want to throw something else weird out there. If the vendor went belly up, the diagramming or ERD tool is going to be the least of your worries. Holy cow, buckle up. You’re going to be developing the bejeezus out of that database. If you’re taking it over from here on out, go buy the tool because this is your new career. This is what you’re going to be working on.

Tara Kizer: Hopefully, they have the source code [inaudible 00:03:29]

Richie Rump: Wow.

Brent Ozar: Oh, that would suck.

Jessica Connors: Yeah, what happens when vendors just die like that? It’s just like, oh, sorry customer. You can have the product, but we’re not iterating on it. We’re done.

Brent Ozar: That was my life in 2000 to 2001. We had a help desk product where the vendor was like, yeah, no, we’re not doing this anymore. I had to build a new web front end to their existing database and gradually like change things over. That was two years of my life that I would really like back. The hilarious part is that the company that I worked for then still uses that crappy web help desk that I built because they lost the source code after I left.


How Should I Troubleshoot a Big, Slow Stored Procedure?

Jessica Connors: Oh boy. All right, James says, “I have an SP that has 53 queries in it. What is the best way to approach troubleshooting this slow query?”

Erik Darling: After the execution plan-

Tara Kizer: The execution plan, SET STATISTICS IO, you know.

Erik Darling: Then, look at it in SQL Sentry Plan Explorer – it makes it really easy to see which statement in the stored procedure takes up the biggest percentage of the work.

Brent Ozar: The other thing you can do if you want to play with it is take it over to development – like, restore a copy of the production database. Blow your plan cache: run DBCC FREEPROCCACHE. I emphasize, you’re going to do this in development. You’re not going to do it in production. Then, immediately after you free the plan cache, run the query and use sp_BlitzCache. sp_BlitzCache will show you which lines in the stored procedure do the most reads, CPU, runtime, whatever.
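A minimal sketch of that workflow, with a hypothetical procedure name standing in for the 53-query monster:

    -- Development only, never production: wipe the plan cache
    DBCC FREEPROCCACHE;

    -- Run the slow procedure to get fresh plans into the cache
    EXEC dbo.usp_MyBigProcedure;  -- hypothetical name

    -- Then see which statements did the most work
    EXEC sp_BlitzCache @SortOrder = 'reads';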


Should I Update Stats When I Change Compatibility Levels?

Jessica Connors: Question from Tom Towns, I haven’t gotten this one. It says, “Is it necessary or advisable to rebuild indexes/stats when downgrading compatibility level on a database but staying on the same version of SQL Server?” He’s running SQL Server 2014.

Erik Darling: No, not necessarily, but you should be doing regular maintenance on your databases anyway.

Brent Ozar: That’s actually a good question, because it comes from the history of when you went up a version with SQL Server, there was a lot of advice around that you should update your stats – and for a while there, we were trying to find the root of that, like where that advice came from.

Erik Darling: My best guess on that is that there are slight tweaks and twinges to the cardinality estimator, and when you update versions, that doesn’t necessarily kick in just by restoring the database. Updating the stats just kind of helps push things to the whatever, new ideas that cardinality estimator has for your data.
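If you do decide to update stats after changing the compatibility level, a quick sketch (the table name is just an example):

    -- The quick, coarse pass over the whole database
    EXEC sp_updatestats;

    -- Or a full scan on one important table
    UPDATE STATISTICS dbo.Posts WITH FULLSCAN;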


Will Brent Ozar Unlimited Have a Certification Program?

Jessica Connors: Let’s see here, you guys are quiet today. In terms of certifications, would we ever think about giving our own Brent Ozar stamp of approval? I know we give out those certificates of completion with the funny little cartoons on there, but Brent, have we ever actually thought about giving our own certification course, classes, all of those things?

Brent Ozar: It would really just consist of: can you write us a check for 200 dollars? Yes? You are a member of a very prestigious club. I bet if we all got together and focused, we could probably put together a test that would do it. The problem is, it’s really expensive. It takes a lot of time to do it, because you’ve got to do it the right way. I was part of a team that helped build the Microsoft Certified Master – the next version, the one that never ended up going public. I learned so much from Microsoft around that process, where the questions have to be legally defensible. They can’t be biased towards English-speaking people. They can’t have cultural issues – you can’t bake in certain business knowledge or assume certain cultural knowledge. It needs to be a fair playing field for anybody who touches SQL Server. That’s really hard. I have a lot of respect for people who write certification exams, but I agree with you. The Microsoft one sucked pretty bad.

Richie and I talked about that in the Away from the Keyboard podcast.

Richie Rump: Yeah, on Away from the Keyboard. I think the name of the episode was Brent Ozar Loses His MVP.

Brent Ozar: Yeah. I trash talked the bejeezus out of that, but yeah.

Richie Rump: And yet, we got renewed. He’s trying people. I don’t understand.

Brent Ozar: We don’t even have tests in our classes. What we end up doing is we have group discussions. I’ll give you a homework assignment, and everybody works together at the same time – some of the assignments you do by yourself, some of them you do in groups. Even that – just the discussions afterwards of people saying, “I think this answer is right,” “No, I think this answer is right” – can take hours, so it’s pretty tricky.


Should I Install Multiple Instances On One Server?

Jessica Connors: Let’s see here, Steve. Move on to Steve. He says, “We are being given a virtual server to install SQL 2014. Which would be better, install one instance of SQL with all of our databases or several instances of SQL server with fewer databases on each instance?”

Brent Ozar: We’re all trying to be polite, I think.

Tara Kizer: Is this all for one box? If it’s for one box, we don’t recommend stacking instances. One instance per box.

Brent Ozar: Why don’t we recommend stacking instances?

Tara Kizer: I mean, you have to determine the max memory setting for them, and you might configure it in a way that’s not optimal for one instance – another instance might need more memory. You might be playing that game and just fighting for resources. How many databases are we talking about? Are we talking about just 40 or 50? Are they all critical? Are they all non-critical? Can you have another virtual server where you can put some databases on that one and other databases on the other?

Erik Darling: A lot of what I would use to dictate where I’m going to put things is RPO and RTO. If you have a group of databases that all have 24/7 uptime and higher-level requirements – stuff that needs high availability, stuff that needs better backups, things that need more horsepower behind them – I would put those on a server or servers where I’m going to be able to pay special attention to them. The less needy stuff – say, bug tracking or help desk, something you can put off – I would put on other servers that I can sort of stick in a corner and forget about, and not worry about performance, or about one database stepping on the toes of another.


How Do I Get Rid of Key Lookups?

Jessica Connors: All right, question from [inaudible 00:10:51]. She asks, “My key lookup is still showing after the column was added to a covering index. Anything I could do to avoid the key lookup?”

Brent Ozar: We’ve seen this with a few. You want to look a little deeper at the execution plan. Sometimes there is something called a residual predicate, where there’s other fields being looked up, not just the key. When you hover your mouse over the key lookup, look for two things: the predicate and the output. Maybe SQL Server is seeking on some other fields, or it’s outputting other fields. If there is no predicate and no output, use SQL Sentry Plan Explorer, anonymize that plan, and post it online. There are known issues – I’ve seen them, where I can’t get rid of the lookup – and sometimes there are gurus on there, like Paul White, who can take a look at the plan and go, oh, of course, it’s the bug right here.

Erik Darling: That happened to me sort of recently. I was helping a client get some computed columns together for a table, and for some reason, we added the computed columns and started referencing them, but in the execution plan, there was still a key lookup for all of the columns that made the computed column work, right? We had five or six columns that added up to make a computed column. We added the computed column to the included columns of the index, but it still wanted all of the columns that made up the computed column for some reason. That was a very weird day.

Tara Kizer: What did you do to fix it?

Erik Darling: Added the columns it was asking for in the output columns, and the key lookup went away. It was very weird, though. It was something that I tried to reproduce on 2012 and 2014, and I couldn’t get it to happen.

Richie Rump: He stared at it, and-

Erik Darling: Intimidated it.

Richie Rump: Changed.

Brent Ozar: Flexed.
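For the usual case – no bug, just an index that doesn’t quite cover the query – the fix looks something like this sketch, with hypothetical table and column names; the INCLUDE list has to cover everything the lookup’s predicate and output mention:

    -- Add the lookup's output columns as INCLUDEs so the index covers them
    CREATE NONCLUSTERED INDEX IX_Users_DisplayName
        ON dbo.Users (DisplayName)
        INCLUDE (Age, Location, Reputation)
        WITH (DROP_EXISTING = ON);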


How Do I Learn About Database Corruption?

Jessica Connors: Database corruption, we haven’t talked about that.

Richie Rump: Ew.

Jessica Connors: That sounds fun.

Brent Ozar: It is fun. It is actually fun.

Jessica Connors: Is it? It reminds me- [crosstalk 00:13:05] Now I’m thinking about that song by the Beastie Boys.

Brent Ozar: Sabotage?

Jessica Connors: Yeah, that’s the one. Let’s see, it’s from Garrett. He says, “What’s the best way to learn about how to resolve database corruption with zero data loss?”

Brent Ozar: Steve Stedman’s database corruption challenge. If you search for Steve Stedman – it’s just S-T-E-D-M-A-N – database corruption challenge, he had like 10 weeks of challenges with different sample databases that were corrupt. You would go download them, and you would figure out how to fix them. The challenges are kind of two-part: you download the sample database, and then you go try to figure out how to recover as much data as you can, with zero data loss. Then, you go read how other people did it. They share their answers and show how fast they were able to get the data back.

Erik Darling: Get real comfortable with some weird DBCC commands.

Brent Ozar: Forcing query hints to get data from some indexes and not others, oh it is awesome. Even if you don’t do it, just reading how hard it is is awesome.

Erik Darling: You know what always gets me is when you have to find binary data, but it’s byte-reversed, so there’s like stuff with DBCC WRITEPAGE, and oh, forget it.

Brent Ozar: Screw that. No way.

Erik Darling: Call someone else.

Brent Ozar: It’s really-

Jessica Connors: All of you, you’ve all dealt with this before? Database-

Brent Ozar: I’ve never dealt with it in real life. I don’t like-

Tara Kizer: I’ve done it two or three times. Do I like it? No. It was probably about five years ago. I wasn’t the on-call DBA. Two DBAs got woken up, and this was on a weekend. By seven in the morning, they hadn’t slept yet. They couldn’t even think clearly at that point, so that’s when they called me to take over for the next several hours. We ended up having to restore the database, and we had data loss, and it was all due to the SAN. We had some SAN hardware issues. It was bad. It was really bad.

Brent Ozar: It’s one of those where you can go your whole career and never see it, but if you have storage that isn’t bulletproof reliable, or if you have a team that screws around and cuts corners, then you spend years of your life dealing with that.
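If you do grab one of Steve’s challenge databases, the starting point is the standard integrity check – the database name here is hypothetical:

    -- Show every error, and skip the informational chatter
    DBCC CHECKDB ('CorruptionChallenge1') WITH NO_INFOMSGS, ALL_ERRORMSGS;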


Should I Feel Bad for Not Using the Current Compat Mode?

Jessica Connors: Question from Mandy. She says, “Hi. We upgraded to 2014 Standard Edition on new hardware a couple of months ago, but left our databases in 2012 compatibility mode. A few weeks ago, I upped the compatibility mode to 2014 and afterwards had such major contention and blocking problems, we had to change it back to 2012 after about 30 minutes. Is this common? What can we look for in our database to resolve this?”

Brent Ozar: It would be more common if more people were upgrading.

Erik Darling: Everyone’s still on 2008.

Tara Kizer: Is that the cardinality estimator, probably, doing that?

Erik Darling: Yeah, it sounds like that kicking in.

Tara Kizer: Is there a way to turn it off and still have your compatibility level be 2014?

Brent Ozar: This is tricky. There are trace flags, and you can turn it off at the query level, but Mandy, you were bang on – you were perfect to wait a couple of weeks. That’s exactly what you want to do. Then, you want to flip it on a weekend, like on a Saturday morning. Let it go for just 10 or 15 minutes, but as soon as you can, run sp_BlitzCache and gather the most CPU-intensive query plans, the most read-intensive query plans, and the longest-running query plans. You’ve got to save those plans out to files, and then switch back to 2012 compatibility mode so that you’re back on the old CE. Then, you take the next week or two troubleshooting those query plans to figure out: what is it about these queries that sucks? Is it the new cardinality estimator? So often, people just test their worst queries with the new 2014 CE, not understanding you need to test all of the queries, because the ones that used to be awesome can suddenly start sucking with the new CE.
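A sketch of that capture step, using sp_BlitzCache’s sort orders:

    -- Right after flipping the compat level, grab the heavy hitters
    EXEC sp_BlitzCache @SortOrder = 'cpu';
    EXEC sp_BlitzCache @SortOrder = 'reads';
    EXEC sp_BlitzCache @SortOrder = 'duration';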

Erik Darling: Or, in a couple of months, just upgrade to the new 2016 and use the query data store and then just bang that right off. Done.

Brent Ozar: What’s the query data store?

Erik Darling: It’s this neat thing. It’s like a black box for SQL Server. What it does is it basically logs all of your queries. It’s a good fit for cases like yours, where you had good query plans that were working, and then all of a sudden something changed and you had bad query plans. It gives you a way to look at and find regressions in query plans, and then force the query plan that was working before, so that it uses that one rather than trying to mess around and just use the new one. Otherwise, you’re kind of stuck, like you are on 2014, where you could switch to the new cardinality estimator, but then you would have to use some trace flags on the specific queries to force the old cardinality estimator, which is not fun. I don’t recall those trace flags off the top of my head. I never do.
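On 2016, that regression hunt looks roughly like this – a sketch against the Query Store catalog views, with placeholder IDs you’d swap for real ones:

    -- Find the slowest queries and the plans behind them
    SELECT TOP 10 qsq.query_id, qsp.plan_id, qsrs.avg_duration
    FROM sys.query_store_query qsq
    JOIN sys.query_store_plan qsp ON qsq.query_id = qsp.query_id
    JOIN sys.query_store_runtime_stats qsrs ON qsp.plan_id = qsrs.plan_id
    ORDER BY qsrs.avg_duration DESC;

    -- Pin the plan that used to work (42 and 7 are placeholders)
    EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;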

Richie Rump: It’s like a flight recorder for queries? Is that about right?

Brent Ozar: Listening in on the cockpit catches people doing things they shouldn’t be doing.

Erik Darling: I was having a drinking contest.


Does the New Cardinality Estimator Get Used Automatically in 2014?

Jessica Connors: Let’s see, question while we’re on the topic of the cardinality estimator. Question from Nate, we may have answered this. “Speaking of the new CE for 2014, does it automatically get used for everything or only for the DBs in the 2014 compatibility level, or is it off by default, and it’s up to the DBA to turn it on at their discretion?”

Erik Darling: Or indiscretion.

Brent Ozar: Yeah – if you’re upgrading a SQL Server, or if you’re attaching databases to a 2014 SQL Server, the new cardinality estimator doesn’t get used by default. It does get used if you create new databases, and they’re in the 2014 compat mode. Yeah, it’s totally okay to leave your databases in the old compat mode. You can leave them there as long as you want. There are no gotchas with that. Totally okay, totally supported.

Jessica Connors: What if it’s in a 2005 compatibility mode?

Brent Ozar: What’s the oldest one that’s supported? I think it’s 2008.

Tara Kizer: I just looked yesterday, because a client had a question, and 2014 does have the 2005 compatibility level in the list. I was surprised. I was very surprised.

Brent Ozar: So generous.


Should I Use “Optimize for Ad Hoc Workloads”?

Jessica Connors: Question from Robert. He says, “The ‘optimize for ad hoc workloads’ instance setting – good to turn on by default, or only when ad hoc queries are frequent?”

Brent Ozar: This is tricky. You will read a lot of advice out there like you should turn it on by default, because it doesn’t have a drawback. I’m kind of like – if I see it on, it doesn’t bother me. It just doesn’t usually help people unless they truly have a BI environment where every query comes out as a unique, delicate snow flower. Have any of you guys ever seen problems with optimize for ad hoc workloads?

Tara Kizer: Problems? No.

Richie Rump: No.

Erik Darling: I’ve never seen problems. You know, every once in a while, you’ll see someone where like 80 or 90% of their plan cache is single-use queries. At that point, that’s not really an efficient use of the memory or what you’re storing in the plan cache.
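For reference, a quick sketch of both the single-use-plan check and the setting itself:

    -- How much of the plan cache is single-use plans?
    SELECT usecounts, COUNT(*) AS plans,
           SUM(CAST(size_in_bytes AS BIGINT)) / 1024 / 1024 AS cache_mb
    FROM sys.dm_exec_cached_plans
    GROUP BY usecounts
    ORDER BY usecounts;

    -- Turning the setting on
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;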

Jessica Connors: Let’s see, question from Nate. We are still on compatibility mode questions. Last one: “Is there anything wrong with leaving all DBs in lower compat levels when you do a SQL Server migration to 2014? Like, leave all the DBs at the 2005 or 2008 or 2008R2 compatibility levels and then start upping them later when we have breathing room?”

Erik Darling: No.

Tara Kizer: You won’t get the new features.

Brent Ozar: Totally okay. Totally okay.

Tara Kizer: Eventually, you’re going to have to up them, because you’re going to be upgrading and you can’t go down to that level. I imagine that 2016 doesn’t go down to 2005.

Erik Darling: I mean, if you’re doing that, before you up the compatibility level, just make sure that you check Microsoft’s lists of broken features and deprecated stuff and new reserved keywords. If your old databases are using keywords that have now become reserved, stuff could break pretty easily.
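One quick way to check for deprecated feature use before you flip the level – SQL Server keeps running counters of every deprecated feature the instance has seen:

    -- Each row is a deprecated feature this instance has actually used
    SELECT instance_name AS deprecated_feature,
           cntr_value AS times_used
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%Deprecated Features%'
      AND cntr_value > 0
    ORDER BY cntr_value DESC;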


When Will We Add More Training Videos?

Jessica Connors: James is getting sick of our same old training videos on our website. He is wondering when we will be adding more videos to the training library. If so, what? When? How?

Erik Darling: As soon as you pay us to make them.

Brent Ozar: Doug is doing a new one on advanced querying and indexing. It has a game show theme, styled after a ’70s game show. We are aiming to have that one online in June. To give you a rough idea, it takes about three months’ worth of work to do a six-hour class at the level of Doug’s videos. If you’ve watched T-SQL Level Up, it’s phenomenal production value. It takes three months’ worth of work to do a six-hour video.

Brent Ozar: For my level of production values, where it’s a guy talking in front of a green screen, it’s usually about a month. I’m working on Performance Tuning When You Can’t Fix the Queries. That one will be out probably in June or July, somewhere in there. The trick with that is, the Everything Bundle price will go up at that time. If you’re interested in getting an Everything Bundle, I would do that this month. It will include any videos that we add during the course of your 18-month ownership. This month, the Everything Bundle is on sale, even further than half off – now it’s just $449 for 18 months of access to all of our videos. That is a one-month-only sale to celebrate our 5th anniversary. After that, it’s right back up to $899.

Jessica Connors: Then, what’s it going to go up to?

Brent Ozar: It depends. I think we have two more $299 videos. I wouldn’t be surprised if we pushed it to $999.

Jessica Connors: Ah.

Richie Rump: $999? That’s still a deal.

Brent Ozar: Richie says in his sales person voice.


Why Does sp_BlitzIndex v3 Have Less Output?

Jessica Connors: All right, question from Justin. Hello, Justin. He says, “sp_BlitzIndex isn’t recognizing new indexes on a table where I deleted all of the indexes and ran a replay trace against it, but SSMS is recommending some, and so is a third-party tool. Any idea what would cause this?”

Brent Ozar: I bet you’re running the brand new version of sp_BlitzIndex, version 3, that just came out, where we ignore crappy little tables that don’t really make that much of a difference in terms of performance. If you want the crappy-little-tables version, you can run the older version of sp_BlitzIndex, which is included in the download pack, or use it with @Mode = 4, which does the fine-grained analysis that we used to do. Just know that you’re probably not going to get that much of a performance improvement – not if they’re tiny little tables, tiny little indexes.
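For reference, a sketch of that call (the database name is just an example):

    -- @Mode = 4 restores the old fine-grained analysis, small tables and all
    EXEC sp_BlitzIndex @DatabaseName = 'StackOverflow', @Mode = 4;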


Should I Use Somebody’s HA/DR Software on My SQL Server?

Jessica Connors: Yeah, he said that he was using the new one. Brent, or anyone – has anyone here heard of the DH2i enterprise container HA solution, and what is your opinion? Have you heard of that? It’s from-

Brent Ozar: I have heard of it. I don’t know anybody using it – I don’t know if you guys do either. I have this whole thing where, if I’m going to install software that’s supposed to make my stuff more highly available, my whole team better know how to use it, and it better be really well tested. I need to be able to talk to other people and get training on how it works. I would just talk to other people who use it and ask, hey, can you give me a rundown of what issues you’ve had over the last 12 months? Make sure they’re a similarly sized customer to you – like, if you’ve got 100 servers, the person you talk to should have 100 servers.

Erik Darling: What I find helpful is that a lot of vendors like this will have help forums. What I like to do is just read through the help forums and see what kind of problems people are facing, kind of common questions and see if there’s any answers there that kind of help me figure out if this is right for me.

Jessica Connors: On any of the help forums, have any of you guys seen people giving feedback like, do not buy this product, this is terrible?

Erik Darling: You know, you see some questions that are like, X product stopped working, and this broke, and this stopped working, and we had a major outage – what do I do?

Brent Ozar: And you look at the time lag between answers, like how long it takes to get it fixed.

Erik Darling: And then like, no actual company support people are chiming in. It’s all other users that had the same problem.

Tara Kizer: I just don’t know if I would want to use a product that isn’t widely used in industry. Do you want to be the first customer using this product or the first five? I want to use the products everyone else is using.

Jessica Connors: We are doing some upgrades to our CRM right now, and there’s little things I want to change to make it work, and the engineer is sending me, basically, these forums of people like, I really want this feature – it’s never going to be turned on; this piece doesn’t work – and it’s from five years ago, four years ago.

Brent Ozar: Utter silence. There’s also a follow-up question from someone that says, oh, so-and-so did an online session on that. Just know that often, consultants who are paid by vendors will do sessions going, hey, this tool is amazing, I would be glad to help you install it. They are selling the software, and they are selling their consulting. Look at it just the same as you would an infomercial. I would ask to talk to the customers using it, not to the people you are going to pay money to.

Jessica Connors: Have we done that? Has anyone ever come up to us with another product like, hey, can you do a webcast for their product?

Brent Ozar: Oh, all the time – because we do webcasts, too, for like Dell and Idera and all this. I’m going to talk about the native way of doing it, like the pain points of how you do it without software. If you want to talk about how your software does it, that’s totally cool; I can’t talk about your software, because I don’t use it. I just have to jump from one client to another. Every now and then, people will say, “Here’s a bucket of money. How much of this bucket would you like in order to say our product is awesome?” I’m like, no. I only have one reputation. What happens if the product sucks butt wind? I will review it privately – if you want us to spend three or four days hammering on your software and seeing how it works, we would be glad to do it and then give you a private review. I’m not going to go public and say that it’s awesome when it smells like…

Richie Rump: Erik and I will do it. That’s not a problem.

Brent Ozar: Yes.

Erik Darling: My reputation stinks anyway.


What’s the Ideal HA/DR Solution for 5-10 SQL Servers?

Jessica Connors: Let’s see – just for fun, a question for if you run out of things to answer. This is really open-ended. “What is your ideal HA and DR solution for a SQL Server environment with five to ten instances?” That depends.

Brent Ozar: No, we should each answer that. Tara, what’s your ideal for five to ten instances? Say you’ve got one DBA.

Tara Kizer: I don’t know how to answer that. I’ve never worked in an environment where I was the only DBA. I have always worked in an environment where there were probably three to eight DBAs. Availability groups is my answer for almost everything – you say HA, I say availability groups. I know you guys don’t like that, but that’s what we implemented. We had large DBA teams. We had larger server teams that understood Windows, clustering, all that stuff. It works well if you have the people that know clustering, you know, all the features.

Brent Ozar: Tara doesn’t get out of bed for less than 10,000 dollars. Erik, how about you?

Erik Darling: For me, if you just have one to two DBAs, but you may have some other support staff, I would say failover clustering and then something like either async mirroring or log shipping. That’s usually pretty decent for those teams. It’s pretty manageable for most people who don’t have the DBAs with the football jerseys on and the schedules and the tackle charts and everything.

Brent Ozar: Yeah, how about you, Richard?

Richie Rump: From a developer’s perspective, because I joined this team, and all of a sudden, I have a developer’s perspective. I love that. It’s Azure SQL Database, right?

Brent Ozar: Oh, you’re cheating.

Erik Darling: Wow.

Richie Rump: It’s all kind of baked in there, and I don’t have to think about it. A lot of it’s done for me. As a developer, I’m going to take the laziest, simplest way out. That would be it.

Brent Ozar: Man, you win points on that. That’s genius. That is pretty freaking smart. I would probably assume that they’re not mission critical – that it’s stuff I could stand some downtime on. I actually would probably go with just plain old VMware. I would just make them single instances in virtualization, and then do something like log shipping for disaster recovery, or VMware replication. Now, this is not high availability. It’s just availability – it’s just the A. If you cut me down to just like five SQL Server instances and no DBA, or like one DBA who’s kind of winging it and spending most of his time on a free webcast, then I kind of like that. If not, I’m still kind of a fan of async mirroring, too. Async mirroring is not bad.

Erik Darling: Much less of an albatross than its sync brother.

Brent Ozar: Yeah, that thing blows.

Jessica Connors: Cool.

Erik Darling: I’ll say one thing though, not replication.

Tara Kizer: Yeah.

Jessica Connors: So many people using replication out there.

Tara Kizer: Replication is mostly okay, but not as an HA or DR feature.

Jessica Connors: Ah, they’re using it for the wrong thing.

Tara Kizer: It’s really a reporting feature.

Brent Ozar: It’s funny to see, too, like all of us end up having these different answers, and this ends up being what our chat room is like. In the company chat room, like we all have access to everybody else’s files. We know what everybody is working on. Somebody can be like, Hey, I’m working with this client. Here is what they look like. Here’s what their strengths and challenges are. It’s really fun to bounce ideas off people and see what everybody comes up with.

Jessica Connors: Mm-hmm – well, all right, guys. It’s that time.

Erik Darling: Uh-oh.

Brent Ozar: Calling it an episode. Thanks everybody for hanging out with us, and we will see you next week.

Breaking News: Query Store in All Editions of SQL Server 2016

Bob Ward talking Query Store at SQL Intersection

Onstage at SQL Intersections in Orlando this morning, Bob Ward announced that Query Store will be available in all editions of SQL Server 2016.

This is awesome, because Query Store is a fantastic flight data recorder for your query execution plans. It’ll help you troubleshoot parameter sniffing issues, connection settings issues, plan regressions, bad stats, and much more.

I’m such a believer in Query Store that sp_Blitz® even warns you if Query Store is available, but isn’t turned on.

Wanna learn what it is and how to use it? Books Online’s section on Query Store is a good place to start learning, and check out Bob’s slide deck and resource scripts.
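If you want to turn it on and poke around, it’s a one-liner – a sketch, assuming a 2016 database named StackOverflow:

    ALTER DATABASE StackOverflow SET QUERY_STORE = ON;
    ALTER DATABASE StackOverflow SET QUERY_STORE (OPERATION_MODE = READ_WRITE);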

And oh yeah – Argenis Fernandez and I had a little bet. He bet that Query Store would be fully functional in Standard Edition, and I bet that it wouldn’t. I’ve never been happier to lose a bet, and I made a $500 donation to Doctors Without Borders this morning. Woohoo!

Update 4/21 – note the comment below from Bob Ward, who clarifies that this wasn’t quite ready for release yet, and feature decisions may not have been made yet.

[Video] Free Training of the Week: Statistics and Memory Grants

This week is Stats Week here at Brent Ozar Unlimited, so let’s kick things off with a module from How to Think Like the Engine. Why does one query get wildly different execution plans? In this 17-minute video, learn how statistics influence your query plans, discover how to see your own statistics, and understand how stats help build memory grants.

UPDATE: expired!

(If you don’t see the video above, you’re reading this somewhere that doesn’t support video embedding. You’ll need to read the post on our blog.)

This video is part of our How to Think Like the Engine class, and along with the rest of our videos, it’s on sale this month! Use coupon code HighFive for half off our training videos and bundles.

[Video] Office Hours 2016/04/13

This week, Jessica, Richie, and Tara discuss whether you should skip SQL Server 2014 and jump to 2016, our Performance Tuning When You Can’t Fix the Queries class, and whether you should detach a database in order to drop the connections to it. (Wow!)

Here’s the video on YouTube:

You can register to attend next week’s Office Hours, or subscribe to our podcast to listen on the go.

Announcing Our New Pre-Con: Performance Tuning When You CAN Fix the Queries

The team resting up before tuning queries

Going to be in the Seattle area in October?

Are your users frustrated because the app is too slow, and you can change the SQL queries – but you’re not sure how? Or which ones? Or where to start?

Take heart – there’s hope. The Brent Ozar Unlimited team does this every week, and we’ll share our proven methodologies to performance tune code, execution plans, Entity Framework, and more.

This is a two-day pre-conference session in Seattle on Monday-Tuesday, October 24-25, 2016. Lunch is included with your admission. (You’re on your own for breakfast and dinner.) Class is held at the Big Picture Theater in comfy seats from 9AM to 4:30PM.

You’ll meet the consultants – Angie, Brent, Doug, Erik, Richie, and Tara – for two days of learning and fun, plus t-shirts, magnetic poetry kits, and more.

To celebrate the launch, it’s $100 off – it’s one heck of a deal at just $299 for 2 solid days of training! Register now – our pre-cons usually sell out. See you in Seattle!

P.S. – there’s t-shirts, magnetic poetry kits, stickers, and swag involved.

Looking for a New Challenge? kCura is Hiring a DBA.

I’m a big kCura Relativity fan – it’s an application that really pushes SQL Server hard, written by people who are a ton of fun to work with.

If you’re looking for a challenge in a really cool environment, check out what they’re looking for:

A Production Database Administrator with a deep demonstrated knowledge of MS SQL administrative tasks and the ability to consult on design, development, and automation improvements.  Having a passion for maintaining MS SQL databases that meet or exceed internal and client contracted production SLAs for availability and performance. The right candidate will ideally have past hands-on experience administering MS SQL databases running in a public cloud environment such as AWS or MS Azure.

Responsibilities

  • Install and configure SQL Server 2012 and higher versions. Configurations built and supported should include scenarios that include leveraging SQL Always On, windows fail over clustering, and transaction replication.
  • Document complex installation, configuration, and optimization procedures so they can be automated.
  • Provide support 24/7/365 for any troubleshooting or corrective actions related to incidents impacting application availability within the production environments.
  • Take proactive measures to monitor, trend, and tune SQL databases, such as running maintenance jobs (backups, DBCCs, apply indexes/re-indexing, etc.), to meet or exceed baseline stability and performance SLAs on large databases (1 TB+) and large volumes of databases (100+).
  • Create, implement, and maintain SQL DB Health Checks, and have a demonstrated ability to automate SQL health reporting/event notification, and corrective actions.
  • Configure SQL Server monitoring utilities to minimize false alarms, and have a demonstrated ability to monitor/trend SQL environments to determine and implement enhanced monitoring thresholds to prevent incidents and reduce mean time to recovery (MTTR).
  • When performance issues arise, determine the most effective way to increase performance including scaling up or out, server configuration changes, index/query changes, etc.
  • Identify code defects and enhancements and develop a detailed root cause analysis that can be leveraged by the product management and development teams to improve application availability and decrease the total cost of ownership.
  • Ensure databases are being backed up and can be recovered in a manner that meets all BCDR objectives for RPO and RTO.
  • Perform all database management responsibilities in Microsoft Azure for production and non-production workloads.

Qualifications:

  • At least 4 years’ experience working as a Microsoft SQL DBA leveraging versions 2008r2 or later.
  • Experience working in a 24/7/365 operation.
  • Bachelor’s degree in computer science or information systems.
  • Familiar with basic Azure IaaS capabilities, and some experience designing and building MS SQL databases within Azure or AWS.
  • Microsoft certifications such as MCSE, MCSD, etc.
  • Experience operating in an ISO certified and/or highly regulated (SSAE, PCI, HIPAA, etc.) hosting operation.
  • Familiar with Dev/Ops concepts, and ideally experience working with a Dev/Ops team focused on implementing and enhancing continuous delivery capabilities.
  • Experience automating SQL Server deployment and configuration through PowerShell, Chef, Puppet, etc.
  • Background designing, building, and managing a search and indexing solution such as Elastic Search, Apache SOLR, etc.
  • Previous Relativity system administration experience.


(You should read that “qualifications” list as a perfect candidate, and don’t be dissuaded from applying if you’re not the perfect candidate.)

Interested? Get on over there and apply.

How to Run DBCC CHECKDB for In-Memory OLTP (Hekaton) Tables

tl;dr – run a copy-only full backup of the Hekaton filegroup to nul. If the backup fails, you have corruption, and you need to immediately make plans to either export all your data, or do a restore from your last good full backup, plus all your transaction log backups since.

Yeah, that one’s gonna need a little more explanation, I can tell. Here’s the background.

DBCC CHECKDB skips In-Memory OLTP tables.

Books Online explains that even in SQL Server 2016, DBCC CHECKDB simply skips Hekaton tables outright, and you can’t force it to look at them:

Hey, that’s the same way I handle merge replication – I just look away.

However, that’s not to say these tables can’t get corrupted – they have checksums, and SQL Server checks those whenever the pages are read off disk. That happens in two scenarios:

Scenario 1: when SQL Server is started up, it reads the pages from disk into memory. If it finds corruption at this point, the entire database won’t start up. Even your corruption-free, non-Hekaton tables will just not be available. Your options at this point are to restore the database, or to fail over to a different server, or start altering the database to remove Hekaton. Your application is down.

Scenario 2: when we run a full (not log) backup, SQL Server reads Hekaton’s data from disk and writes to the backup file. If corruption is found, the backup fails. Period. You can still run log backups, but not full backups. When your full backup fails due to corrupt in-memory OLTP pages, that’s your sign to build a Plan B server or database immediately.

Here’s the details from Books Online:

Beep beep beep

The easy fix: run full native backups every day, and freak out when they fail.

Backup failures aren’t normally a big deal, but if you use in-memory OLTP on a standalone server or a failover clustered instance, backup failures are all-out emergencies. You need to immediately find out if the backup just ran out of drive space or lost its network connection, or if you have game-over Hekaton corruption.

Note that you can’t use SAN snapshot backups here. SQL Server won’t read the In-Memory OLTP pages during a snapshot backup, which means they can still be totally corrupt.

This works fine for shops with relatively small databases, say under 500GB.

The harder fix: back up just the In-Memory OLTP data daily.

With SQL Server 2016, the Hekaton limits have been raised to 2TB – and you don’t really want to be backing up a 2TB database the old-school way, every day. You could also have a scenario where a >1TB database has a relatively small amount of Hekaton data – you want to use SAN snapshot backups, but you still have to do conventional backups for the Hekaton data in order to get corruption checks.

Thankfully, Hekaton objects are confined to their own filegroup, so Microsoft PM Jos de Bruijn pointed out to me that we can just run a backup of just that one filegroup, and we can run it to NUL: to avoid writing any data to disk:

Miami Vice Versa

Oops, did I say we could just back up that filegroup? Not exactly – you also have to back up the primary filegroup at the same time.

If you’re doing great (not just good) database design for very large databases, you’ve:

  1. Created a separate filegroup for your tables
  2. Set it as the default
  3. Moved all the clustered & nonclustered indexes over to it
  4. Kept the primary filegroup empty so you can do piecemeal restores

If not, hey, you’re about to. Here’s a minimal sketch of that setup, with hypothetical database and filegroup names:
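    -- User objects get their own filegroup; PRIMARY stays empty
    ALTER DATABASE BigDB ADD FILEGROUP UserData;

    ALTER DATABASE BigDB ADD FILE
        (NAME = 'BigDB_UserData1',
         FILENAME = 'D:\Data\BigDB_UserData1.ndf')
        TO FILEGROUP UserData;

    -- New tables and indexes land here by default from now on
    ALTER DATABASE BigDB MODIFY FILEGROUP UserData DEFAULT;

An empty primary filegroup will then let you do this faster: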

Checking for corruption by backing up to NUL:
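In T-SQL, the command in that screenshot looks roughly like this – a sketch, assuming your memory-optimized filegroup is named InMemory_FG:

    -- Reads every Hekaton page (so checksums get verified) without
    -- actually writing a backup file anywhere
    BACKUP DATABASE StackOverflow
        FILEGROUP = 'PRIMARY',
        FILEGROUP = 'InMemory_FG'
        TO DISK = 'NUL'
        WITH COPY_ONLY;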

Tah-dah! Now we know we don’t have corruption.

This comes in handy if you’ve got a large database and you’re only doing weekly (or heaven forbid, monthly) full backups, and doing differential and log backups the rest of the time. Now you can back up just your in-memory OLTP objects to check them for corruption.

Note that in these examples, I’m doing a copy_only backup – this lets me continue to do differential backups if that sort of thing is your bag.

For bonus points, if your Hekaton data is copied to other servers using Always On Availability Groups, you’ll want to do this trick on every replica that you might fail over to or run full backups on. (Automatic page repair doesn’t appear to be available for In-Memory OLTP objects.)

If you’d like CHECKDB to actually, uh, CHECK the DB, give my Connect item an upvote here.
