See, Always On Availability Groups and good ol’ Failover Clustered Instances both rely on Windows clustering to manage uptime. If you’re going to do an AG or an FCI, you gotta know Windows.
Edwin Sarmiento, a well-respected cluster expert, has brought out an online course, and it’s big. I’m hearing great things about it from folks I know who have gone through it.
For the next five days, it’s $385. This is his new launch special that also includes his HA/DR deep dive course, Personal Lab course, and Azure HA/DR Hybrid Solutions course. That’s a heck of a deal. If you’re doing clustering, you should go take a look.
One of the most popular things in our First Responder Kit is our HA/DR planning worksheet. Here’s page one:
In the past, we had three columns on this worksheet – HA, DR, and Oops Deletes. In this new version, we changed “Oops” Deletes to “Oops” Queries to make it clear that sometimes folks just update parts of a table, or they drop an entire database. We also added a column for corruption (since your protection & recovery options are different than they are for Oops moments).
When people first see this worksheet, they usually scoff and say, “The business is going to tell me we never want to lose data, and we’re never allowed to go down.” No problem – that’s where the second page of the worksheet comes in:
Find the amount of data you’re willing to lose on the left side, and the amount of downtime you’re willing to tolerate across the top. Where the boxes match up, that’s a rough price range of the solution.
In this version, we added an asterisk to a lot of supposedly synchronous solutions, because they aren’t always synchronous – for example, Always On Availability Groups don’t actually guarantee zero data loss. I still keep that sort of thing in the zero-data-loss column because most of the time, it is zero data loss – you just need to understand it’s not a guarantee.
I like printing those two pages front and back on the same piece of paper because it helps management understand that requirements and costs are two sides of the same coin. It’s management’s job to pick the right box (price range), and then it’s IT’s job to build a more detailed estimate for the costs inside the box. The third and final page of the worksheet breaks out the feature differences for each HA/DR option.
If you’re one of the tens of thousands of folks who’ve signed up for email alerts whenever we update our First Responder Kit, then you already got an email this week with these new changes. If not, head on over and pick it up now.
So you’d like to fix a bug or contribute code to the First Responder Kit, but you’re new to GitHub. Let’s get started.
1. Open a new issue on GitHub.
Go to the GitHub issues list for this repo and start a new issue. Here’s the example I’m going to work on today:
In the issue title, put the name of the script, and some basic details so someone can see at a glance what’s going on.
In the body of the issue, put as much detail as you can, including:
- Description of the symptom
- What you wanted the code to do
- What it actually did instead
- Ideally, how you want to fix it
2. Get the GitHub issue number.
After you add the issue, the number will be in the title, and at the end of the URL:
Now it’s time to work on some code!
3. Download the repository.
Go to the First Responder Kit’s page and click the Clone or Download button at the bottom right of this screenshot:
That will open your GitHub app and let you pick where to save the files locally.
4. Make sure you’re in the dev branch, and sync it.
There are multiple active branches – you want to start in the dev branch where pre-release fixes are happening. If you start in the master branch (release), you’re likely to be working with older code, and your changes will get rejected.
Here’s my GitHub app:
At the top left of my screenshot, there’s a dropdown for branches – mine says dev. Make sure yours does too, and then click Sync at the far right:
5. Create a branch for your code.
In the GitHub desktop app, click the new-branch button right next to the dev dropdown:
For the name, use the issue number from GitHub, plus a slash and your name. This indicates that it’s your code for that particular issue – remember that other people may end up working on this same issue.
Make sure you’re branching from dev, not master.
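If you’d rather skip the desktop app, here’s a sketch of the same branch workflow with command-line git. To keep it runnable anywhere, this demo uses a throwaway local repo under /tmp standing in for a real clone of the First Responder Kit, and the issue number (324) and name are just examples:

```shell
# Throwaway demo repo; with the real kit you'd git clone the repo URL instead.
rm -rf /tmp/frk-branch-demo
mkdir -p /tmp/frk-branch-demo && cd /tmp/frk-branch-demo
git init -q .
git config user.email "you@example.com"
git config user.name "You"
git commit -q --allow-empty -m "pretend this is the synced dev branch"

git checkout -q -b dev           # start from dev, not master (on a real clone, pull first)
git checkout -q -b 324/brent     # branch name: issue number, slash, your name
git branch --show-current        # confirms which branch you're on
```

The slash in the branch name is legal in git and makes it obvious which issue the branch belongs to.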
6. Write your code, test it, and check it in.
For code guidelines, check out the CONTRIBUTING.md file in the repo.
Test your code against a case-sensitive instance of SQL Server 2008 and 2016, the oldest and newest supported versions. Other testers will also repeat your work, and automated testing will kick in later, but trust me, you want to do a sanity check first. Don’t get your pants pulled down in public.
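Not sure whether the instance you’re testing on is actually case-sensitive? One quick way to check (assuming you can run queries against it):

```sql
-- Returns the server collation, e.g. SQL_Latin1_General_CP1_CS_AS.
-- A "CS" in the collation name means case-sensitive; "CI" means case-insensitive.
SELECT SERVERPROPERTY('Collation') AS ServerCollation;
```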
Then click the “1 Uncommitted Change” button at the top of the GitHub desktop app (there may be multiple changed files) and review your changes. They’re color-coded to show which lines have been removed from scripts, and which lines have been added.
Only check in changes and files that relate to your issue. If you accidentally changed a file you didn’t mean to, right-click that file name in the GitHub app and click Discard Changes.
At the bottom left is your check-in title and description.
The title should:
- Start with the issue number, like #324 – this is GitHub magic that links the check-in to an issue
- Include the names of the files that were changed – this is helpful to others when they’re scanning a long list of titles of check-ins
- Include a brief description of the issue
Then click Commit, and click Publish at the top right to sync your changes with the rest of the world. This doesn’t merge your code with the toxic sewer that is the dev branch just yet – you’re still on an island by yourself, but at least other people can get to your code.
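Here’s a sketch of that commit-and-publish step with command-line git. The file name, issue number, and commit message are examples, and a throwaway local bare repo stands in for GitHub so the demo runs anywhere:

```shell
# A local bare repo plays the role of GitHub ("origin") for this demo.
rm -rf /tmp/frk-origin.git /tmp/frk-work
git init -q --bare /tmp/frk-origin.git
git clone -q /tmp/frk-origin.git /tmp/frk-work 2>/dev/null
cd /tmp/frk-work
git config user.email "you@example.com"
git config user.name "You"

git checkout -q -b 324/brent                  # your issue branch
echo "-- example fix" >> sp_BlitzFirst.sql    # pretend this is your change
git add sp_BlitzFirst.sql                     # stage only the files for your issue
git commit -q -m "#324 sp_BlitzFirst: fix wait stats percentages"
git push -q -u origin 324/brent               # the desktop app's Publish button
```

After the push, your branch exists on the server, but nothing is merged into dev until your pull request is accepted – exactly the “island by yourself” state described above.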
7. Announce that your code is ready for testing.
Go back to your GitHub issue and add a comment (don’t close it) saying you’re ready for testing. Include notes on how people can reproduce the issue, or verify that your code works.
In the Labels dropdown on the right side, add the ReadyToTest label so folks can find your work more easily.
Someone else has to test your code before it gets merged into the main branch, so it’s in your best interest to make it as easy and friendly as possible for someone to know that your code works.
8. Ready to merge? Add a pull request.
After someone else has tested your code, and you’re both happy that it works, open the GitHub app and click Pull Request at the top right:
The check-in title and description flow in automatically; review them, then click Send Pull Request. The maintainers will take a look and make sure things are kosher.
Stuff you don’t have to do
Release notes and changelog – you’ll notice that the top of each script has a changelog, version number, date, and release notes. Those are added when we consolidate a group of pull requests into a single release.
Documentation – if you don’t feel like writing it, that’s totally okay. We’re happy just to get code, and we understand that the kinds of folks who write code don’t always overlap with the kinds of folks who like writing documentation. If you write something like a new check for SQL Server problems, though, just know that we may not merge your code into the dev branch until it’s also been documented.
This August, we’re goin’ back to Texas.
Last year, we flew the team out to Round Rock, Dell’s HQ, and ran all kinds of SQL Server experiments in their hardware labs. We broadcast a whole bunch of webcasts live on the net for you to watch and see what we learned.
This year, we’re going back again, and the lineup includes sessions on:
- How to Measure SQL Server
- The Unbearable Lightness of BEGIN
- Downtime Train
- Performance Overhead of TDE, Query Store, BPE
- And everybody’s favorite from last year: Watch SQL Server Break and Explode
I always wanna be honest with you, dear reader, and let you see how running a lil’ consulting shop goes. It’s been fun sharing a lot of our happy growing moments along the way. This one, though, is a lot less fun to share.
Yesterday, we had to let go of Angie, Doug, and Jessica.
The background: we basically sell emergency room services for SQL Server, and we’ve consistently been backlogged with work. In early 2015, we figured that me doing sales was holding the company back. If we hired a salesperson, we believed we’d be able to acquire more customers faster, and sell more work to past customers. So we hired Jessica, and staffed up on consultants.
This year, we’ve learned that a salesperson can’t really bring in more emergency room work (seems obvious in retrospect, but we were hoping for followup work), so I had to make a tough decision. I had to right-size the company back to where we’re regularly busy, maybe even turning some work away, just to make sure that we’re profitable overall. The training side of the business is still doing really well, and the company overall isn’t in trouble – but it would have been soon.
I’m bummed out, obviously, because it’s a failure on my part. These people are my friends, and I wanted to build a place where they could thrive for the rest of their working careers if they so chose. I’d always heard it’s tough to bridge the chasm between a 3-FTE consulting shop and a 10-FTE one, and now I really understand why.
Wanna keep in touch with them? Here are their emails:
- Angie Rudduck in Portland, Oregon – if you’re looking for an incredibly upbeat, fast-learning junior DBA or SQL Server support person, I’d highly recommend Angie. She did a great job streamlining our backup/recovery/CHECKDB education process.
- Doug Lane in Cedar Rapids, Iowa – want to make your SQL Server faster, fast? Doug has been doing high end performance tuning for the last few years, and he’d be a huge asset to any SQL Server shop that needs a DBA who can build bridges between developers and SQL Server.
- Jessica Connors in Chicago, Illinois – in the database world, salespeople often have a reputation for being slimy, but Jessica is totally different. She listens to your pains and matches you up with the right relief. She was a pleasure to work with.
SQL Server 2012 SP3, 2014 SP2, and 2016 users are going to find a lot of stuff to love in here. The new per-query memory grants fields in the plan cache are exposed in sp_BlitzCache, and sp_BlitzFirst also shows memory grants for live running queries in @ExpertMode = 1. Here are the details:
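Seeing the new live memory grant data is a one-liner – the only thing assumed here beyond the @ExpertMode parameter mentioned above is that you’ve installed the procedure in the current database:

```sql
-- Show live running queries, including the new memory grants columns,
-- on SQL Server 2012 SP3, 2014 SP2, or 2016:
EXEC dbo.sp_BlitzFirst @ExpertMode = 1;
```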
sp_BlitzCache:
- Show cost for stored procedures. #339
- Warn about trace flags added at the query level, and global trace flags. #361
- Add warnings about Remote Queries. #315
- Do not show Forced Plans warning if the real cause is forced parameterization. #343
- Fix divide-by-zero error if Cost Threshold for Parallelism is 0. #358
- Fix warning for unparameterized query. #334
sp_BlitzFirst v25 (The Artist Formerly Known as sp_AskBrent):
- Add new memory grants columns to 2012-2016 live queries output. #362
- Add SQL login to live queries output. #354
- Filter Perfmon counter display to skip counters with zeroes. Still logged to table though. #356
- Warn about 2016 Query Store cleanup bug in Standard, Evaluation, Express. #352
- Updating list of supported SQL Server versions. #344
- Fixing bug in wait stats percentages. #324
sp_BlitzIndex:
- Compression information in @Mode = 2. #18
- Use recently-modified check to improve indexes-not-in-use recommendations. #220
- Alphabetical sort for @GetAllDatabases = 1, @Mode = 2 output. #351
- Remove per-day cost filter for missing indexes in @Mode = 4. #338
- Missing index benefit is now labeled per-day to make it more obvious. #330
To get involved with the development & testing, head over to the Github repository, or download the full First Responder Kit with our checklists & guides.
Our class inventory has been growing lately – especially with Doug’s awesome new video on statistics – and it’s time to make things easier:
The DBA Bundle – $399 – includes our classes on HA/DR, VM/SAN/HW, and job interviews. (Because hey, you might get this stuff wrong and need a do-over.) A $627 value.
The Performance Bundle – $499 – includes execution plans, indexes, statistics, T-SQL, and more with a list price of $984.
The Everything Bundle – $699 – our legendary package has learned a new trick: now, you also get early access to our currently-in-production training classes. Right now, that’s our upcoming Performance Tuning When You Can’t Fix the Queries class, and it’s only available in the Everything Bundle. Total value – over $1,700.
And this month, to celebrate the new lineup, the Everything Bundle is just $599. Enjoy!
Update: if you have questions about your training videos, please email us – click the Contact button at the top of this page. We don’t really do support in blog comments. Thanks!
Using the StackOverflow database, let’s check out Krock’s query. He’s a competitive fella, and he’s looking to find users who signed up for StackOverflow after he did, but who have a higher reputation than he does.
I’m going to simplify the query a little here:
SELECT me.Id, u.Id
FROM dbo.Users me
JOIN dbo.Users u
  ON u.CreationDate > me.CreationDate
 AND u.Reputation > me.Reputation
WHERE me.Id = 557499;
The Users table has a clustered index on the Id field, so the initial seek on “me” (Krock’s row, Id = 557499) is super-fast. However, to find all the users with a higher creation date and reputation – man, that sucks scanning the clustered index for that, and it brings back a lot of rows. Here’s the execution plan:
Hover your mouse over the clustered index scan, and you get:
- Estimated number of rows = 5,277,830
- Actual number of rows = 5,277,831
Nice job, SQL Server – close enough for government work. It also suggests a missing index – rightfully so – so let’s add it, because we want this query to go faster.
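On my copy, the suggested index boils down to something like this (the index name is mine – SQL Server’s auto-generated suggestion is much uglier):

```sql
-- Covers the join predicates on creation date and reputation:
CREATE NONCLUSTERED INDEX IX_CreationDate_Reputation
    ON dbo.Users (CreationDate, Reputation);
```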
After adding the index, here’s the execution plan:
The query runs faster, make no mistake – but check out the estimates:
- Estimated number of rows = 1
- Actual number of rows = 165,367
Those estimates are built by SQL Server’s cardinality estimator (CE), and there have been major changes to it over the last couple of versions. You can control which CE you’re using by changing the database’s compatibility level. This particular StackOverflow database is running in 2016 compat mode – so what happens if we switch it back to 2012 compat mode?
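The switch itself is a one-liner – compatibility level 110 is SQL Server 2012 (with the legacy cardinality estimator), and 130 is 2016. The database name here assumes the StackOverflow demo database:

```sql
ALTER DATABASE StackOverflow SET COMPATIBILITY_LEVEL = 110;  -- 2012, legacy CE
-- and to flip it back:
ALTER DATABASE StackOverflow SET COMPATIBILITY_LEVEL = 130;  -- 2016, new CE
```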
It’s a differently shaped plan – 2012 decided to go parallel with this query. Its estimated subtree cost is 7.37 – higher than my SQL Server’s Cost Threshold for Parallelism.
The motivating factor can be discovered by hovering over that index seek:
SQL Server 2012’s cardinality estimator guessed that 475,005 users would have a newer creation date and higher reputation than Krock.
This query has so many fun lessons to share.
- Your database options matter.
- What database you run a query in matters.
- Neither 2012’s nor 2016’s cardinality estimator is “right” in this case – they’re just differently wrong.
- Indexes create statistics on their columns, but that doesn’t mean SQL Server has perfect information about what’s inside those columns.
- Besides, SQL Server can’t know Krock’s creation date or reputation until the plan is built and the query’s already running.
I find statistics to be one of the coolest topics inside the engine, and that’s why I love Doug’s newest course, Statistics: SQL Server’s Guessing Game. I bet you’re gonna love it too – go check it out.
You have to make queries go faster, and you suspect that outdated or inaccurate statistics are hurting your execution plans.
Doug Lane is here to help with his new $29 video course, Statistics: SQL Server’s Guessing Game.
It’s phenomenally detailed – every single person on our team learned stuff from this class (me for sure!), and I know you will too.
Here are the modules he covers:
- A Tale of Two Plans (7m) – We look at two execution plans for the same query, talk about why they’re different, and figure out which one makes the most sense.
- Getting to Know Statistics (15m) – We pop open SSMS to look at what statistics drove an execution plan’s estimates. First, we use a query to find which statistics focus on which columns, and then bust out DBCC SHOW_STATISTICS to understand statistics contents.
- Combining Multiple Single-Column Statistics (8m) – In the last module, we looked at statistics independently. Now, what happens when our query references multiple columns and we have to combine multiple statistics to guess how many rows will come back? You’ll also see how both the old (pre-2014) and new cardinality estimators shape up.
- What is the Cardinality Estimator? (5m) – The CE’s job is to use statistics to determine how many rows will come back for any given operation in a query plan. SQL Server’s Cardinality Estimator changed a lot in 2014, and you need to understand which CE you’re using before you analyze your statistics.
- Multi-Column Statistics and the Cardinality Estimators (14m) – In the real world, your indexes often cover multiple columns. We’ll see how the density vector has multiple rows now, and see how the pre-2014 and 2014+ cardinality estimator have huge differences.
- Filtered Statistics: ENHANCE! (15m) – Filtered statistics help you get ENHANCED statistical coverage of small parts of your table.
- The Ascending Key Problem (11m) – In data warehouse scenarios where you constantly load in new data every night, and then go query the data you just loaded, you often get really bad execution plans.
- 10 Rules for Managing Statistics (12m) – Now that you understand how SQL Server uses statistics, we need to cover how to manage your statistics.
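If you want to poke around before watching, here’s a hedged sketch of the kind of queries the “Getting to Know Statistics” module walks through – list which statistics cover which columns on a table, then crack one open. The table is from the StackOverflow demo database, and the statistic name in the DBCC call is just an example:

```sql
-- Which statistics exist on dbo.Users, and which columns does each cover?
SELECT s.name AS statistic_name, c.name AS column_name
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
  ON s.object_id = sc.object_id AND s.stats_id = sc.stats_id
JOIN sys.columns AS c
  ON sc.object_id = c.object_id AND sc.column_id = c.column_id
WHERE s.object_id = OBJECT_ID('dbo.Users')
ORDER BY s.name, sc.stats_column_id;

-- Then inspect one: header, density vector, and histogram.
-- (Statistic name is an example – use one returned by the query above.)
DBCC SHOW_STATISTICS ('dbo.Users', 'PK_Users_Id');
```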
And just like Doug’s T-SQL Level Up course, it’s beautifully shot, and explains some really challenging topics with great visual examples.
Head on over and check out the trailer.
When a man loves a woman very much, he goes to Connect.Microsoft.com and upvotes her feature requests. I was curious to see what requests were resonating the most with folks, so I went through the list of feature requests sorted by votes:
1. Please fix the “String or binary data would be truncated” message to give the column name – by DWalker, 1136 upvotes. This one is a classic, and good news! On June 17th, Microsoft’s David Shiflet left a comment that developers are motivated to make improvements, and they’re discussing options for the new error message.
2. Add native support for JSON, as in FOR JSON or FROM OPENJSON – by Bret_M_Lowery, 1111 upvotes. This one’s actually already added in SQL Server 2016, but the Connect item isn’t closed yet.
3. Add Visual Studio 2013 support to SSDT-BI – by David McClelland, 731 upvotes. I don’t know enough about SSDT-BI to know where this is at.
4. Add full justification in Reporting Services – by inDigeniCa, 653 upvotes. Lots of angry comments on this one, and it doesn’t appear to be making headway.
5. New virtual table “errors” like the deleted and inserted tables – by danholmes, 593 upvotes. Given that the MERGE statement is pretty rough, a virtual “errors” table that we could use in triggers would be really handy.
6. OVER clause enhancement request – DISTINCT clause for aggregate functions – by Itzik Ben-Gan, 514 upvotes. Started back in 2007, there haven’t been a lot of comments on this one, just folks upvoting it.
7. Scalar user-defined function performance is bad – by Andrew Novick, 510 upvotes. I’ve seen so many companies burned badly by this when they encapsulated code in reusable functions without knowing it causes queries to go single-threaded.
8. CREATE OR REPLACE syntax – by BAnVA, 463 upvotes. Unfortunately, the details page link errors out for me on Connect at the moment, but I’m guessing the comments point to SQL 2016’s new drop-if-exists syntax. I’d argue that’s no replacement, but I gotta hand it to Microsoft for at least taking a related action.
9. Expand synonym to other entities (database, linked server) – by Aaron Bertrand, 409 upvotes. Synonyms are slick tools that let you move a table to another database, leave a synonym behind, and let your app keep right on trucking without noticing.
If the highly-voted Connect items are what shape the next version of SQL Server, it’s time to exercise your voting finger. Check out the rankings and make your voice heard. Microsoft is listening – and if you want proof, check out this month’s release of SSMS. Those developers are on fire fixing Connect requests.