How to Track Performance of Queries That Use RECOMPILE Hints

Say we have a stored procedure that has two queries in it – the second query uses a recompile hint, and you might recognize it from my parameter sniffing session:
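The proc itself isn’t reproduced in this excerpt, so here’s a minimal sketch of the shape it takes – the proc and parameter names are my guesses, but dbo.Users and its Reputation column are real Stack Overflow database objects:

```sql
-- Hypothetical recreation of the demo proc: two queries, the second with a recompile hint.
CREATE OR ALTER PROC dbo.usp_UsersByReputation
    @Reputation int
AS
BEGIN
    /* Query 1: gets the same plan every time */
    SELECT COUNT(*) FROM dbo.Users;

    /* Query 2: builds a fresh plan on every execution */
    SELECT *
    FROM dbo.Users
    WHERE Reputation = @Reputation
    OPTION (RECOMPILE);
END
GO
```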

The first query will always get the same plan, but the second query will get different plans and return different numbers of rows depending on which reputation we pass in.

I’m going to do some setup, free the plan cache, and then run the proc:
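The setup isn’t shown here, but it’s roughly this – the proc name is hypothetical, and don’t go freeing the plan cache on a production server:

```sql
DBCC FREEPROCCACHE;   -- demo only: wipes the entire plan cache
GO
EXEC dbo.usp_UsersByReputation @Reputation = 2;
```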

When I run it for @Reputation = 2, the second query returns 9,149 rows.

How a recompiled query shows up in the plan cache

If I check sys.dm_exec_query_stats right now, there are a few columns with interesting results – I’m going to move them around a little so you can see ’em:
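The exact query isn’t shown, but a typical way to pull those columns out of sys.dm_exec_query_stats, along with the statement text, looks like this:

```sql
SELECT qs.execution_count,
       qs.total_rows, qs.last_rows, qs.min_rows, qs.max_rows,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
            (CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.last_execution_time DESC;
```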

Line 1 is the first query in the stored procedure. It’s a COUNT(*), and it only returns 1 row.

Line 2 is the second query, and it returns 9,149 rows for reputation = 2.

So right now, if I’m inclined to do math, I can add up total_rows across both statements to get the total number of rows returned by the proc. It might seem odd to use rows as a measurement right now, but all of sys.dm_exec_query_stats’s metric columns – CPU, reads, writes, duration, etc. – behave the same way as I’m about to show you here, and the row numbers are more repeatable than some of the others, so let’s use rows.

If I execute the stored procedure twice – for two different parameters – and then check metrics, things look different:
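Something like this, with the hypothetical proc name again:

```sql
EXEC dbo.usp_UsersByReputation @Reputation = 2;
GO
EXEC dbo.usp_UsersByReputation @Reputation = 1;
GO
-- Then re-check sys.dm_exec_query_stats to compare execution_count and total_rows.
```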

The results:

The first query’s plan stuck around in memory, so it now shows 2 executions, and 2 total rows returned. Its row metrics accumulate correctly for as long as the stored procedure’s plan stays in cache.

However, the second query – the one with the recompile hint – has a brand new plan in the cache, and brand new metrics to go with it. You’re not just recompiling the execution plan: you’re also throwing away the accumulated query metrics on every execution. (That’s fine, and that part I was also kinda aware of.)

But the part that I keep forgetting is that when I’m looking at the stored procedure’s totals in sp_BlitzCache, the total, min, and max values are useless:

In sp_BlitzCache, we add up the totals for each statement in a proc, and we present those as the total numbers for the proc. The problem is that they’re just not true whenever there’s a recompile hint: the totals are underreported, and the avg/min/max just reflect the last execution of any query with a recompile hint.

The plan cache just isn’t a good fit for tracking queries with RECOMPILE hints – but Query Store is.

How a recompiled query shows up in Query Store

I’m going to turn on Query Store in the Stack Overflow database, and then run the two queries again:
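Turning on Query Store is a one-liner per database (you’ll likely want to tune retention and capture settings beyond the defaults):

```sql
ALTER DATABASE StackOverflow SET QUERY_STORE = ON;
ALTER DATABASE StackOverflow SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
```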

And then query Query Store:
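The query isn’t shown in this excerpt, but a basic rollup of executions and rows per statement from the Query Store catalog views looks something like:

```sql
SELECT qt.query_sql_text,
       SUM(rs.count_executions) AS executions,
       SUM(rs.count_executions * rs.avg_rowcount) AS total_rows
FROM sys.query_store_query_text qt
INNER JOIN sys.query_store_query q ON qt.query_text_id = q.query_text_id
INNER JOIN sys.query_store_plan p ON q.query_id = p.query_id
INNER JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
GROUP BY qt.query_sql_text
ORDER BY executions DESC;
```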

I get a much better picture of the number of times that the recompiled queries have run, AND the number of rows they’ve returned each time. (And of course you get other much more useful query tuning metrics too, like CPU, reads, duration, etc.)

When you decide to use RECOMPILE hints,
you probably wanna enable Query Store.

Query Store first shipped in SQL Server 2016, and about half of all production servers are 2016+. Today, in 2020, when your query tuning efforts require you to put in RECOMPILE hints, you should probably stop for a moment to consider Query Store too. It’ll make your performance troubleshooting easier down the line.

Query Store does require a little more planning than just putting in a RECOMPILE hint, though. Here are the things you wanna think through:

Are these things more work? Yep, absolutely, and the combination of RECOMPILE hints and enabling Query Store logging is going to have a performance impact on your server. You don’t wanna flip these switches unless you’re convinced it’s the best way to solve a parameter sniffing problem. There are other solutions, too – as a quick refresher, if you’re not familiar with any of the other ways, check out my video from SQLDay Poland:

How COVID-19 Affects Conferences

SQLBits just postponed their event, and I’m stepping out of SQL Saturday Iceland as well.

I’ve had the unusual position of being a conference attendee, speaker, and sponsor, all at different times of my life, so I wanted to do a quick brain dump explaining some of the gotchas involved behind the scenes.

Canceling a conference can financially ruin the organizers. The organizers spend a ton of money leading up to the conference – doing marketing, paying staff salaries, paying deposits (event space, food, hotels, printers.) Some of this stuff may be refundable, but … a lot of it isn’t, like staff salaries and supplies that are already purchased. Organizers can buy insurance, but to save money, they often don’t buy insurance to cover this kind of thing, especially diseases. As a result, when they have to cancel an event, they can be financially destroyed instantly. The SXSW festival just canceled their event, laid off 1/3 of their staff, and is considering bankruptcy.

Who cancels the conference can make a big difference. If the government cancels all events, then it can be easier to get money back for hotels, flights, event spaces, and make insurance claims. As a result, organizers can be tempted to play a game of chicken with the government, trying to see who cancels first.

People are getting infected at conferences. For example, two people at RSA’s conference got infected, and it just kinda makes sense: people are flying in from all over the world, spending time in close proximity with strangers in airplanes, eating food from buffets, and of course, shaking hands with each other because it’s just hard to break that habit. (I’m still laughing at the Dutch prime minister who, after announcing a no-shaking-hands rule, promptly shook hands to close the announcement.)

Speakers and volunteers have to deal with more people. People come up afterwards and ask questions in close proximity, they want to shake hands, take selfies, hug, you name it. We’re at higher risk for infection, plus we’re especially dangerous if we’re the infected ones, and we spread it rapidly to other people.

(Personally, I have asthma, which means that if somebody infects me, I’ve got a much harder time fighting off the infection.)

Attendees are sensitive to the situation, too. They’re often packed in elbow-to-elbow with complete strangers in varying degrees of health, all breathing on each other for hours on end. Once an attendee starts sneezing and coughing, other attendees will start to feel uncomfortable, leading to awkward situations. For example, plane passengers became disruptive when an attendee had an allergy attack, and another plane ran into even stranger issues.

Sooner or later, conference attendees will ask organizers to remove someone of questionable health. For that to work, we all need to be on the same page about what’s accepted behavior at events, and attendees need to be told long ahead of time that they shouldn’t show up if they have symptoms that even look related to COVID-19. Sure, I get it – you don’t think you’re infected – but that doesn’t mean other people are going to be comfortable with you coughing into your elbow every five minutes, and wiping your mouth on your shirt. Conference organizers likely aren’t going to be sanitizing chairs and tables between sessions.

Organizers are already stretched to their limits. Leading up to an event, the organizers and volunteers do a heroic amount of work just dealing with regular conference issues. The Centers for Disease Control and Prevention has a prep document for event organizers, but just looking at the tasks in that list, I can tell that it’d take multiple full-time staffers to check off all those tasks – and events often just don’t have the spare staff available.

Vendor staff don’t want to get infected. Companies make decisions to sponsor or attend a conference, and then they send their sales & marketing teams to the conference as well. Those employees may not have had much of a choice about whether they wanted to attend – they may not have the passionate feelings that you have about attending a conference to learn, because they’re just there to do sales and network. Their families ask tough questions about, “Why exactly is your company sending you to this event? Can’t you sell things from home?”

Everyone’s less likely to attend events right now. Companies are setting no-travel policies to protect their staff, which means the conference has fewer attendees, speakers, and sponsor staff available to attend.

When you add all this up, it’s a dark time for conferences: they have fewer attendees & less revenue, but they have higher expenses to put on the event (because there are more health concerns to tackle, and all this costs money.) I don’t have easy answers – but as a speaker with asthma, I’m keenly interested in how events protect me and the other attendees. We’ve already got one person in the community being quarantined – we don’t need more. (Send your good vibes & prayers to Tim Radney, @TRadney.)

Footnote: this post might have even more typos & errors than usual because of its hurried nature. I’m scribbling it down quickly before Erika wakes up. We’re on vacation this month, driving around Iceland’s Ring Road, posting stuff on TikTok, Instagram, Facebook, and Twitter – follow along wherever you wanna – and I’m trying not to do “work” work – but I wanted to get this out while I was thinking about it.

I know, it’s an odd time to go on a traveling vacation! We’ve been keeping a close eye on the news, washing our hands a lot, and frankly, avoiding other people, hahaha. The good news is that Iceland’s a great country for this kind of thing – plenty of beautiful wide open spaces for sightseeing – and the tourist places are even emptier than their usual winter season.

Things to Consider When SQL Server Asks for an Index

One of the things I love about SQL Server is that during query plan compilation, it takes a moment to consider whether an index would help the query you’re running. Regular blog readers will know that I make a lot of jokes about the quality of these recommendations – they’re often incredibly bad – but even bad suggestions can be useful if you examine ’em more closely.

In SQL ConstantCare®, we analyze SQL Server’s index recommendations and come up with our own suggestions. Here are some of the things we think about as we’re doing the analysis:

  • Did SQL Server ask for the same index yesterday?
  • Would the index have been used at least 100 times per day?
  • Does the table involved already have a clustered index? If not, we should probably think about adding one of those first.
  • Does the table have 5 or fewer indexes? If it has more, it might be time to think about deduping those with my D.E.A.T.H. Method first.
  • Does the table have fewer than 100M rows and under 10GB in size? If so, the indexes are easier to pop into place. At higher sizes, you want to approach the tables with a little more care and planning since even adding an index can require a maintenance window depending on the speed of your hardware.

Once we’re past those initial hurdles, we think about the index recommendation itself:

  • Does it have only one key and one included column? If so, it probably makes sense to promote the include into the key, as long as the first key makes sense too.
  • What are the datatypes on the columns? If they’re big (say, over 200 characters), consider removing ’em, especially if they’re included columns.
  • How many columns are in the index? If it’s more than 5 elements, consider removing the included columns.
  • Is the recommendation a narrower subset of an existing index? If so, it’s probably not a good fit.
  • Is the recommendation a wider version of an existing index? If so, you may want to remove the narrower (old) version after you replace it with the wider new version.
  • After your tweaks based on the guidelines above, does the index exactly match an existing index on the table? If so, set it aside – it’s time to review the high-read queries instead rather than looking at the missing index recommendations.

Whew. It’s a lot of stuff to think about – but with SQL ConstantCare®, you don’t have to. We’re gathering the metadata daily, watching what SQL Server does, gradually rolling out index recommendations to help, and then circling back to the index usage DMVs to make sure that the index actually helped.

Making TRY_CAST and TRY_CONVERT Queries Faster with Indexed Computed Columns

I know this is gonna sound crazy, but let’s say you had a table where people stored all kinds of things in one column: dates, integers, file names, sale prices, you name it. And let’s say your application frequently ran a query looking for dates in that column, like this:
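The query itself isn’t shown in this excerpt, but it was along these lines:

```sql
SELECT *
FROM dbo.Users
WHERE TRY_CAST(DisplayName AS date) IS NOT NULL;
```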

Even if you create an index on DisplayName, SQL Server ignores it because it believes so many rows are going to match, and it doesn’t want to do all the back & forth key lookups between the DisplayName index and the clustered index (to get SELECT *):

Yes, believe it or not, there are actually users in the Stack Overflow database whose names can be converted to dates:

So anyhoo, I’m not allowed to change the query, but I need to make it go faster. I have 3 problems:

  1. The estimates are wrong, so
  2. SQL Server refuses to use the DisplayName index (due to an artificially high number of key lookups)
  3. SQL Server can’t seek specifically to the rows that match (it has to do the cast on every row)

I can fix problems #1 and #2 with a computed column, like I explain in the Artisanal Indexes module of my Mastering Index Tuning class:
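A sketch of that fix – the column name is mine. Because the computed column’s expression matches the query’s predicate exactly, SQL Server can use the column’s statistics without the query being rewritten:

```sql
-- Add a computed column whose expression matches the WHERE clause exactly.
ALTER TABLE dbo.Users
    ADD DisplayNameAsDate AS TRY_CAST(DisplayName AS date);
```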

Presto, the estimates are more accurate, and SQL Server is now doing fewer logical reads (because it’s scanning the DisplayName index rather than the entire clustered index, which includes all the columns.) Now, to solve problem #3, I would usually add a nonclustered index on our newly created column, but…

You might think, “Wait, how can a cast as a date be non-deterministic? Does it have something to do with the fact that it might sometimes return null?”

No, it’s because your session’s date format can cause the same string to return different dates. Here, I’m using the Books Online example for SET DATEFORMAT to set my default date formats to different styles, and then when I do a TRY_CAST on the same value, I get different dates:
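Here’s a minimal repro of that behavior – the same string yields two different dates depending on the session’s date format:

```sql
SET DATEFORMAT mdy;
SELECT TRY_CAST('01/02/03' AS date);  -- 2003-01-02 (month/day/year)

SET DATEFORMAT dmy;
SELECT TRY_CAST('01/02/03' AS date);  -- 2003-02-01 (day/month/year)
```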

Diabolical. So I can’t use an index on a computed column to make this thing crazy fast, and I’ll just have to settle for an index scan.

But TRY_CONVERT *does* work,
as long as you specify a format.

If I try this same trick with TRY_CONVERT and I specify the exact date format I want – obviously ISO 8601, as any XKCD fan will tell you:
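A sketch of the deterministic version – style 126 is the ISO 8601 format, and the column and index names are mine:

```sql
-- TRY_CONVERT with an explicit style is deterministic, so it can be indexed.
ALTER TABLE dbo.Users
    ADD DisplayNameAsDateISO AS TRY_CONVERT(date, DisplayName, 126);

CREATE INDEX IX_DisplayNameAsDateISO
    ON dbo.Users (DisplayNameAsDateISO);
```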

SQL Server is willing to create a nonclustered index on that, AND it’ll automatically do an index seek on it rather than scanning & computing the whole thing. That’s awesome! However, if I try that same trick without a specific format, I’m back to being unable to create the index on it:

Dang. And don’t even get me started on filtered indexes with that stuff – that’s a whole ‘nother ball of wax. For that kind of thing, hit the Artisanal Indexes module.

How to Find Cool Questions and Answers on DBA.StackExchange.com

DBA.StackExchange.com is a Stack Overflow site that’s specifically targeted at database administrators. There tend to be some pretty interesting questions in there – not just for SQL Server, but all kinds of databases – but at the same time, there’s also a lot of noise.

To help find fun questions, I use The Power of SQL™. The Stack Exchange Data Explorer is a web-based front end, kinda like SSMS, that lets you run queries against a recently-restored copy of the Stack Overflow databases. It’s different from the downloadable Stack Overflow database because it’s updated much closer to real time, and you don’t have to hassle with downloading it each time it’s updated.

Here are some of my favorite queries:

Recent Unanswered Questions Sorted By Views – I find views to be a good way to gauge questions: if other people find it interesting enough to click on, then I’m probably gonna feel that way too. There are parameters to let you choose how far back you want to look (say 9999 for tons of history if you wanna unearth an artifact), and a parameter to ignore specific users (like my own questions, or people whose questions I don’t really find that interesting.) Columns include:

  • Score – how popular the question is
  • AnswerCount – how many answers have already been posted (but not accepted as correct)
  • TopAnswerScore – because sometimes the community really likes an answer, but the asker hasn’t bothered to accept it yet
  • BodyLength – because I’m often interested in questions where the asker put in a lot of effort. Similarly, I’m interested in long answers, which brings me to…

Recent Long Answers – A long answer isn’t necessarily good, so I also include the score in the output column. However, I’m usually curious about what drove someone to type tens of thousands of characters into the answer box. I usually don’t even care what database it’s about – when someone writes that much, I want to at least glance at it to see what’s going on. Here are a few recent examples:

Most Interesting Recent Comments – because sometimes the real fun is in the peanut gallery, like when Andy Mallon says he once saw a table named dbo.tblTable.

You can use all of these queries on any Stack Overflow network site. When you’re looking at the query, just look in the parameter section and start typing in the “Switch sites” box.


SQL ConstantCare® Population Report: Winter 2020

Ever wonder what’s “normal” out there for SQL Server adoption rates, hardware sizes, or numbers of databases? Let’s find out.

Our SQL ConstantCare® service lets users send data in about their SQL Servers and get health & performance advice via email. Users can opt in to share their anonymized data to get better recommendations overall.

This isn’t a full picture of the SQL Server ecosystem. Lemme be really clear: this is by no means a random sample of the full population. It’s just a picture of the kinds of servers where people want an inexpensive second opinion. It doesn’t include giant corporations, hospitals with PII, or tiny businesses who just shove a SQL Server in the corner and then never take a second glance at it (and if my experience is any indication, never back it up.)

Even though it isn’t a full picture, with 3,570 database servers sharing data as of March 1, there’s still a lot of useful information in here for the community. Understanding what servers are out there will help you put context around where your own servers rank, and helps the community better build training material that’s relevant for most users.

This population report is just my attempt to help clear up the fog of war. Microsoft simply doesn’t give us any adoption information (even though SQL Server phones home by default.) If you believed what you saw onstage at conferences and in Microsoft’s changing certification programs, you’d think no one still bothers with self-hosted SQL Server, but the few who do have already migrated to SQL Server 2017 and 2019. This report is my humble attempt to keep Microsoft honest.

Let’s start with a simple question:

What versions are the most popular?

Or if you’re into percentages:

Thoughts on reading those charts:

  • The adoption rates haven’t changed much since the SQL ConstantCare Fall 2019 Population Report. SQL Server 2019 is almost at 1% adoption, with 25 intrepid users running it in production. (About 1/3 are on 2019 RTM, 1/3 CU1, and 1/3 CU2.)
  • SQL Server 2008 still has more adoption than Azure SQL DB and Azure Managed Instances combined.
  • If you purely compared SQL Server 2019 vs Azure, you’d think that Azure is the future because it’s outpacing SQL Server 2019 adoption. I just don’t see that as true because the vast majority of shops I know on SQL Server 2012/2014/2016 aren’t making adoption plans for Azure SQL DB or Managed Instances yet. Instead, they’re either comfortable where they are, or they’re migrating to VMs in the cloud. (I do think the default new database platform will be PaaS several years from now, but not for migrations of existing databases. We’re not there yet.)
  • SQL Server 2016 & 2017 market share has gone up a little, with these two capturing 52% of the market. 2014 is still outpacing 2017 though (21% vs 18%.)
  • About 1 in 10 SQL Servers are no longer under support. (The number is slightly higher since we don’t monitor 2000 or 2005.)

I’m going to continue publishing the numbers in this format (only showing the current population) for the first year of the survey, but on the first year’s anniversary, I’ll switch to graphs that show trending and year-over-year changes.

How about development? Are people maybe testing their apps on newer versions?

What versions are people using in development?

Both 2017 and 2019 got nice bumps here over last quarter – people are starting to develop and test more against 2017 and 2019. However, 2019’s low showing here is a pretty good leading indicator that SQL Server 2019 has a tough road ahead if it’s going to crack a 10% adoption rate in the next couple/few quarters.

I think about this chart a lot when I’m designing training. When is the right time to start building developer training material for SQL Server 2019? It’s a tricky timing act: you need to be skilled-up before you adopt the new version, and I need to have the training material ready to go before then. However, if I focus on building 2019-specific training material right now, it could be 6-12-18 months before it really starts to pay off. Judging by this chart, I’d be better off building 2017-specific training material since there are a lot of people who still haven’t adopted it yet (roughly 80%.)

For the rest of the census report, I’m going to focus on just the production servers.

How much data do servers have?

I think this chart really helps to set the rest of them in perspective:

Data sizes have actually grown a little since the last report. Y’all aren’t doing archiving, ha ha ho ho. There’s been a slight shift up from the 0-25GB tier to the 25-125GB tier. Not huge, though.

A lot of the SQL ConstantCare® population have what I would consider to be a fairly small server: 30% of them are hosting <25 GB data, and 69% are smaller than the Stack Overflow demo database. That’s why I like using that database so much for training: it’s large enough that it represents (and challenges) a good chunk of the audience.

About 15% have 1TB of data or more, but even though that number sounds low, the population is still large. That still gives us about 500 servers’ worth of metadata to help guide folks who are in those larger tiers. There are over a dozen instances in the population with over 10TB – good times there.

Is Enterprise Edition normal for larger databases?

Forgive the awkward visualization – I set myself a 2-hour goal to do this analysis and crank out the blog post, so I’m on a bit of a tight timeline – but I think the numbers are pretty neat:

At 0-125GB of hosted data, Standard Edition is twice as common as Enterprise Edition.

Around 200-750GB, the numbers even out.

Up in the 1TB+ tier, Enterprise Edition is twice as common as Standard Edition. (But the fact that folks are even using Standard Edition at all in this tier might surprise some of you in the audience.)

How much hardware do production servers get?

This really helps to reset folks’ expectations around what’s normal for SQL Servers out there. There are a huge, huge number of production SQL Servers that just don’t need a lot of CPU power. About 70% have 1-8 cores, while only 10% of servers have more than 24 CPU cores.

This seems low at first: about 60% have less memory than my laptop. However, remember that almost 50% of the servers out there have 4 CPU cores (or fewer), so let’s try excluding them:

That makes sense: far fewer of the 5+ core servers are using 0-32GB RAM. Good to see folks investing in the resource that has an amazing impact on performance, especially given the SQL Server licensing costs involved.

How many databases are hosted per server?

In the last survey’s comments, sonalpathak28 asked about this number out of curiosity from an application perspective and a DBA maintenance one: how many databases are folks hosting per server?

And because I know you’ll be curious, over a dozen servers have over 1,000 databases on ’em. In all cases, they’re a lot of small databases (under 2TB total data size per server.)

What questions would you ask the data?

If there’s something you’d like to see analyzed, leave a note in the comments. To make it more likely that I’ll actually run your experiment, include why you want to know it, and even better, your guess as to what the data’s going to say. (That’ll make it a more fun game to play, heh.)

Breaking News: SQL Server 2019 CU2 Breaks Agent.

That’s it, I’m rolling back to SQL Server 6.5

You reported it here in the comments, and over at SQLServerUpdates, and in this question.

The official word is in: yep, SQL Server 2019 Cumulative Update 2 breaks Agent jobs on some servers, causing them not to run.

That official word is only in that linked post, though: there’s still no word on the official CU2 page about the bug, nor does the CU2 bug fix list even mention SQL Server Agent, so we were all pretty surprised that Agent would break when it supposedly wasn’t even changed in this CU.

For now, the official word is to uninstall CU2 if you’re experiencing the issue.

Updated First Responder Kit and Consultant Toolkit for February 2020

We spent Valentine’s Day removing the bugs from your chocolates. You’re welcome.

To get the new version:

Consultant Toolkit Changes

Updated the First Responder Kit with this month’s script updates, plus:

  • Improvement: the Databases tab now has all columns from sys.databases, making it easier to troubleshoot obscure stuff and new features.
  • Improvement: the Deadlocks tab now only has the victims, and we only save each deadlock graph as a file once (not once per query involved in the deadlock.)

sp_Blitz Changes

  • Improvement: removed spaces from documentation file names for easier publishing in an Azure DevOps Server wiki. (#2273, thanks PierreLetter.)
  • Improvement: added new priority 250 data size check. Only shows when @CheckServerInfo = 1. Gives you a quick idea of the size of the server you’re dealing with in terms of number of databases and total file size. (#2265)
  • Improvement: updated unsupported-build checks now that SQL Server 2014 SP2 is out of support. (#2293)

sp_BlitzCache Changes

  • Fix: plans will no longer trigger the nearly-parallel warning if they’re already parallel. This would happen when you had a parallel query whose cost had dropped under the Cost Threshold for Parallelism. (#2234, thanks Erik Darling.)
  • Fix: “selects with writes” warning may fire incorrectly for stored procedures. (#2290, thanks Erik Darling.)

sp_BlitzIndex Changes

  • Improvement: suggested index names no longer start with IX_. (#2292, thanks Greg Dodd.)
  • Improvement: indexes with 0 reads and 0 writes are no longer shown by default in mode 0 and 4. Just lowering the distractions so you can focus on the indexes that are really causing problems. (#2280, thanks Erik Darling.)
  • Improvement: unused index warning now also shows the index’s number of writes. (#2266, thanks Erik Darling.)
  • Fix: some parts of the output were doubling the size of nvarchar columns, like telling you a column was nvarchar(80) when it was really nvarchar(40). (#2285)
  • Fix: now handles identity values even larger than BIGINTs, going all the way to DECIMAL(38,0). (#2268, thanks JBarnard84.)

sp_BlitzLock Changes

  • Improvement: new @VictimsOnly parameter. If you set it to 1, you only get the deadlock victims. Since all of the queries in a deadlock have the same deadlock graph, this can tighten up your result set by a lot if all you’re troubleshooting is the deadlock graph, which is what we’re focused on in the Consultant Toolkit. (#2219, thanks Erik Darling.)
  • Improvement: better support for identifying when heaps are deadlocked. (#2219, thanks Erik Darling.)
  • Fix: object names not always populating correctly. (#2282, thanks Erik Darling.)

sp_DatabaseRestore Changes

  • Improvement: removed dependency on Ola Hallengren’s maintenance scripts. Ola recently released a version with a new mandatory parameter for his CommandExecute stored proc, which broke sp_DatabaseRestore, and we weren’t depending on it for anything complex. (#2269, thanks mKarous for the bug report and Eric Straffen for the code.)

sp_ineachdb Changes

  • Improvement: added @exclude_pattern parameter so you can skip databases matching the pattern. Accepts wildcards just like SQL Server’s LIKE. (Pull #2272, thanks Aaron Bertrand.)

For Support

When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. If you need a free invite, hit the invite link. Be patient – it’s staffed with volunteers who have day jobs.

When you find a bug or want something changed, read the file.

When you have a question about what the scripts found, first make sure you read the “More Details” URL for any warning you find. We put a lot of work into documentation, and we wouldn’t want someone to yell at you to go read the fine manual. After that, when you’ve still got questions about how something works in SQL Server, post a question, and the community (that includes us!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you’re working with.

Interesting Undocumented Tidbits in SQL Server 2019 CU2

SQL Server 2019 Cumulative Update 2 is out, and Microsoft snuck in a few things that they didn’t mention in the release notes.

Contained Availability Groups continue to get new investments. In the lead-up to SQL Server 2019’s release, we got a few presentations at various conferences where Microsoft folks said we’d get the ability to put system databases inside a Contained Availability Group. However, things got a little weird leading up to the release, and right now, the only official references I can find are dedicated to Big Data Clusters only. Microsoft published a KB article about some of the fixes in CU2 for contained AGs, but they’ve already pulled the KB article from public view. Fortunately, Bing and Google still have it cached:

Hmm. I get a little nervous when Cumulative Update documentation pages get updated after their release. Either the CU has a fix for it, or the fix didn’t work, or Microsoft isn’t ready to talk about the feature yet.

Anyhoo, in CU2, there are a few new related CAG goodies:

  • sys.dm_exec_sessions has a new column for contained_availability_group_id
  • 2 new entries in sys.configurations:
    • 1593 – version high part of SQL Server – version high part of SQL Server that model database copied for
    • 1594 – version low part of SQL Server – version low part of SQL Server that model database copied for
  • New message 47147: Creating contained availability group ‘%.*ls’ failed. When creating contained availability group, neither master database nor msdb database can be included in the CREATE AVAILABILITY GROUP statement. They will be automatically included in the availability group. Remove master database name and msdb database name in CREATE AVAILABILITY GROUP statement and retry the operation.
  • New message 47148: Cannot join database ‘%.*ls’ to contained availability group ‘%.*ls’. Before joining other databases to contained availability group, contained availability group master database has to be joined and recovered. Make sure contained availability group master database has been joined and recovered, then retry the operation.

2 new undocumented DMVs: sys.dm_db_data_pool_nodes and sys.dm_db_storage_pool_nodes. Both are empty in my SQL Server 2019 testbeds, and I don’t know whether they refer to Big Data Clusters or something in the cloud.

1 Linux DMV now shows up in Windows: I’m not sure if this is intentional, but sys.dm_pal_net_stats is now shipping with the Windows version.

Some CU2 KB articles have been pulled. For example, there was a KB article about Accelerated Database Recovery silently corrupting data, but that KB article now returns a 404. Fortunately, Bing and Google have caches of it:

Again, I don’t know whether Microsoft pulling the KB article means that they haven’t fixed the bug after all, or that they’re embarrassed about the bug. Either way…this isn’t a great look.

WHERE GETDATE() BETWEEN StartDate AND COALESCE(CancelDate, EndDate) Is Even Harder to Tune.

In my last post, we started with a memberships table, and each membership had start & end dates. I’m going to create the table and populate it with everyone having an active membership – their StartDate is the same as their Stack Overflow account creation date, and their EndDate is around a year or two from now:
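The setup script isn’t reproduced in this excerpt; a minimal sketch of what it does might look like this (the table name, column names, and date math here are my assumptions based on the description, not the original script):

```sql
/* Hypothetical sketch of the demo setup: one membership per Stack Overflow user.
   StartDate = the user's account creation date, EndDate = roughly 1-2 years out,
   and nobody has cancelled early yet. */
CREATE TABLE dbo.UsersMemberships (
    Id INT IDENTITY(1,1) NOT NULL,
    UserId INT NOT NULL,
    StartDate DATETIME NOT NULL,
    EndDate DATETIME NOT NULL,
    CancelledEarlyDate DATETIME NULL,  /* all NULL to start */
    CONSTRAINT PK_UsersMemberships PRIMARY KEY CLUSTERED (Id)
);

INSERT INTO dbo.UsersMemberships (UserId, StartDate, EndDate, CancelledEarlyDate)
SELECT Id,
       CreationDate,
       DATEADD(DAY, 365 + Id % 365, GETDATE()),  /* expires 1-2 years from now */
       NULL
FROM dbo.Users;
```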

If I run either of these queries to check how many active users we have:

The actual plans show that SQL Server does a great job of accurately estimating that all 9 million of the rows in the table will match:

But now let’s introduce that CancelledEarlyDate.

I’m going to modify my reporting query to include a COALESCE, taking the first non-null column between CancelledEarlyDate and EndDate. If the user cancelled their membership early, we’ll use that cancellation date – otherwise, use EndDate. This is similar to the ISNULL function, but COALESCE lets me pass in multiple values rather than just two. (ISNULL performs the same in this case.)
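The modified reporting query isn’t shown in this excerpt; a sketch of what it likely looks like (table and column names assumed from the surrounding description):

```sql
/* Active members: the membership started in the past, and the earlier of
   CancelledEarlyDate/EndDate hasn't passed yet. */
SELECT COUNT(*)
FROM dbo.UsersMemberships
WHERE GETDATE() BETWEEN StartDate
                    AND COALESCE(CancelledEarlyDate, EndDate);
```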

Note that I haven’t updated anyone’s CancelledEarlyDate yet – they’re all still null. SQL Server’s actual execution plans do a beautiful job of estimating that all 9M rows will still match our search. Note that we’re getting index seeks here (without key lookups) because I included the CancelledEarlyDate in my indexes this time around.

And if I update 50% of the rows to have a CancelledEarlyDate of yesterday:
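The update was originally shown as a screenshot; a hedged approximation:

```sql
/* Hypothetical sketch: roughly half the members cancelled yesterday. */
UPDATE dbo.UsersMemberships
SET CancelledEarlyDate = DATEADD(DAY, -1, GETDATE())
WHERE Id % 2 = 0;  /* ~50% of rows */
```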

Now our estimates go off the rails.

Then the actual plans show that SQL Server believes all of the rows will match our filter – even though only half of them do. He estimates 9M rows will come back, but in reality, only 4.5M do:

In the last post, I had to introduce a more real-world data distribution to show how estimation went off the rails as your population started to shift. Here, I don’t – even a very simple population causes our query plans to go wildly wrong.

Now let’s take a breather here for a second because I can almost hear the comments rolling in.

First, you might be thinking that this particular illustration isn’t really all that bad: SQL Server uses the right access method (an index seek rather than a table scan), and it shouldn’t have approached this problem in any other way. However, in the real world, this incorrect row estimation has a huge impact on things like memory grants, how SQL Server approaches other tables in the query, and the order of operations in a query.

Second, you might be typing, “Brent, how might different indexing strategies fix this problem?” That’s why I use the free Stack Overflow database for my demos, and why I give you all of the scripts. You can go test out different approaches for yourself to learn.

Back to the stuff I wanna teach you here. I don’t think it’s fair for us to ask SQL Server to estimate this particular part of the query on the fly:

That’s just too hard. Sure, SQL Server may have statistics on each of those columns individually, but it doesn’t understand how those columns relate to each other on particular rows. He just has a rough idea of each one’s date distribution. In my simple scenario, it might be theoretically doable, but in real-world data distributions with memberships & policies spread across decades, it’d be really tough. You would want whole separate statistics based on the relationships between columns (and no, multi-column stats aren’t anywhere near that powerful.)

We can help by combining the two columns.

While SQL Server isn’t able to do this itself automatically, we can build our own computed column that performs the calculation for us:
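A sketch of that computed column (the name EffectiveEndDate is my invention, not the original script’s):

```sql
/* Persist the COALESCE logic in the table definition so SQL Server
   automatically builds a statistic on the expression. */
ALTER TABLE dbo.UsersMemberships
    ADD EffectiveEndDate AS COALESCE(CancelledEarlyDate, EndDate);
```

For the optimizer to use the new statistic, the query has to match the computed column’s expression exactly (or reference the column by name).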

And suddenly – with no other changes, like no indexes or no query modifications – the actual plan’s estimates are way better:

That’s because when we add a computed column, SQL Server automatically adds a statistic on that column, and it can use that statistic when estimating rows.

It’s still not perfect because the plan has a variety of issues:

  • We’re doing an index scan here, not a seek
  • The scan reads all 9M rows of the index by doing 37,591 logical reads
  • We’re calculating the COALESCE every row, every time we do the query, burning about 1 second of CPU time

In this case, indexing the computed column solves all 3 problems.

I’ll create two indexes – one that leads with StartDate, and one that leads with the computed column – because SQL Server may choose different ones depending on our data distribution:
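A sketch of those two indexes, continuing with the hypothetical EffectiveEndDate column name:

```sql
/* One index leading with StartDate, one leading with the computed column. */
CREATE INDEX IX_StartDate_EffectiveEndDate
    ON dbo.UsersMemberships (StartDate, EffectiveEndDate);

CREATE INDEX IX_EffectiveEndDate_StartDate
    ON dbo.UsersMemberships (EffectiveEndDate, StartDate);
```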

The new actual plans of our reporting queries show that we fixed ’em:

We get an index seek, we only read the rows we need to output, and we don’t have to compute the coalesced dates on every row. Yay!

But only in this case.

All of our reporting queries had better be exactly the same. Remember how I said earlier that I’m using COALESCE because it takes multiple arguments, and that ISNULL performed the same in that query? Now, not so much. If I try ISNULL here, SQL Server ignores the computed column altogether.

And even if they’re all the same, I’m still back to the same problems I had in the last post: as our data distribution becomes more real-world, with a mix of expired, current, and future memberships, I’m going to have estimation problems.

For more about the computed column technique, including when to persist them, when not to, and how to level up with indexed views, check out the Mastering Index Tuning class module on filtered indexes, indexed views, and computed columns.

WHERE GETDATE() BETWEEN StartDate AND EndDate Is Hard to Tune.

Say you’ve got a memberships (or policies) table, and each membership has start & end dates:

If all you need to do is look up the memberships for a specific UserId, and you know the UserId, then it’s a piece of cake. You put a nonclustered index on UserId, and call it a day.

But what if you frequently need to pull all of the memberships that were active on a specific date? That’s where performance tuning gets hard: when you don’t know the UserId, and even worse, you can’t predict the date/time you’re looking up, or if it’s always Right Now.

Let’s say – just to say – all of our memberships are current. They’ve all started in the past, and they all expire in the future. I’m going to synthesize the data by using the Stack Overflow Users table, taking the user’s CreationDate as the start of their membership, and making up an EndDate in the future:
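The population script isn’t included in this excerpt; a minimal sketch under those assumptions (table name and date math are mine):

```sql
/* Hypothetical sketch: every membership is current - started in the past,
   expiring 1-2 years in the future. */
CREATE TABLE dbo.UsersMemberships (
    Id INT IDENTITY(1,1) NOT NULL,
    UserId INT NOT NULL,
    StartDate DATETIME NOT NULL,
    EndDate DATETIME NOT NULL,
    CONSTRAINT PK_UsersMemberships PRIMARY KEY CLUSTERED (Id)
);

INSERT INTO dbo.UsersMemberships (UserId, StartDate, EndDate)
SELECT Id, CreationDate, DATEADD(DAY, 365 + Id % 365, GETDATE())
FROM dbo.Users;
```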

I’m going to index the bejeezus out of both StartDate and EndDate to give SQL Server the best possible chance:
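Something along these lines (index names guessed to match the IX_StartDate_EndDate name mentioned later in the post):

```sql
CREATE INDEX IX_StartDate_EndDate ON dbo.UsersMemberships (StartDate, EndDate);
CREATE INDEX IX_EndDate_StartDate ON dbo.UsersMemberships (EndDate, StartDate);
```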

When you’re querying for active memberships, you’ll probably run one of these two query design patterns:
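The two patterns are roughly:

```sql
/* Pattern 1: BETWEEN */
SELECT COUNT(*)
FROM dbo.UsersMemberships
WHERE GETDATE() BETWEEN StartDate AND EndDate;

/* Pattern 2: explicit range comparisons - logically the same filter */
SELECT COUNT(*)
FROM dbo.UsersMemberships
WHERE StartDate <= GETDATE()
  AND EndDate   >= GETDATE();
```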

In this case, estimation is easy. The actual plans show that SQL Server knows that all of the rows are going to match:

So when you just start building a membership or policy system, the query plans look nice and simple: estimation is bang on. Over time, though, your data starts to skew, and so do your estimates.

That imaginary case is easy,
but in real life, some folks have expired.

To show how bad it can get, I’m going to create another contrived extreme: instead of everyone’s membership being current, I’m going to say that NO one is current. 50% of our members’ plans have expired, and 50% haven’t started yet:
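A hedged approximation of that update (the original was shown as a screenshot):

```sql
/* Hypothetical sketch: nobody is current. Half expired a year ago,
   half don't start until next year. */
UPDATE dbo.UsersMemberships
SET StartDate = DATEADD(YEAR, -2, GETDATE()),
    EndDate   = DATEADD(YEAR, -1, GETDATE())
WHERE Id % 2 = 0;   /* ~50% already expired */

UPDATE dbo.UsersMemberships
SET StartDate = DATEADD(YEAR, 1, GETDATE()),
    EndDate   = DATEADD(YEAR, 2, GETDATE())
WHERE Id % 2 = 1;   /* ~50% haven't started yet */
```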

Now, when I run the query to check our active members again, there are a few tricky problems hidden in a plan that looks deceptively simple. It looks like a pair of simple index seeks, but they don’t perform worth a damn:

The first problem:
these seeks read a lot of data to produce 0 rows.

While this looks like a pair of lightweight index seeks, both of these queries did a heck of a lot of work. Hover your mouse over either index seek, and check out the number of rows read:

They read 4.5M rows – half of the table – in order to produce a whopping 0 rows. I can almost hear you screaming, “How is that possible?!? It says SEEK right on there, so we’re seeking to the rows we need, right?”

Not exactly: look at the Seek Predicates at the bottom of that tooltip. SQL Server is opening up the EndDate_StartDate index, and it’s turning to the first row where the EndDate >= GETDATE(), and then it starts reading. But from that point forward, it has to examine every row that it finds.

To simulate what it’s doing, let’s visualize the index by crafting a query that accurately simulates what SQL Server will find when it opens the index. (To learn more about this technique, check out the very first module in my Fundamentals of Index Tuning class.)
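The visualization query was originally a screenshot; the idea is to select exactly the index’s key columns, in index key order, starting where the seek starts reading:

```sql
/* Roughly what SQL Server sees when it opens the EndDate, StartDate index
   and seeks to EndDate >= GETDATE(): rows sorted by EndDate, then StartDate. */
SELECT EndDate, StartDate, Id, UserId
FROM dbo.UsersMemberships
WHERE EndDate >= GETDATE()
ORDER BY EndDate, StartDate;
```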

We can verify that it’s an accurate visualization of the index’s contents by looking at the contents of the execution plan, and making sure that there are no sorts and no residual predicates on the index seek:

When SQL Server opens up that index on EndDate, StartDate and starts reading, it’s jumping to EndDate = GETDATE(), and it reads forward. It’s examining all future-expiring memberships, and checking to see if their StartDates are in the past:

And it has to read through all future-expiring memberships just hoping to find one that started in the past. Until it reads all of them, every single future-expiring membership, it can’t guarantee that one of them hasn’t started in the past.

In terms of the good ol’ white pages of the phone book, it’s as if I told you to find everyone whose LastName >= P, and FirstName <= AAA. Sure, you could seek to the P’s and start reading, but you’re going to have to read through a lot of pages checking to see who has a FirstName like AAAAaandy. In the phone book example, you would jump to the P’s, find that no one matches, and then skip ahead to the Q’s. You can do that because there are a relatively limited number of letters in the alphabet, but SQL Server can’t do that because there aren’t a limited number of EndDates. There might be millions of distinct EndDates.

That sucks. But it gets worse.

The next problem:
SQL Server’s estimates were way off.

SQL Server estimated that 27.5% of the table would match. He thought this query would return 3.3M of the table’s 8.9M rows:

That’s a problem for me if my index doesn’t fully cover the query. In my contrived example, I completely covered my contrived membership query:

But eagle-eyed readers might have caught that my Memberships table has 4 columns: Id (the primary clustered key), StartDate, EndDate, and also, UserId, which links back to the Users table to show whose membership it is. If I add that UserId column in as well:

Then SQL Server ignores my nonclustered index altogether! He thinks that 3.3M key lookups would cost too much money, so he ends up scanning the table, and doing even MORE logical reads than he was doing before!

Thankfully, real-life data distributions aren’t quite THAT bad.

In real life, your own business has its own skew of expired, current, and future memberships. Based on your own skew, you might need to cluster a table like this on either EndDate, StartDate, or StartDate, EndDate. I can’t write a demo to show your exact data skew (well, I mean, I can, I’m a consultant for hire, and that’s how this whole thing started.) For the sake of this blog post, I wrote a script to populate the table with a bunch of data so that about 25% of my memberships haven’t started yet, 25% have started but already expired, 25% are current, and then 25% are kinda wacko.

For that demo script, you’ll note a few unusual table design choices:

  • The table is a heap
  • It has a nonclustered primary key on Id
  • It has a nonclustered index on EndDate, StartDate that includes Id, UserId
  • It has another on StartDate, EndDate that includes Id, UserId

With that data mix populated, let’s run our reporting query, check out our estimates, and we’ll even hint it with both of the nonclustered indexes so we can see which one does less reads:

The good news is that both query plans get decent row estimations:

In addition, both pick the same IX_StartDate_EndDate index. The bad news, though, is that even though the estimate is decent, the residual scan predicate means that we seek to one point, but then we read 2x more rows than the query actually produces:

If I use index hints for testing which index actually reads less data:
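A sketch of that test, using the index names mentioned above:

```sql
SET STATISTICS IO ON;  /* compare logical reads between the two hints */

SELECT COUNT(*)
FROM dbo.UsersMemberships WITH (INDEX = IX_StartDate_EndDate)
WHERE GETDATE() BETWEEN StartDate AND EndDate;

SELECT COUNT(*)
FROM dbo.UsersMemberships WITH (INDEX = IX_EndDate_StartDate)
WHERE GETDATE() BETWEEN StartDate AND EndDate;
```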

In my case, with my demo’s data distribution, the StartDate index reads fewer 8KB pages to produce the same results:

So if I needed to tune my demo data’s scenario, and if we did a lot of querying for active memberships, and I couldn’t predict the date/time we were going to query for nor could I know which UserIds we were looking for, I might:

  • Put a clustered index on StartDate, EndDate
  • Put a nonclustered index on UserId (and probably no other fields, although I’d be open to changing that if individual users could have a lot of memberships)

I know this is one of those blog posts that’s going to provoke a lot of “why don’t you do ___” comments, but I’ll be honest: I’m not going to answer them here because you can answer ’em yourself. Download a Stack Overflow database, run my script to populate the UsersMemberships table, and then run your own experiments to test your ideas. When you find something that seems like it works, you’re welcome to share it in the comments. You can also use a Github gist to share your source code.

One approach I expect folks to suggest is to build a child table like this with a row for each day that a subscription is active:

  • UserId
  • MembershipId
  • ActiveDate

So if your subscription was active for 300 days, you’d have 300 rows in that table. Unfortunately, solutions like that can be tough to scale, too: using our UsersMemberships table, the number of rows in it would be pretty ugly if we had to maintain all history. I’d be at 7 billion rows even with this simplified example where each user only has one membership:

Razor-sharp readers like yourself may have noticed the column that I didn’t mention in this post. That particular column makes this exercise deliciously difficult – more on that in the next post.

Recommended SQL Server Books, 2020 Edition

If you’re the kind of person who likes to read books, here are some that I think would expand your horizons. These links are Amazon affiliate links, so I make a few cents when you buy through ’em. It’s your way of tipping me for the book recommendations, so thank you!

T-SQL Books:

  • Level 1: T-SQL Fundamentals by Itzik Ben-Gan: with a name like “Fundamentals”, you’re gonna think this book will be easy. It’s not. It’s 361 pages of the stuff I should have learned before I wrote my first query, but didn’t.
  • Level 2: T-SQL Querying (Developer Reference) by Itzik Ben-Gan: after you’ve finished Fundamentals, here’s 803 pages of pivoting, windowing, dates/times, BI, graphs, in-memory OLTP, and more.
  • Great Post, Erik by Erik Darling: if you’ve enjoyed Erik Darling’s blog posts here, then you’ll love a book full of ’em, lavishly illustrated in full color. (The first time I held this book, I was amazed at what a difference full color makes when viewing stuff like queries and plans.)

Performance Tuning Books:

These first two are both overviews of performance tuning, covering wide swaths of the tools built into SQL Server to assess its existing performance and make it go faster. I do have to caution you, though: they’re not step-by-step cookbooks that give you recipes to take an existing server, understand what’s wrong, and make it go right. They’re more of a collection of techniques, and it’s up to you to put things together in the right order. They’re still both good resources though:

  • Learn T-SQL Querying by Pedro Lopes and Pam Lahoud: probably the worst-named book out there, and the sales are probably being hurt even more by the publisher not allowing you to read the table of contents on Amazon. Trust me though, the authors cover a lot of ground in 450 pages of performance tuning tips.
  • SQL Server 2017 Query Performance Tuning by Grant Fritchey: don’t worry about the version number in the name. Some publishers just insist on putting version numbers in titles to make books seem new, which on the flip side, also makes them seem dated a few years later. This 823-page monster is mostly evergreen.

Then the below are a little more specialized, talking about specific features that you may wanna learn:

  • Query Store for SQL Server 2019 by Tracy Boggiano and Grant Fritchey: again, ignore the version number in the title and just focus on the topic. Query Store is like a flight data recorder for SQL Server 2016 & newer, and now that most of you are running 2016 in at least one of your servers, it’s probably time to dig in to learn how this works. This is one of those features that sure as hell isn’t self-managing, and I know a lot of clients who’ve turned it on without learning it – only to have their workloads stop workloading. Learn it first.
  • SQL Server 2019 Revealed by Bob Ward: go read the documentation page of What’s New in SQL Server 2019. If you want an entire book of that, that’s what this book is. In February 2020, I’m gonna be honest: most of you aren’t deploying SQL Server 2019, let alone using the new features like Always Encrypted with Enclaves, Kubernetes, and Polybase. However, if like me, you’re just generally curious about the investments Microsoft is making, this is an enjoyable read.

I don’t have current book recommendations for production database administrators: folks who specialize in installation, configuration, high availability, disaster recovery, and automation. If you’ve found current (and currently available) books on that topic that you enjoy, drop ’em in the comments for others to find.

How to Pass a List of Values Into a Stored Procedure

Say we have a stored procedure that queries the Stack Overflow Users table to find people in a given location. Here’s what the table looks like:

And here’s what my starting stored procedure looks like:
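The proc was shown as a screenshot; a minimal sketch (the proc name and the TOP/ORDER BY details are my assumptions, though the Sort memory grant discussed below implies an ORDER BY):

```sql
/* Hypothetical sketch of the starting proc: find users in one location. */
CREATE OR ALTER PROC dbo.usp_SearchUsersByLocation
    @SearchLocation NVARCHAR(100)
AS
SELECT TOP 200 DisplayName, Location, Reputation
FROM dbo.Users
WHERE Location = @SearchLocation
ORDER BY Reputation DESC;
GO

EXEC dbo.usp_SearchUsersByLocation @SearchLocation = N'San Diego, CA';
```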

It works:

And the actual execution plan isn’t bad, although it manages to overestimate the number of people who live in San Diego (est 1264 rows for the top right index seek), so it ends up granting too much memory for the query, and gets a yellow bang warning on the SELECT because of that:

No big deal though – it’s fast. But now my users say they wanna find soulmates in MULTIPLE cities, not just one. They want to be able to pass in a pipe-delimited list of locations and search through a few places:

Method #1, no good:
Joining directly to STRING_SPLIT.

SQL Server 2016 added a very nifty and handy STRING_SPLIT function that lets us parse a string with our delimiter of choice:
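A sketch of method #1 (the proc name and query shape assumed to match the starting proc):

```sql
CREATE OR ALTER PROC dbo.usp_SearchUsersByLocation
    @SearchLocation NVARCHAR(MAX)
AS
SELECT TOP 200 u.DisplayName, u.Location, u.Reputation
FROM dbo.Users u
INNER JOIN STRING_SPLIT(@SearchLocation, N'|') s
        ON u.Location = s.value          /* join directly to the splitter */
ORDER BY u.Reputation DESC;
GO

EXEC dbo.usp_SearchUsersByLocation
     @SearchLocation = N'San Diego, CA|San Francisco, CA|Seattle, WA';
```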

The good news is that it compiles and produces accurate results. (Hey, some days, I’ll take any good news that I can get.) The bad news is that the execution plan doesn’t look great:

  1. SQL Server starts by estimating that the STRING_SPLIT will produce 50 values. That’s a hard-coded number that has nothing to do with the actual contents of our @SearchLocation string.
  2. SQL Server estimates that it’s going to find 388 people in our locations – and that also has nothing to do with the contents of our string. Eagle-eyed readers will note that this 388 estimate is lower than the original estimate for San Diego alone!
  3. SQL Server does thousands of key lookups, and this plan gets even worse fast when you use bigger locations. It quickly reads more pages than there are in the table itself.
  4. Because of the original low 388 row estimate, SQL Server didn’t budget enough memory for the sort, which ends up spilling to disk.

The root problem here: STRING_SPLIT doesn’t produce accurate estimates for the number of rows nor their contents.

Method #2, better:
dump STRING_SPLIT into a temp table first.

This requires a little bit more work at the start of our proc:
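A sketch of method #2, same assumed proc shape:

```sql
CREATE OR ALTER PROC dbo.usp_SearchUsersByLocation
    @SearchLocation NVARCHAR(MAX)
AS
/* Split the string once, up front, into a temp table -
   SQL Server can build statistics on it. */
CREATE TABLE #Locations (Location NVARCHAR(100));

INSERT INTO #Locations (Location)
SELECT value FROM STRING_SPLIT(@SearchLocation, N'|');

SELECT TOP 200 u.DisplayName, u.Location, u.Reputation
FROM dbo.Users u
INNER JOIN #Locations l ON u.Location = l.Location
ORDER BY u.Reputation DESC;
```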

By dumping the string’s contents into a temp table, SQL Server can then generate statistics on that temp table, and use those statistics to help it better estimate the number of rows it’ll find in the various cities. Here’s the actual plan:

Now, the bad 50-row estimate for STRING_SPLIT has a small blast radius: the only query impacted is the insert into the temp table, which isn’t a big deal. Then when it’s time to estimate rows for the index seek, they’re much more accurate – within about 5X – and now the Sort operator has enough memory budgeted to avoid spilling to disk.

Method #3, terrible:
dump STRING_SPLIT into a table variable.

I know somebody’s gonna ask, so I have to do it:
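The same shape with a table variable instead:

```sql
CREATE OR ALTER PROC dbo.usp_SearchUsersByLocation
    @SearchLocation NVARCHAR(MAX)
AS
/* Table variables don't get statistics - just a fixed row-count guess. */
DECLARE @Locations TABLE (Location NVARCHAR(100));

INSERT INTO @Locations (Location)
SELECT value FROM STRING_SPLIT(@SearchLocation, N'|');

SELECT TOP 200 u.DisplayName, u.Location, u.Reputation
FROM dbo.Users u
INNER JOIN @Locations l ON u.Location = l.Location
ORDER BY u.Reputation DESC;
```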

In SQL Server 2017 & prior, the query plan on this manages to be even worse than calling STRING_SPLIT directly:

SQL Server now only estimates that 1 row is coming out of the table variable, so now its estimate on the number of users it’ll find is down to just 55, and the sort spills even more pages to disk.

Thankfully, as you may have heard, SQL Server 2019 fixes this problem. I’ll switch to 2019 compatibility level, and here’s the actual plan:

Okay, well, as it turns out, no bueno. We just get a 4 row estimate instead of 1 for the table variable, and we still only get 110 estimated users in those cities. Table variables: still a hot mess in SQL Server 2019.

The winner: pass in a string, but when your proc starts, split the contents into a temp table first, and use that through the rest of the query.

What Should I Submit to the PASS Summit?

Our industry’s biggest conference, the PASS Summit 2020, just opened their pre-conference workshop call for speakers.

I’m going to submit a couple of sessions, but I want to hear from you first: if you were going to try to convince your boss to buy you a pre-conference workshop at Summit, what would be the topics that your boss would be the most interested in having you learn?

  1. What task do you need to learn more about? (Not concept – task, as in, what’s the specific thing you’re going to do.)
  2. What level of knowledge are you walking in with? (Have you done the task before, or is this your first time?)
  3. What’s your favorite mix of lecture vs lab? If you had the choice between learning more with just lectures, or learning less but getting some hands-on lab time, which would you prefer? (There are only so many hours in a workshop, and every hour you’re working on a lab is one less hour of lecture, so I want to build the right mix for you)

Answer in the comments, and let’s see if I can build a session that works for you.

#TSQL2sday: My Life Hack is an Hourglass. Yes, an Hourglass.

For this month’s T-SQL Tuesday, Jess Pomfret asked for our favorite life hacks.

My life hack is a half-hour hourglass. (Halfhourglass?) Seriously, an actual physical object:

I keep it on my desk. Whenever I start working on a task, I flip the glass. After 30 minutes, I need to stop and assess the progress I’ve made so far.

When I’m tuning a query, for example, I’ll stop there and ask myself, “Are the results here good enough?” If so, then I need to switch gears and start writing documentation to explain the changes I made to the query, or start testing the output.

I’ve tried digital timers, but they add a feeling of nervous pressure, especially when the alarm goes off. An hourglass feels calming to me, and it’s so much more flexible and forgiving. I can just glance down at the hourglass and get a rough idea of what kind of progress I’m making. I only glance down at natural breaking points, so that way I can stay focused for a little bit longer to finish a train of thought.

I use it a lot during my Mastering Query Tuning class, and students often end up buying their own and posting pictures in the Slack chat.

They don’t make my particular model anymore, but if you’d like to buy your own, these are my favorite new ones because you can pick your sand color and your duration.

There’s also Marc Newson’s Hourglass. Quite fetching. Very reasonably priced at just $12,000 – which sounds expensive, until you realize that it’s about the same as two cores of SQL Server Enterprise Edition. And then it sounds REALLY expensive.

Yeah, um, no.

How to Pick a 2020 Microsoft Database Conference

As a consultant and speaker, I’m lucky enough to go to a crazy number of SQL Server conferences. Here’s what I think of each mainstream conference out there – who they’re for, and who they’re not for.

SQL Saturdays

  • 2020 dates: all over the world, size varies by city, typically 200-300 attendees
  • Length: Friday pre-conference paid training day, Saturday free training
  • Pre-cons available: usually only 1-2
  • Videos available online: no
  • Tracks: Microsoft data platform only

My thoughts for attendees: there simply isn’t a better bargain than this. The one-day pre-conference workshops are a hell of a deal too, typically around $200-$300. I know some folks who prefer to travel to a few different SQL Saturdays per year (even internationally) rather than go to one of the bigger annual conferences.

For speakers: because this is a free community-run event managed by local volunteers, there are no payments for travel or appearances. Folks do this for the love of it.

PASS Summit

  • 2020 date: Nov 10-13, Houston, TX – huge, thousands of attendees
  • Length: 3 days
  • Pre-Cons available: Yes, many
  • Videos available online: current year or two are only available to paid attendees, but older sessions are available in the PASStv YouTube channel
  • Tracks: Microsoft database only

For Attendees: The Summit is a mid-size (thousands of people) conference dedicated to the Microsoft data platform. A dozen or more sessions run simultaneously at all times, so no matter what niche topic you’re interested in, somebody’s running a presentation about it. The “somebody” is key here – the presenters come from all walks of SQL Server life, some professional presenters and some people presenting in front of a crowd for their very first time. Quality is all over the map here, but the community is forgiving: this feels like a big family event. Twitter is lively with attendee chat during the day, and the fun continues after hours with several big parties to choose from on any given night.

For years, the Summit has focused mostly on Seattle, but this year it’s in Houston. I’m so excited: Erika and I lived in Houston for years, and it’s a great place to visit in the fall. The restaurants are spectacular, and I’ll put together a list of recommendations and dates/times as we get closer. (I thought about doing a tour bus around to a few, but Erika talked me out of that for liability reasons, hahaha.)

For Speakers: The conference doesn’t pay any expenses or speaking fees, but competition’s still tough for slots here just because so many community members are encouraged to submit abstracts.  Along with the lack of payment comes a lack of work, though – presenters aren’t required to spend time in various booths or attend a bunch of pre-conference meetings.


  • 2020 date: March 31-April 4, London
  • Length: 2 pre-con days, 2 deep dive days, 1 free day
  • Pre-Cons available: Yes, several
  • Videos available online: yes, but only for the free days
  • Tracks: Microsoft data platform only

For Attendees: Maybe it’s just because of their accents, but I think the presenters at SQLbits are some of the best ones out there.  I’ve been consistently impressed by the SQLbits sessions I’ve seen, and like the PASS Summit, the sense of community at this event is just outstanding.  Even the format is unique: if you just attend the free day, it’s much like a SQL Saturday, but if your budget allows then you can get much more in-depth training too. The price is really, really affordable given how much training you get.

For Speakers: SQLbits does pay presenters for doing pre-cons, but not for the free days.  It has an interesting mix between a professional-feeling conference and a community one, and I think it strikes a really good balance.  If you can get a pre-conference session approved, then this conference can pay for itself – they offer some of the best revenue sharing of any of the conferences.

Microsoft Ignite and Microsoft Build

  • 2020 date: Build is May 19-21 in Seattle, WA, Ignite is Sept 21-25 in New Orleans
  • Length: 3-4 days
  • Pre-cons available: sometimes, but very limited for SQL Server
  • Videos available online: Yes, free – Microsoft Ignite videos, Microsoft Build videos
  • Tracks: SQL Server, O365, Windows, Development, Cloud, basically anything Microsoft

For Attendees: These very large (>10,000 person) conferences have multiple tracks for almost all Microsoft technologies. If you wear multiple hats like developer AND database pro, sysadmin AND accidental DBA, or DBA and SharePoint admin, then this is the biggest conference for you. Biggest doesn’t necessarily mean best, though: I’ve sat through a few painful vendor sessions and Microsoft marketing sessions. Microsoft and the sponsors clearly run this event, and it shows – they’re pushing their own messages here. The tradeoff: in exchange for putting up with a lot of marketing spam, you also get access to a lot of Microsoft technical staff. The vendor expo is huuuuuge, too, so if you need to compare multiple vendor products quickly, this is the place to go.  If you can’t go, though, you can watch any of the session videos for free.

For Speakers: Microsoft pays for speaker airfare, hotel, and a speaking fee, and there’s some prestige involved with speaking at Microsoft’s own event, so competition is extremely fierce for speaking slots. However, because the conference is run by Microsoft, the competition doesn’t necessarily mean the sessions are the best. Sessions have to conform to Microsoft’s speaking (marketing) standards, and presentations are vetted by Microsoft employees. You won’t see a lot of anti-Microsoft sentiment in the slide decks here. Many attendees even believe that all presenters are Microsoft staff, so I start my sessions by explaining that I’m an independent consultant.

Dev Intersection and SQL Intersection

Vegas. Because, y’know, Vegas.

  • 2020 date: April 7-9, Orlando, FL and December 8-10, Las Vegas, NV
  • Length: 3 days
  • Pre-cons available: yes, multiple dedicated to SQL Server
  • Videos available online: no
  • Tracks: SQL Server, Windows, Development, Cloud, AI

For Attendees: This small conference, whose data track is run by Paul Randal and Kimberly Tripp (SQLskills), feels more like a professional training class than a conference – and that’s a good thing. There are just a few simultaneous sessions per track, but they’re absolutely top notch. Even with only a few sessions, I still usually have a tough time picking which ones I want to attend – they’re really good expert-level speakers who do a lot of presenting for a living.

For Speakers: The conference pays for speaker airfare, hotel, and a speaking fee, so competition’s pretty tough for the few speaking slots. As a result, the sessions tend to be more curated and fairly high quality. This is a super easy conference as a speaker because the organizers work hard to make it as easy as possible for you.

How I Choose Which Conferences to Attend

If you’re an attendee, I’d recommend basing your decision on:

  • Is it more important to you to learn, or to build relationships? (And are you sure?  You can watch a lot of the conference videos for free online right now – so what’s holding you back?)
  • Is it important to you to have conversations with Microsoft employees?
  • Is it important to you to meet peers who have similar problems?
  • Do you want a conference located somewhere that you can do sightseeing before and after?

When I’m an attendee, the most valuable things are conversations and relationships. I want to be able to talk to experts who have solved the same problems I’m facing, hear how they worked past them, and get their contact information so I can ask them questions later. That’s why I prefer conferences with plenty of time for side talks – if everyone’s rushing from session to session, I don’t get as much value as an attendee.

I try to budget some money for lunch and dinner because food at these conferences is frankly horrendous.  The presenters don’t want to eat that stuff either, so when I needed help, I tempted presenters by saying, “Hey, can I take you out to lunch or dinner and pick your brain about something?”  You’d be surprised by how often that works.

Me at the PASS Summit Bloggers’ Table

I also care about the conference’s tracks.  I have to do more than just SQL Server, so sometimes I need to talk to experts from other fields.

This is going to sound crazy, but I don’t really care about the sessions or the keynotes. If I want sessions, I can watch free videos from SQLbits or TechEd anytime I want. I don’t have to wait for the conference to fix problems I’m having right now. I also don’t care about product announcements because Microsoft doesn’t really do them at conferences anymore. If tech news is important to you, get it from blogs and news sites rather than conferences, because the Microsoft conference scene has stale news. (That’s not to say these conferences don’t have good information about new features – you just won’t find it at the keynote sessions, which have turned into a boring rehash of old news and marketing demos.)

If you’re a presenter, here’s what I think about when choosing my conference calendar:

  • Where I can help the most people
  • How I can bring in business in the future
  • How I can minimize expenses (after all, as a consultant, I’m not getting paid when I do this stuff)

Each conference has its own pros and cons, and it sucks to decide, because I wish I could attend conferences all year!  I have to cut back to honor my family commitments, though.

Here are the conferences I’m hitting in 2020.

Which SQL Server Enterprise Edition Should I Download?

When you’re downloading SQL Server, it’s important to choose Enterprise Core, not Enterprise. The plain “Enterprise” one is limited to just 20 CPU cores.

But let’s say you didn’t know that, and you waltzed over to the download page. There are two Enterprises listed, and it’s not really clear what the differences are between the two:

If you click the “Info” link on the Enterprise version, there are no hints to indicate what’s about to happen. But if you download that SQL Server Enterprise one, install it, and then look in the error log after startup, there’s a tiny message hidden in a sea of text:

You see it right there? No? Of course you don’t. ENHANCE!



SQL Server is only using 20 physical CPU cores, no matter how big your machine is. For example, here’s a 128-core VM where this “Enterprise Edition” was installed:


This is especially problematic when the DBAs are seeing tons of SOS_SCHEDULER_YIELD waits, indicating there’s massive CPU pressure, but the sysadmins are saying, “The server’s only 31% busy. Must be a SQL Server problem.” And technically…it is.
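If you’re not sure which situation you’re in, you can compare the installed edition string against how many schedulers SQL Server actually brought online. Here’s a quick sanity check using standard DMVs (the column aliases are just mine):

```sql
/* Which edition is installed? The core-limited build reports plain
   'Enterprise Edition', while the unlimited one reports
   'Enterprise Edition: Core-based Licensing'. */
SELECT SERVERPROPERTY('Edition') AS Edition;

/* How many schedulers is SQL Server actually using? On a big box with
   the limited build, this stops at the licensing cap rather than
   matching the hardware. */
SELECT COUNT(*) AS OnlineSchedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';
```

Run that on the 128-core VM above, and the scheduler count won’t come anywhere near 128.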

This gets better in SQL Server 2019.

If you try to install the plain “Enterprise” version on a big box, you get a warning:

I would argue that the warning is hidden in a small sea of text that a lot of people are going to skip, but still, it’s better than what we had before. Plus, the warning comes up again later during the install:

And if you click on the Warning hyperlink:

Yay! That’s a great improvement.

Wanna Learn How I Use the First Responder Kit?

My newest class, How I Use the First Responder Kit, has been going great – but there’s a catch. I designed this class as a mix of training and consulting, so I’m giving you advice on your server as we go.

Because of that, I can only fit so many people (around 50) per class.

But I’ve been getting really good feedback from attendees who said they got value out of just sitting in on the class, even when they weren’t getting advice on their own servers. They liked seeing how I use the scripts, and how I interpret other folks’ data.

So if you can get value from just the recordings, then great – I’ve added this class to your Recorded Class Season Pass. If you’re a pass holder, you can now jump right in to the instant replay recordings.

And it’s free. Enjoy!

If you’re not a pass holder yet, you can use coupon code StartWatching to save 50% off the Recorded Class Season Pass or the Level 1 Bundle for the next 10 days. See you in class!

What Are Soft Deletes, and How Are They Implemented?

Normally when you run a DELETE statement in a database, the data’s gone.

With the soft delete design pattern, you add a bit column like IsDeleted, IsActive, or IsArchived to the table, and instead of deleting rows, you flip the bit column. This can buy you a few advantages:

  • Easier/faster undeletes
  • History tracking (keeping deleted rows around for auditing purposes, although the method I’m going to show here doesn’t do anything fancy like track who did the delete)
  • Easier reconciliation during disaster recovery failovers (refer to the Senior DBA class module on recovering from failovers for more info)
  • Lower workload for Availability Groups secondaries (neat scenario from Aaron Bertrand)

I’ll illustrate it by implementing soft deletes on the Stack Overflow Users table. (Any version/size will do, but if you want to replicate my exact query metrics, I’m using the 2018-06 version.)

First, we need to add the IsDeleted column:
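A minimal sketch of that change – the constraint name is just my convention, and the default lets existing code keep inserting rows without knowing about the new column:

```sql
/* Add the soft-delete flag; new and existing rows default to "not deleted" */
ALTER TABLE dbo.Users
  ADD IsDeleted BIT NOT NULL
  CONSTRAINT DF_Users_IsDeleted DEFAULT 0;
```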

As long as no one else is using the table at that moment, the change takes effect instantly, no matter how large the Users table is – it’s implemented as a metadata-only change. Hot dog, I really like SQL Server. It makes these kinds of things easy.

Stop deletes with a trigger.

We can use an “instead of” trigger that will fire when someone tries to delete a row, and we’ll do something different. I’m using the “deleted” virtual table to grab the rows that are supposed to be deleted.
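A sketch of that trigger – the trigger name is mine, and I’m assuming the standard Stack Overflow schema where Id is the primary key:

```sql
/* Instead of deleting, flip the IsDeleted flag on the targeted rows */
CREATE OR ALTER TRIGGER dbo.Users_SoftDelete
ON dbo.Users
INSTEAD OF DELETE
AS
BEGIN
  SET NOCOUNT ON;
  UPDATE u
     SET u.IsDeleted = 1
    FROM dbo.Users u
   INNER JOIN deleted d ON u.Id = d.Id;
END;
GO
```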

When someone tries to delete a row, we can see that it wasn’t actually deleted:
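Something along these lines – the Id is just a placeholder, pick any existing user:

```sql
/* The DELETE "succeeds" from the app's point of view... */
DELETE dbo.Users WHERE Id = 12345;

/* ...but the row is still there, with IsDeleted flipped to 1 */
SELECT Id, DisplayName, IsDeleted
  FROM dbo.Users
 WHERE Id = 12345;
```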

The slick part of this approach is that it doesn’t require changing your existing DELETE code – the app thinks it’s successfully deleting rows. However, we still have a pretty big problem: apps are still querying the table, and they’re not expecting to see deleted rows.

Filtering selects is a little harder.

Ideally, if you can change the queries, you add “WHERE IsDeleted = 0” everywhere. The hard part there is changing every query, obviously, so a workaround is to rename the Users table to something new, and then put a view in its place:
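The switcheroo looks like this (the Users_tbl name is the one I’m using here):

```sql
/* Rename the real table out of the way... */
EXEC sp_rename 'dbo.Users', 'Users_tbl';
GO
/* ...and put a view with the old name in its place,
   filtering out soft-deleted rows */
CREATE VIEW dbo.Users AS
SELECT *
  FROM dbo.Users_tbl
 WHERE IsDeleted = 0;
GO
```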

(Quick note: you probably don’t want to use SELECT * in a view like this. Under the covers, SQL Server builds the code for the view once, and then if you add more columns to dbo.Users_tbl, the new columns won’t show up in queries until you call sp_refreshview. I’m just using the * here for simplicity.)

Doing the table-to-view switcheroo means the soft-deleted rows are instantly filtered out from my queries.

Index design is a little harder, too.

Whether you use the view approach or add IsDeleted = 0 to all your queries, your execution plans are going to have something new: a filter on IsDeleted = 0. That also means that your missing index recommendations are going to ask for IsDeleted in the key of your index:

When you first implement soft deletes, this isn’t a huge concern for most queries because none of your data gets excluded yet (no rows have IsDeleted = 1). However, the worst case for performance is a query that used to have a covering index, but now has to do a key lookup for every single row in order to double-check that the row hasn’t been deleted yet.

One workaround is to add IsDeleted to every index. In the beginning, it doesn’t matter whether IsDeleted is part of the key or the includes – because it’s not filtering anything out yet. Over time, it starts to become more important. You might think that someday, if the majority of your rows are deleted, you’d even want IsDeleted to be the leading key in all of your indexes. That would be incorrect – you’d be better off with the next solution.

Another workaround is to use filtered indexes. A filtered index is just an index with a WHERE clause, like IsDeleted = 0:
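A sketch of one – the index name and columns here are just an example, not a recommendation for your workload:

```sql
/* Only index the rows that haven't been soft-deleted */
CREATE INDEX IX_Users_DisplayName_Active
  ON dbo.Users_tbl (DisplayName)
  INCLUDE (Reputation)
  WHERE IsDeleted = 0;
```

One caveat worth knowing: for the optimizer to use a filtered index, the query’s predicate has to match the filter (that literal IsDeleted = 0), which the view approach gives you for free.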

When a majority of your rows have IsDeleted = 1, this filtered index has a nifty benefit: it’s physically smaller because it doesn’t contain the deleted rows. It takes up less space, meaning shorter backups/restores/CHECKDB, and it also has less concurrency overhead if some kind of bulk operation needs to be done to the deleted rows.

Yet another workaround is to split the table up. If you swap in a view, there’s no reason you can’t have that view union across a couple of tables: one with active rows, and one with deleted rows. Well, there’s one reason you can’t: this style of storage gets harder to manage, and whaddya know, would you look at the time? I’ve already spent entirely too long writing this blog post, so I’ll leave that split scenario for another post.
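For completeness, the rough shape of that split – the table names here are hypothetical, and moving rows between the two tables is the hard part I’m not covering:

```sql
/* One table for live rows, one for soft-deleted rows,
   unioned behind the view with the original name */
CREATE VIEW dbo.Users AS
SELECT * FROM dbo.Users_Active
UNION ALL
SELECT * FROM dbo.Users_Deleted;
```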