Should You Use SQL Server 2022’s GREATEST and LEAST?


If you’ve been following along with this week’s posts on DATETRUNC and STRING_SPLIT, you’re probably going to think the answer is no, but bear with me. It’s Christmas week, right? The news can’t all be bad.

GREATEST and LEAST are kinda like MAX and MIN, but instead of taking multiple rows as input, they take multiple columns. For example:
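A minimal query:

```sql
SELECT GREATEST(1, 2, 3) AS Biggest,
       LEAST(1, 2, 3) AS Smallest;
```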

Produces 3 and 1. This actually has really useful real-world implications.

Let’s take the Stack Overflow database, and let’s say I want to find any posts (questions or answers) that had recent activity. This is surprisingly difficult because Posts has 2 date columns: LastEditDate, and LastActivityDate. You would think that LastActivityDate would be the most recent, but you would be incorrect – when posts are edited, the LastEditDate is set, but LastActivityDate is not.

So if I was looking for any Posts that were active – either via reader activity, or edits – in a date range, I used to have to build supporting indexes, and then write queries like this:
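The old-way pattern is roughly this sketch – a UNION of two seekable predicates, one per date column (the columns returned are just illustrative):

```sql
SELECT p.Id, p.Title, p.LastActivityDate, p.LastEditDate
FROM dbo.Posts p
WHERE p.LastActivityDate >= '2018-05-27'

UNION

SELECT p.Id, p.Title, p.LastActivityDate, p.LastEditDate
FROM dbo.Posts p
WHERE p.LastEditDate >= '2018-05-27';
```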

I’m using 2018-05-27 because the last activity in my copy of the Stack Overflow database is 2018-06-03. If you’re trying to reproduce these results with a different version, pick a date that’s within its last week of activity.
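By contrast, the SQL Server 2022 way looks something like this sketch, collapsing both predicates into one GREATEST call:

```sql
SELECT p.Id, p.Title, p.LastActivityDate, p.LastEditDate
FROM dbo.Posts p
WHERE GREATEST(p.LastActivityDate, p.LastEditDate) >= '2018-05-27';
```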

So, what’s better – the old way or the new way? Like your hips, the actual execution plans don’t lie:

The old way smokes the SQL Server 2022 way. I mean, it’s not even close. The old way splits up the work into two index seeks, one on LastActivityDate and one on LastEditDate. It finds the recent stuff, does the appropriate key lookups, and it’s done in one second.

The new way does a table scan and takes a minute and a half.

Its estimates and its index usage are just garbage, and I don’t mean like Shirley Manson.

Again – they’re not necessarily bad functions, but they have no business in the FROM clause and below. Don’t use them in WHERE, don’t use them in JOINs – just use them to construct output, variables, and strings.

Should You Use SQL Server 2022’s STRING_SPLIT?


SQL Server 2022 improved the STRING_SPLIT function so that it can now return lists that are guaranteed to be in order. However, that’s the only thing they improved – there’s still a critical performance problem with it.

Let’s take the Stack Overflow database, Users table, put in an index on Location, and then test a couple of queries that use STRING_SPLIT to parse a parameter that’s an incoming list of locations:
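The queries look something like this sketch – the parameter value and the pipe delimiter are just for illustration:

```sql
DECLARE @LocationList nvarchar(max) =
    N'Helsinki, Finland|San Diego, CA, United States|Reykjavik, Iceland';

SELECT u.Id, u.DisplayName, u.Location
FROM dbo.Users u
WHERE u.Location IN
      (SELECT value FROM STRING_SPLIT(@LocationList, N'|'));
```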

The two queries produce slightly different actual execution plans, but the way STRING_SPLIT behaves is the same in both, so I’m just going to take the first query to use as an illustration:

That red-highlighted part has two problems:

  1. SQL Server has no idea how many rows are going to come out of the string, so it hard-codes a guesstimate of 50 items
  2. SQL Server has no idea what the contents of those rows will be, either – it doesn’t know if the locations are India, China, or Hafnarfjörður

As a result, everything else in the query plan is doomed. The estimates are all garbage. SQL Server will choose the wrong indexes, process the wrong tables first, make the wrong parallelism decisions, be completely wrong about memory grants, you name it.

Like I wrote in this week’s post about DATETRUNC, that doesn’t make STRING_SPLIT a bad tool. It’s a perfectly fine tool if you need to parse a string into a list of values – but don’t use it in a WHERE clause, so to speak. Don’t rely on it to perform well as part of a larger query that involves joins to other tables.

Working around STRING_SPLIT’s problems

One potential fix is to dump the contents of the string into a temp table first:
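Something along these lines, reusing the same illustrative parameter and delimiter:

```sql
DECLARE @LocationList nvarchar(max) =
    N'Helsinki, Finland|San Diego, CA, United States|Reykjavik, Iceland';

/* Dump the parsed list into a temp table first... */
CREATE TABLE #LocationList (Location nvarchar(100));

INSERT INTO #LocationList (Location)
SELECT value
FROM STRING_SPLIT(@LocationList, N'|');

/* ...then join to the temp table instead of the function */
SELECT u.Id, u.DisplayName, u.Location
FROM dbo.Users u
WHERE u.Location IN (SELECT Location FROM #LocationList);
```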

And the actual execution plan is way better than the prior examples. You can see the full plan by clicking that link, but I’m just going to focus on the relevant STRING_SPLIT section and the index seek:

This plan is better because:

  • SQL Server knows how many rows are in #LocationList
  • Even better, it knows what those rows are, and that influences its estimate on the number of users who live in those locations, which means
  • SQL Server makes better parallelism and memory grant decisions through the rest of the plan

Woohoo! Just remember that temp tables are like OPTION (RANDOM RECOMPILE), like I teach in this Fundamentals of TempDB lecture.

Should You Use SQL Server 2022’s DATETRUNC?


SQL Server 2022 introduced a new T-SQL element, DATETRUNC, that truncates parts of dates. For example:
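Say we pass in a date partway through 2017 (the specific value here is just illustrative):

```sql
SELECT DATETRUNC(year, CAST('2017-08-29 14:30:00' AS datetime2)) AS TruncatedToYear;
```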

Truncates everything in that date other than the year, so it returns just 2017-01-01 00:00:00.

You might ask, “Well, why not just use YEAR()?” That’s a good question – there are times when you need a start or end date for a date range, and this could make it easier than trying to construct a full start & end date yourself.

Easier for you, that is – but not necessarily good for performance. Let’s take the Stack Overflow database, Users table, put in an index on LastAccessDate, and then test a few queries that are logically similar – but perform quite differently.
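The three flavors look roughly like this sketch (what the queries return doesn’t matter for the comparison, so I’m just counting rows):

```sql
/* 1. Explicit start & end dates: sargable, can seek the index */
SELECT COUNT(*)
FROM dbo.Users
WHERE LastAccessDate >= '2017-01-01'
  AND LastAccessDate <  '2018-01-01';

/* 2. YEAR() in the WHERE clause: runs the function against every row */
SELECT COUNT(*)
FROM dbo.Users
WHERE YEAR(LastAccessDate) = 2017;

/* 3. DATETRUNC in the WHERE clause: same problem */
SELECT COUNT(*)
FROM dbo.Users
WHERE DATETRUNC(year, LastAccessDate) = '2017-01-01';
```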

And check out their actual execution plans:

The first one, passing in a specific start & end date, gets the best plan, runs the most quickly, and does the least logical reads (4,299). It’s a winner by every possible measure except ease of writing the query. When SQL Server is handed a specific start date, it can seek to that specific part of the index, and read only the rows that matched.

DATETRUNC and YEAR both produce much less efficient plans. They scan the entire index (19,918 pages), reading every single row in the table, and run the function against every row, burning more CPU.

SQL Server’s thought process is, and has always been, “I have no idea which dates would produce YEAR() = 2017. There’s just no way I could possibly guess that. I might as well read every date since the dawn of time.”

That’s idiotic, and it’s one of the reasons we tell ya to avoid using functions in the WHERE clause. SQL Server 2022’s DATETRUNC is no different.

So why doesn’t Microsoft fix this?

YEAR and DATETRUNC are tools, just like any other tool in the carpenter’s workshop. There are lots of times you might need to manipulate dates:

  • When constructing a dynamic SQL string, and you want to build a date – sure, using a function to build the WHERE clause string is fine. Just don’t use the function in the WHERE clause itself.
  • When constructing the contents of variables
  • When constructing the output of the query – sure, using a function like this in the SELECT is fine, because it doesn’t influence the usage of indexes in the query plan
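For example, here’s a sketch of the variable approach – compute the boundary once, then keep the WHERE clause itself sargable:

```sql
/* The function runs once to populate the variable... */
DECLARE @YearStart datetime2 = DATETRUNC(year, SYSDATETIME());

/* ...and never touches the indexed column in the WHERE clause */
SELECT COUNT(*)
FROM dbo.Users
WHERE LastAccessDate >= @YearStart;
```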

DATETRUNC in the SELECT isn’t so bad.

Let’s use it in the SELECT clause to group users together by their last access date. Say we want a report to show trends over time. Here are two ways to write the same basic idea of a query:
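Something like this pair – a sketch grouping by month:

```sql
/* Old way: slice the date apart with YEAR and MONTH */
SELECT YEAR(LastAccessDate) AS AccessYear,
       MONTH(LastAccessDate) AS AccessMonth,
       COUNT(*) AS UsersInvolved
FROM dbo.Users
GROUP BY YEAR(LastAccessDate), MONTH(LastAccessDate)
ORDER BY AccessYear, AccessMonth;

/* New way: one DATETRUNC value per month */
SELECT DATETRUNC(month, LastAccessDate) AS AccessMonth,
       COUNT(*) AS UsersInvolved
FROM dbo.Users
GROUP BY DATETRUNC(month, LastAccessDate)
ORDER BY AccessMonth;
```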

The two queries do show the date in two different ways, but the UsersInvolved count is the same – it’s just different ways of rendering the same data:

When you review their actual execution plans, the first one (YEAR/MONTH) is much more complex, and goes parallel to chew through about 4 seconds of CPU time:

Whereas the new DATETRUNC syntax has a cool benefit: it only produces one value (the date), and the data in the index is already sorted by that column. Because of that, we don’t need an expensive sort in the execution plan. And because of that, we don’t need parallelism, either, and we only chew through about two seconds of CPU time. Nifty!

So should you use DATETRUNC? Like with most functions, the answer is yes in the SELECT, but probably not in the FROM/JOIN/WHERE clauses.

[Video] Office Hours: Great Questions, Part 2


Every time I think, “There can’t be any more SQL Server questions left,” y’all post more great ones at!

Here’s what we covered:

  • 00:21 MattC: Hi Brent. Have you check out ChatGPT’s ability to write SQL code. VERY impressive and in some cases faster than a stackoverflow question. In fact stackoverflow have had to ban people using GPT to answer peoples questions
  • 02:28 Haddaway: Does high VLF count matter when it is in TempDB? sp_blitz reports 1000 VLF’s for TempDB.
  • 04:33 Yousef: Is GUID data type a good clustered index when concurrent inserts are high?
  • 05:51 Prohiller: Hi Brent, is there any reason why not enable automatic page repairs in my AGs? Any gotchas MS documentation isn’t mentioning?
  • 06:21 DavDBA: Hey Brent, I want to do some maintenance work on my server and was wondering Does one of the blitz scripts finds invalid objects at a Server/DB level?
  • 06:56 Wendigo: When modifying columns in SSMS table designer, why does “Generate change script” like to copy all the existing data to a new temp table, then swap the tables as opposed to just altering the column directly?
  • 07:54 The Purge Police: Any tips for identifying all tables that are growing unbounded without retention policies?
  • 08:50 Hrafnhildur: What SQL Prompt features would you like to see included in future versions of SSMS / Azure Data Studio?
  • 10:42 Dr. Ruth: What are some recommended techniques for identifying sprocs that are no longer used? Have hundreds of sprocs but not all still used and would like to clean up.
  • 12:07 Fyodor: Is it ever OK to use the leading columns of the clustered index as the leading columns for some of the non-clustered indexes? Any gotcha’s with this?
  • 13:03 Culloden: I keep seeing MS invest in Storage Spaces Direct. Why would one decide to put SQL Server on commodity hardware? I’ve only worked at organizations with large prod servers that use SANs for storage.
  • 14:31 coffee_table: Hello! what are the gotchas with doing hourly incremental loads to a data warehouse rather than a single overnight load? Would statistics always be misleading? Doug Lane Videos on your channel suggest that a filtered stats might be a solution. Thoughts?

Updated First Responder Kit and Consultant Toolkit for December 2022

Santa’s elves took a break from building toys and shipped a new version of the First Responder Kit. There are some great improvements in here, like a makeover for sp_BlitzLock and much better PSPO compatibility for sp_BlitzCache.

How I Use the First Responder Kit
Wanna watch me use it? Take the class.

To get the new version:

Consultant Toolkit Changes

I updated it to this month’s First Responder Kit, but no changes to querymanifest.json or the spreadsheet. If you’ve customized those, no changes are necessary this month: just copy your spreadsheet and querymanifest.json into the new release’s folder.

sp_AllNightLog Changes

  • Fix: if a database exists and it’s not being log shipped, skip it rather than overwrite it. (#3187, thanks David Wiseman.)

sp_Blitz Changes

sp_BlitzCache Changes

  • Enhancement: show a statement’s parent stored procedure where available in SQL Server 2022 to work around the PSPO problem. (#3176, thanks Cody Konior.)
  • Fix: if you asked for the output of @SortOrder = ‘unused grant’ to be written to table, it wasn’t working. (#3160, thanks Petr-Starichenko.)

sp_BlitzFirst Changes

  • Fix: Managed Instances were getting errors on Perfmon counters. (#3184, thanks Cody Konior.)
  • Fix: lock timeout errors from sp_msforeachdb. (#3180, thanks Cody Konior.)

sp_BlitzIndex Changes

  • Enhancement: added support for output to table for all modes. (#2774, thanks Tor-Erik Hagen.)
  • Fix: the columnstore index visualization wasn’t filtering out tombstone rowgroups. (#3189)
  • Fix: statistics with oddball names would throw errors. (#3162, thanks Jay Holliday.)

sp_BlitzLock Changes

  • Enhancement: in Erik’s words, he gave the T-SQL a mommy makeover, including the ability to read from the ring buffer, improve performance, fix data duplication bugs, clean XML to avoid character parsing errors, etc. (#3166, thanks Erik Darling.)

sp_BlitzQueryStore Changes

sp_BlitzWho Changes

  • Enhancement: move session_id and blocking_session_id next to each other for easier blocking troubleshooting. (#3159, thanks David Hooey.)

sp_DatabaseRestore Changes

Bonus changes: Anthony Green kept up the tireless work of keeping the SQL Server versions file up to date.

For Support

When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. Be patient: it’s staffed by volunteers with day jobs. If it’s your first time in the community Slack, get started here.

When you find a bug or want something changed, read the file.

When you have a question about what the scripts found, first make sure you read the “More Details” URL for any warning you find. We put a lot of work into documentation, and we wouldn’t want someone to yell at you to go read the fine manual. After that, when you’ve still got questions about how something works in SQL Server, post a question at and the community (that includes me!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you’re working with.

[Video] Office Hours: Great Questions, Part 1


Wow, y’all have been posting some great questions at lately!

Here’s what we covered today:

  • 00:59 Gustav: Hi Brent We recently migrated to SQL MI General Purpose, and I noticed that the default blob storage per .mdf /.ndf file is 128Gb container with 500 IOPS! This is terribly slow! Would splitting the database into multiple files, increase read / write performance?
  • 02:39 PAAREES: Is it normal getting corruption like errors if you stop a running dbcc checkdb and getting no errors if you let it finish in the same db/vm ?
  • 03:07 MancDBA: Hi Brent, Over the years, have you ever felt blogging and giving back so much to the community (for free) has ever been a thankless task especially if people are mean to you? Cheers!
  • 04:40 Len: What criteria do you use for deciding which tables need more/less statistic updates, which tables need full scan vs sampled scan?
  • 07:30 Namor: How does the PostgreSQL DBA learning track compare with SQL Server (width, depth, difficulty)?

Who’s Hiring in the Microsoft Data Platform Community? December 2022 Edition


Is your company hiring for a database position as of December 2022? Do you wanna work with the kinds of people who read this blog? Let’s set up some rapid networking here. If your company is hiring, leave a comment.

The rules:

  • Your comment must include the job title, and either a link to the full job description, or the text of it. It doesn’t have to be a SQL Server DBA job, but it does have to be related to databases. (We get a pretty broad readership here – it can be any database.)
  • Include an email address for resumes, or a link to the application process. If I were you, I’d use an email address: you may want to know which applicants are readers here, since they might be more qualified than the applicants you regularly get.
  • Please state the location and include REMOTE and/or VISA when that sort of candidate is welcome. When remote work is not an option, include ONSITE.
  • Please only post if you personally are part of the hiring company—no recruiting firms or job boards. Only one post per company. If it isn’t a household name, please explain what your company does.
  • Commenters: please don’t reply to job posts to complain about something. It’s off topic here.
  • Readers: please only email if you are personally interested in the job.

If your comment isn’t relevant or smells fishy, I’ll delete it. If you have questions about why your comment got deleted, or how to maximize the effectiveness of your comment, contact me.

Each month, I publish a new post in the Who’s Hiring category here so y’all can get the latest opportunities.

Slow “Having” Query? Try Pre-Filtering.


I was helping a client with a query, and I’m going to rework the example to use the Stack Overflow database for easier storytelling.

Say we need to:

  1. Find all the locations where users have logged in since a certain date, then
  2. Return the total count of people who live in those locations

One way to write the query would be:
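Here’s a sketch of that shape, using HAVING to find the active locations (the date is illustrative):

```sql
SELECT SUM(UsersInLocation) AS TotalUsers
FROM
(
    SELECT Location, COUNT(*) AS UsersInLocation
    FROM dbo.Users
    GROUP BY Location
    HAVING MAX(LastAccessDate) >= '2022-01-01'
) AS ActiveLocations;
```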

And in order to expedite it, I’ve added a couple of indexes:
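Roughly these two (the index names are my own):

```sql
CREATE INDEX Location_LastAccessDate
    ON dbo.Users (Location, LastAccessDate);

CREATE INDEX LastAccessDate_Location
    ON dbo.Users (LastAccessDate, Location);
```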

When the query runs, its actual execution plan is a scan of all the locations:

That’s because SQL Server has to group the locations together, finding the max LastAccessDate in every location, before it can start filtering.

The client said, “Wait, that doesn’t make any sense – why isn’t SQL Server using the index on LastAccessDate? It has the Location on there too. SQL Server could just jump to 2022-01-01 on that index, make a list of the Locations that match, and then use those for the next part of the query.”

Well, it could. But the query optimizer wasn’t written that way.

Instead, if you want that behavior, you have to rewrite the query yourself:
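A sketch using a CTE that pre-filters on the date first:

```sql
WITH RecentLocations AS
(
    /* Seek the LastAccessDate index to build a short list of locations */
    SELECT DISTINCT Location
    FROM dbo.Users
    WHERE LastAccessDate >= '2022-01-01'
)
SELECT COUNT(*) AS TotalUsers
FROM dbo.Users u
WHERE u.Location IN (SELECT Location FROM RecentLocations);
```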

The query runs nearly instantaneously because its actual execution plan is much more efficient in cases where few locations match:

It only does 3 logical reads instead of about 30,000.

A few things to keep in mind:

  • I used a CTE, but other ways can work as well, like subqueries or temp tables.
  • This technique only works well when a minority of rows match the filter.
  • If the filter is parameterized, and its value changes dramatically, this is a recipe for parameter sniffing.
  • Temp tables can be a solution to that, but you have to watch out for temp table stats reuse.

[Video] Office Hours: Speed Round Edition


I took a break from my Black Friday sale customer support emails to check in on the questions you posted at and answer the highest-voted ones:

  • 00:00 Start
  • 00:15 Meshulam: What are the top use cases for running SQL Server in a container? Do many of your customers run SQL Server in a container?
  • 00:28 Eduardo: Do you also help customers migrate from SQL Server to PostgreSQL?
  • 00:39 RoJo: We have an AG secondary in Async mode, that gets behind due to Index rebuilds at night. Should I be concerned? I’d like to move to Sync mode, but not sure what would happen with the Index rebuild. DB is 1TB.
  • 01:01 Candy: What is your opinion of trace flag 834 (large memory pages) for possible performance gains? Running bare metal SQL 2019 enterprise / 512gb RAM.
  • 01:36 Monkey: You mentioned that SQL Server does not phone home Microsoft to check whether license key is already used on another server. Why do you think Microsoft allows that?
  • 02:00 Haydar: Have any of your clients been fined big $$$$ for improper SQL licensing?
  • 02:08 Jack: Our codebase has (NOLOCK) statements everywhere. I’m certain that most were added by devs thinking that it means “don’t cause any blocking.” Would switching to RCSI mode allow us to remove those hints?
  • 02:20 DBA Champion: If company wants to have 3 SQL Servers 2022 on-prem (Ent. Ed.) with over 99.9% uptime, and use them 3-5 years straight, would it be money saving to use new pay-as-you-go billing model, or it will be cheaper to buy three EE licenses once and use them instead?
  • 02:58 franklwambo: Who is the Brent Ozar for all things MYSQL production database support and query tuning
  • 03:11 New folder: How do PostgreSQL scaling options compare with SQL Server?
  • 03:27 Ricardo: Would you rather work remotely (again) from Iceland or a cruise ship?
  • 04:00 David: Who is the Brent Ozar of SSRS?
  • 04:06 Luthen: How often do you recommend checking the suspect_pages table?
  • 04:21 Negative Max: What was your favorite PASS session this year?
  • 04:49 Syril Karn: Would you consider teaching an “Unmastering top SQL anti-patterns” course?
  • 06:38 Mr. SqlSeeks: What would be the determining factor(s) to suggest a client use Azure Data Factory (or Databricks) for their data warehouse as opposed to SQL Server?
  • 07:16 NotCloseEnough2RetirementToStopLearning: Hi Brent, just got a Data Architecture job. I think a key to success will be being more well read about technology\data trends. Typically, my reading has been problem-solution stuff. What do you read or subscribe to for staying up on larger technology trends outside SQL Server?
  • 07:53 Todd : What is your opinion on using things like VEEAM for databases over 500 GB? Clients seem to be married to it, but t-logs keep growing. For this size, is it better to use traditional SQL backups and get VEEAM to backup the VM drives?
  • 08:16 Dopinder: Is DBA a recession proof tech job?
  • 08:49 franklwambo: I have worked with maintenance plans and Windows batch script to perform full and differential SQL server backups prior, both with trusted connections. In your experience, what approach would you tread for the other, especially on matters security.
  • 09:13 Sigrún: One of the older DB’s on our SQL 2019 server has two transaction log files for some unknown reason. Both files on the same volume but one file has autogrow while the other doesn’t. Is there any risk / performance gotcha’s in continuing with two log files vs one?

[Video] Office Hours: Black Friday Edition


My Black Friday sale is in its last days, so most of my time at the moment is spent keeping an eye on the site and answering customer questions. I’m happy to say it’s our best year so far, too! Y’all really like the new access-for-life options.

I took a break from the online frenzy to check in on the questions you posted at and answer the highest-voted ones:

Here’s what we covered:

  • 00:00 Start
  • 00:30 Marian: The database that my company developed for over 20 years has a lot design problems, with very obvious normalization and performance issues. The business approach is keep fixing whatever issues are and will appear. Would you rather recommend developing a new one, or keep fixing?
  • 02:53 George : Hi Brent, I’m a sql dev /learning performance dba (enthusiastic amateur). We have an external consultant who has stated that sql server is self tuning and reevaluates after 3 runs. I cannot find anything about this online, do you have any idea of what they might mean?
  • 04:05 ChiHarley: Microsoft is pushing my client to activate the SQL IaaS Extension on their Azure VMs. Do you like to enable the extension, or do you have an opinion on it?
  • 05:42 Brentosaur: How about creating “Funtamentals of Locks and Blocking” classes? I would love to watch them
  • 06:31 Carlo: Hi Brent, any updates regarding the problem of ‘AlwaysOn’ cluster errors due to heart-beat timeout based on Veeam VM snapshots? Thanks!
  • 07:36 Jessica: Hey brent, have you recently done a talk on / know a good recent talk on youtube for setting up Log Shipping for DR? Its been a while since the last time I had to deal with it (2008R2 in 2012 or so) and wanted to make sure there aren’t any new gotchas
  • 08:20 Cassian: How do you know if auto growth events are benign or problematic?
  • 09:33 Double espressos for breakfast: Hi Brent – love all that you have done for the SQL community. You’re clearly a bright guy and could do anything you put your mind to. Why did you choose database administration of all things?

Contest: Guess the Next SQL Server Release Date


When will Microsoft officially release SQL Server 2023 (or 2024, or whatever it’ll be) for download? The dust is just barely starting to settle on the 2022 box, so it’s time to guess the next one.

  • Leave one – and only one – comment here in YYYY/MM/DD format with your release date guess. If you leave multiple comments, only the first/earliest one is going to count.
  • “The release date” is the date that Microsoft announces that the final RTM bits will be downloadable to the public from (Not the date they make the announcement, and we’re only talking the release-to-manufacturing public version, not a preview, CTP, RC, CU, or SP.)
  • The closest guess without going over wins lifetime access to my Fundamentals & Mastering recorded classes.
  • In the event of a tie (multiple people guessing the same date), the earlier comment wins.
  • Only comments posted more than 48 hours before Microsoft’s public release announcement will count. If Microsoft makes their announcement, and you run over here trying to leave a fast comment with the release date, I’m not gonna take it.
  • If Microsoft announces two release dates – like one for Windows, and one for Linux – then we’ll pick a separate winner for each. (But you only get to leave one date in the comments.)

Place your bets!

Are You Underpaid? Let’s Find Out: The 7th Annual Data Professional Salary Survey is Open.


Whew. This is a crazy time, isn’t it? The aftermath of the pandemic is kicking in, and there are layoffs happening in Silicon Valley companies, but… the data profession is on fire. I don’t know anyone who’s been out of work for too long because companies are so desperate to get data help.

So it’s time for our annual salary survey to find out what data professionals make. You fill out the survey, we open source the whole thing, and you can analyze the data to spot trends and do a better job of negotiating your own salary:

We pay Richie in query bucks

Take the Data Professional Salary Survey now.

The anonymous survey closes Sunday, Jan 1. The results are completely open source, and shared with the community for your analysis. (You can analyze ’em now mid-flight, but I’d wait until the final results come in. I’ll combine them into a single spreadsheet with the past results, and publish those on January 3rd.)

Thanks for your help in giving everybody in the community a better chance to talk honestly with their managers about salary.

Update Jan 2: this year’s salary survey is closed, and the results are here.

Looking Back at the 2022 #PASSDataCommunitySummit in Seattle


I’m back home in Vegas, back down to Inbox Zero, and I’ve had some time to think about last week’s conference in Seattle. I liveblogged the keynotes, but I wanted to talk about the conference overall.

This was Redgate’s first time running an in-person Summit after the professional organization folded in December 2020. Redgate bought the Summit’s assets, ran a virtual event last year, and then this year was a hybrid event. You could either attend in person, or online.

Overall, the in-person event went well. The rooms were well-sized, the AV worked fine, lunches were served quickly. Given that it was the Summit’s first return to “normality” in a while, things could have been much worse. We got really lucky with the weather: sun every day, and not a drop of rain.

In-person attendance was down, but still big. I heard total in-person attendance was around 1,500 folks, which probably includes the vendor & support staff, down from a pre-COVID height of around 4,000. Pre-conference workshop attendance was way down – peaked around 80 in-person, down from 350+. In any given pre-pandemic year, this still probably would have been the biggest Microsoft data platform event in the US.

The virtual side had a lot of glitches. Online attendees couldn’t join all sessions, or missed the first part of sessions, and some recordings didn’t work. Running a live in-person event is hard enough, and I salute Redgate for trying to make the hybrid thing work, but it seemed like there was still a lot of work to do on that. I wonder if next year will be hybrid again, or in-person only. If you’re considering attending the hybrid version, you should set your expectations appropriately: stuff may not work.

I was happy not attending sessions. Leading up to the event, I shared with y’all that I was going to skip sessions altogether and do the hallway track instead, meeting my friends and having little family reunions all over the place. I didn’t take any photos, but the event photographers were all over the place, and I’m guessing there will be a public photo album somewhere. I heard there were a lot of good Microsoft-delivered sessions about SQL Server 2022 and Azure Managed Instances, so I’ll go back and peruse the recordings of those later.

There weren’t many health precautions. Very, very few folks wore masks, there was no social distancing, no mandatory testing, and lots of people (myself included) were shaking hands and hugging. I’m telling you this, dear reader, because I know that a lot of y’all are still at home, and haven’t gone to public events yet. If and when you return to in-person events again, I just want you to have a rough idea of what to expect.

I will likely return in 2023. I had a good time, I loved reuniting with my friends, and Redgate did a good job of stewarding the event. I can’t think of any reasons why I wouldn’t go back. It went well!

#PASSdataCommunitySummit Keynote Day 3 Live Blog


Today at the PASS Data Community Summit in Seattle, Kimberly Tripp is doing the first-ever in-person community keynote. Here’s the abstract:

Over the 30+ years I’ve been working with SQL Server, I’ve seen an incredible amount of innovation and change. How do we keep up with so many changes and how do we know how and when to implement them when there are so many options? I’ve always said that my favorite thing about SQL Server is that you can solve virtually any problem – by knowing the RIGHT way to solve it. Each feature has options and tweaks that make it better suited for some scenarios and inappropriate for others. And that’s led us to the “it depends” response. However, “it depends” is NOT the answer, it’s just the beginning. The beginning of a process where you understand your goals, design your strategy / prototype, and apply the appropriate technology after you’ve defined your specifications. But how do you keep up with the innovation? How do you keep your team happy? Learning, Sharing, and Growing – these are the mainstays of a cohesive and productive team not to mention, the best way to leverage such a powerful platform.

I’ll be live-blogging the keynote, sharing my thoughts about what happens. You can refresh the post to follow along – the most current notes will be at the bottom, so if you’re reading later, you can read through the recap as it happened.

7:55AM: Folks are trickling into the keynote room to the tunes of smoooooth jaaaaazzz again. This music is a perfect fit for the chill Friday. Attendance will be lower today as folks start to travel home.

8:02AM: Steve Jones took the stage to talk about how Redgate wanted the 3rd day of the Summit to have a community keynote.

8:06AM: Steve talked about how SQL Saturday grew from 1 event in 2007, to over 100 events a year in 2019. PASS acquired that brand several years ago, and then when Redgate acquired PASS, Redgate gave the SQLSaturday brand away to a non-profit 501(c)(3). Steve hopes to see more events in 2023.

8:07AM: Thanks to AWS for sponsoring the Day 3 Community Keynote.

8:08AM: KillaDBA (Homer McEwen) took the stage to sing his song Hug a DBA. I believe this is the first song in history to correctly use the word ‘tuple.’

8:11AM: Another one! His new song about SQL Server 2022. Dang, he’s fast with the words.

8:15AM: Steve Jones brought Kimberly Tripp onstage: “I started working with SQL Server when I was 2, and it’s been a fun 33 years.” I laughed out loud. Nicely done.

8:20AM: Kim brought Paul Randal onstage to help.

8:21AM: Kim reviewed the history of SQL Server. In May 1989, SQL Server 1.0 for OS/2 was released. It was a single process that ran on a single thread.

8:22AM: Kim & Paul showed boxes of SQL Server 4.21 for Windows NT. It looks like Paul’s about to run the world’s worst used software auction onstage, hahaha.

8:24AM: Kim did a show of hands of people who started with SQL Server 6.5, and a lot of hands went up. (Me too.) She talked about how it’s amazing that so many of us have stuck with the same product over the years, and she said it might be due to the quality of the community.

8:30AM: Kim said circa 2019, Twitter was handling 500 million tweets a day, and email was handling 294 billion emails a day. (Most of which on both platforms was spam, heh.) In 2021, Google was handling about 8 billion searches per day.

8:34AM: In Kim’s survey of the community, 48% said they always use a search engine, and another 46% frequently use search engines. We kinda take that for granted now, but when us old people got started in databases, that simply wasn’t an option.

8:43AM: Kim covered a lot of changes in hardware, storage, and the data we’ve handled over the last 30 years. Her message is that big data isn’t a fad: it’s the same as it’s always been. It just keeps growing.

8:48AM: The SQL Server storage engine was rewritten in 7.0, and most of the basic design ideas (like 8KB pages) have lasted over 20 years. Kim pointed out that even though there are new storage structures (In-Memory OLTP, columnstore), most of the time, the 20-year-old stuff is still the most appropriate.

8:53AM: Kim said that when the folks in the audience chose a career, they chose wisely. Data is growing, the field is growing, businesses are realizing its worth. Kim said that on Career Builder right now, 13,073 jobs show up for Database Administrator.

8:56AM: “There’s enough work out there for all of us.” Kim encouraged everyone to learn, share, and grow in the community.

9:03AM: Kim gave everyone homework. Grab the title slide from every session they attended, create 1-2 slides with highlights in their own words from each session, and present that to their manager and their team. Great idea, because it proves the ROI of sending people to a conference, and helps you get budgetary approval for next year.

And hopefully I’ll see YOU here next year, dear reader! Off to the Community Zone to hang out with folks.

#PASSdataCommunitySummit Keynote Day 2 Live Blog


I’m in Seattle for the PASS Data Community Summit, and the day 2 keynote is about to start. This year, now that Redgate owns the Summit event, they’re doing the day 2 keynote. Gotta pay the bills, I suppose!

Redgate’s CEO, Jakub Lamik, is leading a team of speakers for the keynote, and here’s the abstract:

These are challenging times for every business everywhere. So how can you ensure you get the most from your investment in IT, to meet your needs now, while also preparing you for an uncertain future? Join Jakub Lamik, Steve Jones and Kathi Kellenberger in this keynote which looks at how the world is changing for data professionals, and the areas to focus on which will bring the best return on your investment in the long term.

Given that it’s essentially a vendor keynote, it would be safe to assume that it’s advertising for Redgate’s products. Over the last couple of years, they’ve focused more and more on DevOps tooling.

I’ll be live-blogging the keynote, sharing my thoughts about what happens. You can refresh the post to follow along – the most current notes will be at the bottom, so if you’re reading later, you can read through the recap as it happened.

7:57AM: as folks enter, the soundtrack is very smoooooth jaaaaaaaazzzzz. Easy listening. I think they know how hung over I am. In related news: I had a great time last night at Matt’s in the Market, and then Rumba.

8:04AM: people slowly filing in. The picture looks empty, but it’s just because it takes quite a while to upload photos over the wifi, so I started early.

8:06AM: Cecilia Judmann, events manager for Redgate, took the stage for her first Summit too. I wanna say I saw that 40% of the attendees are here for their first Summit, period. That’s amazing! So cool to get fresh blood in.

8:10AM: Steve Jones took the stage. Summit and Redgate both started in 1999. Steve was there at the very first Summit! That’s wild. He talked about how attendees used to dash from session to session, taking notes on paper, because back then laptops weren’t really feasible and the battery life was terrible.

8:14AM: Steve really misses the Casino Night party they used to put on. I miss that too! I only got to attend one of ’em – my schedule was a zoo – but I would love to attend that again.

8:16AM: In 1999, the conference was 75 sessions with 1,200 attendees. By 2019, it had grown to 200 sessions, 21 pre-conference workshops, and 4,000 attendees.

8:18AM: SQLPrompt was Redgate’s first DevOps attempt. They started pair programming, trying to release fixes within a week. They were able to push 100+ releases in a year. That might sound like a lot of patches, but it’s totally fair – people do TERRIBLE things with their T-SQL and tables, and I’m sure SQLPrompt had to deal with an insane number of edge cases.

8:19AM: Steve Jones and a couple of other folks founded SQLServerCentral about 20 years ago, and Redgate acquired that as well (like they acquired SQLPrompt) in order to make the community better. Hey, I see a theme here! Redgate’s been making smart acquisitions for a long time. I like the theme of this.

8:21AM: When the business organization behind the PASS Summit imploded due to the pandemic, Redgate acquired the assets because the Summit was too good and too big to fail. Plus, Redgate spun off SQLSaturday, the community events, to a separate non-profit.

8:22AM: Redgate CEO Jakub Lamik took the stage to talk about why Redgate believed that the Summit was important to preserve. Data has been important for 20 years, and it’ll continue to be important for the next 20 years, too.

8:26AM: Lamik discussed how open source is increasingly popular across all segments, including the enterprise. Small teams can get traction quickly with open source tools and generate value. This may not sound like a revolutionary point, but remember where it’s being discussed: from the keynote stage at what used to be a very Microsoft-specific data conference. It took a long, long time before things like AWS and GCP were discussed here, let alone alternate databases. Redgate hosting this event means they can be more frank about the reality of other databases – after all, they sell tooling for other databases.

8:28AM: The top databases used by developers in the Stack Overflow survey were PostgreSQL, MySQL, SQLite, and then #4, Microsoft SQL Server. Again, this is the kind of discussion we gotta have. It’s not that you can’t make a career out of Microsoft SQL Server alone – you can – but it helps to remember your context, and data lives everywhere.

8:32AM: Data work involves a lot of steps – coding, testing, deployment, troubleshooting, etc. Multiply that across lots of different databases, and everything’s a hot mess of variations, probably all manual footwork. Redgate wants to treat all databases the same way to reduce errors, improve time to deliver solutions, and let fewer people manage more stuff.

8:33AM: David Bick, head of product management, took the stage to discuss how Flyway, the open source migration tool Redgate acquired in 2019, supports 25+ databases including PostgreSQL, SQL Server, Snowflake, MySQL, Oracle, etc. It’s still open source, but there’s now a Flyway Enterprise layer atop it as well.

8:41AM: Video demo of Flyway watching a database server, noticing a table change, and checking that change into source control. The pull request kicked off a series of automated tests to make sure things compiled, didn’t throw errors, etc. Then in Github, the DBA gets a list of changes and deployment scripts. (I love the idea of this, but I’m probably biased because I actually love Github, and I know it’s foreign to a lot of DBAs.)

8:42AM: Demoing Redgate Clone to restore the 400GB Stack Overflow database for deployment purposes. If you actually have to do this restore, it’s slow as hell (I know, because I have to do this regularly) – and it’s only going to be worse if you’re doing it in a low-powered container being used for automated testing. You don’t wanna wait 20-30-40 minutes for a restore to run in order to test a 1-line T-SQL script. Clone looks like a no-brainer for automated testing, and I can see how it adds value to Flyway.

8:44AM: Flyway Enterprise supports SQL Server and Oracle today. PostgreSQL, MySQL, and Redgate Clone will hit in Q1 2023.

8:45AM: Jakub came back onstage to talk about how DBAs are now supposed to be datacenter-agnostic. We’re expected to support on-premises and cloud databases, and data that flows back & forth.

8:47AM: Arneh Eskandari, Solutions Engineering Manager for Redgate, came onstage to say, “On-prem databases aren’t going away anytime soon.” Again, this is good to hear from the keynote stage. Microsoft speakers often make it sound like Azure is the only thing anyone is deploying today, and that simply isn’t true – and it sends the wrong message to attendees. This is the honest message, and this is good.

8:55AM: wifi bombed out there for a few minutes. For tomorrow’s keynote with Kimberly Tripp, I’ll do the liveblogging differently – I’ll just tweet it, and then edit the tweets together into a blog post after the event. (Plus, I don’t wanna carry my laptop around the conference tomorrow – my last session is today, a lightning talk about fragmentation in the afternoon.)

8:56AM: Jakub back onstage. “Everything is changing. The role of database professionals is evolving. The need for data professionals is bigger than ever.”

8:58AM: Kathi Kellenberger, Redgate Customer Success Engineer and the Queen of SQL Karaoke, took the stage. Of the 377 speakers at this event, more are first-timers than ever before. Kathi and Redgate are committed to helping your career – by getting you up onstage speaking, by keeping Summit and SQLSaturday alive, and by helping you solve problems through SQLServerCentral and Simple Talk.

9:06AM: Kathi explained how writing articles for SQLServerCentral and SimpleTalk was the entry point in her career back in 2005, helping her sharpen her skills by helping others, and it opened lots of doors for her. She eventually went to work for Redgate as SimpleTalk’s editor, and now that role has been passed on to Louis Davidson. (He’s back home in Tennessee working on the hybrid side of the Summit.)

9:09AM: Kathi’s now on the Customer Success team, making sure customers can successfully leverage the software. With something as complex and ever-changing as DevOps, you actually need that layer of consulting. A lot of complex apps end up becoming shelfware, sitting on the shelf without anyone using ’em, because the tool was just too hard, not intuitive enough, or not documented well. Customer Success teams help mitigate that problem, which means companies are more likely to get hooked on that complex software, and then stick with it, paying maintenance over the years. Everybody wins.

9:10AM: The Redgate 100 is their list of top database community influencers. (Hey, I’m in there!)

It’s a wrap! Off to the conference.

SQL Server 2022 Paid Editions Require a Licensing Key.

SQL Server 2022

For the final release of SQL Server 2022, Microsoft popped a surprise that wasn’t in the release candidates:

Your choices are:

  • Install a free edition: Evaluation (which times out after 180 days), Developer (which isn’t allowed to be used for production purposes), or Express (which is limited to small databases & hardware resources)
  • Use pay-as-you-go billing, which requires you to connect the SQL Server to your Azure account
  • Put in a product key if you want to use Standard, Enterprise, or Web Edition


Microsoft deserves to get paid for their work, just like you, dear reader.

This is a significant change from past behaviors, taking us back to the days of SQL Server 2000 (or thereabouts?) when we had to put in license keys. Back then, the setup process simply validated the key, but didn’t phone home to Microsoft to see if the key was being used repeatedly across lots of SQL Servers.

Logistically, this is a little tricky to enforce. You can buy SQL Server licenses in 2-core packs, each of which would presumably have its own key. The setup doesn’t appear to allow you to put in multiple keys, so it would seem that you could buy a single 2-core pack of Enterprise, then install it on a 128-core server and put that key in.

Baby steps.

Join #TeamHugo and Help Fight Leukemia.


If you’ve been reading this blog for a while or subscribed to my email newsletters, you’ve learned something from Hugo Kornelis.

Hugo is the wildly productive and selfless guy behind the Execution Plan Reference, tons of SQLBits sessions, and videos on YouTube, and he’s @Hugo_Kornelis. He’s unbelievably upbeat and positive. He loves what he does, and he loves sharing it with others. Every time I’ve had the blessing to interact with Hugo, I’ve come away grinning.

Today is the opening day of the PASS Summit in Seattle, and on any other opening day, Hugo would be right here with us, grinning and greeting everybody. We’d be talking excitedly about whatever was happening onstage.

This year is different: Hugo is fighting Acute Myeloid Leukemia (AML).

Right now, as you read this, a bunch of us are wearing #TeamHugo shirts in Seattle to raise awareness of Hugo’s fight. Here’s what you can do to help:

I’m disabling comments on this post because I want all of your positive energy heading in Hugo’s direction. He’s usually the one who brings the rest of us up – and now it’s time for us to lift him up in return.

SQL Server 2022 Release Date: November 16, 2022

SQL Server 2022

Microsoft released SQL Server 2022 today, November 16, 2022.

The footnotes indicate:

SQL Server 2022 free editions (Developer edition, Express edition) are available to download starting today. SQL Server 2022 paid editions (Enterprise edition, Standard edition) will be available to Volume Licensing (Enterprise Agreement, Enterprise Agreement Subscriptions) customers starting today, which represents the majority of SQL Server customers. Customers purchasing via CSP, OEM, and SPLA can begin purchasing SQL Server 2022 in January 2023.

Learn more about what’s new in SQL Server 2022.

#PASSdataCommunitySummit Keynote Day 1 Live Blog: SQL Server 2022 is Out Now.


I’m in Seattle for the biggest annual gathering of Microsoft data platform professionals, the PASS Data Community Summit. This is the first in-person Summit since the pandemic, and the first since Redgate took over ownership from the old PASS organization. I’m really excited – this is like a family reunion for me.

Microsoft’s Rohan Kumar is kicking things off this morning as we speak with the Day 1 keynote. Rohan is the corporate vice president of Microsoft Azure Data, and he’ll likely be introducing lots of other Microsoft folks to cover things during the keynote.

Historically, the Summit’s Day 1 keynote has covered:

  • Reminders of recent Microsoft data features
  • Release dates for upcoming products
  • Customer success stories from companies who’ve been using the products prior to their official release

There’s less than 60 days left in the year 2022, and we still don’t have a release date, pricing, or Standard vs Enterprise feature lists for SQL Server 2022. It’s reasonable to expect these things to finally be revealed – or if not, perhaps a rename to SQL Server 2023.

You shouldn’t expect outright surprises for SQL Server 2022 – as in, “Hey, here’s an all-new feature for SQL Server 2022 that we’re announcing for the first time!” After all, we’ve already got Release Candidate 1. Instead, the Day 1 keynote is usually used to remind folks who don’t stay quite as in-tune with the news as you do, dear reader – not everybody’s savvy enough to read my blog.

I’ll be live-blogging the keynote, sharing my thoughts about what happens. You can refresh the post to follow along – the most current notes will be at the bottom, so if you’re reading later, you can read through the recap as it happened.

7:30AM: Registration is open! So cool to see it back in the same spot, like coming home.

7:40AM: Azure Data Studio v1.40 brings Apple Silicon support. It’s now compiled natively for ARM, so it’s faster. The keynote hasn’t started yet, but I just noticed that got published today, and I wouldn’t be surprised if it was mentioned onstage. I’ve used Azure Data Studio on my Mac for quite a while, but the speed improvements will be very welcome because startup time has been pretty slow. (I know, I know, I should just leave it running.)

8:02AM: The doors opened, and attendees are pouring in.

8:10AM: Kate Duggan, Chief Marketing Officer for Redgate, took the stage to welcome everyone and talk about how big the event is. (The WiFi is overloaded, so I’m probably not going to be able to upload photos of things as they unfold.)

8:15AM: Amusing behind-the-scenes videos of people traveling to their first in-person event in years. (Seriously, this is more true than y’all might know – several speakers I’ve talked to have said, “I totally forgot things I used to routinely bring to events, and I forgot how I used to pack.”)

8:20AM: Rohan Kumar, Microsoft’s Corporate Vice President of Azure Data, takes the stage.

Rohan: “While all of you are familiar with the star player, SQL Server…” Not a bad way to segue into saying, “I’m gonna talk about Azure stuff because you already know SQL Server.” That’s fair. Keynotes are useful for vendor education, and as rapidly as things keep changing, people need Azure education.

8:26AM: SQL Server 2022 is generally available now. That’s not coming from the keynote – that’s from a blog that probably went live a few minutes too early. Here’s the full release post.

8:30AM: The winner of my Guess the Release Date Contest is interesting. Nobody actually guessed 2022/11/16 in the proper date format, at least. Scott Holiday was the only person to guess 2022/11/15 (closest without going over), so he wins.

8:36AM: Failover to Managed Instances isn’t ready yet. In the footnotes of the release post, Microsoft notes:

The bidirectional disaster recovery capability of the Link feature for Azure SQL Managed Instance is available in limited public preview. General availability will occur at a future date.

That’s a bummer because it’s one of the most widely anticipated features, but at the same time, you don’t wanna use something that complex until it’s ready. It reminds me of how database mirroring wasn’t supported until a service pack came out.

8:39AM: Rohan acknowledges that some companies aren’t moving to the cloud yet, or can’t due to country regulations.

8:40AM: Announcing a new pay-as-you-go SQL Server licensing model. You can already do this in the cloud (AWS/Azure/GCP), but now Microsoft will rent you the licenses on premises, too.

8:41AM: Bob Ward and Anna Hoffman took the stage to talk about Azure-connected SQL Servers. God bless ’em for trying to make a security demo amusing.

8:46AM: Anna demoed the pay-as-you-go license choice built into the SQL Server 2022 installer. Nobody clapped, but I seriously should have clapped because that’s bad ass. Assuming the Azure authentication is pretty quick & easy – and I bet it would be – then this makes managing your licenses way easier. Let Microsoft do the accounting.

8:48AM: Anna demoed the failover and failback feature of SQL Server 2022 to Azure Managed Instances. (This is the part that is only in preview now, not generally available yet.)

8:51AM: Video about how Mediterranean Shipping Company (yay, Javier Villegas) is leveraging the Microsoft data platform.

8:53AM: Bob Ward and Conor Cunningham taking the stage in cowboy hats, to the tune of country guitar music. Since 2022 is code named Dallas, they wanted Rohan Kumar to wear a cowboy hat.

8:55AM: Bob Ward demoed Degree of Parallelism Feedback, using Perfmon (there wouldn’t be a keynote without Perfmon) to measure parallelism. Query Store showed how query performance improved over time: as the feedback cut the degree of parallelism, the query got faster with fewer cores. Conor talked about how 2016’s Query Store built the foundation for this, and how he’s excited to see tools continue to leverage it. He casually mentioned that row mode queries don’t scale well beyond 8-16 cores, but batch mode queries do.
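For context, DOP Feedback stores its history in Query Store and hides behind a database scoped configuration in SQL Server 2022. A rough sketch of what turning it on looks like – the database name here is just a placeholder, so check the docs before copying:

```sql
-- Hedged sketch: enabling SQL Server 2022's DOP Feedback.
-- [StackOverflow] is a placeholder database name.
ALTER DATABASE [StackOverflow] SET QUERY_STORE = ON;          -- feedback history lives in Query Store
ALTER DATABASE [StackOverflow] SET COMPATIBILITY_LEVEL = 160; -- 2022 compatibility level required
GO
USE [StackOverflow];
ALTER DATABASE SCOPED CONFIGURATION SET DOP_FEEDBACK = ON;
```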

9:00AM: Conor Cunningham showing a preview build that does lock escalation differently. This demo is just about impossible to follow. It would have been better in plain English to just say, “We’re trying to solve lock escalation problems,” hahaha.

9:01AM: Lindsey Allen jumped onto the stage to show Azure Data Studio with GitHub Copilot. She started by writing a comment to explain what she wanted, then typed SELECT, and Copilot guessed the function she wanted based on her comment. It’s basically English Query. It will be amusing to see SQL Server users discover the intellectual property risks of Copilot that developers in other languages are already arguing (and suing) about.

9:05AM: Rohan back onstage to talk about Managed Instance Link feature (HA/DR failover to/from cloud), and backup portability (restore MI backups down on-premises to SQL Server 2022.) The slide says “General Availability”, but that’s a little tricky since the 2022 RTM release post says MI Link isn’t actually generally available today.

9:09AM: Azure Cosmos DB for PostgreSQL lets you run your Postgres apps with the data stored inside Cosmos DB, without having to change your apps. No applause here – it just isn’t the right conference for it. Was worth the 30 seconds he spent on it, but no more than that. It’s a neat idea though.

9:10AM: Azure Synapse Link for SQL is generally available, which makes your ETL processes way easier. Evidently they’ve just added a Schedule Mode, smaller compute capacity support, vNet support, and full support for datetime data types. I read that list and go, “Uh, you tried to launch an ETL product without those things?!?”

9:14AM: Azure Data Factory SAP CDC Connector in general availability, Azure Synapse Mapping Data Flow for M365 Graph is in public preview.

9:26AM: The WiFi in the conference center died for a while. I wrote a few updates, but lost them in the WiFi outage. <sigh>

9:27AM: Hugo Kornelis is fighting AML, so there’s a heart-touching video playing about #TeamHugo.

9:30AM: It’s a wrap! Off to today’s sessions.

Office Hours: Read This, Watch This Edition

SQL Server

This week, a lot of the questions you posted just need pointers to resources elsewhere. Welcome to the Read This, Watch This series of answers.

Don’t Blame Anthony: Our developers prefer writing TSQL queries directly in their .NET code rather than writing sprocs and calling them. What are the pros / cons to this approach?

Read this.

Divakar: Is there any formula based on number of processor cores that I can set for Maxdop for a sql server

Read this.
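(For the impatient: Microsoft’s usual starting point is one NUMA node’s worth of logical cores, capped at 8. A hedged sketch of computing that from the DMVs – treat it as a starting point only, since the right answer depends on your workload:)

```sql
-- Rough MAXDOP starting point, not a one-size-fits-all formula:
-- the logical core count of one NUMA node, capped at 8.
SELECT CASE
         WHEN cpu_count / numa_node_count > 8 THEN 8
         ELSE cpu_count / numa_node_count
       END AS suggested_maxdop
FROM sys.dm_os_sys_info;  -- numa_node_count column requires 2016 SP2 or newer
```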

Mr. No: What is your favorite NOSQL database?

Read this.

Dan Griswold: How would you manage access to SQL Server Agent jobs. Our development team shares jobs and they all want to have access to run and modify them. Of course, sysadmin is the only way to grant that privilege. So, we created a sql account and gave them all the pswd. Any better ideas?

Read this.

Jr Wannabe DBA: Hi Brent, once in a lifetime I saw a new index blocking insert operations to a table. What are the reasons it can happen? The support team removed the index before I was able to take a look at it, but it looks like the index created some restriction on what can be inserted. Thx.

Watch this video and this video.

John Martin: A query is imbedded in a compiled program. Is there a way to force it to use a specific index? I can’t change the query. It often picks the wrong index even if we’ve updated statistics and rebuilt the index

Read this.

Magnús: Do you have any scenarios where you like to use the QUERY_OPTIMIZER_COMPATIBILITY_LEVEL query hint?

Watch this video. 

Barney Fife: Worried about unmanaged / unlicensed SQL servers running under someone’s desk. What are some good ways to automate discovery of all SQL servers on the company network?

Try the Microsoft Assessment and Planning Toolkit.

ScenarioFromRealWorld: You log on to a SQL Server to help your client. You notice that almost every process/connection in the SQL Server is using the sa account. Both locally and also from clients through the network. Would you say anything regarding this, and if so, what would you say to your client?

Read this and this.

Leif Hole: Hi Brent ! Is there a noticeable performance difference between cluster index on one vs two integer columns ?

Watch this video.

Dominique B: What’s the white (moto) helmet’s story ?

Read this.

MyFriend: Hi Brent. sp_BlitzIndex (latest) is reporting “Index Hoarder: Unused NC Index with High Writes” for some indexes that have 0 Reads in Usage stats. But have numbers in “singleton lookups” and/or “scans/seeks” in Op Stats. Why is that & would you still consider them safe to drop?

Watch this.

Ramil: Hi Brent! Is there a tool to replicate production load on other environment? For example, we make backup or snapshot of a database, then we start to record all transactions with sessions. Then we replicate this load on other environment as it was on prod.

Read this.

MyFriend: Hi Brent Have you ever “fixed” Parameter sniffing problems, because of spills to disk, by turning Adaptive Memory grants off in SqlServer2019? And if so, what would be the main drawback in doing so?

Watch this.

CliveP: Hi Brent. Can you let me know how you would handle an Aggressive under index warning on a table where there are no missing recommendations and I don’t have access to the application code?

Watch this.

I chuckle a lot when I see data professionals banging their heads against a wall on a hard problem, Googling their brains out, frustrated because they’re not finding any easy answers.

You’ve been consuming my free material for years. You’ve worked hard to polish your knives, to continuously get better.

But for some reason, you think you’re too smart for training classes.

Does it ever occur to you that the reason why you’re not finding any easy answers is because there aren’t any? You’re a senior data professional. You’ve run out of easy buttons. It’s time for you to upgrade your skills so you don’t have to keep beating your head against that wall.

Level 1 Bundle
$395 (normally $1,785)
  • SQL ConstantCare®
  • The Consultant Toolkit
  • All Fundamentals classes

Or, Buy It for Life.

Pay just once, and get access to my recorded training classes for the rest of your life. (Note that these don’t include the SQL ConstantCare® or the Consultant Toolkit apps.)

Fundamentals for Life
$595 (normally $795)
  • All Fundamentals classes
  • All future Fundamentals classes, too
Mastering for Life
$1,495 (normally $1,990)
  • All Mastering classes
  • All future Mastering classes, too

The Fine Print

To get these deals, you have to check out online through our e-commerce site. We only take payment via credit cards – no Subway coupons or IOUs.

Can we pay via check, purchase order, or wire? Yes, but only for 10 or more seats for the same package, and payment must be received before the sale ends. Email us at with the package you want to buy and the number of seats, and we can generate a quote to include with your check. Make your check payable to Brent Ozar Unlimited and mail it to 9450 SW Gemini Drive, ECM #45779, Beaverton, OR 97008. Your payment must be received before we activate your training, and must be received before the sale ends. Payments received after the sale ends will not be honored. We do not accept POs as payment unless they are also accompanied with a check. For a W9 form:

Can we get discounts for group buys? Not during the Black Friday sale: these prices are as low as I go all year.

Can we send you a form to fill out? No, to keep costs low during the Black Friday sales, we don’t do any manual paperwork. To get these awesome prices, you’ll need to check out through the site and use the automatically generated PDF invoice/receipt that gets sent to you via email about 15-30 minutes after your purchase finishes. If you absolutely need us to fill out paperwork or generate a quote, we’d be happy to do it at our regular (non-sale) prices – email us at