It’s Our Anniversary! Let’s Have a Sale!

Company News
0 Comments

Way back in May 2002, I posted my first blog post here.

I didn’t know what it would eventually become, but I just wanted to stake out a place online where I could write stuff, put it into a database, and have that database serve my stuff out to other people. Initially, it was a blog platform that I’d written myself, and then later I migrated it to Movable Type, then to WordPress.

I never would have predicted that it’d turn into a business, or a YouTube channel, an email list with tens of thousands of subscribers, or an open source toolkit that helps thousands of people manage their databases every month. I take that stuff for granted now, because it feels like it’s been this way forever, but it’s funny to go back and read those initial blog posts and see how much things have changed.

For this year’s anniversary sale, I’m focusing on the tools that we’ve built: The Consultant Toolkit and SQL ConstantCare®. I can’t imagine doing my job without these tools in hand every day, and I realized that I haven’t been doing a good enough job of showing you how much easier they make my work. I’ll be blogging about them this month, and I wanna give you the chance to jump in and use the same tools at a lower price:

Consultant Toolkit
$349 per year
  • Quickly gather diagnostic data with the same tool I use every day.
SQL ConstantCare
$495 per year
  • Get daily advice via email to improve your database's performance and health.

The sale is on until June 9. Happy anniversary – thanks for hanging out here for 24 years!


[Video] Office Hours in Port

Videos
1 Comment

Our cruise ship pulled into port in Victoria, Canada, so before we head out to the Butchart Gardens, let’s go through your top-voted questions from https://pollgab.com/room/brento.

Office Hours: In Port in Victoria, Canada

Here’s what we covered:

  • 00:00 Start
  • 02:17 Janis: Hey Brent, Deleted million of rows from Azure SQL clustered columnstore ix. Want reclaim free space in shared elastic pool without long table blocking, but SHRINKDB and usp_IncrementalShrink does not help much. Help 🙂
  • 03:35 Used-to-be-DBA: When you began your career, what were the IT job functions or titles that people were worried about disappearing? Why don’t people take the mindset of “always be learning’? I work with data, but I expect my actual job functions and title to change throughout my career.
  • 06:21 Not 2019: Have you had any bad experiences with SQL Server 2022’s adaptive memory grants? I’ve twice needed to use emergency MAX_GRANT_PERCENT hints because a query that needs megabytes of memory started taking a quarter of my RAM.
  • 07:37 ContestWinner: Hey Brent, when you give away bundles in your contests, is that yearly or lifetime access?
  • 07:54 johnkurt: first thanks for helping us. we see some cmemthreads and we have already ctfp=50 and maxdop=8. A microsoft page suggest maxdop up to 16 in our case. What is yours experience with maxdop greater than 8 ?
  • 10:13 Used-to-be-DBA: Hey Brent, LLMs are not deterministic. If users are using natural language to ask AI questions about data, then does that mean the query results are not deterministic? If I ask for month over month earnings, then will I always receive the same query result?
  • 11:36 Blommetje: Do you ever tinker with things like Arduino, ESP32, HomeAssistent or iot things? To monitor pool temperature or toggle the driveway lights.
  • 14:51 gserdijn: Regarding procedure sp_spaceused. Is there anything useful to say about the value of the ratio total_index_size divided by total_data_size? Any heuristics whether a database might be overindexed or underindexed? Might be only useful as a trend metric.

Turns Out DBAs Use the Consultant Toolkit, Too

When I first hired Richie over a decade ago, the very first task I assigned him was to build The Consultant Toolkit.

I said, “Whenever I’m working with a client, I have to gather the same data over and over. I run scripts like sp_Blitz, sp_BlitzIndex, sp_BlitzFirst, and so on, copy/pasting that data into a spreadsheet for later review. Then I run sp_BlitzCache a bunch of different ways to get their most resource-intensive queries, and I have to save the query plans off one by one for later review. It’s a lot of manual labor, and that costs money. Can you build something to gather all that data automatically, put it into Excel, and save the query plans as separate files, named by the problems with them, like high CPU or high reads?”

Richie: “Yeah sure.”

That app saved us so much time that we were able to lop an entire day off my SQL Critical Care engagement, taking it from 3 days down to 2, which cut costs for the client, and let us focus on the fun parts of the work instead of the drudgery.

It worked so well that whenever prospective clients would contact us even thinking about bringing us in to help, I would say, “Here’s a utility to gather a bunch of diagnostic data. Send me that, and I’ll review it for free to see whether or not I’m a good fit.” Very often, I’d be able to rip through the data in minutes and say, “Oh actually, all you need to do is ___ – try that first, and that may solve the problem altogether. Let me know either way.” I didn’t have to get connected to the server at all.

I ended up giving up on the idea of trying to build a big SQL Server consulting company, so I started selling the Consultant Toolkit to other consultants and contractors. I ran a few tutorial webcasts showing folks how it worked, and consultants bought in. Today, over 100 consulting and contracting companies around the world rely on The Consultant Toolkit to deliver their services.

Or maybe y'all were just trying to emulate the lifestyle

Over time, DBAs started buying it, too.

At first, I started replying to their purchase emails saying, “Whoa, hang on, I don’t think you really wanna buy that. That’s for consultants.”

To my surprise, they did indeed wanna buy it, and they had a few interesting usage scenarios that I hadn’t predicted:

  • DBAs who worked for giant global companies, and were often called into different teams or departments to quickly troubleshoot a server
  • Customer support engineers at software vendors who would get calls from clients saying there was a problem with the app, and the customer support wanted to check on the health of the SQL Server where the data was stored
  • DBAs who just love the First Responder Kit, and use it for performance troubleshooting, and wanna have a way to get all of the data out quickly

So for y’all, here’s how it works.

When you download the Consultant Toolkit, it’s a zip file with the app ready to go. You don’t have to install it or go through any kind of setup process. It’s a command line app: to gather diagnostic data from MyServer using your Windows auth login, you just run it from a command prompt with your server name (the exact command syntax is covered in the documentation).

There are also parameters for SQL authentication, Azure SQL DB, and Entra ID multi-factor authentication, and those are covered in the documentation.

By default, it gathers a minimal set of data within a couple of minutes. If you’d like much more data, you can add the --deepdive parameter, and it’ll gather more on the plan cache, indexes, etc.

It packages the script outputs into an Excel file with one tab for each set of results, like sp_Blitz, sp_BlitzIndex’s different parameters, sp_BlitzCache with several different sorts (top queries by CPU, by reads, by execution, by duration, etc), sp_BlitzLock, plus other diagnostic scripts that I find helpful.

In addition to the Excel file, it also outputs:

  • Each query plan from sp_BlitzCache – for example, the top 1 by CPU will be saved as TopQueryBy_CPU_1.sqlplan (and the full top 10 are saved)
  • The queries running right now are saved as QueriesRunningNow_1.sqlplan (and they’re all saved, which is useful if you’re troubleshooting a live blocking issue)
  • The deadlock graphs from sp_BlitzLock are stored as “Deadlock #1, Query #1 – VICTIM.xdl”
  • On SQL Server 2019 and higher, queries that triggered missing index recommendations are saved as MissingIndex_1_dbname_schemaname_tablename.sqlplan

When it’s done, you get a single zip file with the server’s name and date, and instructions to send that zip file back. It’s designed to be easy enough for a non-technical person to run, but the output is powerful enough for you to be able to dig into really complex issues in a matter of minutes.

If that sounds useful, check it out.

It’s on sale for just $349 during our Anniversary Sale, and I believe it’ll pay for itself within a week. You’ll do your work faster and more thoroughly, with less time involved from everybody else in the process. I’ll guarantee it with a 14-day, no-questions-asked money-back guarantee. If it doesn’t pay for itself, email me at Help@BrentOzar.com and I’ll give you your money back, period. Enjoy!


New Official Documentation on Forced Parameterization

Execution Plans
5 Comments

I love me some documentation.

For years, I’ve pointed folks to the SQL Server 2008 documentation on Forced Parameterization, a really useful tool for reducing plan cache bloat, getting more accurate reusable query plans, and enabling SQL Server 2019 and 2022’s Intelligent Query Processing (IQP) features, many of which rely on the same query text coming in repeatedly over time in order to tune it. That documentation was pretty sparse, though.
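If you haven’t tried it, forced parameterization is a per-database setting. Here’s a minimal sketch – the database and table names are hypothetical, but the ALTER DATABASE syntax is the documented one:

```sql
/* Turn on forced parameterization for one database.
   (Database and table names are made up for illustration.) */
ALTER DATABASE StackOverflow SET PARAMETERIZATION FORCED;

/* With it on, these two statements normalize to the same parameterized
   text - roughly SELECT ... WHERE Reputation = @0 - so they can share
   one cached plan instead of bloating the cache with two: */
SELECT DisplayName FROM dbo.Users WHERE Reputation = 1;
SELECT DisplayName FROM dbo.Users WHERE Reputation = 2;
```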

Good news! There’s a new set of documentation on Forced Parameterization, and it goes into more details about:

  • What kinds of literals are converted into parameters
  • The exceptions: things you would expect to get converted, but won’t be, and will stop the whole statement from getting parameterized at all
  • The partial-parameterization exceptions: things that won’t get converted, but other stuff in the query still will, leading the plan cache to continue to be a hot mess and stopping the IQP features from working on those queries

There’s also a foreboding warning:

Danger

At first blush, that sounds completely reasonable, but it carries a hidden connotation. If you’re using Query Store, and you expect a forced query plan to keep being applied from patch to patch or version to version, it might not. I’ve never actually seen these things change, but Microsoft is saying they could, which is just another heads-up that forced query plans may not be forced anymore after a change like that. This might have even always been in the old docs – I just never noticed it before. Still, it seems reasonable to me. I’d love to see more investments in Forced Parameterization.

Also, technically, we don’t have Service Packs anymore, so I was a good community member and submitted a pull request to change that wording to Cumulative Updates. As a reminder, when you see a typo or outdated language in the docs, you can click the little three-dot icon at the top right, then click Edit, and you’ll be shoved over to Github to edit there and submit a pull request:

By the power of Graybeard

If you’ve never used Github before, the process is intimidating at first, but once you get used to it, it’s a piece of cake and you can submit edits in less than 30 seconds.


“We want to archive, but we still want to query the data.”

Development
16 Comments

At least once a month, I get this question from a client:

We have big data, and we want to save money. We want to move the older data down to some kind of archives that will cost us less.

I ask exactly one followup question:

Are you willing to modify the app that queries the data, or do you want the users to still be able to query the old data in their existing screens and reports?

If you’re willing to modify the app – like having an “archive reports” section that uses a different connection string pointing to an archival server for when users wanna query the older data – then great! It’s easy. Just put together a cheap 4-core SQL Server Standard Edition VM with crappy storage, and archive the data over to that. I have all kinds of tips to make that easier and cheaper.

But you don’t wanna modify the app.

You want to still be able to use the same screens and same reports. When you wanna run an all-history trending report, or find Acme’s order from 2002, you wanna use the exact same app and same reports.

You just want old data to magically be cheaper.

Table partitioning doesn’t do that. In theory, table partitioning lets you put new data on expensive/fast storage, and old data on cheap/slow storage. In practice:

  • None of your storage connected to production servers is really that cheap.
  • The data’s still inside the same database, so maintenance jobs like backups, high availability, corruption checking, and statistics updates are still time-intensive, and development servers are still huge.
  • As data ages, and you wanna move it to cheap/slow storage, that’s a row-by-row logged & locked operation. The bigger your data is, the uglier that process gets.
  • The index designs have to be the same across all partitions. (You can do some fancy stuff with filtered indexes, but it’s a giant pain in the keister.)
  • Table partitioning comes with a lot of design & implementation considerations, basically requiring rewriting the whole table and all its indexes.
  • Every single one of your queries has to have WHERE clause filters on the partitioning column (which is inevitably a date range in these scenarios). If not, query performance gets worse because SQL Server can’t eliminate the old partitions.
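To make that last point concrete, here’s a sketch with a hypothetical dbo.Orders table partitioned on OrderDate:

```sql
/* Partition elimination works here: the date filter lets SQL Server
   skip the old partitions entirely. */
SELECT COUNT(*)
FROM dbo.Orders
WHERE OrderDate >= '2026-01-01'
  AND OrderDate <  '2026-02-01';

/* No filter on the partitioning column: SQL Server has to touch
   every partition, including all the old cheap/slow ones. */
SELECT COUNT(*)
FROM dbo.Orders
WHERE CustomerName = N'Acme';
```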

Partitioned views kinda do it – but not really. They suffer from most of the above disadvantages too, with a couple of interesting exceptions:

  • The data can be in different databases, so old stuff can be in an Archive database. You can seal that one as read-only when you’re not transferring data over from the live database. That makes backups, high availability, corruption checking, and statistics updates faster.
  • Indexes can easily be wildly different for archives – like columnstore, or tons of indexes without sacrificing the live data’s write performance.

But the rest of partitioning’s disadvantages still apply, and you’ll wanna review those, because they usually end up as showstoppers.
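For the curious, a partitioned view is just a union-all view over member tables whose CHECK constraints tell SQL Server which table can hold which rows. A minimal sketch, with hypothetical database, table, and column names:

```sql
/* Live rows stay in the current database; old rows live in a
   read-only Archive database. CHECK constraints on OrderDate
   (shown in comments) let SQL Server skip member tables whose
   constraint can't match the query's date filter. */
CREATE VIEW dbo.Orders
AS
SELECT OrderId, OrderDate, CustomerName
FROM dbo.Orders_Current               /* CHECK (OrderDate >= '2025-01-01') */
UNION ALL
SELECT OrderId, OrderDate, CustomerName
FROM ArchiveDB.dbo.Orders_Archive;    /* CHECK (OrderDate <  '2025-01-01') */
```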

Linked servers kinda do it – but not really either. In theory, you can put the old data on a cheap Standard Edition VM with less frequent backups, less high availability, and maybe even no disaster recovery. You replace the old production table’s name with a view that does a union-all across the current live data, plus a reference to the linked server’s archival data. In practice:

  • If any query doesn’t have the partitioning column (like date) as part of its WHERE clause, performance can be spectacularly bad, because SQL Server doesn’t always push the filtering down to the archival server. Sometimes it brings all of the archive data across the network wire over to the production server, and does the filtering there.
  • Linked server data isn’t cached. When multiple users run multiple queries, even simultaneously, their query data isn’t shared or cached. It’s fetched from the archive server over the network into your production server’s memory, taking up space every time, replacing stuff that your production server used to cache.
  • As data ages, and you wanna move it to cheap/slow storage, this is really slow since we’re moving across servers now, and writing to a really slow destination.
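Structurally, the linked server version of the view looks almost identical to a partitioned view – the difference is the four-part name pointing at the cheap archive VM. Again, every name here is hypothetical:

```sql
/* The app keeps querying dbo.Orders like nothing changed; the view
   quietly splits the work between local live data and the linked
   archive server. */
CREATE VIEW dbo.Orders
AS
SELECT OrderId, OrderDate, CustomerName
FROM dbo.Orders_Current
UNION ALL
SELECT OrderId, OrderDate, CustomerName
FROM ArchiveServer.ArchiveDB.dbo.Orders_Archive;
```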

I’ve seen a lot of people try to use the linked server approach. They test a bunch of queries, work to make sure they don’t touch the linked server, and then go live – only to get a horrible surprise about the queries they didn’t test. It only takes a handful of frequently-called queries accidentally fetching data from the linked archive server to bring your production box to its knees.

Try the linked server approach first.

If you still wanna try archival after reading the above limitations, then try the linked server approach. It’s the one that will produce the most business value if you can get it to work. Here’s how to approach it conceptually:

  • Set up a development environment with two servers: “production” and “archive”
  • Do the development work to replace the production tables with views that union-all across your current production data, plus the archival data on the linked server
  • Archive data over to the archival server, taking it out of production – you wanna accurately simulate the data distribution that you’re going to see in the real-life environment
  • Capture a trace of activity in real-life production for a business day
  • Set up a trace of activity on the development “archive” server
  • Replay the production trace on the development “production” server
  • Stop the trace of activity on the development “archive” server, and review its contents. Those are the queries that are going to hit your development “archive” server in real life. If you’ve done your work correctly, few queries will be hitting that server – because if they are, then you’re frequently querying archive data, and what was the point of all this?
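One way to implement that trace on the development “archive” server is a lightweight Extended Events session. The session and file names below are made up; the syntax is the standard CREATE EVENT SESSION form:

```sql
/* Capture every completed batch on the development "archive" server,
   along with the query text and calling host, so you can see exactly
   which queries escaped to the archive during the replay. */
CREATE EVENT SESSION ArchiveQueryAudit ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.sql_text, sqlserver.client_hostname))
ADD TARGET package0.event_file (SET filename = N'ArchiveQueryAudit.xel');

ALTER EVENT SESSION ArchiveQueryAudit ON SERVER STATE = START;
/* ...run the replay, then: */
ALTER EVENT SESSION ArchiveQueryAudit ON SERVER STATE = STOP;
```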

It’s a lot of work – but it sure is better than yolo’ing your way through this and discovering how bad it’s going to be after you’ve already archived in production, and you have to move it all back.


Free SQL Server Spring Training on Performance Tuning

Times and tech are changing quickly. Let’s get you up to speed on the latest developments to do your work more quickly with a series of free live webcasts.

How to Tune Indexes Faster – May 12 & 18
Brent Ozar will show you how to use the latest features of sp_BlitzIndex to rapidly improve performance on an existing database. He’ll show you how to figure out quickly if you’ve got too many indexes or not enough, and then demonstrate how to get advice from AI on how to consolidate them in seconds.

How to Tune Queries Faster – May 14 & 20
Brent Ozar will show you how to use the latest features of sp_BlitzCache to rapidly improve performance on an existing server. He’ll show you how to find your server’s top bottleneck, find the queries causing that bottleneck, and then demonstrate how to get advice from AI on how to tune them in seconds.

How to Gather Diagnostics Faster – June 2
Whether you’re a consultant, a full time DBA at a big company, or just a lone sysadmin in a hurry, sometimes you need to gather a lot of diagnostic data, quickly, while SQL Server or Azure SQL DB is having a performance emergency. You wanna grab all this without running a bunch of time-consuming commands, and you want to be able to review the data later, at your leisure, to understand what was happening on the server. In this webcast, Brent Ozar will show you two ways to do it: by setting up Agent jobs to run the First Responder Scripts, and by using the Consultant Toolkit.

How to Get Daily Database Advice – June 4
You’ve got a lot on your plate, and SQL Server is just one of the things trying to get your attention. You’re looking for a way to have someone else keep an eye on your servers to alert you about really important tasks that can’t wait. You’ve tried third party monitoring apps, but they’re constantly crying wolf, spamming your email in-box with warnings about things that don’t really matter. In this webcast, we’ll show you how to install the totally free SQL ConstantCare app, the kinds of things it warns you about, and explain our monitoring philosophy. If you’d like to get performance advice with the paid version, we’ll show you how that works, too.

All of the meetings start at the same time: 1PM Eastern, 10AM Pacific. If you can’t make the classes, we’ll send out recording links after the classes finish. The classes will be lecture-only – you don’t have to follow along on your machine, and you don’t have to install or configure anything.

Register here on the blog. See you in class!


Who’s Hiring Database People? May 2026 Edition

Who's Hiring
8 Comments

Is your company hiring for a database position as of May 2026? Do you wanna work with the kinds of people who read this blog? Let’s make a love connection.

You probably don't wanna hire these two.

If your company is hiring, leave a comment. The rules:

  • Your comment must include the job title, and either a link to the full job description, or the text of it.
  • An email address to send resumes to, or a link to the application process. If I were you, I’d use an email address, because you may want to know which applicants are readers here – they might be more qualified than the applicants you regularly get.
  • Please state the location and include REMOTE and/or VISA when that sort of candidate is welcome. When remote work is not an option, include ONSITE.
  • Please only post if you personally are part of the hiring company—no recruiting firms or job boards. Only one post per company. If it isn’t a household name, please explain what your company does.
  • It has to be a data-related job.

If your comment isn’t relevant or smells fishy, I’ll delete it. If you have questions about why your comment got deleted, or how to maximize the effectiveness of your comment, contact me.

Each month, I publish a new post in the Who’s Hiring category here so y’all can get the latest opportunities.


Free Webcast: Fixing Parameter Sniffing with Index Tuning

Indexing
4 Comments

I'm trying not to sniff your parameters

When a query is sometimes fast and sometimes slow, there are a lot of ways you can reduce the effects of parameter sniffing. We’re always going to have parameter sniffing in SQL Server and Azure SQL DB – it’s just the way the product is built – but there are a lot of options to reduce the blast radius.

One of the most powerful ways to do it – especially if you’re stuck with code you can’t change, like code generated by Entity Framework or other ORMs – is to tune indexes. It sounds odd, but by changing the shape of indexes, you can reshape execution plans – either encouraging SQL Server to take the paths you want, or removing paths that result in inefficient plans.

On May 21, I’m doing a free webcast with Idera on How Index Tuning Reduces the Blast Radius. In there, I’ll also be demoing sp_BlitzIndex’s new AI features, which can help you tune indexes across more tables, faster, using whatever LLM you like, hosted or local.

See you there!


So You Wanna Build an App

AI
6 Comments

I get a lot of emails like this:

Hi! I built an app. Can you take a look at it, give me your feedback, and then help get the word out to your readers?

No. I wish I could, but no.

But this happens so often that I wanna write a blog post explaining the position I’m in.

I only have so many hours in the day. I’m usually booked 3-4 weeks in advance. I set strict work/life balance guardrails on my time, so any work I add to my calendar either has to displace paid client work, or free community work like scripts & blog posts & videos, or my personal life. I just can’t displace any of that for free private work.

I do absolutely consult for software vendors. I used to work for Quest Software, and I’ve consulted for Amazon, Google, and a lot of third party software vendors. It’s real work that requires real effort on my part, and I need to get paid for that.

Having said that, I still wanna help you for free, so I’ve put together this blog post with my favorite advice for software makers. There’s a lot of hard-learned lessons in here, and I hope you can just read ’em and avoid some of the most common pitfalls that folks run into.

The Mom Test

First, read The Mom Test.

The book The Mom Test by Rob Fitzpatrick teaches you what questions to ask about your app idea, and who to ask. It sounds simple, but it’s not. Instead of approaching people with your solution, what you wanna do is ask them about the problem that your app solves. You also absolutely do not want to give them answers, nor do you want to lead them, nor argue with them. You want to ask questions that prompt them to tell you what they do, without skewing their answers.

For example, let’s say you’re building a SQL Server monitoring tool. Instead of mentioning anything about a monitoring tool, the first questions you might ask:

  • “When was the last time you thought your SQL Server was having a performance problem?”
  • “What steps did you take to investigate it?”
  • “How long did that take?”
  • “Did it get you across the finish line?”
  • “Are you satisfied with that? Are your users and manager satisfied with that?”

This helps you understand your real competition. Your toughest competition is complacency. Your target audience is already used to solving the problem, so any new solution is going to require them to research the options, evaluate them, implement one of the solutions, learn to use it, and remember to use the new way when the problem strikes. Those are huge, huge barriers to adoption.

If they’re satisfied with their current solutions, stop. Give up. Look for another problem to solve. (I know you’re not convinced yet, so keep reading to see the rest of the hurdles you’re going to need to jump.) It’s not that you can’t build something better – it’s that your marketing costs will be unbelievably high. If your target market is already happy with what they have, they’re not going to switch to your product without yours being spectacularly better, cheaper, and simpler.

But if they’re DISsatisfied, congratulations! You’ve found a potential market! Keep going and ask the next question.

Next, ask where they’d look for alternatives.

In our monitoring system example, you’d ask, “It sounds like your manager is dissatisfied with the high costs of your SQL Server performance problems, so…”

  • “Have you looked for alternatives?”
  • “Where did you look?”
  • “What alternatives did you find?”
  • “What stopped you from switching to one of them?”

These questions help you understand what your target customers are looking for, the marketing routes you’ll need to take to get your product out there, and the common objections as to why your potential customers aren’t using your competitors.

In our performance monitoring example, let’s say our target user says they Googled for stuff. After you’re done asking them questions, you want to thank them, then go off and repeat the same steps they did. For example, say they Googled – then fire up a fresh VM from a cloud provider, not signed in to your accounts and without any ad blockers, so nothing skews the results, and do the same kinds of searches they did, so you can see what you’re up against:

Google results

Look at the first page of results, above the fold. This is your second layer of competition (after complacency), and it includes the AI summary at the top, and the sponsored results. These are all things your users are going to consider.

Make a spreadsheet of the various options, with the competitors going down the left side and the competitive attributes across the top. The competitive attributes aren’t the things you’re proud of: they’re the things your target user said they were dissatisfied with. For example, if the target user said they were having problems with the monitoring tool being too slow, too expensive, and not compatible with Mac desktops, those are the things you’d put across the top, and gauge each of your competitors to see whether they would solve your target user’s problems.

Sometimes you’re going to find that the competition actually already solves the target user’s problems.

Because sometimes, the target users are lying to you.

They’re telling you that they looked for alternatives, but they didn’t. They’ll tell you they tried the alternatives, but they didn’t. They’ll tell you the alternatives had a specific problem, but it was years ago, and the alternatives have long since solved that problem. Your target users just haven’t found out because – again, complacency.

I know, because I see it all the time with my own clients. (That includes you, dear reader.) Folks gripe about a problem – monitoring, source control, database management tooling, you name it – but when I ask the shallowest questions about what work they’ve done to replace those options, the answers become vague and evasive – because the problem isn’t bad enough that it bubbles up to the top of their task list.

The Lean Startup explains who to ask.

The Lean Startup

In a perfect world, you’d ask perfect strangers, but getting contact info for perfect strangers is pretty hard.

The book The Lean Startup covers how to do it, and I’m gonna try to summarize it as best I can. Build a short 1-page landing page describing the pain you think your target market is facing, and say that you’re working on a new way to solve it. People can put in their email address to get notified when the beta version is available.

Then, buy Google, Facebook, and Instagram ads for the kinds of keywords you think your target market will be searching for. Say you’re building a SQL Server monitoring tool – you don’t actually have to buy keyword ads for that, because that’s expensive. Instead, you can buy keyword ads for related topics with less competition, or have Facebook build you a target audience for the kinds of people who visit a particular web site.

You can – and should – do this even before you’ve got a product built. (That’s why it’s called “lean” – you don’t have to spend much.) This way, as contact emails come in, you can use the questions you built from The Mom Test to talk to them about their pain, and where they’ve been looking for relief. This helps you validate your product idea long before you invest in it.

At that point, you’ll understand the pain, market, and your solution.

You’ll understand more about the pain you’re trying to solve, how your target users are relieving that pain today, and how your proposed pain relief will need to look in order to get any market traction. Ideally, you do these questions first, before you build something, so that you save valuable time. I’m not saying you’ll save time by not building altogether – I’m saying you’ll build something that solves the real pain users are facing, not the pain you think they have, and not the pain your competition thinks the users have.

Ideally, you build something that solves it in a clear simple way.

Non-ideally, you’re reading this after you’ve already built something – but at least now you understand how you need to modify your app, pricing, and marketing material in order to be the #1 competitor in your target market.

Next, build your minimum marketing materials.

At minimum, you need a single web page (which can be the readme.md in your app’s Github repo, if you’re open source) that explains:

  • The pain the user is facing
  • A clear, 30-60 second video demoing your solution relieving that pain (can also be an animated gif)
  • A 2-3 paragraph explanation of how your solution relieves that pain
  • Pricing (or if it’s open source, the license)
  • A download link, plus a link to minimum requirements and installation instructions, with a video showing the fastest, simplest installation method
  • A few 1-5 minute videos showing common use cases, and 2-3 paragraph text explanations with screenshots of those use cases, covering folks who wanna read or folks who wanna watch/listen
  • A link for where to get support or file issues
  • A signup form to join your email list for announcements

You want the reader to go from top to bottom to get their objections out of the way, in order, so that they’re gradually more and more sold on your product. Think back to the target users that you interviewed, what they were searching for, and why they didn’t choose one of the existing alternatives. Your page needs to make it clear that your solution doesn’t have those problems that prevented ’em from using one of the alternatives.

I’m going to use Erik Darling’s SQL Server Performance Monitor page as an example:

Page as of 2026-04-02, will probably change over time

That one first line addresses his target market’s most common concern: performance monitoring tools are expensive.

Then, go back to the target users that you interviewed and tell them, “Hey, while I was doing research, I stumbled across this page. Can you take a look at it?” Have them go through the page, and just listen to their reactions. Ideally, when you’re just getting started, you don’t want your name on the page: you just want their raw reactions as to whether or not they’re interested. If they jump up and point at the screen, saying they gotta have this thing, then you’ve done a good job.

Then the real work starts.

I know, I can hear you from here, yelling, “OMG, Brent, I thought the rest of this was work!” Nah, that’s just table stakes.

Remember back when our target users said they looked for alternatives, and where they looked? You need your solution to show up there. Some of those locations are going to be genuinely hard, like getting on to the first page of Google search results. You may have to resort to expensive ads on Google, Facebook, Instagram, etc., and those also really need expensive consultants to get the job done correctly.

You probably wanna start with the cheapest, least-popular places where your target users looked. For example, maybe they looked in Reddit or a particular forum. You wanna be active there, showing people how to solve their pain points with your solution, without being spammy.

You can’t just announce the product once and expect the users to come rolling in, either. You have to keep talking about the pain, your relief, and how it works. Again, I’m going to use Erik as an example. If you go to his blog or his YouTube channel, odds are that at least one post in the last week or two is about his free Performance Monitor. He has to keep demonstrating it in different ways, showing its value, showing how easy it is, because people never hear you the first time.

That work never ends. Ever.

I announced sp_Blitz, my free SQL Server health check stored procedure, way back in 2011, and I still have to blog about it and the rest of the First Responder Kit stored procedures all the time. New users keep coming into our community, experiencing the same database pains, and I have to keep showing them how to get relief. All this, even though my solution is free!

I can’t do all this for everybody.

I truly wish I could. I get so many emails from people with really cool ideas. Heck, the thing that motivated me to write this post was that I got an email from somebody who’d built an open source tool on GitHub that sounded like something I’d personally love to use – but the GitHub readme included a 2-page wall of text about the product, no screenshots, no license, no list of versions or platforms supported, etc. I felt like I was part of the target user market – but I would have to do so much work on my own part to even figure out if the thing could work for me, and … I just don’t have time for that.

In the age of AI, where anybody can haphazardly describe something and a robotic assistant will slap something together that looks vaguely professional, building an app is the easy part. It’s too hard for consumers to sift through the noise to find the good stuff.

However, we’re also in the golden age of open source, where it’s never been easier to manage a public open source project. We now have all kinds of AI tools to help us vet code contributions from strangers, quickly with low effort.

So if you’re thinking about building something to relieve a pain you’re experiencing, and you don’t wanna do everything described in this article, I’d highly encourage you to look at the open source tools you already use on a day-to-day basis. Could those tools be modified to more helpfully address the pain point you’re experiencing? Are the project maintainers open to code contributions from you that aim to relieve that pain? Consider starting there instead of building an all-new tool. You’ll get the satisfaction of helping people get their pains relieved – without the pain that comes with trying to get a new product adopted by the market.

I know, it’s sexier to be the person who runs/owns The Product™, and it’s tempting to think you could be the next SaaS gazillionaire from your Incredible Idea™, but the reality is a lot harder than that, and it’s a grind. It’s absolutely doable, and you CAN do it – but it’s unrealistic to expect other people to do that grind for you.

The least I can do is give you a roadmap for that grind. If you do that grind, and you build the minimum page described above, THEN email me a link to it! That way, I can take 5 minutes to read through it, scan a couple of the videos, and give you my feedback for free. Good luck, happy grinding, and I’m excited to see what you build!


You Can’t Trust “Edit Query Text” in SSMS’s Execution Plans.

SQL Server Management Studio
0

When there are IF branches or conditional logic, “Edit Query Text” only shows you the branches that actually executed. It’s easy to miss whole swaths of code that didn’t happen to execute one particular time.

To demo it, let’s create a simple stored procedure with an IF branch that sometimes executes, and sometimes doesn’t:
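
The demo code isn’t reproduced here, but a minimal sketch of that kind of proc (hypothetical dbo.Users table and column names) looks like this: an UPDATE, an IF branch that only runs when the UPDATE touched nothing, then a final SELECT:

```sql
CREATE OR ALTER PROCEDURE dbo.TouchUser
    @DisplayName NVARCHAR(40)
AS
BEGIN
    UPDATE dbo.Users
      SET LastAccessDate = GETUTCDATE()
      WHERE DisplayName = @DisplayName;

    /* This branch only executes when no matching user was found: */
    IF @@ROWCOUNT = 0
        INSERT INTO dbo.Users (DisplayName, LastAccessDate)
        VALUES (@DisplayName, GETUTCDATE());

    SELECT DisplayName, LastAccessDate
      FROM dbo.Users
      WHERE DisplayName = @DisplayName;
END
GO

EXEC dbo.TouchUser @DisplayName = N'Brent Ozar';
```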

Because there’s a user named Brent Ozar, a row is updated, which means the IF branch doesn’t execute. The actual plan shows the first & last statements executed:

sp_BlitzCache

Click on the query plan, which shows all 3 statements, including the IF branch that didn’t execute, which is good:

Cached plan

If you’re doing performance tuning based on that estimated plan, at least you’ll get a full picture of what could happen depending on what values were passed into the proc. You can see the IF branch in the plan’s XML, too, if you right-click on the query plan and click Show XML:

IF branch in XML

But if you’re one of those poor souls who relies on the Edit Query Text button when looking at plans, boy, have I got some bad news for you:

So much missing in action

The IF branch’s contents don’t show up – and hell, the END statement doesn’t even show up! The “warning” up at the top about truncation really pisses me off because it leads people to believe that only stuff at the end might be truncated, like the “END” statement. However, even stuff in the middle can be missing in action! That’s wild.

So me personally, I try to make it a habit to never use Edit Query Text. Go to the source of the query instead – script out the proc, the function, or go to the app code to see what it’s doing.

I’m trying to be a better community member, and log SSMS issues whenever I blog about them, so if you’d like to keep updated on Microsoft’s work to fix this issue, here’s the feedback item. I’m not asking you to upvote it – we don’t need a storm of upvotes here since it’s not really critical, and I trust the process to do its thing. I just wanted to make you aware of it because most of you aren’t updating your SSMS on a regular basis, so you wouldn’t get a fix quickly if it comes out, and this bug has been this way for as long as I can remember.


There Are Days When I Feel Like Giving Up on the Plan Cache and Query Store.

Monitoring
15 Comments

In theory, SQL Server performance monitoring is pretty simple:

  1. Review the server’s top wait types
  2. Find the queries causing those wait types
  3. Fix those queries, or improve the way the server reacts to them (indexes, settings, etc.)
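
Step 1 doesn’t even require a monitoring tool. Here’s a minimal sketch of a wait stats query – sp_BlitzFirst does a much more thorough, properly filtered version of this:

```sql
/* Top cumulative wait types since the last restart.
   The NOT IN list is a tiny sample of benign waits;
   real scripts filter out dozens more. */
SELECT TOP 10
       wait_type,
       wait_time_ms / 1000.0 AS wait_time_seconds,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'LAZYWRITER_SLEEP', N'SLEEP_TASK',
                        N'SQLTRACE_BUFFER_FLUSH', N'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;
```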

But in practice, step 2 is awful because:

  • Apps send unparameterized strings to the database server
  • Entity Framework users build queries with FromSqlRaw or string.Format()
  • Entity Framework users write queries with .Contains, which builds an unparameterized IN list, even when they’re only looking for a single value (which got better in EF9)
  • People write sloppy dynamic SQL that just concatenates values directly into the query string
  • SaaS developers put each client in their own database, and plans aren’t reused across databases
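
To make that concrete, here’s a hedged sketch against a hypothetical dbo.Users table: the plan cache and Query Store treat the first two statements as two completely different queries, while the sp_executesql version compiles once and reuses its plan for every value:

```sql
/* Two "different" queries, as far as the plan cache is concerned: */
SELECT DisplayName FROM dbo.Users WHERE Reputation > 100000 ORDER BY Reputation DESC;
SELECT DisplayName FROM dbo.Users WHERE Reputation > 200000 ORDER BY Reputation DESC;

/* One query, one reusable plan: */
EXEC sp_executesql
    N'SELECT DisplayName FROM dbo.Users WHERE Reputation > @Rep ORDER BY Reputation DESC',
    N'@Rep INT',
    @Rep = 100000;
```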

So as a result, the plan cache and Query Store are damn near useless because every query that comes in is seen as a “new” query. I wrote about this back in 2018, and since then, the problem seems like it’s gotten continuously worse. I’ve been tracking client servers over the last couple of years, and these days, 1 in 3 servers I face has these issues. My lens might be distorted since maybe people who aren’t having this problem are all solving their performance issues with conventional tools like sp_BlitzCache and Query Store, but even if that’s true, there’s a problem bigger than human query troubleshooting.

Modern versions of SQL Server are increasingly reliant on properly parameterized queries. Tools like Automatic Plan Correction (aka Automatic Tuning), Adaptive Memory Grants, Adaptive Joins, CE Feedback, DOP Feedback, Parameter-Sensitive Plan Optimization, and Optional Parameter Plan Optimization all rely on proper parameterization so that they can adapt to the same query over time. If every query comes in wearing a disguise, these features just don’t work.

There’s a database-level switch that’s supposed to help: Forced Parameterization. Turn it on, and SQL Server and Azure SQL DB examine every incoming query, and if it isn’t fully parameterized, the literals are stripped out and replaced with parameters. The problem is that it doesn’t work in a lot of situations:

I'M MAD AS HELL AND I'M NOT ... actually I'm going to keep taking it because I get paid to take it

  • Partially parameterized queries – if a query has any parameters, Microsoft assumes the whole thing is parameterized, which is especially problematic for EF’s .Contains
  • Literals in the SELECT list, like SELECT 1 AS ClientId, …  – which always blows me away when I see them, but strangely it’s a commonly used technique for reasons I will never understand
  • Literals in HAVING, GROUP BY, and ORDER BY
  • And more, as explained in this SQL Server 2008 R2 documentation page that has never been updated
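
For reference, the switch itself is a one-liner per database:

```sql
ALTER DATABASE CURRENT SET PARAMETERIZATION FORCED;

/* ...and to go back to the default behavior: */
ALTER DATABASE CURRENT SET PARAMETERIZATION SIMPLE;
```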

Even when Forced Parameterization does work, turning it on suddenly causes parameter sniffing emergencies. Queries that used to get their own hand-crafted plans suddenly get reusable plans, and while that’s great for performance monitoring, it’s not so great for end user performance in some scenarios. Your application might have 1,000 queries, and 990 of them might be just fine with reusable plans – but those 10 others represent 10 different parameter sniffing emergencies that are going to strike out of nowhere, and keep striking if you don’t fix the queries for good.

I don’t have any answers.

It just feels like there are two kinds of shops:

  • The ones who properly parameterize everything, and can leverage the plan cache and Query Store, but suffer from parameter sniffing emergencies. So they have good monitoring, but they need it, because they face performance emergencies from time to time.
  • The ones who don’t parameterize at least some of their code, so the plan cache & Query Store are largely useless at best, misleading at worst – but they don’t have parameter sniffing emergencies. So they have bad monitoring, but … they don’t care as much.

Some days, I look at that latter group and say, I get it. Not all the time! Most of the time, I want the cool features built into modern versions of SQL Server, Azure SQL DB, and Intelligent Query Processing. But some days… some days, I want to embed literals into all my queries.


Contest Winners: Make the Comments Look Like My ChatGPT History.

AI
4 Comments

I asked y’all to make the comments on this post look like my ChatGPT history, and y’all understood the assignment. The answers were all over the place from accurate to humorous. Here are the 5 runners-up first, each of whom won a Fundamentals Bundle for themselves or someone they love, since many of these folks already had a Fundamentals membership:

“What is the closest distance to walk by a pool without any chances of falling in?” – Kelley Pribil – This calls back to this 2023 incident caught on my home security cameras. I laughed out loud when I read this because I still think about this incident every time I walk around that area of the pool. “Gotta be careful, don’t wanna take a swim right now…”

“Write a short article about a Contest that is called ‘Make the Comments Look Like My ChatGPT History'” – Cavaliery – Back in 2023, I bought a WordPress plugin that would write articles for you, plus content review, tweaking, etc. I experimented with it for a while, bought a separate domain with no names on it, had it author a bunch of Azure posts automatically, and watched the web page analytics to see if it would catch on. I ended up shutting it down – it was too early for the tools to write good stuff – but these days I wouldn’t be surprised if it was a lot more successful.

“Can I route the OBDII car performance metrics to the local SQL Server instance?” – Andrew Buis – Around 2003, I built a car computer for my gooorgeous Volvo C70. It was long before automotive cellular connections were an affordable thing, so I had it watching for open WiFi networks, and when it found one, it would automatically download my mail and read it to me, among other things. This question is exactly the kind of thing 2003 Brent would have asked ChatGPT.

“Write a gentle rejection email to someone who submitted a pull request to First Responder Kit that technically works but that I will never, ever merge.” – Rebecca – This is a great idea and I will actually do it, hahaha.

“What problem are you trying to solve” – Zweitze – There were several takes on this joke, but I like Zweitze’s best because it turns things around on ChatGPT instead of my readers. I can just imagine looking at some garbage coming out of ChatGPT, getting exasperated, and yelling, “Yo, what problem are you trying to solve with this?!?” (I have to confess that I’ve yelled some pretty strong stuff at GitHub Copilot during code reviews when it gets distracted.)

And the winners…

Who each pick up a Fundamentals & Mastering Bundle for themselves or to re-gift to a passing stranger:

“1. What are the top 10 tropical vacation hotspots? 2. What gin and tonic drinks are rated in the top 5 best? 3. How many licks does it take to get to the center of a tootsie roll pop?” – Jersey – I have actually prompted ChatGPT with the contents of my bar and fridge, told it that I prefer very easy-to-mix drinks, and that’s how I started drinking Boulevardiers. Really rough hangovers though, so I quit that.

“Explain the ending of “Lost” like I am 5 years old. Is the plural of “index” “indexes” or “indices”? Also, how do I spell “mantenance”?” – Scott – I adooooored Lost, and I never thought about the plural of index, and maintenance is one of those words that took me years to turn into muscle memory.

I’m going to separate this last one by Alex into its own section because it’s long:

A long one by Alex

Please give specific, actionable advice for the following business problem. Do not give advice that recommends changing my brand, creating vast amounts of new content, or undermining my client’s integrity. Provide a complete answer in a friendly, business minded tone.

I’m looking for some advice on how to bring in more clients for my SQL Server consulting and training business.

I have a couple of avenues I can use to reach an audience:
1. I run a community blog that posts most working days, with ~150,000 unique weekly visitors. Engagement is driven by my email newsletter, which is sent out after each post, has an open rate of 30%, and contains a copy of the most recent post.
2. I run a weekly YouTube Q/A roundup that gets around ~2K views over the first week. Each video covers between 8-15 questions from my community submitted via a poll.
3. I build and maintain a set of free maintenance SQL Server scripts that are widely used in and outside of my community.
4. I run a query plan tuning website that sees ~75k unique weekly visitors.
5. I host in-person training sessions at a variety of database conferences.

I have recently taken some time off for the first couple months of the year, and am coming back to a somewhat tougher market, given the state of the economy and recent developments in AI.

I run a monthly job opportunities post for my community to post jobs in their companies. My latest monthly job post has dropped down to 1 post, from 7 the previous month, and was at a steady ~8-12 through most of last year. My recent posts and moves towards AI have driven the most engagement in my community–I’ve seen widespread engagement around some recent AI capabilities added to my maintenance scripts and my query plan tuning website. Finally, my in-person trainings are doing well, my latest training (optimization based with a K-Pop demon hunters theme) about sql server optimization is close to selling out.

I already have committed resources towards producing new content at this time and I don’t have the bandwidth to work on a lot more right now.

I do have the ability to give away keys for my online training courses, and I’m willing to commit 10 or so to this project.

Provide 3 potential ways to choose from that let me leverage my current audience to bring in more clients to my business. 1 lighthearted and silly, 1 serious, and 1 that you think is most likely to bring ROI. Do not provide anything except the 3 projects.

ChatGPT 5.4’s response

  1. “Worst Query of the Month” giveaway campaign
    Run a playful monthly contest across the blog, newsletter, YouTube roundup, and free scripts: people submit their ugliest real-world SQL Server performance problem, and you roast it lovingly, pick a winner, and give that person one of your training course keys. The point is not to create a mountain of new content, but to turn the content engine you already have into a lead magnet for people who have active pain right now. Require submissions to include company, environment size, what they already tried, and whether they want help fixing it internally or with outside consulting. That gives you a natural way to spot companies with budget, urgency, and messy workloads without acting salesy. At the end of each month, add one simple CTA everywhere: “If your team has a query you’re embarrassed to show in public, that’s usually when it’s time to bring in outside help.” This works because it is fun, highly on-brand for you, and directly surfaces warm consulting opportunities from people already admitting they have a problem. Brent says: the only problem with this is that I’d have to read all that hot garbage.
  2. Manager-facing internal pitch kit for training and consulting
    Create one practical “send this to your boss” package and promote it repeatedly through your existing channels. Keep it small: one email template, one one-page PDF, and one short landing page section that explains when a company should buy training versus consulting, what business outcomes they should expect, and what kind of teams are the best fit. Your audience already trusts you technically, but many of them are not the buyer. They need help carrying your value proposition upward inside their company. Include language around budget efficiency, faster incident resolution, fewer wasted developer hours, and AI reality checks without hype. Then tie your 10 course keys to this by offering them as a bonus for companies that start a conversation: for example, “Book a call about team training or tuning help this month, and I’ll include a few course seats for your team.” This is serious, low-lift, and respectful of buyer integrity because it helps your champions make an honest internal business case instead of pushing gimmicks. Brent says: that’s genius, and I’m sure ChatGPT would do a good job of writing this kind of thing, too.
  3. Free “AI Readiness for SQL Server Performance Tuning” consulting funnel
    This is the one most likely to drive ROI because it aligns with where you are already seeing unusual engagement. Package a tightly scoped offer for teams that are curious about AI but wary of nonsense: a short advisory session or mini-assessment on how to use AI safely in SQL Server performance tuning, code review, plan analysis, and incident response. Promote it through your blog, newsletter, query plan site, and FRK ecosystem as a limited-capacity offer for teams, not individuals. Position it as practical and skeptical: what AI is good at today, where it can waste time, how your tools now support it, and how to evaluate whether their environment is a fit. Use the 10 training keys as incentives for qualified conversations: “For the first 10 companies that book an assessment, I’ll include training access for your team leads.” This gives you a timely reason to talk to companies that may not have budgeted for classic consulting yet, but do have current interest and executive attention around AI. It also creates a clean bridge from community enthusiasm into paid advisory work without requiring you to produce a lot of brand-new material. Brent says: I bet this would work too, but then I’d have to build it, and the content would constantly age out because AI is changing so quickly.

All in all, what a fun exercise! Thanks for joining in with me on that.


[Video] Office Hours at the ICC Wales Before Bits

Videos
0

I flew into Wales a couple days early to get used to the time zones, and showed up at the conference center where SQLBits would be located. Let’s go through your top-voted questions from https://pollgab.com/room/brento on a very chilly morning:

Office Hours in Newport, Wales Before SQLBits

Here’s what we covered:

  • 00:00 Start
  • 01:02 Jersey: Has Microsoft fixed any of the bugs in the MERGE command? And is it worth using in its current state?
  • 01:44 chris: Hi Brent, after you’ve addressed a performance emergency are there cases where you recommend table structure changes as a better long-term resolution? If so, what are some examples of anti-patterns you’ve come across where table structure should be addressed?
  • 03:21 39 Flat: What makes you say that the queries from monitoring tools are optimized? Whenever I catch them on sp_WhoIsActive, they are far worse than what is in the FRK.
  • 04:33 Failed To Fail: What do you think about Erik’s monitoring tool ?
  • 06:02 Used-to-be-DBA: Hey Brent, Where do you see the Enterprise IT industry in 20 years? Will we be back on-prem with everyone having their own private clouds? Or will we own less and everything will be a service?
  • 07:12 Storage Bottlenecked: My user databases are too big for my limited hardware to CHECKDB, but they are all in AGs with 4 nodes. Is automatic page repair enough to keep me safe?
  • 08:01 JoyOfSqlCoding: Hi Brent, as of today, what SQL version is more stable for production? 2022 or 2025?
  • 09:00 rookieDba: Hyperthreading on SQL Server with a high core count. Yay or nay? The best is of course to test on a production-like traffic, but that’s a giant pain in the rear
  • 09:34 TeaEarlGrayHot: I have had mixed results with SQL Loops. Most i’ve seen eventually run out of memory, have large spills and subsequently high CPU. I have also seen loops written for fast ordered deletes that never have problems. What is the difference in loops that spill vs loops that do not?
  • 12:29 Craig W: How to integrity check a 15 TB database that has a small (6 hours) window of non-critical use?
  • 12:58 Ricardo: Hi Brent. Do you feel one day you will have ‘passive income’ or are you constantly creating / updating your courses?
  • 14:11 Anonymous: Out of curiosity, is there a reason you didn’t expand your career to performance tuning other components like SSAS (now Power BI)? Is it because they never had much observability baked in?
  • 14:57 Arramasta: I’m not understanding remote DAC correctly,I have a job that failed to kick off due to “Could not connect because the maximum number of ‘1’ dedicated administrator connections already exists.”. If the setting in sys.config is 0, why am I still seeing this in my error log ?
  • 16:08 flyboy91901: SQL Server 2022 Std vs Enterprise. What is the decision process for one or the other? Also, the behavior is WAY different between the 2. App behaves on Std, on Enterprise the SP throws an Arithmetic error. Changed the Compat to 130 and it works. Exec Plans so different.
  • 17:03 Nobody expects the Spanish Inquisition: Hola Muchacho, Have you had any issues with installing or running SQL Server 2025 on Windows server 2025 even though being local admin? I’m thinking permission or similar, I’ve had issues with GUIs not starting, but I can accomplish the same thing with powershell without issues.
  • 17:32 Sean: In a recent OH, you said you and Claude were “Learning Together” about new functionality. One of my vendor libraries broke with an update, documentation is sparse, and it seems forums are out of vogue. Can you share your experience breaking new grounds with AI?
  • 19:05 Cam: Hi Brent. Loved your post about making an AI agent of yourself. I have gone wild and created a panel of experts. So far it’s you, Erik Darling, Jon Skeet, and even Paul White dropped by. It’s wildly entertaining to work with you all. Thank you so much for this awesome idea!
  • 20:20 Used-to-be-DBA: Hey Brent, I just got back from Fabcon and realized that many of the great conversations that I had with presenters post session might have been for nothing. I forgot to introduce myself! How do you handle meeting new people at conferences? exchange business cards?



SQL ConstantCare® Population Report: Spring 2026

It’s time for our quarterly update of our SQL ConstantCare® population report, showing how quickly (or slowly) folks adopt new versions of SQL Server. There were only slight 1-2% shifts, nothing big:

  • SQL Server 2025: 1%
  • SQL Server 2022: 31%, up 2% from last quarter
  • SQL Server 2019: 42%, down 1%
  • SQL Server 2017: 9%, no change
  • SQL Server 2016: 9%, down 1%
  • SQL Server 2014 & prior: 6%, no change
  • Azure SQL DB and Managed Instances: 2%, no change

SQL Server 2025’s 1% adoption rate might sound small, but it mirrors the adoption rate curves of 2019 and 2022 when those releases came out. It took 2019 a year to break 10% adoption, and it took 2022 a year and a half. I’ve grouped together 2014 & prior versions because they’re all unsupported, and 2016 will join them quickly in July when it goes out of extended support. (I can’t believe it’s been almost 10 years already!) Here’s how adoption is trending over time, with the most recent data at the right:

SQL ConstantCare Population Report Winter 2026

The new stuff continues its steady push from the top down, squeezing out the old, unsupported versions.

How big are the servers out there?

How much total data is on each of the monitored servers?

  • < 50GB: 31%
  • 50-250GB: 23%
  • 250GB-1TB: 23%
  • 1-3TB: 12%
  • >3TB: 11%

For a long time, the measure of a “very large database” was around 1 billion rows or 1 terabyte. These days, 23% of all SQL Servers out there have over 1TB, and 1 in 10 are over 3TB! Those numbers aren’t all that big in the grand scheme of things these days. Our jobs just keep getting more challenging as organizations continue to gather more data, and hold off on archival or purging. Some of the other interesting big numbers: we’ve got servers in the audience with >400TB data, 128-448 CPU cores, 2-6TB RAM, and thousands of databases. Y’all are putting that data to work!


There’s a Bug in sys.dm_exec_query_plan_stats.

SQL Server 2025
6 Comments

When you turn on last actual plans in SQL Server 2019 and newer, the system function sys.dm_exec_query_plan_stats is supposed to show you the last actual query plan for a query. I’ve had really hit-or-miss luck with this thing, but my latest struggle with it is that two of the numbers are flat-out wrong: it mixes up CPU time and elapsed time.
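
“Turning on last actual plans” means enabling the LAST_QUERY_PLAN_STATS database-scoped configuration:

```sql
ALTER DATABASE SCOPED CONFIGURATION SET LAST_QUERY_PLAN_STATS = ON;
```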

Here’s a simple query to prove it:
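
The exact demo query isn’t shown here, but anything CPU-hungry that goes parallel reproduces it. A sketch against a hypothetical large table:

```sql
SET STATISTICS TIME ON;

/* A deliberately CPU-hungry query: a self-join on a
   non-indexed column of a large (hypothetical) table. */
SELECT COUNT_BIG(*)
FROM dbo.Users AS u1
JOIN dbo.Users AS u2
  ON u1.DisplayName = u2.DisplayName;
```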

That query goes parallel, burns a lot of CPU, and finishes in a few seconds:

CPU time is higher than elapsed time, indicating that the query went parallel across multiple cores. The SET STATISTICS TIME numbers are correct, as are the time numbers in the actual query plan.

But if you check the last “actual” query plan in sys.dm_exec_query_plan_stats:
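
The function takes a plan handle, so you typically join it to the plan cache, something like:

```sql
/* Fetch the last actual plan (including its QueryTimeStats
   element) for recently executed queries matching the demo. */
SELECT st.text,
       qps.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan_stats(qs.plan_handle) AS qps
WHERE st.text LIKE N'%COUNT_BIG%';
```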

The numbers are exactly backwards:

QueryTimeStats

The last “actual” plan says the query took 8 seconds (it didn’t) and that it burned less CPU time than elapsed time. The numbers are just transposed.

I’ve filed a bug about this, but this is where it gets tricky as a script author. Sure, for now, I can simply reverse the numbers in my First Responder Kit scripts, fixing Microsoft’s error, but … now I gotta watch every upcoming set of Cumulative Update notes, reading to see if they fixed the bug, then go back to any relevant scripts, put in build number checking, and flip the math for the corrected builds.
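
That build-checking chore looks roughly like this hypothetical sketch – the fixed build number obviously doesn’t exist yet, so 99999 is just a placeholder:

```sql
/* Hypothetical version gate: only flip CPU/elapsed on builds
   where the bug is still present. */
DECLARE @Version VARCHAR(32) =
    CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR(32));
DECLARE @Build INT = CAST(PARSENAME(@Version, 2) AS INT);

IF @Build < 99999
    PRINT 'Bug present: swap the CpuTime and ElapsedTime values.';
ELSE
    PRINT 'Bug fixed: read the plan XML as-is.';
```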


Building My Dev/Prod Demon Hunters Session, Part 2: The Tactics

First Responder Kit
4 Comments

This week at the SQLBits conference in Newport, I’m teaching an all-new all-day session called Dev-Prod Demon Hunters: Finding the Real Cause of Production Slowness.

It filled up initially, but they’ve opened up more seats for it, so if you’re interested, there’s still time to join! Attendees get a free Fundamentals & Mastering 1-Year Bundle. Register now for SQLBits, or if you’re already registered, email contactus@sqlbits.com to get into my workshop.

Cosplay ideas (for you, not me)

I’ve already written about how I designed the session’s strategy. My overall strategy was to set up 2 SQL Servers, then step through several different queries, each facing a different reason why they were fast on one server, slow on another server. For example, we might face issues with parameter sniffing, different statistics on each server, and different settings on each server.

Now, tactically, how do I teach each specific lesson?

There are a bunch of ways teachers can communicate a topic: lectures, slide decks, live demos, hands-on labs.

Or, perhaps one of the hardest ways: give attendees an already-built tool that does the work for them. I’ve done this in the past when I unveiled sp_BlitzFirst at the PASS Summit (then called sp_AskBrent, and since renamed), live for the first time in front of the audience, springing it on ’em.

Giving them a tool is hard because:

  • You really gotta write the entire session first as the specs for the tool, because the tool has to deliver on what the session training was supposed to be all about
  • You gotta write the tool – although these days with Claude Code, that’s way easier
  • Then you gotta rewrite the session based on how you accomplish the goal using the tool

Building a tool is also hard because I’m basically automating myself out of some work. If I tell you how to do something really complex, then I can sell hours of training material on that, and you walk out of that class feeling like you learned a huge amount of information.

You don’t necessarily work faster that way though.

The more I worked on the Dev-Prod Demon Hunters workshop, the more clear it became that we, as an industry, really need a tool to make this easier. Before building the tool, I’d written out a giant troubleshooting checklist telling the audience all these places they needed to go to investigate these performance-killing demons. The better the session became, the more fleshed-out this checklist became, and it just isn’t realistic to expect performance tuners to look at:

  • sys.configurations
  • Database-scoped options
  • Trace flags
  • Query plan contents to see which histograms were used for estimation
  • Histograms for all of those statistics to see if they’re different
  • Last actual query plans to compare wait stats, spills, CPU metrics
  • Different indexes & indexed views present on each server
  • Different hardware capabilities
  • And oh yeah, parameters used to compile the plans

So onstage Wednesday, I’ll unveil the tool for the first time and teach the audience how to use it. See you there!


[Video] Office Hours in the Vegas Home Office

Videos
0

When you’ve got questions about architecture, development, or best practices, post ’em at https://pollgab.com/room/brento, and upvote the ones you’d like to see me cover. It’s like free bite-sized consulting.

Office Hours: Microsoft Database Q&A

Here’s what we covered this week:

  • 00:00 Start
  • 01:37 DBAInAction: Hi Brent, could you please suggest an archival/purging strategy for multi-TB tables to free up disk space? Also, is there a recommended method for designing archiving/purging process for new databases before they become too large? Thank you!
  • 03:47 MyTeaGotCold: I was surprised that Mastering Server Tuning never mentioned what you can do with files and filegroups in user databases. Do you find that they solve problems for you?
  • 06:52 Forgetful: Do you ever recommend putting the tempdb log file on a different drive to the tempdb data files?
  • 07:42 Adrian: Hi, would you enable tempdb ADR by default for new SQL 2025 servers? Any risks or negatives to consider, especially in combination with Availability Groups?
  • 11:12 Dopinder: For SQL Server multi tenancy, one database per account seems optimal in limiting potential future corruption. However, doesn’t SQL AG tip over in the one database per account model due to AG database count limits?
  • 13:40 FrugalShaun: What’s your go-to stat for measuring SQL Server throughput? When I want a quick comparison with past performance I check Batch Requests/sec, but that shows requests received, not necessarily work actually completed. What do you rely on instead?
  • 15:02 Micen1: I have a customer running MDS on SQL 2019, and they are looking to run it on Azure SQL DB, but it seems that the MDS app still needs IIS on a VM? Could you give your thoughts about the future of MDS and what to expect?

Updated First Responder Kit and Consultant Toolkit for April 2026

If there was ever a release you should skip for reliability reasons, it’s this one. There’s a lot of AI-edited code in here. I’m really only putting it out there because I want to let early adopters start playing around with a few new things:

  • sp_BlitzIndex adds an @AI parameter to get index consolidation advice for a single table.
  • A breaking change in the AI config tables for sp_BlitzCache (and now sp_BlitzIndex as well). We used to keep both the AI providers and the prompts in the same table, but I needed to normalize that out into two tables now that we’re adding AI capabilities to more procs. If you’ve started playing around with sp_BlitzCache’s AI config table, run this script to migrate your configs to the new table structure before running the new version of sp_BlitzCache.
  • Added a new SQL Server Setup Checklist markdown file in the Documentation folder. This replaces the old PDF doc that we used to distribute, which hadn’t been updated in years. I did a quick pass with some basic updates, and folks are welcome to add their own guidance. Just create a GitHub issue first describing the kinds of changes you’d like to make so that you can have a community discussion about the guidance – I wanna make sure the right stuff ends up in there.
  • There’s a new sp_kill stored procedure, driven by your design ideas. I’ll be blogging more about this in the coming weeks, but wanted to get the first draft out there for a client who needed it ASAP. You’re welcome to use it now, and log any issues you find with it.
  • Starting with this release, we only support SQL Server 2016 SP2 & newer. Remember, even SQL Server 2016 drops out of support in 3 months, and 2014’s been out of support for 2 years. If you need to tune your Antiques Roadshow servers, use the scripts in the Deprecated folder. Why SP2 and not RTM or SP1? Because SP2 added a ton of DMVs and columns in existing DMVs.

To see how these work, read the documentation. In the coming weeks, I’ll be updating my (still free!) class, How I Use the First Responder Kit, with details on the AI calls and sp_kill.

To get the new version:

sp_Blitz Changes

  • Fix: wrap sp_Blitz, sp_BlitzIndex, sp_BlitzAnalysis, and sp_DatabaseRestore stored proc contents in a BEGIN and END to make it easier to automate updates to the Consultant Toolkit. (#3851)
  • Fix: code modernization for SQL Server 2016 & newer. (#3872)

sp_BlitzCache Changes

  • Enhancement: return oversized (>128-node) plans inline instead of asking the user to run a separate query. (#3868, thanks Erik Darling.)
  • Enhancement: add per-database warnings for duplicate and single-use plans to help narrow down which apps are causing the problems. (#3878, thanks Erik Darling.)
  • Fix: wrong parameter name in readme.md fixed. (#3861, thanks Ioan Podaru.)
  • Fix: skip @AI = 1 calls when query plan is null, and allow @AI = 2 to build a prompt even when there’s no query plan, just a query text. (#3855 and #3854)
  • Fix: invalid column reference in AI-generated code. (#3880, thanks shekarkola.)
  • Fix: code modernization for SQL Server 2016 & newer. (#3870)

sp_BlitzIndex Changes

  • Enhancement: AI advice available at the table detail level. When calling sp_BlitzIndex @TableName = 'mytable', you can use the @AI = 2 parameter to get an AI prompt for your favorite LLM, or @AI = 1 to call the LLM directly as we do with sp_BlitzCache. (#3827)
  • Fix: code modernization for SQL Server 2016 & newer. (#3876)
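Based on the parameters described above, a typical call might look like this – a hedged sketch, where the database and table names are placeholders you’d swap for your own:

```sql
-- Hypothetical example: get AI index consolidation advice for a single table.
-- @AI = 2 builds a prompt you can paste into your favorite LLM;
-- @AI = 1 calls your configured LLM directly, as with sp_BlitzCache.
EXEC dbo.sp_BlitzIndex
    @DatabaseName = 'YourDatabase',  -- placeholder: your database name
    @TableName    = 'YourTable',     -- placeholder: the table to analyze
    @AI           = 2;
```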

sp_BlitzLock Changes

  • Fix: corrected deadlock numbering errors. (#3859, thanks Vlad Drumea.)
  • Fix: avoid duplicate rows in result sets. (#3765, thanks Tisit.)
  • Fix: nicer error messages in Azure SQL DB that include instructions on creating an event session. (#3892)

sp_BlitzWho Changes

  • Fix: code modernization for SQL Server 2016 & newer. (#3874)

sp_kill Changes

  • First version! See the documentation for more details. (#3864)

Consultant Toolkit Changes

If you’re a consultant who does long-term work maintaining a client’s SQL Servers, then you probably want to track the health and performance of those servers over time. You want data – in a database.

We’ve got a new Loader app that watches a folder for incoming zip files, and when one shows up, it processes the data to load it into a SQL Server (or Azure SQL DB) repository for you.

This means you can set up the Consultant Toolkit at your clients on a scheduled daily task, upload the data to S3 (built in) or use your own file sync methods to get it to the location of your choosing, and then have the data automatically loaded into your database server for you.

To learn more about that, read the PDF documentation included with the Consultant Toolkit.

We’ve also added a new separate Consultant Toolkit download for SQL Server 2012-2014. Microsoft no longer supports those versions, so we’re not making changes to it anymore. It still works just the same as it always did – it just doesn’t have the cleaner code and additional diagnostics that we’re adding to the newer versions, starting this month.

For Support

When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. Be patient: it’s staffed by volunteers with day jobs. If it’s your first time in the community Slack, get started here.

When you find a bug or want something changed, read the contributing.md file.

When you have a question about what the scripts found, first make sure you read the “More Details” URL for any warning you find. We put a lot of work into documentation, and we wouldn’t want someone to yell at you to go read the fine manual. After that, when you’ve still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes me!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you’re working with.


Announcing the Panel Lineup for My SQLBits Cloud Discussion

Cloud Computing, SQLBits
0

At SQLBits, I’m hosting a panel discussion called 20 Years of the Cloud: What Changed, What Didn’t, and What’s Next. I’m excited to share the panel lineup – all of the links go to their LinkedIn profiles:

I’m excited because the panelists have a diverse set of experiences. Across the group, they’ve seen the cloud from its infancy 20 years ago, to its current state, and their visibility across a lot of companies and industries means they’re better equipped to see what’s coming next.

I’ll be asking them questions like:

  • What misunderstandings do executives or application teams still have about what cloud means for database performance and operations?
  • Has cloud improved collaboration between dev, ops, and data teams, or just created new boundaries and blame patterns?
  • What’s something that younger data professionals take for granted today that used to be a slow, painful process?
  • If you had to point to one area where cloud delivered beyond expectations, what would it be? What about under-delivered?
  • Looking ahead, which current cloud trends feel foundational, and which are just fashionable?
  • What’s a belief about cloud that you held 10 years ago, and no longer believe?
  • What should attendees be more skeptical about, and what should they be more optimistic about?
  • What skill or mindset should a data professional build now if they still want to be effective 10 years from today?
  • If you had to live with a system for 10 years, what choices would you make differently than if you were just trying to get it live in 3 months?

Wanna watch it unfold? Join us Thursday 10:00-10:50 in the Living Lounge. Here are calendar invite files for all of my sessions to make it easier:

See you in Newport!


SQL Server 2016 Support Ends in 90 Days.

SQL Server 2016
14 Comments

On July 14, 2026, Microsoft’s extended support ends for SQL Server 2016.

We aged like fine wine, not like SQL Server 2016
Brent Ozar Unlimited circa 2016

They will offer Extended Security Updates that you can buy for 3 more years, either through Azure or your licensing partner. The price is stunning:

  • Year 1, July 2026-2027: 75% of the original license price
  • Year 2, July 2027-2028: 150% of the original license price
  • Year 3, July 2028-2029: 300% of the original license price
  • Total for 3 years of security updates: 525% of your original license price

I didn’t know numbers went that high.

This must be that “big data” I keep hearing about. To Microsoft’s credit, I understand they’re working on an ad campaign to raise awareness, and it’ll feature Roger Allen Wade’s classic song. I’m looking forward to those commercials.

If you’re on SQL Server 2016 or prior, and you’re having difficulty getting management buy-in for version upgrades, and you’re at a large enough company to have a security team, send this in an email to your security team:

Does our tech insurance, HIPAA/PCI/SOX compliance needs, or corporate policies require us to be on currently supported software, or is it okay if we’re relying on out-of-support software that no longer gets security updates?

This lets somebody else be the bad cop, and you can connect the security team with your managers & accountants to let them duke it out. If your company doesn’t have a dedicated security team, you can send that same question to your manager, but then you’re the bad cop. That’s not nearly as much fun, but you should still probably send the email just to have something in writing to cover your rear.

Ideally, the security team and/or managers recognize the risk involved in running your shop on Antiques Roadshow software, and you start picking out which version of SQL Server you should replace it with.

If your company’s security team and/or your manager reply in writing saying no, no problems, then you’ve at least done your duty. I will say, though, that if your company is willing to be on unsupported software, they’re probably doing it to save money – and that means they’re probably also cheap on stuff like salary, employee benefits, and training. Long term, you probably wanna start passively looking for work elsewhere before your skills get as out of date as your database server. (What a dark way to end a post, eh?)