I’m Coming to the PASS Summit in Frankfurt!

#SQLPass
0 Comments

PASS Data Community Summit Frankfurt

The pre-conference lineup for the PASS Data Community Summit Frankfurt event was just released, and I’m proud to share that I’ll be teaching my new all-day pre-conference workshop.

Dev-Prod Demon Hunters:
Finding the Real Cause of Production Slowness

Production is slow. Development is fast. The same query runs in both. Somewhere between the two, a performance demon is hiding—and this session is about hunting it down.

Your queries drive me to drink.
My own demons

Inspired by Brent Ozar’s love of the K-Pop Demon Hunters theme song, this class is delivered almost entirely as live demos, not slides. Brent Ozar will run real queries against two environments labeled “dev” and “prod,” then work through them exactly the way an experienced DBA would in the real world: comparing server settings, analyzing execution plans, and uncovering the subtle differences that led SQL Server to make different decisions. Each “hunt” reveals another demon—statistics, configuration, data distribution, or plan choice—and shows how easily a test environment can lie.

Along the way, Brent will demonstrate practical techniques you can use immediately: running sp_Blitz to surface meaningful environment differences, comparing execution plans to understand why SQL Server behaved differently, and making targeted changes to development so it better reflects production reality. By the end, you’ll understand how to stop guessing, stop blaming the engine, and follow the clues that lead to the truth—because when dev and prod finally move in sync, that’s when performance goes golden.

3 things you’ll get out of this session

  • Discover what causes query plans to vary from production
  • Learn how to quickly assess environment differences that would cause query plan changes
  • Understand how to change dev to more closely match prod

Pre-requisites: You should already be comfortable writing queries, reading execution plans, and using the First Responder Kit to gather data about your server’s wait stats and health.

Registration is open now with early bird pricing expiring March 31 (coming up quickly!), and attendees will get a free year of my Recorded Class Season Pass: Fundamentals. See you in Frankfurt!


[Video] Office Hours: Q&A on the Mountaintop

Videos
4 Comments

Well, maybe mountain is a bit of a strong word, but it’s one of the highest elevation home sites in Las Vegas, with beautiful views over the valley, the Strip, the airport, and the surrounding mountains. Let’s go through your top-voted questions from https://pollgab.com/room/brento while taking in the view – and you can move the camera around, since this is an Insta360 video.

I had to cut the first ~60 seconds of the intro, so it seems to start in an odd place:

Here’s what we covered:

  • 00:00 Start
  • 00:33 Sun Ra: Who is your favorite musician?
  • 01:27 Dopinder: What is your opinion of the query hint tool in SSMS22? Who is the target audience? Will end users misuse it?
  • 02:30 Trushit: I need medallion architecture for BI initiative. What would be your thought process? Largest table today is about 3M rows.
  • 03:42 I Graduated MIT: In the MIT class, you say developers just need clustered indexes on each table (and FK) as a starting point, and DBAs take over later once it’s in production. Does AI change that? Should devs use AI to predict indexes earlier?
  • 05:15 Bandhu: SQL log shipping performance can be bad over cloud provider SMB storage options. What’s your opinion of swapping out the default log shipping transport (SMB file copy) to using something more network efficient (Blob storage for Azure or S3 for Amazon)?
  • 06:54 racheld933: Do you have a specific industry/sector of clients that you prefer over others? Like healthcare, finance, education, etc. Are there any specific challenges or trends that show up within each group?
  • 08:21 RoJo: Can you comment on the use of BitLocker on a SQL server host? I’m concerned on speed, and another layer to add when patching or updating. Is it really needed if the Host is physically locked in a rack and room.
  • 10:09 Alice: AI sent me down a powershell rabbit hole of try this, oh wait, sorry try that only to tell me many attempts later that what I was asking for is no longer supported. What is the worst AI rabbit hole you’ve experienced?
  • 11:30 Mister Robot: At what point does “AI-assisted DBA” turn into “why do we still have DBAs?”
  • 12:58 NicTheDataGuy: Hi Brent, you made a comment in a previous post that ” I actually put serious thought into deciding which tool I was going to learn next because I’d have so much free time”, curious what you would have done and why? is there a tool/language you think is going to be in demand?

Y’all Are Getting Good Free AI Advice from PasteThePlan.

AI, PasteThePlan.com
9 Comments

PasteThePlan has a new AI Suggestions tab, and it’s been really fun to watch the query plans come in and the advice go out. Here are some examples:

  • Date “tables” – when I looked at the query, I glossed over the real problem. I thought arc_Calendar was a real date table, but AI figured out that it’s actually generated on the fly with spt_values, an idea so incredibly bad that it blows my mind. It never crossed my mind that someone would even consider doing this in production, and AI tells them what to do instead.
  • This is fine – someone asked for advice on a select from a 1-row temp table. AI said get outta here with that nonsense.
  • One-table delete – but due to foreign keys, it’s actually touching multiple tables. The AI advice explains what’s going on and how to make it go faster, plus boils it down to “if you only do one thing” – I love that! I love prioritized advice.
  • Reporting process – a kitchen-sink style query that at first glance should probably be rewritten to dynamic SQL, but they’ve already solved that to some extent by slapping an option recompile hint on it. The AI advice catches a scalar UDF causing problems, suggests a rewrite using a technique the company’s already using on another column, and suggests a slight rewrite to push filtering earlier. Again, I love its recap here, all for free, in less than 30 seconds.
  • Lengthy paged dynamic SQL – ugh, even just glancing at this makes me look at the clock to think about how long it would take me to analyze. In seconds, AI reasoned over this monster to suggest indexes that would help the paging, plus raises some eyebrows at a weird not-exists-not-exists design.
  • Meaningful function rewrites – advice to quickly and easily change non-sargable functions into index seeks, and remove a correlated subquery.
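For context on that first bullet, the spt_values trick usually looks something like this (a hedged sketch: the table name and date range are made up for illustration, but the master.dbo.spt_values pattern is the real anti-pattern):

```sql
-- Anti-pattern: generating a "calendar" on the fly from master..spt_values
-- instead of querying a real, persisted, indexed date dimension table.
SELECT DATEADD(DAY, v.number, '2024-01-01') AS CalendarDate
FROM master.dbo.spt_values AS v
WHERE v.type = 'P'        -- the built-in 0-2047 number projection rows
  AND v.number < 366;
```

SQL Server can’t index or accurately estimate that derived set, which is why a persisted calendar table is the usual fix.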

Advice I’m not as wild about, and I’m thinking about how to tune the AI prompt in order to improve it:

  • 16,000 row export with no WHERE clause – my first reaction here is to say, “Whoa now, we don’t tune for 16,000-row XML export queries.” However, when it comes to tuning the prompt to deliver better advice, I think we need to push the user harder to copy/paste in actual execution plans that include timing information, and then evaluate whether we should bother making changes, or whether the query’s performing well enough given the wild output we’re asking for. This Power BI query is a similar example – whenever I see scientific notation for the estimated number of rows, I wanna stop tuning there and go get the actual plan.
  • Covering indexes galore – this is just one example, but my first iteration on my AI prompt didn’t discourage covering indexes, and AI seems to really wanna suggest ’em. I need to refine the prompt to suggest starting indexes with just the keys, and then only add includes for covering if the query’s still not fast enough after the initial indexes. There also seems to be a hesitance to recommend clustered indexes on heaps, even reporting heaps that were just created for the purpose of the query we’re looking at.
  • Lose the cursor – I’m surprised at how often I’ve seen folks paste cursor plans in. I do love the advice – it’s always “hey lose the cursor” – but I think we could do better. I’d like to be able to proactively rewrite stuff for folks, but realistically, a web page isn’t the best UI for that, and it will probably need a better, slower model with a >30 second timeout. We need to tell people, “Here’s a link to a set of commands that will rewrite the query for you” – whether that’s Claude Code, ChatGPT Codex, the plain chat interface for those tools, or something else.

And some queries & advice are just making me think. For example, in this multi-step reporting query, the AI seemed to find a lot of advice, but to understand if the advice is any good, I kinda want a followup status report from the user. Did they implement any of these fixes? Which ones? What kinds of differences did they see? We probably need a feedback loop of some kind to help iterate over the AI prompt.

I’m not delusional enough to think PasteThePlan.com is the right long-term solution for getting plan advice! We only have about 50 people using it each weekday. However, if that helps 50 people per day avoid posting questions to forums, and get instant answers that solve their problems for free and make users happier, then I’m very happy with that result. I couldn’t possibly answer 50 people’s query questions per day for free in my spare time!


My Wish List for SQL Server Performance Features

SQL Server vNext
18 Comments

There are a lot of shopping days left before Christmas, and even more before the next version of SQL Server ships, so might as well get my wish list over to Santa now so the elves can start working on ignoring them.

My work focuses on performance tuning, so that’s what my wish list focuses on, too. They really are wishes, like I-want-a-pony, because I know I’m discussing some stuff that’s easy to describe, but really challenging to implement.

Forced Parameterization v2: The original implementation of this feature helped mitigate the multiple-plans-for-one-query issue, but it has a ton of gotchas, like not fully parameterizing any partially parameterized queries, or the select list of select statements. I still hit a lot of clients with those problems in their queries, making monitoring tools, Query Store, and the plan cache way less useful, and I’d love to see forced parameterization updated to fix these additional issues. If you search the SQL feedback site for forced parameterization, there are a few dozen issues that reference it in various ways, including plan guide challenges.

Forced Parameterization v3, Handling IN Lists: When you use Contains() in a LINQ query, it gets translated into a T-SQL IN clause. This list can have different numbers of values depending on how many things you’re looking for – maybe you’re looking for just 1 value, or 10, or 100. Unfortunately, even when forced parameterization catches these, it still builds different plans for different quantities of values – like 1 value, 10 values, and 100 values all produce different plans in the cache – again, making monitoring tools, Query Store, and the plan cache way less useful. I would love the ability to just build one plan for an IN list, regardless of the number of values.
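To see the IN-list problem for yourself, try something like this (a hedged sketch: the table and column names are just illustrative). Even with forced parameterization enabled, each of these caches its own plan:

```sql
-- Three logically identical lookups with different IN-list lengths:
SELECT Id FROM dbo.Users WHERE Id IN (1);
SELECT Id FROM dbo.Users WHERE Id IN (1, 2, 3);
SELECT Id FROM dbo.Users WHERE Id IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
-- Forced parameterization rewrites the literals as @0, @1, @2, ... but the
-- NUMBER of parameters differs per query, so each length gets its own
-- cached plan - exactly the problem described above.
```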

Better Approach to Query Hash: The whole reason I’m focusing on Forced Parameterization here is that SQL Server compiles different execution plans for each submitted query string. Even differences in spacing and casing cause different plans to be cached in memory – again, as discussed in this multiple-plans-one-query post. What if the optimizer was just smarter about recognizing that these two queries are really the same thing, and only caching one plan instead of building two separate plans?
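One way to spot the damage today is to group the plan cache by query_hash, which already normalizes away most of these cosmetic differences (a rough sketch, safe to run on any server where you can read the plan cache DMVs):

```sql
-- Find logical queries that have piled up multiple cached plans.
SELECT qs.query_hash,
       COUNT(*)                 AS cached_plan_entries,
       SUM(qs.execution_count)  AS total_executions
FROM sys.dm_exec_query_stats AS qs
GROUP BY qs.query_hash
HAVING COUNT(*) > 1
ORDER BY cached_plan_entries DESC;
```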

Cost Threshold for Recompile: If a query plan’s cost is higher than X, recompile it every time rather than caching the plan. If the query cost is 5000 Query Bucks, it’s worth the extra 1-10 seconds to compile the dang thing again before we run a giant data warehouse report with the wrong parameter plan. It’s basically like adding a Query Store hint for OPTION (RECOMPILE), but doing it automatically on expensive queries rather than trying to play whack-a-mole. Here’s my feedback request for it.
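Until something like that ships, the closest manual equivalent is the whack-a-mole version: pinning OPTION (RECOMPILE) onto one known-expensive query with a Query Store hint (SQL Server 2022 and later; the query_id shown is hypothetical, you’d look yours up in Query Store):

```sql
-- Force a fresh compile on every execution of one expensive query.
EXEC sys.sp_query_store_set_hints
     @query_id    = 42,                      -- from sys.query_store_query
     @query_hints = N'OPTION (RECOMPILE)';
```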

We do Christmas decorations a little differently around here

Execution Threshold for Recompile: If a query has run 100 times, asynchronously compile a new plan for it, but this time around, take a lot more time to build the plan. Don’t do an early termination of statement optimization to save a few milliseconds – we’re serious about running this query. Use the execution metrics that have piled up over those 100 executions to think about whether this is really small data or big data. This is challenging to do, I know, because it means SQL Server would need a task list (like plans to be recompiled), plus an asynchronous way of processing those tasks, plus a way to know when it’s safe to run those tasks without impacting end user activity. It would also require a lot of new monitoring to know if we’re falling behind on that task list, and ways to identify which plans were re-thought, and persistence of those “better” plans in places like Query Store. Here’s my feedback request for it.

Configurable Statistics Size: SQL Server’s stats histogram size has been frozen at a max of 201 buckets since the dawn of time. In the age of big data, that’s nowhere near enough for accurate estimates, as explained in this Query Exercise challenge post and the subsequent answer and discussion posts. I wish SQL Server would switch to a larger statistics object by default, perhaps jumping to an extent per stat for large objects rather than a single page, or a configurable stats object size. This would be painful to develop, for sure – not only would it affect building the stats, but it would also impact everything that reads those stats, like PSPO which needs to detect stats outliers. Here’s the feedback request for it.
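You can see that 201-step ceiling for yourself by dumping any histogram (a hedged sketch: the table and statistic names are examples, swap in your own):

```sql
-- Even on a billion-row table, the histogram tops out at 201 steps.
DBCC SHOW_STATISTICS ('dbo.Posts', 'IX_CreationDate') WITH HISTOGRAM;
```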

Update Statistics Faster: In SQL Server 2016, Microsoft added parallelism during stats updates, and then promptly ripped it back out in 2017, as the documentation explains in a note in the sampling section. Forget parallelism on a single stat: I want SQL Server to do a single parallel pass on a table, updating all of the statistics on that table at the same time, instead of doing separate passes of the table for each and every statistic on it. Think merry-go-round scans meets stats updates. Here’s the feedback request for it.

DDL Change Logging: When any object is modified – table, index, stored proc, database setting, server-level setting, login – have an API hook or call of some kind. Send notifications to an API to log that action: who did it, what object was changed, and what the change was. This would make true source control integration and alerting easier, not just for the database, but for the server itself, getting us closer to the point of being able to rebuild existing servers from some kind of source of truth. Yes, I know the “right” thing to do is make people check their own changes into some kind of source control system first, and then use that source control to build the server and the database, but in reality, that poses both organizational AND technical problems that most organizations can’t fix. We need a change logging system so we can at least be reactive. There have been multiple feedback requests for this over time and they’ve all gotten archived, but ever the optimist, here’s my new feedback request for it.

That last one isn’t technically a performance feature, but… if people can make changes in a production system without easy alerting, logging, and source control, then it’s a performance issue in one way or another, because people gonna break stuff. Users gonna use.


SSMS v22.4.1: Copilot is GA. So What’s It Do Right Now?

Copilot in SSMS has two parts. Usually people focus on the pop-out Copilot chat window, and that’s useful for sure, but honestly I think you’re going to get way more mileage out of the code completions feature, right away, because it blends in with your existing workflows.

Let’s say that I’m working with the Stack Overflow database and I wanna find the top 10 users with the highest reputations, and for each one, find their top-scoring Post. I would start by typing a comment describing what I’m doing, then type SELECT, and the magic happens:

Code completions in progress

Copilot’s code completions automatically fill out pretty much what I’m looking for! That’s awesome. Note that I did have to type the word SELECT, but, uh, I’m okay with that. Copilot code completions don’t kick in until you at least give it a character. Hey, I’ve got plenty of characters around here. Let’s hit Tab to accept its work, and then hit enter:

Code completions in progress

We’re at an impasse until I type the word FROM, which is fine, let’s do that:

Code completions in progress

And it figures out that I want the Users table first. It even suggests aliasing the table correctly so that it matches up with what it’s already got in the SELECT! Lovely. Note that it only wrote one line – just the Users table – and not the subsequent join. Hit Tab to accept it, then enter:

Code completions in progress

Again, nothing happens until I type something else in, so we’ll prompt it with an inner join:

Code completions in progress

It adds the bang-on correct join to the Posts table, even though there’s no foreign key to explain the join, AND it lays out the WHERE clause. Note that earlier it only added one line for the FROM (just the Users table), but here it adds the Posts join, decides no more joins are necessary, and goes ahead and adds the WHERE clause.

It does filter for only questions, which is something I didn’t ask for. Hmm. I would imagine that this filter and the joins are influenced by the fact that the Stack Overflow database is open source, and there’s a lot of copyrighted blog posts and AI training material out there that Copilot learned from, so it’s automatically adding that. Your own, surely private, database might not get that quality of recommendations (although here, of course, the quality is what we call “bad,” since I didn’t ask for that filter, but “bad” is still a quality.)

Hit tab to accept, and it just sits there until we start the ORDER, at which point it fires back up with more advice:

Code completions in progress

The order is pretty good, but overall the query doesn’t produce the results we’re actually looking for, as we’ll see when we hit F5:

Code completions in progress

That’s … not what we wanted. We specifically asked for the top 10 users, and for each one, get their highest-scoring post. That’s not what we’re seeing here.
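For reference, the query we were actually hoping for would look more like this (a hedged sketch using CROSS APPLY and the standard Stack Overflow column names):

```sql
-- Top 10 users by reputation, each with their single highest-scoring post.
SELECT TOP 10
       u.Id, u.DisplayName, u.Reputation,
       tp.Score AS TopPostScore, tp.Title AS TopPostTitle
FROM dbo.Users AS u
CROSS APPLY (SELECT TOP 1 p.Score, p.Title
             FROM dbo.Posts AS p
             WHERE p.OwnerUserId = u.Id
             ORDER BY p.Score DESC) AS tp
ORDER BY u.Reputation DESC;
```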

Let’s switch over to the Copilot Chat window and ask the same question:

Copilot Chat in progress

Note that I didn’t even ask Copilot Chat to evaluate the query in the SSMS window! It just decided to do that, and gauged the code completion attempt as lacking in brains. That’s fantastic! In fairness, Copilot Chat takes a hell of a lot more time to figure that out, and as its analysis continues…

Copilot Chat in progress

It shows the results, and those results are bang on. I love how it adds the top post title, too, which our code completions query didn’t. Continue scrolling down and after the results:

Copilot Chat in progress

Brilliant! It … well, it doesn’t show me the actual query it wrote, but it does follow exactly what I asked for. I technically didn’t ask it to write the query – I just asked it for the results, so I’m left with copy/pasting the results out of the text, or asking it to show me its query.

If you want to get a feel for the real time response speed, you can watch this going down in the silent video below:

(No audio narration from me on that one because I was jamming out to the John Summit set in Vail as I wrote this post, getting psyched up for Experts Only Vail this weekend. Any EDM fans in the house?)

My verdict: enable code completions, NOW.

I adore Copilot’s code completions because they show up where you are. You have to enable them by going into Tools, Options, Text Editor, Inline Suggestions, and Copilot completions, then go into Inline Suggestions, Preferences, and check Show suggestions only after a pause in typing. (Otherwise they’re obnoxiously fast, and when you hit Tab, you’ll constantly accept the wrong stuff and be forced to go back.)

Are they as good as a human can do? Not even close, as this quick & dirty demo showed. Code completions just make your life easier, silently, as you’re working, without interrupting the tools and workflow you’re already used to using. I feel like it’s about 70% accurate, 30% inaccurate, which sounds terrible, but that 70% is massively helpful.

The Copilot Chat window is much more accurate, but it’s slow as hell in comparison, and it requires you to change your workflow. Nothing against that – it does good work – but I know you, dear reader, and you’re lazy. You’re not gonna switch over to the chat window. You’re gonna just keep typing in SSMS, and for that, Copilot code completions is the bomb.

Now if you’ll forgive me, I wanna go throw a sweater in the car for the road trip lest I shiver.


[Video] Office Hours in North Bay, Ontario

Videos
7 Comments

I’m up in North Bay, Ontario, Canada for the 2026 Can-Am Curling Cup run by Aaron Bertrand – and my team actually won!

Hanging out with Andy Mallon, Aaron Bertrand, Leanne Swart, Michael J Swart, and Ken Mallon before the tournament
The winning team, woohoo!

Let’s bask in the warmth (cough) of our glory as we go through your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 02:07 Adrian: With all the storage on the same SAN with nVME, having separate virtual disks for data, log, temp data and temp log still makes sense? Our SQL servers have 7-8 drives each and managing free space is boring.
  • 03:44 Newish Jr DBA: I’m a 7 month old SQL/DBA baby, but I just heard about policy management (PBM). After searching your blogs I did not find much. Do most shops not use it? Could you please elaborate on the pros and cons and primary use case for this tool?
  • 05:05 Silent but Threadly: If a DBA never speaks at conferences or blogs, does that limit their career growth?
  • 06:07 A Rose: Is “Database Engineer” just a rebranded DBA, or is it actually a different skill set?
  • 07:02 MyTeaGotCold: Got much left to do for updating your mastering classes for SQL Server 2025?
  • 08:49 Not Brent O: At what size or number of servers do you recommend clients use Central Management Servers (CMS), and any other advice regarding CMS?
  • 09:20 Meatbag: How is AI changing your training classes and conference sessions?
  • 11:35 For a Friend: How many years of experience does someone really need before they can call themselves a “Senior DBA”?
  • 13:11 MidwestDataPerson: I use SP_Rename to switch tables. Any gotchas about this approach? What about indexes and statistics? e.g. ‘Table1’ becomes ‘Table1_old’ and ‘Table1_new’ becomes ‘Table1’. Indexes are built on ‘Table1_new’ prior to rename.

Updated First Responder Kit and Consultant Toolkit for March 2026

For the last few years, I’ve slowed the First Responder Kit release down to once per quarter. It felt pretty feature-complete, and I didn’t want to make y’all upgrade any more than you have to. You’re busy, and I don’t want to take up your time unless there’s something really big to gain from doing an upgrade.

However, things are changing fast – in a good way – and you can blame AI.

Thanks to AI, we’re able to iterate faster, and bring in more improvements in less time than I ever would have thought possible. I know a lot of folks out there hate AI, but this release is a good example of how it can make your job better.

sp_BlitzIndex now offers AI advice for a table’s indexes. It’s as easy as:
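Something along these lines (a hedged sketch: @AI is the new parameter covered in the documentation linked below, the database and table names are examples, and the other parameters are sp_BlitzIndex’s existing table-level detail mode):

```sql
EXEC dbo.sp_BlitzIndex
     @DatabaseName = N'StackOverflow',
     @SchemaName   = N'dbo',
     @TableName    = N'Users',
     @AI           = 1;
```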

And you get a new result set with advice from ChatGPT or Gemini:

sp_BlitzIndex AI Advice

Click on it to see the advice in more detail, in Markdown for easy copy/pasting into client-friendly recommendations:

sp_BlitzIndex AI Advice

Including the exact index creation & undo scripts:

sp_BlitzIndex AI Advice

For more details on that, check out the new documentation on Using AI with the First Responder Kit, this free video on using @AI = 2, and this video on using @AI = 1.

Another big set of commits this month: Erik Darling had the brilliant idea to run the FRK scripts through Claude Code for an automated code review. I’m kicking myself for not doing it sooner. It found a bunch of real bugs across the scripts, stuff that slipped past everybody for years, plus a lot of code quality things. You can click on each issue in the below list to see the specific fixes for each proc. Make no mistake, most of these weren’t fluffy “your syntax should be better” bugs – these were real bugs, like sp_BlitzLock was over-reporting memory grant values by 8x because a DMV’s contents were already in KB as opposed to data pages. That’s the kind of bug that humans are rarely going to catch because we rarely do things like compare a query’s memory grants between diagnostic tools and their plans.

This release has a breaking change in the AI config tables for sp_BlitzCache (and now, sp_BlitzIndex as well.) We used to have both the AI providers and prompts in the same table, but I needed to normalize that out into two tables now that we’re adding AI capabilities to more procs. If you’ve started playing around with sp_BlitzCache’s AI config table, run this script to migrate your configs to the new table structure before running the new version of sp_BlitzCache.

To get the new version:

sp_Blitz Changes

  • Enhancement: add warning about AI-influencing Agents.md and Constitution.md extended properties being present in user databases. (#3798)
  • Enhancement: warn if Automatic Tuning is in a non-default state. (#3800, thanks Reece Goding.)
  • Enhancement: skip Google Cloud SQL admin database gcloud_cloudsqladmin. (#3818, thanks Vlad Drumea.)
  • Fix: typo in Acclerated Database Recovery Enabled. (#3796, thanks Christophe Platteeuw.)
  • Fix: typo in “individial”. (#3835, thanks CuriousGeoSq and GitHub Copilot – this was our first completely robot-performed bug fix done inside of Github.com. I was delighted by how easy the process was.)
  • Fix: code review by Claude Code found 14 assorted bugs. (#3807 and #3808, thanks Erik Darling.)

sp_BlitzCache Changes

  • Breaking change: the last release used a single Blitz_AI table to hold both AI provider configurations and AI prompts. In this release, to support sp_BlitzIndex having its own set of prompts, we split that Blitz_AI table into two tables. This script will migrate the data from old to new tables. (#3823)
  • Enhancement: include the CONSTITUTION.md extended database property when building an AI prompt so that your company’s code and database standards will (hopefully) be honored. (#3809)
  • Fix: performance tuning by removing a duplicate join. (#3791, thanks Connor Moolman.)
  • Fix: code review by Claude Code found 16 assorted bugs. (#3806, thanks Erik Darling.)

sp_BlitzIndex Changes

  • Enhancement: skip Google Cloud SQL admin database gcloud_cloudsqladmin. (#3818, thanks Vlad Drumea.)
  • Enhancement: add AI advice for table-level index review. (#3670, #3827, #3837)
  • Enhancement: add more support for JSON indexes. Note that SQL Server itself doesn’t appear to track index usage statistics on these indexes, at least not yet as of 2025 CU3. (#3736)
  • Fix: in table level detail mode, show details for resumable index builds that are still building for the first time. (#3812, thanks Reece Goding.)
  • Fix: don’t error out when @GetAllDatabases = 1 is used with restricted permissions. (#3820, thanks Vlad Drumea.)

sp_BlitzLock Changes

sp_BlitzWho Changes

Wanna watch me work on some of these pull requests? Here was a live stream with me plugging along:

Consultant Toolkit Changes

We didn’t release a Consultant Toolkit in conjunction with this month’s FRK release. We’re having some problems with our automated build system, and I didn’t want to hold back the FRK release. Once we get that fixed, I’ll actually do another quiet FRK release (because we’ve already got a couple of things in the works for it), and email the Consultant Toolkit owners about that new release.

For Support

When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. Be patient: it’s staffed by volunteers with day jobs. If it’s your first time in the community Slack, get started here.

When you find a bug or want something changed, read the contributing.md file.

When you have a question about what the scripts found, first make sure you read the “More Details” URL for any warning you find. We put a lot of work into documentation, and we wouldn’t want someone to yell at you to go read the fine manual. After that, when you’ve still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes me!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you’re working with.


Get Free AI Query Advice in PasteThePlan.

PasteThePlan.com
7 Comments

At PasteThePlan.com, you can paste execution plans into your browser, then send a link to someone else to get query advice. It’s useful for online forums, Stack Exchange, and the like.

After you paste the plan, you’ve got a new AI Suggestions tab. It sends your query plan (and my custom prompt) up to ChatGPT 5.3, and in 30 seconds or less, you get a second opinion on ways to make it go faster:

Paste-the-Plan AI suggestions

Here’s the prompt we’re using for now, in case you’d rather do this kind of thing yourself:

Please give specific, actionable advice for the attached Microsoft SQL Server query plan. You can give recommendations to tune the query or the indexes on the underlying tables. Do not give server-level or database-level advice: stay focused on this specific query, these specific tables. Your one-time advice will be handed to the developer responsible for the database. They will not be able to change server-level or database-level settings, and they will not be able to correspond with you again. Distill your advice down to the most important things that are likely to make a difference in performance. Deliver your advice in a friendly, upbeat way because you are on the same team, rooting for their success, and having a good time.

Right now, we’re using OpenAI’s ChatGPT 5.3 with a 25-second timeout. If it times out, we advise you to get out your own wallet and use your own AI tool, since we have to pay for these API calls. I like you, though. I think you’re worth it.


[Video] Office Hours, Standing in the Ocean Edition

Videos
4 Comments

My tripod is probably never gonna recover from the salt water, but the water was so nice that I couldn’t resist. This is a 360 video, so you can grab the screen and move it around to see Magens Bay in St Thomas, US Virgin Islands, as I go through your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 01:22 NotCloseEnoughToRetirementToStopLearning: Hi Brent-In a recent office hours you talked about aggressive stats maint. causing plan sensitivity\cache issues. My shop is aggressive on stats. Can you talk about how to investigate if we have this problem to a point we should adjust are current practice?
  • 02:36 kladze: I’m trying to learn XE, most written material out there is how to setup a basic session and then query it. But since there is a huge amount of objects you can capture, i find it almost impossible to find the once i need. How can i properly learn about XE and the objects i need?
  • 03:37 Sujan N: What is your take on certifications? Eg: DP-300. Does it actually help you step up? What advice would you give to an 8-month old dba on leveling up? Thanks for all the help!
  • 04:50 Mark Richey: With opensource DB solutions like Postgress being the back end of very publicly facing and hugely scaled solutions like ChatGPT along with the like of ahem yourself using and teaching them, can you foresee the big licence products slowly becoming a thing of the past?
  • 05:23 racheld933: Regarding upgrades from 2008r2 or 2012 to 2025, is there ever a situation where you’d want to upgrade to a lower version like 2019 or 2022 before 2025? Or is a direct upgrade always the way to go? I’ve upgraded Postgres in AWS RDS where I was forced to do version hops.
  • 07:51 ArchipelagoDBA: You do recommend do update statistics maybe weekly. I get your reasoning behind that but isn’t it there then a risk that you will run into stale statistics like values outside of the histogram?
  • 08:35 Sov: Any thoughts on managing databases as code in git and building/deploying DACPAC’s to manage the production schema? My org is working to adopt this model
  • 09:29 dbApe: Sometimes my SQL server gets stuck on UpdateQPStats, where app is unavailable, AutoUpdateStats and AutoUpdateStatsAsync are set to True, Is this my cue to turn it off?
  • 10:33 Andrew: Hi, I’ve stumbled into a DBA job and taken on around 30 database servers which were somewhat managed by several different people across the business prior to this. What would be your recommendation to get some consistency across them now that one person is managing them?

Using Claude Code with SQL Server and Azure SQL DB

AI
21 Comments

Let’s start with a 7-minute demo video – I didn’t edit this down because I want you to be able to see what happens in real time. In this video, I point the desktop version of Claude Code at a Github issue for the First Responder Kit and tell it to do the needful:

That’s certainly not the most complex Github issue in the world, but the idea in that short video was to show you how easy the overall workflow is, and why you and your coworkers might find it attractive.

Now, let’s zoom out and talk big picture.

The rest of this blog post is not for people who already use Claude Code. I don’t wanna hear Code users complaining in the comments about how I didn’t cover X feature or Y scenario. This is a high-level ~2,000-word overview of what it is, why you’d want it, what you’ll need to talk to your team about, and where to go to learn more.

I should also mention that I use a ton of bullet points in my regular writing. As with all of my posts, none of this is written with AI, period, full stop. These words come directly from my booze-addled brain, written as of March 2026, and this stuff will undoubtedly drift out of correctness over time.

What’s Claude Code?

Think of it as an app (either desktop or command line) that can call other apps including:

  • sqlcmd – Microsoft’s command-line utility for running queries. You’re used to using SSMS because it’s much prettier and more powerful, but sqlcmd is fine if all you need to do is run queries and get results, and that’s all Claude Code needs to get started. As you get more advanced, you can use something called an MCP that gives Claude Code an easier way to chat with the database.
  • Git / Github – so that it can get the latest versions of your app code (or DBA scripts, or in this case, the First Responder Kit) from source control, make changes, and submit pull requests for you to review. For the purposes of this post, I’m just gonna use the term Github, but if your company uses a different source control method, the same principles apply.

That means it has access to:

  • Your Github issues and pull requests – which may present confidentiality issues for your company.
  • Your local file system – in theory, you might be able to lock this down, but in practice you’re probably going to gradually expand Claude Code’s permissions to let it do more stuff over time.
  • A database server – so think about where you’re pointing this thing, and what login you give it. If it’s going to test code changes, it’s probably going to need to alter procs, create/alter/drop tables, insert/update/delete test data, etc. On harder/longer tasks, it’s also going to be processing in the background while you’re doing other stuff, so you’re probably going to want to give it its own SQL Server service for its development use so it doesn’t hose up yours.
  • Your code base – and if everything before didn’t raise security and privacy concerns, this one certainly should.

Think of it as an outside contractor.

When your company hires outside contractors, they put a lot of legal protections in place. They’ll set up:

  • A non-disclosure agreement to make sure the contractor doesn’t share your secrets with the rest of the world
  • A contract specifying what exactly each side is responsible for and what they’ll deliver to each other
  • Insurance requirements to make sure the contractor will be able to pay for any egregious mistakes
  • Human resources standards to make sure the contractor isn’t high and hallucinating while they work

With AI tools, you don’t really get any of that. That means if you choose to hire one of these tools for your company, all of this is on you. Even worse, anybody on your team can endanger your entire company if they don’t make good decisions along the way. I can totally understand why some/most companies are a little gun-shy on this stuff. It’s right to be concerned about these risks.

Here – and most of the time when you see me working with AI on the blog or videos – I’m working with the open source First Responder Kit, or code that I use as part of my training classes. This stuff is all open source, licensed under the MIT License. I’m not concerned about AI companies stealing my code.

That’s the best way for you to get started, too: play around with Claude Code on an open source Github repo that you usually use as a user (not a developer), like the First Responder Kit, Ola Hallengren’s maintenance scripts, Erik Darling’s SQL Server Performance Monitor, DBAtools, or even Microsoft’s SQL Server documentation. Learn to use Claude Code there, and later on, after you’ve built up confidence and a few good wins, then think about bringing it into your own company to work on your day job stuff. And when you do that…

When your company brings in an outside contractor…

The security and legal teams are going to care about:

  1. What Claude Code has access to – aka, Github, your local file system, your development database server, etc.
  2. Where Claude Code sends that data for thinking/processing – you should assume that it’s sending all of the accessible data somewhere
  3. If you send that data outside your company walls for thinking/processing, your company is also going to care about how the thinker/processor uses your data – as in, not just to process your requests, but possibly for analysis to help the overall public or paying users

This leads to one of the big decisions when you’re using Claude Code: where does the thinking/processing happen?

The thinking can be done locally or remotely.

Claude Code is an app, but the thinking doesn’t actually happen in the app. Claude Code sends your data, prompt, database schema, etc somewhere.

Most people use Anthropic’s servers. They’re the makers of Claude Code. For around $100/month per person, you get unlimited processing up in their cloud. The advantage of using Anthropic’s servers is that you’ll get the fastest performance, with the biggest large language models (LLMs) that have the best thinking power, most accurate answers, and largest memories (context). The drawback, of course, is that you’re sending your data outside your company’s walls, and you may not be comfortable with that.

If you’re not comfortable with Anthropic, maybe your company is more comfortable with Google Gemini’s models, or OpenAI’s ChatGPT models. At any given time, it’s an arms race between those top companies (and others, like hosting companies such as OpenRouter) as to who produces the best tradeoffs for processing speed, accuracy, and cost.

If you’re not comfortable with any of those, you can do the processing on your own server. When I say “server”, that could be a Docker container running on your laptop, an app installed on your gaming PC with a high-powered video card, or a shared server at your company with a bunch of GPUs stuffed in it.

In that case, it’s up to you to pick the best LLM that you can, that runs as quickly as possible, given your server’s hardware. There are tiny not-so-bright models that run (or perhaps, leisurely stroll) on hardware as small as a Raspberry Pi. There are pretty smart models that require multiple expensive and power-hungry video cards. But even the best local models can’t compete with what you get up in Anthropic’s servers today.

The good news is that you don’t have to make some kind of final decision: you can switch between hosted and local models by just changing Claude Code’s config file.

The contractor and prompt qualities affect the results.

Generally speaking, the better/newer the LLM you use, and the smaller the problem you’re working with, the vaguer the prompts you can get away with, like “we’re having deadlock problems – can you fix that?”

On the other hand, the older/smaller/cheaper the LLM you use – especially small locally hosted models – the more specific and directed your prompts have to be to get great results. For example, you may have to say something like, “sp_AddCustomer and sp_AddOrder are deadlocking on the CustomerDetails table when both procs are called simultaneously. Can you reduce the deadlock potential by making code changes to one or both of those procs? You can use hints, query rewrites, retry logic, whatever, as long as the transactions still finish the same way.”

And no matter what kind of LLM you’re using, the more ambitious your code changes become, the more important the prompt becomes. When I’m adding a major new feature or proposing a giant change, I start a chat session with Claude – not Claude Code, but just plain old Claude, the chat UI like ChatGPT – and say something like:

I’m working on the attached sp_Blitz.sql script, which builds a health check report on Microsoft SQL Server. It isn’t currently compatible with Azure SQL DB because it uses sp_MSforeachdb and some of the dynamic SQL uses the USE command. I’d like to use Claude Code to perform the rewrite. Can you review the code, and help me write a good prompt for Claude Code?

I know, it sounds like overkill, using one AI to tell another AI what to do, but I’ve found that in a matter of seconds, it produces a muuuuch better prompt than I would have written, taking more edge cases of the code into account. Then I edit that prompt, clarify some of my design decisions and goals, and then finally take the finished prompt over to Claude Code to start work there.

For now, I use Claude Code on a standalone machine.

I really like to think of AI tools like Claude Code as an outside contractor.

I’m sure the contractor is a nice person, and I have to trust it at least a little – after all, I’m the guy who hired it, and I shouldn’t hire someone that I don’t trust. Still, though, I gotta put safeguards in place.

So I keep Claude Code completely isolated.

I know that sounds a little paranoid, but right now in the wild west of AI, paranoia is a good thing.

For me, it starts with isolated hardware. A few years ago, I got a Windows desktop to use for gaming, streaming, and playing around with local large language models (LLMs). It’s got a fast processor, 128GB RAM, a decently powerful NVidia 4090 GPU, Windows 11, Github, and SQL Server 2025.

I think of that computer as Claude Code’s machine: he works there, he lives there. That way, I can guarantee none of my clients’ code or data is on there, and it doesn’t have things like my email either. When I wanna work, stream, record videos from that Windows machine, I just remote desktop into it from my normal Mac laptop.

When I wanna do client work without sending the data to Anthropic, I’ve got Ollama set up on that machine too. It’s a free, open source platform for running your own local models. It supports a huge number of LLMs, and there is no one right answer for which model to use. I love finding utilities like llmfit which check hardware to see what models can be run on it, and finding posts like which models run best on NVidia RTX 40 series GPUs as of April 2025 or on Apple Silicon processors as of February 2026, because they help me take the guesswork out of experimenting. I copy client data onto that machine temporarily, do that local work, and then delete the client data again before reconfiguring Claude Code to talk to Anthropic’s servers.

How you can get started with Claude Code

Your mission, should you choose to accept it, is to add a new warning to sp_Blitz when a SQL Server has Availability Groups enabled at the server level, but it doesn’t have any databases in an AG. To help, I’ve written a short, terse Github issue for this request, and a longer, more explicit one so you can also see how the quality of the input affects the quality of your chosen LLM’s code.

To accomplish the mission, the bare minimum steps would be:

  1. Install Claude Code (I’d recommend the terminal version first because the documentation is much better – the desktop version looks cool, but it’s much harder to get started with)
  2. Clone the First Responder Kit repo locally
  3. Prompt Claude Code to write the code – tell it about the Github issue and ask it to draft a pull request with the improved code, for your review

Stretch goals:

  1. Set up a SQL Server instance for Claude Code to connect to – could be an existing instance or a new one
  2. Set up sqlcmd or the SQL Server MCP so Claude Code can connect to it – if you use the MCP, you’ll need to edit Claude Code’s config files to include the server, login, password you want it to use
  3. Prompt Claude Code to test its code

You don’t have to submit your actual work as a pull request – I’m not going to accept any of those pull requests anyway. (I’ll just delete them if they come in – and it’s okay if you do one, I won’t be offended.) These Github issues exist solely to help you learn Claude Code.

How I can help

Unfortunately, I can’t do free personalized support for tens of thousands of readers to get their Claude Code setups up and running. At some point, I might build a paid training class for using Claude Code with SQL Server, and at that point, the paid students would be able to get some level of support. For now, though, I wanted to get this blog post, video, and GitHub issues out there for the advanced folks to start getting ahead of the curve.

However, if your company would like to hire me to help get a jump start on using Claude Code to improve your DBA productivity, proactively find database issues before they strike, and finally start making progress on your known issues backlog, email me.


Row-Level Security Can Slow Down Queries. Index For It.

Execution Plans
3 Comments

The official Azure SQL Dev’s Corner blog recently wrote about how to enable soft deletes in Azure SQL using row-level security, and it’s a nice, clean, short tutorial. I like posts like that because the feature is pretty cool and accomplishes a real business goal. It’s always tough deciding where to draw the line on how much to include in a blog post, so I forgive them for not including one vital caveat with this feature.

Row-level security can make queries go single-threaded.

This isn’t a big deal when your app is brand new, but over time, as your data gets bigger, this is a performance killer.

Setting Up the Demo

To illustrate it, I’ll copy a lot of code from their post, but I’ll use the big Stack Overflow database. After running the below code, I’m going to have two Users tables with soft deletes set up: a regular dbo.Users one with no security, and a dbo.Users_Secured one with row-level security so folks can’t see the IsDeleted = 1 rows if they don’t have permissions.
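Here’s a hedged sketch of what that setup looks like – the function and policy names are my own placeholders, not the post’s exact script, and the real predicate would gate on permissions rather than hiding deleted rows from everyone:

```sql
/* Hedged sketch – placeholder names, simplified from the Azure post's pattern. */
ALTER TABLE dbo.Users ADD IsDeleted BIT NOT NULL DEFAULT 0;
GO
/* Copy the table so one stays unsecured and one gets row-level security: */
SELECT * INTO dbo.Users_Secured FROM dbo.Users;
ALTER TABLE dbo.Users_Secured
    ADD CONSTRAINT PK_Users_Secured PRIMARY KEY CLUSTERED (Id);
GO
/* Predicate function: rows are only visible when IsDeleted = 0.
   (The real post gates this on permissions; simplified here.) */
CREATE FUNCTION dbo.fn_FilterDeleted (@IsDeleted BIT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_Result WHERE @IsDeleted = 0;
GO
/* Attach the function to the secured table as a filter predicate: */
CREATE SECURITY POLICY dbo.SoftDeletePolicy
    ADD FILTER PREDICATE dbo.fn_FilterDeleted(IsDeleted)
    ON dbo.Users_Secured
    WITH (STATE = ON);
```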

Now let’s start querying the two tables to see the performance problem.

Querying by the Primary Key: Still Fast

The Azure post kept things simple by not using indexes, so we’ll start that way too. I’ll turn on actual execution plans and get a single row, and compare the differences between the tables:
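The queries are just singleton lookups – the Id value below is arbitrary, picked for illustration:

```sql
SET STATISTICS IO, TIME ON;

SELECT * FROM dbo.Users         WHERE Id = 26837;  /* no security */
SELECT * FROM dbo.Users_Secured WHERE Id = 26837;  /* row-level security */
```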

If all you’re doing is getting one row, and you know the Id of the row you’re looking for, you’re fine. SQL Server dives into that one row, fetches it for you, and doesn’t need multiple CPU cores to accomplish the goal. Their actual execution plans look identical at first glance:

Single row fetch

If you hover your mouse over the Users_Secured table operation, you’ll notice an additional predicate that we didn’t ask for: row-level security is automatically checking the IsDeleted column for us:

Checking security

Querying Without Indexes: Starts to Get Slower

Let’s find the top-ranked people in Las Vegas:
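Something like this – the exact Location string and TOP count are my approximations of the demo, not copied from it:

```sql
SELECT TOP 200 *
  FROM dbo.Users
 WHERE Location = N'Las Vegas, NV, United States'
 ORDER BY Reputation DESC;

SELECT TOP 200 *
  FROM dbo.Users_Secured
 WHERE Location = N'Las Vegas, NV, United States'
 ORDER BY Reputation DESC;
```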

Their actual execution plans show the top query at about 1.4 seconds for the unsecured table, and the bottom query at about 3 seconds for the secured table:

Las Vegas, baby

The reason isn’t security per se: the reason is that the row-level security function inhibits parallelism. The top query plan went parallel, and the bottom query did not. If you click on the secured table’s SELECT icon, the plan’s properties will explain that the row-level security function can’t be parallelized:

No parallelism

That’s not good.

When you’re using the database’s built-in row-level security functions, it’s more important than ever to do a good job of indexing. Thankfully, the query plan has a missing index recommendation to help, so let’s dig into it.

Problems with the Missing Index Recommendation

Those of you who’ve been through my Fundamentals of Index Tuning class will have learned how Microsoft comes up with missing index recommendations, but I’mma be honest, dear reader, the quality of this one surprises even me:

The index simply ignores the IsDeleted and Reputation columns, even though they’d both be useful to have in the key! The missing index recommendations focus strictly on the WHERE clause filters that the query passed in, not on the filters that SQL Server applies behind the scenes for row-level security. Ouch.

Let’s do what a user would do: try creating the recommended index on both tables – even though the number of include columns is ridiculous – and then try again:
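Roughly like this – the index name is mine, and the include list is approximated from the Stack Overflow Users schema rather than copied from the actual recommendation:

```sql
CREATE INDEX Location_AllTheIncludes
    ON dbo.Users (Location)
    INCLUDE (AboutMe, Age, CreationDate, DisplayName, DownVotes, EmailHash,
             LastAccessDate, Reputation, UpVotes, Views, WebsiteUrl, AccountId,
             IsDeleted);

CREATE INDEX Location_AllTheIncludes
    ON dbo.Users_Secured (Location)
    INCLUDE (AboutMe, Age, CreationDate, DisplayName, DownVotes, EmailHash,
             LastAccessDate, Reputation, UpVotes, Views, WebsiteUrl, AccountId,
             IsDeleted);
```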

Our actual execution plans are back to looking identical:

With a covering index

Neither of them require parallelism because we can dive into Las Vegas, and read all of the folks there, filtering out the appropriate IsDeleted rows, and then sort the remainder, all on one CPU core, in a millisecond. The cost is just that we literally doubled the table’s size because the missing index recommendation included every single column in the table!

A More Realistic Single-Column Index

When faced with an index recommendation that includes all of the table’s columns, most DBAs would either lop off all the includes and just use the keys, or hand-review the query to hand-craft a recommended index. Let’s start by dropping the old indexes, and creating new ones with only the key column that Microsoft had recommended:
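In sketch form – drop whatever you named the wide covering indexes, then create key-only replacements on Location:

```sql
DROP INDEX IF EXISTS Location_AllTheIncludes ON dbo.Users;          /* placeholder name */
DROP INDEX IF EXISTS Location_AllTheIncludes ON dbo.Users_Secured;  /* placeholder name */
GO
CREATE INDEX Location ON dbo.Users (Location);
CREATE INDEX Location ON dbo.Users_Secured (Location);
```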

The actual execution plans of both queries perform identically:

Key lookup plan 1

Summary: Single-Threaded is Bad, but Indexes Help.

The database’s built-in row-level security is a really cool (albeit underused) feature to help you accomplish business goals faster, without trying to roll your own code. Yes, it does have limitations, like inhibiting parallelism and making indexing more challenging, but don’t let that stop you from investigating it. Just know you’ll have to spend a little more time doing performance tuning down the road.

In this case, we’re indexing not to reduce reads, but to avoid doing a lot of work on a single CPU core. Our secured table still can’t go parallel, but thanks to the indexes, the penalty of row-level security disappears for this particular query.

Experienced readers will notice that there are a lot of topics I didn’t cover in this post: whether to index for the IsDeleted column, the effect of residual predicates on IsDeleted and Reputation, and how CPU and storage are affected. However, just as Microsoft left off the parallelism thing to keep their blog post tightly scoped, I gotta keep mine scoped too! This is your cue to pick up where this blog post leaves off: take anything you’re passionate about, and extend it to cover the topics you wanna teach today.


Logical Reads Aren’t Repeatable on Columnstore Indexes. (sigh)

Sometimes I really hate my job.

Forever now, FOREVER, it’s been a standard thing where I can say, “When you’re measuring storage performance during index and query tuning, you should always use logical reads, not physical reads, because logical reads are repeatable, and physical reads aren’t. Physical reads can change based on what’s in cache, what other queries are running at the time, your SQL Server edition, and whether you’re getting read-ahead reads. Logical reads just reflect exactly the number of pages read, no matter where the data came from (storage or cache), so as long as that number goes down, you’re doing a good job.”

To illustrate it, we’ll start with the large version of the Stack Overflow database, and count the number of rows in the Users table.
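The demo boils down to something like this – clearing the buffer pool first so the cold-cache numbers show up (dev/test only, never in production):

```sql
SET STATISTICS IO ON;
GO
DBCC DROPCLEANBUFFERS;  /* dev/test only: empties the buffer pool for a cold first run */
GO
SELECT COUNT(*) FROM dbo.Users;
GO 3  /* compare physical & read-ahead reads vs logical reads across the runs */
```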

Statistics io output shows that the first execution has to read pages up from disk because they’re not in cache yet:

The first execution has 4 physical reads and 329,114 read-ahead reads. Those were all read up off disk, into memory. But the whole time, logical reads stays consistent, so it’s useful for measuring performance tuning efforts regardless of what’s in cache.

The same thing is true if we create a nonclustered rowstore index too:
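Any narrow nonclustered index will demonstrate it, since COUNT(*) scans the narrowest object available – the column choice and index name here are mine:

```sql
CREATE INDEX IX_LastAccessDate ON dbo.Users (LastAccessDate);
GO
DBCC DROPCLEANBUFFERS;  /* dev/test only: cold cache again */
GO
SELECT COUNT(*) FROM dbo.Users;
GO 3
```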

Statistics io output shows physical reads & readahead reads on the first execution, but logical reads stays consistent throughout:

But with columnstore indexes on SQL Server 2017 & newer…

On SQL Server 2017 or newer (not 2016), create a nonclustered columnstore index:
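The index name and column list here are placeholders of mine; any nonclustered columnstore will show the behavior:

```sql
/* If you created narrower rowstore indexes earlier, drop them first
   so the columnstore gets picked for the count. */
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Users
    ON dbo.Users (Id, DisplayName, LastAccessDate);
```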

And watch lob logical reads while we run it 3 times:
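That part of the demo is just the same count, repeated against a cold cache:

```sql
DBCC DROPCLEANBUFFERS;  /* dev/test only: cold cache for the first pass */
GO
SET STATISTICS IO ON;
GO
SELECT COUNT(*) FROM dbo.Users;
GO 3  /* watch lob logical reads change between the first and later runs */
```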

Lob logical reads shows 22,342 for the first execution, then 10,947 for the next two passes.

This isn’t true on SQL Server 2016, which produces the same logical read numbers every time the columnstore query runs, clean buffer pool or not. Just 2017 and newer.

Actual live Brent reaction to this issue

<sigh> This is why we can’t have nice things.

This is also one of those reasons why it’s so hard to teach training classes. Stuff changes inside the product, and then years later, a demo you wrote no longer produces exactly the same results. You have to try re-running the demo from scratch, thinking you just made a mistake. Then you have to narrow down the root cause, check each prior version to understand when the thing changed, and Google around to see if anybody else shared this and you just didn’t read that particular post. Then you have to update your own training and write a blog post so that nobody else gets screwed by the same undocumented change – which of course they will, because not everybody reads your blog posts.

You won’t, though, dear reader. At least I helped you out, hopefully. And that makes it all worthwhile. (Not really. I’m going to go have a shot of my office tequila, and it’s not even 10AM as I’m writing this.)


I’m Not Gonna Waste Time Debunking Crap on LinkedIn.

AI
44 Comments

LinkedIn is full of absolute trash these days. Just flat out bullshit garbage. (Oh yeah, that – this post should probably come with a language disclaimer, because this stuff makes me mad.)

People wanna look impressive without actually putting in the work to gain real knowledge. They’re asking ChatGPT to write viral “expertise” knowledge posts for them, and they’re publishing this slop without so much as testing it.

I’m going to share an example that popped up on my feed, something LinkedIn thought I would find valuable to read:

AI bullshit slop

It’s pretty. It looks like it was written by an authoritative source.

But if you drill just a little deeper, there are telltale giveaways that the author is a lazy asshole who wastes other people’s time. They didn’t bother to put the least bit of fact-checking in. I’m not even talking about the overall accuracy, mind you – let’s just look at the comparison table. On the left side, there are two sections marked “Efficiency”, and on the right side, two sections marked “Usage”:

Efficiency Efficiency

That doesn’t make any sense. Then keep reading, and look at the bottom sections. On the left, they both say the same thing – but only one thing is checked:

WELL WHICH ONE IS IT

Thankfully, the situation is much better on the right side, where, uh, both things are checked, so that’s also meaningless:

So nice they checked it twice

I hate this bullshit. I hate it. Haaaaate it. I work so hard to help debunk query myths and help you write better queries, and then some jerk-off like this slaps a prompt into ChatGPT, creates a pretty (but altogether full of crap) table, and it gets engagement on LinkedIn – thereby spreading misinformation all over again.

If you’re lucky, and the thought slop leader hasn’t tried to hide their source, at least LinkedIn puts a little “content credentials” icon at the top of AI-generated images. You can hover your mouse over it like this:

Content Credentials

ChatGPT and Google Gemini are both labeling their images with hidden tags, helping sites like LinkedIn identify content that was AI-generated. However, ambitious authors can strip those tags out, trying to claim ownership of their content. (sigh) And they will, because they’re in a race to be the best slop leaders.

See, LinkedIn actually rewards bad content because commenters jump in to point out the inaccuracies, thereby making LinkedIn think the content was comment-worthy, and so it should be promoted to more viewers. Those viewers in turn don’t read the comments, and they just think the original post was merit-worthy – after all, it was recommended by LinkedIn – which spreads the misinformation further.

I love AI, and I use it every single day, but I hate the holy hell out of what’s happening right now.

So even though it drives me absolutely crazy to see this fake knowledge being passed off as truthful, I’m not gonna bother debunking it. These morons can create it faster than I can debunk it. I can’t even block these “authors” when I see them writing trash, because… they’re the very people who need to be reading my stuff! Sure, they’re slop leaders today, but tomorrow they may turn the corner and want to start actually learning SQL, and when they see the light, I wanna be there for them.

I’m just gonna keep offering you the best alternatives that I can: real-life, hands-on material that I’ve learned through decades of genuine hard work. Hopefully, you’ll continue to see my work as worthy, dear reader, and keep sharing the good stuff that you like, and keep investing in the training classes that I produce. Fingers crossed.


[Video] Office Hours: Back in the Bahamas Edition

Videos
1 Comment

Yes, I’m back on a cruise ship with another 360 degree video. Lest you think I’m being wildly irresponsible (or responsible perhaps?) with your consulting and training money, be aware that this particular cruise was free thanks to the fine folks in the casino department at Norwegian Cruise Lines. In between beaches and blackjack, let’s go through your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 01:22 Petr: How would you convince a customer running our app on SQL Server 2022 in a sync AOG to enable delayed durability on their production databases? Our load tests show a 10× improvement in write throughput and elimination of commit latency spikes, with acceptable data-loss risk.
  • 05:30 HeyItsThatGuyAgain: You’ve done a lot to support the “Accidental DBA.” Do you recommend any resources for the “Accidental Data Team Manager?”
  • 05:51 How Did I Even Get Here: I am one of only 4 DBAs at a big university that lumps Dev and Prod together. I am our only SQL Server DBA and I assist with Oracle. This is my first DBA job. I can study what I want in my free time but I’m overwhelmed. Do you have any advice for how to decide where to explore?
  • 07:17 ArchipelagoDBA: We have been using defautl sampling for our statistics updates. We have now noticed that this is causing low quality execution plans. Are there any risks to go flat out enforcing FULLSCAN for all or are there better to gradially roll it out to help with specific queries?
  • 08:31 SteveE: Hi Brent, How is the adoption of AI development tools looking out in the real world? Are you seeing many clients using fully automated AI development tools, an uptake in assisted programming using chatbots or are teams still using traditional methods?
  • 10:10 Subash: I need to take migration assistance report for 2025 servers.. For upgrade 2019 to 2025 so that my manager asked the compatibility changes report before upgrade I checked in DMA 2025 option is not available..i checked in SSMS 21 version available till 2022 not for 2025.help
  • 10:56 EagerBeaver: Hi Brent, can parameter sniffing happen on a query (not SP)? I run the same query twice with different parameter and second query got same cardinality estimation as the first one. Index statistics for used index are recalculated and 2 parameter have vast difference in Equal Rows.
  • 13:48 I_like_SQL_Server: Hi Brent, We have some wide and large tables with unindexed foreign keys (3rd p db). There are too many FKs so I cannot index them all, how should I think and how do I prioritize? You have a modul in Mastering index tuning about FKs but it doesn’t take up this specific challenge.
  • 15:02 Stumped: Every day something causes the log file in one of my very large databases to grow to over a terabyte, which fills the drive. How do I find out what is doing that?
  • 16:18 2400BaudSpeedster: I dislike copilot and can’t figure out how to like it. Is there anyway to avoid copilot integration besides not upgrading past a certain version? Any suggestions on how to get over it and somehow embrace it?

Who’s Hiring Database People? March 2026 Edition

Who’s Hiring
13 Comments

Is your company hiring for a database position as of March 2026? Do you wanna work with the kinds of people who read this blog? Let’s make a love connection.

You probably don't wanna hire these two.

If your company is hiring, leave a comment. The rules:

  • Your comment must include the job title, and either a link to the full job description, or the text of it.
  • An email address to send resumes, or a link to the application process – if I were you, I’d put an email address, because you may want to know which applicants are readers here; they might be more qualified than the applicants you regularly get.
  • Please state the location and include REMOTE and/or VISA when that sort of candidate is welcome. When remote work is not an option, include ONSITE.
  • Please only post if you personally are part of the hiring company—no recruiting firms or job boards. Only one post per company. If it isn’t a household name, please explain what your company does.
  • It has to be a data-related job.

If your comment isn’t relevant or smells fishy, I’ll delete it. If you have questions about why your comment got deleted, or how to maximize the effectiveness of your comment, contact me.

Each month, I publish a new post in the Who’s Hiring category here so y’all can get the latest opportunities.


I’ve Been Using Macs for 20 Years. Here’s What You Wanted to Know.

Home Office
12 Comments

tl;dr – It’s easier because I’ve always chosen to run SQL Server VMs, so it was easier for me to switch than you might expect – but if you’re a Microsoft IT pro, I don’t recommend switching.

Home office setup, circa Jan 2026

Now for the long version.

About 20 years ago, back in 2006, I excitedly blogged that my boss at the time had agreed to let me buy a Mac (with my employer’s money). I’d been really frustrated with Windows for quite a while at that point. Even today, the Windows 11 start menu disgusts me. I literally paid for this operating system, why are you showing me ads and irrelevant garbage?!?

I’d been trying to make the switch from Windows over to Linux since around 2002, and I’d never been able to make it stick. I kept having problems on Linux with hardware, driver support, apps, and just plain usability. Apple’s Mac OS seemed to be a gateway drug to Linux: it was built atop a BSD Unix core, so I thought I’d be able to use Apples as a stepping stone to transition all the way over to Linux.

It didn’t end up working out that way. I was so delighted with Apples, and their ecosystem kept growing. Today, it includes phones, tablets, headphones, TVs, and the third party ecosystem stretches out far beyond that. I decided to stick with Apples rather than move on to Linux.

If you’re on Windows and you’re thinking about making the switch today, I actually wouldn’t recommend it for most folks. The mental work required to switch platforms is kind of a pain in the rear. You’ll be far less productive for the first year or two, and any supposed gains won’t come until long after you’ve had many frustrations along the way. If you do make the switch, I’d recommend a pre-built Linux machine like the ones from System76 or Lenovo. Macs are great, but the current OS (Tahoe) is a hot mess – I haven’t upgraded myself; I’m still on Sequoia. Anyhoo, on to how it works for me.

The first big question:
how does SQL Server work?

My MacBook Pro

Hot take: it doesn’t really, and it doesn’t matter. Hear me out.

When I first made the switch 20 years ago, I was a production DBA, and I didn’t run SQL Server locally anyway. I didn’t even run Management Studio locally – I used a jump box, a VM in each data center with all my tools installed. I’m a huge, huge believer in jump boxes:

  • When you need to run something for a long period of time without worrying about disconnects, jump boxes make it easy
  • When your employer decides to mandate workstation reboots to apply some stupid group policy or Windows update, no problem
  • When there’s a network blip between your workstation and the data center, your queries don’t fail
  • When you have multiple domains, like complex enterprises with a lot of acquired companies, no problem – you can set up multiple jump boxes or multiple logins
  • When you have to support a diverse environment that requires different versions of SSMS, some of which may not play well with each other, it doesn’t matter, because you just build different jump boxes for different needs
  • When you don’t have your laptop available, like if you’re visiting a friend or family, no problem, as long as you can VPN & RDP in
  • When your laptop dies, you can still tackle production emergencies while you get the new laptop up to speed – you just need RDP

Notice that none of those start with “if” – they start with “when”. You might be lucky enough to be early enough in your career that you haven’t hit those problems yet, and you may even make it all the way through your career without hitting them – just like you might make it all the way through your career without needing to restore a database or do a disaster recovery failover. You might hit the lotto, too.

There are two kinds of DBAs in the world: the experienced, prepared ones, and the ones who run SSMS and SQL Server locally on their laptop.

After witnessing a lot of nasty disasters, I’m pretty passionate about that, and you’re not going to convince me otherwise. When I have a long term relationship with a client, they give me a VPN account and a jump VM, and that’s the end of that. I know there are going to be commenters who say, “But Bryant, I work with small businesses who can’t afford a jump VM,” and I don’t have the time or energy to explain to them that I work with small businesses too, and their sysadmins already have their own jump boxes because they’re not stupid. Small jump boxes in AWS Lightsail are less than $50/month.

Consultants and trainers need SQL Server though.

I said I made the switch back when I was a production DBA, and I had no need for local SQL Server then. When you’re a consultant and/or trainer, though, you’re gonna have to do research, write demos, and show things to clients, which means you’re gonna need access to a SQL Server.

Most consultants and trainers I know use a local instance of SQL Server for that. In theory, you can run SQL Server on macOS in a container. I don’t, because it still doesn’t give me SQL Server Management Studio. When I’m teaching you performance tuning, I have to meet you where you are, and show you screenshots & demos with the same tools you use on a daily basis, and that means SSMS. So for me, the container thing is useless since I need Windows anyway – there’s no SSMS on the Mac, at least not yet.

I use cloud VMs and cloud database services (like AWS RDS SQL Server and Azure SQL DB) because I’m picky about a few requirements.

I want my demos to use real-world-size databases and queries, which means the large versions of the Stack Overflow database. I want to deal with 100GB tables, and I want to create indexes on them in a reasonable time, live, during classes. There are indeed laptops large enough to handle that – for example, my MacBook Pro M4 Max has 16 cores and 128GB memory – but also…

I want redundancy, meaning multiple SQL Servers ready to go during class. If something goes wrong with a demo, I want to be able to switch over to another instance without losing my students’ valuable time. I can’t tell you how many presentations I’ve sat through where the presenter struggled with a broken demo, saying things like, “Hang on, let me try restarting this and restoring the database, I can’t understand why this is happening…” They’ve already lost the audience at that point. Like a timeless philosopher once said, ain’t nobody got time for that.

I teach live classes online, and if a local instance of SQL Server is struggling with a nasty query, it’s going to affect the video & audio quality of my live stream – especially if I’m running multiple local VMs, some of which may also be restoring databases in the background to prep for the next class.

I have to jump around from demo to demo when I’m working with clients on private issues. They may be facing several radically different issues, or they may want me to jump to an unplanned topic. Because of that, I need multiple instances ready to go with fresh, clean Stack Overflow databases. After each demo, I can kick off a database restore to reset the server back to baseline, while I switch over to another VM to keep moving on the next demo.

I have to teach onsite sometimes, and we’re talking about hardware requirements that are way beyond even the largest laptops. I would either have to haul around multiple laptops and a networking setup, or … just have internet access. I know that a decade ago, it was common to be in environments where you might not have internet, but that hasn’t happened to me in a long, long time.

So for me personally, local VMs are not the answer. It doesn’t matter whether my laptop is Windows, Mac, or Linux, I just can’t accomplish the above goals with local instances of SQL Server. Whenever I’m teaching, I fire up multiple cloud VMs, all based off my standardized SQL Server & SSMS image, with my demos ready to go. I open RDP connections to each of them, and then I can switch back and forth between them.

I typically use these AWS instance types:

  • i7i.xlarge: 4c/32GB, 937GB NVMe, ~$4 for 10 hours
  • i7i.2xlarge: 8c/64GB, ~1.9TB NVMe, ~$8 for 10 hours

Those costs can pile up – there are months where my VM bill is around $1,000! However, those are also months with high income, so I look at it as the cost of doing business. Again, those costs would be present whether I was running a Mac laptop or not.

If your time is effectively free, then a more cost-effective solution would be to buy or rent a big server, rent space in a colo somewhere, install a hypervisor, and manage remote connections to it. I have tried that (very briefly), and I don’t have the patience to deal with support problems on mornings when I’m trying to prep for a client engagement or training class.

I’m not trying to convince you to do any of this. Really, switching to Macs doesn’t make sense for most Microsoft data professionals, and it never has. However, SQL Server isn’t the only thing I do, and I personally happen to like the way Macs handle a lot of the other stuff I do.

In the midst of setting up my home office in a different room of the house
Skytech gaming PC at bottom right

I do still use Windows machines! I have a Skytech Windows gaming PC with an NVidia 4090 that I use for Claude Code and for local AI models, and I run an instance of SQL Server 2025 on there for quick query tests or simple blog posts when I’m in my home office. I also have a leftover Windows laptop that I use as a side monitor when I’m live streaming and looking at my side camera, answering audience questions. I just run Chrome on that though.

Things I love about Macs

The hardware is fantastic. Apple Silicon processors are ridiculously battery-efficient, powerful, and have brilliant thermal management. I haven’t heard a computer fan since the Silicon processors came out in 2020. There have been short 2-3 day trips where I haven’t bothered to bring a laptop charger because the thing just runs for days. The drawback is that Apple’s hardware, while fantastic, doesn’t offer cutting-edge features that you might find in other brands of laptops, phones, and tablets. For example, I’ve got a Huawei Mate XTS dual-fold phone that I absolutely adore, and I wish Apple offered something similar, but they probably won’t for another year or two at least.

The hardware actually has resale value. I know the pricing seems expensive at first, but Macs hold way more value than PC laptops. In late 2024, I bought my M4 Max (16″, M4 Max, 128GB RAM, 2TB SSD, nano-texture display) for $5549. Out of curiosity, I just ran it through a couple of buy-your-Apple-device sites, and the average trade-in value was $3,300. I usually trade in my Macs every couple/few years, and the ownership cost usually averages out to about $100/month. That’s a similar cost of ownership to buying a new $3,000 PC laptop every 3 years, and those things are worthless after 3 years of hard road use.
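That back-of-the-envelope math checks out. Here’s a minimal sketch – assuming a 2-year trade-in cycle for the Mac (my “couple/few years”) and zero resale value for the 3-year-old PC, which are my assumptions, not precise figures:

```python
# Rough cost-of-ownership math from the paragraph above.
# Assumptions: 2-year Mac trade-in cycle, PC worth $0 after 3 years.

def monthly_cost(purchase_price, resale_value, months):
    """Net ownership cost per month: what you paid minus what you recoup."""
    return (purchase_price - resale_value) / months

mac = monthly_cost(purchase_price=5549, resale_value=3300, months=24)
pc = monthly_cost(purchase_price=3000, resale_value=0, months=36)

print(f"Mac: ~${mac:.0f}/month")  # roughly $94/month
print(f"PC:  ~${pc:.0f}/month")   # roughly $83/month
```

Stretch the Mac cycle to 3 years and the number drops further, which is why the sticker-price gap is less lopsided than it looks.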

The shared memory architecture is great for AI users. My MacBook Pro has 128GB of memory, and there’s no division between memory used by the operating system and memory used by the video card. There’s not really even such a thing as a video card – it’s all integrated onboard in Macs. As a result, you can use MacBook Pros to run giant machine learning models that can’t possibly fit in 16-32GB PC video cards, let alone laptops. However, if you’re working with 16GB quantized models that do fit in a desktop NVidia graphics card like a 4090 or 5090, the NVidia card will absolutely smoke Apple Silicon processors in terms of token processing speed. (Sure, Apple fans will tout that the MBP can do the processing on the road, on battery, silently, but still, you’re not gonna be happy with Apple Silicon’s AI speeds if you’re moving from a desktop 4090 or 5090.)

The operating system is stable. I don’t remember the last time I had an OS crash or error that required a reboot. However, the term “stable” also means that there haven’t been any significant advancements in the last decade or so (which is why I was looking at moving back to Windows for a while there.)

There are a lot of ecosystem benefits. When you copy/paste on a MacBook Pro, the same copy/paste buffer is available on any of your devices – you can paste on your iPad or iPhone. AirDrop lets you easily push photos, files, contacts, whatever to other devices, including other devices around you. When a text comes in on your phone, you see it on all your devices, and your messages are synced across all of them. (Mark a message read on your laptop, and it’s read on your phone too, etc.) Pair your AirPod headphones on your phone, and they automatically work on your laptop too. Pretty much anytime you think about how data could be shared across different devices, it just already works on Apples.

There are a ton of neat apps available. Now, this is where it gets tricky. You’re reading this Microsoft database blog, and you’re likely doing a lot of work on the Microsoft platform. That also means you probably work for a company that licenses the Microsoft suite for all employees, and relies on it every day. You live in Outlook, Excel, Teams, and SSMS. I’mma be honest, dear reader: those apps are garbage on Macs. Oh sure, there are versions available (except for SSMS), but those versions are sad facsimiles of their Windows equivalents. Outlook and Excel in particular are amazing at what they do on Windows.

So if you decide to switch, you’re likely going to end up using a lot of other apps instead. Long ago, fellow Mac & Linux user Jeremiah Peschka got me started on the concept of, “If the app is available in the browser, you should use the browser version,” and that’s paid off. I use the Google online suite for my email and calendaring, and a lot of web apps for my business work. When I do have to use a local app, it’s likely something that’s in my task bar below:

My taskbar

The fewer Microsoft apps you rely on to do your job, the easier you’ll find it to switch to Macs. The more of them you use – and in particular, if they include the O365 suite – then honestly, you shouldn’t switch. You’ll be happier in Windows.

Your questions from LinkedIn

That's Anthony Bourdain by painter Cassie Otts
Side camera laptop to show audience questions during streams

I posted on LinkedIn that I was going to write a blog post about this, and I asked y’all what you’d want to know. Here were your questions:

“Are you benefiting from the shared memory architecture for local LLMs?” – Eugene Meidinger – Yes, I use LMStudio to run large local LLMs, and it’s really useful when I’m working with clients. I don’t wanna paste their code into a cloud-hosted LLM that may not take privacy seriously. I would only recommend this if you need the privacy aspect though – otherwise cloud-based LLMs from Anthropic, Google, and OpenAI are sooo much better and faster.

“Is it a pain to run Parallels for Windows-only software?” – Eugene Meidinger – You’re going to laugh: I do have Parallels installed, but I only use it to run … Mac OS VMs, hahaha! I don’t even have a Windows VM set up. I had them pre-2020, but when Apple made the switch to Silicon processors, the ARM version of Windows was in a pretty sorry state. At that point I just decided to draw the line in the sand and be done with Windows locally, period. Besides, I’d switched to Mac so long ago at that point that SSMS was the only Windows-only app I had left.

“What was the learning curve like, and how long before you felt fully productive?” – Rebecca Lewis – It was terrible. AWFUL. It was probably 2 years before I felt fully productive at the same speed that I was before. This is the single biggest reason that I wouldn’t recommend that any seriously experienced Windows user make the switch.

Funny side note: I forced my mom to make the switch. I used to do tech support for her, but at one point, I was just so rusty on basic Windows consumer support questions that I said, “I haven’t used Windows for years, and I just don’t know how to fix your new printer.” So I bought her an iMac, introduced her to the nearest Genius Bar, and they took over tech support. That was awesome. I haven’t fielded a tech support question from her in years.

“Did you switch because of SQL Server work, or despite it? I mean was the move to Mac about improving your SQL Server workflow, or was SQL Server just baggage you brought with you?” – Rebecca Lewis – Despite it. I was a production DBA at the time, but I’ve always just been curious and liked trying new things. SQL Server was definitely just baggage that I brought with me.

“Is there anything that Windows has that you miss or would like in Mac?” – Vlad Drumea – A lot!

  • Outlook, Excel, and SSMS for sure. Outlook and Excel technically exist on Macs, but they’re nowhere near as fast or feature-complete as their Windows counterparts – and SSMS doesn’t exist on Macs at all.
  • Power BI Desktop. The lack of a Mac version actually stopped me from using Power BI going forward – I tapered off my Power BI usage a couple years ago when it seemed clear that Microsoft wasn’t going to build a Mac client.
  • Games are a weak spot too. Often I’ll read about a game on Steam (like Decimate Drive), check the operating system requirements, and sigh.

“I’d be most interested in whether there are ways to make the MSSQL extension for VS Code feel comfortable, now that Azure Data Studio is not long for this world.” – Daryl Hewison – When it comes to database developer tooling, Microsoft seems to have all the attention span of a toddler hopped up on espresso. I want my blog posts to meet people where they’re at, so to speak – I want the pictures to seem familiar – so I stick with SSMS for now. I’m going to stay on that route in 2026, and in 2027, I’ll revisit to see whether the MSSQL extension for VS Code has been consistently improved, and whether the team is staying on top of the GitHub issues.

“How do you like the terminal and package management?” – Phil Hummel – I don’t think any consumer operating system has really solved the problems of package management and virtual environments cleanly yet. For example, if I wanna experiment with a data analytics tool, it’s probably going to have all kinds of package requirements that slightly differ from other tools that I have installed, and the old & new packages won’t play well with each other. Virtualization still feels like the safest, cleanest answer to me.

“Any issues with PowerShell?” – Ron Loxton – I’m probably the world’s lightest PowerShell user. I just use it to merge text files together, so it’s fine for me.

“Why Mac over a Linux distro?” – Mark Johnson – I touched on this above, but I wanna reframe it as, “If you were gonna switch away from Windows in 2026, would you switch to Mac or Linux?” I’m heavily into the Apple ecosystem (I have an iPhone, iPad, Apple TVs, HomePods, etc.), and there are benefits to keeping everything in the ecosystem. However, if I weren’t in that ecosystem – like if I used an Android phone as my daily driver – then I’d definitely buy a Linux laptop from a specialized vendor like System76, and spend a week banging on it to test basic tasks: USB, Bluetooth, wireless networking, pairing with a cell phone, editing PowerPoints, remote desktopping into places with Entra authentication, Zoom meetings, printing, and closing the laptop to see if sleep/resume worked without it setting my laptop bag on fire. If it worked, I’d go with that. If not, System76 has a 30-day return policy, so I’d return it, and give Macs a shot.

If you’ve got any questions about my Mac work, feel free to leave ’em in the comments and I’ll answer ’em there.


The Tech Consulting Market Isn’t Looking Good.

Consulting
18 Comments

After hearing the same dire news from several of my consulting friends, I put a poll up on LinkedIn:

Consultant poll results

About half of the consultants out there are having a tougher time bringing in new clients than they have in the past.

There are a couple of things to keep in mind about the numbers. First, some folks call themselves consultants, but they’re really long-term contractors, working full time for the same client for months (or years!) on end. Since they’re not out looking for new work, they may not see the shift in the market yet.

Second, I purposely said “tech consultants” because I wanted to cast a wide net: people who work with any kind of technology. Of course, my peer group on LinkedIn skews towards data people and developers, though.

Finally, I didn’t quantify anything here, and it’s a really short, simple poll with just 3 choices. There aren’t options for much better or much worse, nor does it ask for any quantifying data like number of incoming leads, billable vs unbillable time, or even whether or not the consultant is even looking for new work. It’s just a quick straw poll to help y’all look around the room to see what’s going on.

In the poll’s comments and on related social media discussions, opinions were all over the place about the root causes. Some people say it’s uncertainty, others say it’s the US economy, and of course there’s the elephant in the room, AI. I’ve talked to several existing clients where managers have said, “For 2026, anytime we wanna buy something, build something, or hire someone, we’re gonna try AI first and see what happens.” Yesterday, tech company Block announced that they’re laying off about half of their 10,000 employees in order to force the rest to use AI. Block has always been cutting-edge, and I wouldn’t be surprised if many other big companies – and not just tech ones – follow suit.

Oh I remember these.

February 2026 feels like March 2020.

We know something’s going on, and there’s a lot of fear, uncertainty, and doubt about what the implications are.

Back in March 2020, I remember sitting in Iceland, getting ready to go home because the US State Department told us to, and I was thinking, “Well, business is going to shut down for 6-12 months, so I guess I’m gonna be on the bench for a while.” I actually put serious thought into deciding which tool I was going to learn next because I’d have so much free time. As it turned out, the uncertainty cleared pretty rapidly, and in 2020-2022, most of us in tech worked more (and harder) than we ever had before, helping companies deal with chaotic change. Will 2026-2027 go the same way? It’s too early to tell.

This post doesn’t offer precision or analysis. I just wanted to give y’all a place to chat about it, and to know that you’re not alone. It’s certainly happened to me too – my new consulting pipeline almost shut off starting in December, although it hasn’t really affected me yet because I’d long planned to mostly be on vacation Dec-Feb, and then work on training material for SQL Server 2025 & SQLBits when I returned to the office in March. I’m keeping an eye on incoming leads, though, because it’s wild how quickly it shut down.


[Video] Office Hours in the Vegas Home Office

Videos
0

Let’s hang out in my home office in Vegas and go through your top-voted questions from https://pollgab.com/room/brento. The audio’s a little echo-y in this one because I just moved my office down to a small first floor guest bedroom with hardwood floors instead of carpet, and I haven’t treated the room for noise yet.

Here’s what we covered:

  • 00:00 Start
  • 03:56 DBAInAction: Hi Brent, Have you actually come across anyone running SQL Server AGs in containers, or is that more of a ‘paper’ solution? Honestly, do you think containerizing SQL Server is even worth the effort? Thank you!
  • 04:24 New Developer: After 30 years as a prod-support DBA I got laid off but got a Development DBA position. I know SQL, batch scripting, and can usually tell what a Powershell script is doing, but I’m not a developer, at least not yet. Any advice on how to proceed? I’m pretty lost.
  • 08:33 Little Bobby Tables’ mother: Even with DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE, I still can’t reproduce the slow first execution of a SP each day. Is there another SQL cache that can be cleared? Is there an OS cache for on-prem? (my entire Dev team has done your courses – thank you)
  • 11:07 Blommetje: Do you ever consider quitting SQL and doing something completely different or just retire?
  • 14:44 Dopinder: Should we be running regular CheckDB checks on log shipping secondary? Have you ever seen DB corruption travel from primary server to log shipping secondary server?
  • 16:02 CorporateDBA: To move sysadmin logins to least privilege, I tried SQL Audit, but the log growth is unmanageable. How do you identify the “working set” of permissions for a legacy user without killing disk I/O? Is there a lightweight way to profile needs over time?
  • 16:43 MyTeaGotCold: Know any good resources for Resource Governor?
  • 18:24 TheOneThatFailedTheInterview: Interview question: How to estimate resources for new SQL project (I know, it is very generic)? Answer like extrapolate from previous system did not make interviewers happy. Is there some scientific method or is it always a guesstimate?
  • 22:16 marcus-the-german: Hi Brent, you mentioned that you paint your green screen. Is this a special paint for green screen or just green?
  • 24:08 Andrew D: We are on Hyperscale planning to move from USWest to USWest2 so we can use Availability Zones. There is documentation around using FOG to move the primary but I can’t find anything about moving HA or Named replicas (assume manual) Are you aware of any gotchas?
  • 25:21 Raphra: Will get a chance to see you at Fabcon this year?
  • 26:58 marian: Hello Brent, Have you ever thought of taking some young apprentice and training him / her in the Jedi arts so he / she can carry on your outstanding community work?
  • 28:01 Felipe: Why does it seem harder for DBAs to work for companies outside their own countries? I see developers working under many different models for companies around the world.
  • 30:43 AndrewG: With the current job atmosphere. Is Jr/entry support dba dead? It seems hoping for a data analyst job ( even though this seems limited as well) is the way to go and hope for DBA experience.

I Don’t Take Private Questions at Pre-Conference Classes.

Writing and Presenting
4 Comments

When I first started teaching database sessions at conferences, I noticed a pattern.

When I finished the session, closed PowerPoint, and thanked everyone for attending, a big queue of attendees would instantly start forming at the podium. People would line up to ask private questions, one at a time.

And simultaneously, a small audience would form – people who wanted to hear every question and answer, but didn’t actually want to ask any questions of their own.

As a presenter, that sucked. There were sessions where I couldn’t leave the room until two hours after my scheduled time was over. Several different people in line would ask the exact same question, but because the line was long, they wouldn’t hear that it had already been asked and answered, so I’d have to repeat myself.

So I came up with a better solution – for me, at least, and maybe it’ll work for you.

Whenever I’m teaching a day-long pre-conference class at a conference, I start with a few logistics slides. I explain what the attendees will learn during the class, how bio breaks will work, and that questions are welcome at any time through the day. Just raise your hand whenever, and we’ll cover your questions right there, as long as they’re related to what we’re discussing.

However, I explain that I have three rules.

First, I can’t take any private questions during bio breaks. I need to pee too, and I have to set up the demos for the next module. Without this, my bio breaks would be utterly frantic. Even WITH this, I still get people coming up during the bio breaks, and I have to gently remind them to raise their hands during the class because I need the break time to set up.

Second, the day will finish with an hour of open Q&A. I aim to end the class at around 4PM, and at that time, we’ll take a short 5-minute bio break. People who want to beat rush hour traffic or catch a train are welcome to pack up and go, because the official training material is done. People who want to ask questions are welcome to stay, and the rest of the time is spent doing totally open Q&A. Any questions are welcome, whether they’re related to the training material or not – you’re welcome to ask whatever question has been giving you trouble at work.

Third, when the open Q&A is over, there are no private questions allowed. I’ll stand and answer questions as long as people keep asking them, but when you stop asking questions, THE CLASS IS OVER. I will thank everyone for attending, and I’ll pack up my laptop and go. If there’s a question you don’t feel comfortable saying in front of the group, you’re welcome to email me at help@brentozar.com, and I’ll hit that after I get home from the conference. (They’re given specific things to include with their help email, and told that if they don’t include those things, they’ll get my standard template response that suggests ways they can get help for free.) I make a joke out of it by saying, “When open Q&A is over, I’m going directly to my hotel room for a bottle of wine to recover, and you are not going to stand in between me and that wine.”

This reshapes the end of the training class.

I announce that the training material’s done, and that we’re going to take a 5-minute break so folks who want to leave can pack up and head out, and then we’ll switch to the open Q&A portion. I thank everyone for coming today, and take a bow, and the “finishing” round of applause hits. I put a 5-minute timer up on the screen, and I close all my apps so people can see that we’re done with that.

Then, when the open Q&A starts up, it’s genuinely fun. People understand that we can jump around to any topic they’re interested in, and the more questions they hear, the more comfortable they seem to be in posing their own.

But as the questions start to slow down after a while, I have to remind folks about Rule #3. I’ve learned over the years that if I don’t keep repeating it as the questions taper down, there’s always gonna be somebody who tries to pull me aside afterwards to say, “Hey, I just had a few quick questions,” and then they open up a Word doc with a wall of questions their boss sent them to class with.

As the Q&A slows down, as we approach 5PM, I’ll say in a joking tone, “Going once… going twice… any more questions? Remember, after this is over, I’m going to pack my laptop up, go to my hotel, and order room service and a bottle of wine, because I need to recover. When there are no more questions, this class is over, and there will be NO PRIVATE QUESTIONS, remember? You’re not going to line up and accost me afterwards, right? Any more questions?” I genuinely want to hit every single question, and I want everyone in the audience to learn from every question. Questions and answers are so much fun – heck, that’s even part of why I do so many of them in my Office Hours videos, and clearly thousands of y’all love watching me answer other peoples’ questions.

Eventually, the questions stop, and I close things out by thanking everyone for coming, and wishing them a safe drive home. I close my laptop, yank the HDMI cable out, and as I’m putting the laptop in the bag, at least a couple people still line up, every single time. I still say, “No, sorry, remember, we talked about this, we’re not doing private questions.” And they say the same thing every time – “yes but it’s just one question…”

By writing this blog post, I don’t expect anything to change. The people asking these questions aren’t listening to my instructions right there in front of the room, spoken out loud. They’re sure as hell not reading blog posts like this proactively, and filing that information away in their mind for whenever they see me at a pre-conference class. But those of you who are loyal long-term readers, the ones who really do pay attention, will see what happens, and you’ll smile and nod at me when it happens, because you’ll remember this blog post. And we’ll share a smile and a chuckle while I try to politely pack up my laptop bag and go home.

Wanna see me in action? I’m teaching an all-day training course at SQLBits called Dev/Prod Demon Hunters, and here’s how it’ll work.


10 Signs It Was Time to Hire Me

Clients and Case Studies

You’ve been reading my blog, watching my videos, and maybe even taking some of my training classes. You’ve heard me say things like “my clients” from time to time, and you’re wondering… why do companies actually hire me? What problem are they trying to solve?

Well, when clients meet with me for the first time, I have a series of questions I put up on the screen for them to talk through:

Why this? Why now? Why me?

The “Why now?” one basically asks, “What’s the straw that broke the camel’s back? What made you pull the trigger to schedule a sales call with me today as opposed to waiting another couple of weeks?”

Here’s a rundown of several of my recent clients and the reasons they hired me, in no particular order:

1. “Our Azure bills are growing out of control.” The small company’s customer base had been growing 10-20% per year, but their Azure bills had more than doubled over the last year. They kept hitting 100% CPU, and they’d upsized from 4 cores, to 8 cores, and just before calling me, to 16 cores. Management told the tech team, “We’re going to 16 cores, but only for emergency purposes – you gotta get this database server back under control, and back down to 8 cores, max.”

2. “We’re playing Whack-a-Mole.” The company had migrated to Azure SQL DB Managed Instances, and ever since, every couple of weeks, they faced a performance emergency they’d never seen before. Management was tired of the surprises, and wanted to know if it was an MI problem or something else. (It was something else.)

3. “Our third-party ERP app performs terribly.” The entire company’s staff was grinding to a halt because salespeople couldn’t place orders, manufacturing processes were timing out, the shipping dock was slowing down, etc. The ERP vendor was blaming the SQL Server hardware and storage, but the company wasn’t so sure.

4. “We want to become proactive.” The small company & team had been growing for years. They’d started as a 3-person shop, and were now approaching 30 people. Whenever they’d had performance problems in the past, the original founder would kinda wing it, but now they wanted to become more practiced and polished. They wanted to assess the server and the team’s existing skills, then build a learning plan.

5. “We know your tools, but we’re hitting a weird wall.” The large company with a team of 3 full-time DBAs had been using the First Responder Kit scripts for years, been through all of my Mastering classes, and were able to handle most of their performance issues. However, they were stumped by an unusual recurring storm of poison waits that would strike at random days and times.

6. “The DBA’s gone, and we need a plan.” It never was really clear whether the DBA left on their own or was laid off by cost-cutting management, but either way, things had started going very wrong with the database server. Management needed a prioritized list of what to fix versus what could get by, and then assignments to various existing team members to divvy up the work.

7. “We need a data warehouse strategy.” The so-called data warehouse server had two dozen databases on it, and was getting hammered by half a dozen different teams who’d built a few dozen applications on it, including real-time OLTP. The DBAs knew things were bad, but the teams needed an independent outside opinion, in writing, that management could use to build a new strategy over the coming years to get the house in order.

8. “We disagree about whether our caching is working.” Management brought me in because the developers swore the app was using Redis for caching, but the DBAs swore the app was hitting the database constantly. (It turned out they were both right.)

9. “We’re getting ready to refresh everything.” Every 4-5 years, this company would build new SQL Servers. They wanted advice on a range of topics like whether it was time to use readable replicas, if they should try Fabric Mirroring, and whether they should invest developer time to move to a newer version of Entity Framework.

10. “We want fast training targeted at our skill level.” The company had about 20 developers and 1 full-time DBA. They knew they’d been shipping good-enough code for a decade, and they wanted to start leveling up. The DBA didn’t have the time or training material to bring the developers up to speed quickly. The company wanted a quick assessment of the code base, then a couple of days of training for the developers to make things better each time they touched existing code.

I love my job because every week is different. I get called into all kinds of companies to solve all kinds of problems, quickly. If you’d like my help, here’s my 2-day SQL Critical Care® process, and you can schedule a sales call from there. I look forward to working with you!