Blog

SQL ConstantCare® Population Report: Fall 2025

In this quarter’s update of our SQL ConstantCare® population report, which shows how quickly (or slowly) folks adopt new versions of SQL Server, the data looks very similar to last quarter’s:

  • SQL Server 2025: exactly 1 server being monitored, heh. God bless y’all for giving it a shot this early.
  • SQL Server 2022: 25%, up from 24% last quarter
  • SQL Server 2019: 43%, was 45%
  • SQL Server 2017: 10%, was 11%
  • SQL Server 2016: 13%, was 12%
  • SQL Server 2014: 5%, same as last quarter
  • SQL Server 2012 & prior: 1%, same
  • Azure SQL DB and Managed Instances: 2%, same

SQL Server 2022’s market share has remained pretty stable since its huge jump in Q1, when people replaced a lot of 2016 servers with 2022. With SQL Server 2025’s release approaching quickly, I’m not sure SQL Server 2022 will ever match the dominant streak that 2019 has been on – that thing’s been a monster! It’s held over 40% market share for two years straight now. Here’s how adoption is trending over time, with the most recent data at the right:

SQL Server Adoption Rates

With both Microsoft Ignite and the PASS Data Community Summit happening the week of Nov 17-21, I think it’s fair to say that SQL Server 2025’s release is probably imminent, so let’s start thinking about its upcoming adoption rate.

Unlike 2022, I expect SQL Server 2025 to catch on fast.

SQL Server 2022 faced a perfect storm that slowed adoption. 2022 just wasn’t ready in time for users who needed to plan their migrations off of the soon-to-be-unsupported SQL Server 2008, 2008R2, and 2012. The pandemic threw a monkey wrench in a lot of IT projects, and honestly, that included SQL Server 2022 itself. 2022 wasn’t feature-complete for a year, and the cumulative updates were a mess.

I think SQL Server 2025 faces nearly the opposite conditions, and I think it’ll catch on faster.

One reason is that SQL Server 2016 goes out of support in July, yet it still holds 13% of the market. Assuming that 2025 doesn’t suffer from the same rocky CU start that 2022 did, 2025 is primed to capture 2016’s market. SQL Server 2017 follows right behind, going out of support in October 2027, and that’s another 10% of the market. Surely people won’t be replacing 2017 with 2019 or 2022, not in the year 2026.

Another reason is that even though 2025 won’t be feature-complete at release, Microsoft has gotten smarter about that this time around by gating features behind a database-scoped option called PREVIEW_FEATURES. Because change event streaming (aka, low-overhead mirroring to Fabric) is one of those preview features, and because Microsoft looooves getting that sweet sweet Azure revenue, I bet features like that will move out of preview quickly. (I mean, I say that, but at the same time, on-prem AGs with Azure Managed Instance secondaries was a similar situation, and that one took ’em over a year with 2022, so… maybe?)
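
If you want to poke at those gated features in a dev environment, the switch is a database-scoped configuration. Here’s a minimal sketch – the PREVIEW_FEATURES option name comes from the paragraph above, and the database name is just a placeholder:

    -- Enable the preview-feature gate in a test database (database name is hypothetical).
    USE MyTestDatabase;
    GO
    ALTER DATABASE SCOPED CONFIGURATION SET PREVIEW_FEATURES = ON;
    GO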

Finally, I think developers want to call REST endpoints from T-SQL. I know, I know, the DBAs in the audience aren’t excited about that, and I totally understand the risk of dramatically slower transactions and increased blocking problems if external app servers (especially outside of the company network) are suddenly in the critical path of getting transactions completed. It’s like any other tool: it’s gonna have good and bad use cases, but at least I think developers will want to use it, unlike 2019’s ability to call Java from T-SQL.

That’s why I’m updating all of my training classes.

SQL Server 2019 was a big deal, with a lot of intelligent and adaptive query processing stuff. After that, though, 2022 was a yawner. I didn’t bother updating my training classes for it because there just wasn’t that much to share with you. Even if you adopted 2022 – and relatively few shops did – it didn’t make a radical difference in how you tuned indexes, query plans, or wait stats.

SQL Server 2025 is another story. These days, between SQL Server 2025, Azure SQL DB, Managed Instances, and AWS RDS SQL Server, most of the servers you’re tuning have adaptive and intelligent query plans, are monitored and tuned with Query Store, tracked with Extended Events, indexed with columnstore, and tuned with newer query hints. You’re gonna need new skills – Profiler and Perfmon counters ain’t gonna cut it – and I’m here to help.


The Version Store Won’t Clear If ANY Database Has Open Transactions.

TempDB
11 Comments

Short story: what the title says.

This is especially problematic for folks who merge multiple databases onto the same server. All it takes is one badly behaved application leaving its transactions open, and suddenly the rest of the databases can run TempDB out of space. That app’s transaction might not even be changing anything, and it might never have caused problems for that application before – but once it shares a TempDB with other apps, it causes cascading problems.

Long story: let’s start by creating a database and throwing some data in it:
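
(The demo script isn’t reproduced here, so here’s a minimal sketch of the idea – the RCSI setting, table name, and row counts are my assumptions, not the original code. The database name ShouldBeIsolated matches the one used later in the post.)

    -- Create a demo database with Read Committed Snapshot Isolation on,
    -- so updates generate row versions in TempDB's version store.
    CREATE DATABASE ShouldBeIsolated;
    GO
    ALTER DATABASE ShouldBeIsolated SET READ_COMMITTED_SNAPSHOT ON;
    GO
    USE ShouldBeIsolated;
    GO
    -- Throw some data in it:
    SELECT TOP (100000) o1.object_id, o1.name, o1.create_date
    INTO dbo.Stuff
    FROM sys.all_objects AS o1
    CROSS JOIN sys.all_objects AS o2;
    GO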

Check to see how much space is used in the version store right now, and it should be none:
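
One way to check is the TempDB file space DMV – this is my go-to query, not necessarily the exact one from the original post:

    -- Version store pages reserved in TempDB, converted to MB:
    SELECT SUM(version_store_reserved_page_count) * 8 / 1024.0 AS version_store_mb
    FROM tempdb.sys.dm_db_file_space_usage;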

Do a non-updating update, and check the version store again:
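
For example, assuming the demo table sketched above:

    -- A non-updating update: set the column to the same value it already has.
    UPDATE dbo.Stuff
      SET name = name;
    GO
    -- ...then re-run the version store query from above.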

Even though our update statement wasn’t in a transaction, and even though the query’s already finished, you’ll still see a copy of the table in the version store temporarily:

Version store in use

Wait 30 seconds or so, and then check the version store again. It’ll clean itself out automatically in the background. That’s how the world is supposed to work when everything’s going well.

However, imagine that you install a 3rd party vendor app on that same server, and it goes into its own database:
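
Here’s a sketch of that scenario – the database, table, and column names are hypothetical stand-ins for whatever the vendor app actually ships:

    -- The vendor app gets its own database...
    CREATE DATABASE BadlyBehaved;
    GO
    ALTER DATABASE BadlyBehaved SET READ_COMMITTED_SNAPSHOT ON;
    GO
    USE BadlyBehaved;
    GO
    CREATE TABLE dbo.VendorStuff (Id INT IDENTITY PRIMARY KEY, Notes VARCHAR(50));
    INSERT INTO dbo.VendorStuff (Notes) VALUES ('hello');
    GO
    -- ...and then behaves badly: it opens a transaction and never finishes it.
    BEGIN TRAN;
      UPDATE dbo.VendorStuff SET Notes = Notes;
      -- (no COMMIT, no ROLLBACK - the session just sits here)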

The badly behaved application left its transaction open, but what’s the big deal? It only affects the vendor app, not our own isolated app, right?

Badly behaved app using the version store

Here’s the problem: switch back over to our own supposedly isolated app, try the one-line update statement again, and check the version store space usage:

Version store still in use

You can wait all you want, but our version store size ain’t going down, even though our databases aren’t connected in any way, and even though my well-behaving app isn’t using transactions! Every update & delete happening in my ShouldBeIsolated app is going to go into the version store, and stay there, until my BadlyBehaved app commits or rolls back its transactions.

What if I turn on ADR in BadlyBehaved?

The database-level setting Accelerated Database Recovery moves the RCSI version store out of the shared TempDB, and into the user database itself. This enables faster rollbacks. We’ll roll back our transaction in BadlyBehaved, wait for everybody’s version store to clear out of TempDB, then turn on ADR in BadlyBehaved:
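
A minimal sketch of those steps, using the demo database names from above:

    -- In the BadlyBehaved session: let go of the open transaction first.
    ROLLBACK;
    GO
    -- Give the version store cleanup a little time, then enable ADR:
    ALTER DATABASE BadlyBehaved SET ACCELERATED_DATABASE_RECOVERY = ON;
    GO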

Now, our BadlyBehaved app isn’t using any TempDB space:

BadlyBehaved using ADR

And if we switch back over to our isolated app database, and run our update statement, and check the TempDB version store usage:

Version store growing

Yes, the table initially goes into the version store – it has to, when you’re using RCSI – but then wait 30 seconds and check again, and:

Still there

Bad news. ADR doesn’t help this scenario. Again, if any database has open transactions, even if that database has ADR enabled, the version store won’t clear itself out.

Note: this post was inspired by this 2008 MSDN blog post by Sunil Agarwal and this now-removed post (slow archive at the Wayback Machine) by an unknown author at Kohera.be. Kohera kept moving their post URL around, and then finally deleted their post, and MSDN certainly has a habit of doing that, so I figured I should just stick a post on here and be done with it because I mention this problem during my Fundamentals of TempDB class.


[Video] Office Hours in My Backyard

Videos
2 Comments

Let’s hang out in the backyard – as recently seen on Zillow Gone Wild – and take your top-voted questions from https://PollGab.com/room/brento.

Here’s what we covered:

  • 01:09 AussieDBA: I was saddened to hear of the passing of Andrew Clarke, who surely had the best pseudonym in the SQL industry, Phil Factor. His articles were some of the first I read online, and I was impressed with his complete and clear explanations. Can you share your fondest memory of him?
  • 03:06 Elijah: Parallelism waits are the most common wait type on our server. My Cost Threshold for Parallelism is set to 50 and MAXDOP is set to 8. I want to gradually lower MAXDOP to reduce waits. How can I convince our DBA who is very hesitant to tune server-wide settings and prefers using Resource Governor?
  • 04:27 DMExecCoffeeStats: Back in February, you said something to the effect of “you shouldn’t use schemas to namespace tables.” If I understood you correctly, is there a reason to prod my coworkers into undoing this practice, or should I just leave it be?
  • 06:04 WhoMovedMyCheese: We’re evaluating a couple of different monitoring systems. Do you have any preference or experience?
  • 06:48 AubreyPlaza: My Azure SQL DBs are constantly having a nervous breakdown from constant INSERT/UPDATE spam. I think you mentioned Stack Overflow batching stuff instead of being chaos gremlins. Which course of yours teaches you how to do batching?
  • 07:38 Elwood: Any tips for finding fast and reliable internet when you’re traveling, like overseas or cruise ships?
  • 08:53 Daniel: In your post about max memory settings, you casually mention that you set this up automatically on your lab servers as part of Agent startup. Are there any other settings you use in this context, and would you recommend this in production environments as well?
  • 10:23 Dopinder: Do you recommend any AI courses that are complimentary for a SQL DBA?
  • 11:27 NotCloseEnoughToRetirementToStopLearning: Has setting a persisted sampling rate for a statistic ever gotten you across the finish line? If so, please share the problem that it solved and how you determined the right sampling rate.
  • 12:18 MyTeaGotCold: Have you played around with SQL Server 2025’s new Optimized Locking yet? Any thoughts?
  • 15:08 DrawingABlank: What happens when a memory-optimized table runs out of memory?
  • 15:53 It Could Be a Boat: If you were going to start your career all over again, would you pursue expertise in Postgres as opposed to SQL Server?
  • 16:47 FrugalShaun: I’ve just finished my first Azure SQL DB consulting job. Tools like sp_Blitzcache were useful but I found the experience painful. Simple things like tracing queries was much more difficult than it should have been and the docs suck! How do you cope or do you just avoid?
  • 17:53 Benjamin: I just started a new job as Jr. DBA. I was shocked to discover on my second day that there are several stored procedures on their SQL Server that use linked servers to import data from oracle databases that they don’t own. What alternatives can/should I offer to leadership?
  • 19:09 Dopinder: Does high VLF count (4,160) ever matter for TempDB? It’s on azure vm ephemeral drive.
  • 20:09 T-Rex Come Back: Ever seen a contained database out in the wild? Not the AG thing. Seems like a dead feature.
  • 20:35 UpdateStatsFixesAll: Hi Brent, I have a query plan which is showing a “Columns With No Statistics” warning. I can see a statistic for this column below that table in Object Explorer and when I update that statistic the warning in the plan disappears. Any idea why the statistic is being ignored?
  • 21:31 Big Blue Couch: Hey Brent, Do you have any thoughts on MS use of CDC to mirror on prem DBs to Fabric in SQL Server versions prior to 2025?
  • 23:05 It Could Be a Boat: Hey Brent, do you have any advice on working for a micro-manager or control freak?
  • 25:01 DBA_Unsupported: My new employer is small shop with a lot of ”tech debt”. Mostly SQL2008 & 2012 instances. They won’t upgrade anything because the environment is stable. Any recommendations for the smoothest sailing?
  • 27:13 Dopinder: Is asking programming questions to AI just as helpful as reading the documentation?

Query Plan Pop Quiz Answers 2 and 3: I’ve Got Good News and Bad News.

Execution Plans
5 Comments

In the Query Plan Pop Quiz, questions 2 and 3 asked you about what the sizes of arrows on query plans meant. The good news is that almost all of you got Question 2 right, but the bad news is that the vast majority of you got Question 3 completely incorrect, and the saddest part of that is that you’ve been using that inaccurate knowledge to guide your query tuning – and wasting your time.

Question 2: on an estimated plan, what does the thickness of the colored arrow represent?

Estimated plan

If you hover your mouse over that arrow, the tooltip offers numbers for estimated number of rows and estimated total data size:

Estimated tooltip

I don’t really care what the “right” answer is out of any of that, but the general idea is that the arrow’s thickness is based on the amount of data making that arrow-shaped journey. The thickness might be based on either the row count or the estimated total size; I’ve never cared to dig deeper because it hasn’t mattered for me. I bet someone bright in the comments will have a repro script showing the accurate answer.

However, here’s where it goes haywire…

But here’s the part you got wrong:
what about on actual plans?

Question 3: on an actual plan, what does the thickness of the colored arrow represent?

Actual plan

I guarantee that some of you said, “It’s the number of actual rows that made that arrow-shaped journey, the number of rows or data size that came out of that operator.” However, look closer: 0 rows came out of that operator, as evidenced by the “0 of 739714” number shown above.

And then as you read that stuff I just typed, I guarantee you, I GUARANTEE YOU, that some of you said, “Oh, I see, so it must mean the estimated rows or data quantity that was supposed to come out.”

And that would be a very fair assumption, given the first answer’s point that query plan costs are always just estimates, even when we’re looking at actual plans. However, if that were true, then the other arrows you see on the screen above would be large, too, because they had large estimates, as shown higher up in this very blog post where this plan’s estimates appear. But those arrows are tiny – so what the hell do the arrow sizes represent?

Here’s where it gets ridiculous and surreal.

On an actual plan, that arrow size represents the number of rows read by the previous operator, as hinted by the tooltip when you hover over the arrow:

Actual number of rows read

The clustered index scan here read 2.5M rows, yet produced 0. The thickness of the arrow is supposed to warn you about the exact operator it’s pointing away from, saying it did too much work while producing too few results.

You can’t make this stuff up. This is what you’re up against as a performance tuner: query plans that actively mislead you about what you should be paying attention to. That’s why it’s so important for you to get good performance tuning training before you waste more hours of your short life trying to fix things that aren’t even a problem, and ignoring things that really are.

I’m not saying you should buy my training classes – and in fact, you shouldn’t! At least, not right now, because my annual Black Friday sales are about to start, dear reader, and I want to take as little of your money as possible. Unless your boss is paying – in which case, tell them to go grab you a Fundamentals and Mastering Lifetime Bundle for $2,495 right now because you’re worth it. (Definitely tell them the lifetime one, because if you’re the kind of person who asks them to pay extra, then you’re probably the kind of person who will take that training with you when you leave that job. I’m not saying. I’m just saying.)


TempDB Filling Up? Try Resource Governor.

TempDB
5 Comments

TempDB is one of the banes of my existence.

Anybody, anybody who can query your server can run a denial-of-service attack in a matter of seconds just by filling it up with a simple query:
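
Here’s a sketch of the kind of query I mean – not necessarily the exact one, but the shape of it:

    -- Don't run this anywhere you care about. Each pass stuffs another chunk
    -- of rows into a temp table; eventually an insert fails, but the temp table
    -- (and all the space it grabbed) hangs around as long as the session stays open.
    CREATE TABLE #Fill (Filler NVARCHAR(4000));

    WHILE 1 = 1
    BEGIN
        INSERT INTO #Fill (Filler)
        SELECT TOP (100000) REPLICATE(N'x', 4000)
        FROM sys.messages;
    END;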

This while loop will gradually fill up TempDB, and when one of the inserts eventually fails, that’s fine (from the attacker’s perspective) because the session stays open. It’s still holding the rest of the space, preventing other folks (and other system tasks) from using it.

You definitely shouldn’t run that on your last day of work, right before you walk out the door, because even though they’ll disable your login, your existing already-running queries will continue until they finish, or in this query’s case, until it pulls a finishing move on your storage, and your TempDB files expand like a pair of Sansabelt pants trying to handle an all-you-can-eat buffet. And you definitely shouldn’t run it in a loop. (If you do, make sure to drop the table if it exists at the start.)

If you’re not on SQL Server 2025 yet, your main line of defense is to pre-size your TempDB data files ahead of time to whatever max size you want them to be, and then turn off auto-growth. That’s… not much of a defense. Badly behaved queries can still run the entire TempDB out of space, causing problems for other users.
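
For reference, that pre-sizing looks something like this – the logical file name and size are my assumptions, and you’d repeat it for each TempDB data file:

    -- Fix the file at its maximum size and turn off auto-growth:
    ALTER DATABASE tempdb
      MODIFY FILE (NAME = tempdev, SIZE = 10240MB, FILEGROWTH = 0);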

On SQL Server 2025,
use Resource Governor.

We’ve finally got a way to defend ourselves. We can configure Resource Governor to divide people into groups (something that seems to be trendy lately), and then cap how much TempDB space each group can consume. You don’t even have to divide them into groups, either (take note, politicians) – you can just cap how much space everyone can use altogether. This even works on SQL Server 2025 Standard Edition because Microsoft made that feature available to everybody in that version.

To keep things simple for the sake of this blog post, let’s just assume we’re limiting everyone’s usage altogether. You can either set a fixed-size cap:

Or a percentage of TempDB’s overall size:

Strangely, you can configure both of those at the same time – more on that in a second. Run this query to see what the configured limits are, how much space they’re using right now, what their peak space usage was, and how many times their queries got killed due to hitting the TempDB space limits:

The results:

Resource Governor capping TempDB usage

And what it looks like to your end users when they fill up their quota:

Query stopped by Resource Governor

Instead of causing a system-wide problem when there’s no space left at all in TempDB, now it’s just a… well, I wanted to finish this sentence by typing “just a query-level problem,” but that’s not entirely true. The query’s still holding all of the TempDB space available to the entire default workload pool, and that’s not gonna cut it. To do it right, we have to deal with a lot more gotchas than I can cover in one blog post.

The gotchas (and there are many)

The trickiest gotcha is that the limits only take effect if your TempDB file autogrowth configuration matches your Resource Governor limitations. The documentation on this is a little wordy, but the short story is that if you only cap by percent (not by exact MB size), then the percent limitation only takes effect when either:

  • Auto-growth is turned OFF for ALL TempDB data files, and max size is unlimited, or
  • Auto-growth is turned ON for ALL TempDB data files, and max file size is set

Autogrowth disabled

It has to be all or nothing. At least Resource Governor warns you if you try to enable RG when TempDB’s file configuration won’t support the caps, but if you later modify TempDB’s file configuration, there’s no warning to tell you that your change just broke Resource Governor.

I don’t really understand why they did this because if you turn off auto-growth, it doesn’t matter what the max file size is. Growth is done, finito, the end. You can’t even set max file size once you’ve turned off autogrowth because it’s irrelevant, as shown here in SSMS.

Similarly, if auto-growth is turned on, I don’t wanna have to set a max file size: I want the OS to grow the files to whatever space is available at that time. I understand why calculating limits is hard in that scenario, though, because the query processor has to calculate the limit and enforce it before the engine tries (and fails) to grow a data file out.

These rules feel like someone did the best coding job they could, with limited resources, trying not to break other pieces of the engine’s code, in order to get this feature out the door – and I’m fine with that. It’s a good compromise, but it does require you to be aware of these limitations; otherwise you’re gonna think the feature’s turned on when it’s not. (That’s exactly what happened to me repeatedly during testing – I didn’t understand why the percentage limitations weren’t being enforced, hence this new check in sp_Blitz.)

Another gotcha is that for this to really help, you need to configure Resource Governor in a way that breaks queries into different workload groups. If everybody’s still lumped into the default workload group, then any one query can still run everybody else out of space.

Finally, for the limits to not bring down SQL Server itself, you have to be aware of the resource utilization of other TempDB consumers like the version store and triggers. That’s way outside of the scope of this blog post, but to learn more about that, check out my Fundamentals of TempDB class – which has been recently updated with a new module on this exact topic.


Query Plans Pop Quiz Answer #1: Costs are Garbage.

Execution Plans
4 Comments

In last week’s Query Plans Pop Quiz, the first question was, someone hands you these two queries and you get their estimated plans to decide which query to tune. Perhaps you get the estimated plans from SSMS, or from sp_BlitzCache, or from your monitoring tool.

Pick the Plan

The question was, which query should you focus on tuning? A disturbing number of comments pointed at the top query, saying that it had 100% of the cost, and that its sort was 99% of that giant cost. Wrong-o. Not even close. Do not pass go, do not collect $200.

The right answer: ignore
“query cost relative to the batch.”

Estimated query plans are like a project manager’s guess of what’s going to happen when we execute the project. It’s all candy and unicorns through rose-colored glasses.

To show you what I mean, let’s rerun these queries and get their actual plans instead of their estimates:

Actual plan overall

Focus on the times closest to the SELECT operators:

ENHANCE!
ENHANCE!

On some queries, those times can be deceiving, but not these particular queries. The first query finished in less than 3 seconds, but the second query took over a minute to execute. Armed with that time information, which query do you wanna tune? The answer’s clearly the second query, right?

Yet look at their costs relative to the batch again, which is probably the part you focused on when thinking about which query to tune:

Costs relative to batch

Query 1’s sort of 99% cost? It didn’t even sort a single row! Literally no work was done in that step whatsoever. The 100% query cost was meaningless, as was the 99% sort cost. Totally and utterly meaningless.

Even on actual plans,
those costs are meaningless garbage.

The “Query cost (relative to the batch)” numbers are junk. Seriously, set them on fire, then burn their ashes. They’re worthless, and they distract you from the real problems. I wish Microsoft wouldn’t even show them on actual plans because they mislead people so badly.

Imagine going to a project manager halfway through a project and asking them, “Hey, why’s the project running late and over budget?”

And imagine if the project manager turned to their initial plans for the project, looked them over, and said happily, “Well, according to my estimates, we’re right on track!”

That’s what “Query cost (relative to the batch)” is – it’s all based on estimates made before the query even started. When those estimates go wrong and queries perform poorly, the project manager is still there giving an all-thumbs-up, happily reporting their original status goals.

The costs are the problem,
not the estimated plan.

If you’re just getting started query tuning, your best bet by far is to focus on the longest-running queries, and then grab my Fundamentals classes during my upcoming Black Friday sale to start leveling up. The more advanced folks in the audience, like the ones who’ve conquered my Mastering Query Tuning class, will be able to spot the real problem areas in plans, whether the plans are estimated or actual.

But everybody in the audience – junior or senior – should simply ignore the “Query cost (relative to the batch)” numbers lest they waste time tuning queries that aren’t even a problem to begin with.

Stay tuned for next week, when we’ll discuss questions 2 & 3 from the quiz – it’s a mixture of good news and bad news. A lot of you got question 2 right, but a disturbing number of y’all failed question 3.


SSMS v22 Query Hint Recommendation Tool: The Invasion of the Query Hints

SQL Server Management Studio 22 Preview 3 is out, and it brings with it a new Query Hint Recommendation tool. Start by highlighting the query you want to test, then click Tools, Query Hint Recommendation Tool. It slides out a new pane on the right hand side:

Query Hint Recommendation tool

The maximum tuning time defaults to 300 seconds, but I tacked on a couple zeroes because my slow query already took ~20 seconds to run on its own, and I wanted to give the wizard time to wave his little wand around. The tool actually runs your query repeatedly with different hints, so if you have a 5-minute query, you’ll need to give the tool more time.

Click Start, and it begins running your query with different hints. A couple minutes later, I got:

Query Hint Recommendation tool finished

Neato – it skips hints that it doesn’t expect an improvement on. The winner was the below combination of hints for a supposed 78% improvement:

To integrate them into your code, highlight your query in the SSMS window, then right-click on the relevant hint, and click Append Hint to Query:

Append hint to query

And, uh, it adds it after the semicolon:

disappoint

<sigh> Okay, whatever, it’s a preview, and it’s worth a little pain because the resulting query really is dramatically faster. I ran the un-hinted and hinted versions of the query five times each to let 2025’s adaptive query planning do its thing, and still the hinted query comes out on top. It’s literally on top in the below screenshot:

Hinted on top, unhinted on the bottom

The hinted query runs in about 9 seconds (down from 25), despite spilling to disk and doing more CPU work (28 seconds as opposed to 25.) That’s fantastic!

Given the choice between these two options:

  • People rewriting queries in ways that make the query less intuitive to read, yet possibly more performant, or
  • People slapping a few hints at the end of the query

I would much rather take the latter. The latter can be backed out easily, whereas the former cannot. Yes, we’d all prefer that all of the staff attend my index & query tuning fundamentals courses (especially since it’s just $1995 for unlimited seats), because trained people would notice why the hinted version is really faster: it went parallel, so it can chew through 28 seconds of CPU work in just 9 seconds of wall-clock time, while the unhinted query stayed serial. So if I simply try adding this hint to the original query:
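
The hint itself isn’t reproduced here, so this is my assumption about the kind of hint involved – the documented ENABLE_PARALLEL_PLAN_PREFERENCE hint, shown appended to a stand-in query rather than the original one:

    -- Encourage a parallel plan (hint choice is my assumption, and the
    -- query below is just a stand-in, not the one from the post):
    SELECT COUNT(*)
    FROM sys.all_objects AS o1
    CROSS JOIN sys.all_objects AS o2
    OPTION (USE HINT ('ENABLE_PARALLEL_PLAN_PREFERENCE'));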

Then the query runs in just seven seconds – faster – while retaining its intelligent query plan options like adaptive joins. That’s a better answer – but, like I said, doing the better thing takes time and training. The reality is that a lot of companies out there don’t have the time or money to train their staff. They lean on free tools, and here Microsoft is giving us a free tool that can make big real-world improvements. It’s still up to people to use the tool correctly and appropriately.

I actually love this feature. It empowers more people to do more query experiments, more quickly – without really breaking the original query. You can always remove the hints, either manually or with Query Store, so it’s not that brittle or dangerous. There’s a risk to hints, of course: they can have dramatically different results on different hardware and versions. Some people are gonna run this tool on non-production, underpowered servers, and they’re gonna be elated with a particular combination of hints that makes their pig fly, but the hints will have terrible effects in production. That’s not the fault of the tool – that’s the fault of the tool running it.

And hey, at least it beats Copilot.

GitHub Copilot also arrives in SSMS v22 Preview 3. I don’t want you to think I’m anti-AI: I am very, very pro-AI. I use Anthropic Claude, Google Gemini, and OpenAI’s ChatGPT on a daily basis, plus locally hosted models. LLMs are a tool – and a heavily VC-subsidized one at the moment – and the right tools in the right hands can help get better work done more quickly. Sure, like any tool, LLMs have drawbacks, and you have to be careful not to saw your thumb off.

So I asked GitHub Copilot for help, and its top performance improvement suggestion was to check for indexes.

Indexes that already exist, as illustrated in the query plan in the screenshot, plus in Object Explorer at left:

Copilot, you donut

<sigh> Whatever. It’s in preview, and as long as venture capitalists keep lighting money on fire to train more models, I hope that it becomes smarter and more efficient, because the idea of it is pretty cool.


Announcing Free MASTERING Week 2025!

Company News
7 Comments

You’ve been working with SQL Server, Azure SQL DB, or Amazon RDS SQL Server for years.

You’re jaded. You’re confident. You’re pretty sure you know what you’re doing.

Mastering Week

You’ve never taken my Mastering classes because you’ve read the blog, watched the live streams, and figured you’ve pieced it all together. You can’t imagine there’s anything left to learn — no surprises left in the box.

Well, now’s your chance to find out, for free.

From November 11-14, I’m running a brand-new special event: Mastering Week, four half-day classes, totally free to attend live.

  • Tuesday: Mastering Index Tuning
  • Wednesday: Mastering Query Tuning
  • Thursday: Mastering Parameter Sniffing
  • Friday: Mastering Server Tuning

Register here to grab your seat and download the calendar invites before your coworkers try to book you into yet another “quick sync.” At showtime, head to the live stream link in the invite — that’s where the magic happens.

Can’t make it live? The recordings won’t be on YouTube or free later. You’ll need my Recorded Class Season Pass: Mastering, which includes all four full-length classes — available for one-year or lifetime access.

In just four hours, you’ll know whether you’ve really mastered Microsoft databases… or if there are still a few tricks this old dog can teach you.

Let’s hang out and talk data. Bring your curiosity (and maybe your ego!)


[Video] Office Hours: Ask Me Anything About Microsoft Databases

Videos
8 Comments

I’m back at home in Vegas, sitting on a hillside enjoying the fall desert weather and taking your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 03:10 Bruce: Howdy! When, if ever, would you recommend implementing a SQL Server Central Management Server?
  • 04:09 Impostor Syndrome: Are multi-TB databases really all that rare? I have several that are over 10 TB, but I don’t think of myself as being in a top 1% company.
  • 05:23 LikeHeardingCats: Hey Brent. You’ve mentioned in the past, I believe, that you deal with Imposter Syndrome. Being the SQL Guru that you are, do you still have doubts when it comes to SQL Server? If so, how do you manage those thoughts?
  • 07:29 Stefano: Hi Brent, I make extensive use of temporary tables (#table) for complex reporting or data extraction queries, particularly for preprocessing data from linked servers (yes, they exist…). My tempdb is fast and capacious. Are there any drawbacks to this approach?
  • 10:31 Dopinder: What’s your opinion of the various AI engines ability to optimize TSQL queries and sprocs? Which one is best?
  • 12:02 Culloden: Hey Brent, Have you participated in any recent DBA interviews for your clients? If so, have you noticed any trends in what skillset employers are seeking?
  • 13:45 Parameter Sensitive Boyfriend: Terminology question: if I have a query with OPTION(RECOMPILE) that has massive variation in performance based on what arguments are passed in, is it considered parameter sensitive?
  • 14:47 MyTeaGotCold: People tell me that 32 GB of RAM is enough for a high-end gaming PC. I’m looking to build one, but my DBA instincts say that anything less than 100 GB is unacceptable. How do you resolve this conflict in your machines?

Updated First Responder Kit and Consultant Toolkit for October 2025

First Responder Kit Updates
0 Comments

This quarter’s release includes new checks for SQL Server 2025’s new memory pressure warnings, Azure SQL DB’s operations in progress, warnings about not using partitioned statistics where appropriate, plus bug fixes.

How I Use the First Responder Kit
Wanna watch me use it? Take the free class.

To get the new version:

Consultant Toolkit Changes

Updated to this quarter’s First Responder Kit, but no changes to the spreadsheet template. This release adds behind-the-scenes code to export to JSON, and then import that data into a database, so you can keep a centralized database of diagnostic data from all of your clients’ servers for easier analysis. If you’re interested in testing that, email me at help@brentozar.com with a short description of your use case.

sp_Blitz Changes

  • Enhancement: warn if the server is under memory pressure using SQL Server 2025’s new sys.dm_os_memory_health_history. (#3690)
  • Enhancement: reduced false warnings for Linux installations. (#3702, thanks bmercernccer and Tom Willwerth.)
  • Enhancement: add @UsualOwnerofJobs parameter so you can warn about logins other than SA if that’s your thing. (#3688, thanks James Davis.)
  • Fix: servers with German languages could get an error about a subquery returning more than one value. (#3673, thanks Dirk Hondong)
  • Fix: skip checks for Managed Instances. (#3685, thanks Klaas.)

sp_BlitzCache Changes

  • Fix: implicit conversions would have problems if the parameters involved were really long. (#3681, thanks DForck42.)

sp_BlitzFirst Changes

  • Enhancement: warn if the server is under memory pressure using SQL Server 2025’s new sys.dm_os_memory_health_history. (#3692, #3703, thanks Eilandor.)
  • Enhancement: warn about ongoing Azure operations like database restores, creations, deletions, setting up geo-replication, changing performance levels or service tiers, etc using sys.dm_operation_status. (#3708)
  • Enhancement: when a database is being restored to a new name, show the name. (#3695)

sp_BlitzIndex Changes

  • Enhancement: when @SkipStatistics = 0, warn if partitioned tables don’t have incremental statistics. (#3699, thanks Reece Goding.)
  • Enhancement: add warning for persisted sampling rates. (#3679, thanks Reece Goding.)
  • Fix: on a few warnings, instead of filtering out newly created indexes, we were only looking at the new ones, hahaha. (#3705, thanks Bruce Wilson.)
  • Fix: possible arithmetic overflow. (#3701, thanks Reece Goding.)

sp_BlitzLock Changes

  • Fix: unusual situations might have left folks unable to create permanent output tables. (#3666 and #3667, thanks Filip Rodik.)

sp_BlitzWho Changes

  • Enhancement: get live query plans by default on SQL Server 2022 & newer. (#3694)

sp_ineachdb Changes

  • Fix: change @is_query_store_on default to null. (#3668, thanks kmorris222.)

Watch Me Working On It

In this live stream, I handle a few pull requests, then add support for SQL Server 2025’s new sys.dm_os_memory_health_history:

The work continues in this video:

For Support

When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. Be patient: it’s staffed by volunteers with day jobs. If it’s your first time in the community Slack, get started here.

When you find a bug or want something changed, read the contributing.md file.

When you have a question about what the scripts found, first make sure you read the “More Details” URL for any warning you find. We put a lot of work into documentation, and we wouldn’t want someone to yell at you to go read the fine manual. After that, when you’ve still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes me!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you’re working with.


Query Plans Pop Quiz: Three Simple Questions

Execution Plans
28 Comments

Question 1: Pick the Problematic Plan: someone hands you a pair of queries, and you get the estimated query plans. (Perhaps you get the estimated plans from SSMS, or from sp_BlitzCache, or from your monitoring tool.) Which one of these two should you focus on tuning first, Query 1 or Query 2?

Pick the Plan


Question 2: on an estimated plan, what does the thickness of the colored arrow represent?

Estimated plan


Question 3: on an actual plan, what does the thickness of the colored arrow represent?

Actual plan

Read the page carefully, come up with your 3 answers, and then start reading the right answers.


Free Fundamentals Classes Are Coming Next Week! Register Now.

Conferences and Classes
5 Comments

You’re a developer, analyst, database administrator, or anybody else working with SQL Server, Azure SQL DB, or Amazon RDS SQL Server. You want to learn how to make your databases go faster.

Good news! Next week, I’m teaching totally free half-day versions of my classes from 9AM-1PM Eastern time, 8AM-Noon Central, 6AM-10AM Pacific:

  • Monday: Fundamentals of Index Tuning
  • Tuesday: Fundamentals of Query Tuning
  • Wednesday: Fundamentals of Columnstore
  • Thursday: Fundamentals of TempDB

These half-day classes are totally free, no strings attached, and they’re a great way of getting started on your formal database education journey, or catching up on things that you’re a little too cocky to admit you didn’t know.

To attend, register here, then grab the calendar invites to block out your coworkers from trying to schedule you for meetings, hahaha. At the time of the class, head here for the live stream. (That URL is in the calendar invites too.)

If you can’t make the live classes, the recordings won’t be on YouTube or free – you’ll need to buy my Recorded Class Season Pass: Fundamentals bundle, which includes the full day-long versions of each of those classes, PLUS additional courses on Azure networking, PowerShell, parameter sniffing, and more. You can either buy one year of access, or lifetime access.

Be there or be square – see you in class!


Who’s Hiring Microsoft Data People? October 2025 Edition

Who's Hiring
10 Comments

Is your company hiring for a database position as of October 2025? Do you wanna work with the kinds of people who read this blog? Let’s make a love connection.

Yes, you.
I think YOU should apply.

If your company is hiring, leave a comment. The rules:

  • Your comment must include the job title, and either a link to the full job description, or the text of it. It doesn’t have to be a SQL Server DBA job, but it does have to be related to databases. (We get a pretty broad readership here – it can be any database.)
  • An email address to send resumes to, or a link to the application process. If I were you, I’d include an email address, because you may want to know which applicants are readers here – they might be more qualified than the applicants you regularly get.
  • Please state the location and include REMOTE and/or VISA when that sort of candidate is welcome. When remote work is not an option, include ONSITE.
  • Please only post if you personally are part of the hiring company—no recruiting firms or job boards. Only one post per company. If it isn’t a household name, please explain what your company does.
  • Commenters: please don’t reply to job posts to complain about something. It’s off topic here.
  • Readers: please only email if you are personally interested in the job.

If your comment isn’t relevant or smells fishy, I’ll delete it. If you have questions about why your comment got deleted, or how to maximize the effectiveness of your comment, contact me.

Each month, I publish a new post in the Who’s Hiring category here so y’all can get the latest opportunities.


Which Should You Use: VARCHAR or NVARCHAR?

Development
29 Comments

You’re building a new table or adding a column, and you wanna know which datatype to use: VARCHAR or NVARCHAR?

If you need to store Unicode data, the choice is made for you: NVARCHAR says it’s gonna be me.

But if you’re not sure, maybe you think, “I should use VARCHAR because it takes half the storage space.” I know I certainly felt that way, but a ton of commenters called me out on it when I posted an Office Hours answer about how I default to VARCHAR. One developer after another told me I was wrong, and that in 2025, it’s time to default to NVARCHAR instead. Let’s run an experiment!

To find out, let’s take the big 2024 Stack Overflow database and create two copies of the Users table. I’m using the Users table here to keep the demo short and sweet because I ain’t got all day to be loading gigabytes of data (and reloading, as you’ll see momentarily.) We’re just going to focus on the string columns, so we’ll create one with VARCHARs and one with NVARCHARs. Then, to keep things simple, we’ll only load the data that’s purely VARCHAR (because some wackos may have put some fancypants Unicode data in their AboutMe.)
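
Here’s a rough sketch of the setup – the column list is trimmed for brevity, and the table and index names are mine, not necessarily the ones from the original demo:

    -- Two copies of the Users table: one with VARCHAR strings, one with NVARCHAR.
    CREATE TABLE dbo.Users_VARCHAR
      (Id INT PRIMARY KEY CLUSTERED,
       DisplayName VARCHAR(40), Location VARCHAR(100),
       WebsiteUrl VARCHAR(200), AboutMe VARCHAR(MAX));

    CREATE TABLE dbo.Users_NVARCHAR
      (Id INT PRIMARY KEY CLUSTERED,
       DisplayName NVARCHAR(40), Location NVARCHAR(100),
       WebsiteUrl NVARCHAR(200), AboutMe NVARCHAR(MAX));

    -- Only load rows whose strings survive a round trip to VARCHAR,
    -- so both tables hold identical data:
    INSERT INTO dbo.Users_VARCHAR (Id, DisplayName, Location, WebsiteUrl, AboutMe)
    SELECT Id, DisplayName, Location, WebsiteUrl, AboutMe
    FROM dbo.Users
    WHERE AboutMe IS NULL
       OR AboutMe = CAST(CAST(AboutMe AS VARCHAR(MAX)) AS NVARCHAR(MAX));

    INSERT INTO dbo.Users_NVARCHAR (Id, DisplayName, Location, WebsiteUrl, AboutMe)
    SELECT Id, DisplayName, Location, WebsiteUrl, AboutMe
    FROM dbo.Users_VARCHAR;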

Let’s compare their sizes with sp_BlitzIndex @Mode = 2, which lists all the objects in the database. (I’ve dropped everything else because YOLO.)

Table sizes compared

The NVARCHAR version of the table is bigger. You might have heard that it’d be twice as big – well, that’s not exactly true, because the rows themselves have some overhead, and some of the string values are null.

The difference shows up in indexes, too. Let’s create indexes on the DisplayName column:
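
Something like this, with index names of my choosing:

    CREATE INDEX IX_DisplayName ON dbo.Users_VARCHAR (DisplayName);
    CREATE INDEX IX_DisplayName ON dbo.Users_NVARCHAR (DisplayName);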

Table & index sizes before compression

The NVARCHAR version of the index is 629MB, and the VARCHAR version is 439MB. That’s a pretty big difference.

I used to hate it when people said, “Who cares? Disk is cheap.”

The first problem with that statement is that up in the cloud, disk ain’t cheap.

Second, memory ain’t cheap either – again, especially up in the cloud. These object sizes affect memory because the same 8KB pages on disk are the same ones we cache up in memory. The larger our objects are, the less effective our cache is – we can cache fewer rows’ worth of data.

Finally, whether the data’s in memory or on disk, the more of it we have, the longer our scans will take – because we have to scan more 8KB pages. If you’re doing an index seek and only reading a handful of rows, this doesn’t really matter, but the more data your query needs to read, the uglier this gets. Reporting queries that do index scans will feel the pain here.

But then I started using compression.

If you’re willing to spend a little more CPU time on your writes & reads, data compression can cut the number of pages for an object. Let’s rebuild our objects with row compression alone to reduce the size required by each datatype. (I’m not using page compression here because that introduces a different factor in the discussion, row similarity.)
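
A minimal sketch of that rebuild, using the object names from the setup above:

    -- Rebuild the clustered and nonclustered indexes with row compression:
    ALTER INDEX ALL ON dbo.Users_VARCHAR  REBUILD WITH (DATA_COMPRESSION = ROW);
    ALTER INDEX ALL ON dbo.Users_NVARCHAR REBUILD WITH (DATA_COMPRESSION = ROW);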

The size results:

After row compression

Now, the object sizes are pretty much neck and neck! The difference is less than 5%. So for me, if I’m creating a new object and there’s even the slightest chance that we’re going to need to store Unicode data, I’m using NVARCHAR datatypes, and if space is a concern, I’m enabling row compression.

What about the new UTF-8 collation?

SQL Server 2019 introduced UTF-8 support to allow VARCHAR columns to store more stuff. At the time it came out, character guru Solomon Rutzky said it was tearin’ up his heart, concluding:

While interesting, the new UTF-8 Collations only truly solve a rather narrow problem, and are currently too buggy to use with confidence, especially as a database’s default Collation. These encodings really only make sense to use with NVARCHAR(MAX) data, if the values are mostly ASCII characters, and especially if the values are stored off-row. Otherwise, you are better off using Data Compression, or possibly Clustered Columnstore Indexes. Unfortunately, this feature provides much more benefit to marketing than it does to users.

Yowza. Well, we call him a character guru for more reasons than one. At the time, I wrote off UTF-8, but let’s revisit it today in SQL Server 2025 to see if it makes sense. We’ll create a table with it, and an index, and compress them:
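
Here’s a sketch of that setup – the UTF-8 collation shown is the standard _UTF8 variant, and the table and index names are mine:

    -- VARCHAR columns using a UTF-8 collation:
    CREATE TABLE dbo.Users_UTF8
      (Id INT PRIMARY KEY CLUSTERED,
       DisplayName VARCHAR(40)  COLLATE Latin1_General_100_CI_AS_SC_UTF8,
       Location    VARCHAR(100) COLLATE Latin1_General_100_CI_AS_SC_UTF8,
       WebsiteUrl  VARCHAR(200) COLLATE Latin1_General_100_CI_AS_SC_UTF8,
       AboutMe     VARCHAR(MAX) COLLATE Latin1_General_100_CI_AS_SC_UTF8);

    INSERT INTO dbo.Users_UTF8 (Id, DisplayName, Location, WebsiteUrl, AboutMe)
    SELECT Id, DisplayName, Location, WebsiteUrl, AboutMe
    FROM dbo.Users_VARCHAR;

    CREATE INDEX IX_DisplayName ON dbo.Users_UTF8 (DisplayName);

    ALTER INDEX ALL ON dbo.Users_UTF8 REBUILD WITH (DATA_COMPRESSION = ROW);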

The resulting sizes:

UTF-8 sizes with compression

Sure, the UTF-8 versions are about the same as the VARCHAR version – but, uh, neither of those is really a savings compared to compressed NVARCHAR. What if we remove the compression from all 3 versions?

The resulting sizes:

Without compression

Like Solomon wrote years ago, UTF-8 really only makes sense when you’re not allowed to use compression for some reason.

The verdict: just use compressed NVARCHAR.

If you’re positive your application will never need to store Unicode data there, sure, default to VARCHAR. You get a disk space savings, you can cram more in memory, and your index scans will be faster because they’ll read fewer pages. (Not to mention all your maintenance operations like backups, restores, index rebuilds, and corruption checks.)

But you also need to be sure that the application will never pass in NVARCHAR parameters in its queries, which introduces the risk of implicit conversion.

I say if there’s even a chance that you’ll hit implicit conversion, or that you’ll need to put Unicode data in there, just default to NVARCHAR columns. Compress ’em with row-level compression to reduce their overhead to within a few percent of VARCHAR, and that mostly mitigates the concerns about disk space, memory caching capacity, or long index scans.

Bye bye bye.


Set MAXDOP in Azure SQL DB or You’ll Get This Cryptic Error.

Azure SQL DB
1 Comment

Max Degrees of Parallelism (MAXDOP) tells the database engine, “If you decide to parallelize a query, go parallel with this many worker threads.”

(It’s a little more complex than that – there is also a coordinating thread, plus a single plan might have multiple parallel zones that each consume MAXDOP worker threads, but for the sake of this blog post, let’s keep it simple.)

Microsoft’s recommendations on how to set MAXDOP are a little long-winded, but again, in the interest of brevity, I’m going to summarize them as: set it to the number of cores in each physical processor, up to 8. (Again, the rules are much more complex – if you want the full story, click on that link. I’m going to keep moving.)

In Azure SQL DB, you set max degrees of parallelism at the database level. You right-click on the database, go into properties, and set the MAXDOP number.

I say “you” because it really is “you” – this is on you, bucko. Microsoft’s magical self-tuning database doesn’t do this for you.
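
If you’d rather script it than click through SSMS, it’s a database-scoped configuration – run this in the user database, and treat the value of 8 as just an example:

    ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 8;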

And where this backfires, badly, is that Azure SQL DB has much, much lower caps on the maximum number of worker threads your database can consume before it gets cut off. You’ll get an error like this:

The request limit for the database is 2000 and has been reached.

Thing is, that error message is a vicious lie: you haven’t hit 2,000 requests. You’ve just hit 2,000 worker threads! In the case of my client, on their 20-core Azure SQL DB, queries were going parallel and consuming 20+ worker threads. All it took was ~90 simultaneous parallel queries, and they’d hit the worker thread limits. It was so confusing because they thought there was no way their app could possibly be sending in 2,000 requests – and they were right.

The Azure SQL DB resource limits page explains:

The request limit for the database has been reached

The first fix you should try: check your database’s maxdop setting, and if it’s 0, read how to set MAXDOP, and set it. Don’t waste money upsizing your server, and don’t waste time tuning code until you’ve corrected this terrible default setting first.


Here Are the YouTube Channels We Both Loved.

Company News
4 Comments

Last week, I announced that we’ve hit 50,000 YouTube subscribers, and I celebrated by letting y’all submit your favorite YouTube channels in the comments, and 5 of you would win my Fundamentals & Mastering Class Bundle.

You did not disappoint.

First, I’m giving 5 Fundamentals Bundles to these 5 folks who suggested channels that already happen to be among my favorites. I just wanted to reward them for their great taste, and also to share these channels with the rest of you, dear readers:

  • Alexey for suggesting Rick Beato, a really nice musician who conducts interviews with brilliant talent
  • Francesco for suggesting Mentour Pilot, who recaps aircraft crashes and disasters
  • HappyDBA for suggesting the Fire Department Chronicles, who uses comedy to recap real-life experiences of 911 calls
  • Jeremy for suggesting Chocodogger, a dog that eats things with human hands
  • Laurens Bracke for suggesting The OG Crew, a hilarious combination of game show and improv

Next, I couldn’t stop at 5 winners – I ended up picking TEN of them because there were so damn many good ones!

  • Alexander for suggesting The Tim Traveller, a nerdy travel show
  • Andreea Podariu for suggesting Shanbai, an artist with beautiful ASMR-style recordings of their art processes
  • An Gie Cech for suggesting Sumo Food, a chronicle of the daily life of sumo wrestlers practicing and eating, which sounds bizarre, but ends up being chill background material showing a lifestyle I didn’t know existed
  • Ben Belnap for suggesting Stuff Made Here, a cool inventor who doesn’t publish a lot of videos, but when he does, they’re of extremely high quality and production value
  • Joe for suggesting Erik Aanderaa, who sails through vicious storms, making for amazing background video material while I work
  • Joseph for suggesting Jacob Knowles, a lobster fisherman who shares behind-the-scenes stories of his work
  • Richard L. Dawson for suggesting Neural Derp, a channel that builds AI remixes like Redneck Star Trek TNG – these should not be as funny as they are
  • SabiBi for suggesting 40 Over Fashion, a channel with clear, simple advice for looking and feeling your best (I don’t necessarily agree with a lot of the advice, but I like the guy’s opinion and approach already)
  • Shane for suggesting Nat’s What I Reckon, a one-man very not-safe-for-work Aussie cooking show
  • Steve Earle for suggesting 3Dbotmaker, die cast car racing captured with absurdly high production values

Thanks again for making my work fun, for making it possible for me to do this stuff and share my work with y’all, and thanks for being such fun people to interact with. I really love my work, and I love that y’all join in.


We Hit 50,000 YouTube Subscribers! Let’s Run a Contest.

Company News
176 Comments

This is kinda amazing to me: our YouTube channel has broken the 50,000 subscriber mark!

We hit 50,000 YouTube subscribers!

That’s wild to me because I don’t put “please like and subscribe” type stuff in the videos, and I don’t try to build viral content. I just show up every week and publish answers to y’all’s questions from PollGab. And yet, our subscriber growth is slow and steady over time, growing by about 10% per year:

Subscriber growth over time

To celebrate, let’s give away 5 Fundamentals & Mastering Bundles! To enter, leave a comment with a link to your favorite YouTube channel (other than ours) – I love finding new unusual stuff to watch. On Sunday, I’ll pick 3 random winners, plus 2 winners with the YouTube channels I haven’t seen before that strike me as the most interesting.

I suppose I should inspire you with a few of my favorites that you might not have discovered yet:

When you submit your comment, it may not show up right away because comments with multiple links require moderation around here to prevent spam. I’ll moderate & approve ’em daily though.


Microsoft Now Recommends You Set Max Memory to 75% and Min Memory to 0%.

Configuration Settings
20 Comments

Somehow I missed this a few years ago, and I bet a lot of y’all did too. Note the new “Recommended” column in the memory settings documentation:

Min and max memory recommendations

These recommendations are also set by the SQL Server 2022 setup wizard if you choose the recommended settings on the memory step.

The documentation change was made in this pull request, and I don’t see a Github issue or documentation/comments around the recommendation, which is totally fine. I don’t expect (or desire) Microsoft to have a public discussion on every settings change they make – nobody would ever get anything done, hahaha.

When I noticed it, I posted about it on LinkedIn, and there was a vibrant discussion. Randolph West (of Microsoft’s Docs team) posted about it too. People talked about how this should probably be automatically managed, and I’d point out that if you follow Microsoft’s recommendations, it actually is automatically managed! SQL Server will size its memory up (and under pressure, down) based on what’s happening on the server.

I do wish Microsoft’s recommendations added another factor: a different recommendation for big servers. 25% unused memory is pretty high when the server has 512GB memory or more. For example, in my setup checklist, I recommend leaving 10% free or 4GB, whichever is greater. Here’s part of the script that my lab servers run on Agent startup, where I use a 15% number instead because I also run SSMS on those:
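
The lab script itself isn’t reproduced here, but a simplified sketch of the approach – read physical memory, leave roughly 15% free, set the rest as max server memory – looks something like this (my own rough version, not the exact script):

    -- Leave ~15% of physical memory free; set the rest as max server memory.
    DECLARE @PhysicalMemoryMB INT, @MaxServerMemoryMB INT;

    SELECT @PhysicalMemoryMB = total_physical_memory_kb / 1024
    FROM sys.dm_os_sys_memory;

    SET @MaxServerMemoryMB = @PhysicalMemoryMB * 85 / 100;

    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure 'max server memory (MB)', @MaxServerMemoryMB;
    RECONFIGURE;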

That way, when I change an instance’s size, I don’t have to worry about going back and touching max server memory.

I’m not saying 10%, 15%, 25%, or 4GB is the right number, but I think we do need to consider both a percentage and a floor at the same time. Otherwise, people with 16GB VMs are going to say 25% sounds about right, and the people with 1TB servers are going to say 5% is plenty of memory to leave unused.

I’m not opening a pull request to change the documentation recommendation (let alone the setup wizard) because I think y’all will have strong opinions, and I’d rather hear those first. I certainly don’t have the right definitive answer on this one – I’m confident enough to put it in my setup checklist, but not so confident that I’d ask my guidelines to be the official documentation, hahaha.


Announcing Free Fundamentals Week 2025!

Conferences and Classes
2 Comments

You’re a developer, analyst, database administrator, or anybody else working with SQL Server, Azure SQL DB, or Amazon RDS SQL Server. You want to learn how to make your databases go faster.

Good news! On October 13-16, I’m teaching totally free half-day versions of my classes from 9AM-1PM Eastern time, 8AM-Noon Central, 6AM-10AM Pacific:

  • Monday: Fundamentals of Index Tuning
  • Tuesday: Fundamentals of Query Tuning
  • Wednesday: Fundamentals of Columnstore
  • Thursday: Fundamentals of TempDB

These half-day classes are totally free, no strings attached, and they’re a great way of getting started on your formal database education journey, or catching up on things that you’re a little too cocky to admit you didn’t know.

To attend, register here, then grab the calendar invites to block out your coworkers from trying to schedule you for meetings, hahaha. At the time of the class, head here for the live stream. (That URL is in the calendar invites too.)

If you can’t make the live classes, the recordings won’t be on YouTube or free – you’ll need to buy my Recorded Class Season Pass: Fundamentals bundle, which includes the full day-long versions of each of those classes, PLUS additional courses on Azure networking, PowerShell, parameter sniffing, and more. You can either buy one year of access, or lifetime access.

Be there or be square – see you in class!


[Video] Office Hours: Database Questions & Answers

Videos
1 Comment

In between clients, I hopped onto my Twitch channel to take your top-voted questions from https://pollgab.com/room/brento. If you’d like to get notified whenever I do one of those live streams, you can follow my channel for free and you’ll get email notifications automatically. I do ’em whenever I have time, usually about once a week when I’m at home.

Here’s what we covered:

  • 00:00 Start
  • 02:26 DemandingBrentsAttention: Are you aware of any solutions to run SSMS or manage on-prem SQL servers from a Linux/*nix OS? There does not seem to be anything viable, but I’m not sure what I’m missing.
  • 05:24 DataBeardAdministrator: I’ve been tasked with building my first data warehouse. With goals like adding temporal fields and updating reports to use date ranges rather than filtering by active = 1. Any tips on getting started with data collection and ETL? Any personal favorite tools or books you like?
  • 06:56 Wrapped the Tenga with Hello Kitty and Kuromi: Do you ever see Distributed Availability Groups at all? Not even work on, just see.
  • 07:39 Karen from IT: Should DBAs even care about normalization anymore when storage is cheap and SSDs are fast?
  • 09:00 I’mTrying: Our company is pushing us to AWS, including all databases (currently MSSQL and Oracle). Obviously, I’m still learning, but is Amazon RDS for SQL Server a good idea? Are there any apparent gotchas I need to do further research on?
  • 10:05 Online transaction procrastinator: I want to move my 4-node AG to super fast local storage. This means losing the ability to offload corruption checks with SAN snapshots. Is there any other good way to offload them? Failing over to a corrupt database is my worst nightmare.
  • 11:49 MyTeaGotCold: Any advice for dealing with Tableau in particular? My users swear to me that they have no control over the queries it generates, so the only way I’ve found to get good performance is to throw clustered columnstore in wildly inappropriate places.
  • 12:46 JuniorDBA: We take care of multiple customers database servers, Standalone, FCI, Always on AG. Whenever we inherit servers within the company or the customer comes to our company and they migrate here, what kind of checklist should we go through before official accepting the handover?
  • 13:29 Petert: What is your favorite event to go to these days?
  • 14:48 Partitioning Pete: Is table partitioning actually worth it in real-world workloads, or is it just a resume-driven feature nobody really needs anymore?
  • 16:21 My Coffee Got COLD: Why do the run dates in SQL Agent – Job Activity Monitor show in DD/MM/YYYY format? As far as I know all language settings are US-English. Running SSMS Ver 15.0.1824
  • 17:46 SQLbuddy: I have multiple SQL Server instances on one cluster. SSAS, SSRS are also working on it. Is there an easy way to have it working in disaster recovery (in case of one of nodes will fail). Only move these services to separate machine/vm and buy new licenses?
  • 18:40 SQLbuddy: My company has 25TB DataWarehouse (already migrated from async 2016 to sync 2022). All of their jobs (related with Tabular, SSRS etc. only exists on primary node. What is the easiest way to have it replicated in case of failover (and working with minimal impact on business)?
  • 20:25 SQLClueless: We’re working on a problem where USERSTORE_TOKENPERM balloons out of control, even on SQL 2019. I’ve read your post about this and periodically free this memory using an agent job. Have you heard of any new solutions to this issue? Sometimes the job alone doesn’t keep up.
  • 21:14 Chakra: What is the future of SQL DBAs, required to switch to PostgreSQL, NoSQL DBAs, along with AI/ML Vector DBs?
  • 22:10 Dopinder: In SQL 2019, Is there a good way using first responder kit or other tool to see which queries are consuming the most tempdb?
  • 23:10 IWriteSQLInPortuguese: You reference of not using “You Should NoT Linked-Server”. But when inner joins are required, I still need the data. I’ve solved most with data replication, but with some big tables replication is not “possible”. What other solution recomended? TableFunctions? StoredProcedures?
  • 24:54 ParameterSneezing: If OPTION (RECOMPILE) solves parameter sniffing, why not just sprinkle it everywhere and be done with it?