Blog

SQLBits 2026 is Going Back to Newport, Wales.

SQLBits
7 Comments

SQLBits 2026 will be back at the International Convention Centre Wales (ICC Wales) in Newport on April 22-25, and I’ll be there!

Bits was there in 2023, and they’ve got a recap video showing the venue.

This location is a little tricky for us Yanks: here’s where it’s at on Google Maps. For international visitors, you’ve got a few options:

  • Fly into London Heathrow, then take a 3-hour train to Newport. This is the option I chose because there are non-stop flights from Vegas to Heathrow. I’ll fly into London arriving April 19, spend a day and a half in London, and then take the train over to Newport on Tuesday the 21st, the day before the conference. After the conference, I’ll catch the train back and fly out of Heathrow on Sunday.
  • Fly into Cardiff (CWL, 15 mi away) or Bristol (BRS, 30 mi away), and take a taxi or train. If I lived on the East coast of the US, I’d check into this option, because there are affordable flights out of places like Atlanta. Nothing for Vegas though.
  • Fly into London, then rent a car. I wouldn’t recommend this unless you wanna spend more than a week roaming around the UK, and can take your time gradually getting used to driving on the other side of the road. It’s way more challenging than you might expect. I’m a pretty good driver – I’ve never been ticketed for an accident – but I had multiple close calls driving in England and the Isle of Man.

You don’t really want a car during the conference. This year, Bits is selling packages with your hotel room included, and they’re looking at getting buses to take you to/from your hotel. If you stay at the Celtic Manor, you can just walk over to the ICC.

Plus, while you’re at the conference, the idea of this year’s event is to be really involved in the after-hours events at the venue itself. I’ll be at all of ’em this year.

When SQLBits is in London, the after-hours events like the Pub Quiz and the party aren’t very well-attended for a few reasons. Local transportation in London is kind of a pain in the butt, there are so many other distractions in London, and at the end of the day, a lot of locals just go home. In Newport, the venue is more all-encompassing: you’re going out there to have a good time with your fellow attendees during the day, AND after hours.

Early bird registrations will open up this month, and it’s time to start thinking about what sessions to submit. These days, there are so many tech topics: AI, Fabric, Azure, AWS, SQL Server 2025, and timeless dev & DBA topics. I need to think about what to submit, so I’ll put it to you, dear reader: what would be the topic that would compel your boss to whip out the credit card and buy your conference registration and plane ticket?


Helping Software Vendors Talk to DBAs

Software vendors have a wide variety of clients:

  • Some clients don’t have any IT staff at all, and just outsource everything. The client wants to hand an installation manual to their contractor and say, “Build whatever the software vendor wants.”
  • Some clients have a full time sysadmin or two, and those sysadmins don’t really know anything about Microsoft SQL Server. They want a 1-2-3 checklist of what needs to be done, and they’ll follow it to the letter, but they won’t put any independent thought into it, nor will they raise any objections.
  • Some clients have a big IT staff, including at least one full time DBA. Ironically, you’d think this kind of environment would be the easiest one when it comes to installations – but it’s the opposite! The sysadmins and DBAs raise all kinds of objections because they have internal standards, or they want things done a certain way.
(Photo: a pair of baby potbelly pigs, sleeping atop each other on my lap, because they were inseparable. I prefer mirrored pairs of highly available potbelly pigs.)

So when I’m working with software vendors to write installation instructions for their database back end, I encourage them to think about three tiers of SQL Server environment quality:

  • Good – typically a single VM, using whatever VM backup software that the client uses across all of their servers.
  • Better – adds in a method of high(er) availability, typically mirroring, because it needs to be simple enough to be managed by a non-DBA. I’d typically suggest manual (not automatic) failover here, which for a single database app is easy enough that a sysadmin can do the failover while following a checklist, or can call the vendor’s help desk to be walked through it.
  • Best – a complex high availability and disaster recovery topology, perhaps a failover cluster plus log shipping. This requires serious know-how on the client side, and the vendor isn’t going to be able to write instructions detailed enough for a n00b to follow along.
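For the “Better” tier, part of what makes manual failover checklist-friendly is that the failover itself is a one-liner. A minimal sketch, assuming synchronous (high-safety) database mirroring is already configured and using an illustrative database name:

```sql
-- Run on the principal server to manually fail over a mirrored database.
-- YourAppDb is a placeholder name.
ALTER DATABASE YourAppDb SET PARTNER FAILOVER;

-- If the principal is down entirely, the forced-service variant runs on the
-- mirror instead, accepting possible data loss:
-- ALTER DATABASE YourAppDb SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;
```

That one statement is the kind of thing a sysadmin can run while following a checklist, or while on the phone with the vendor’s help desk.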

It might seem counterintuitive, but it’s easy to write the “Good” instructions. There just isn’t much work to be done. However, the more complex the environment is – and the more it relies on the client’s internal standards and processes – the less guidance a software vendor can give. It’s just too hard, too time-consuming, and too expensive to write a document that fully explains how to implement a multi-subnet cluster that works for every possible client.

I find that if the software vendor is honest about that, then it makes the situation easier for DBAs. The “Best”-tier installation guide starts with a simple disclaimer:

We want to be a great partner for you, so that means being flexible and working with your standards. In the next couple of pages, we’re going to describe a typical Best-tier infrastructure (a failover cluster plus log shipping) that works with our more complex and demanding clients. You don’t have to follow that scenario exactly! If there’s an infrastructure you’re an expert on – perhaps Availability Groups or SAN replication – we’d be glad to talk through that with you to make sure it works for everyone involved. Just understand that if you choose to implement your own infrastructure design, we can’t guide you on it, or troubleshoot it for you. We’re relying on you to do that part. But as long as the SQL Server service is up and we can connect to the database with SSMS, we’re happy!

That way, everybody’s on the same page: the vendor has one complex design that they’re familiar with, and they can guide you towards consultants for implementation or troubleshooting, and their support team will be comfortable working with it when the poop hits the fan. However, if you have a design that you’re comfortable supporting, the vendor is flexible and can work with that too – they just can’t do infrastructure support on that design.


[Video] Office Hours: Docked in Miami Beach

Videos
0 Comments

On the last day of my Caribbean cruise, as we pull into port in Miami, let’s go through your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 01:45 Sean: How would you approach a task to remove 90% of the data to support lightweight test db servers. I’m thinking of a script with a recursion on foreign keys that would travel the keys and delete from the bottom up. Any thoughts to give me a kickstart?
  • 03:04 Symmetrical Docking DBA: What features does the AG Dashboard in SSMS lack that you expect to see in a paid monitoring tool?
  • 04:03 zlobnyfar: Clustered ColumnStore Indexes and Availability Groups on MS SQL Server 2014 Enterprise. Is it compatible? Will it impact AG(s)? any PROS and CONS? (some stats: 3 databases (~3 TB), each has up to 5 tables I want to convert to CCI, ~ 100000 updates per day on those tables)
  • 05:04 MyTeaGotCold: Any rules of thumb for when it’s time to stop using CAL licences? It seems like nobody ever talks about CAL.
  • 06:13 Lee: I got a slow query. I see that the pain point in the execution plan is a clustered index delete. I have ensured all foreign keys are indexed. I have tried batching the deletes. I have tried several of the common things to try. Any ideas what to try next to speed it up?
  • 08:02 5 hours without internet: I don’t trust myself to keep lists of waits up to date. So aside from the junk of Page Life Expectancy, how do I alert on memory pressure?
  • 08:43 WalkedIntoAPoll: Good day! What is your opinion of the use of the blue-green deployment method overall? If there is no problem to solve with its implementation, would you agree that it is a best practice to have anyway?
  • 09:41 AnonSoMyCoworkersDontFindOutLol: Previously you have suggested interviewing with employers even if content with current job. I agree and desire to, but am unable to make the first step. How far do you go with each employer? How do you decline a position you know you won’t take, if offered? Thanks for all you do.
  • 11:50 burnedmywatermakingtea: off-topic: forced aspiration or no? turbo or supercharger?. Yes I know it depends.
  • 13:30 LoGan The Librarian: What do you think if you heard someone say this? I personally love SQL, because I’m an algorithms person, so I love optimization. I love to know how the databases actually work, so I can match the SQL queries and the design of the tables to get optimal performance of the table.
  • 14:42 Culloden: What are the risks with using microsoft features that are in Preview and not GA?
  • 16:27 Sarkis: Hi Brent! I’ve never heard you talk about the Database Engine Tuning Advisor. Do you think it’s worth trying at all, or is it just not useful in real-world scenarios?
  • 17:07 Culloden: 3 years into Gen AI and no company is profitable. Is this turning into another tech company bubble for the big AI companies? Is this going to turnout to just be a feature enhancement to existing products rather than a whole new industry? Is AI just the next spell-check in Word?

AI is Like Outsourcing. #TSQL2sday

AI
12 Comments

When you ask data people how they feel about AI, you get pretty rabid, extreme reactions. People either love it or hate it. Set that aside for a second, and let’s zoom out and think about a bigger picture question.

What’s it like to work with someone you know, versus a stranger?

                 Known Person    Unknown Person
  Work Quality   High            Varies
  Price          Expensive       Cheap
  Availability   Low             High

With a known person, like a trusted employee, you know you can count on their work. However, they’re only available so many hours per week, and you have to pay a relatively high price for them to keep that same exact person on your staff.

With an unknown person, like contractors on Upwork or Fiverr, their work quality is highly unpredictable. You can put in a ton of work yourself to clarify the task at hand, build processes for them to follow, and build automated checks on their work – but unless you do that, you don’t really know what you’re going to get. However, companies are often willing to take that compromise in exchange for getting someone way cheaper, way more often, because it’s a giant pool of strangers. Over the last couple of decades, it’s become completely normal for companies to outsource as much as possible to other companies, with varying results. (I’m looking at you, Microsoft Support.)

You make this decision yourself too – compare having a dedicated car with your own full time chauffeur, as opposed to opening the Uber app and calling a car. Compare having a full time chef on your payroll, as opposed to going out to eat at a restaurant.

Now think about code:

                 Known Code      Unknown Code
  Work Quality   High            Varies
  Price          Expensive       Cheap
  Availability   Low             High

When a company builds their own app, they design the logic carefully, test it, and they’re confident that it produces the exact right result, every time, predictably. It’s expensive – especially if you do it right. Note that I said “Availability = Low” here because historically, you haven’t been able to just wave a magic wand and get yourself working app code. It takes time to produce. It’s not just instantly available.

AI is unknown, outsourced code.

When you send requests to large language models like ChatGPT and Claude, you can’t consistently predict the work quality over the course of everything you’re going to ask it to do, over time. Models change, prompts change, the quality of the data going in can vary, you name it.

Because you’re a technology professional – especially one that works with data, where we want very specific, predictable, accurate results – you probably read the above explanation and say to yourself, “Bingo, that’s everything I hate about AI. It’s as if I called a company’s support line – that sucks too, because I get some bozo who has no idea what they’re doing, and they keep asking me if I rebooted my router.”

I’m here to tell you it doesn’t matter what you think.

Many companies are willing to make that compromise because the work is available instantly, 24/7, and it’s cheap as hell. Just like they’re willing to outsource people in their call centers and other job roles (even tech ones.)

That’s why I expect to see SQL Server 2025’s AI used a lot in the wild.

SQL Server 2025 and Azure SQL DB’s new sp_invoke_external_rest_endpoint lets you call ChatGPT & friends directly via T-SQL. Clients have told me they want to try using it for tasks like:

  • Translating product descriptions, menus, etc into other languages (Spanish, German, Japanese) – one client told me they’ll just start all new translations this way, and then have humans review the results rather than build all new translations from scratch
  • Generating personalized customer emails – like passing in a customer’s details, order history, web site behavior, etc to deeply personalize email campaigns and transaction receipts, leading to better customer relationships and easier bypassing of spam filters
  • Improving customer support calls by summarizing a customer’s activity – when a call center operator takes a call, they’ll see a short, two-sentence description of the customer, plus get a customized greeting to use (“Hi Alice! It’s Charlie here in support. Congrats on your recent milestone of hitting six months straight! What can I help you with today?”)

AI is a hybrid: it outsources both people and code at the same time. I’m excited to see what clients build – but at the same time, I’m kinda gritting my teeth, because as a consumer, I’m not all that fond of the results from outsourcing people. It’s coming, though, and it’s useless to try to fight it.

I’m also really not excited to see how that code ages. Hard-coding API calls into T-SQL is a serious form of technical debt, especially in the day and age of rapidly evolving AI. That custom model version you call today may not be available in 3 years, let alone 5-10.

This post was for T-SQL Tuesday #189, and you can read the comments on that post to see other bloggers’ thoughts on how AI is changing their careers.


[Video] Office Hours: 15 Answers in 30 Minutes

Videos
2 Comments

The first few questions from this week’s crop at https://pollgab.com/room/brento are very much about what-problem-are-you-trying-to-solve, but they get more complex from there.

Here’s what we covered:

  • 00:00 Start
  • 02:49 My Erik is Strong: Do you suggest any steps after an upgrade to SQL Server 2022 in particular? I mean 2022 features to try, enjoy, or tweak. I’m not asking for generic migration advice.
  • 05:12 Pock Mages in Lemory: All of my SQL VMs only run SQL Server. How do I know when it’s time to turn on Lock Pages in Memory? How can I know that it’s safe?
  • 07:02 That’s_Not_Chocolate: On our Azure SQL Database park I discovered that most of the compatibility_levels are at 140 and 150. I will try to move to 160 hoping to make the query run faster. Is this a false hope?
  • 08:13 paul: Hi, multiple client’s outsource their infra support to us in a shared model and I’m on the DB team. With limited knowledge of their system design , how should I troubleshoot when they report the DB is slower than usual, so I can suggest next steps?” I have sys admin access.
  • 09:46 MyTeaGotCold: Your recent post resonated with me. I’m wondering: are DBAs doomed to misery? My experience is that any DBA job opening means the estate is unhealthy and will probably take YEARS of pain to fix. Is it worth it?
  • 11:36 chandwich: I’ve worked with SQL Server & T-SQL for 6 years (tuning, coding, backups, design, ETL, automation). I blog, post on LinkedIn, but can’t land a DBA job. At 28, I’m considering data engineering or DevOps instead. How do young professionals break into DBA or consulting roles today?
  • 14:04 NotaSqlQuestion: Sorry if this too off topic, you travels have me wondering, of all the places you haven’t been to, what’s top of the list to visit?
  • 15:26 That’s_Not_Chocolate: A few query are very slow and presents sometimes locking and blocking. After investigation I discovered with horror that the tables involved have over 1’000 columns. Can columnstore index be the low hanging fruit for now while we normalize the database?
  • 17:02 Simon Frazer: Hi Brent, late last year, I mentioned how much I’d love to see SQL Cruise make a return, and you hinted that you might be working on something similar, though more geared toward consultants. I was wondering if that idea has gained any traction since then?
  • 18:02 Dario-L: Hi Brent. Have you ever experienced a problem with multiple OPENJSON evaluations (evaluation occurred as many times as there were rows in the joined table) in a query? Changing (temporally) OPENJSON to WHERE IN speedup the query from 1m to 50ms. Unfortunately, we need this.
  • 19:30 Culloden: My company is looking to hire some BI consultants to design new analytics environment in fabric. I’ve lost the fabric battle! However, What questions would you ask consultants when interviewing them for potential work?
  • 22:13 Fuzzy: In the cloud consulting world, what are your pros / cons for acquiring deep knowledge in a specific database technology vs broader more shallow knowledge across multiple database technologies?
  • 23:51 Avi: Hi Brent, How do you see Database development changing with AI. We are seeing lot of AI tools are able to provide good database designs and SQL queries.
  • 26:20 Rob: Hi Brent, what is a common approach you’ve seen for hosting SQL Server databases for vendor applications. Suppose the database is less than 20GB. Do you think a dedicated Azure DB on a lower tier could be a good fit for this use case?
  • 27:57 Jrl: I am thinking about transitioning from working on database performance to a broader performance engineer career. With the limited facts I’ve given, does this strike you as a bad idea? Do you know of anyone else who has made a similar transition?

How to Make Leading Wildcard Searches Fast

Computed columns
11 Comments

99.9% of you are never gonna need this.

But let’s say you need to run queries with leading (not trailing) wildcards, like this search for all the different national versions of the Encabulator, each of which has different prefixes depending on which government it’s being sold to:
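The code sample didn’t survive into this copy of the post, but the shape of the search is a leading-wildcard LIKE along these lines (table and column names are illustrative stand-ins):

```sql
-- Leading wildcard: matches TurboEncabulator, RetroEncabulator,
-- MicroEncabulator, and so on.
SELECT ProductId, ProductName
FROM dbo.Products
WHERE ProductName LIKE N'%Encabulator';
```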

We’ll simulate it with a last name search in Stack Overflow Users table – granted, you would never store names in a single column, but let’s pretend we’re doing that for the sake of this post:
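The demo query itself was lost from this copy; it boils down to a leading-wildcard search against the Users table, something like:

```sql
-- Find everyone whose DisplayName ends in Ozar:
SELECT Id, DisplayName
FROM dbo.Users
WHERE DisplayName LIKE N'%Ozar';
```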

Even if you’ve got an index on DisplayName, SQL Server can’t dive into the particular area of the index where the Ozars are stored, because there’s everybody from Avery Ozar to Zion Ozar, scattered all through the alphabet, and an index on DisplayName is stored in alphabetical order.

However, if you’re only searching for leading wildcards, you can create an index on the reverse of DisplayName, like this:
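The definition didn’t make it into this copy of the post; here’s a sketch of the indexed-view approach it describes, with object names that are my own placeholders:

```sql
-- Persist the reversed name in a schemabound view, then index it.
CREATE VIEW dbo.Users_ReverseNames
WITH SCHEMABINDING
AS
SELECT Id,
       DisplayName,
       REVERSE(DisplayName) AS ReverseDisplayName
FROM dbo.Users;
GO

-- The unique clustered index materializes the view, with ReverseDisplayName
-- leading so leading-wildcard searches can become index seeks. Including Id
-- (the Users table's unique key) guarantees uniqueness.
CREATE UNIQUE CLUSTERED INDEX CIX_Users_ReverseNames
    ON dbo.Users_ReverseNames (ReverseDisplayName, Id);
```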

I can almost hear your record-scratch reaction from here. Let me explain.

I’m using an indexed view here because I don’t want to modify the underlying table. I could have used a computed column instead, but I’m trying to maximize the goofy number of things that I can show you in one blog post.

Next up in the list of oddball things I’m showing: putting the ReverseDisplayName in the unique clustered index for the view. Normally when you create a clustered index, you wanna follow the SUN-E principles that we discuss in the Mastering Index Tuning class. However, here, the only reason we’re creating this data structure is to make our search query faster. Think of this view’s clustered index as just a nonclustered index on the Users table itself.

Then, modify our query to search for the reverse of the DisplayName:
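The rewritten query was also lost from this copy; the idea is to flip the predicate so the wildcard becomes trailing on the reversed string:

```sql
-- A leading wildcard on DisplayName becomes a trailing wildcard
-- on the reversed string:
SELECT Id, DisplayName
FROM dbo.Users
WHERE REVERSE(DisplayName) LIKE N'razO%';  -- N'razO%' is REVERSE(N'%Ozar')
```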

If you’re using Enterprise Edition and you’ve been living a good clean life, SQL Server will automatically recognize that the function you’re trying to run is already computed and indexed in the view. If you’re unlucky like me or if you’re using Standard Edition, you’ll have to modify your query to get SQL Server to read from the indexed view:
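A sketch of that modified query, assuming the indexed view is named dbo.Users_ReverseNames (an illustrative name):

```sql
-- NOEXPAND forces SQL Server to read from the indexed view directly,
-- which is how you get indexed-view matching on Standard Edition:
SELECT Id, DisplayName
FROM dbo.Users_ReverseNames WITH (NOEXPAND)
WHERE ReverseDisplayName LIKE N'razO%';
```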

In either case, you’ll end up with a nice tidy index seek. The data’s organized backwards from the end of the string to the beginning, so SQL Server can dive into the razO% area of the index and read the rows out with just a few logical reads:

(Screenshot: the indexed view execution plan)

Isn’t that cool? Like I said, you’re probably not gonna need that – but if you do, it’s fun to know that techniques like this exist. Of course, it only works for leading wildcards, because if the search predicate has both leading and trailing wildcards, then we’re right back where we started.

Indexed views have their own gotchas – to learn more about those, watch this module of the Mastering Index Tuning class.


Who’s Hiring in the Microsoft Database Community? August 2025 Edition

Who’s Hiring
8 Comments

Is your company hiring for a database position as of August 2025? Do you wanna work with the kinds of people who read this blog? Let’s make a love connection.

If your company is hiring, leave a comment. The rules:

  • Your comment must include the job title, and either a link to the full job description, or the text of it. It doesn’t have to be a SQL Server DBA job, but it does have to be related to databases. (We get a pretty broad readership here – it can be any database.)
  • An email address to send resumes to, or a link to the application process – if I were you, I’d use an email address, so you can tell which applicants are readers here. They may be more qualified than the applicants you regularly get.
  • Please state the location and include REMOTE and/or VISA when that sort of candidate is welcome. When remote work is not an option, include ONSITE.
  • Please only post if you personally are part of the hiring company—no recruiting firms or job boards. Only one post per company. If it isn’t a household name, please explain what your company does.
  • Commenters: please don’t reply to job posts to complain about something. It’s off topic here.
  • Readers: please only email if you are personally interested in the job.

If your comment isn’t relevant or smells fishy, I’ll delete it. If you have questions about why your comment got deleted, or how to maximize the effectiveness of your comment, contact me.

Each month, I publish a new post in the Who’s Hiring category here so y’all can get the latest opportunities.


[Video] Office Hours: Interview Horror Stories Edition

Videos
3 Comments

Wanna hear about my worst job interviews? I drove up to 8,000 feet where it’s 30 degrees colder than Las Vegas to bring you these answers to your top-voted questions from https://pollgab.com/room/brento. The audio’s a little odd on here because it was really windy, and I had to use noise reduction in post-production.

Here’s what we covered:

  • 00:00 Start
  • 01:56 neil: Our system admin team tried to quick format production SQL drive “because it says it doesnt lose data.” SQL admin stopped them. Would Windows admins be the team that would have to manage storage snapshot backups? Therefore I think I shouldn’t use it unless I had better sys admins
  • 03:03 myClusteredIndexSucks: Your 5 and 5 rule is great, but when the clustered index key is useless, do you ever create a non clustered index with a useful key and all or most of the columns to reduce key lookups?
  • 04:46 Mohit: Hi Brent, Have you ever gotten rejected in any of your interviews? What’s your favourite interview story? Last but not least, thanks for giving so much to the community.
  • 13:14 AG Avoider: Has the cloud made Failover Clustering any cheaper or more popular? Multi-attach EBS costs so little that I would predict it, but I haven’t seen any evidence.
  • 14:19 Kiwi_SQL: Hi Brent, Love your work since I started working with SQL Server. Who are the other consultants in the SQL Server world you follow and why?
  • 15:13 gserdijn: Hello Brent, is there a way to notify your followers a bit earlier when you are about to host Office Hours?
  • 16:06 Kulstad: Every Monday morning, I run sp_BlitzIndex @Mode=3, as well as Pinal Dave’s check index script. Pinal’s script usually returns many more index suggestions than sp_BlitzIndex. Is there really that much of a difference between what sp_BlitzIndex checks and what he checks?
  • 16:58 jrl: My impression is that your income went to the next level once you started to focus on delivering training, especially scalable forms of training. What would your plan B have been if training hadn’t grown your income to the level that you wanted?
  • 18:12 MyFriendAlwaysHasProblems: My friend has 2 SQL Servers in AOAG, 2 DBs (of several) in a replica set. CheckDB on the primary node DBs returns errors, but running several times comes up with different errors each time. Remove from AOAG (& set node Offline in WSFC) & no CheckDB errors. Ever see this?
  • 18:57 Neil: is it safe to bring an old version of msdb to a new server on sql 2022/2025? we have 1000’s of jobs we need to migrate
  • 19:41 TeaEarlGrayHot: Is there a less intrusive way to get page fullness than sys.dm_db_index_physical_stats in detailed mode? I have an app that suffers from low page density, detailed is too intrusive, sampled can be inaccurate and I have problems getting a picture of which indexes need attention

Poll Results: Yes, Your DBAs Can Read the Data.

Last week, I asked if your database administrators could read all of the data in all databases. The results (which may be different from this post, because I’m writing the post ahead of time and the poll is still open):

  • Yes: 61%
  • Yes, but we trust them not to: 29%
  • Yes, if they bypass tech restrictions, but we trust them not to: 4%
  • No, it would be impossible: 6%

90% of DBAs can easily read everything.

In small companies, there’s not really a way around this. At the end of the day, someone has to be personally accountable for things like setting up encryption, and in small companies, people wear multiple hats. In small to midsize companies, there’s often just one database administrator, period, and they have rights to all the databases.

SQL Server’s Always Encrypted feature was supposed to help with that by encrypting data at the app’s database driver level, before it even goes across the wire to Microsoft SQL Server. I haven’t seen much adoption of it, though – not because it’s a bad feature or there’s anything technically wrong with it, but because:

  • It takes both developer and DBA work to implement
  • It can result in some pretty stiff performance penalties if it’s not done correctly, or if your app has some oddball query patterns (like leading wildcard string searches)
  • Data often ends up getting decrypted and dumped in an unencrypted data warehouse or data lake anyway
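To see why it takes work on both sides: the encryption and decryption happen in the client driver, which the app has to opt into per connection. A minimal sketch of the client-side switch – server and database names are placeholders:

```
Server=tcp:yourserver.example.com,1433;Database=YourAppDb;Column Encryption Setting=enabled;
```

With that setting enabled, drivers like Microsoft.Data.SqlClient transparently encrypt parameters and decrypt result columns – provided the app can also reach the column master key, which is the key-management part the DBA and developers have to set up together.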

Those aren’t unsolvable problems by any means. The larger companies get, and the more security regulations they’re subject to, the more likely they can afford the work of mitigating those risks and paying for other security best practices.

This makes it harder to get a DBA job.

The fact that DBAs can read everything means the company really wants to know someone before they trust ’em with the ability to see everything. In small to midsize companies, that trust is typically built up over time by hiring the employee first as a developer or sysadmin, and then gradually segueing them over to database administration after they’ve proven their trustworthiness.

I don’t really have anything to add to this. It’s just the way the data business works. But I like conducting polls like this so that people can read the poll results from their peers and say, “Oh, I get it – it’s not just my shop that has these kinds of problems.”


The Query Tuning Trick You Should Use More: Pagination

T-SQL
6 Comments

When I’m tuning queries, the normal answer is to make the query perform better – either via changing the T-SQL, adding hints, or adding indexes so that the data’s better prepared for the query.

However, sometimes when I’m looking at the output of sp_BlitzCache, I scroll across to the Average Rows column and double-check that the query’s actually returning a reasonable number of rows out in the wild.

I’ve uncovered some amusing situations, like a back end process that was supposedly fetching the emails that were ready to be sent out. We noticed that it was one of the top resource-consuming queries, but no one had been complaining about it since it ran as a back end service. However, it was retrieving hundreds of thousands of rows every time it ran. Upon checking the query and the source data, it turned out that the source data had a ton of outgoing emails with null “to” addresses – which the back end process ignored each time it ran. The developers purged that invalid data, and the server’s CPU returned back down to normal.

Another situation was a typical date-range report, as in, “Show me the orders shipped for @SalespersonID between @StartDate and @EndDate.” However, the average number of rows was over 50,000! I had so many questions, and I blurted them all out at once on the call:

  • Is there a salesperson that’s actually had 50,000 shipped orders, in total? (That seemed like a really high number for the kind of high-ticket items that this company sold.)
  • What’s the shortest date range where someone actually had 50,000 shipped orders? (In this case, it was a 10-year date range.)
  • Who would actually run this report for a 10-year date range?
  • What would they learn from this report?
  • Even if they needed it once – why would they continue to run it multiple times per hour, for a 10-year range?

After we all got done chuckling and shaking our heads, we did some research, and to the best of our abilities, we guessed that someone had this report on a scheduled task. I asked if we could implement one of my favorite query tuning techniques – so good that it inspired Sir Rod Stewart’s hit song from 40 years ago, “Pagination”.

Okay, well, actually Rod’s song is called Infatuation, but every time I say the term “pagination”, I can’t help but sing it to this tune, and my training class students will recognize it: “It’s pagination, WOO!”

In the case of this client, we changed the query to use pagination like I describe in this module of the Mastering Query Tuning class, adding new parameters to the stored procedure for @PageNumber (defaulted to 1) and @PageSize (defaulted to 1000). We figured out that most users would never need to see more than 1,000 rows, so we defaulted to 1,000 rows for the first page. We figured that if we got a support call asking for the second page, we’d cross that bridge when we came to it.
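The change looks roughly like this – the proc, table, and column names are stand-ins, but the OFFSET/FETCH pattern with defaulted paging parameters is the heart of it:

```sql
-- A sketch of the pagination pattern described above (illustrative names):
CREATE OR ALTER PROCEDURE dbo.usp_ShippedOrdersReport
    @SalespersonID int,
    @StartDate     date,
    @EndDate       date,
    @PageNumber    int = 1,     -- first page by default
    @PageSize      int = 1000   -- most users never need more
AS
BEGIN
    SELECT OrderId, ShipDate, OrderTotal
    FROM dbo.Orders
    WHERE SalespersonID = @SalespersonID
      AND ShipDate >= @StartDate
      AND ShipDate <  DATEADD(DAY, 1, @EndDate)
    ORDER BY ShipDate, OrderId   -- a deterministic sort keeps pages stable
    OFFSET (@PageNumber - 1) * @PageSize ROWS
    FETCH NEXT @PageSize ROWS ONLY;
END;
```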

We never got that call.


[Video] Building a Brand By Sharing Your Work

You’ve got a good job, and some spare time. You’re thinking about giving back to the community by blogging, presenting, live streaming, or working on Github projects – and you’d like to make your next job search easier. Let’s talk about it.

I apologize for the ghosty nature of my video! I was in the midst of tweaking my home studio setup, and I made the mistake of changing lighting settings right before recording. Doh! I felt bad about that, but like I mention in the video, “Artists ship,” hahaha. I couldn’t put another 41 minutes of my life into that video to fix the ghosting, but hope the content is still worth it.

For this session’s resources, check out my career & professional development page.


Poll: Can Your DBAs Read the Data?

It’s a simple question: can your database administrators read all of the data in all databases?

  • Yes
  • Yes, but we trust them not to – as in, we’ve got written policies in place, and they know they’re not supposed to go poking around in the payroll tables or the human resources database
  • Yes, if they bypass tech restrictions – like we’ve encrypted the data, but the DBA could technically log in with the app’s login or certificate and decrypt stuff they’re not supposed to see, or get access to our key vaults, or create a new login and then delete it afterwards
  • No, it would be impossible – the decryption keys are held in systems they can’t get access to, or they don’t have the capability to create users in those systems, the data is never copied into unencrypted systems like data warehouses, etc.

Take the poll here and view the results here.

I’ve turned comments off for this blog post because I know there’s gonna be a firestorm of people who wanna put long, drawn-out text responses with all kinds of gotchas and clarifications. Not gonna happen here, pardner, and don’t bother trying to get my attention on social media about this either – I’m gonna ignore it. I can already imagine all your “but but but” responses, and I’m not into but stuff – at least, not with you.

I’ll follow up with the results in a week, and comments will be enabled on that post.


Want Me to Train You & Your Coworkers Privately?

Conferences and Classes
0 Comments

Your company doesn’t want to send you off to conferences or training classes in other cities.

You’ve looked around locally, but the local options seem irrelevant to your needs. It’s generic stuff like how to pass certification exams, or boot camps to get your first DBA job.

Perhaps you’ve even talked to some trainers about coming to your company, but you’ve been left with the sinking feeling that you actually knew more than the trainer did. You guessed that the trainer was simply reading off training material that somebody else wrote.

You’re wishing you could get real-world, useful training that will actually help you, your coworkers, and your developers do your jobs better – training customized to the exact problems you’re having today, in production, not theoretical, irrelevant trivia.

Hey, that’s me! That’s what I do!

Here’s how it works:

  • You pick the sessions from my training catalog, so they’re all relevant to you
  • If you like, you can also send me diagnostic data from your SQL Servers so I can recommend training sessions to help you solve real-world issues, fast
  • You pick remote or onsite – they’re both $5,000/day, but for onsite, there are additional travel expenses and travel time
  • I send over a list of available dates and a contract

Then, during our live sessions together:

  • You can ask questions that are relevant to your own apps & servers
  • We can do open Q&A so you can drill down into your challenges
  • It’s a private session, so you don’t have to worry about exposing your challenges to a public audience

To get started, fill out the training catalog spreadsheet with the classes you want, and email it to me at Help@BrentOzar.com. Let’s have some fun and raise your team’s SQL Server capabilities!


Microsoft Fabric Has a New Service Status Page.

Microsoft Azure
2 Comments

I’ve been pretty vocal here on the blog and on social media about the reliability problems with Microsoft Fabric. Today, I’ve got good news: Microsoft released a new Fabric status page and a known issues page, something that really does take guts given the current reliability situation. The status page still appears to be manually updated, not automated, but at least it acknowledges outages like how SQL endpoints are currently returning incorrect query results:

Fabric status page

They’re also keeping past issues on the page, although it’s not yet clear for how long.

The Fabric Known Issues page defaults to the Data Engineering category, and you have to click each category to see the known issues in the others, like Databases or Power BI. Those pages show closed issues too. That’s exactly how it should be done – kudos!

If there are any changes you’d like to see made, Microsoft is listening to this Reddit post.


[Video] Office Hours in My Backyard

Videos
3 Comments

Let’s hang out, listen to the peaceful waterfall, and go through your top-voted questions from https://pollgab.com/room/brento. This is kinda the calm before the storm: in another week, I’ll be on the road again for a while, hopefully bringing you some fresh Office Hours locations from the Caribbean and Alaska!

Here’s what we covered:

  • 00:00 Start
  • 01:30 Simon Frazer: I’m curious to know if partitioned views are still used much. How often do you see these out in the wild?
  • 02:37 ConfinedLake41: Do you have any words of wisdom for moving from on-premises SQL Server to Azure SQL? A client wants to move all services to the cloud (imposed by the group). Any tips, tricks, hints, and/or material would be really helpful! Thanks!
  • 03:00 Ess Tea DBA: I once heard about using the Resource Governor as a monitoring tool only. Have you ever seen this done in practice?
  • 03:45 Craig: I am running sp_BlitzFirst every 15 min to collect metrics as suggested in your “Running SQL Server in AWS & Azure” course. My problem is that sometimes it runs for over 10 min and I do not want this to affect the other processes. Does sp_BlitzFirst have an impact on the server?
  • 04:54 chris: When I was younger I had more time on my hands than ideas to use it. Now I find I’ve far more I want to do than I’ve time for. Did you have a hard time deciding to move away from production DBA work? Do you have any advice for making difficult decisions to give something up?
  • 09:09 Evan C: Besides using more disk space are there any gotchas with increasing the retention period for CDC?
  • 10:02 Northwind: Hi Brent, is there a way to automatically trigger a stored procedure or SQL script right after a database is restored? We have a vendor database which allows engineers to restore prod backups to test. Due to org policy, we can’t mirror prod security in test.
  • 10:50 Adrian: Are there any plans for future videos regarding working with people (especially in development environments)? I find your talks about working with customers and/or colleagues really reach the same issues or challenges I sometimes find myself in. Would really like to hear more.
  • 12:24 neil: What will RCSI actually break in production for a third-party app? The test environment doesn’t seem like enough of a test. If I beef up TempDB and turn on RCSI in prod, how do I know if it’s acceptable to the company? What tests do you run, and how do you decide if it’s working? Data errors? Downtime?
  • 13:34 Craig: How can I measure busyness by database? I am looking for something like batch requests per second, but by database.
  • 14:17 Craig: At the moment, our webpages call multiple stored procedures (one for a dropdown, one for a list, etc.). We are considering making each webpage call a single stored procedure that returns multiple result sets. What would you use to test whether this is an improvement (or not)?
  • 15:29 VegasHeatIsNoJoke: Brent, what’s the sweet spot for free space in Azure SQL DBs? The problem my bosses are trying to solve: the price of storage is now our pain point. A few of our large databases have over a TB of free space, while the % free is in the 15-20% range. I worry about performance.
  • 17:01 MyTeaGotCold: Do you often find that Basic Availability Groups meet the HA/DR needs of your clients? I struggle to picture picking them instead of FCI or log shipping.
  • 18:02 Dfens: Hi Brent, I need to prevent one stored procedure in one database from taking all the resources of my whole SQL instance. Is there a way in Standard Edition to limit CPU, RAM, or disk usage somehow? This is one main reason we don’t put all DBs on one server, but instead run several SQL Servers.
  • 19:02 neil: Are storage snapshot backups available for SQL VMs in Azure?

SQL ConstantCare® Population Report: Summer 2025

In this quarter’s update of our SQL ConstantCare® population report, showing how quickly (or slowly) folks adopt new versions of SQL Server, the data is very similar to last quarter. SQL Server 2019 still rules the market:

  • SQL Server 2022: 24%, up from 21% last quarter
  • SQL Server 2019: 45%, was 44%
  • SQL Server 2017: 11%, was 12%
  • SQL Server 2016: 12%, was 15%
  • SQL Server 2014: 5%, was 4%
  • SQL Server 2012 & prior: 1%, same
  • Azure SQL DB and Managed Instances: 2%, same

We had exactly one brave soul monitoring a new SQL Server 2025 instance as well! Here’s how adoption is trending over time, with the most recent data at the right:

SQL ConstantCare Population Report Summer 2025

In last quarter’s update, I said I expected to see a big jump in SQL Server 2022’s adoption rate as folks prepared for next year when SQL Server 2016 goes out of support. Sure enough, we saw about a 20% drop in SQL Server 2016 deployments, and the bulk of those numbers moved to SQL Server 2022’s market share.

Right now, a stunning 96% of the audience is running supported major versions – aka, SQL Server 2016 or newer! That’s great! I’m so happy with y’all. (That doesn’t mean everyone is fully up to date, though – a lot of folks are behind on their cumulative update patching.)


Updated First Responder Kit and Consultant Toolkit for July 2025: Independence Release

I’mma wax philosophical for a second: I love the open source nature of the First Responder Kit because it lets anybody, anywhere, diagnose database problems faster. If you’re a full-time employee somewhere today, you owe it to yourself to use the FRK scripts, because if you ever switch from one company to another, your employment agreement will probably require that you leave your old scripts behind. (I’ve actually been involved in a lawsuit where a DBA took his T-SQL scripts to his next job, and got sued for it.)

Someday, I hope you celebrate your own personal independence and get to define your own success, and I hope the First Responder Kit scripts help in some small way.

Anyhoo, if you do live performance tuning of workloads under stress, you’ll find a lot of little quality-of-life enhancements in this quarter’s release.

How I Use the First Responder Kit
Wanna watch me use it? Take the free class.

To get the new version:

Consultant Toolkit Changes

Updated to this month’s First Responder Kit, but no changes to the spreadsheet template.

sp_Blitz Changes

  • Enhancement: add warning for AG replica more than 60 seconds behind. (#3635)
  • Enhancement: add warnings for SQL Server 2025’s new database scoped configurations and sys.configurations. (#3646 and #3657)
  • Enhancement: clarify warning text about AD groups in case they’re empty. (#3648, thanks Jane Palmer.)
  • Fix: was returning multiple warnings if trace flag 7745 wasn’t enabled and Query Store was enabled. (#3627)
  • Fix: exclude model database from Query Store checks. (#3654)

sp_BlitzCache Changes

  • Enhancement: can now do plan cache analysis on read-only secondaries. (#3632)

sp_BlitzFirst Changes

  • Enhancement: add warning for deadlocks happening during the live sample. (#3637)
  • Enhancement: add total thread time to headline news result set. (#3631)
  • Enhancement: move “Avg ms Per Wait” to the left in result sets because it’s so useful for troubleshooting storage, CPU, and blocking waits. (#3629)
  • Fix: warnings about restores happening now weren’t firing on Amazon RDS. (#3643)

sp_BlitzLock Changes

sp_ineachdb Changes

  • Enhancement: adds new parameters for @is_ag_writeable_copy and @is_query_store_on. (#3651, thanks bwiggin10.)
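If the new parameters work the way their names suggest – filtering execution down to databases that are writeable AG copies and have Query Store enabled – usage would look something like this. This is my assumption from the parameter names, so check issue #3651 for the exact semantics:

```sql
-- Hypothetical usage sketch of sp_ineachdb's new filter parameters.
-- The @command here is just an example payload that reports each
-- database's Query Store state; the filter behavior is assumed.
EXEC dbo.sp_ineachdb
    @command = N'SELECT DB_NAME() AS database_name, actual_state_desc
                 FROM sys.database_query_store_options;',
    @is_ag_writeable_copy = 1,
    @is_query_store_on = 1;
```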

For Support

When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. Be patient: it’s staffed by volunteers with day jobs. If it’s your first time in the community Slack, get started here.

When you find a bug or want something changed, read the contributing.md file.

When you have a question about what the scripts found, first make sure you read the “More Details” URL for any warning you find. We put a lot of work into documentation, and we wouldn’t want someone to yell at you to go read the fine manual. After that, when you’ve still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes me!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you’re working with.


[Video] Office Hours: Finally Back in the Home Office

Videos
0 Comments

I’m finally back in the home office after running around Europe for a month. Let’s talk through your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 01:52 World Peace: Hey Brent, I am working in Azure currently, and the Azure architect is placing all the mdf and ldf files on one drive, claiming it’s a much better setup in Azure than separate data and log drives. Are the previous ways of setting up a DB server no longer valid?
  • 03:59 Bishal: I have an Azure VM hosting SQL Server Standard Edition with multiple DBs. We reduce server specs during the weekend and scale back up on Monday to save costs, since there’s very little usage on weekends, but this causes around 10 minutes of downtime. Is there any option for this with zero downtime?
  • 05:07 Kurama: My boss says you don’t need to run index maintenance anymore outside of stats updates, since fragmentation isn’t a concern with all-flash storage. We haven’t had any issues with this approach (our main DB is ~10 TB with many billion-row tables). What do you think?
  • 05:31 SQLHeinrich: Hi Brent, I didn’t have the budget for Data Saturday Croatia this year. How was it from your perspective? How is the SQL Server community there? And do you plan on being there next year too?
  • 07:19 So very tired: Neither transactional nor merge replication are in my toolbox. Do they take long to learn? Are they worth learning?
  • 08:19 Flash Sentry: Very specifically for AGs, what should I look for in a monitoring tool? Most of them just regurgitate sys.dm_hadr_database_replica_states.
  • 09:49 Tigger: What model and size do you use in your Local LLM?
  • 12:06 Jerry: Hey Brent! Could you explain data partitioning in SQL Server to someone new to it? When should we consider it, and what are the main things to watch out for?
  • 13:09 MyTeaGotCold: Have you found a use case for incremental statistics? As far as I can tell, nothing has changed since Erin Stellato was complaining about them in SQL Server 2014.
  • 13:25 marcus-the-german: Hi Brent, I have a simple select * from table, no filtering. The query runs for 8 sec (statistics io, time is on), but execution plan says just 900 ms. Any hints? The query is executed in SSMS 21.
  • 14:00 Dopinder: What’s the max number of DBs you’d recommend for a single SQL Server availability group? What are the warning signs of exceeding that number?
  • 15:17 Mirza: What Azure technology do you recommend we focus on if we only have on-premises experience as DBAs? Is it just Azure SQL Database, or do we need working knowledge of other tools such as Azure Data Factory?
  • 15:56 OneCTEtoRuleThemAll: To CTE or not to CTE? Help me be kind to myself and others – other than recursion, are CTEs only a matter of preference and style? Or can you gain performance under the hood using them?