
[Video] Office Hours in the Home Bar

Videos
0 Comments

On a sunny Saturday afternoon, I made myself a highball and went through your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 00:57 jrl: Do you have any advice about communicating effectively in a fully remote workplace where all discussion is centered around a messaging platform like Slack or Teams? The company I work for has turned into a busy chatroom, and important messages are easily missed.
  • 02:24 Care-Giver: What is the benefit of using Constant Care instead of just running sp_Blitz once a week and emailing my team with the results?
  • 03:52 Marian: Hi Brent! What advice do you have for a SQL dev to improve when their job feels routine and too comfortable? I’m early in my career and worry my skills aren’t being challenged enough. How can I push myself to grow and stay sharp?
  • 05:02 Steve: Hi Brent, Any plans for a run of merchandise? I’d love a “what’s the problem you’re trying to solve” T-Shirt
  • 05:42 Planetscale Benchmarks: Ever thought about moving Constant Care to Aurora Optimized Reads? I don’t think that you’ve blogged on it.
  • 06:11 Dopinder: Do you have any class modules on generating synthetic data for SQL Server lower environments?
  • 07:02 Mr. SqlSeeks: When you encounter a client experiencing DB corruption, what tends to be the most common causes of that corruption?
  • 09:03 DamastaG: Hi Brent. When troubleshooting code that does things based on time, usually out of hours, (e.g. only run this part if it is a Sunday) is there a way to spoof your SPID’s session date/time so it will run as you need it to?
  • 11:45 My Friend Asks: I’ve been managing databases for a long time (perhaps too long), and now have a crazy notion that I might like to move towards managing a technical team instead. Dipped my toe in a little (vacation fill-in) and was ok with it. Any training you’d recommend for making that leap?
  • 13:38 DeveshDBA: Future Impact on DBA jobs due to the introduction of agentic AI. More and more companies will introduce database agents as well. Please share your thoughts.
  • 14:55 Sean: Hi Brent! People are sometimes overconfident because they don’t know what they don’t know (Dunning-Kreuger Effect). What do you know you don’t know about SQL Server (or in general) that you wish you knew more about?
  • 17:31 Kulstad: what is your stance on multiple installs of SQL server to keep important business processes running vs one SQL server to manage them all. Right now, we have 20+SQL server Standard installs, and I’m looking to save $$ with licensing
  • 18:50 Ynze: After I restart my sql server the write latency on the main DB rises slowly from 0 to 160 over the course of three days I have tried increasing IOPS but that does not seem to help. Query optimization is not an option because we run a third party ERP. Where can I look further?
  • 19:49 Starr Spencer: Your post today was amazing! You continue to be my hero in so many ways!
  • 21:04 Juche: What’s the worst DB security incident in your opinion?
  • 22:30 Bandhu: What are your thoughts on using file groups to periodically migrate older data from expensive azure storage to cheaper azure storage for cost saving purposes? Is this worth the potential headache?
  • 24:05 Bandhu: The developers want the DBA team to periodically review the TSQL generated by their ORM. What is the best tool to give to the developers so they can capture their own ORM TSQL without needing SA permission?
  • 25:07 Philippe: Hi, in your opinion what should be a typical SQL Server DBA calendar? What should be the regular scheduled tasks, at what interval, how much time should be set aside for support/emergencies, same for training and new stuff exploration. Something like “your typical DBA calendar”.
  • 27:58 Lost-In-TheMountains: I inherited Merge Replication. The setup involves a main lodge and ten backcountry, remote lodges. Main lodge publishes to remote lodges, and the merge runs every 4 hours. Network is ok, w\ Starlink as backup. What would you consider implementing as an alternative to Replication?
  • 30:11 Hypervisor has 6 TB spare: If I have almost no PAGEIOLATCH waits, is there any real reason to add more RAM?

Your SQL Server Needs Spring Cleaning, Too.

Indexing
5 Comments

First things first: pick up that filthy keyboard, take it over to the trash can, turn it upside down, and give it a good, vigorous shake.

Next, go to your biggest SQL Server and run:
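
    /* A sketch of the inventory query described below – assuming the First
       Responder Kit's sp_BlitzIndex; @SortOrder support varies by version: */
    EXEC dbo.sp_BlitzIndex
        @GetAllDatabases = 1,
        @Mode = 2,              /* itemized inventory of every table & index */
        @SortOrder = 'size';    /* biggest objects first */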

This will give you an inventory of all of the objects in your databases, ordered from biggest to smallest. Here’s what to look for:

Ask simple questions about the 10 biggest tables. Are they supposed to be this large, and contain this much history? Do we have history-deletion jobs that are supposed to be running, but aren’t? I swear, this happens at almost every client I work with! By simply cleaning out the unused, unwanted history, we can make our maintenance jobs like DBCC CHECKDB run faster.

Look for objects with names like Backup_ or Temp_ or ToBeDeleted. We’ve all done a quick backup by copying data into another table, and then accidentally forgotten to go back and delete it afterwards. Check the Reads column to make sure it’s 0, and then rename the objects with a prefix like __To_Be_Dropped_On_20250425. Over the coming days, make sure you’ve got a good copy of the backup so that you can undelete those objects if necessary, and then on April 25, do the needful. (And this time, actually drop them – set yourself a calendar reminder.) Again, the goal here is faster maintenance jobs.
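
For example, with a hypothetical leftover table, the rename might look like:

    /* Table name is hypothetical – flag it for deletion rather than dropping it today: */
    EXEC sp_rename 'dbo.Sales_Backup_Temp', '__To_Be_Dropped_On_20250425_Sales_Backup_Temp';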

Next, look for indexes with Reads = 0, and Writes > 0. These are indexes that are slowing down your inserts, updates, and deletes, and they don’t have the payoff of making select queries go faster. Script out their definitions in an undo script so you can recreate ’em if necessary, and then drop ’em. (I’m not a fan of disabling them because I’ve seen too many homemade index maintenance scripts that simply rebuild indexes without checking the disabled status, and when you rebuild a disabled index, it goes back into place.)
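
The undo script can be as simple as the index’s CREATE statement, saved somewhere safe before you drop it – a sketch with hypothetical names:

    /* Undo script – keep this in source control before dropping: */
    CREATE NONCLUSTERED INDEX IX_Orders_LastEditorId ON dbo.Orders (LastEditorId);
    GO
    /* The actual cleanup: */
    DROP INDEX IX_Orders_LastEditorId ON dbo.Orders;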

Finally, look for tables with Reads = 0 and Writes = 0. Do we have old versions of tables lying around that used to be part of the production app, but no longer are? Do we still need those?

This work only takes an hour or two, and it reduces dead weight that’s lengthening your backups, DBCC CHECKDB, statistics updates, and more.


Updated First Responder Kit and Consultant Toolkit for April 2025

First Responder Kit Updates
0 Comments

This quarter’s release contains a few advanced warnings & tools, like table input for sp_BlitzLock, new warnings about paused resumable index operations that will be deleted automatically by SQL Server 2022, and a new option to maintain carriage returns and line feeds in sp_BlitzCache’s output.

How I Use the First Responder Kit
Wanna watch me use it? Take the free class.

To get the new version:

Consultant Toolkit Changes

Updated to this month’s First Responder Kit, but no changes to the spreadsheet template.

sp_Blitz Changes

  • Enhancement: added warning for non-default database scoped configuration changes for SQL Server 2022’s options, like PAUSED_RESUMABLE_INDEX_ABORT_DURATION_MINUTES. (#3620, thanks Reece Goding.)
  • Fix: @OutputType = ‘markdown’ would fail if the output had an ampersand in it. (#3610, thanks Roger Dawkins.)

sp_BlitzCache Changes

  • Enhancement: new @KeepCRLF parameter lets you retain the carriage returns & line feeds in query text if you want. (#3611, thanks Vlad Drumea.)
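
For example, a minimal call might look like this (the value is my assumption – everything else left at defaults):

    EXEC dbo.sp_BlitzCache @KeepCRLF = 1;  /* keep CR/LFs in the query text output */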

sp_BlitzIndex Changes

sp_BlitzLock Changes

  • Enhancement: add table mode input. New parameters for @TargetDatabaseName, @TargetSchemaName, @TargetTableName, etc. (#3614, thanks Erik Darling!)
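
Based on the parameter names above, a call might look something like this (my guess – the values are purely illustrative):

    EXEC dbo.sp_BlitzLock
        @TargetDatabaseName = N'DBAtools',
        @TargetSchemaName   = N'dbo',
        @TargetTableName    = N'DeadlockFindings';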

sp_DatabaseRestore Changes

For Support

When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. Be patient: it’s staffed by volunteers with day jobs. If it’s your first time in the community Slack, get started here.

When you find a bug or want something changed, read the contributing.md file.

When you have a question about what the scripts found, first make sure you read the “More Details” URL for any warning you find. We put a lot of work into documentation, and we wouldn’t want someone to yell at you to go read the fine manual. After that, when you’ve still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes me!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you’re working with.


The Waitlist is Open Now for My SQLBits Workshop!

SQLBits
3 Comments

I’m excited to return to London for SQLBits again this year!

I’ve written about why SQLBits is my favorite conference, and it’s hard to believe it’s been over a decade since I wrote that post. I still feel the same way about Bits – it’s fun, different, and unafraid to run big experiments.

This year, they’re running a BIG experiment: the conference runs Wednesday, Thursday, Friday, and Saturday. You’re used to all-day workshops running before or after the conference, but that’s not how it goes this year. This year, the workshops are during the conference!

My workshop is all day Thursday. That means if you choose to attend mine, you’re going to miss the Microsoft keynote (which is usually just marketing stuff anyway), and you’re going to miss the general sessions that run on Thursday. I know that’s going to be a tough choice for folks – you’re going to have fear of missing out (FOMO) no matter what you choose.

Unfortunately, my workshop sold out as soon as registration opened – before I could even get a blog post out – but they’re running a waitlist. Register for the conference, use coupon code ozar102025 to save 10% on your registration, and after you register, email contactus@sqlbits.com with the email address you used for registration, and tell them that you want to join the waitlist for my session. They’re looking at adding my workshop on another day – and if they do, the space will instantly get sold out to the people on the waitlist.

Thursday Workshop:
Faster, Cheaper Cloud Databases

You’re managing databases that live in the cloud – whether that’s AWS, Azure, or Google – and you’re getting a lot of questions about slow performance and crazy costs. Your users want everything faster and cheaper, and you’re not finding good answers.

Join me, Brent Ozar, for a day of real-time demos, honest war stories, and practical fixes. I’ll show how I use the First Responder Kit to assess the bottleneck, mitigate as much as I can with index and query tuning, and then write up a business case for management for better future budgeting.

Throughout the day, I’ll include real-life stories from my cloud clients, with names changed to protect the innocent. I’ll explain what worked, what didn’t, and what went up in expensive flames.

I’ll be using different flavours of SQL Server from across the clouds, like Azure SQL Managed Instance and Amazon RDS for SQL Server, to highlight how each has its own nuances, but also lots of similarities.

You’ll learn:

  • How Brent uses the First Responder Kit to assess a cloud SQL Server’s wait types
  • Why each cloud vendor’s VM quirks make performance tuning tricky, and how to adapt
  • How to fix blocking, logging, and CPU issues when the hardware isn’t yours
  • How to write a persuasive performance report for management that fairly lays out where the problem is, and what steps to take next

By the end of the day, you’ll be armed with real-world techniques for keeping your cloud databases faster, cheaper, and more future-proof.

Space is limited, and you can register here. See you in London!


[Video] Office Hours in North Bay, Canada

Videos
2 Comments

Every year, Aaron Bertrand runs a curling tournament up in scenic North Bay where he grew up. I joined in this year, and before the tournament, sat on the shores of Lake Nipissing to answer your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 01:25 MyTeaGotCold: Kendra Little’s recent article on the decline of SQL Server scared me and it matches much of what I’ve been thinking privately. I’m in my late 20s and heavily specialised into SQL Server. Should I be worried?
  • 04:03 sqlknitter: I have a chunky 17TB database, largest table 2TB (1TB IDX). CheckDB takes days to run (5 days in, 27% complete per sp_whoisactive), with a weekly reboot. How can I tell which tables have actually been checked, before I make the decision to refactor the job to use checktable?
  • 06:11 Smart Python: What metrics can I use to determine if my manual failover AG can safely be promoted to automatic failover?
  • 07:31 Always Query With TABLOCKX: I have an app that churns temp tables outside tempdb so often that schema locks are my biggest problem. I can ask devs to not create temp tables this way if I back the reason with stats—but how do I gather stats on something blocked for minutes yet only used for seconds?
  • 08:45 SaintBear: What’s your opinion of the SQL upgrade assessment functionality in SSMS21?
  • 09:08 Mike: Hi Brent! You mentioned you go to bed and get up very early. How and why you started doing this, were there any specific reasons or benefits for you ?
  • 09:52 Dick_Bowen: I restored a sql 2008 db to sql 2022 and the size doubled. What would cause this to happen?
  • 10:40 Venkat: What is your opinion of Uber’s new QueryGPT for converting natural language to SQL queries?
  • 11:32 Vinneet: Is throwing more hardware at DB performance problems more likely in PostgreSQL than In SQL Server? What are your thoughts?
  • 12:29 SilentKnight: What’s your take on standards other than “good to have”. What effort would you expend to enforce standards on an environment that is very inconsistent. e.g. DB locations, physical file names, table names. PROD: 30 srv 300 DBs of similar design (table structures, SPROCs, views)

Tap Tap Tap – Short, Silent SQL Server Tutorials

Videos
6 Comments

One of the things I love about TikTok is that it exposes me to wildly new video ideas like this:

[Embedded TikTok from @tiipititap]

There’s a whole genre of videos in that style – people tapping plastic pointer fingers on a grainy screen with no voiceover. One sub-genre (like the video above) likes to draw your attention to something that seems bad, only to have wild plot twists or last-minute surprises. Other sub-genres purport to “analyze” something completely obvious. I laugh way too hard at these, and the comments are usually hilarious too.

So, I thought to myself, hey, these look relatively easy to film – why not try it with SQL Server?

[Embedded TikTok from @brentozarultd: “Why you should use TOP 100 or below, not TOP 101 or higher”]

[Embedded TikTok from @brentozarultd: “Next title: Why you probably shouldn’t be using NOLOCK”]

I’ve put several of ’em on our LinkedIn page, TikTok channel, YouTube shorts, and Instagram reels so you can follow me wherever you like to get your social media.

My home office filming setup is pretty amusing. I put my phone on a video tripod so that it’s held in place, but I can smoothly turn it from left to right – useful for when I want to run multiple demos in the same video.

Home office filming setup

I hope y’all enjoy these as much as I love making ’em. I giggle to myself whenever I set these up!


Free Webcast: Avoiding Deadlocks by Query Tuning

Company News
2 Comments

Deadlocks used to scare me.

I’d get the alert from a monitoring tool, and I’d look around to see if anyone was watching me. If the coast was clear, I’d… delete the notification.

Deadlocks were a mystery to me because I didn’t know how I was supposed to fix them. Sometimes the same queries could cooperate, and sometimes they couldn’t? The database server decided which one could pass, and which one would fail? What was I supposed to do about any of that? Besides, most of the time it wasn’t a problem – apps retried their queries, or users just ignored error messages and tried the process again.

But eventually, when I became a consultant, I had to confront my ignorance head-on and do something about it because some clients were having serious deadlock issues that they couldn’t ignore. I got help from smart people like Kendra Little, Erik Darling, and Jeremiah Peschka. I solved those client issues, claimed victory, and today deadlocks don’t scare me at all.

You can solve deadlocks in (at least) 3 ways:

  • Having enough indexes to make your queries go fast, but not so many that they cause problems for inserts, updates, & deletes
  • Using the right isolation level for your app’s needs, which typically isn’t the default – it’s RCSI (see the snippet after this list)
  • Or worst case, tuning your queries
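
For that second bullet, turning on RCSI is a single statement – a sketch with a placeholder database name (run it off-hours, since it briefly rolls back open transactions):

    ALTER DATABASE [YourAppDB] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;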

I say worst case because tuning your queries is usually the most labor-intensive option. It requires knowing the queries involved in the deadlocks, being able to understand what about them is causing a problem, and knowing how to rewrite/reorder them in a way that’ll avoid deadlocks.

In this free session on April 29th sponsored by Idera, I’ll show you why your queries need to work through tables in a consistent order while minimizing the number of times you touch a table. I’ll use a real-world deadlock example and rewrite the query to resolve the issue. Whether you’re a DBA or developer, you’ll walk away with practical strategies you can apply immediately to make your queries deadlock-resistant and your applications more resilient.

If you can’t make it live, Idera will email registrants the recording later to watch on their own time. See you there!


Free Webcast Tomorrow: Fundamentals of Columnstore

Company News
3 Comments

It’s time for SQL Server Spring Training! I’m teaching 60-90 minutes of my most popular classes for free over the next few months.

Tomorrow I’m teaching Fundamentals of Columnstore, and we’re going to be covering the internals of how SQL Server and Azure SQL DB store data in columnstore indexes. You’ll learn why people think it’s an index on every column, but in reality, it’s something wildly different.

There’s a lot more to cover in columnstore, like how columnstore indexes are rebuilt, why it’s so important to do index maintenance on columnstore indexes (unlike rowstore), the differences between clustered and nonclustered columnstore indexes, and why partitioning makes so much sense for columnstore indexes. We cover those in our full Fundamentals of Columnstore class.

If you’ve never attended a formal SQL Server training class before, let’s fix that and get you started for free tomorrow morning. Register now and I’ll see you at 11AM Eastern, 10AM Central, 8AM Pacific. The webcast will not be recorded – be there, be square, or buy the full class for $89.



[Video] Office Hours in My Vegas Backyard

Videos
0 Comments

I’m doing a lot of travel this month, but I took a break in my Vegas backyard to go through the top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 00:29 Nick H: I’m a new starter at an organisation who have lots of rules, one of which is that the use of Intellisense is forbidden. I asked why and was told that this was a decision taken “a long time ago” and is due to some apparent negative impact on servers. Is there any truth in this?
  • 01:28 Adrian: Hi Brent, have you ever recommended Capture Data Change or Change Tracking to your customers? If yes, what for? What is your opinion about these features? I have an internal team pushing for this to replicate the data to a different system and I am not sure it is a good idea.
  • 02:18 Frozt: What can you say about SQL 2025? do you recommend upgrading from 2016 to 2025 or you recommend 2022 for stability? Any SQL 2025 gotchas we should think about and are you excited with something for SQL 2025? Thank you
  • 03:43 Snowman: I recently learned that you can only take copy-only backups on AG secondaries. What is the use case for this? I don’t see how it reduces my need to backup the primary.
  • 04:14 Dopinder: Is it worth it to be a dual stack DBA (MSSQL / Postgres)? How do you keep them separate in your brain?
  • 05:18 sqlpadawan: Came here after your post about your sexuality. It takes guts to bring that subject publicly and receive the virtual stones. Q: How was “that” conversation with your ex-partner? Are you still close/friends? Have you lost clients/followers/friends after sharing that in public?
  • 06:12 NikTrs: When corruption happens on non clustered indexes or non important tables which can be restored from backups, is it safe to drop the objects and recreating them to remove corruption instead of restoring the database from backup
  • 07:29 Sysadmim: Is there a way to remove cached statistics on temp tables?Trying to solve an issue with skewed row-count on temp table by 1 odd-out execution resulting in massive scans, I’m not able to change the stored procedure to add an option recompile or update stats on the said temp table.
  • 07:46 Trapped in the cloud: Do you still distribute “SQL Server Performance Tuning in Google Compute Engine”? It’s not in the First Responder Kit, your website, or Google’s.
  • 08:33 Uncle: In terms of SQL DB Backups, is it better to rely on Enterprise Backup solution like Networker or Veritas (knowing it requires sysadmin, and the DBA does not have access to it) or have conventional (Ola’s script) backups to a network share and enterprise backup the shared drive?
  • 09:40 MyTeaGotCold: I’ve never seen anyone talk about Contained Availability Groups since they were announced. Were they any good?
  • 10:37 RoJo: Can you discuss your current guidelines for Antivirus on the SQL server. Some say don’t do it at all and harden access to it so nothing gets in. Agents can make SQL slow. Where is your balance. Trying to solve security vs performance. (remembering CloudStrike-shudder)
  • 11:39 Mr. Zbd409: Is there any reason to keep data forever? With cheap storage (archive tier azure) , I feel like this is a consideration for businesses. is there any reason to delete data? Is it cheaper to store archive data in json rather than proprietary sql server backup file?
  • 12:30 Bandhu: What are your pros / cons for implementing multi – tenancy at the server level (each tenant has their own database server)? Do you see this much in the field?
  • 13:00 30-years-till-retirement: I’m about 10 years into my data career. I’ve heard some SQL MVPs on a podcast say it takes 3-6 months to find a new job. I’d assume it was 6 weeks at most. Is this a case of me having low standards for future employers? In case it wasn’t clear, I’m not a SQL MVP!
  • 14:45 Ricardo: We use chromebooks instead of windows laptops. Whats the best way to administer cloud based (mostly azure) sql servers?
  • 15:02 What’s the Problem You’re Trying to Solve?: “My friend” recently ran sp_BlitzIndex on a table with 1.5 M rows, result: 450 indexes, mostly “IN-MEMORY/HYPOTHETICAL” with no usage. Also 5,200 Statistics, mostly Last Updated 5 years ago. Both indexes and stats are named with “_dta” prefix. Should he be worried?
  • 16:32 Mike: How do you treat people who do not use “sp_” prefix when creating their stored procedures ? Is it OK for you, or you kind of strict and think everything should be with a prefix ?

Fundamentals Week 2025 is Coming This Month! Get Your Tickets Now.

Conferences and Classes
0 Comments

You’ve attended some of my free online streams, read my blog posts, and you use the First Responder Kit. You enjoy my laid-back, humorous approach to sharing what I’ve learned over the years.

You love live classes. You’ve tried watching recordings or stepping through demos yourself, but…you just can’t block out the time, and you can’t stay motivated while you’re watching a recording. You like the fast-paced energy of seeing me live, asking me questions, and seeing me respond to your chats.

You hate free conferences that feel like Zoom meetings. You’ve tried attending a few online events, but they felt just like sitting through one meeting after another. They didn’t dive deeply into any one subject – they just hopped around from one topic to another, and many of those topics just weren’t relevant to you. They were good for free stuff, but…

You’re ready for Fundamentals Week! You’ll spend the week with me learning about indexes, query tuning, columnstore, how I use the First Responder Kit, and more.

The conference dates are April 22-25, 2025, and classes are 11AM-5PM Eastern, 8AM-2PM Pacific. Here’s what we’ll cover:

Register now for $1,595. See you in class!


What’s New in SQL Server 2025

Humor
26 Comments

This year’s release of Microsoft’s enterprise-grade relational database focuses on the most highly requested features from customers. Highlights include:

Microsoft’s Corporate Vice President of Azure Data, Rohan Kumar, explained in an interview for the Data Exposed podcast, “For years, we’ve been telling folks to post their feature requests at feedback.azure.com, and to upvote the features they found most compelling. We’ve heard their requests loud and clear, and we’re proud to announce that SQL Server 2025 delivers on what customers have really been asking for.”

This approach marks quite the departure from the last few versions, in which the flagship features like Machine Learning Services, Ledger (blockchain-ish history), and Big Data Clusters have suffered from dramatic adoption problems. “We learned from our mistakes, and we know database engine competition is tougher than ever,” Rohan explained in the podcast. “We know we can’t afford to just slap AI on it and call it a day.”

When asked about pricing and feature limitations on Standard Edition, Rohan couldn’t promise anything, but he did say, “We’ve heard from customers loud and clear that SQL Server price increases have to stop, and that today’s hardware has outpaced our restrictions on Standard Edition. We’ll have information closer to release, and I think customers will be very excited about our willingness to compete.”


Free Webcast Tomorrow: Fundamentals of TempDB

Company News
0 Comments

It’s time for SQL Server Spring Training! I’m teaching 60-90 minutes of my most popular classes for free over the next few months.

Tomorrow I’m teaching Fundamentals of TempDB, and we’re going to be covering temp table statistics, execution plans, and plan reuse.

There’s a lot more to cover in TempDB, like table variables, memory-optimized table variables, cursors, index builds, AG statistics, and provisioning, but tomorrow’s focus will just be temp tables.

If you’ve never attended a formal SQL Server training class before, let’s fix that and get you started for free tomorrow morning. Register now and I’ll see you at 11AM Eastern, 10AM Central, 8AM Pacific. The webcast will not be recorded – be there, be square, or buy the full class for $89.



[Video] Office Hours: Back in the Home Office

Videos
1 Comment

I’m back home after a 2-week Iceland vacation. Let’s catch up with your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 02:04 Suresh: Hi Brent . Question about 128 GB memory limit for SQL server standard edition.If I set ‘Max SQL Server Memory’ to 228 GB on a 256 GB VM , SQL will use the additional 100 GB for anything other than buffer pool . Is that correct assumption ?
  • 03:41 Jens Herrmann: Two 20-year-old databases without any clustered indexes. The key columns are uniqueidentifier and some int columns. A colleague of mine changed all the table key columns to clustered indexes, and everything became noticeably slower. Is it a bad Idea to have a clustered index?
  • 04:52 PAul: A production database is approaching 16TB. Adding another datafile to the PRIMARY filegroup is a solution. Is there any (dis)advantage to creating a new Filegroup on a different drive and moving large tables/indexes to it?
  • 07:07 TokTik: What are your top recommended books on SQL server performance tuning for someone planning to explore this field for the first time?
  • 07:51 Gary: Hi B. We have a Prod db that periodically experiences hundreds of long LCK_M_X waits and blocking on the same proc. Oddly, every one of those waiting sessions and the lead blocker is a COMPILE wait on the proc’s objectid. No RECOMPILE used anywhere. Why is sql trying to compile?
  • 09:22 MyTeaGotCold: My devs use temporal tables a lot. Can you tell me any horror stories about them? I hit a SQL Server bug with system-versioned tables today and I’m scared.
  • 10:30 Bandhu: What’s your faith level in third party SQL backup software? Considering it to get table level restores since Microsoft won’t add this any time soon.
  • 10:53 Archivist: I’m trying to make backups faster by making cold data read only. How do I decide if I should make an RO database, make archive tables in an RO filegroup, or partition tables and make part of them RO?
  • 12:10 neil: A frequently used dashboard randomly chooses horrible plan and kills production server. I tried forcing plan with Query Store. Works for a while then fails, says the index it’s using no longer exists? But I didn’t drop any indexes that I know of. Not sure where to look next.
  • 13:33 thetechguy23: The role of a DBA today seems very blurred, at least at my organisation. As well as “keeping the lights on” we’re often asked to develop integrations exchanging data using REST API’s and then own that process. Data comes from the database, right? Where do you stand on that?

Query Exercise Answer: Finding Email Addresses

For this week’s Query Exercise, your mission had two parts:

  1. Do a first pass search through the Users table looking for columns where people had snuck in email addresses where they didn’t belong
  2. Write a more high-quality query to identify exactly which rows we needed to handle

I wrote the challenge that way because in database work, we often write a quick-and-dirty proof of concept query to get a rough idea of what we’re dealing with. Sure, in a perfect world, we’d write a query that could capture any email address, but first let’s just get a rough idea if there’s anything that looks like an email address in the Location column:
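
    /* A sketch of that rough first pass – just look for an @ followed by .com: */
    SELECT TOP 100 Id, DisplayName, Location
    FROM dbo.Users
    WHERE Location LIKE '%@%.com%';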

That runs nearly instantly and returns results – some of which aren’t actually email addresses, like “@bloginfoguide.com”, but there are more than enough genuine email addresses in that list to confirm that we’ve got a real problem. (And that’s only .com emails!)

If we need to check all string columns, we could concatenate them together into one big long string, and check to see if that big long string has @ and .com in it. However, that won’t tell us which columns are problematic, and that’s going to be important if we need to do this kind of research at real-life scale – across not just all the string columns in one table, but all the tables in the database, too.

When we hand our analysis over to someone (because it’s usually a dev manager or compliance officer who needs it), they probably want it in a format like this:

PII report proof of concept

That’s where dynamic SQL and metadata come in.

For challenges like this, it’s tough to beat dynamic SQL. (Trust me, I’ve tried to beat it repeatedly. I put on boxing gloves and punch my screen all the time when I’m working with dynamic SQL, but I’m still the one who ends up huddled on the floor, crying. Wait, where were we?)

Let’s start by taking that query we just wrote, but modifying it to produce output similar to what we’re looking for, and let’s build it with dynamic SQL – but not make it metadata-driven just yet:
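
    /* A sketch – the table and column are still hard-coded, but the query
       is now assembled and executed as a string: */
    DECLARE @sql nvarchar(max);

    SET @sql = N'SELECT ''Users'' AS table_name, ''Location'' AS column_name, '
             + N'Location AS column_value '
             + N'FROM dbo.Users '
             + N'WHERE Location LIKE ''%@%.com%'';';

    PRINT @sql;                     /* eyeball the generated query first */
    EXEC sys.sp_executesql @sql;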

That produces this query:

Now, let’s get a little fancier and build this with metadata. We’ll source our object definitions from the system object INFORMATION_SCHEMA.COLUMNS:

INFORMATION_SCHEMA.COLUMNS

For my quick-and-dirty purposes, I’m going to filter for schema <> ‘sys’, because those are typically system objects. I’m also going to filter for only string columns (varchar and nvarchar), but if you’ve gotten fancy with user-defined data types, you’ll have some additional work. I trust you’ll be smart enough to figure that part out, since you’re the one who got fancy there. Anyhoo, back in my environment, this gives me enough to metadata-drive my dynamic SQL:
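
    /* A sketch of that metadata query, with the filters described above: */
    SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_SCHEMA <> 'sys'
      AND DATA_TYPE IN ('varchar', 'nvarchar')
    ORDER BY TABLE_NAME, COLUMN_NAME;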

So now let’s put those two things together – our dynamic SQL peanut butter and our metadata chocolate:
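
    /* A sketch – the metadata query now generates one search query per string column: */
    SELECT N'SELECT ''' + TABLE_NAME + N''' AS table_name, '''
         + COLUMN_NAME + N''' AS column_name, '
         + QUOTENAME(COLUMN_NAME) + N' AS column_value FROM '
         + QUOTENAME(TABLE_SCHEMA) + N'.' + QUOTENAME(TABLE_NAME)
         + N' WHERE ' + QUOTENAME(COLUMN_NAME) + N' LIKE ''%@%.com%'';' AS query_to_run
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_SCHEMA <> 'sys'
      AND DATA_TYPE IN ('varchar', 'nvarchar');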

When executed, that returns the queries we need to run against our database:

Metadata-driven queries

I like building my dynamic SQL gradually like this and testing it first manually to see if the queries are even correct. Given my typing skills and my use of alcohol, that’s not always the case, but here, these queries actually compile and run, woohoo! So now let’s turn them into dynamic SQL.

If I just built and executed them one by one, then our SSMS results tab would be chock full of lots of different result sets. I want one result set that I can copy/paste into Excel and hand to the project manager who will be dealing with this. (Ideally, I’d rather just give them the metadata-driven query so they can run it, and I won’t be responsible for copy/pasting PII into Excel, but managers sometimes just wave their hands and say, “Enough already, just put it in Excel and email it to me.”)

So I need it in one result set – to do that, I’m going to stage the data into a table. You could use a temp table for this, but I’d rather have it be restartable in case my dynamic SQL hits a snag, or in case I want to check its progress from another session.

So our script gets a little more complicated again:
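
    /* A sketch, staging results in a hypothetical permanent table so the work
       is restartable and visible from other sessions: */
    CREATE TABLE dbo.PII_SearchResults (
        table_name   sysname       NOT NULL,
        column_name  sysname       NOT NULL,
        column_value nvarchar(max) NULL,
        found_at     datetime2     NOT NULL DEFAULT SYSDATETIME()
    );
    GO

    /* Generate INSERT statements instead of bare SELECTs: */
    SELECT N'INSERT INTO dbo.PII_SearchResults (table_name, column_name, column_value) '
         + N'SELECT ''' + TABLE_NAME + N''', ''' + COLUMN_NAME + N''', '
         + QUOTENAME(COLUMN_NAME) + N' FROM '
         + QUOTENAME(TABLE_SCHEMA) + N'.' + QUOTENAME(TABLE_NAME)
         + N' WHERE ' + QUOTENAME(COLUMN_NAME) + N' LIKE ''%@%.com%'';' AS query_to_run
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_SCHEMA <> 'sys'
      AND DATA_TYPE IN ('varchar', 'nvarchar');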

Test that, and it runs. Now, it’s time to think about actually executing it.

Executing this sucks in the real world.

In small-world situations like the Users table, I wouldn’t even bother with dynamic SQL. I’d simply take the output of the above query, and run it manually in SSMS. It’d finish in a few seconds, populating the table, and I’d take that output and run with it. However, the larger your data set becomes, the more problematic this is going to be.

You could build a cursor and loop over the contents of INFORMATION_SCHEMA.COLUMNS, executing the dynamic queries one at a time, one per column. However, you don’t wanna do that because I’ve hit situations where read-only metadata queries, even with READ UNCOMMITTED and NOLOCK hints, have actually either been blocked by other queries, or been the cause of blocking outages. (My favorite was when a client had a production outage because someone was running sp_BlitzIndex at the same time someone else was creating a columnstore index.) I wouldn’t want your query to be the cause of the next production outage.

So if you want to take the cursor approach, dump the contents of INFORMATION_SCHEMA.COLUMNS into a user table or temp table first, and then iterate over that, using Aaron Bertrand’s tips on cursor tuning.

Another consideration is query performance. The way we’ve written this, we’re doing multiple passes against the same table over and over. This technique works great if you have a lot of nonclustered indexes on your string columns. However, if you don’t, or if you purposely want to do a table scan instead, AND if you have hardly any PII, this is going to suck because you’re going to do a full table scan for each column you want to check. In that situation, it’d be better to write your PII-checking query differently, more like this pseudocode:
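
    /* Pseudocode sketch – one pass over the table, OR'ing every string column
       together. The column list is illustrative; generate it from metadata: */
    SELECT *
    FROM dbo.Users
    WHERE Location   LIKE '%@%.com%'
       OR AboutMe    LIKE '%@%.com%'
       OR WebsiteUrl LIKE '%@%.com%';
    /* ...and so on, one predicate per string column */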

That would require a different result-capture method, though, since it doesn’t identify which column is the one with the PII, and returns all columns. However, if you’ve got a lot of string columns and no indexes, this query might execute faster (albeit harder to build).

That brings me to my last consideration: the goal of this Query Exercise wasn’t to give you a ready-to-go PII analysis solution, because after all, we’re only checking for email addresses here! We haven’t even gone into things like social security numbers or dates of birth or whatever.

Let’s zoom out.

Your original query challenge had two parts:

  1. Do a first pass search through the Users table looking for columns where people had snuck in email addresses where they didn’t belong
  2. Write a more high-quality query to identify exactly which rows we needed to handle

And we’ve already faced a lot of complexity with just working on #1, and only for email addresses! That’s why in the real world, when clients ask me about this kind of thing, I walk them through a quick-and-dirty check like this:
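
    /* A sketch – the same rough LIKE check from the first pass above: */
    SELECT TOP 100 Id, DisplayName, Location
    FROM dbo.Users
    WHERE Location LIKE '%@%.com%';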

And if email addresses show up, our proof-of-concept check is done. We’ve got a problem: PII is where it isn’t supposed to be, and frankly, IT ALWAYS IS. Users are boundlessly creative in how they sneak PII into systems.

At that point, the project scope just exploded. We’re going to have to identify all of the PII types we’re looking for, and write robust queries to detect that PII hidden in various formats. This work is genuinely hard and not particularly performant, especially when we try to do it in T-SQL, as we covered in the recent regex post.

This is where a dedicated team, and perhaps buying a third party app, make sense. You and I aren’t going to write queries that do a good job detecting all the types of PII, especially when they’re cleverly hidden in columns like Location or masses of text in AboutMe. If our work needs to be legally defensible – and in PII cases, it does – then there’s no substitute for a team with the time to really do this topic justice.

Or, you know, just, uh, sweep it under the rug. Time to go build another feature for our app!


Why Your Azure SQL DB Hyperscale Bill is Higher Than You’d Expect

Azure SQL DB, Hyperscale
3 Comments

tl;dr: if you use Azure SQL DB Hyperscale’s auto-scaling, it’s probably not cutting your bill down because, like SQL Server, Hyperscale doesn’t automatically relinquish memory, and it’ll be up to you to manually manage your own memory if you wanna cut your bill.

Now for the long story.

Microsoft’s Azure SQL DB Hyperscale has an auto-scaling option that’s supposed to automatically scale up and down. It’s perfect for bursty workloads like Have I Been Pwned, a free online service where you can put in your email address and see if your login details have been leaked. (Mine’s been part of 44 data breaches, which is one of the many reasons why I use 1Password to create separate logins for each web site, and sync my logins across all devices, plus manage my one-time passwords.)

Have I Been Pwned’s back end architecture relies on Azure SQL DB Hyperscale for data storage, and uses autoscaling to automatically add more horsepower (like when there’s a big breach to import), and cut horsepower during low-load periods to save money.

However, Stefán J. Sigurðarson discovered why their bill wasn’t going down. It turns out that Hyperscale’s automatic cost savings only kick in when your CPU usage is low and when your memory usage is low, too. That’s where the problem comes in.

Microsoft SQL Server uses all your available memory by default, and doesn’t give it up automatically. I’m simplifying – there are some small guardrails, and SQL Server will back down its usage when other apps fire up and demand memory. But the vast majority of users will look at the vast majority of servers and say, “Whoa, SQL Server sure is using a lot of memory, and it never seems to go down.”

That’s a good thing for well-managed database servers (as I explained in that post above, more than a decade ago) – and that design served Microsoft well for decades. However, it’s a problem for the cloud, especially for cloud services that are supposed to automatically scale up and down based on demand. Azure SQL DB Hyperscale is built with Microsoft SQL Server’s code, and inherits some of those design choices.

Azure SQL DB Hyperscale’s memory documentation says they fixed SQL Server’s problem, saying things like:

Unlike provisioned compute databases, memory from the SQL cache is reclaimed from a serverless database when CPU or active cache utilization is low.

Except as Stefán discovered and documented in his post, that doesn’t appear to be the case. Stefán’s post explains how to work around it by creating a stored procedure to manually down-scale your Hyperscale database based on low demand, and to run it in an Elastic Job. Good stuff. You should subscribe to his blog (button at the top right of his site) because it’s hard to find folks publicly sharing real-world Hyperscale knowledge, especially at scale.
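
The down-scale statement itself is ordinary T-SQL – the hard part is the decision logic in Stefán’s procedure. A sketch, with illustrative names:

    /* Scale this Hyperscale database down to 2 vCores (names are illustrative): */
    ALTER DATABASE [MyHyperscaleDB] MODIFY (SERVICE_OBJECTIVE = 'HS_Gen5_2');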

I do expect Hyperscale’s behavior to change as it gains more public awareness – and over time in general, Microsoft’s strategy for how they scale down (and scale up) will change. That’s just a normal part of any development process, and hopefully they document those changes and share ’em with customers.


[Video] Office Hours: Waiting for the Hot Tub to Fill

Videos
2 Comments

While waiting for my Husavik hot tub to fill up, I took your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 01:28 1440×1080: always on with 2 nodes 1db, readable secondary. I’ve optimized both nodes to a point where the primary can withstand both workloads.i want to downgrade licensing from enterprise to standard. What is the process to switch from AOAG to BAG ?Do i just change the license ?
  • 04:05 Compatability Level 140: Soon after auto-update stats triggers, I get single-row estimates for date ranges starting with yesterday. I thought the modern cardinality estimator was supposed to prevent this major error. What do I need to study?
  • 05:11 Chase: When would having tempdb configured to only have a single file (mdf) be favorable, even if you more than 1 CPU? I was taught to have an mdf per CPU up to 8, then test for balance. A vendor system imports faster using temp tables using only 1 mdf, which makes no sense. Any ideas?
  • 07:00 Manolis: Do you foresee a specific DB technology exploding in popularity due to close alignment with AI?
  • 07:22 MySpecialHoneyPot: Hi, in various cases SQL Server chooses to use the clustered columnstore index where a suitable nonclustered index exists that delivers better performance. Besides index hints, is there a way to push Query Optimizer to not use the clustered indexes?
  • 08:37 green tissue: What breaks when databases are owned by windows users? People always say it’s evil but I have never found a demo or specific example.
  • 09:40 JerseyDBA: I was looking through sp_blitz on a server and found all the DBs have the legacy target recovery time set to 0 not 60. Auto checkpointing isn’t causing issues, any harm in leaving this and only flipping to 60 seconds if I start seeing I/O issues?
  • 10:55 Austin Powers Shagadelic Box: My big server’s CPU never hits 70%, but I see lots of parallelism waits. Will adding more cores help? I’m just checking to see if I understand how SQL server uses CPU.
  • 12:15 James: Hi Brent, In your opinion, would creating triggers to enforce cross database referential integrity be something you would consider, and not fall in the generic realm of ‘bad practice’?
  • 14:08 chris: Do we know what Azure Managed Instance is using “under the hood” for HA/DR? If not Availability Groups is it something proprietary?
  • 15:03 ChompingBits: What are you thoughts on how to work with vendors that don’t follow best practices. Our environment is mostly SQL isntances for vendor software we buy. We run SP_Blitz weekly and see lots of configuration warnings. How do you tell a vendor they don’t know what they’re doing?
  • 16:44 AzureNewb: Do most of your clients use Azure Managed Instance or Azure SQL?
  • 18:21 Contineo: Not all RDS instance types support putting tempdb on the local storage. What would you choose between: – a r5d or m5d instance with tempdb on the local storage but a EBS Bandwidth or 4750Mbps or – a r6i instance without tempdb on the local storage but an EBS Bandwidth of 10Gbps
  • 20:05 SGU: So while I know that “Missing Indexes” are not 100% reliable as to the exactly index to be created, is there a was to determine what query “caused” that missing index recommendation to be made?
  • 20:58 Depthcharge: Hi Brent! I’ve been trying to reduce PAGEIOLATCH_SH on a 25TB production database. Monitoring tool identified a 25TB+/hour of logical reads from 300 exec/hr “Task Manager” commands in the master DB, no associated SQL. Worth investigating, or should I focus on individual queries?

T-SQL Has Regex in SQL Server 2025. Don’t Get Too Excited.

SQL Server 2025, T-SQL
23 Comments

Regular expressions are a way of doing complex string searches. They can be really useful, but they have a reputation: they’re hard to write, hard to read, and they’re even harder to troubleshoot. Once you master ’em, though, they come in handy for very specific situations.

This post isn’t about their complexity, though. This post is about Azure SQL DB & SQL Server 2025’s regex performance.

In this case, regular expressions are like Everclear.

Everclear is a brand of alcohol, and when you read that, you might think, “Sounds interesting, I wonder what it tastes like.” It’s marketed as odorless, flavorless, and colorless. I say “marketed” because if you’ve ever been around it, you will not use terms like odorless or flavorless. To get a rough idea of what it’s like, watch The Unemployed Wine Guy try it.

See, Everclear is 95% pure alcohol by volume. It’s so potent that many states have outlawed it. You know Nevada, the state with gambling, weed, hookers, and assault weapons? Yeah, no Everclear allowed here. Gotta draw the line somewhere.

You’re not supposed to drink Everclear straight; rather, trained professionals are supposed to use it – in very small quantities, in combination with other ingredients.

This? It's just coffee, as far as you need to know.

Let’s give regex a taste.

We’ll use the Stack Overflow 2010 database (the small one) in Azure SQL DB. The Users table on it has about 9M rows amounting to about 1GB in size.

Say we’re trying to find out if any sneaky users have been putting PII in places that it shouldn’t belong, like this week’s Query Exercise challenge. We want to find all Users.Location values where the user put in something resembling an email address. We’ll put an index on the Location column to give Azure SQL DB a fighting chance, and to give it statistics so it can better estimate how many rows might match a predicate, then run our query:
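
    /* A sketch of the setup and query described above – the index name and
       regex pattern are my assumptions: */
    CREATE INDEX Location ON dbo.Users (Location);
    GO

    SELECT TOP 100 *
    FROM dbo.Users
    WHERE REGEXP_LIKE(Location, '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.com');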

The resulting execution plan is really disappointing:

Regex query plan doing clustered index scan

SQL Server could have scanned the Location index rather than the full table, but it chose not to.

Update 2025/12/02 – Good news! SQL Server 2025 RTM fixes half of the problem here. In the actual query plan, it still scans the table rather than touching the index, but at least the CPU usage is waaaay down.
SQL Server 2025 RTM

I blogged more about SQL Server 2025’s implementation (and its remaining problems) here.

Why’s the performance taste so bad?

Reader: “It’s because you asked for SELECT *, and your index only has the Location column.” Not so fast – I only asked for 100 rows, which means that Azure SQL DB could have scanned the index to find the first 100 matching rows, and then done key lookups to fetch the columns, and it would have been faster. Let’s prove it:
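
    /* A sketch – the same query, but forcing the nonclustered index: */
    SET STATISTICS IO ON;

    SELECT TOP 100 *
    FROM dbo.Users WITH (INDEX = Location)
    WHERE REGEXP_LIKE(Location, '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.com');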

And the abridged version of STATISTICS IO shows that using the index would have meant way, way fewer page reads:

And the query plan can indeed happily use the index. The top plan is the default, the bottom plan is with my index hint:

Default plan on top, index hint on the bottom

Reader: “So why is Azure SQL DB avoiding the index? Is the tipping point involved?” If Azure SQL DB believes a lot of rows will match your query’s predicate, it can veer towards clustered index usage rather than nonclustered indexes. However, look at those plans again: they can’t produce more than 100 rows! My query is specifically asking for the first 100 rows. There’s no sort. Just gimme the first 100 rows that match. I absolutely guarantee that this query could not produce more than 100 key lookups.

Reader: “Then why is Azure SQL DB avoiding the index then?” Well, when Microsoft coded the row estimates for regex, they made some unbelievably drunken decisions, perhaps involving chugging straight Everclear. Let’s run a different query to expose what’s happening:
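
    /* A sketch – drop the TOP so the filter's row estimate is front and center,
       and check the estimated execution plan instead of running it: */
    SELECT *
    FROM dbo.Users
    WHERE REGEXP_LIKE(Location, '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.com');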

The number to pay attention to here is the estimated number of rows that will match. Hey, Azure SQL DB, how many rows in our Location column will match this regex search?

Bad estimate

Azure SQL DB believes that 2,675,250 rows will match – when only 68 do. It’s off by a factor of 39,342X!

Performance is bad because Microsoft got lazy – again.

That 2,675,250 row estimate will be familiar to those of you who shelled out money for my Fundamentals of Query Tuning class. That number just happens to be exactly 30% of the row count of the Users table.

When Microsoft coded the row estimates for their fancy new regex functions – like many other kinds of search predicates – they simply hard-coded it and called it a day. No matter what your regexp_like query is looking for, Azure SQL DB guesses that 30% of the rows match. Sometimes, that estimate is fine – but most of the time, it’s not.

This is the kind of performance issue that doesn’t seem bad at small data sizes, but as your data grows, it –

wait a second

hang on

Our table only has 9 million rows, and it’s less than 1GB in size, and the regexp_like takes sixty seconds of CPU time!

High CPU time

That’s… terrifying. Sixty seconds of CPU time to query a 1GB table. That isn’t gonna scale, full stop. I mean, it’s fine for one-off utility queries on really small amounts of data, like if you’ve filtered the rowset down to a manageable number (like, 1,000 rows), but nothing you’d want to put into production on an OLTP system.

Plain ol’ queries are way better.

If you only need to find out if PII is getting out there, try a regular like query instead:
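
    /* A sketch of the plain LIKE version: */
    SELECT TOP 100 *
    FROM dbo.Users
    WHERE Location LIKE '%@%.com%';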

Reader: “Whoa Brent, you can’t start a like search with a leading wildcard! That’s bad!” Seriously, you gotta shut your pie hole every now and then, or at least go buy my Fundamentals classes, because you’re in for a big surprise. That query doesn’t have an index hint, and yet, Azure SQL DB is smart enough to use the index:

LIKE query plan uses the index

And it does fewer logical reads, and runs in just a couple seconds:

Are the results the same? No – but it’s enough to solve the business problem and tell us that yes, people are indeed using email addresses in their Location columns. (I’m not putting those here on the blog, but you’re welcome to run the query yourself to see the results.)

That’s why I say regex is like Everclear.

Everclear isn’t inherently bad or evil – at least, not any more than gambling, weed, hookers, and assault weapons. (You hear me, Nevada?) You just have to be really careful when you use it, and understand what you’re in for. It shouldn’t be your first resort on a typical Saturday night – try other solutions like gin & tonic instead.

Similarly, regex in Microsoft databases can be fine, too – in combination with other ingredients. It shouldn’t be your first resort, and it certainly shouldn’t be the only thing in your where clause or joins. If you have a bad experience with it, your first resort should be to take it out, and try an alternative solution.

And whatever you do, don’t use regex and Everclear together.

Update 2025-07-05: Louis Davidson experimented with the new ASSUME_FIXED_MIN_SELECTIVITY_FOR_REGEXP and ASSUME_FIXED_MAX_SELECTIVITY_FOR_REGEXP query hints. It doesn’t improve the performance of queries like this blog post, but if you’re going to use REGEXP_LIKE in your code, they could be useful hints for tuning other queries.


Query Exercise: Looking for Email Addresses

Query Exercises
12 Comments
Brent Ozar reading
“They put their password WHERE?!?”

Personally identifiable information (PII) is freakin’ everywhere.

When companies first start looking to identify and lock down their data, they think it’s going to be as easy as identifying common columns like EmailAddress, DateOfBirth, SocialSecurityNumber, and so forth. They think, “We’ll just encrypt those columns and we’ll be fine.”

Ho ho, my sweet summer child. No, that’s not nearly enough.

Your users are completely diabolical. If you give a user a text box, they will find amazingly creative ways to stuff your database chock full of PII that you didn’t ask for, and you certainly don’t want. One of my favorite client examples was a help desk ticketing system where the support team routinely logged things like usernames, passwords, and other PII mixed in with their case notes. (Example: “Person needs help resetting their account, but can’t find their account. Tried searching under Jane Doe, Doe Jane, 123 Main Street, even tried their DOB of 1/2/1999, but I can’t find them, even though their credit card is getting billed under 5432-1234-9876-4567.”)

So when you start the journey of making your database safer by identifying and handling PII, it’s up to you to proactively dig through your database looking for this stuff. This kind of research work is often handed to database administrators because they’re trusted to look everywhere in the database, in every row and column. Plus, doing this kind of search is more efficient inside the database itself rather than on the app side – since it involves looking at every row and column.

This week, you’re that DBA, assigned that project.

Take any Stack Overflow database and examine the Users table. To keep things simple, I’m only going to ask you to look for email addresses.

This week’s challenge is a two-parter:

  1. Write a one-time query to check the Users table for email addresses. Do we have a problem with users putting email addresses where they don’t belong, and if so, what columns do we need to dig deeper into?
  2. Assuming we have a problem, write a query to find exactly which rows we need to address. Perhaps we’ll send them emails (ha ha ho ho) to log in and correct their profile, but we’re not actually going to modify their data.

Put your queries in a GitHub Gist and include the links in your comments. Check out the solutions from other folks, compare and contrast your work, and next week I’ll circle back with my answers & thoughts. Have fun!


Free Webcast Tomorrow: Fundamentals of Query Tuning

Company News
0 Comments

It’s time for SQL Server Spring Training! I’m teaching 60-90 minutes of my most popular classes for free over the next few months.

Tomorrow I’m doing Fundamentals of Query Tuning. It’s for folks who are comfortable writing queries to get the data they want, but wildly uncomfortable reading query plans and trying to explain why queries aren’t going faster. If you look at a query plan and think that the percentages shown mean anything, then it’s time for you to get the fundamentals under your belt.

That’s part of the fun of my Fundamentals classes: folks often think, “I’ve been doing this database work for years – I know all those fundamentals.” But when they attend the Fundamentals classes, they’re surprised by how a lot of their basic assumptions were incorrect.

It’s not your fault! You’ve never attended a formal SQL Server training class before. Let’s fix that and get you started for free tomorrow morning. Register now and I’ll see you at 11AM Eastern, 10AM Central, 8AM Pacific. The webcast will not be recorded – be there, be square, or buy the full class for $89.



What’s Coming in SQL Server 2025, DMV Edition

SQL Server 2025
5 Comments

As we approach the first public releases of SQL Server 2025, it’s fun to start spelunking through things Azure SQL DB has today – announced and unannounced – that aren’t yet present in the boxed product version of SQL Server.

Microsoft staff used to say that Azure SQL DB is “the next version” of the boxed product, meaning that Microsoft tested, tweaked, and proved new features in the cloud while they could still rapidly iterate over code, shipping updates to make the product more reliable and scalable. In practice, that’s not entirely true: some things never leave the cloud, and some things actually ship to SQL Server long before they’re available up in Azure SQL DB.

Today, we’re going to go through the documented & undocumented system objects that are available up in the cloud as of this writing (March 2025), but keep in mind that some of these may be cloud-only. I’m just dumping out the list of Azure SQL DB objects that aren’t present in SQL Server 2022 CU17.
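(There’s no magic to building a diff like that, by the way – run something like this sketch in each environment, save the output, and compare the two lists with your diff tool of choice:)

    /* Run on both Azure SQL DB and SQL Server 2022, then diff the results
       to find objects that only exist in one of them. */
    SELECT o.name, o.type_desc
    FROM sys.all_objects AS o
    WHERE o.is_ms_shipped = 1
    ORDER BY o.name;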

New Configurations

This is stuff you’d set with sp_configure. I’m going to use a screenshot here rather than text results because I think you’re gonna wanna see all of the columns:

[Screenshot: the new configuration options]

Zooming in for folks reading via email, or too lazy to click on the image to see the full size:

ENHANCE: [zoomed-in screenshot of the new configuration options]

The “availability group commit time (ms)” reminds me of Postgres’s commit_delay option that lets you group transactions together to reduce writes to disk. In theory, if you can stage transactions together in groups of, say, 10 milliseconds, then you can get one big write to disk instead of a bunch of tiny chatty writes that take forever. I can see how that’d improve AG communication by a lot! I’ve worked with storage vendors who wanted a dial that would increase the size of SQL Server’s batched log file writes, and this would have a similar effect.

The drawback is that if you make transactions wait for, say, 10 milliseconds to batch their writes together, and if the server is bored, then you’ve just added 10 milliseconds to your transaction for no reason. This only makes sense on busy servers.
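The option is undocumented, and it may never leave the cloud – but if it does ship in the boxed product, I’d expect it to be set like any other advanced sp_configure option. A purely hypothetical sketch:

    /* Hypothetical: this option only shows up in Azure SQL DB today.
       Shown for illustration - don't expect it to work on-premises yet. */
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'availability group commit time (ms)', 10; -- wait up to 10ms to group commits
    RECONFIGURE;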

New Database-Scoped Configurations

  • READABLE_SECONDARY_TEMPORARY_STATS_AUTO_CREATE – like its sibling below, the name sounds fairly straightforward. We don’t have these today in SQL Server, and they’d affect AGs. Turning this off might make query performance easier to troubleshoot, since you wouldn’t have to worry about different plans on different replicas – but on the flip side, performance would just be… consistently bad.
  • READABLE_SECONDARY_TEMPORARY_STATS_AUTO_UPDATE
  • OPTIMIZED_SP_EXECUTESQL – documented, from Books Online, “When OPTIMIZED_SP_EXECUTESQL is ON, the first execution of sp_executesql compiles and inserts its compiled plan into the plan cache. Other sessions abort waiting on the compile lock and reuse the plan once it becomes available. This allows sp_executesql to behave like objects such as stored procedures and triggers from a compilation perspective.”
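OPTIMIZED_SP_EXECUTESQL is documented, and all database-scoped configurations share the same syntax, so enabling it looks like this. (The two READABLE_SECONDARY options above aren’t documented yet, so I’m only assuming they’d be set the same way.)

    -- Documented syntax:
    ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZED_SP_EXECUTESQL = ON;

    -- See which database-scoped configurations your database exposes:
    SELECT name, value
    FROM sys.database_scoped_configurations
    ORDER BY name;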

New Tables and Views

I’m skipping some that have been publicly documented before, or are part of Azure services that wouldn’t make sense to include in SQL Server.

  • sys.change_streams_destination_type – guessing this has to do with the private preview for Change Event Streaming because the only entry in this table is AzureEventHub.
  • sys.change_streams_encoding – only options are JsonPretty and Binary.
  • sys.change_streams_partition_scheme – only options are Default, StreamGroup, Object, and Column.
  • sys.column_master_key_definitions
  • sys.database_connection_stats_ex – been in Azure for a couple of years, and would be pretty nifty to have on-premises, but the data would be hard to capture accurately there, since it includes connection failure counts. If clients can’t connect to SQL Server, it’s not like we’d know about it in a way that lets us increment this counter.
  • sys.devops_database_access – this and the next couple are security-related, probably for some Azure DevOps product.
  • sys.devops_database_principals
  • sys.devops_principals
  • sys.dm_db_logical_index_corruptions – this is interesting in light of some new messages that we’ll see later in the post. It appears that Azure SQL DB is logging cases where nonclustered index contents are bad, or don’t match the clustered index. In the past, this has required manual intervention to fix the index – typically by dropping & recreating the entire index. Might Microsoft automatically correct just the corrupt portions of the index, similar to how Automatic Page Repair works?
  • sys.dm_db_objects_impacted_on_version_change – warns if an object is going to be impacted by a major release upgrade in Azure SQL DB.
  • sys.dm_db_wait_stats – wait stats have always been tracked at the server level for SQL Server, and at the database level for Azure SQL DB. I would love to get database-level stats in the boxed product, but I would be surprised if that ever trickled down.
  • sys.dm_db_xtp_undeploy_status – interesting because, since its release, In-Memory OLTP has been what I call a DDLTD: a data definition language transmitted disease. Once you turn on Hekaton, you can’t turn it off. This DMV has columns for database_id, deployment_state, hk_undeploy_lsn, and start_of_log_lsn, leading me to think we might someday be able to turn it back off.
  • sys.dm_exec_ce_feedback_cache – interesting, with columns for database_id, fingerprint, feedback, observed_count, and state. Might we be moving towards a world where Microsoft adapts the database’s cardinality estimation level separately from the compatibility level?
  • sys.dm_io_network_traffic_stats – interesting in the cloud because it shows send & receive data for these categories: BlobAccess_Default, BlobAccess_DiffBackup, BlobAccess_FullBackup, BlobAccess_LogBackup, BlobAccess_XEL, FCB_BackupRestore, FCB_UserIo, RemoteFCB, TDS, UCS, UCS_BlockData, UCS_BrokerMirroring, UCS_CloudDB, UCS_DbCopy, UCS_DBMirroring, UCS_EndpointBroker, UCS_FidoTransport, UCS_GlobalTransactions, UCS_Hadr, UCS_TieredStorage. Oddly, it doesn’t track errors, timeouts, or retries – just sends & receives and sizes.
  • sys.dm_os_memory_allocations_filtered – with columns for memory_object_address, sum_bytes, line_num, and source_file.
  • sys.dm_os_parent_block_descriptors – looks like it’s for memory troubleshooting.
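Speaking of sys.dm_db_wait_stats from that list: it’s queryable in Azure SQL DB today, and it looks just like its instance-level sibling, minus the noise from everybody else’s databases:

    /* Azure SQL DB today: wait stats scoped to just this database.
       Same shape as sys.dm_os_wait_stats. */
    SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
    FROM sys.dm_db_wait_stats
    ORDER BY wait_time_ms DESC;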

New Stored Procedures

I didn’t do any digging here, but a couple of them amused me – see my notes inline:

  • sp_change_feed_enable_tables_after_reseed
  • sp_change_feed_is_slo_allowed
  • sp_change_feed_reseed_db_init
  • sp_change_feed_reseed_db_start_replication
  • sp_change_feed_reseed_table
  • sp_change_streams_add_object_to_group
  • sp_change_streams_create_group
  • sp_change_streams_drop_group
  • sp_change_streams_remove_object_in_group
  • sp_cloud_connection_set_sds
  • sp_cloud_extensions_get_federated_users
  • sp_cloud_extensions_update_federated_username
  • sp_cloud_get_synapse_sql_pools
  • sp_cloud_impersonate_user
  • sp_cloud_notify_dw_event
  • sp_cloud_upgrade_in_post_sterling_migration
  • sp_cloud_upgrade_partition_from_v1
  • sp_cslr_delete_entry
  • sp_cslr_upsert_entry
  • sp_detour_top_memory_allocations
  • sp_execute_global_tran
  • sp_get_jobs_database_scoped_credential
  • sp_has_change_feed_permissions
  • sp_help_change_feed_settings
  • sp_help_change_feed_table_groups
  • sp_help_fabric_mirroring
  • sp_help_fabric_mirroring_table
  • sp_help_fabric_mirroring_table_groups
  • sp_maintenance_workflow
  • sp_move_first_fixed_vlf
  • sp_predict_next_activity – clearly, the robots are not only coming for your job, but for you, and they’re trying to predict where they can find you so they can eliminate you.
  • sp_process_detour_memory_allocation_record
  • sp_repair_logical_index_corruption – I had to try executing this one for laughs, and it says “The operation failed because the Index Repair Manager is not available.”
  • sp_rsc_evict_all
  • sp_update_iq_object_version
  • sp_wait_for_database_copy_sync
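If you want to repeat my sp_repair_logical_index_corruption experiment, I just called it bare in an Azure SQL DB – it’s undocumented, so there may well be parameters I didn’t find:

    /* Undocumented, cloud-only. As of March 2025, this fails with:
       "The operation failed because the Index Repair Manager is not available." */
    EXEC sp_repair_logical_index_corruption;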

New Table-Valued Functions

  • sys.dm_db_column_store_redirected_lobs
  • sys.fn_xe_telemetry_blob_target_read_file

New Columns in Existing Objects

This list is a lot bigger, and there’s a lot of nifty, useful stuff in here:

  • sys.all_columns.vector_dimensions – related to the new vector data type, same with the next couple of new columns.
  • sys.all_columns.vector_base_type
  • sys.all_columns.vector_base_type_desc
  • sys.columns.vector_dimensions
  • sys.columns.vector_base_type
  • sys.columns.vector_base_type_desc
  • sys.databases.is_data_lake_replication_enabled
  • sys.databases.is_change_streams_enabled
  • sys.databases.data_lake_log_publishing
  • sys.databases.data_lake_log_publishing_desc
  • sys.databases.is_vorder_enabled – documented, for Fabric.
  • sys.databases.is_optimized_locking_on
  • sys.dm_database_backups.in_retention
  • sys.dm_db_column_store_row_group_operational_stats.row_group_elimination_count
  • sys.dm_exec_query_memory_grants.query_hash
  • sys.dm_exec_query_memory_grants.query_plan_hash
  • sys.dm_exec_query_profiles.row_requalification_count
  • sys.dm_exec_sessions.contained_availability_group_id
  • sys.dm_os_buffer_descriptors.buffer_address
  • sys.dm_os_buffer_descriptors.latch_address
  • sys.dm_os_buffer_descriptors.latch_desc
  • sys.dm_os_memory_allocations.allocation_rva_stack
  • sys.dm_os_memory_cache_counters.extended_properties
  • sys.dm_os_ring_buffers.ring_buffer_group
  • sys.dm_os_ring_buffers.datetime
  • sys.dm_os_schedulers.total_waits_completed
  • sys.dm_os_schedulers.total_enqueued_tasks
  • sys.dm_os_schedulers.total_completed_tasks
  • sys.dm_os_schedulers.spinlock_wait_time_ms
  • sys.dm_os_schedulers.spinlock_max_wait_time_ms
  • sys.dm_os_schedulers.spinlock_wait_count
  • sys.dm_os_threads.description
  • sys.dm_os_workers.cpu_used
  • sys.dm_resource_governor_workload_groups – a bunch of columns in here, skipping this for clarity and because most folks don’t use Resource Governor.
  • sys.dm_tran_database_transactions.database_transaction_first_repl_lsn
  • sys.dm_tran_persistent_version_store_stats.pvs_off_row_page_skipped_oldest_aborted_xdesid
  • sys.fulltext_indexes.data_space_id
  • sys.index_columns.data_clustering_ordinal
  • sys.query_store_plan_forcing_locations.timestamp
  • sys.query_store_plan_forcing_locations.plan_forcing_type
  • sys.query_store_plan_forcing_locations.plan_forcing_type_desc
  • sys.stats.replica_role_id
  • sys.stats.replica_role_desc
  • sys.stats.replica_name
  • sys.system_columns.vector_dimensions
  • sys.system_columns.vector_base_type
  • sys.system_columns.vector_base_type_desc

New Messages

Okay, this section is a doozy. I’m just going to dump the full list and let you spot the gems – I’ll leave the discussions to y’all. There are big hints in here of fun feature stuff.
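(These came out of the same kind of diff as the objects: dump sys.messages on each side and compare. Something like this on each server:)

    /* Run on both servers and diff the output. language_id 1033 = English,
       so you don't get 40+ translated copies of every message. */
    SELECT m.message_id, m.severity, m.text
    FROM sys.messages AS m
    WHERE m.language_id = 1033
    ORDER BY m.message_id;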

  • 370: Unknown object name.
  • 810: The operating system returned error %ls to SQL Server during a %S_MSG at offset %#016I64x in file ‘%ls’. This is intended to raise only in Azure SQL DB. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
  • 837: The operating system returned error %ls to SQL Server during a %S_MSG at offset %#016I64x in file ‘%ls’. This error can be caused when a remote IO fails due to timeout.
  • 900: Failed to start buffer pool extension because the file configuration has changed and is incompatible with the existing one.
  • 1137: The number of database files for Next-Gen General Purpose instances cannot exceed (%d).
  • 1138: Could not allocate a new page for database ‘tempdb’ because that would exceed the limit set by the GROUP_MAX_TEMPDB_DATA_PERCENT parameter of the ‘%ls’ workload group.
  • 2578: Json corruption found in column “%.*ls” in table ‘%ls’ (ID %d). Data row (%d:%d:%d) identified by (%ls) is corrupted. Please update the column with a valid Json.
  • 2819: Plan Cache Hydration application login failed because feature switch PlanCacheHydrationReplay is not enabled on this server
  • 2950: The operation encountered an integer overflow exception.
  • 2951: The operation encountered a bad format exception.
  • 2952: The operation encountered a bad array length exception.
  • 3470: The recovery of the database “%.*ls” could not redo the file size change for file “%.*ls” because of the missing connection.
  • 3614: Batch Statistics: Overall Duration = %f ms, CPU time = %f ms, Total logical reads = %lu, physical reads = %lu, remote reads = %lu, logical writes = %lu, Compile CPU time = %f ms, Login time = %lu ms, Total Waits = %f ms, Log Bytes = %I64u, SQLHandle = %hs
  • 3645: Batch Connectivity Statistics: Packets Read Time = %f ms, Packets Read Time Before Exec = %f ms, Packets Write Time = %f ms, Packets Write Time Before Exec = %f ms, TCP_INFO_v1 = %ls
  • 3646: Thrown by a utility function documented with “fails by throwing an exception that caller MUST catch.” Users should not see this error.
  • 3697: Batch Waits Statistics: %ls
  • 3765: Could not delete native shuffle storage ‘%ls’. See the SQL Server error log for more information.
  • 3766: Cannot drop %S_MSG ‘%.*ls’ using old ‘Table.Index’ syntax, use ‘Index ON Table’ syntax instead.
  • 3767: Could not drop the primary key constraint ‘%.*ls’ because the table has a JSON index.
  • 3768: Could not drop the primary key constraint ‘%.*ls’ because the table has a vector index.
  • 4713: Truncate table failed for ‘%.*ls’ with error number %d
  • 4714: Truncating global temporary tables and transient user tables in TEMPDB is not supported.
  • 7451: Retrieval of the MWC Token for Onelake/DFS access failed with error ‘%ls’.
  • 7452: Initialization of the MWC Token Cache failed. Lock Mode: %.*ls.
  • 7453: Sending of statistics request failed with error ‘%lu’.
  • 7454: SQL internal communication failed with error ‘%ls’.
  • 7687: ALTER TABLE SWITCH statement failed because table ‘%.*ls’ and table ‘%.*ls’ have non-matching fulltext indexes.
  • 7688: ALTER TABLE SWITCH statement failed because there are active crawls on the fulltext index, wait until crawl completed or disable fulltext index.
  • 8069: The incoming tabular data stream (TDS) remote procedure call (RPC) protocol stream is incorrect. Parameter %d (“%.*ls”): JSON data type is not supported in TDS on the server side.
  • 8583: DTC fully qualified domain name (FQDN) resolution setting is ‘%ls’.
  • 8760: Recompile with replay script for long haul testing for Optimization Replay feature
  • 8761: Unsupported virtual column in memo XML.
  • 8762: Table-valued function ‘%s’ is not supported in this version of Synapse Sql.
  • 8763: TABLESAMPLE with ‘ROWS’ clause is unsupported in this version of Synapse SQL.
  • 8764: Error occurred when sending metadata over the network from distribution %d to distribution %d. Please try to run the query again. If the error persists, please contact support.
  • 8765: Error 0x%X occurred when %S_MSG a connection from distribution %d to distribution %d. Please try to run the query again. If the error persists, please contact support.
  • 8766: An error occurred when reading data moved from one distribution to another. Try to run the query again. If the error persists, contact support.
  • 8767: Error occurred when reading metadata from distribution %d. Please try to run the query again. If the error persists, please contact support.
  • 8768: Recursive CTEs are unsupported in this version of Synapse SQL.
  • 8769: Warning: The estimated result set size for this query is %I64u. Consider adding filters to reduce the result size or sending result to a file or external table.
  • 8770: Unsupported expression in Memo XML.
  • 8771: Unsupported operator for resource actuals feedback.
  • 8772: Recompile for Trident Result Set Cache creation
  • 8773: Reserved error message. Should never be issued.
  • 8778: Query execution has been aborted because the ABORT_QUERY_EXECUTION hint was specified.
  • 8779: Failed to initialize in-memory storage for operator metrics feedback.
  • 8780: Schema mismatch during native data movement. If this error persists, contact support.
  • 8781: Query processor could not produce a query plan because of the error in reusing multiconsumer spool in the plan.
  • 8782: Memo deserialization has been completed successfully.
  • 8783: Table metadata could not be found.
  • 8784: Unsupported operator in Memo XML.
  • 8785: NCCI in broadcast mode unsupported error
  • 8786: Query processor could not force query provided in USE PLAN hint because it couldn’t generate an index seek close enough to the specified in the showplan XML.
  • 8787: Updating the row count of a distributed table is not allowed in this version.
  • 8788: REPLICATE_LEFT hint not allowed in this version.
  • 8789: %s Nested CTE internal diagnostic message
  • 8790: One or more non-scalable operation is detected. Check product documentation for details.
  • 8791: %s
  • 8800: [DOP Feedback] Internal error has occurred for: plan hash: %I64x, plan id: %d.
  • 8801: Warning: The FORCE DISTRIBUTED PLAN query hint was applied, but the query was executed using a single node plan. Check product documentation for details.
  • 9052: The database ‘%.*ls’ is in a state that does not allow new log to be generated until a new VLF can be created. Please release the active log by cancelling long running transactions.
  • 9053: The transaction log for database ‘%.*ls’ is full due to the database being on a higher service level objective (SLO) than one of its geo-secondary databases. Please update the SLO of all geo-secondary databases to match the SLO of current database.
  • 9054: Invalid request to move the first VLF while transforming the log file to Hyperscale format.
  • 9130: Warning: Ignoring update of ‘%.*ls’ statistics. ACE statistics are auto updated, UPDATE STATISTICS DDL statement is not supported.
  • 9131: Stats messages cannot be sent because the feature was disabled during SQL startup.
  • 9132: Stat messages cannot be sent because the stats manager is still starting.
  • 9133: Stat messages could not be processed, because the database does not exist or could not be opened.
  • 9134: Stat messages could not be processed, because the message version is not understood by the server.
  • 9135: Stat messages could not be processed, because the message failed to parse.
  • 9136: Stat messages could not be processed, because the table or index has been dropped or modified.
  • 9137: The statement failed because the necessary statistics could not be automatically created for ‘%.*ls’ due to a change in the table schema since the snapshot transaction started. Retry executing the statement.
  • 9138: Stat were updated recenty, skipping another update.
  • 9139: Statistics are too large to be sent to the primary.
  • 9140: The existing statistics and the incoming update have different INCREMENTAL settings, discarding the update.
  • 9141: There was an error updating the statistic. Please contact customer support.
  • 9841: An invalid Unicode escape sequence was specified in the input. Unicode escape sequences should be specified as XXXX or +XXXXXX where X is a .
  • 9842: Invalid Unicode escape character was provided. The cannot be .
  • 9843: Unicode escape character ‘%.*ls’ is invalid. Unicode escape character should be of 1 character length and a code point ranging from U+0020 up to U+007E.
  • 9844: The char/varchar input type uses an unsupported collation. Only a UTF8 collation is supported with char/varchar input type in UNISTR function.
  • 9992: Failed to retrieve full-text drop batch size configuration. A drop batch size of %d will be used instead.
  • 9993: Informational: Failed to create Fulltext index internal table to track failed DocId on table ‘%.*ls’.
  • 9994: Informational: Failed to insert failed Doc Id ‘%d’ from table ‘%ls’ into the internal table, error: %d.
  • 9995: Internal table access error: failed to access the Fulltext failed DocId internal table with HRESULT: 0x%x.
  • 9996: Failed to create DML AFTER trigger to track DML operations on failed Doc Id for table ‘%ls’.
  • 10682: Cannot convert the %S_MSG ‘%.*ls’ to a relational index by using the DROP_EXISTING option.
  • 10988: The %ls limit for the workspace is %d and has been reached.
  • 11448: Transaction with read committed snapshot isolation failed in database ‘%.*ls’ because the object accessed by the statement has been modified by a DDL statement in another concurrent transaction since the start of this snapshot. A concurrent update to metadata can lead to inconsistency if mixed with read committed snapshot isolation.
  • 11573: The value of the %.*ls parameter should be in the range %.*ls.
  • 11747: Value of parameter ‘%.*ls’ is out of expected range.
  • 11805: Operation was aborted because LastBackupName parameter contains forbidden characters (‘\n’ or ‘\r’).
  • 11806: The operation was aborted because the ‘LastBackupName’ parameter contains invalid characters. Only ASCII characters are allowed.
  • 11907: The operation cannot be completed because a connection to the native shuffle storage could not be made. Please try to run the operation again. If the error persists, please contact support.
  • 12137: Optimized Locking is not yet available in this Azure region. Please visit the Microsoft documentation for more details.
  • 12506: The distribution option used is not supported in the CREATE TABLE query.
  • 12507: The index option selected is not supported in a CREATE TABLE AS SELECT query.
  • 12508: Cross-database table clone is not supported.
  • 12509: The user either lacks required permissions on the source or target database, schema, or table, otherwise one or more does not exist.
  • 12510: Feature ‘%.*ls’ is not supported by table clone.
  • 12511: Database Clone is not enabled for this instance.
  • 12512: Internal error. Encounter unexpected error while cloning trident dw database.
  • 12513: An internal error occurred while cloning the specified table.
  • 12514: The specified point-in-time is invalid. The timestamp must not be after the current system time.
  • 12515: The specified point-in-time is invalid. The timestamp must not be before the object was created.
  • 12516: The specified point-in-time is invalid. The timestamp must not be before the object was last altered.
  • 12517: The specified point-in-time is outside of the data retention period for the table. The retention period defined for this table is %d days.
  • 12518: The specified point-in-time is invalid. The point-in-time must be before the current transaction’s begin time.
  • 12519: The FOR TIMESTAMP AS OF query hint can only be specified once per statement.
  • 12520: A non-versionable data source was specified in the time travel query.
  • 12521: The FOR TIMESTAMP AS OF query hint can only be used with read-only SELECT statements.
  • 12522: The specified point-in-time is outside of the data retention period for the database. The retention period defined for this database is %d days.
  • 12523: The specified point-in-time is invalid. The timestamp must not be after the current snapshot time.
  • 12713: OPENROWSET is not allowed to read local files. Path: ‘%ls’.
  • 13641: There is not enough resources to perform the operation. One or more JSON character strings in the query exceeds the supported limit for the JSON type.
  • 13642: Encountered an unsupported version of the JSON type. Please run DBCC CHECKTABLE with EXTENDED_LOGICAL_CHECKS to identify the rows. The remedy is to delete the rows and insert the JSON documents as character strings.
  • 13643: Encountered corrupted JSON type. Please run DBCC CHECKTABLE with EXTENDED_LOGICAL_CHECKS to identify the rows. The remedy is to delete the rows and insert the JSON documents as character strings.
  • 13644: String value length exceeds maximum %d bytes in JSON type.
  • 13645: Nested level of JSON document exceeds limit %d.
  • 13646: Key string length exceeds limit %d bytes in JSON type.
  • 13647: Number of items in one object/array exceeds limit %d in JSON type.
  • 13648: Length of numeric string around ‘%.*ls’ exceeds limit %d characters in JSON type.
  • 13649: Number of unique keys exceeds limit %d in JSON type.
  • 13650: JSON path exceeds limit of %d bytes.
  • 13654: JSON column(s) cannot be natively imported through BULK INSERT or OPENROWSET(BULK) statement. Please use a format file to explicitly specify the type of the JSON column(s) and specify the format file in the BULK INSERT or OPENROWSET(BULK) statement.
  • 13655: JSON data type is not supported in distributed queries. Remote object ‘%.*ls’ has JSON column(s).
  • 13656: JSON data type cannot be used when its feature switch is off.
  • 13657: Cannot create alias types from a JSON data type.
  • 13658: JSON data type cannot be used in OpenJson function.
  • 13659: Index %d provided at position %d is not within the array of size %d.
  • 13660: %s not yet supported for advanced JSON array accessors.
  • 13661: JSON data type is not supported in CDC tracked table.
  • 13662: JSON data type is not supported in transactional replication.
  • 13663: JSON data type is not supported in generic replication features.
  • 13664: JSON path not valid for Json index.
  • 13665: %s not yet supported for JSON native data type.
  • 13666: %s does not support CLR type as parameters
  • 13667: Json aggregates do not support order-by within group when specified with grouping sets, cubes and rollups. Try your query without order-by within group.
  • 13670: Input JSON is not a valid Vector : ‘%ls’.
  • 13671: Given Vector size is not supported. Maximum allowed size ‘%ld’.
  • 13672: Table ‘%.*ls’ needs to have a clustered primary key with less than %d columns in it in order to create a JSON index on it.
  • 13673: Maximum size of primary index of table ‘%.*ls’ is %d bytes. JSON requires that such size should be limited to %d bytes.
  • 13674: DROP JSON INDEX does not support any options.
  • 13675: Cannot create a JSON index on temp objects. ‘%.*ls’ is identified as a temp object.
  • 13676: Internal error occurred while creating the JSON index. Please run DBCC CHECKCATALOG and retry if it does not report any error. Please contact support in case of error.
  • 13677: Failed to create internal table for the JSON index.
  • 13678: Could not create the JSON index on object ‘%.*ls’ because that object is not a table. Create the index on the base table column.
  • 13679: ‘%.*ls’ is not a valid JSON index name because it starts with ‘%c’ character. JSON index name should not start with ‘#’ or ‘@’
  • 13680: Column ‘%.*ls’ on table ‘%.*ls’ is not of JSON data type, which is required to create a JSON index on it.
  • 13681: A JSON index ‘%.*ls’ already exists on column ‘%.*ls’ on table ‘%.*ls’, and multiple JSON indexes per column are not allowed.
  • 13682: Cannot create JSON index ‘%.*ls’ on table ‘%.*ls’, column ‘%.*ls’, because the column is computed.
  • 13832: Path ‘%ls’ has URL suffix which is not allowed.
  • 13833: Multiple paths are only allowed from the same storage account and container.
  • 13834: Bulk load data conversion error (NULL in non-nullable column) for row %ls%I64d, column %d (%ls) in %ls.
  • 15721: Could not build Automatic Tuning’s internal tables on database (ID %d).
  • 15722: Internal table access error: failed to access the Automatic Tuning internal table with HRESULT: 0x%x.
  • 15723: Failed to execute Automatic Tuning’s stored procedure for error %d.
  • 16214: Incorrect syntax near ‘%.*ls’. Nested CTE cannot be used in this query.
  • 16215: Conflicting Query Hints Detected The query contains conflicting hints that cannot be processed together: – FORCE SINGLE NODE PLAN – FORCE DISTRIBUTED PLAN Please revise the query to use either single node or distributed execution hints, but not both simultaneously.
  • 16600: Invalid or unknown dateformat ‘%.*ls’.
  • 16655: Cannot add column ‘%ls’ from table ‘%ls’ to schema because the data type of this column is ‘%ls’.
  • 16656: Please specify only one user managed identity per sync group or member.
  • 16723: An error has occurred while enabling Long-term backup retention for this database. Please reach out to Microsoft support to enable long-term backup retention.
  • 16724: The long-term retention backup with the desired storage access tier already exists.
  • 16725: Restoring archived long-term retention backup is not supported. Restore can be performed only on rehydrated or regular long-term retention backups.
  • 16726: Archiving long-term retention backups on zone redundant backup storage is not supported. Use locally redundant or geo-redundant storage types instead.
  • 16727: Archiving long-term retention backups is not enabled.
  • 16728: Archiving long-term retention backups on Hyperscale databases is not enabled.
  • 16729: ChangeLongTermRetentionBackupAccessTier is not supported for the desired (%ls, %ls) combination. Only (Archive, Move) and (Hot, Copy) are allowed.
  • 16730: Setting a long-term retention policy with the backup storage access tier set to ‘archive’ is not supported on zone-redundant backup storage. Use either locally redundant or geo-redundant storage types.
  • 16731: Updating the backup storage redundancy type to zone-redundant storage is not supported when a long-term retention policy with the ‘archive’ backup storage access tier is set at the %ls level.
  • 16732: The subscription ID is required to update the long-term retention backup.
  • 16733: The server name is required to update the long-term retention backup.
  • 16734: The database name is required to update the long-term retention backup.
  • 16735: The provided long-term retention backup resource ID or backup name is incorrect.
  • 17443: SOS Boot failed during stage: %ls.
  • 18788: Column data not found as scan task has been aborted.
  • 18791: The conditions for reading the log record of type %d have not be met.
  • 18859: Provided value for %ls parameter is invalid. Allowed values are %ls.
  • 18860: No RE Schema information found in cache for the table id %ld and transaction id %I64d.
  • 18861: RE Schema node could not be created in memory after 5 retries for Table id %ld and transaction id %I64d.
  • 19300: An invalid Pattern ‘%.*s’ was provided. Error ‘%.*s’ occurred during evaluation of the Pattern.
  • 19301: ‘%.*ls’ value should be greater than or equal to %d but ‘%d’ is provided in ‘%.*ls’ function.
  • 19302: Maximum of ‘%d’ flags can be provided. But you provided ‘%d’ flags.
  • 19303: Invalid flag provided. ‘%.*s’ are not valid flags. Only {c,i,s,m} flags are valid.
  • 19304: Currently, ‘%.*s’ function does not support NVARCHAR(max)/VARCHAR(max) inputs.
  • 19305: Failed to replace ‘%.*ls’ with replacement string ‘%.*ls’ from start position ‘%d’.
  • 19306: The provided Pattern is too complex. Please retry by reducing the complexity of Pattern – ‘%.*ls’.
  • 19307: Encountered an unexpected ‘%.*ls’ in the Pattern %.*s.
  • 19308: Missing ‘%.*ls’ in the Pattern %.*s.
  • 19309: Invalid trailing backslash (\) provided at the end of the Pattern %.*s.
  • 19437: Cannot set READ_ONLY_ROUTING_URL to NONE for availability replica ‘%.*ls’ as it is being used in READ_ONLY_ROUTING_LIST of replica ‘%.*ls’. Please remove the replica from the READ_ONLY_ROUTING_LIST and retry. For more information, see SQL Server Books Online.
  • 19531: Database [%ls] failed to refresh lease info with fabric, request returned with [%ld]. Check the SQL Server error log on the exact failure.
  • 19532: While reverting to the common recovery point, database “%.*ls” retrieved a page that may not be consistent with existing pages or log records. To gain access to the database, you need to determine what has changed in the session configuration and undo the change.
  • 19534: The attempt to remove IP address from the listener failed. If this is a WSFC availability group, the WSFC service may not be running or may be inaccessible in its current state, or the values provided for the network name and IP address may be incorrect.
  • 19535: The attempt to remove IP resource ‘%.*ls’ from the listener failed since cluster resource is not offline. Cluster resource state is %d.
  • 19537: The WSFC is already taking action against availability group ‘%.*ls’. SQL Server will attempt to online the availability group without interfering with the WSFC action. This is an informational message. No user action is required.
  • 19538: Availability group commit time has been set to %d millisecond(s). This value will take effect when the current replica is primary.
  • 19539: Unexpected error %d severity %d, state %d occurred while creating distributed availability group ‘%.*ls’. Cleanup distributed availability group and try again.
  • 19540: Unexpected error occurred while dropping distributed availability group ‘%.*ls’. Cleanup distributed availability group and try again.
  • 19541: Cannot create distributed availability group ‘%.*ls’ on top of availability group ‘%.*ls’. An availability replica of the specified availability group with same endpoint url already exists. Verify that the specified endpoint url is correct and unique, then retry the operation.
  • 19715: Resolving schema for Delta Lake table ‘%ls’ failed with error: Unable to read schema from Delta logs.
  • 19716: Data path was not properly set or is missing in the DDL statement.
  • 19717: SP ‘%.*ls’ should be called under the explicitly started transaction (started with BEGIN TRAN).
  • 19718: Resolving Delta logs on path ‘%ls’ warning: Partitioning column ‘%.*ls’ not found in inferred or provided schema.
  • 19719: Failed to acquire mutex for DW DB attachment with result %lu.
  • 19720: Filter pushdown not supported for column type-filter param type pair.
  • 19721: DW database attachment task failed for database with ID %d (HRESULT = 0x%x).
  • 19722: DW database attachment task failed to notify MS upon its completion (HRESULT = 0x%x).
  • 19723: Database attachment failed because another database with ID %ld already exists.
  • 19724: Database could not be successfully attached to appropriate DB ID.
  • 19725: Column mapping is not enabled.
  • 19726: Feature ‘%ls’ is not supported for table ‘%ls’.
  • 19727: Resolving schema for Delta Lake table ‘%ls’ failed with error: Empty column name found in Delta logs.
  • 19728: Database could not be successfully detached.
  • 19729: Database(s) could not be successfully attached.
  • 19730: Database state could not be successfully fetched.
  • 19731: Failed calculating cardinality for ‘%.*ls’.
  • 19732: Login failed for user ‘%.*ls’ due to no proper resource access. Only backend connections are allowed.%.*ls
  • 19778: Error encountered while parsing data: ‘%ls’. It might happen for one of the various reasons: \n1. Parquet file is corrupted. \n2. The file size defined in the metadata is smaller than the actual size of the file. \n3. Wrong file type ingested. \n4. The file changed during the query execution. \nUnderlying data description: %ls.
  • 19795: Failed to start database ‘%ls’ for recovery.
  • 19796: Failed to read parquet file because the column segment for column ‘%ls’ is too large. Underlying data description: ‘%ls’.
  • 19797: Operation %ls is not allowed in dbo schema.
  • 19798: An unexpected system error was reported (%ls, %ls).
  • 19799: Restore for database failed.
  • 19801: Lease metadata inconsistency has been detected on blob.
  • 19802: Partition column ‘%ls’ has unsupported type.
  • 19803: Login failed for user ‘%.*ls’ because the DMS moniker was not found. Only backend connections are allowed.%.*ls
  • 19804: Restore for database “%.*ls” is in progress.
  • 19805: Fetching DW database properties failed.
  • 19806: The provided file format is not supported. Please ensure the file is in one of the supported .ss formats (up to v5). Underlying data description: %ls.
  • 19807: Error encountered while parsing data: ‘%ls’. Underlying data description: %ls.
  • 19808: Error encountered while reading data: ‘%ls’. Underlying data description: %ls.
  • 19809: Error encountered while reading data: ‘%ls’. Underlying data description: %ls.
  • 19810: Login failed for user ‘%.*ls’ as storage SAS token refresh was delayed or failed. Only backend connections are allowed.%.*ls
  • 19811: Maximum LOB value size has been exceeded for column ‘%ls’. Underlying data description: %ls.
  • 19812: CODEPAGEs different than 65001 and 1200 are not supported when CSV 2.0 is specified.
  • 19813: Failed to break the database lease.
  • 19814: Database could not be successfully detached for deletion.
  • 19815: Workspace state could not be successfully fetched.
  • 19816: Complex types not supported for parquet data unit. Underlying data description: %ls.
  • 19817: DataBlock size is too small. Underlying data description: %ls.
  • 19818: DataBlock column offset is incorrect. Underlying data description: %ls.
  • 19819: DataBlock column sizes offset is incorrect. Underlying data description: %ls.
  • 19820: DataBlock column options are incorrect. Underlying data description: %ls.
  • 19821: DataBlock size is incorrect. Underlying data description: %ls.
  • 19822: Decimal rescale failed. Underlying data description: %ls.
  • 19823: Empty column name found while resolving schema for CosmosDB collection: %ls.
  • 19824: Error encountered while reading data: ‘%ls’. Underlying data description: %ls.
  • 19825: Column count mismatch, the column count in the file doesn’t match column count in the table schema when reading row %d. Set the MATCH_COLUMN_COUNT option to OFF if you want to skip this check. Underlying data description: %ls.
  • 19826: Error encountered while updating One Security mode for database ‘%ls’ under workspace ‘%ls’.
  • 19827: Column count mismatch, the column count in the file doesn’t match column count in the table schema. Column ‘%ls’ is not present in the table. Set the MATCH_COLUMN_COUNT option to OFF if you want to skip this check. Underlying data description: %ls.
  • 19828: Column count mismatch, the column count in the file doesn’t match column count in the table schema. Column ‘%ls’ is not present in the file. Set the MATCH_COLUMN_COUNT option to OFF if you want to skip this check. Underlying data description: %ls.
  • 19829: Error encountered while trying to lock a shared object %d.
  • 19901: An internal error occured.
  • 22022: An internal error occurred while getting physical foreign file size.
  • 22023: Used by testshell test in failpoint simulation.
  • 22126: Change tracking auto cleanup encountered an error when getting list of large side table IDs
  • 22127: Change tracking autocleanup encountered an invalid side table object id
  • 22128: Could not allocate memory for Change Tracking operation. Verify that SQL Server has sufficient memory for all operations. Check the memory settings on the server and examine memory usage to see if another application is excessively consuming memory.
  • 22601: This SQL Database can only be mirrored once across Fabric workspaces
  • 22602: This SQL Database is not supported for Fabric mirroring.
  • 22603: Fabric Mirroring feature is currently enabled only on Azure SQL DB.
  • 22604: Fabric Mirroring is not enabled on database ‘%s’.
  • 22605: Fabric Mirroring is not supported on Free, Basic or Standard tier Single Database (S0,S1,S2) and Database in Elastic pool with max eDTUs < 100 or max vCore < 1. Please upgrade to a higher Service Objective.
  • 22606: A Managed Identity is required to successfully enable Fabric Mirroring. Please configure either System Assigned Managed Identity or User Assigned Managed Identity on server before enabling Fabric Link.
  • 22607: %ls not supported on the ‘Microsoft Fabric’ platform.
  • 22608: ‘%ls’ statement is not supported on the ‘Microsoft Fabric’ platform.
  • 22609: Publishing the Database Reseed notification for the Fabric Mirrored Database to Fabric OneLake failed. Retry this operation later.
  • 22610: Cannot initiate the Database Reseed operation. This operation is currently not required for already reseeded Fabric Mirrored Database ‘%s’.
  • 22611: Parameter ‘%s’ has an invalid value. Specify a valid value for the named parameter and retry the operation.
  • 22612: This Database Reseed operation can’t be completed because Fabric Mirrored Database ‘%s’ is no longer in Reseed state.
  • 22613: This Database Reseed operation can’t be completed for Fabric Mirrored Database ‘%s’ because Reseed operation is still in initializing phase. Please retry this operation later.
  • 22614: Could not determine if the database is enabled for Fabric Mirroring, Synapse Link or Change Event Streaming.
  • 22615: Tables with ‘%ls’ column type are not supported on the ‘Microsoft Fabric’ platform.
  • 22616: %ls are not supported on tables on the ‘Microsoft Fabric’ platform.
  • 22617: Tables with primary key that has ‘%ls’ column type are not supported on the ‘Microsoft Fabric’ platform.
  • 22618: %ls cannot be used as primary key on the ‘Microsoft Fabric’ platform.
  • 22619: ‘%ls’ collation is not supported on the ‘Microsoft Fabric’ platform.
  • 22620: Cannot perform this operation as the Fabric Mirrored database ‘%s’ is in Reseeding state at Database level.
  • 22621: Table reseed operation on source table ‘%s.%s’ with link table id ‘%s’ corresponding to reseed_id ‘%s’ is already completed. A table can only be reseeded once for the given reseed_id.
  • 22622: Cannot re-enable mirroring for the table because table is not in Reseed state.
  • 22623: sp_rename on %ls is not supported on the ‘Microsoft Fabric’ platform.
  • 22624: SQL Database on the ‘Microsoft Fabric’ platform requires tables to have a primary key.
  • 22625: The Switch Partition Operation is currently blocked/not supported for table ‘%.*ls’ that is Mirroring in Fabric.
  • 22626: Could not perform Database Reseed operation for Fabric Mirrored Database ‘%s’ because mirroring is not enabled for any table groups of the Database.
  • 22627: Column name ‘%.*ls’ contains one or more ‘,;={}()’, space, tab or new line character(s), which are not supported on the ‘Microsoft Fabric’ platform.
  • 22628: Column name ‘%.*ls’ contains one or more ‘,;={}()’, space, tab or new line character(s), which are currently not supported for a table that is Mirroring in Fabric.
  • 22629: Tables with a nonclustered primary key are not supported on Microsoft Fabric platform.
  • 22630: ALTER INDEX ALL ON table REBUILD statement is currently not supported on non-unique clustered tables on Microsoft Fabric platform.
  • 22631: ALTER PARTITION FUNCTION statement is currently not supported if function is used on heap tables on Microsoft Fabric platform.
  • 22632: ALTER TABLE REBUILD PARTITION statement is currently not supported on heap tables on Microsoft Fabric platform.
  • 22633: SHRINKDATABASE and SHRINKFILE statements are currently not supported on Microsoft Fabric platform.
  • 22634: Publishing MSSQL_System_Uniquifier column failed. Invalid uniquifier value detected. Please contact Microsoft Support for assistance.
  • 22635: Maxtrans cannot be changed for fabric mirrored database as it is managed and optimized automatically.
  • 22636: Repldone failed while performing the reseed db operation
  • 22637: Reseed db failed. The failure occurred when executing the command ‘%s’. The error/state returned was %d/%d: ‘%s’. Use the action and error to determine the cause of the failure and resubmit the request.
  • 22638: Publishing the create table notification to Fabric OneLake failed. Retry this operation later.
  • 22639: Aborting Fabric Link Commit Notification task for this database timed out. Retry this operation later.
  • 22640: The table could not be removed from the Change Feed. Please try the operation again shortly.
  • 22641: Disabling of mirroring for the table ‘%.*ls’ failed. Please retry this operation later.
  • 22642: Can not enable Fabric Mirror on a table where primary key uses columns of the following types: user-defined types, geometry, geography, hierarchyid, sql_variant or timestamp, datetime2(7), DateTimeOffset(7) and Time(7)
  • 22697: Cannot enable fabric link on the database because the metadata tables are corrupted.
  • 22698: Cannot add encrypted column on table ‘%.*ls’ in SQL Database on the ‘Microsoft Fabric’ platform
  • 22699: Cannot alter column ‘%.*ls’ on table ‘%.*ls’ in SQL Database on the ‘Microsoft Fabric’ platform
  • 22789: Could not revoke changefeed user control db permission or alter changefeed user on changefeed enabled db.
  • 22790: A System Assigned Managed Identity is required to enable change feed or mirror SQL database in Fabric. Please configure System Assigned Managed Identity on SQL Server before enabling change feed or mirroring this SQL database.
  • 22791: User cannot drop Change feed user for proper functioning of Change feed features.
  • 22792: Data masking is not supported for this column as this column is already enabled for Fabric Mirroring.
  • 22793: Fabric Mirroring is not supported on this table as the primary key column contains masking functions.
  • 22794: Failed to set table version during snapshot operation
  • 22795: Enabling Change Feed for an external table ‘%s’ is not allowed.
  • 22796: Error %d occurred while interacting with the destination.
  • 22890: Could not remove cdc user from db_owner role or alter cdc user on cdc enabled db.
  • 22893: Change data capture scan failed on processing transaction log
  • 22894: sp_cdc_disable_db_internal caught an exception in try block when executing command : ‘%s’. The error returned was %d: ‘%s’
  • 22895: Could not drop cdc user. CDC user is required for proper working of CDC.
  • 22896: sp_cdc_disable_db caught an exception in try block when executing command : ‘%s’. The error returned was %d: ‘%s’
  • 22897: Enabling Change Data Capture for an external table ‘%s’ is not allowed.
  • 22912: sp_repldone failed
  • 23118: Either %ls, or %ls, or both must have a value. These parameters cannot be both set to NULL.
  • 23119: The index specified by the @index_id is an XML Index. Omit the @data_compression parameter or set it to NULL, XML Indexes do not support data compression, but support XML Compression.
  • 23634: Change Streams feature is not supported for this database.
  • 23635: Change Streams feature is currently enabled only on Azure SQL DB and Azure SQL MI.
  • 23636: Change Streams is not enabled on database ‘%s’.
  • 23637: Change Streams is not supported on Free, Basic or Standard tier Single Database (S0,S1,S2) and Database in Elastic pool with max eDTUs < 100 or max vCore < 1. Please upgrade to a higher Service Objective.
  • 23638: Change Streams encountered an Avro Exception: ‘%s’.
  • 23639: Change Streams encountered a serialization exception: ‘%s’.
  • 23640: Change Streams serializer not initialzied.
  • 23641: The argument ‘%s’ failed validation. Expects ‘%s’ to be ‘%s’.
  • 23642: Destination location parameter is not in the expected format. Expected format is [Event Hubs Namespace Host name]/[Event Hubs Instance] or [Event Hubs Namespace Host name]:/[Event Hubs Instance]
  • 23643: Change Streams is already enabled for ‘%s’.
  • 23644: Change Streams is not currently enabled for ‘%s’.
  • 23645: Port number in the destination location parameter is invalid. It must be a numeric value between 0 and 65535.
  • 23646: Change Streams message exceeds the configured message size limit.
  • 24543: Cannot begin a transaction with the given isolation level. Please specify the snapshot isolation level when explicitly beginning the transaction.
  • 24544: The current isolation level isn’t supported. Please change the transaction isolation level for this session to snapshot then re-run the operation.
  • 24545: An unexpected error occurred during Transcoder scan.
  • 24546: An unexpected error occurred during execution.
  • 24547: There was an error updating runtime physical metadata information needed to execute the statement.
  • 24548: A conversion error occurred during Transcoder scan.
  • 24549: Encountered internal error %d (of category ‘%ls’) while attempting to process physical metadata.
  • 24550: Encountered operating system error %ls while attempting to write physical metadata.
  • 24551: Encountered operating system error %ls while attempting to read physical metadata.
  • 24552: Delete operation failed.
  • 24553: Invalid number of target backend instances for pool resize.
  • 24554: Pool is not in a valid state for resize.
  • 24555: Keep list for pool resize legth/content is invalid.
  • 24556: Snapshot isolation transaction aborted due to update conflict. Using snapshot isolation to access table ‘%.*ls’ directly or indirectly in database ‘%.*ls’ can cause update conflicts if rows in that table have been deleted or updated by another concurrent transaction. Retry the transaction.
  • 24557: Internal error. Encountered an unexpected error while working with an internal table. Result [%x].
  • 24558: Failed physical metadata notification action ‘%ls’ with result %lu for row %lu data from the temp table %ld.
  • 24559: Data Manipulation Language (DML) statements are not supported for this table type in this version of SQL Server.
  • 24560: Internal error. Encountered an issue removing physical metadata.
  • 24561: Internal error. Invalid physical metadata for CREATE TABLE operation.
  • 24562: Input not provided. Could not create vdw service principal. Please provide mandatory parameters to create service principal.
  • 24563: Invalid input. Could not create vdw service principal. The number of provided values does not match the mandatory parameters for the environment.
  • 24564: sp_get_min_xdes is unable to get and validate min xdes
  • 24565: sp_get_delta_lake_storage_properties is unable to return
  • 24566: Failed to update physical metadata with result %lu because of invalid partition values present in row %lu of temp table %ld.
  • 24567: Failed to parse ‘%ls’ from physical metadata with result %lu for row %lu of temp table %ld.
  • 24568: Internal DW transaction error.
  • 24570: CREATE TABLE executed for the database %s without workspace on Trident instance. Normal for Witness databases
  • 24571: Failed to process extended property ‘%ls’ for DB %ld table %ld. More details: %ls.
  • 24572: Commits up to ‘%ld’ have already been applied for DB %ld table %ld. Skipped %lu/%lu row(s) of temp table %ld.
  • 24573: Invalid row group metadata for row group id “%ls”.
  • 24574: The %ls ‘%ls’ is not supported in this edition of SQL Server.
  • 24575: Manifest for DB %ld table %ld is not available, commit is not possible.
  • 24576: Table name cannot contain ‘%.*ls’.
  • 24577: Invalid column chunk metadata for column “%ls”. Wrong parameter: %ls. Underlying data description: %ls.
  • 24580: Invalid value %d was provided for the DML config parameter %ls.
  • 24581: Global temporary tables and transient user tables in TEMPDB are not supported in this edition of SQL Server.
  • 24583: Enforced constraints are not supported. To create an unenforced constraint you must include the NOT ENFORCED syntax as part of your statement.
  • 24584: The %ls keyword is not supported in the %ls statement in this edition of SQL Server.
  • 24585: The specified ALTER TABLE statement is not supported in this edition of SQL Server.
  • 24586: Distributed transactions are not supported in this edition of SQL Server.
  • 24590: Encountered incompatible delete format.
  • 24591: An invalid value is present in the data.
  • 24592: Invalid metadata present in the file.
  • 24593: Internal error. Unable to Serialize Usage Data.
  • 24594: Internal error. Buffer is full.
  • 24595: SQL internal tasks encountered failure.
  • 24596: Failed to complete the command because the underlying location does not exist. Underlying data description: %ls.
  • 24597: An integer precision value between 0 and 6 must be specified.
  • 24599: Failed to ingest data since a BE connection instead of FE connection was used.
  • 24601: A Native Shuffle storage provider for the URL ‘%ls’ could not be found.
  • 24602: Error 0x%X – Could not allocate space while transferring data from one distribution to another.
  • 24603: Error 0x%X occurred when sending data over the network to another distribution. Please try to run the query again. If the error persists, please contact support.
  • 24604: Error occurred when distribution %d wrote metadata for distribution %d. Please try to run the query again. If the error persists, please contact support.
  • 24605: The storage entity does not exist when transferring data from one distribution to another. If the error persists, please contact support.
  • 24606: The storage entity already exists when transferring data from one distribution to another. If the error persists, please contact support.
  • 24607: Error occured when setting the owner for the storage location ‘%ls’. If the error persists, please contact support.
  • 24608: Storage location for transfering data to another distribution is not defined. If the error persists, please contact support.
  • 24701: The query failed because the access is denied on %ls.
  • 24702: The query failed because the following location is changed during the query execution: %ls.
  • 24703: The query failed because an unexpected error occurred in distributed query processing phase. File a support ticket and provide the error code and statement id. Underlying data description: %ls.
  • 24704: The query processor ran out of internal resources. Underlying data description: %ls.
  • 24705: Error when converting partition column value ‘%ls’ to ‘%ls’ column type.
  • 24706: Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table ‘%.*ls’ directly or indirectly in database ‘%.*ls’ to update, delete, or insert the row that has been modified or deleted by another transaction. Please retry the transaction.
  • 24707: There was an error reading runtime physical metadata information needed to execute the statement.
  • 24708: ‘%ls’ is an unsupported partition column type.
  • 24709: There are null values present in the column which was specified as ‘not null’.
  • 24710: There is insufficient system memory to read the data: %ls.
  • 24711: The query failed because parquet file is corrupted and cannot be deserialized while attempting to read it from the location: %ls.
  • 24712: The query failed because parquet file is corrupted and cannot be deserialized while attempting to read it from the location: %ls.
  • 24713: Oredered index is not supported.
  • 24714: Unexpected Error in DW FrontEnd Stress Testing.
  • 24715: The specified ordered column count %d is greater than the maximum ordered column count %d.
  • 24716: sp_get_managed_delta_table_log_files_metadata is unable to generate the output xml.
  • 24717: sp_check_file_cleanup_eligibility is unable to generate the output resultset.
  • 24718: Lakehouse tables cannot be renamed from the SQL Endpoint. Please use the Lakehouse to rename tables.
  • 24719: sp_trigger_expired_files_cleanup is unable to trigger the system event for expired files cleanup.
  • 24720: Internal error. Unable to interact with an internal metadata table. Please try the operation again and contact Customer Support Services if this persists.
  • 24721: The operation could not be completed due to I/O error. This error can occur when the file size defined in the metadata (or delta log) is larger than the actual size of the file. Underlying data description: %ls.
  • 24722: The remote storage service is not active or available at this time (HTTP 500). This could be a temporary issue. Please try the operation one more time. Underlying data description: %ls.
  • 24723: The operation has timed out (HTTP 503) due to a lack of response from the remote storage service. This could be a temporary issue. Please try the operation one more time. Underlying data description: %ls.
  • 24724: Cannot bulk load because of an error writing file. Underlying data description: %ls.
  • 24726: Service is currently experiencing high demand and is unable to process your request at this time. Please try again later, as retrying may help.
  • 24727: Table name cannot end with ‘.’.
  • 24728: Schema name cannot contain ‘%.*ls’.
  • 24729: Schema name cannot end with ‘.’.
  • 24730: The operation encountered an HTTP error while trying to read data. This could be a temporary issue. Please try the operation one more time. Underlying data description: %ls.
  • 24731: The specified clustering column count %d is greater than the maximum clustering column count %d.
  • 24732: CLUSTER BY is not supported.
  • 24733: An unexpected error occurred during execution.
  • 24734: There was an error during stored procedure execution.
  • 24735: Only nullable columns can be added to an existing table.
  • 24736: There are multiple empty string indexes present in the dictionary. Column name: ‘%ls’.
  • 24737: Data Lake Log publishing is supported only for warehouse databases.
  • 24738: There was an error reading runtime ingestion information needed to execute the statement.
  • 24739: No statistics exist for clustering column at ordinal %lu.
  • 24740: VORDER is supported only for warehouse databases.
  • 24741: Identity column ‘%.*ls’ must be of data type BIGINT.
  • 24742: Identity column ‘%.*ls’ does not support specifying SEED or INCREMENT.
  • 24743: An error occurred while attempting to update discovered table properties.
  • 24744: StringCchCopyNW failed with HRESULT = 0x%x.
  • 24745: Found column mapping properties, but no column mapping values.
  • 24746: The MaxColumnId value %u is less than the number of column mappings %lu.
  • 24747: The MaxColumnId value %u is less than the largest column mapping Id value %lu.
  • 24748: The ‘MaxColumnId’ value %u is out of bounds.
  • 24749: The column mapping ‘Id’ value %u is out of bounds.
  • 24750: Found %lu column mapping values, but did not find the required column mapping properties.
  • 24751: Found duplicate column mapping id = %lu at index %lu.
  • 24752: Found duplicate column mapping logical name = ‘%ls’ at index %lu.
  • 24753: Found duplicate column mapping physical name = ‘%ls’ at index %lu.
  • 24754: An internal error occurred when truncating table ‘%.*ls’.
  • 24755: An internal error occurred when truncating user table.
  • 24756: The query failed because end of file was reached before the whole IO buffer was populated: %ls.
  • 24757: Could not create table ‘%.*ls’ because the type (collation) of the column ‘%.*ls’ is not supported in a table with Data Clustering.
  • 24758: sp_cleanup_dropped_table_metadata is unable to clean up internal metadata for the Trident DW table.
  • 24759: Data insertion into Parquet-backed table failed (%d,%ls).
  • 24760: The file is temporarily unavailable due to a high number of requests in the lake. This is a transient issue. Please try again shortly. Underlying data description: %ls.
  • 24761: There are null values present in the partition column ‘%ls’ which was specified as ‘not null’.
  • 24762: Error converting values NaN or Infinity to type ‘%ls’ in the column ‘%ls’. NaN and Infinity are not supported. Underlying data description: %ls.
  • 24763: There was an error reading column mapping information for column: ‘%ls’.
  • 24764: No histogram steps exist for clustering column at ordinal %lu. Please check the content of the table.
  • 24765: %ld rows processed for compaction did NOT match %ld rows from the source.
  • 24766: TIMESTAMP can only be set for snapshot databases.
  • 24767: CREATE DATABASE AS SNAPSHOT OF is supported only for DataWarehouse.
  • 24768: TIMESTAMP is invalid or missing in the DDL statement.
  • 24769: Internal error with lock serialization logic during DW pre-commit transactional metadata handling.
  • 24770: Identity %.*ls value %I64d is out of range. Value must be between 0 and (2^%d - 1). Retry may not help. Please contact Microsoft support.
  • 24771: The %.*ls JSON parameter %.*ls is missing or incorrectly formatted. Please check the formatting of the JSON.
  • 24772: Failed to initialize TempDB.
  • 24773: The query failed because the file is too small. Underlying data description: %ls.
  • 24774: Internal error occurred during DW table metrics retrieval.
  • 24775: Internal system error when attempting to open or create remotely stored delta log file. This error is usually intermittent. Please try the operation again and contact Customer Support Services if this persists.
  • 24776: The current version of data being accessed in [%s] is as of timestamp ‘%s’.
  • 24777: TIMESTAMP must not be before the source database was created. Creation timestamp: ‘%s’, specified timestamp: ‘%s’.
  • 24778: Internal error occurred during Trident DW mix-mode query execution.
  • 24779: The specified ALTER TABLE statement is not supported in this edition of SQL Server. Columns that are part of a data clustered index may not be dropped.
  • 24780: Columns in Lakehouse tables cannot be renamed from the SQL Endpoint. Please use the Lakehouse to rename columns.
  • 24781: Desired column data type ‘%ls’ is not supported. Please retry column creation with a supported data type.
  • 25665: This target does not support the NO_EVENT_LOSS event retention mode. The ALLOW_SINGLE_EVENT_LOSS retention mode is used instead.
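That 25665 message is a warning, not a failure: some Extended Events targets simply can’t guarantee zero event loss, so SQL Server quietly downgrades the retention mode. Here’s a minimal sketch of where that mode gets set, using a hypothetical session name and the stock event_file target:

    CREATE EVENT SESSION [retention_demo] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.event_file (SET filename = N'retention_demo.xel')
    WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS);
    -- Requesting NO_EVENT_LOSS on a target that can't honor it
    -- produces message 25665 and silently uses ALLOW_SINGLE_EVENT_LOSS.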
  • 25757: Could not stop orphan session (session which is running but has no definition in metadata) with name ‘%s’.
  • 31207: Invalid value for Full-Text index version is specified. Valid values are 1 or 2.
  • 31637: Connections to the domain %ls are not allowed using a credential object with the identity ‘SHARED ACCESS SIGNATURE’.
  • 31638: The specified ‘%.*ls’ value is not supported in the @headers parameter.
  • 31639: The specified ‘%ls’ value is not supported in the @credential secret.
  • 31640: The %.*ls XML string could not be parsed. %.*ls.
  • 31641: The charset value of the response’s Content-Type header is not supported.
  • 31642: The value of one of the key-value pairs in the %ls is empty.
  • 31643: ‘sp_invoke_external_rest_endpoint’ is disabled on this instance of SQL Server. Use sp_configure ‘external rest endpoint enabled’ to enable it.
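Messages 31637 through 31643 all belong to sp_invoke_external_rest_endpoint. If you hit 31643, the feature switch is off. A minimal sketch of enabling it and making a call, with a hypothetical endpoint URL:

    EXEC sp_configure 'external rest endpoint enabled', 1;
    RECONFIGURE;

    DECLARE @response nvarchar(max);
    EXEC sp_invoke_external_rest_endpoint
        @url = N'https://example.com/api/ping',   -- hypothetical endpoint
        @method = N'GET',
        @response = @response OUTPUT;
    SELECT @response AS response;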
  • 31701: Received param ‘%.*ls’ successfully.
  • 31702: Parameter ‘%ls’ must be specified. This parameter cannot be NULL.
  • 33337: SQL MI SSB Dialog Timer delete action occurred during dispatch or registration.
  • 33338: UCS transport default sending capacity is overridden with the value of %d. Override source: %.*ls. This is an informational message only. No user action is required.
  • 35333: ORDER failed because columnstore index ‘%.*ls’ on table ‘%.*ls.%.*ls’ is already ordered.
  • 35385: DROP ORDER failed because columnstore index ‘%.*ls’ on table ‘%.*ls.%.*ls’ is not ordered.
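Those two (35333 and 35385) guard ordered columnstore indexes. For reference, here’s a sketch of the CREATE syntax they relate to, on a hypothetical dbo.Sales table:

    -- Sorts rows by SaleDate before building the columnstore segments,
    -- which can improve segment elimination on date-filtered queries.
    CREATE CLUSTERED COLUMNSTORE INDEX cci_Sales
        ON dbo.Sales
        ORDER (SaleDate);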
  • 35530: opening
  • 35531: closing
  • 35533: norm function failed
  • 35534: string_split
  • 35535: unpack_int
  • 35536: vector index
  • 35537: JSON index
  • 37544: The length of the AAD bearer token exceeded the maximum allowed length. Ledger URL ‘%ls’ and error code %ld.
  • 37547: The user attempting to perform this operation does not have permission as it is currently logged in as a member of an Azure Active Directory (AAD) group but does not have an associated database user account. A user account is necessary when creating an object to assign ownership of that object. To resolve this error, either create an Azure AD user from external provider, or alter the AAD group to assign the DEFAULT_SCHEMA as dbo, then rerun the statement.
  • 37548: The user attempting to perform this operation does not have permission as it is currently logged in as a member of an Azure Active Directory (AAD) group but does not have an associated database user account. A user account is necessary when setting the ownership of an object. To resolve this error, create an Azure AD user from external provider, then rerun the statement.
  • 37557: A server principal with the object ID ‘%ls’ already exists with the name ‘%ls’.
  • 37558: A login with the object ID ‘%ls’ already has an account with the user name ‘%ls’.
  • 37560: Encryption scan cannot be suspended because it is already in suspended state.
  • 37561: Database encryption key cannot be altered because the database is not encrypted.
  • 37562: When key change is in progress, only suspend and resume operations are allowed.
  • 37565: Microsoft Graph is currently experiencing a high volume of requests. Please wait for %d seconds and try again.
  • 37575: SQL Server Audit [%ls] could not write to the Otel (OpenTelemetry) Audit Target (Error Code: %d).
  • 37601: [Auditing][%ls] Could not allocate memory for %ls.
  • 37602: [Auditing][%ls] Error while reading AuditPolicy from WinFab property. [ErrorCode: %d].
  • 37603: [Auditing][%ls] The ‘State’ in AuditPolicy is ‘UNKNOWN’. [ErrorCode: %d].
  • 37604: [Auditing][%ls] Error while constructing AuditQueryParameters from AuditPolicy. [ErrorCode: %d].
  • 37605: [Auditing][%ls] Could not configure auditing to %ls. DetailedMessage: [%ls]. [Error: %d].
  • 37606: [Auditing][%ls][Informational] Error while trying to get Audit Policy (%ls) from WinFab properties [ErrorCode: %d].
  • 37607: [Auditing][%ls][Informational] Property ‘[%ls]’ under Namespace ‘[%ls]’ failed to retrieve. [Error: %d].
  • 37608: [Auditing][%ls][Informational] Xpath [%ls] for WinFab Property ‘[%ls]’ under Namespace ‘[%ls]’ failed to retrieve. [Error: %d].
  • 37609: [Auditing][%ls][Informational] Property ‘[%ls]’ under Namespace ‘[%ls]’ has invalid value [Len: %lu] [Error: %d].
  • 37610: [Auditing][%ls][Informational] Property ‘[%ls]’ under Namespace ‘[%ls]’ is empty or null [Len: %lu] [Error: %d].
  • 37611: [Auditing][%ls] Could not combine paths. DetailedMessage: [%ls]. [path1: %ls] [path2: %ls] [separator: %lc] [Error: %d].
  • 37612: [Auditing][%ls] Error: One or more of ‘baseStr’, ‘searchStr’ is nullptr. [baseStr: %ls] [searchStr: %ls] [Error: %d].
  • 37613: [Auditing][%ls] Error: Length of ‘searchStr’ is zero [Error: %d].
  • 37614: [Auditing][%ls] Could not inject query parameter. [Parameter: %ls] [Value: %ls] [ErrorCode: %d].
  • 37615: [Auditing][%ls] Could not get query template. [auditQueryType: %ls] [ErrorCode: %d].
  • 37616: [Auditing][%ls] Could not inject parameters into audit query template. [auditQueryType: %ls] [ErrorCode: %d].
  • 37617: [Auditing][%ls] Error occurred while constructing %ls [ErrorCode: %d].
  • 37618: [Auditing][%ls] Could not generate audit query to execute. [auditQueryType: %ls] [ErrorCode: %d].
  • 37619: An unexpected error occurred. Please try again after some time. If the error persists, please contact Customer Support.
  • 37620: StartTimeFilter is not in a supported format. Please use either DEFAULT or a UTC-based timestamp in the format ‘YYYY-MM-DDThh:mm:ssZ’, for example ‘2020-12-31T15:30:00Z’.
  • 37621: EndTimeFilter is not in a supported format. Please use either DEFAULT or a UTC-based timestamp in the format ‘YYYY-MM-DDThh:mm:ssZ’, for example ‘2020-12-31T15:30:00Z’.
  • 39155: The maximum acceptable number of columns is %d, but %ld columns are in the schema.
  • 40598: ‘%s’ clause is not supported in this version of Synapse SQL.
  • 40826: %ls is not available for %ls service tier.
  • 41680: An error has occurred while executing an asynchronous operation (HRESULT 0x%08x).
  • 41877: A user defined function referencing memory optimized tables on a MARS connection with an explicit user transaction is currently disallowed. Use a statement level transaction instead.
  • 41878: In-Memory OLTP database ‘%.*ls’ is Checkpoint V2 Undeployed. No further action is necessary.
  • 41879: Cannot perform operation since In-Memory OLTP database undeployment is in progress. Complete database undeployment before running the file operation.
  • 41880: In-Memory OLTP database undeployment failed. There are in-memory database objects present, or being created.
  • 41979: Managed database cannot be replicated from Managed Instance to SQL Server, as this managed database is encrypted with service managed key. Either unencrypt the database on Managed Instance, or switch the encryption service protector to customer-managed (BYOK) and try again.
  • 41980: Database cannot be encrypted using the service managed key as it is being replicated from Managed Instance to SQL Server. Switch the encryption protector from service managed to customer managed (BYOK) on Managed Instance and ensure the encryption key from Managed Instance is imported on SQL Server.
  • 41981: Switching the encryption protector from customer managed (BYOK) to service managed is not allowed as there exists database replication from Managed Instance to SQL Server supporting customer managed encryption only.
  • 41982: Database cannot be replicated from Managed Instance to SQL Server as there already exists database geo-replication from this Managed Instance to another Managed Instance using auto-failover groups. Disconnect this Managed Instance from the failover group, and try again.
  • 41983: In-Memory OLTP database undeployment is not allowed on SQL Database Managed Instance.
  • 41984: The link cannot be created because database format of the source SQL Managed Instance is not compatible with database format of the destination SQL Server.
  • 41985: Forced failover is not supported on this managed instance link. For database migration, please delete the link instead.
  • 41987: Managed Instance link cannot be created because Managed Instance side AG name ‘%.*ls’ is already used on Managed Instance ‘%.*ls’. Choose a different name and retry creating the link.
  • 41993: Cannot execute failover of Managed Instance link ‘%s’ because the replica of database ‘%s’ on SQL MI is in the inaccessible state. Please refer to https://aka.ms/sqlmi-inaccessible-database-troubleshooting to understand and correct the database inaccessibility root cause and retry executing link failover.
  • 42037: Initialization of HTTP session handle for fetching cert public keys for MWC token signature validation during signature validation service initialization.
  • 42038: Initialization of HTTP connect handle for fetching cert public keys for MWC token signature validation during signature validation service initialization.
  • 42119: This database has reached the monthly free amount allowance for the month of %.*ls and is paused for the remainder of the month. The free amount will renew at 12:00 AM (UTC) on %.*ls. To regain access immediately, open the Compute and Storage tab from the database menu on the Azure Portal and select the “Continue using database with additional charges” option. This will resume the database and bill you for additional usage charges the rest of this month. For more details, see https://go.microsoft.com/fwlink/?linkid=2243105&clcid=0x409.
  • 42120: The connection was denied because the server’s Public Network Access is set to ‘SecuredByPerimeter’, but the resource is not in a Network Security Perimeter. For more information, please see .
  • 42121: The operation could not be completed on because attempts to connect were denied by the configured Network Security Perimeter.
  • 42122: Cannot resume database ‘%.*ls’ in any elastic pool because all eligible pools are at capacity.
  • 42123: Login failed as it was determined that the common alias login was invoked, but the common alias login is not currently supported.
  • 42124: Login failed as it was determined that the common alias login was invoked, but the database name has not been provided.
  • 42125: Enabling IPv6 is currently not supported on your server, detailed reason is: %.*ls.
  • 42126: This serverless database is paused and cannot be resumed by this connection.
  • 42127: The login failed due to proxy throttling.
  • 42128: The database ‘%ls’ cannot be scaled into Hyperscale Service Tier because it has multiple GeoDR links. Only one GeoDR link is supported for migration.
  • 42129: The database ‘%ls’ cannot be scaled into Hyperscale Service Tier because it has chained GeoDR links.
  • 42130: The database ‘%ls’ cannot be scaled into Hyperscale Service Tier because it is on a secondary replication role. Please retry the operation on the primary database.
  • 42131: This SQL database has been disabled. Please reach out to your Fabric Capacity administrator for more information.
  • 42132: The login failed due to database name not being found in alias DB.
  • 42133: The firewall rule name cannot contain special characters: ‘%ls’.
  • 42201: The requested distance metric ‘%ls’ is not supported by vector_distance. Please provide a valid distance metric.
  • 42202: The provided dimension for the vector (%d) does not match allowed value (%d).
  • 42203: The provided dimension for the vector (%d) is not within the allowed range [1-%d].
  • 42204: The vector dimensions %d and %d do not match.
  • 42205: The vector types ‘%ls’ and ‘%ls’ do not match.
  • 42206: The vector is invalid. Error details: ‘%ls’.
  • 42207: The distance metric specified for vector_distance is unsupported. Please provide a valid string for distance metric.
  • 42208: The vector LOB type is not supported. Please use a supported vector type.
  • 42209: A system error occurred.
  • 42210: The requested norm function ‘%.*hs’ is not supported by vector_norm/vector_normalize. Please provide a valid norm function.
  • 42211: Truncation of vector is not allowed during the conversion. Ensure the vector size is appropriate before conversion.
  • 42212: Cannot create alias types from a vector datatype.
  • 42213: The vector data types cannot be compared or sorted, except when using the IS NULL operator.
  • 42214: Could not create the vector index on object ‘%.*ls’, because the object is not a user table.
  • 42215: Could not create the vector index on the column ‘%.*ls’ on table ‘%.*ls’, because it is not of type vector.
  • 42216: Internal error occurred while creating the vector index. Please run DBCC CHECKCATALOG and retry if it does not report any error. Please contact support in case of error.
  • 42217: The base table doesn’t have a single-column, non-nullable, unique key clustered index.
  • 42218: Failed to fetch metadata for the vector index. Please retry and contact support if the error persists.
  • 42219: Failed to create internal table %d for the vector index.
  • 42220: Cannot create the vector index on temp objects. ‘%.*ls’ is identified as a temp object.
  • 42221: DROP VECTOR INDEX does not support any options.
  • 42222: Vector data type is not supported in CDC tracked table.
  • 42223: Vector data type is not supported in transactional replication.
  • 42224: Vector data type is not supported in generic replication features.
  • 42225: VECTOR column(s) cannot be natively imported through BULK INSERT or OPENROWSET(BULK) statement. Please use a format file to explicitly specify the type of the VECTOR column(s) and specify the format file in the BULK INSERT or OPENROWSET(BULK) statement.
  • 42226: The column ‘%.*ls’ is not of vector type. Vector search cannot be performed on a non-vector column.
  • 42227: Cannot find a vector index with metric ‘%.*ls’ on column ‘%.*ls’.
  • 42299: Vector internal error: %S_MSG.
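The whole 42201–42299 block polices the vector data type and its functions. A minimal sketch of the happy path, using a hypothetical table; note the single-column, non-nullable clustered primary key that 42217 demands before you can build a vector index:

    CREATE TABLE dbo.Docs (
        Id int NOT NULL PRIMARY KEY CLUSTERED,
        Embedding vector(3) NOT NULL   -- dimensions must match in comparisons (42204)
    );
    INSERT dbo.Docs (Id, Embedding)
    VALUES (1, '[1.0, 0.0, 0.0]'),
           (2, '[0.0, 1.0, 0.0]');
    -- An unrecognized metric string raises 42201; 'cosine' is a supported one.
    SELECT a.Id, b.Id,
           VECTOR_DISTANCE('cosine', a.Embedding, b.Embedding) AS cosine_distance
    FROM dbo.Docs AS a
    JOIN dbo.Docs AS b ON a.Id < b.Id;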
  • 45611: Configuration of backup retention policy is not supported on Named Replica.
  • 45612: It is not possible to stop the instance that has Managed Instance link configured. Refer to the following article for limits of managed instance start/stop feature: https://go.microsoft.com/fwlink/?linkid=2169085
  • 45613: The ImportExport operation failed because of invalid storage credentials.
  • 45614: The ImportExport operation failed because of invalid storage auth type.
  • 45615: The ImportExport operation failed because of invalid storage key format.
  • 45616: The ImportExport operation failed because the storage URI is too long.
  • 45617: The Export operation failed because database is larger than max supported size.
  • 45618: The ImportExport operation failed because it is targeting a ‘%ls’ edition.
  • 45619: The ImportExport operation failed because the storage URI is invalid.
  • 45622: Geo-redundant backup storage is not permitted for databases under subscription ‘%s’.
  • 45623: Managed Instance start/stop feature is available only for instances that are running with Feature Wave November 2022. Refer to the following article for limits of managed instance start/stop feature: https://go.microsoft.com/fwlink/?linkid=2169085
  • 45624: TR-CMS geo-secondary server/database does not exist.
  • 45625: The ImportExport operation failed because the SQL authentication type is invalid. Please use either SQL Server or Active Directory.
  • 45626: The ImportExport operation failed because the storage blob already exists.
  • 45627: Restoring a Hyperscale database to an elastic pool which is not Hyperscale or restoring a database which is not Hyperscale to a Hyperscale elastic pool is not supported.
  • 45628: The specified target database name, ‘%ls’, is reserved and cannot be used in a restore operation.
  • 45629: Restore and database recovery are not supported for system databases.
  • 45630: The source database, ‘%ls’, dropped on ‘%ls’, has not existed long enough to support restores.
  • 45631: The source database, ‘%ls’, has not yet completed its first backup required to support restore operations. Please wait until at least the indicated earliest restore date ‘%ls’ and resubmit the request.
  • 45632: For point-in-time restores, the target logical server must be the same as the source logical server. Consider restoring to the source database’s server, or using database copy or database recovery.
  • 45633: Same-server recovery is currently not supported. Consider using point-in-time restore or recovering to a different server.
  • 45634: Restore from long-term retention backup to ‘%ls’ edition not supported.
  • 45635: Restore from external backup to ‘%ls’ edition not supported.
  • 45636: Enabling long-term backup retention for a serverless database is not supported if auto-pause is enabled.
  • 45637: Enabling long-term backup retention for a database during migration to the Hyperscale service tier is not supported.
  • 45638: Enabling long-term backup retention for a database during a reverse migration from Hyperscale is not supported.
  • 45639: Enabling auto-pause for a serverless database is not supported if long-term backup retention is enabled.
  • 45640: The subscription ID is required to delete a long-term retention backup.
  • 45641: The server name is required to delete a long-term retention backup.
  • 45642: The database name is required to delete a long-term retention backup.
  • 45643: The provided long-term retention backup resource ID or backup name is incorrect.
  • 45644: A restore request with the same target database name ‘%ls’ on server ‘%ls’ already exists.
  • 45645: The external backup storage container URI is malformed.
  • 45646: The host name for the external backup storage container URI is not allowed.
  • 45647: The external backup storage container URI uses an invalid scheme or port. Please use https and port 443.
  • 45648: Hyperscale does not support external backup restore requests.
  • 45649: For database ‘%ls’, specified point-in-time ‘%ls’ is prior to the Hyperscale migration time of ‘%ls’. Please specify a target edition or service level objective which is not Hyperscale when restoring to a point earlier than the Hyperscale migration.
  • 45654: Azure Key Vault key URI ‘%s’ is required to successfully restore the database ‘%s’ under server ‘%s’.
  • 45655: Restore of Hyperscale database where source database is geo-secondary is not allowed.
  • 45656: The operation could not be completed because the Azure Key Vault Key name ‘%ls’ is currently set as encryption protector on geo-primary.
  • 45657: DTC for Azure SQL Managed Instance is not ready at the moment.
  • 45658: This operation cannot start because the specified target managed instance ‘%ls’ already has the maximum number of user databases.
  • 45659: No backups were found to restore the database to the point in time %s (UTC). Please contact support to restore the database.
  • 45660: Target server ‘%ls’ does not have access to all AKV Uris created between %s (UTC) and %s (UTC). Please retry operation after restoring all AKV Uris.
  • 45661: Unable to connect to the instance in order to perform restore operation.
  • 45662: Backup set in %ld is broken. SQL error code is %d. Error message: %ls.
  • 45663: A timeout was encountered while trying to connect to the restore target. Please retry or retry with a higher service objective for higher reliability. After restore, the database can be downgraded to the original service objective.
  • 45664: The restore plan is broken because firstLsn (%ls) of log backup ‘%ls’ is not equal to lastLsn (%ls) of the previous log backup ‘%ls’. Restore to point in time: %ls (UTC) failed.
  • 45665: Full backup ‘%ls’ is missing a checksum. Please provide a full backup with checksum.
  • 45666: Migration cannot be completed because the provided backup file name ‘%ls’ should be the last restored backup file ‘%ls’.
  • 45667: The database backup contains incompatible physical layout. Non-online data files exist.
  • 45668: The database backup contains incompatible physical layout. Multiple log files are not supported.
  • 45669: Database ‘%ls’ operation failed because there is no network connectivity between source instance ‘%ls’ and target instance ‘%ls’.
  • 45670: Database ‘%ls’ operation failed because there is no connectivity between source and target Azure SQL Managed Instance. This is because the target instance subnet has an overlapping address range with the subnet of the source instance.
  • 45671: Microsoft Distributed Transaction Coordinator (MS DTC) cannot be enabled because required outbound traffic on port 445 to the MS DTC log on a remote storage cannot be enabled due to a conflict with an existing route. Please check the route table for the subnet hosting the instance to identify and solve the conflict. See https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/doc-changes-updates-known-issues?view=azuresql for more information.
  • 45672: Cross subscription ‘%ls’ operation, from subscription id ‘%ls’ to subscription id ‘%ls’ is not allowed because these two subscriptions don’t belong to the same Azure tenant ID.
  • 45673: Cannot perform delete operation because of the following resource lock: ‘%ls’. Please remove the lock and try again.
  • 45674: Cannot drop database because a database ‘%ls’ operation is completing.
  • 45675: Combination of unspecified target edition (sku.name/sku.tier parameters) and the value TRUE for the isGeneralPurposeV2 parameter is not allowed in the create operation.
  • 45676: Combination of unspecified target edition (sku.name/sku.tier parameters) and the value TRUE for the isGeneralPurposeV2 parameter is not allowed in the update operation of General Purpose V1 Managed Instance.
  • 45677: Combination of unspecified target edition (sku.name/sku.tier parameters) and the value FALSE for the isGeneralPurposeV2 parameter is not allowed in the update operation of General Purpose V2 Managed Instance.
  • 45678: Combination of General Purpose target edition (sku.name/sku.tier parameters) and unspecified value for the isGeneralPurposeV2 parameter is not allowed in the update operation of General Purpose V2 Managed Instance.
  • 45679: Combination of Business Critical target edition (sku.name/sku.tier parameters) with a specified isGeneralPurposeV2 parameter is not allowed in the create/update operation.
  • 45680: Combination of unspecified target edition (sku.name/sku.tier parameters) with a specified isGeneralPurposeV2 parameter is not allowed in the update operation of Business Critical Managed Instance.
  • 45681: Standby replicas are not supported for hyperscale databases.
  • 45682: No backups were found to restore the database to the point in time %s (UTC). Please retry the operation. If the problem persists, contact support to restore the database.
  • 45683: Memory size in GB parameter is not allowed in the instance create/update operation.
  • 45684: Memory size in GB parameter is not allowed in the instance create/update operation for the service tier or hardware generation.
  • 45685: Could not perform the operation due to a capacity group resource constraint based on the requested service level objective and storage size. Please try again in 24 hours.
  • 45686: Could not complete the operation because the capacity group resource constraint check failed for the following reason: {0}.
  • 45687: General Purpose v2 edition of Managed Instance is not compatible with the specified subnet. Target subnet must be either empty, or all instances in it must run with November 2022 Feature Wave.
  • 45688: The specified memory value is invalid. Please ensure the memory size in GB is within the allowable range of 28 to 870 and corresponds appropriately to the number of vCores and service tier. The available memory sizes for the selected vCores and service tier are {0}. For details around resource limits visit: https://go.microsoft.com/fwlink/?linkid=2293407
  • 45689: Cannot enable public endpoint for the failover group because managed instance ‘%ls’ doesn’t have public endpoint enabled. Please enable public endpoint on the managed instance and try again.
  • 45690: Another operation is in progress on virtual network firewall rule {0} on server {1}.
  • 45691: The operation could not be completed because there is at least one database whose number of files is exceeding the limit of %ls files per database on General Purpose v2 edition.
  • 45692: The operation could not be completed because Next-Gen General Purpose edition doesn’t support Custom Maintenance Window.
  • 45693: The database ‘%.*ls’ on server ‘%.*ls’ is already being dropped.
  • 45694: Database ‘%ls’ was not found on SQL managed instance ‘%ls’. Please check the database name and ensure it matches a database that is currently in ‘Restoring’ state, then try again.
  • 45695: Database ‘%ls’ is not in ‘Restoring’ state on SQL managed instance ‘%ls’. Please check the database name and ensure it matches a database that is currently in ‘Restoring’ state, then try again.
  • 45696: Database ‘%ls’ was already restored on SQL managed instance ‘%ls’. Please check the database name and ensure it matches a database that is currently in ‘Restoring’ state, then try again.
  • 45697: An existing complete restore request for database ‘%ls’ is already in progress on SQL managed instance ‘%ls’. Please wait for the current request to complete, or try again with a different database that is in ‘Restoring’ state.
  • 45698: Cannot create a database with special characters in its name. Special Characters include \u003C, \u003E, \u002A, \u0025, \u0026, \u003A, \u005C, \u002F, \u003F.
  • 45699: Database copy and geo replication are blocked on databases migrated by whiteglove restore.
  • 45714: Cross tenant CMK can only be configured when the server is configured with a User Assigned Managed Identity. Refer to https://aka.ms/sqltdebyokumi to configure User Assigned Managed Identity.
  • 45715: The preferredEnclaveType is not supported for dedicated SQL Pools.
  • 45716: Configuring the ‘VBS’ preferredEnclaveType for databases using the DC-series hardware configuration is not supported. DC-series databases are pre-configured with Intel SGX enclaves. Virtualization based security (VBS) enclaves are not supported.
  • 45717: Database-level encryption protector must be set on the target database while attempting to copy or replicate a database encrypted with a database level key.
  • 45718: While attempting to replicate a database configured with database level encryption protector, the current keys being used by the primary must be passed to the secondary database.
  • 45719: The geo-primary database must be configured with a database level encryption protector before setting a database level encryption protector on the geo-secondary.
  • 45720: The database {0} on server {1} cannot be safely dropped because it is encrypted with a customer managed key that is no longer accessible to take the last backup before drop. Please restore Azure Key Vault Access on the server and revalidate the keys to make this database accessible and then proceed with the drop. For details see https://aka.ms/tdecmkrestorability
  • 45722: Virtualization-based security (VBS) enclaves are not supported for elastic pools.
  • 45723: Configuring geo-replication is not supported for databases using different values of the preferredEnclaveType property. The primary ‘{0}’ database on the ‘{1}’ server uses the ‘{2}’ preferredEnclaveType, whereas the specified preferredEnclaveType for the secondary ‘{3}’ database on the ‘{4}’ server is ‘{5}’.
  • 45724: Planned failover between primary and secondary database with different preferredEnclaveType is not supported for Virtualization-based security (VBS) enclaves.
  • 45725: The geo-primary database must not be configured with a database level encryption protector when failing over to a geo-secondary without a database level encryption protector.
  • 45726: Operation on job agent is in progress. Please wait a few minutes before retrying.
  • 45727: The create database operation cannot be performed at this time due to ongoing Transparent Data Encryption (TDE) key rotation on the server. Please try the operation again later.
  • 45728: Adding a database with ‘{0}’ preferredEnclaveType to an elastic pool ‘{1}’ with ‘{2}’ preferredEnclaveType is not supported. Before adding the database to the elastic pool, ensure that the preferredEnclaveType is the same for both the database and the elastic pool. More information can be found on https://aka.ms/AlwaysEncryptedEnableSecureEnclaves
  • 45731: The elastic pool contains inaccessible databases which have lost Azure Key Vault access required for TDE configuration. Please move the accessible databases to a different pool for scaling or restore the lost Azure Key Vault access for the inaccessible databases.
  • 45735: Elastic job agent ‘%.*ls’ SLO assignment is in progress.
  • 45736: The given SLO is not supported for job agent. Please retry with the supported SLO.
  • 45746: The operation could not be completed because the Azure Key Vault Key name ‘%ls’ is currently in use.
  • 45747: The operation could not be completed because the Azure Key Vault Key material is different than the current encryption protector key.
  • 45748: Using an old version of AKV key as TDE protector is not allowed. Please use the latest version of the AKV key as the encryption protector.
  • 45749: Per Database CMK identity not setup on the target database.
  • 45750: The create elastic pool operation cannot be performed at this time due to ongoing Transparent Data Encryption (TDE) key rotation on the server. Please try the operation again later.
  • 45771: Import/Export with Private Link cannot be used on resources with locks. Please remove locks on the following resources and try again. %ls.
  • 45772: Perform cutover action is not supported at this time. Details: ‘%.*ls’.
  • 45773: The action failed because no active update-slo operation detected for server: ‘%.*ls’, database: ‘%.*ls’. Please retry the operation by specifying valid parameters.
  • 45774: Invalid use of manual-cutover option for server: ‘%.*ls’, database: ‘%.*ls’. Please check current and requested service objective and retry the operation by specifying valid parameters.
  • 45775: One or more apps involved in UpdateSLO management operation are unhealthy for database: ‘%.*ls’, on server: ‘%.*ls’.
  • 45777: %ls is not yet enabled. %ls
  • 45781: Database: {0}, on server: {1}, in resource group: {2}, in subscription: {3}, is a data warehouse and cannot use UpgradeMeNow.
  • 45782: Database: {0}, on server: {1}, in resource group: {2}, in subscription: {3}, belongs to an elastic pool and cannot use UpgradeMeNow.
  • 45783: Database: {0}, on server: {1}, in resource group: {2}, in subscription: {3}, was not found.
  • 45784: Database: {0}, on server: {1}, in resource group: {2}, in subscription: {3}, does not have UpgradeMeNow enabled.
  • 45785: ElasticPool: {0}, on server: {1}, in resource group: {2}, in subscription: {3}, was not found.
  • 45786: ElasticPool: {0}, on server: {1}, in resource group: {2}, in subscription: {3}, does not have UpgradeMeNow enabled.
  • 45787: ElasticPool: {0}, on server: {1}, in resource group: {2}, in subscription: {3}, does not contain any database.
  • 45788: The operation cannot be performed since the database ‘%ls’ is geo-replicated and is in secondary role. Database size limit updates are not allowed on a geo-secondary. Please try updating the primary database instead. See ‘https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-database-transact-sql-file-and-filegroup-options’ for more information.
  • 45789: The limit of one free database offer for subscription ‘%.*ls’ has been reached. Drop the existing free database or use a different subscription to continue.
  • 45790: Operation on Long Term Retention backup failed because the backup is immutable.
  • 45791: Server ‘%.*ls’ cannot be moved out of subscription ‘%.*ls’ because it contains a free database. Drop or upgrade the existing free database to continue.
  • 45792: Standby Replication limit reached. The database ‘%ls’ cannot have more than %d standby replicas.
  • 45793: You cannot create Freemium SQL Managed Instance with {0} {1}. For more details visit aka.ms/SQLMIFreemium.
  • 45794: You cannot create Freemium SQL Managed Instance in this region. For more details check aka.ms/SQLMIfreemium.
  • 45795: The operation could not be completed because there is an ongoing maintenance on your Managed Instance subnet. All Managed Instance create/update operations for this subnet are prohibited until the maintenance is completed. Please, retry the operation after the current maintenance window is closed. See ‘https://learn.microsoft.com/en-us/azure/azure-sql/database/maintenance-window’ for details on maintenance windows.
  • 45796: A data-file max size consistency error was detected during Hyperscale migration. %ls
  • 45797: Freemium is not enabled for this subscription. For more details check aka.ms/SQLMIfreemium.
  • 45798: The specified edition %ls is not consistent with the instance pool edition %ls.
  • 45799: The specified subnet %ls is not consistent with the instance pool subnet %ls.
  • 45900: The specified hardware family %ls is not consistent with the instance pool hardware family %ls.
  • 45901: The specified license type %ls is not consistent with the instance pool license type %ls.
  • 45902: Changing the database format from ‘{0}’ to ‘{1}’ is not supported. Check the database format property value specified and visit https://aka.ms/sqlmidatabaseformat for more details.
  • 45903: Changing the database format from ‘{0}’ to ‘{1}’ is not supported. Check the database format property value specified and visit https://aka.ms/sqlmidatabaseformat for more details.
  • 45904: Changing the database format is not supported for this managed instance. Check the database format property value specified and visit https://aka.ms/sqlmi-fwnov2022 for more details.
  • 45905: Specifying the database format property value is not supported at this time.
  • 45906: A freemium instance cannot be created in the specified subnet %ls.
  • 45907: GeoDR links are not supported for managed instances inside managed instance pool.
  • 45908: Moving Azure SQL Managed Instance with existing auto-failover group configured into an instance pool is not supported. Remove auto-failover group from this instance, and attempt moving to the instance pool again.
  • 45909: Creation of ZoneRedundant SQL MI is temporarily disabled in this region. You can create Zone Redundant SQL Managed Instances in many other regions in Azure. https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/high-availability-sla?view=azuresql-mi#zone-redundant-availability
  • 45910: Creation of ZoneRedundant SQL MI is temporarily disabled for this hardware generation in this region. You can create Zone Redundant SQL Managed Instances with a different hardware generation in this region, or in many other regions in Azure. https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/high-availability-sla?view=azuresql-mi#zone-redundant-availability
  • 45911: Failover group cannot be created because database format of the primary instance is not compatible with the database format of the secondary instance. See https://go.microsoft.com/fwlink/?linkid=2251601 for more details.
  • 45912: Database format change must be performed on the secondary instance of the failover group first. See https://go.microsoft.com/fwlink/?linkid=2251601 for more details.
  • 45913: Operation cannot be performed because database format of the source instance is not compatible with the database format of the destination instance. See https://go.microsoft.com/fwlink/?linkid=2251601 for more details.
  • 45914: Restore operation cannot be performed because database format of the source instance is not compatible with the database format of the destination instance. See https://go.microsoft.com/fwlink/?linkid=2251601 for more details.
  • 45915: Failover operation cannot be performed because of database format mismatch between the instances in the failover group. Consider upgrading the database format of the primary instance or deleting the failover group to promote current secondary instance to a stand-alone instance. See https://go.microsoft.com/fwlink/?linkid=2251601 for more details.
  • 45917: Subscription {0} already has Free SQL Managed Instance. Only one free SQL Managed Instance is allowed per subscription.
  • 45918: The ‘%.*ls’ operation cannot be completed as there exists a database in a process of creation through failover group deployment. Please wait for the failover group deployment to complete, or alternatively delete the failover group and retry the operation again.
  • 45919: Converting to Free SQL Managed instance is not allowed. You can only apply Free offer when you create a new SQL Managed Instance.
  • 45920: Upgrade failed. To successfully upgrade to a paid offer, ensure that you’re not making any other changes to your SQL Managed Instance configuration, such as adjusting vCores or storage.
  • 45921: You cannot stop an instance that has a running operation on its instance pool. Please wait for the ongoing operation to finish or cancel it, then try again. ErrorCause: Stop cannot be executed due to a conflicting operation on the instance pool.
  • 45922: Maintenance window is set at instance pool level and cannot be set individually per pooled instance.
  • 45923: Cannot move the instance into the pool due to pool database count limit.
  • 45924: Cannot move a zone-redundant instance into the pool as zone-redundancy is not supported in pools.
  • 45925: Deploying managed instances and instance pools into private subnets is not supported.
  • 45926: Deploying failover group is not supported with the free Azure SQL Managed Instance offer.
  • 45927: Unable to determine target service level objective. If restoring from a long-term retention backup of a database that was in an elastic pool, either the target elastic pool name or the target service level objective must be supplied. If the source database was dropped, use Azure PowerShell to specify the target service level objective.
  • 45928: The server ‘%ls’ already exists in this subscription in a soft-deleted state. You cannot create a server named ‘%ls’ until the existing soft-deleted server is removed. If you are trying to recover this server or force-delete it to create a new empty server of the same name, please refer to https://aka.ms/restoredeletedazuresqlserver. Please contact Microsoft support if further assistance is needed.
  • 45929: Operation ‘{0}’ could not be completed because the minimum TLS version required is 1.2 or higher. Earlier versions of TLS are no longer supported as they are considered deprecated.
  • 45930: The limit of free databases for subscription ‘%.*ls’ has been reached as part of the free database offer. To continue using for free, drop an existing free database.
  • 45931: Cannot resume continuous copy if the database being resumed is already a geo-secondary for another link, or if the database being resumed is in a failover group and the link being resumed is not a failover group link.
  • 45932: Cannot change continuous copy primary to a database that is outside of the failover group.
  • 45933: Instance maintenance window must be set to match instance pool maintenance window.
  • 46554: External Table ‘%.*ls’ contains corrupted metadata. Drop external table and create a new one.
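46554’s prescription is literal: drop the external table and recreate it. A sketch with hypothetical names, assuming the external data source and file format already exist:

    DROP EXTERNAL TABLE dbo.ExtSales;
    CREATE EXTERNAL TABLE dbo.ExtSales
    (
        SaleId int,
        Amount decimal(10, 2)
    )
    WITH
    (
        LOCATION = '/sales/',            -- hypothetical folder path
        DATA_SOURCE = MyLakeSource,      -- hypothetical, pre-existing
        FILE_FORMAT = MyParquetFormat    -- hypothetical, pre-existing
    );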
  • 46671: INTERNAL TABLE
  • 46672: MODEL_PROVIDER
  • 46673: MODEL_TYPE
  • 46674: MODEL
  • 46675: DEFAULT_PARAMETER
  • 46954: Number of columns specified in ‘%ls’ exceeds the number of columns in external table.
  • 46955: External table not found or you do not have permission to access it. Underlying data description: %ls.
  • 46956: Failed to extract rows from external table. Underlying data description: %ls.
  • 46957: Error encountered while extracting data from external table ‘%ls’. Underlying data description: %ls.
  • 46958: Columns ‘%ls’ were not found when querying external table. Underlying data description: %ls.
  • 46959: Error encountered in ‘%S_MSG’ parameter. Expected three part name, but found ‘%ls’.
  • 46960: Tried to reuse default credential name ‘%.*ls’, but its IDENTITY attribute is not ‘USER IDENTITY’.
  • 46961: Tried to reuse default external data source name ‘%.*ls’, but its LOCATION attribute does not match the table location.
  • 46962: Tried to reuse default external file format name ‘%.*ls’, but its FORMAT_TYPE attribute ‘%.*ls’ is not consistent with the default file format name.
  • 46963: Tried to use a default external file format for file ‘%.*ls’, but could not deduce a supported FORMAT_TYPE for the given extension.
  • 46964: Failed to create the default credential, data source, or file format.
  • 47079: Login was denied since permissions were requested on an invalid database.
  • 47080: User has insufficient permissions to complete the login.
  • 47099: SQL Authentication connections are not supported.
  • 47607: Reason: Login failure in mpdw
  • 47608: Reason: Login ack failure in mpdw
  • 47609: Conditional Access Policy is blocking usage of Service Principal.
  • 47610: Reason: An update SLO operation is in progress. The external connections are denied.
  • 47611: Reason: A Geo failover operation is in progress. The external connections are denied.
  • 47612: Reason: The database is in dropping state. The external connections are denied.
  • 47613: Reason: The login cannot obtain a LOCK resource at this time. This may be caused by insufficient resources or other queries holding locks on the system tables and/or system objects required for login processing. Query sys.dm_tran_locks to see if any sessions are holding Sch-S or Sch-M locks, and query sys.dm_tran_database_transactions and sys.dm_tran_session_transactions to see if there are long-running transactions holding these locks.
  • 47614: Reason: The database failover caused the login failure.
  • 47615: Reason: The instance could not start up properly because it does not have access to critical resources and it does not allow user connections. Only backend connections are allowed.
  • 47616: Reason: The login cannot obtain a LOCK resource at this time because the database is being updated.
  • 47617: Reason: Mwc max user connection is reached.
  • 47618: Reason: Mwc max system connection is reached.
  • 47619: Reason: Validation of user’s permissions failed. Verify the user has the Read item permission. For more information, see https://go.microsoft.com/fwlink/?linkid=2281595.
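Message 47613 up above is unusually helpful: it names its own diagnostic queries. A minimal sketch of the schema-lock check it suggests:

    -- Who's holding or waiting on schema locks that could block login processing?
    SELECT request_session_id, resource_type, resource_database_id,
           request_mode, request_status
    FROM sys.dm_tran_locks
    WHERE request_mode IN (N'Sch-S', N'Sch-M');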
  • 49421: Any ALTER operation on foreign file through pageserver is not supported.
  • 49422: Updating database ‘%.*ls’ physical id failed. The usage is not impacted, please contact product group for manual mitigation.
  • 49423: There was a failure in the log service with physical database ID ‘%ls’ while executing an operation. HRESULT ‘0x%x’. Reason: ‘%ls’.
  • 49424: Transformation of the log file to Hyperscale format failed.
  • 49425: Oldest begin LSN is smaller than foreign redo LSN which will prevent Page Server creation. Operation will be rolled back.
  • 49426: Grow of existing striped file failed.
  • 49427: Foreign File Validator encountered an exception: ‘%ls’. Logical DBID: ‘%ls’, Physical DBID: ‘%ls’.
  • 49428: Could not access datafile during upgrade.
  • 49519: DBCC SHRINKDATABASE is not supported in this version of SQL Server. Contact Customer Support Services for more information.
  • 49520: Specifying the database_id parameter in DBCC SHRINKFILE is not supported in this version of SQL Server.
  • 49521: DBCC SHRINKFILE cannot shrink data files when used with database_id parameter.
  • 49522: DBCC operation on database id %d could not add new cache entries for Index Repair.
  • 49523: The operation failed because the Index Repair Manager is not available.
  • 49524: The operation failed because the Index Repair Feature is not supported for the object type %S_MSG (object ID %ld).
  • 49525: User ‘%.*ls’ does not have permission to use the Index Repair Feature for object ID %ld.
  • 49526: An internal error occurred. Unable to get lock on the cache used by the Index Repair Manager.
  • 49527: The operation failed because the Index Repair Feature is not supported for temp tables.
  • 49528: The input row handle could not be decoded. Please use the sys.dm_db_logical_index_corruptions DMV to obtain the row handle.
  • 49529: Index repair failed. Please verify that the corruption exists using DBCC CHECK commands and use the sys.dm_db_logical_index_corruptions DMV to get the row handle for the repair. Please contact support if the error persists.
  • 49530: The requested repair finished successfully.
  • 49531: The operation failed because the internal repair command is invalid.
  • 49532: DBCC SHRINKFILE for data files is not supported in a Hyperscale database when the database is not encrypted or a protector change is in progress. Enable transparent data encryption or wait for the protector change to complete and try again. Refer to sys.dm_database_encryption_keys for more details.
  • 49533: DBCC SHRINKDATABASE is not supported in a Hyperscale database when the database is not encrypted or a protector change is in progress. Enable transparent data encryption or wait for the protector change to complete and try again. Refer to sys.dm_database_encryption_keys for more details.
  • 49534: The input row handle is invalid for the given object ID and index ID. Please verify that the corruption exists using DBCC CHECK commands and use the output of the sys.dm_db_logical_index_corruptions DMV for the repair.
  • 49535: DBCC SHRINKFILE for fileId %d completed, initial size (%I64d KB), current size (%I64d KB). Please retry DBCC SHRINKFILE with a target size well above the minimum supported Hyperscale database size.
  • 49536: Cleanup of Index Repair cache entries failed for database ID %d, object ID %d, index ID %d. No new entries will be added to the cache.
  • 49537: The shrink operation was aborted because a page to be moved by shrink is in use by an active transaction on the primary replica or on one or more secondary replicas. Retry shrink later.
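Messages 49519 through 49537 all orbit shrink and index repair on Hyperscale. For reference, the call they’re guarding looks like this, with a hypothetical database name (target size is in MB):

    USE MyHyperscaleDB;            -- hypothetical database
    DBCC SHRINKFILE (1, 10240);    -- file_id 1, target size 10,240 MB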
  • 49923: Managed Server ‘%s’ is disabled. Please enable it and try again.
  • 49968: SQL Server shutdown has been initiated with exit code %d.