Registration Open for My SQLSaturday Sacramento Workshop

I’m coming to Sacramento this year for SQLSaturday #885. On Friday, I’ll be teaching a one-day pre-con:

Performance Tuning in 21 Demos – You’re a developer or DBA who wants to tune indexes and queries to make your application go faster. You don’t like learning by watching slides or reading white papers – you want to jump right into the code and see what happens. You’re the kind of person who loves watching YouTube videos that start with, “Hold my beer and watch this.”

Attendees get their choice of a Recorded Class Season Pass or a Consultant Toolkit – so the pre-con pays for itself! Learn more and register now.

Can’t make it to Sacramento? Here’s where to find me this spring:

Should we use stored procedures or queries built in the app?

A client asked a great architecture question that deserved its own blog post:

Should we use more “stored procedures” for select, insert, update, delete or should that just be sent as queries from the application?

If you ask a junior database administrator where to put something, she’ll probably say “in the database” because that’s the thing she has control over. That’s fair – DBAs love having control over stuff. However, let’s step back a little.

If your shop lacks T-SQL expertise, but has a lot of sharp senior application developers, then it may be easier to keep the queries in the app. If you simply make a mandate that “all queries come from stored procedures,” but you don’t give your staff the training necessary to write fast, scalable stored procedures, then the result won’t perform. Database administrators love to bash the queries written by ORMs like Entity Framework and nHibernate, but the reality is that these days, those tools write better queries than untrained developers. Sure, in a perfect world, we’d all be perfectly trained on all of the tools we use, but….

If your shop is good at testing & source controlling app code, but doesn’t have that same level of expertise for testing & source controlling stored procedures, then you’ll probably find versioning and deployment easier when the queries are built into the app code. In the year 2019, testing & source controlling database objects like stored procedures and functions still isn’t quite as easy as app code. It’s getting better, but it’s not there yet.

If you’re good at measuring app code performance, like if you’re good with code speed tools like New Relic that instrument every step of your application code, then you’ll be able to more naturally track down slow queries with those tools. However, if you’re not good at app performance monitoring, you may find it easier to track down slow queries with plan cache tools like sp_BlitzCache.

If you have predictable performance needs, like your business has the same exact workload for months straight, then you can keep code in the application tier. When you start to see performance issues, you can make gradual changes to the code, test that as part of your regular testing processes, and gradually deploy those changes out through your normal release process.

However, if your business has extremely unpredictable performance needs, like a sudden high volume of transactions every few months, and if you don’t do a great job of performance testing ahead of time, then stored procedures can be a better fit. In the heat of a performance emergency, it can be easier to change and deploy a new stored procedure than deploy an entire new build of your application. For example, during one client’s emergency, I implemented this bizarro stored procedure caching technique because it meant the difference between making money, versus their online store being…not online.

In this client’s case, with a team of sharp app developers, none of which had database training, and no database developers or DBAs on the team, the answer was clear: keep the queries in the code. I do try to give folks an avenue to up their game if they want, though – and in your case, if you’ve got developers who want to level up, start them on Itzik’s classic, T-SQL Fundamentals.

What Are You Going to Do About SQL Server 2008?

Last week, you used SQL ConstantCare® to analyze 2,396 SQL Servers across a variety of versions:

Alas, poor SQL Server 2008

  • 15% (362 instances) SQL Server 2008 and 2008 R2
  • 18% (438) SQL Server 2012
  • 27% (650) SQL Server 2014
  • 32% (777) SQL Server 2016
  • 7% (169) SQL Server 2017

That means 15% of your servers are going to be out of support in about 106 days: SQL Server 2008 & R2 both end support on July 9th.

So I’m curious:

  1. What does your company plan to do about that?
  2. If the answer is “nothing,” what have you tried to convince them otherwise? Or have you?
  3. What’s the single biggest thing stopping you from moving to a supported version?

I’m asking because it influences how we support these older versions in the First Responder Kit, SQL ConstantCare, and Consultant Toolkit. For years, I’ve said, “If Microsoft can’t support it, neither can we,” but given the relatively high market penetration of these old versions, I’m not sure I can still say that. (After all, if I look at the market numbers, it’s more profitable to support 2008 and R2 right now than it is to support 2017!)

What happens when you cancel or kill a resumable index creation?

SQL Server 2019 adds resumable online index creation, and it’s pretty spiffy:
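
The index creation statement itself isn’t reproduced here, so here’s a sketch of the shape being described – hypothetical index and table names, with the three options the post goes on to explain:

    CREATE INDEX IX_LastAccessDate
        ON dbo.Users (LastAccessDate)
        WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 1);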

Those parameters mean:

  • ONLINE = ON means you’ve got the money for Enterprise Edition
  • RESUMABLE = ON means you can pause the index creation and start it again later
  • MAX_DURATION = 1 means work for 1 minute, and then gracefully pause yourself to pick up again later

If you decide you want to pause the index, run this command in another session:
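
The pause command isn’t shown here either; against the hypothetical index above, it’s just:

    ALTER INDEX IX_LastAccessDate ON dbo.Users PAUSE;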

When you do that, the index creation’s session fails with a bit of an ugly error:

Aborted index creation

Okay, well, that’s not exactly what I’d expect from “pausing” something, but that’s okay. There’s a new DMV, sys.index_resumable_operations, that shows the paused index creation:

sys.index_resumable_operations

It shows the syntax used to create the index, the percentage complete, when it was last paused, and more. If you want to give it a happy ending or abort it, it’s your choice:
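
Using the same hypothetical index, the two choices look like this:

    -- Pick the index creation back up where it left off:
    ALTER INDEX IX_LastAccessDate ON dbo.Users RESUME;

    -- Or throw away the work done so far:
    ALTER INDEX IX_LastAccessDate ON dbo.Users ABORT;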

So far, so good.

The first catch: a paused index is stealthy.

It doesn’t show up in sys.indexes, nor does it show up in the Object Explorer list of indexes on a table. That makes it sound like it’s harmless, but it’s not. I can’t drop the table:

Resumable index operation error

Nor can I create any other index – same error:

Index creation error

Tricky. New stuff for developers to watch out for in their deployment scripts.

The next catch: killing or canceling an index creation, doesn’t.

In SSMS, you’re used to being able to click the “Cancel” button on your query, and having your work rolled back.

You’re also used to being able to kill a query, and have it automatically roll back.

Neither of those are true with resumable index creations. In both cases, whether you kill the index creation statement or just hit the Cancel button in SSMS to abort your request, your index creation statement is simply paused until you’re ready to come back to it. (Or, it’s ready to come back to haunt you, as we saw above.)

So you’ve learned your lesson, right? Don’t use the resumable parameter if you don’t specifically need it. As long as you don’t ask for it, you should be fine…right? Wrong.

The final gotcha: you might get resumable even when you don’t ask for it.

SQL Server 2019 adds these database-level configuration options:
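
The options aren’t reproduced here, but they’re presumably the new ELEVATE_ONLINE and ELEVATE_RESUMABLE database scoped configurations, which look like this:

    ALTER DATABASE SCOPED CONFIGURATION SET ELEVATE_ONLINE = WHEN_SUPPORTED;
    ALTER DATABASE SCOPED CONFIGURATION SET ELEVATE_RESUMABLE = WHEN_SUPPORTED;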

This means whenever you create an index, it’ll default to online creation and resumable even when you don’t specify it as part of your index creation statement! Translation: whenever you go to create any index, if you decide it’s a bad idea and click Cancel, your index creation is still left online partway done, lying in wait to disrupt other operations.

That’s why I’ve added an sp_Blitz check for resumable index operations that have been left behind. It’s in the dev branch now, and it’ll be in the April release. Enjoy!

Updated First Responder Kit and Consultant Toolkit for March 2019

This month, just half a dozen bug fixes, plus a new parameter for sp_BlitzCache to search the plan cache for specific strings. To get the new version:

Consultant Toolkit Changes

  • Improvement: new “CPU Now” tab shows the last few hours of CPU % along with a chart.
  • Improvement: more query plans without deep dive. By default, if you run the app without the deep dive switch, as long as it’s running quickly (less than 3 minutes), it’ll populate more of the plan cache tabs. It uses sp_BlitzCache’s @SkipAnalysis = 1 parameter, so the plans are gathered, but we don’t run the time-intensive analysis rules to tell you what’s wrong with each plan. If you want all the tabs & columns populated, use the deep dive switch.
  • Fix: size tab now populates even if user database collations don’t match system databases.
  • Fix: uptime tab no longer shows “RTM” for patch level on 2017/2019.
  • Plus all of the First Responder Kit changes below:

sp_Blitz Changes

  • Fix: ignore backups-on-same-drive rule on Linux since it’s harder to detect mount points over there. (#1995, thanks UCJonathan for the bug report and Brandon (bsquidwrd) for the code.)

sp_BlitzCache Changes

  • Improvement: new @SlowlySearchPlansFor parameter lets you look for strings in the plan cache. Useful for when you wanna figure out what query is asking for a particular missing index, find out which query is using an index, or which query is calling a function. This one’s very new and untested, so if you find stuff, drop us a line in a new issue. (#2000)

sp_BlitzFirst Changes

  • Fix: when calling sp_BlitzCache, it was passing an invalid @SortOrder parameter, and since sp_BlitzCache started purposely throwing sev 16 errors in the #1945 enhancement, that caused the sp_BlitzFirst Agent jobs to fail. (#1983, thanks Bill Mrazik for the bug report.)

sp_BlitzIndex Changes

  • Fix: saving results to table was broken in last month’s release. Doh! (#1988, thanks Matthew Monroe for the report & fix.)
  • Fix: simplified missing index recommendations – they no longer say “with high impact” or “with low impact.” (#1991)

sp_DatabaseRestore Changes

  • Fix: the @StopAt value was not being honored when @ContinueLogs = 1. (#1986, thanks Troy Jennings for the report & fix, and ZLThomps1 for the assist.)
  • Fix: if you asked to restore diffs and logs, but no diffs were taken yet, the logs were being skipped too. (#1998, thanks Frederik Vanderhaegen.)

For Support

When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. If you need a free invite, hit SQLslack.com. Be patient – it’s staffed with volunteers who have day jobs.

When you find a bug or want something changed, read the contributing.md file.

When you have a question about what the scripts found, first make sure you read the “More Details” URL for any warning you find. We put a lot of work into documentation, and we wouldn’t want someone to yell at you to go read the fine manual. After that, when you’ve still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes us!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you’re working with.

How to fix the error “String or binary data would be truncated”

To fix this error, patch to SQL Server 2016 SP2, CU6 or newer (including SQL Server 2017), and then turn on trace flag 460. You can enable it at the query level or at the server level.

First, let’s see the error happen: let’s create a table with small fields, and then try to insert more data than it holds.
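
The demo script isn’t embedded here, so here’s a guess at its shape – hypothetical table, columns, and values, with a 20-character column and a longer string headed its way:

    CREATE TABLE dbo.People
        (FirstName VARCHAR(20),
         CarMakeAndModel VARCHAR(20));

    INSERT INTO dbo.People (FirstName, CarMakeAndModel)
    VALUES ('Baby', '1963 Chevrolet Corvette Sting Ray');  -- 33 characters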

Baby’s car is longer than 20 characters, so when the insert statement runs, we get an error:

String or binary data would be truncated

That sucks because we have no idea which field is causing the problem! It’s especially terrible when you’re trying to insert lots of rows.

To fix it, turn on trace flag 460.

Trace flag 460 was introduced in SQL Server 2016 Service Pack 2, Cumulative Update 6, and in SQL Server 2017. (You can find & download the most recent updates at SQLServerUpdates.com.) You can turn it on at the query level, like this:
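
Reusing the hypothetical table from the earlier sketch, the query-level version is the QUERYTRACEON hint:

    INSERT INTO dbo.People (FirstName, CarMakeAndModel)
    VALUES ('Baby', '1963 Chevrolet Corvette Sting Ray')
    OPTION (QUERYTRACEON 460);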

And now when the query runs, it shows you which column is getting clipped, and which row, too. In our case, we’ve only got one row – but in your real-life data, you’ll be much happier knowing which row’s triggering the error:

You can turn on this trace flag at the query level as shown above, or at the server level:
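
Server-wide, that’s the standard global trace flag syntax:

    DBCC TRACEON(460, -1);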

That turns it on for everybody, not just you – so get everybody on your team to agree before you turn it on. This changes the error message number from 8152 to 2628 (as shown above), which means if you’ve built error handling based on the error number, you’re suddenly going to start getting different behavior.

I’m a fan of turning this trace flag on while I’m doing troubleshooting, and then once I’ve found the culprit, turning it back off again:
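
Something along these lines:

    -- Turn it on globally while troubleshooting:
    DBCC TRACEON(460, -1);

    -- ...identify the offending column and row, then turn it back off:
    DBCC TRACEOFF(460, -1);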

In our case, once we’ve identified that Baby’s car is too large, we either need to change his car, or change the datatype in our table to make it bigger, or cast the data on the way in to explicitly strip the extra length off his car. A data chop shop, if you will.

Don’t leave this trace flag enabled.

There’s at least one bug with it as of today on SQL Server 2017 CU13: table variables will throw errors saying their contents are being truncated even when no data is going into them. Andrew Pruski reports:

Sadly, that behavior is NOT fixed, and here’s the simplest repro script I could build:
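
The repro isn’t reproduced here, so this is only a rough guess at its shape based on the description: a table variable insert that never actually executes, yet still threw the truncation error on 2017 CU13 with trace flag 460 enabled. The table and string are hypothetical.

    DBCC TRACEON(460);

    DECLARE @Cars TABLE (CarMakeAndModel VARCHAR(20));

    IF 1 = 0  -- this branch never runs, but the error was reported anyway
    BEGIN
        INSERT INTO @Cars (CarMakeAndModel)
        VALUES ('1963 Chevrolet Corvette Sting Ray');
    END;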

SQL Server 2017 CU13 still reports that the string will be truncated even though the insert doesn’t run:

Switch out the table variable for a temp table, and it works fine, as expected:

If you want to follow progress on that bug getting fixed, it’s here. Such a great example of why I’m not a fan of using trace flags by default – sure, they can fix issues, but they can also introduce unpredicted, unwanted behaviors. (And, uh, not a fan of table variables either.)

Update 2019/03/25 – the above bug is fixed in 2017 Cumulative Update 14. Yay!

Registration Open for My SQLSaturday Boston Pre-Con

I’m coming to Boston this year for SQLSaturday #877. On Friday, I’ll be teaching a one-day pre-con:

Performance Tuning in 21 Demos – You’re a developer or DBA who wants to tune indexes and queries to make your application go faster. You don’t like learning by watching slides or reading white papers – you want to jump right into the code and see what happens. You’re the kind of person who loves watching YouTube videos that start with, “Hold my beer and watch this.”

Attendees get their choice of a Recorded Class Season Pass or a Consultant Toolkit – so the pre-con pays for itself! Learn more and register now.

Can’t make it to Boston? Here’s where to find me this spring:

Poll Results: What Conference Schedule Did You Prefer?

Last week, I asked you a couple of questions:

When you attend a 1-day conference like a SQLSaturday or SQL Relay, what is your preferred session schedule?

You said:

  • 31.8% (211 votes): 6 60-minute sessions
  • 31.3% (208 votes): A mix of 45-minute and 90-minute sessions
  • 25.3% (168 votes): 7 45-minute sessions
  • 11.6% (77 votes): 5 75-minute sessions

Other ways to think about that:

  • 88.4% do not want 75-minute sessions
  • 56.6% want 45-minute sessions as building blocks, and most of those voters want some of the sessions to be double-length

I know that’s frustrating for some presenters to hear because presenters want all of the time they can get, but the reality is that during a 1-day conference, attendees want more variety.

When you attend a 2-3 day conference like the PASS Summit or SQLbits, what is your preferred session schedule?

You said:

  • 50.2% (333 votes): A mix of 45-minute and 90-minute sessions
  • 20.2% (134 votes): 6 60-minute sessions
  • 18.2% (121 votes): 5 75-minute sessions
    (or to put it another way, 81.8% of voters do not want this)
  • 11.4% (76 votes): 7 45-minute sessions

I expected the numbers to be different during a longer event (thus the two separate questions), and they were – but in both cases, they were a strong vote against the default 75-minute session length of many events.

Attendees like 45-minute blocks, not 75.

This confirms something I’ve found when polling my own training class attendees who told me things like:

  • 75 minutes is too long to listen to one speaker without a break
  • Speakers try to cover too much ground, trying to bring people from 100-level to 300-level in a single session
  • When given a 75-minute slot, speakers leave too much time for Q&A – when instead they should take questions during the break, next to the podium, after the session completes

As a presenter, my first reaction was to fight the attendees – to tell them they don’t understand what presenters are trying to do, or how we as presenters could do better, or that they need to adjust their expectations.

That doesn’t work.

Attendees are the customers, and we presenters need to spend some time listening and adapting our work to meet the customers’ requests. They want shorter sessions, and they want us to deliver more knowledge in less time.

If you’re a presenter, this means your abstracts need to clearly define your ideal session attendee, then stick to it. As you plan your abstract, write down what the ideal session attendee already knows, and don’t try to cover that stuff in the session. Write down what’s going to be out of scope too – you simply can’t take someone from 100 level to 300 level inside a single 75-minute session. (You certainly can’t do it in 45, either – and if conferences listen to attendee feedback, shorter sessions might start becoming the new normal.)

In my own classes – both one-day pre-cons and multi-day classes – I’ve taken this feedback to heart by aiming to teach in 45-60 minute modules rather than 60-75 minutes. At first, it felt like a more stuttered agenda, taking a 15-minute bio break every 45 minutes, but it does seem to result in more focused attendees who are better able to digest material through the entire day.

What are students saying about my latest round of classes?

Mastering Index Tuning has a fresh set of reviews from the latest students.

Levente Kalotai wrote:

Absolutely loving it. The class, Brent style, everything. I have to admit, I had my reservations before the class, but boy I was wrong about it. Less than 5 minutes in the first lab anyone can see the insane amount of work what was put into the class. This admiration only grows through the 3 days as everything is marching toward the last lab. I am really grateful to Brent and his team to use their extensive real-life knowledge and put this class together. Cannot wait till Query Tuning starts, probably I will start the previous recordings.

Katie Walsh wrote:

This class was just top notch. Brent is very entertaining and he explains the material in such a clear and concise manner. I learned a ton. Looking forward to the next class

Jay Taylor wrote:

Thorough and entertaining. Brent hits Sql Server indexing’s high points, low points and all points in between. Then he gives you a lab to do the work to let the lecture sink in. During the lab, he’ll help you out, in real time. And later you can watch the session again and do the labs again, on your own. It’s a great way to learn. I wish there were similar courses for C#, Angular and Javascript.

Mohammad Darab wrote:

Brent’s way of teaching is unmatched! This class is an absolute must for SQL Server professionals. Perfect amount of lecture, lab time, and Q&A.

Michael DeMore wrote:

The Master Index Tuning class is great. From the way the material is presented, to working the labs I really enjoyed this class. In fact, I have take 4 of Brents classes and they all are wonderful. I do highly recommend. And since I purchased the season pass, I look forward to the next class I take.

Joe D’Aquila wrote:

Excellent! Informative and entertaining. There was the right amount of structure and freedom. The lab machines and tools were top notch. I left the session eager to index tune. I am very much looking forward to the rest of the training classes in this series. Thank you Brent!

Wanna join in on the fun? Here are my upcoming classes:

Wanna take ’em all? Check out my Live Class Season Pass. Most of my students get this so they can take all of my classes for a year straight, and even re-take classes multiple times depending on their schedules.

Pop quiz: what do these things cost?

It’s been a few years since we’ve stopped in for a sanity check. Quick: without searching the web for prices, put these items in order. Do it on scratch paper, guessing how much you think each one costs, then arrange ’em.

  • SQL Server 2017 Standard Edition licensing for 4 cores
  • SQL Server 2017 Enterprise Edition licensing for 1 core
  • 64GB of memory for a modern server (Dell r740)
  • 256GB of memory for a modern server (Dell r740)
  • A mirrored pair of 4TB PCIe SSDs
  • One year of an 8-core, 61GB RAM, 1.9TB SSD VM in EC2 (i3.2xlarge)
  • One month of an 8-core, 40GB RAM Azure Managed Instance (G5 BC)
  • One typical day of IT employee time (say, developer, sysadmin, DBA)
  • One week of a team’s time (4 people)

And I’ll make it a little easier for you: group them into these buckets:

  • Under $2,000
  • $2,000-$4,000
  • $5,000-$10,000
  • Above $10K

Without searching the web, write down on scratch paper what you think they cost, and put them in order. When you’re done, check yourself in the comments.

I find that if you haven’t been shopping for this stuff recently, you might be spending a little too much time slaving over a hot Pentium 4.

How to Implement In-Memory OLTP Quickly and Easily

When you first hear about it, SQL Server’s In-Memory OLTP sounds like a new feature that’s hard to implement, requires a lot of schema changes, and takes planning and downtime.

I’m here to tell you it’s simply not true, and you can do it in just 3 steps. Here’s how:

  1. Read Microsoft’s guidance on how much memory you’ll need
  2. Give your SQL Server that much
  3. Set your max server memory setting appropriately, leaving 4GB or 10% free for the OS, whichever is higher
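
For step 3, a minimal sketch, assuming a server with 256GB of RAM (so 10% is the bigger number, leaving roughly 26GB for the OS):

    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;

    EXEC sys.sp_configure 'max server memory (MB)', 235000;  -- ~256GB minus ~26GB for the OS
    RECONFIGURE;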

That’s it. That’s all there is.

Forget the feature named “In-Memory OLTP” and just put the hardware in that the feature would require anyway. You’re going to have to do it sooner or later, so just start there. Before you go writing the necessary conflict detection code, trying to rip out your unsupported cross-database queries, messing around with hash index troubleshooting, or stressing out about their statistics problems, just buy the damn RAM.

You’d be stunned at how often that simply makes the performance issues go away.

In Azure SQL DB, what does “The connection is broken and recovery is not possible” mean?

All it really means is, “Click Execute again, and you will be fine.”

If I had a dollar for every time I saw this error message in SQL Server Management Studio:

The connection is broken and recovery is not possible. The connection is marked by the server as unrecoverable. All you have to do is hit F5, but I can't be bothered to do that for you.

The error says:

The connection is broken and recovery is not possible. The connection is marked by the server as unrecoverable. No attempt was made to restore the connection. (Microsoft SQL Server)

That’s not what it means though. Here’s what it really means:

Click Execute again, and things will be fine.

It’s not that your network is broken, or that your IP address changed, or that there’s a firewall error. Something just happened behind the scenes where Azure decided your connection was idle and it needed to do something on the server side.

I wish Management Studio wouldn’t give in quite so easily. Just run your query again, and you’ll be fine.

Are nulls stored in a nonclustered index?

When you index a nullable field, are the rows with nulls stored in the index? It’s easy enough to find out by creating a table with a nullable field, and then creating an index on it:
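
The demo script isn’t included here, so here’s a minimal sketch with hypothetical names:

    CREATE TABLE dbo.NullDemo
        (Id INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
         Age INT NULL);

    INSERT INTO dbo.NullDemo (Age)
    VALUES (26), (NULL), (37), (NULL);

    CREATE INDEX IX_Age ON dbo.NullDemo (Age);

    -- Will the index get used to find the nulls?
    SELECT Id, Age
    FROM dbo.NullDemo
    WHERE Age IS NULL;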

Here’s the plan:

Yep, SQL Server stores the nulls in the index. That makes sense, really, because sooner or later, you’re going to want to find the rows with nulls, and when you’ve only got a few nulls in a big table, the index can help a lot there.

To learn more about this stuff in action, watch my free 90-minute course, How to Think Like the SQL Server Engine.

What sessions do you want to see at GroupBy next month?

GroupBy.org is a free online conference run by the community, for the community. You get to pick the sessions by voting.

Voting is open now through March 15 for the next event, happening April 11. Here are some of the sessions you can choose from:

  • Just Enough Database Theory for Power BI
  • Linux OS Fundamentals for the SQL Admin
  • Marketing for SQL Consultants and Business Owners
  • PowerShell for the SQL DBA
  • SQL Server Memory Internals & Troubleshooting
  • Think Like a Certification Exam

Go vote now. It’s totally free, no registration required.

Can deleting rows make a table…bigger?

Michael J. Swart posted an interesting question: he had a large table with 7.5 billion rows and 5 indexes. When he deleted 10 million rows, he noticed that the indexes were getting larger, not smaller.

Here’s one way that deletes can cause a table to grow: once RCSI, snapshot isolation, or an availability group comes into the picture, every row you modify – and deleted rows count – picks up a 14-byte version store pointer. Rows written before versioning was turned on don’t have room set aside for that pointer, so adding it forces page splits and the objects grow.

To demonstrate the growth, I took the Stack Overflow database (which doesn’t have RCSI enabled) and created a bunch of indexes on the Posts table. I checked index sizes with sp_BlitzIndex @Mode = 2 (copy/pasted into a spreadsheet, and cleaned up a little to maximize info density):
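
The setup script isn’t shown here; it would have looked something like this sketch – hypothetical index names (the AcceptedAnswerId indexes are the ones discussed below), followed by the sp_BlitzIndex call from the text:

    CREATE INDEX IX_AcceptedAnswerId ON dbo.Posts (AcceptedAnswerId);
    CREATE INDEX IX_AcceptedAnswerId_Score ON dbo.Posts (AcceptedAnswerId) INCLUDE (Score);

    EXEC sp_BlitzIndex @DatabaseName = 'StackOverflow', @Mode = 2;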

sp_BlitzIndex before

I then deleted about half of the rows:
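
The post doesn’t show which versioning feature was switched on for the demo or the exact delete statement, so this sketch assumes RCSI was enabled just beforehand and uses a hypothetical predicate for “about half”:

    -- Assumption: versioning has to be on for the 14-byte stamps to show up.
    ALTER DATABASE StackOverflow SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

    DELETE dbo.Posts
    WHERE Id % 2 = 0;  -- roughly half the rows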

Amusingly, while the deletes were happening, the data file was growing to accommodate the timestamps, too! The SSMS Disk Usage Report shows the growth events – here’s just the top to illustrate:

SSMS Disk Usage Report

(Gotta love a demo where deletes make the database grow.) While the delete was running, I ran sp_BlitzIndex again. Note that the clustered index has fewer rows, but its size has already grown by about 1.5GB. The nonclustered indexes on AcceptedAnswerId have grown dramatically – they’re indexes on a small value that’s mostly null, so their index sizes have nearly doubled!

I don’t have to wait for the deletion to finish to prove that out, so I’ll stop the demo there. Point being: when you do big deletions on a table that was implemented before RCSI, SI, or AGs were enabled, the indexes (including the clustered) can actually grow to accommodate the addition of the version store timestamp.

Things you might take away from this:

  • RCSI, SI, and AGs have cost overheads you might not initially consider
  • After enabling RCSI, SI, or an AG, rebuild your indexes – not that it’s going to make your life better, it’s just that you can control when the page splits, object growth, fragmentation, and logged activity happens instead of waiting around for it to hit when you’re in a rush to do something
  • Offline index rebuilds seem to have a related effect too – Swart noted in a comment that when he tested them, they were rebuilt without the 14-byte version stamp, meaning that subsequent updates and deletes incurred the page splits and object growth
  • I have really strange hobbies

[Video] Demoing SQL Server 2019’s new Accelerated Database Recovery

Wouldn’t it be nice if rollbacks just finished instantly? Wouldn’t you love for startup times to be near-zero even when SQL Server crashed just as someone was in the middle of a transaction? How much would you pay for all this? (Well, I’m a little afraid to ask that, since we don’t know yet whether this is an Enterprise-only feature, but dang, I sure hope not.)

Here’s how Accelerated Database Recovery works in SQL Server 2019:

In the video, I’m using the Stack Overflow 2013 (50GB) database with this script:
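
The exact script isn’t embedded here, but the demo boils down to timing a big rollback with the feature on – a sketch, with a hypothetical table choice:

    ALTER DATABASE StackOverflow2013 SET ACCELERATED_DATABASE_RECOVERY = ON;

    BEGIN TRAN;
        DELETE dbo.Comments;   -- some big, slow modification
    ROLLBACK TRAN;             -- with ADR on, this comes back almost instantly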

You can learn about the internals in the Azure SQL DB documentation on that same feature.

New Objects, Columns, and Messages in SQL Server 2019 CTP 2.3

Sure, there’s official documentation on what’s new in SQL Server 2019, but Microsoft’s notorious for slipping undocumented stuff in. Sometimes these new features become officially documented in subsequent preview builds, like CTP 2.2’s initial plumbing for Accelerated Database Recovery that went officially public in CTP 2.3, and other times they never see the light of day, like Snapshot Materialized Views.

To unearth this stuff, I use this linked server technique, joining between old & new versions of the product, and query the system tables for changes. Here’s what’s new in SQL Server 2019 CTP 2.3:
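
(In case you’re curious, the linked server trick boils down to an anti-join like this sketch – [SQL2019CTP22] being a hypothetical linked server pointing at the previous build:)

    -- Run on the CTP 2.3 instance, comparing its system objects to the previous build
    SELECT new.name, new.type_desc
    FROM master.sys.all_objects AS new
    LEFT OUTER JOIN [SQL2019CTP22].master.sys.all_objects AS old
        ON new.name = old.name
    WHERE old.name IS NULL
    ORDER BY new.name;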

New objects in CTP 2.3:

New objects in SQL Server 2019 CTP 2.3

New columns in existing objects in master:

New columns in existing objects

New error messages:

  • 3644: Prefaulter task failed for file %ls with error %d.
  • 3868: Failed to start tempdb in Hekaton mode. Continuing tempdb startup in SQL mode.
  • 5153: OVERFLOW is not supported in Alter Database modify statement in Sql DW back door.
  • 5346: DATAFILETYPE option cannot be provided without WITH schema.
  • 5347: USE_TYPE_DEFAULT option cannot be provided without WITH schema.
  • 7439: Fail to read required information about %S_MSG pool %S_MSG.
  • 7440: Remote execution status: “%ls” .
  • 8677: Cannot create the clustered index ‘%.*ls’ on view ‘%.*ls’ because it does not aggregate results. Consider changing the view definition.
  • 8705: A DML statement encountered a missing entry in index ID %d of table ID %d, in the database ‘%.*ls’ due to an interaction with another transaction. If you continue to see this error, consider using Repeatable Read or higher isolation level.
  • 13784: Setting SYSTEM_VERSIONING to ON failed because column ‘%.*ls’ does not have the same sensitivity classification in tables ‘%.*ls’ and ‘%.*ls’.
  • 13785: System-versioned table schema modification failed because column ‘%.*ls’ does not have the same sensitivity classification in tables ‘%.*ls’ and ‘%.*ls’.
  • 13810: Column ‘%ls’ is of type ‘%ls’ which is not supported with file format ‘%ls’.
  • 13962: The alias or identifier ‘%.*ls’ is not the last node in a shortest path pattern. Only the last node in the path can be used with LAST_NODE().
  • 13963: Node table name or alias ‘%.*s’ refers to a derived table and cannot be used by the LAST_NODE function.
  • 13964: The two aliases ‘%.*s’ and ‘%.*s’ refer to different underlying objects in a LAST_NODE equality expression.
  • 13965: The table name or alias ‘%.*s’ must use the LAST_NODE function to reference the last node in a shortest_path expression.
  • 14173: Replication-%s: agent initialization failed. %s
  • 15807: Provided statement is not supported.
  • 15808: Schema cannot be determined from data files for file format ‘%.*ls’. Please use WITH clause of OPENROWSET to define schema.
  • 15809: No files found matching the name pattern(s) ‘%ls’.
  • 15810: Error trying to retrieve schema from the data file ‘%.*ls’. Make sure the file matches the ‘%.*ls’ format.
  • 15811: No columns found in the data file ‘%ls’.
  • 15812: Number of columns in the data file ‘%ls’ exceeds the maximum of %d.
  • 15813: Error reading external file: ‘%s’.
  • 15814: Column ‘%s’ of type ‘%s’ is not compatible with external data type ‘%s’.
  • 15815: External data type ‘%s’ is currently not supported.
  • 16115: Sensitivity classifications cannot be added to a history table directly.
  • 17206: File %ls, file number %d is successfully mapped.
  • 17209: File %ls, file number %d: File size is not configured multiple of the minimum size of a large page.
  • 19515: Database partner is missing during database restart.
  • 22301: DW FIDO mode is not enabled.
  • 22302: DW FIDO transaction context not found.
  • 22303: Updates are not allowed from a FIDO Scan transaction.
  • 22304: Reading or Writing to database files is not supported in FIDO DBs.
  • 22305: A NULL or unexpected value was retured by a FIDO ODBC call.
  • 22306: Only CCI tables are allowed in Fido mode.
  • 22307: Alter statements are only allowed from a FIDO Alter transaction.
  • 22308: Fido thread failed to acquire a lock.
  • 22309: Fido Cache DB not found.
  • 22310: Failed to create Fido DB with key: [%.*ls].
  • 22311: Generic Fido out of bounds error.
  • 22312: Failed to remap rowset with Id ‘%I64d’ on a Fido DB.
  • 22313: Failed to find Fido columns in rowset with Id ‘%I64d’.
  • 22314: Fido invalid ODBC connection.
  • 22315: Invalid Fido Transaction type %d.
  • 22316: Failed to acquire CSI Cache lock.
  • 22317: Fido invalid ODBC column.
  • 22318: Fido ODBC transaction failed a commit.
  • 22319: An invalid access to a Fido DB Rowset (DbId ‘%lu’, RowsetId ‘%I64d) was performed.
  • 22320: Fido DB (DbId:’%lu’, Name: ‘%.*ls’) can only be used under a Fido session context.
  • 22321: Fido DB (DbId:’%lu’, Name: ‘%.*ls’) cannot be used. Only DbId: ‘%lu’ is allowed.
  • 27803: Message reported from (%.*s) device on device Id (%I64d), code : %d, message : %.*s .
  • 27804: Query failed on HPC device Id (%I64d), with error code %d .
  • 33554: Encountered error 0x%08lx while waiting for encryption scan completion consensus. See the errorlog for details.
  • 33555: Unable to find the user-specified certificate [Cert Hash “%hs”] in the certificate store of the local computer or current user. Please verify if the certificate exists.
  • 33556: Invalid character in the thumbprint [Cert Hash “%hs”]. Please provide a certificate with valid thumbprint.
  • 33557: Invalid thumbprint length [Cert Hash “%hs”]. Please provide a certificate with valid thumbprint.
  • 33558: Encryption scan can not be resumed because no encryption scan is in progress.
  • 33559: Specified workload group does not exist. Retry with a valid workload group.
  • 33560: TDE Encryption scan suspended for database id [%d]. Will not auto resume. Please resume manually to restart.
  • 33561: Encryption scan can not be suspended because no encryption scan is in progress.
  • 33562: TDE encryption scan state cannot be updated for database id [%d].
  • 35513: compute
  • 37201: An instance pool could not be found with name ‘%.*ls’.
  • 37202: “An instance pool with name ‘%.*ls’ is busy with another ongoing operation.
  • 37203: An instance pool with name ‘%.*ls’ is not empty.
  • 37204: An instance pool with name ‘%.*ls’ does not have enough vCore capacity for given request.
  • 39112: Duplicate file specification supplied for platform ‘%.*ls’.
  • 39113: Number of file specifications exceeds the maximum of %d.
  • 40977: ‘%.*ls’ is not a supported timezone.
  • 41670: Cannot retrieve tempdb remote file lease order id.
  • 41871: Failed to recreate XTP non-durable tables during recovery of the database ‘%.*ls’.
  • 41935: Managed Instance has reached the total capacity of underlying Azure storage account. Azure Premium Storage account is limited to 35TB of allocated space.
  • 43037: An internal error was encountered when processing the restore request. This request has been assigned a tracing ID of ‘%.*ls’. Message is ‘%.*ls’, and details are ‘%.*ls’. Provide this tracing ID/Message/Details to customer support when you need assistance.
  • 45433: The specified family %ls is not consistent with the specified SKU %ls.
  • 45434: The point in time ‘%.*ls’ for restoration can’t be later than now.
  • 45435: The operation could not be completed. The requested sku update would cause the master server to have a larger max_connections value than its replica(s).
  • 45436: The operation could not be completed. The requested storage update would cause the master server to have a larger storage size than its replica(s).
  • 45437: The operation could not be completed. Replication is not enabled for the server.
  • 45438: The timezone cannot be changed on Managed Instance.
  • 45439: Cannot create a Managed Instance with timezone ‘%.*ls’. Please use timezone ‘UTC’ instead.
  • 45440: Cannot create a Managed Instance as there are not enough available ip addresses in the selected subnet.
  • 45441: Elastic server restore verification is not supported.
  • 47131: Create or Join availability group ‘%.*ls’ has failed because there is a system availability group. Remove system availability group, then retry the operation.
  • 49410: Change tracking is currently not supported in this version of SQL Server.
  • 49510: Managed instance is busy with another operation. Please try your operation later.
  • 49511: Unable to set one or more trace flags. Unsupported trace flag(s): %ls%ls%ls.
  • 49512: Session level trace flags are not supported on managed instance.
  • 49972: Cannot add tempdb remote file to local tempdb filegroup in transition to primary.
  • 49973: Cannot remove tempdb remote file to local tempdb filegroup in transition to primary.

No new rows in sys.configurations and no new Perfmon counters. Happy spelunking!

What was the first SQL Server concept that stumped you?

What’s the first thing you remember struggling with on SQL Server? Your first memory of, “Dang, I just can’t figure this out,” followed by a Eureka moment later when it suddenly became so much more clear?

I asked folks on Twitter, and the answers were great, so I had to round ’em up here to share ’em with you:

Oh, man, that was a stumper for me too. I remember having to whiteboard out a couple of tables and think through the logic to put it together.

I still struggle with those. I have to copy/paste from a starter script whenever I write those, because I just can’t remember the syntax.

The first time I saw SQL Server rewrite my query in a different order, I wanted to bang on the box and go, “THANK YOU, WHATEVER MAGIC GENIE IS INSIDE HERE!”

Active Directory groups did that to me too at first – as in, “Wait, this person isn’t listed anywhere in the server, how on earth were they able to do that?” Especially when they’d been later removed from the group.

https://twitter.com/crummel4/status/1098948179750502401

Itzik’s demos are always chock full of those moments for me. He’ll show something, and it’ll vaguely register, but if I’m facing a particular problem at the moment, I’ll learn something new every time.

That constantly surprises people in classes, too!

When I query msdb.dbo.backupset, I still have to remind myself, “Oh yeah, D isn’t Differential.”
