Microsoft has an official list of what’s new in 2022 overall, but here I’m specifically focusing on system objects that might be interesting to script developers like you and me, dear reader.
New stored procedure sp_get_table_card_est_and_avg_col_len – I assume the “card” refers to statistics and cardinality, not Hallmark. SQL Server has historically struggled with memory grants because it uses datatype size to budget memory grants, and bigger-than-necessary data types (like NVARCHAR(MAX)) have led to larger-than-necessary grants. It’d be cool if this proc was a down payment to mitigate that problem, but I have a sinking feeling it has to do with external data sources. I would tell you more about it, but when I run it, I get:
Msg 15871, Level 16, State 9, Procedure sp_get_table_card_est_and_avg_col_len, Line 1 [Batch Start Line 0]
PROCEDURE 'sp_get_table_card_est_and_avg_col_len' is not supported.
New view sys.dm_tran_orphaned_distributed_transactions – every now and then, I’ve run across Availability Groups replicas with problems due to orphaned distributed transactions consuming DTC resources or holding locks. This new undocumented DMV might be a down payment to resolve that problem. I don’t have an easy way to reproduce the problem quickly, so I can’t demo it.
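If you want to peek at it on a 2022 instance, a plain select works – and since the DMV is undocumented, I’m not going to guess at what the individual columns mean:

```sql
-- Undocumented DMV, new in SQL Server 2022.
-- On a healthy instance, this should come back empty.
SELECT *
FROM sys.dm_tran_orphaned_distributed_transactions;
```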
New view sys.database_automatic_tuning_configurations – this one’s a little odd because it’s easy to confuse with the similarly named sys.database_automatic_tuning_options, which has been around since SQL Server 2017 – but this new view isn’t in my 2019 test instances. It tells you whether Force_Last_Good_Plan is on, and I’d imagine that as more automatic tuning options come out over the next several releases, this view will pick up more info.
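Checking it is a one-liner – it’s a database-scoped view, so query it from the database you care about (and since it’s barely documented at the time of writing, I’m just selecting everything):

```sql
-- Run in the database whose automatic tuning config you want to see.
SELECT *
FROM sys.database_automatic_tuning_configurations;
```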
New Query Store DMV columns – now that Query Store is starting to work on read-only replicas, looks like they added plan_persist_plan_feedback.replica_group_id, plan_persist_query_hints.replica_group_id to support those goals. Plus plan_persist_plan_forcing_locations gets columns for timestamp and plan_forcing_flags.
New spinlock troubleshooting – sys.dm_os_workers gets columns for spinlock_wait_time_ms, spinlock_max_wait_time_ms, and spinlock_wait_count.
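That makes it easy to spot spinlock-heavy workers – here’s a sketch ordering by the new columns (the column names come straight from the list above; everything else is standard sys.dm_os_workers):

```sql
-- Which workers have burned the most time waiting on spinlocks?
SELECT TOP 10
       worker_address,
       spinlock_wait_time_ms,
       spinlock_max_wait_time_ms,
       spinlock_wait_count
FROM sys.dm_os_workers
ORDER BY spinlock_wait_time_ms DESC;
```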
New stuff to support offloaded compression
This stuff needs its own section. RC0 introduced the ability to offload compression to Intel processors equipped with QuickAssist.
We get new sp_configure options for ‘hardware offload mode’ and ‘backup compression algorithm’. By default, these are off. To turn on offloaded compression, install the Intel QAT drivers, then do an alter:
ALTER SERVER CONFIGURATION
SET HARDWARE_OFFLOAD = ON (ACCELERATOR = QAT);
Hardware offload configuration has changed.
Current configuration (0x0) and mode (0x0).
New configuration (0x1) and mode (0x0).
Restart SQL Server for the new configuration to take effect.
After restarting the SQL Server, check this brand spankin’ new DMV:
SELECT * FROM sys.dm_server_accelerator_status;
And, uh, on my VM, it’s still not enabled:
Because you can enable it even on processors that don’t support it, which strikes me as kinda odd. I suppose you would want to make it part of your standard build, and then whenever it’s available, it’ll get used, assuming you call for offloaded backup compression in the right way.
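For completeness, here’s the shape of that “right way”: the WITH COMPRESSION (ALGORITHM = …) clause is new in 2022. The database name and file path here are just examples:

```sql
-- Back up using the QAT compression algorithm. If no QAT hardware
-- is present and enabled, this falls back to QAT software mode.
BACKUP DATABASE StackOverflow
TO DISK = N'D:\Backups\StackOverflow_qat.bak'
WITH COMPRESSION (ALGORITHM = QAT_DEFLATE);
```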
New messages in RC0
In each release, I check sys.messages for new stuff. Some of it gets added for the cloud, like Azure SQL DB or Managed Instances, so take these with a grain of salt. Here’s what’s new in RC0, compared to the last CTP:
- 1136: The tempdb has reached its storage service limit. The storage usage of the tempdb on the current tier cannot exceed (%d) MBs.
- 5373: All the input parameters should be of the same type. Supported types are tinyint, smallint, int, bigint, decimal and numeric.
- 5374: WITH clause is not supported for locations with ‘%ls’ connector when specified FORMAT is ‘%ls’.
- 16722: Cannot change service objective for %ls to %ls as long-term retention is not supported yet on Hyperscale. Please disable long-term retention on the database and retry
- 17414: Retrieving the address of an exported function %.*ls in accelerator library %.*ls failed with error 0x%x.
- 17415: %.*ls component enumeration failed with zero component count.
- 17416: %.*ls component enumeration failed with mismatch in component count.
- 17417: %.*ls %.*ls not compatible with SQL Server.
- 17418: Detected %.*ls %.*ls.
- 17419: %.*ls hardware detected on the system.
- 17420: %.*ls hardware not found on the system.
- 17431: %.*ls initialization failed with error %d.
- 17432: %.*ls initialization succeeded.
- 17433: %.*ls session creation failed with error %d.
- 17434: %.*ls session sucessfully created.
- 17435: %.*ls will be used in hardware mode.
- 17436: This edition of SQL Server supports only software mode. %.*ls will be used in software mode.
- 17437: %.*ls will be used in software mode.
- 17438: %.*ls session alive check failed with error %d.
- 17439: %.*ls session tear down failed with error %d.
- 17440: %.*ls session close failed with error %d.
- 17441: This operation requires %.*ls libraries to be loaded.
- 19713: Statistics on virtual column are not avalable.
- 19714: Number of columns in PARTITION clause does not match number of partition columns in Delta schema.
- 21093: Only members of the sysadmin fixed server role or db_owner fixed database role or user with control db permission can perform this operation. Contact an administrator with sufficient permissions to perform this operation.
- 22786: Synapse workspace FQDN is not in the list of Outbound Firewall Rules on the server. Please add this to the list of Outbound Firewall Rules on your server and retry the operation.
- 22787: Change feed table group limit of %d groups exceeded
- 22788: Could not enable Change Feed for database ‘%s’. Change Feed can not be enabled on a DB with delayed durability set.
- 25755: Could not create live session target because live session targets are disabled.
- 31633: The length of the provided %ls exceeds the maximum allowed length of %u bytes.
- 31634: The %ls must contain a ‘%ls’ for use with managed identity.
- 31635: The %ls’s ‘%ls’ value must be a %ls for use with managed identity.
- 31636: Error retrieving the managed identity access token for the resource id ‘%ls’
- 33547: Enclave comparator cache failed to initialize during enclave load.
- 39057: The value provided for the ‘%.*ls’ parameter is too large.
- 39058: The parameter ‘%.*ls’ has a type that is not supported.
- 45770: Failed to move the database into elastic pool due to internal resource constraints. This may be a transient condition, please retry.
- 46552: Writing into an external table is disabled. See ‘https://go.microsoft.com/fwlink/?linkid=2201073’ for more information.
- 46553: Create External Table as Select is disabled. See sp_configure ‘allow polybase export’ option to enable.
- 46953: Pass through authorization using S3 temporary credentials is not supported. Please use S3 credentials to access storage.
- 47507: Adding memory optimized files to the database replicated to Azure SQL Managed Instance is not supported because its service tier does not support In-memory OLTP capabilities. Consider replicating database to managed instance service tier supporting In-memory OLTP capabilities.
- 47508: Adding multiple log files to the database replicated to Azure SQL Managed Instance is not supported because managed instance does not support multiple log files.
- 47509: Adding FileStream or FileTables to the database replicated to Azure SQL Managed Instance is not supported because managed instance does not support FileStream or FileTables.
- 47510: Adding multiple memory optimized files to the database replicated to Azure SQL Managed Instance is not supported because managed instance does not support multiple memory optimized files.
If any of those messages are interesting to you, feel free to leave a comment about it.
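If you want to look any of these up yourself, sys.messages is directly queryable – for example, to pull the English text for a few of the QAT-related ones (the message ids are from the list above):

```sql
-- English (language_id 1033) text for a few new RC0 messages:
SELECT message_id, severity, text
FROM sys.messages
WHERE language_id = 1033
  AND message_id IN (17414, 17435, 17437, 22787);
```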
New database-scoped configuration options
These are all new since SQL Server 2019 – some were introduced in prior CTPs, but I’m mentioning them all here because there’s good stuff in here for query tuners:
- 25 – PAUSED_RESUMABLE_INDEX_ABORT_DURATION_MINUTES
- 26 – DW_COMPATIBILITY_LEVEL
- 27 – EXEC_QUERY_STATS_FOR_SCALAR_FUNCTIONS
- 28 – PARAMETER_SENSITIVE_PLAN_OPTIMIZATION
- 29 – ASYNC_STATS_UPDATE_WAIT_AT_LOW_PRIORITY
- 31 – CE_FEEDBACK
- 33 – MEMORY_GRANT_FEEDBACK_PERSISTENCE
- 34 – MEMORY_GRANT_FEEDBACK_PERCENTILE_GRANT
- 35 – OPTIMIZED_PLAN_FORCING
- 37 – DOP_FEEDBACK
- 38 – LEDGER_DIGEST_STORAGE_ENDPOINT
- 39 – FORCE_SHOWPLAN_RUNTIME_PARAMETER_COLLECTION
That last one’s particularly interesting to me because SQL Server 2019 originally shipped in a way that you could see runtime parameters in sys.dm_exec_query_statistics_xml, and then they turned it off around CU11-12 without documenting the changed behavior. That was a total bummer, because that feature was a lifesaver for troubleshooting parameter sniffing. I’m hoping we can get that back again.
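If it works the way the name implies, turning it on should look like any other database-scoped config – this is a sketch I haven’t been able to verify end-to-end yet:

```sql
-- Per-database opt-in; the option name comes from the list above.
ALTER DATABASE SCOPED CONFIGURATION
    SET FORCE_SHOWPLAN_RUNTIME_PARAMETER_COLLECTION = ON;

-- Then runtime parameter values should show up in the live plan
-- for a running query; pass the session_id you're troubleshooting:
SELECT *
FROM sys.dm_exec_query_statistics_xml(55); -- 55 is an example session_id
```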
I, too, Detected %.*ls %.*ls.
“Is Vic there?”
FWIW, I’ve been testing the QAT compression, and even without a QAT card and on older processors, the software compression yields a benefit: reduced duration, a smaller backup file, and less CPU usage! It’s nowhere near as dramatic as if you did have a QAT card available, but in my (limited) testing thus far, if you’re already compressing your backups, I see no downside to using the QAT software compression algorithm.
To add to this, another reason to have a software mode is so that one can restore a QAT-compressed DB to another instance that doesn’t have a QAT card, so your backups aren’t locked in.
That’s the thing that I think is going to make it a non-starter for the next several years.
Your DR sites, QA servers, etc – anywhere that you need to restore a backup, needs to have the appropriate hardware. That’s… a lofty goal, especially in the days of virtualization and the cloud.
5-6 years from now, it’ll be something that DBAs need to remember to revisit, when the hardware’s more common.
“…needs to have the appropriate hardware…”
I think that’s the key point we’re disagreeing on here.
I’ve been running tests with old hand-me-down hardware/processors that pre-date QAT, and software mode still helps. Underneath the covers, it’s using a different algorithm than the “legacy” compression implementation, one that’s more efficient. It shines when QAT hardware is present, but even without the hardware, it still shows an improvement in elapsed time, CPU burn, and size of the compressed backup file.
Now yes, there’s still the matter of needing to actually upgrade to 2022 in the first place, but that’s a software & business problem, not a hardware problem. 😉
Have you tested to make sure that you can take a backup compressed with QAT acceleration done in hardware, and restore it via software acceleration?
That’s the part that makes me nervous. We’ve seen so, so many problems with restores over the years. “In theory, this should work…”
Strange, cannot reply to your last reply to me.
But yes, it SHOULD work… should. 🙂
The mixing and matching backups/restores done with QAT hardware & software is next on my testing agenda. Stay tuned!
Yes, it SHOULD work… but do a search for “restore” on SQLServerUpdates.com and there have just been so many bugs over the last few years as they’ve monkeyed with the backup format. There have been at least half a dozen CUs trying to fix problems with restoring compressed TDE backups.
Friends tell friends to just say no to TDE & compression? 😉
Call me a glass-half-full kind of guy, choosing to be optimistic. I’m still gonna test the hell out of it though.
It actually does compress now though – since a CU in 2016, they changed the way backups work so that the encrypted pages are read into memory, compressed, and then encrypted after they’re compressed. You do get compression on encrypted backups now, but … there are just so many hoops to jump through to get it to work right.
Kinda like …
Intel QAT backup compression does not require any specialized hardware. There is a hardware assisted mode (that is Enterprise Edition only) that requires QAT hardware. There is also a software-only mode (that works on Standard Edition and Enterprise Edition) that only requires the Intel QAT drivers.
QAT software mode works very well compared to SQL Server native backup compression, with elapsed times that are typically twice as fast and slightly less CPU utilization. The backup size is also slightly smaller.
QAT hardware mode (with an Intel 8970 QAT card) reduces your CPU utilization dramatically compared to software-only mode. That mode is Enterprise Edition only. Upcoming Intel Sapphire Rapids server CPUs will have QAT support built in.
If you have a QAT compressed backup, you have to have the QAT drivers installed on the machine you want to restore it to. That just means a little more planning as you build your SQL Server instances.
Yep, all totally agreed. Now for the important question, the same one I asked Andy:
Have you tested taking a backup with hardware compression, and restored it with software only?
If you backup with QAT HW or QAT SW, then you can restore with either QAT HW or QAT SW.
I have personally tested this with my database files. If you have concerns about compatibility with your database, I can test the QAT HW/SW Backup/Restore if you would share your database files (provided they are of reasonable size).
Incredible couple of posts, Brent! Absolutely amazing. Thank you!
Awww, thanks sir! I think you’re gonna really love tomorrow’s post.
For QAT, you are missing a step before the ALTER SERVER CONFIGURATION
-- Step 4
-- Enable hardware offload config
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'hardware offload enabled', 1;
RECONFIGURE;
-- Restart the SQL Server Service
Thanks to you and Brent for providing so much information for free. Both of your tools are indispensable. I recommend your scripts to everyone who will listen.
Thanks for the kind words!
Thanks sir! I’m missing a whole lot more than that, of course, like buying the right processor or making the decision about using the software version – just wanted to illustrate a quick example. Hopefully no one reads this post and thinks they’re ready to use a particular feature. 😉
thank you for the information
Nice article, Brent.
I hope all of these new parameters, functions and sprocs will get documented before RTM
Thank you for the information
Regarding — New view sys.database_automatic_tuning_configurations
The link you provided references sys.database_automatic_tuning_options which is available in 2017 & 2019. I have not looked at RC0 yet but did Microsoft mess up its release notes or did Google run amok?
Ooo, yes, great catch! No wonder I couldn’t find it on 2019, hahaha. Thanks, updated the post.