Today at Microsoft Ignite, Microsoft started dropping new details on the next version of SQL Server, 2022. Here’s a 13-minute video from Bob Ward explaining and demoing some of the new features:
The new features include:
- Failover back and forth between SQL Server 2022 and Azure SQL DB Managed Instances, including restoring versionless databases from Azure SQL DB Managed Instances down to on-premises SQL Server 2022
- Azure Synapse Link integration to avoid big ETL jobs between SQL Server and Azure Synapse
- SQL Server Ledger – blockchain immutable histories of tables
- Parameter-sensitive plan optimization that caches multiple plans per stored procedure
There’s no release date yet, but you can apply for the early adoption program now.
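To give one concrete taste of the list above: the Ledger feature is expected to be enabled per table. Here's a rough sketch of what an append-only ledger table might look like, based on the announcement (table and column names are made up, and preview syntax may change before release):

```sql
-- Sketch only: an append-only ledger table. Inserts are allowed, but updates
-- and deletes are blocked, and rows are cryptographically chained into a
-- tamper-evident history.
CREATE TABLE dbo.AuditEvents
(
    EventId   INT IDENTITY(1,1) NOT NULL,
    EventTime DATETIME2 NOT NULL,
    Detail    NVARCHAR(400) NOT NULL
)
WITH (LEDGER = ON (APPEND_ONLY = ON));
```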
Thanks, Brent! Common wisdom in the past (rightly or wrongly) has said to hold off on adoption, even for Dev environments, for a few months post release. Or for the first service pack. Do you think that still holds true?
We JUST installed our first instances of 2019, so we won’t be going to 2022 anytime soon
That depends on what kind of shop you are. If you do leading-edge development with the newest tools and techs, you probably want to jump in as fast as you can, test the water, and live with the bugs. But if you are running a bank or any kind of app that needs stability, you are probably best off waiting till the dust settles. In bigger organisations the tendency was to pay for extended support and stick with the old version a little bit longer. I'm speaking from my experience in the field with Oracle, but SQL Server will not differ much.
So, as always, the answer in short is: It depends.
@Daniel SQL Server 2017 and newer don’t have Service Packs (only Cumulative Updates), so that part of the old common wisdom will have to change.
Easy. Wait till CU10
I really hope that with 2022 it will finally be possible to integrate system databases into availability groups. It is really unnecessary effort to make sure that settings, logins, agent jobs, etc. are in sync across all instances.
We are currently working on a hybrid DR from on-prem to Azure… Their new setup for creating the cluster is really great looking. We are planning on going with a dedicated VM running SQL, though, rather than a hosted SQL instance.
Gary – let’s not do advertising in the comments. Thanks for understanding.
And the million dollar question is: Will SQL 2022 allow restoring of a single table from a backup? 🙂
Rolf – HAHAHA, I think we both know the answer to that one, sadly…
Try being the DBA to explain that to your clients. At least my coders and developers understand; the clients – not so much.
You can use the Avamar third-party solution to achieve this
Also Red Gate SQL Backup, Quest LiteSpeed, or Idera SQL Safe (stolen from a dba.stackexchange post here https://dba.stackexchange.com/questions/34358/sql-server-restore-a-single-table-from-a-backup)
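Even without third-party tools, the usual native workaround is to restore the backup to a throwaway database and copy the one table back. A rough sketch (database, table, and file names here are made up):

```sql
-- Restore the full backup to a scratch database alongside production
RESTORE DATABASE SalesDB_Restore
FROM DISK = N'C:\Backups\SalesDB.bak'
WITH MOVE N'SalesDB'     TO N'C:\Restore\SalesDB_Restore.mdf',
     MOVE N'SalesDB_log' TO N'C:\Restore\SalesDB_Restore.ldf',
     RECOVERY;

-- Then copy the one table's rows back into production
INSERT INTO SalesDB.dbo.Orders
SELECT * FROM SalesDB_Restore.dbo.Orders;
```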
So, Brent, what happens to your parameter sniffing class? 🙂
Given how quickly y’all adopt new versions of SQL Server, I’ll have to plan to discontinue it next week. 😉
The real question should be: will the new “parameter-sensitive plan optimization that caches multiple plans per stored procedure” alleviate parameter sniffing problems, or will it just create new and interesting parameter sniffing patterns to mystify users? 🙂
Good stuff with parameter-sensitive plan optimization and all, but isn’t it about time that they changed the default isolation level?
I’m trying to work out what “parameter-sensitive plan optimization” means.
Is it the reverse of “forced parameterization”?
Does it use ranges?
I guess we will find out in the fullness of time…
Or, you could, uh, watch the video. 😉
OK, I rewatched the video, as it seems like I missed something… Bob Ward didn’t explain how “built-in query intelligence” works, except to say that it decides whether an index seek or scan is used, and that SQL Server can now cache multiple plans against the same stored procedure (I realise this is new).
“Parameter-sensitive plan optimization” sounds like the reverse of “forced parameterization” for stored procedures 🙂
If we used dynamic SQL in our stored procedures with hardcoded values, wouldn’t we get similar results of multiple plans for the stored proc? (I know it will be on the SQL, not the stored proc.)
Getting my flame retardant clothing ready…
No no, you’re exactly right! It does seem like they’re building a plan per parameter, and I’m not sure how they’re going to reuse plans in this situation. I’m really excited to see how they avoid plan cache bloat.
Well, with new blades carrying upwards of 1 TB of RAM, the plan cache is fine as long as it stays in memory. But for ETL workloads I can see five joins over exactly the same tables with different parameters causing some fun: “doh, it’s slow… erm, which one? Everyone else is fine.” I’m going to be interested in how you identify which plan is causing problems, and how you force it to be selectively updated… or do we still update stats and recompile them all?
If I had to guess, I’d say they are using statistics to decide whether they need a new plan.
1st query: country = ‘Iceland’ -> statistic says 10 rows -> build a plan with nested lookup
2nd query: country = ‘USA’ -> statistic says 100k rows -> realizes that the only cached plan was built for much fewer rows -> creates a new plan with hash match
3rd query: country = ‘UK’ -> statistic says 50k rows -> it is “near enough” to the 100k from query 2 -> reuses this plan
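(Illustrating the guess above in T-SQL; the procedure, table, and row counts are made up:)

```sql
-- A classic parameter-sniffing candidate: row counts per country vary wildly,
-- so a single cached plan rarely suits every parameter value.
CREATE OR ALTER PROCEDURE dbo.GetOrdersByCountry
    @Country NVARCHAR(50)
AS
BEGIN
    SELECT o.OrderId, o.OrderDate, o.Amount
    FROM dbo.Orders AS o
    WHERE o.Country = @Country;
END;
GO
EXEC dbo.GetOrdersByCountry @Country = N'Iceland'; -- ~10 rows: nested loops plan
EXEC dbo.GetOrdersByCountry @Country = N'USA';     -- ~100k rows: new hash match plan
EXEC dbo.GetOrdersByCountry @Country = N'UK';      -- ~50k rows: near enough, reuses the USA plan
```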
Yeah, they talk about that in the demo I linked to here: https://www.brentozar.com/archive/2021/11/is-microsoft-sql-server-2022-a-big-release/
I don’t think I explained myself very well. I mean, if we had a parameter value like 10, we could build the value into the dynamic SQL as a literal (like “10=10”), so for each unique parameter for the stored proc we would have unique dynamic SQL and a unique plan.
I might have got the wrong end of the stick, and it’s getting to the end of the day for me…
Right, but then you have the problem of plan cache bloat that we talk about in the Mastering Query Tuning class module on dynamic SQL. It’s effectively putting OPTION (RECOMPILE) on every query because you won’t be able to cache any execution plans.
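For reference, the hardcoded-literal approach being discussed looks roughly like this (sketch; names are made up), and it shows why the plan cache bloats: every distinct value produces distinct SQL text, hence a distinct cached plan.

```sql
-- Embedding the parameter value as a literal: each distinct @Country value
-- yields different SQL text, so each one compiles and caches its own plan.
CREATE OR ALTER PROCEDURE dbo.GetOrdersByCountry_Dynamic
    @Country NVARCHAR(50)
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT OrderId FROM dbo.Orders WHERE Country = N''' +
        REPLACE(@Country, N'''', N'''''') + N''';';  -- doubled quotes guard against injection
    EXEC sys.sp_executesql @sql;
END;
```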
I agree this sounds really exciting, but as others have pointed out, I don’t see this coming to production servers for several years. But I guess the cloud could be calling…
Hi Brent, thanks for sharing,
Will parameter sniffing end?
Yes, right from the first release of SQL Server 2022, I’m excited to share that they’ve gotten it completely right, and we will never have performance problems due to parameter sniffing again.*
* C’mon, seriously, think about that.
Thank you, Brent!
I am very excited.
Ugh, more Azure.
My cores, CPUs, and disks are cheaper and faster.
Query Store on by default = will script to turn it off.
But MAXDOP and CE hints in Query Store are nice.
Analytics: nice, bring them in-house and I’ll be impressed.
And what are the maximums on Standard Edition? Yet still 128 GB RAM?
Brent, didn’t you enjoy the 13 CUs for 2019?
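For what it’s worth, scripting Query Store off is a one-liner per database (sketch; the database name is a placeholder):

```sql
-- Turn Query Store off for one database
ALTER DATABASE [YourDatabase] SET QUERY_STORE = OFF;

-- Or generate the statement for every user database
SELECT N'ALTER DATABASE ' + QUOTENAME(name) + N' SET QUERY_STORE = OFF;'
FROM sys.databases
WHERE database_id > 4;  -- skip the system databases
```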
I do law enforcement IT. We ethically cannot do Azure; legally it’s a gray area. We still have requirements to do laser-disc due to the write-once-never-change nature of the media. This was due to court cases in the… 80s… I think. IANAL.
And lastly, a Haiku:
There is no free “cloud”
just someone’s machine you use
With data you gave.
— with all modesty and apologies to Matsuo Basho
You said in your conference:
We can use dynamic SQL to solve parameter sniffing.
You also said the server should have 128 GB of RAM at minimum for caching multiple plans of one query.
I think in SQL Server 2022, many plans will be created per stored procedure.
How will Microsoft deal with this problem?
That’s a great question! I look forward to the answer as well.
Parameter sniffing is fixed in SQL Server 2022
Good to know
Thanks Brent for sharing
That means we can have availability group databases on a read-only replica (on-premises) in a high availability clustering setup, and we can add some read-write databases outside the availability groups on the on-premises server. This will help to set up a node for ETLs reading from AG databases and writing to staging databases.
If you’re referring to the Azure SQL DB Managed Instance part, no, that’s not correct. The Azure DR scenario uses Distributed Availability Groups, and those probably won’t be readable.
Thank you, sir, for your prompt reply. Are there any options with MI where I can replicate transactional data to another node/server close to real time using availability groups, and have read-write staging DBs on the same node?
For personalized architecture advice, click Consulting at the top of the site. Thanks!
So much new stuff to make me pay for the cloud. Hard pass. And I’ll hold on till we see which of these performance optimizations will be part of Standard Edition.
Will the Parameter-sensitive plan optimization “fix” the case where the optimizer won’t do a key lookup on parameter @MyKeyId?:
WHERE MyKeyId = ISNULL(@MyKeyId, MyKeyId)
It would be convenient if this type of WHERE clause was efficient.
I wouldn’t expect that, no.
At least you could rewrite it (even if it’s annoying because it’s longer) into WHERE (@MyKeyId IS NULL OR MyKeyId = @MyKeyId).
Depending on where and how you’re using it, an OPTION (RECOMPILE) will help too in this case (if the statement isn’t called all the time).
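Putting both suggestions together (hypothetical table and column names):

```sql
-- Instead of: WHERE MyKeyId = ISNULL(@MyKeyId, MyKeyId),
-- which tends to defeat index seeks, state the intent explicitly:
SELECT t.MyKeyId, t.SomeColumn
FROM dbo.MyTable AS t
WHERE (@MyKeyId IS NULL OR t.MyKeyId = @MyKeyId)
OPTION (RECOMPILE); -- lets the optimizer plan for the actual parameter value
```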
Am I missing something in the failover/failback? Imagine I have 2 nodes – one primary on site, and one for DR using an async Availability Group (or mirroring, or Transaction Log Shipping). If I want to do maintenance on the primary, I can make the DR replica synchronous (or do a final transaction log backup), do a failover, and run over at DR for a little while. Once the maintenance is done, I can reverse the stream pretty quickly, and be back up and running on my primary server.
If I use Managed Instances, sure, I can failover easily, but the example I keep seeing from Bob (and other MS demos) shows me how easy it is to back that up to Azure storage and then restore it. That works great for their trivial-size database, but if I’m dealing with hundreds of GB or more, and/or I don’t have a data center with the network capacity of Azure, how big an outage am I going to have to take to get my data back to my primary data center?
Am I missing something in the demos? Or is this really a one-way feature?
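For reference, the on-prem maintenance dance described above looks roughly like this in T-SQL (the AG and replica names are made up):

```sql
-- 1. On the current primary: make the DR replica synchronous before failing over
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'DRServer'
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

-- 2. On the DR replica, once it shows SYNCHRONIZED: take over as primary
ALTER AVAILABILITY GROUP [MyAG] FAILOVER;

-- 3. After maintenance, repeat the same steps in the other direction to fail
--    back, then set the DR replica back to asynchronous commit.
```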
They’ve repeatedly said that the feature will allow for failing back via the AG, but they just haven’t demoed it. My guess is that they’re either doing the demos that way to keep the timeline tight (after all, it’s really hard to demo everything in a short period of time), or else the preview bits weren’t quite ready to show the fail back yet.
Let’s hope so. The feature seems pretty useless without it.
Yeah, it reminds me of when they introduced log shipping to Azure SQL MI – sure, I get that it helps with one-time migrations, but…
Any new Graph features? SHORTEST_PATH was added in 2019. Hoping for “all paths” related feature.
Nothing was announced, no. I haven’t seen graph usage burning up the charts.
HA HA HO HO
“SQL Server 2022, now with 100% more blockchain!”
Hats off to MS on this one. They finally heard our cries over the last decade to add blockchain to our databases and dodged the whole “system databases in availability groups” bullet yet again. Thank god for that. This is the big one boys. Blockchain is coming to SQL Server, freakin’ blockchain!! What a time to be alive.
When it comes to HADR, does it support the below?
- AlwaysOn: SQL MI as primary and on-prem as secondary?
- Publisher as SQL MI and subscriber as on-prem?
- Log shipping: bidirectional (SQL MI and on-prem)?
For questions on unreleased versions, check with Microsoft.
It’s hard to believe they haven’t announced a single new feature for T-SQL in this version. SQL Server continues to feel like a database displaced in time from 30 years ago.
The parameter sniffing problem seems to be solved.
What is your opinion?
What makes you say that it’s solved?
I tested the demo from your conference on SQL Server 2022.
So you’re saying that one demo is representative of all the parameter sniffing issues we cover in Fundamentals of Parameter Sniffing and Mastering Parameter Sniffing?
I only saw Bob Ward’s demo.
I just did a test.
I did a lot of tests, the issue is still unresolved.
OK, great – that’s probably a good learning experience for you. If someone tells you they fixed something completely, you want to test it, right? Same thing with software. Cheers!
We are very interested in the Azure Synapse Link integration feature. Wonder when it will be ready for production.
Does anyone know how to restore a 2022 database snapshot backup? I used the script to create the metadata backup, but the restore statement gives an error saying the snapshot.bkm is incorrectly formed and cannot be read.
For unrelated questions, head to a Q&A site like https://dba.stackexchange.com or https://SQLServerCentral.com.