If you right-click on a database in SQL Server Management Studio, you get an option to set Compatibility Level at the database level:
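If you'd rather script the change than click through SSMS, the same setting is exposed through T-SQL. A quick sketch, using a hypothetical database name:

```sql
-- Hypothetical database name - swap in your own.
-- 160 = SQL Server 2022 compatibility level.
ALTER DATABASE [YourDatabase]
SET COMPATIBILITY_LEVEL = 160;
```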
When you upgrade SQL Server or you want to improve performance, which option should you choose? Should you always go with the newest compatibility level? If you have an old application, can you upgrade SQL Server but still keep the old compat level without the vendor knowing? Let’s hit the common questions.
What does compatibility level do?
When Microsoft brings out a new version of SQL Server, they tend to keep the newest features only available in the newest compatibility levels. For example, SQL Server 2022’s Parameter Sensitive Plan Optimization (PSPO) is only available in databases running under SQL Server 2022 compatibility level.
That means if you’re taking a database that used to live on an older SQL Server, and you want to host it in SQL Server 2022, and you want it to have the same behavior that it’s always been used to, you should keep it on the compatibility level that it’s currently on. For example, if you’re hosting it in SQL Server 2016, and the database is currently at SQL Server 2016 compatibility level, then you could move the database to a 2022 server, but keep compat level on 2016, and the users shouldn’t notice the difference.
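Before a migration like that, it's worth taking inventory of where every database currently sits. A sketch using sys.databases:

```sql
-- Compatibility levels: 100 = 2008, 110 = 2012, 120 = 2014,
-- 130 = 2016, 140 = 2017, 150 = 2019, 160 = 2022
SELECT name, compatibility_level
FROM sys.databases
ORDER BY compatibility_level, name;
```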
In reality, though, there are things inside SQL Server itself, at the server level, that will change no matter what your compatibility level is. For example, if Microsoft deprecates a feature and removes it altogether, that feature isn’t available even if you’re on older compat levels. (Hello, Big Data Clusters.)
Should I change compatibility level?
If there’s a specific feature that you need that’s only available in some compat levels, then yes.
However, if you’re happy with performance, then no. Hear me out: changing your compatibility level can make performance worse instead of better. Sure, in some cases it makes performance GREAT – but because all change = risk, changing compat level when you’re already happy is dangerous.
How does compatibility level affect performance?
If you migrated from SQL Server 2019 to 2022, here are ways that changing a database’s compatibility level can make things better or worse:
- Cardinality Estimation Feedback can change a query’s estimated number of rows
- Degree of Parallelism Feedback can reduce the overhead of bad parallelism (most parallelism is good, though)
- Parameter Sensitive Plan Optimization can cache multiple query plans for the same query
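If you jump to 2022’s compat level 160 and one of these features causes a regression, you don’t necessarily have to drop the whole database back down a level: SQL Server 2022 exposes a switch for each of them as a database scoped configuration. A sketch – run it in the database in question, and note that defaults vary by feature and patch level:

```sql
-- Turn individual SQL Server 2022 features off
-- while staying at compatibility level 160:
ALTER DATABASE SCOPED CONFIGURATION SET CE_FEEDBACK = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET DOP_FEEDBACK = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SENSITIVE_PLAN_OPTIMIZATION = OFF;
```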
In each version of SQL Server, different features are enabled under newer compatibility levels. Before changing your compat level, you should review what features are added when you change from your current compatibility level to the new one. This is a good starting point.
What should I measure before changing compatibility level?
In theory, you should have a performance baseline of things like:
- CPU, memory, and storage metrics
- Your top wait types so you know what SQL Server is bottlenecked on
- Which queries are using the most resources
- Query plans of well-performing queries (because things might get worse, and you’ll wanna know what they used to look like back in the good old days)
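For the wait types part of that baseline, even a quick snapshot of sys.dm_os_wait_stats beats nothing. A rough sketch – a real baseline filters out dozens of benign background waits (which is exactly the kind of thing tools like sp_BlitzFirst handle for you):

```sql
SELECT TOP 10
       wait_type,
       waiting_tasks_count,
       wait_time_ms / 1000.0        AS wait_time_seconds,
       signal_wait_time_ms / 1000.0 AS signal_wait_seconds
FROM sys.dm_os_wait_stats
-- Tiny sample of the benign-wait exclusion list:
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;
```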
Then, when people suddenly complain about performance, you can check your baseline to see whether things actually got worse, or whether your users have taken up eating shrooms. You can also track down which queries are NOW at the top of your resource-consuming query list, look at what their query plans USED to look like, and then figure out how to get back to the good old days.
Common ways to accomplish this are third-party monitoring products, Query Store, or the First Responder Kit.
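If you go the Query Store route, enabling it is one statement. The database name here is hypothetical, and the options shown are common starting points, not one-size-fits-all:

```sql
ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,
     QUERY_CAPTURE_MODE = AUTO,
     MAX_STORAGE_SIZE_MB = 1024);
```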
In reality, you’re not gonna do any of this ahead of time. So, when you change compatibility levels on the fly, and performance gets worse, you’re not going to have any answers.
Does that mean I shouldn’t touch compatibility level?
No, not at all! You can change compatibility levels whenever you want, one database at a time. You can also change back instantly as well. You just need to be aware of when you made the change, what you changed, and communicate it to the rest of the team so they can roll your change back if necessary.
What compatibility levels are available?
The screenshot at the top of the blog post was taken in SQL Server 2022, and even in this recent release, Microsoft supports compatibility levels going all the way back to SQL Server 2008. That’s kinda awesome, because it means that Microsoft is trying to keep old databases working great in newer versions of SQL Server.
In theory, that means you can take an old vendor application that was once certified on SQL Server 2008, and keep moving it to newer and newer versions of SQL Server. In theory, that means it’ll keep working the exact same way as long as you keep the same compatibility level – and hey, it might even get faster if you change to newer compatibility levels.
So, can I actually do that?
Well, no. I mean you could, but you might get caught.
Here’s the thing: the vendor might be relying on a feature that’s no longer available in newer versions of SQL Server. I gotta be honest, that’s extremely unlikely, but it is possible. And if they are, and their application suddenly breaks, you can’t restore a newer SQL Server database to an older version of SQL Server.
So if you take your SQL Server 2008 server, back up the databases, restore them onto SQL Server 2022, and then start using the app – and people start complaining – you can’t restore those 2022 backups down onto SQL Server 2008, even if they’re still in the same 2008 compat level. You can only restore to newer versions of SQL Server, not older.
Therefore, you’re taking a risk when you move databases onto newer versions of SQL Server. Make sure the vendor actually supports the newer version of SQL Server, because you don’t wanna be the person that the vendor blames for their application not working successfully.
Want to watch me write this blog post?
I streamed this blog post live if you want to get a rough idea of what’s involved with writing a post like this:
After some recent upgrades to SQL 2019, queries performed poorly. For example, where queries on the previous version of SQL (in this case 2014) ran in a couple of minutes, they took hours after the upgrade.
Upgrading the database(s) to compat level 150, rebuilding indexes, and clearing the proc cache (DBCC FREEPROCCACHE) resolved it.
I always rebuild indexes following a version upgrade. It seems to do something that full scan statistics does not.
I had terrible performance for some join queries when I moved from SQL 2008 R2 to SQL 2016. The compatibility level was the same as on the source server. Finally, the issue was resolved by changing the Legacy Cardinality Estimation option at the database level. I’d love to see a blog post from you, Brent, on this option.
Check out my Mastering Query Tuning class where we cover that sort of thing.
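For anyone hitting the same issue: on SQL Server 2016 and later, that option is a database scoped configuration, so you can keep the new compatibility level but fall back to the pre-2014 cardinality estimator for just that one database. A sketch:

```sql
-- Run inside the affected database:
ALTER DATABASE SCOPED CONFIGURATION
SET LEGACY_CARDINALITY_ESTIMATION = ON;
```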
Were you interested in the Sané – Mané fight? 😀
Restoring a database to a higher version of SQL Server does not upgrade the database.
One can walk into a shop and find SQL 2012 running at SQL 2005 compatibility level. Yes, I’ve seen 2019 running at 2008 compatibility level. Hey, things get overlooked.
The First Responder Kit is best; this code also works (for me, at least)
Did you try restoring that database with compatibility level 2005, running on a 2012 server, back to a different 2005 or 2008 server? That will not work, and that is what Brent is talking about. Even if you run at a lower compatibility level on a newer SQL Server version, it will still upgrade the database such that it will no longer run on an older server version.
If you need to “restore” from a higher version of SQL server you can create a copy of the DB schema on the lower version of SQL and use SSIS (or other data movement tool) to export the data out of the higher version of SQL into the lower version of SQL. This is fairly ghetto and I don’t enjoy it, but there are some scenarios where this is necessary. This of course refers to data only as certain features may not exist in the lower version of SQL.
thanks sir, you are great.