One of the things I love about SQL ConstantCare® (which happens to be free this month, by the way) is that I can keep in touch with what y’all are actually doing on your production servers.
Today’s lesson was a complete shocker to me: some of y’all are changing the compatibility level on the master database.
Normally, when you install SQL Server, the master database gets a compatibility level that matches your SQL Server version. If you’re on SQL Server 2019, for example, the master database will be compat level 150, which is 2019.
But there are a surprising number of users – like, over 20% of the population overall – who have changed the master database to be an older compatibility level. For example, we’ve got one user with several SQL Server 2022 instances, but they’ve chosen to set the master database to compat level 100 – that’s SQL Server 2008!
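If you want to check where your own servers stand, here's a rough sketch of a query you could run (assuming the usual pattern where the native compat level is the major version number times ten, which holds from 2008's 100 through 2022's 160):

-- Compare each system database's compat level to the instance's native level.
select  name,
        compatibility_level,
        convert(int, convert(nvarchar(4), serverproperty('ProductMajorVersion'))) * 10 as server_native_level
from    sys.databases
where   database_id <= 4;  -- master, tempdb, model, msdb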
So I’m curious: if you’re one of those people who set the master database’s compat level to an older version … why? Are you running end user workloads in the master database, perhaps? Let me know in the comments, and it’s a judgment-free zone. I’m sure there are good reasons, I just … don’t know what they are.
Update 2023-06-16: Sounds like it’s in-place upgrades. I’m so saddened by this because I’ve told y’all not to do that, but in retrospect, I don’t think I’ve said it enough. I wrote that 8 years ago, and there are obviously a lot of new readers that have come in since then, and we should talk about it again. Noted.
21 Comments
All the ones I found had been in place upgrades from an older version. We quickly got them corrected, but I bet that is what you are seeing.
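In case it helps anyone, the correction itself is just an ALTER per database, something along these lines (a sketch, assuming a SQL Server 2019 instance, so the native level is 150):

-- Bring the system databases back to the instance's native compat level.
-- 150 assumes SQL Server 2019; use 160 for 2022, 140 for 2017, and so on.
alter database [master] set compatibility_level = 150;
alter database [model]  set compatibility_level = 150;
alter database [msdb]   set compatibility_level = 150;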
…well crap. I would assume that the system DBs would be updated as the SQL Server instance was upgraded, but apparently not. That's absolutely it, I'm certain; I just checked a few of mine and sure enough they're still at the previous level.
But just the master DB, not the others. I’m assuming that Microsoft saw too many people creating tables in the master DB and decided it wasn’t worth the hassle?
That’s weird since I just checked mine and the master compat level matches the SQL Server version (in my case, SQL 2017) and those were all in-place upgrades.
Oh nm… going back to my scripts from when I did the SQL 2017 upgrade, yeah, I updated all the compat levels afterwards to SQL 2017, including master and model. I'm guessing I didn't have to do that with tempdb and msdb since they were automatically upgraded as part of the in-place upgrade (last August seems like so long ago).
Hi Kelly,
> I’m guessing I didn’t have to do that with tempdb and msdb since…
Tempdb gets refreshed every time you reboot a SQL Server, so I think setting the compat level on tempdb would be a temporary setting at best.
Yep. Can confirm. Just found my 2019 data warehouse, which had been upgraded from 2016, still sitting at 130.
It might be some automated script they use to set all their databases to a specific compat level – and they forget to exclude the system databases 😀
Well yep, my system databases in instances running SQL 2022 are all still at SQL 2019 compatibility, and they were in-place upgrades from SQL 2019. Yet another argument in this VM-heavy world for building fresh and migrating the databases, especially if you are like me running Always On AGs and can just add replicas and fail over… Lesson learned… 🙂
Just upgraded from 2019 to 2022 two days ago – master didn’t get upgraded, but model and msdb did. I normally don’t do in place upgrades so I didn’t even think to look for this.
Ours are all the same version as the SQL Server which is what I would expect. We always start with a fresh install and migrate databases over to the new box. So, no surprises here.
We choose to upgrade the company to a new version of SQL as a whole. This can take a good 3-6 months across hundreds of servers.
We have tests that confirm daily that all databases are at X level, including system databases like master. So after an upgrade everything is as close as possible to before, to help separate new-version bugs from compatibility-level bugs.
Then as the upgrades progress we’ll update the test to say hey now keep the Test environment at X+1. Then the Stage environment. Then specific categories of Production. And so on. By the end, everything is upgraded, and a few months later all the compatibility levels match again.
So we change master to keep it in lockstep with everything else while we manage the rollout and eventual upgrade of everything. This reduces cognitive load – when you have a lot of servers of different types and few staff, you don't want to have to think too deeply about variations and compatibility level interactions.
We do the same thing with database scoped configurations, which are awesome because they let you keep a higher compatibility level but turn off all the broken features, as you know!
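For example, something in this direction (just a sketch; YourDatabase is a placeholder, and which switches you flip depends on what actually regressed):

use [YourDatabase];  -- placeholder name; run this in the database you're upgrading
-- Stay on the newest compatibility level, but opt out of specific behaviors:
alter database scoped configuration set legacy_cardinality_estimation = on;
alter database scoped configuration set batch_mode_on_rowstore = off;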
The final thing I’ll add is we throw some of this away on Managed Instances. Those ^*{*%{^+ randomly change the master/tempdb compatibility levels on their own as Microsoft does patching and failovers in the background. Thus you can’t control them, it’s a losing battle.
Hi Brent,
Can you discuss what the negative impacts of having the Master database at an older compatibility level might be? Would you expect degraded performance?
Just curious…
That’s outside of the scope of this blog post, but I’ll consider it for a future post. Thanks!
Just as others have mentioned, this is most likely due to in-place upgrades. I would expect the number of people actually changing the compatibility level to something old to be very low. I have a few in our shop.
but Brent… in place upgrades save you so much time and they are easier and I don’t have to ask my sysops team to do anything
I put WhineyVoice as a tag in there to show my sarcasm but the blog ate it. Interesting…I wonder what happened to that tag? Is it in heaven now?
We used to have applications that required the databases to be at compat 100 even though the servers were on 150. Now, these are just user databases, which made us switch the model database to 100 (we heavily depend on model since all the databases, plus new ones, share a template). The reasons I can think of are:
1. People might have views/procs/funcs created on model that run (or run better) on 100
2. Application checks for 100 compatibility level (session landing on master by default)
3. Someone said switch all of ’em to 100
4. Superstition 😛
WRT in-place upgrades, people – migrate to a new machine (or cluster); you'll get a clean slate doing that. In-place upgrades can deny you quality sleep.
I like abuse, so I’ll answer this one. I’m in a manufacturing company, where users don’t want to test, they just want it to work so they can do their main job of sending product out the door. We’re finally getting rid of SQL Server 2012, and still have multiple servers that we can’t touch because of application requirements (which is really “no staff and money available” from management perspective).
Will this eventually bite us horribly? Probably. Will it sit there and run for the next several years? Also probably.
As with many other people, I’m doing 3-4 jobs simultaneously and so I’m not willing to fight both IT and production management to get a migration process/window set up. So, when we have outage windows, we do in-place upgrades, and so far haven’t had serious issues.
Am I proud of this? No, but I can live with it.
I did a brand new installation of a Managed SQL Instance on Azure.
select convert(varchar(8), name) as name, compatibility_level
from sys.databases
where database_id < 5
name     compatibility_level
-------- -------------------
master   150
tempdb   150
model    150
msdb     150
Huh? I'm gonna Google this, but they should all be 160 IMVHO.