SQL ConstantCare® Population Report: Summer 2020

Ever wonder how fast people are adopting new versions of SQL Server, or what’s “normal” out there for SQL Server adoption rates, hardware sizes, or numbers of databases? Let’s find out in the summer 2020 version of our SQL ConstantCare® population report.

The most popular version of SQL Server is 2016. The combination of 2014, 2016, and 2017 makes up 75% of market share right now:

But companies are gradually adopting newer versions and replacing their older servers. Here’s how market share changed over the last 6 months:

SQL Server Version Adoption Rate

This quarter, shops continued to gradually replace their older versions:

  • SQL Server 2019: 4% of the market, up from 1% 6 months ago
  • SQL Server 2017: 24% of the market, up from 18% 6 months ago
  • SQL Server 2016: fairly static at 34%
  • SQL Server 2014, 2012, and 2008R2 all lost share to 2017 & 2019
  • Azure SQL DB and Managed Instances didn’t make any headway, still at 1% of the total population

My thoughts:

  • SQL Server 2016 is still the juggernaut, with 1 in 3 instances overall.
  • SQL Server 2019 still has less market adoption than SQL Server 2008R2.

So why is Azure SQL DB making so little headway here?

In this data, Azure SQL DB & MI still have less market adoption than SQL Server 2008, a 12-year-old and wildly unsupported version, let alone 2008R2.

I’ll be the first to point out that there’s a natural bias in the source of the data. Our SQL ConstantCare® service lets users send data in about their databases and get health & performance advice via email. SQL ConstantCare® isn’t a full picture of the SQL Server ecosystem: it’s marketed to you, my readers, and specifically the subset of readers who want an inexpensive second opinion about their database performance & health. Conceivably, people running in Azure SQL DB don’t need a second opinion about their database’s health: the patching, uptime, and backups are all handled by Microsoft, and you can’t get much better than that.

However, Azure SQL DB performance – just like any database’s performance – is still a real problem, and I know that from the consulting side of the business. You can scale a long way just by throwing money at it, but eventually your wallet starts to buckle, and you need to fix the queries and/or the indexes.

If I’m wrong – and I certainly could be, I just don’t know – Microsoft could fix this perception by publishing adoption data. SQL Server phones home by default, so they could easily publish adoption rates for different versions. (If they just published download numbers, that wouldn’t really mean anything – lots of folks download stuff they never use.) The fact that they don’t publish metrics makes me suspect that my report’s numbers do give a rough idea of the wider population: 2016 is the big behemoth right now, and the adoption rates for 2019 and Azure SQL DB just don’t hold a candle to it.

I still think platform-as-a-service databases like Azure SQL DB and AWS Aurora are the future king of the market – with the emphasis on future. And at this adoption rate, the future is still at least 5-10 years away.

I’ll ask you, dear reader: why aren’t you migrating to Azure SQL DB?


24 Comments

  • keith.brockman
    August 31, 2020 9:35 am

    I work for the State Government. Our instance has about 30 databases and uses FILESTREAM and FileTable. We have about 100 applications that go against this instance. To convert the 30 databases into schemas and change FILESTREAM and FileTable to something else would cost a lot of time.
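
    (For context, a FileTable is its own DDL construct with no equivalent in Azure SQL DB – a minimal sketch, with hypothetical names, of the kind of object that would need rework:)

      -- Requires FILESTREAM enabled on the instance plus a FILESTREAM
      -- filegroup in the database; neither exists in Azure SQL DB.
      CREATE TABLE dbo.Documents AS FileTable
      WITH
      (
          FILETABLE_DIRECTORY = N'Documents',  -- folder exposed through the file share
          FILETABLE_COLLATE_FILENAME = database_default
      );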

  • I won’t use it because of the lack of control and capabilities. If I had apps that just needed a place to store stuff, and the data wasn’t terribly important, I could possibly consider it, but I have no such apps. I have SQL Server instances in Azure, but they are running on dedicated VMs. The workloads on one of these couldn’t be run on SQL DB or a managed instance for a number of reasons, one of the big ones being the need for xp_cmdshell. It’s nice that the backups in SQL DB and managed instances are handled by Microsoft, but I want to retain control of this. I have no experience with a restore request via MS support, but I strongly suspect it is a lengthy process. Based on my experience with Premier Support, I have zero confidence. There have been a few occasions where I’ve had to do a point-in-time restore just to get a single row back, and I didn’t know exactly when it “disappeared” from the production table. I don’t know how something like that could be done in one of these vendor-managed environments.
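
    (For reference, on a self-managed instance that kind of surgical recovery is a restore of the full backup plus log backups under a new name, stopped just before the row vanished – a minimal sketch with hypothetical names and times:)

      -- Restore a copy of production alongside it, stopped at a point in time
      -- (database, file, and backup names are all hypothetical).
      RESTORE DATABASE SalesStaging
      FROM DISK = N'D:\Backups\Sales_Full.bak'
      WITH MOVE N'Sales' TO N'D:\Staging\SalesStaging.mdf',
           MOVE N'Sales_log' TO N'D:\Staging\SalesStaging.ldf',
           NORECOVERY;

      RESTORE LOG SalesStaging
      FROM DISK = N'D:\Backups\Sales_Log.trn'
      WITH STOPAT = '2020-08-31 09:15:00', RECOVERY;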

    • James – makes sense. So in your VMs, if you need to get a single row back from production and you don’t know exactly when it disappeared, how do you do it? (There are definitely ways to do it, just curious which approach you use most often.)

      • Fortunately, having to do this was not a common occurrence. In the SQL 2000 and earlier days it would be “tell me when you think…” and I would do restores in stages looking for what I needed. The last few times I had to do this were around 5-6 years ago, and I used fn_dump_dblog to look at transaction log backups. Though I didn’t know exactly when someone made the oops, I did have some knowledge of the last time the row was seen, so I had something of a place to start.
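
        (For context, fn_dump_dblog is an undocumented, unsupported function that reads a transaction log backup directly. A rough sketch of the call, with a hypothetical path – the function takes 68 parameters, so everything after the backup file name is passed as DEFAULT:)

          -- Hunt for deletes inside a log backup without restoring it
          -- (undocumented and unsupported; the path is hypothetical).
          SELECT [Current LSN], Operation, AllocUnitName, [Begin Time]
          FROM fn_dump_dblog(
              NULL, NULL, N'DISK', 1, N'D:\Backups\Sales_Log.trn',
              DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
              DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
              DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
              DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
              DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
              DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
              DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT)
          WHERE Operation = N'LOP_DELETE_ROWS';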

        • James – hmm, so if you had some knowledge of the last time it was seen, why not use Azure SQL DB’s point-in-time recovery?

          • Would that allow me to do the restore to a different database? When this stuff happened I never wanted to restore the entire database, as I was looking for a single row in most cases. I was also doing these restores in a staging area away from the prod server. I have an SP that will script an insert statement, and as long as there were no data type problems I would script that insert, run it on prod, and then clean up the staging DB.

          • Yep! That’s the only way Azure SQL DB restores work, actually – you have to restore to a new database.

          • Ah! So it works like VM restores in Azure, where it won’t overwrite the existing VM. This can be good or bad depending on the situation. Good to know. One potential gotcha is with “the last time it was seen”: that doesn’t mean it was the last time it was modified. But I was lucky and never got bit by only being able to restore stale data. Not yet, anyway.

  • Lack of feature parity and size constraints – our biggest database is pushing 14TB, and that size isn’t supported by almost any Azure DB offering (assuming we could find a way to get all that data over there without downtime in the first place). We also use features that aren’t supported by Azure DB – Service Broker, SQL Agent jobs, SSIS (not all on the same box) – plus there’s the inability to keep a backup long-term or bring it back from Azure to on-prem if things go badly later.

    • Samuel – yeah, makes sense. You’re definitely an outlier there in terms of the combo of Service Broker and 14TB.

    • I recently watched a course about Azure storage, and one of the areas was long-term backup storage – something like ten years, from what I recall. In the demo, the instructor picked the master DB to restore; there were no issues and all was well. Hmm.

      If a DB service in Azure could accommodate a DB as large as yours and larger, do you think a seeding approach would be viable? In the past I’ve had to add new subscribers and initialize them with a backup. It may be possible to use the Azure import process to ship that initial backup and then replicate any changes to the cloud. Seems plausible, but it could get ugly depending on a number of factors.
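
      (For reference, that “initialize with backup” pattern is set up at the publisher roughly like this – a sketch with hypothetical publication, server, and path names:)

        -- Seed a transactional replication subscriber from a restored backup
        -- instead of a snapshot (all names are hypothetical).
        EXEC sp_addsubscription
            @publication       = N'SalesPub',
            @subscriber        = N'CloudSubscriber',
            @destination_db    = N'Sales',
            @subscription_type = N'push',
            @sync_type         = N'initialize with backup',
            @backupdevicetype  = N'disk',
            @backupdevicename  = N'\\backups\Sales_Full.bak';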

  • The size limitation on Managed Instance was our first concern. Our team wasn’t prepared to learn Azure Data Factory in time to refactor all of our cross-server SSIS jobs and replace the xp_cmdshell code (which would have been great).
    The biggest concern from upper management was this:
    Backups from a managed instance can only be restored to another managed instance. They cannot be restored to a SQL Server instance or to Azure SQL Database.
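
    (For context, the one native backup a managed instance does let you take yourself is a COPY_ONLY backup to blob storage, roughly like this with hypothetical names – and that file is exactly what the quoted restriction applies to:)

      -- Manual COPY_ONLY backup from a managed instance to blob storage;
      -- a credential for the container URL must already exist
      -- (storage account and container names are hypothetical).
      BACKUP DATABASE Sales
      TO URL = N'https://mystorageacct.blob.core.windows.net/backups/Sales.bak'
      WITH COPY_ONLY, COMPRESSION;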

    • Surely this wording is incorrect: “A database backup created on a managed instance can only be restored on ANOTHER Azure SQL Managed Instance.” Surely they don’t really mean to say a backup cannot be restored to the SAME managed instance from which it was taken?

  • When I evaluated Azure SQL DB (which was several years ago), the performance of my application, which involves a lot of very small queries, was absolutely horrible. And that was ignoring all the parts of the application I had to disable just to get rough performance numbers (things like SQL CLR).

  • Based on what I have seen at many (very) large companies across the world, the main reason behind staying with older versions or not going to DBaaS is laziness and the fear of change and losing control. In the traditional IT-manager sense, controlling a large on-premises inventory equals control and power. Yes, it’s that dumb, actually. Even though they can’t really manage their on-premises inventory either – they lack the expertise and people – they “control” a large mass of insecure, unpatched, old legacy environments on-premises, so their control is just an illusion.

  • As a single developer on my application, I migrated to Azure SQL DB earlier this year for my 100+ client databases, and so far I’m very happy. The additional burden of monitoring my previous SQL Server installation was just way more than I could handle on my own. Performance and uptime have been very good, which is why I eventually let my ConstantCare subscription lapse. A lot of the main concerns I had on a daily basis are now abstracted away on the Azure SQL DB platform.

  • Keeping us from migrating to Azure SQL Database: 3rd-party applications using SQL Server features unsupported by Azure SQL Database (SaaS). We don’t see value in using PaaS because we’d still need the same FTEs to manage the applications.

  • George Walters
    September 1, 2020 4:29 pm

    I read that someone said 14 TB for a database.

    Azure SQL Database Hyperscale exists, and goes to 100 TB.

    The other pieces – like Service Broker, MDS, or OS access – are understandable. It’s a refactor of code to go to Azure SQL DB.
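
    (For reference, moving an existing Azure SQL DB to Hyperscale is a one-line change – a sketch with a hypothetical database name and compute size; note the conversion is one-way:)

      -- Convert an existing Azure SQL DB to the Hyperscale tier
      -- (names are hypothetical; there is no converting back).
      ALTER DATABASE Sales
      MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_8');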

  • From my perspective, the main reason we’re on a 2016 version is as simple as: it works, and migrating to a newer version involves both risk and cost. Therefore, we will not move to a newer version until there is a compelling reason. (Yes, newer versions have new features, but so far we are doing okay without them.)

    And the reason we don’t move over to Azure SQL DB is partially the same, but also that the cost is a bit unknown. I can convince my boss to change something, maybe increase memory on the VM, if I can show the cost of it and explain the gains. When it comes to Azure SQL DB, it feels like the cost can increase outside my control. For my employer/customer, a slightly higher but predictable cost is better than a potentially smaller but unpredictable one.

    I think a lot of companies are in the same state of mind.

