Why You Should Stop Depending On SQL Server Versions In Code

It Used To Be

That when you wrote scripts that had to work across different versions of SQL Server, you were pretty safe.

Microsoft would release a new version, or a Service Pack with something new in it, and that new stuff wouldn’t start showing up in older versions.

That’s not so true anymore. Microsoft has been doing some awesome work to add new features and backport them to older versions. It’s doubly true when you’re dealing with cloud products (I’m looking at you, Azure SQL DB), where who knows what’s going to be there or not, what the version numbers will be, or anything like that. It’s the Wild West.

Which means if you write the kind of scripts that might throw errors if they go looking for a DMV or column that might not be there, you constantly have to figure out which major and minor versions you can go look for stuff in, often using dynamic SQL.

Nowadays

You still have to use dynamic SQL. That much is obvious. But it doesn’t make sense to do stuff like this anymore:
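A sketch of the kind of version check being warned against here (the cutoff values are illustrative, not taken from the original post):

```sql
/* Illustrative only: branching on the major version number.
   SERVERPROPERTY('ProductVersion') returns a string like '14.0.3045.24'. */
DECLARE @version INT =
    CONVERT(INT, PARSENAME(CONVERT(SYSNAME, SERVERPROPERTY('ProductVersion')), 4));

IF @version >= 14 /* SQL Server 2017 or later */
    PRINT 'Safe to query the newer columns';
ELSE
    PRINT 'Skip them';
```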

Why? Because all your code is going to do something like this:
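Something along these lines, where every new release means another hand-maintained build-number check (the build numbers below are approximate and used only to illustrate the pain):

```sql
/* Illustrative sketch: dynamic SQL gated on major version and build number.
   Every backport means revisiting these magic numbers by hand. */
DECLARE @version INT =
    CONVERT(INT, PARSENAME(CONVERT(SYSNAME, SERVERPROPERTY('ProductVersion')), 4));
DECLARE @build INT =
    CONVERT(INT, PARSENAME(CONVERT(SYSNAME, SERVERPROPERTY('ProductVersion')), 2));
DECLARE @sql NVARCHAR(MAX) = N'SELECT qs.execution_count';

IF (@version = 14 AND @build >= 3015) /* roughly SQL Server 2017 CU3 */
OR (@version = 13 AND @build >= 5026) /* roughly SQL Server 2016 SP2 */
OR (@version > 14)
    SET @sql += N', qs.max_spills';

SET @sql += N' FROM sys.dm_exec_query_stats AS qs;';
EXEC sys.sp_executesql @sql;
```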

This is gonna be a real pain in the butt to keep up with. Stuff can get released in Cumulative Updates that’s brand spanking new.

Here’s an example from 2017 CU3. That got backported to 2016 SP2.
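Assuming the example in question was the spill columns added to sys.dm_exec_query_stats (an assumption on my part, matching that CU3/SP2 timeline), you can see them only on patched builds:

```sql
/* Returns the spill columns only on builds that have them */
SELECT ac.name
FROM sys.all_columns AS ac
WHERE ac.object_id = OBJECT_ID(N'sys.dm_exec_query_stats')
AND ac.name LIKE N'%spills';
```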

Awesome! Unless your code looks like… that. Up there.

What’s Better?

It’s a whole lot more reliable (and a whole lot less work) to explicitly check for new things in DMVs, and use that to inform your code what to look for.

For example, this just makes sure all four columns are in dm_exec_query_stats:
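A sketch of that check, assuming the four columns in question are the spill columns:

```sql
/* Set a flag if all four spill columns exist in sys.dm_exec_query_stats */
DECLARE @new_columns BIT = 0;

IF 4 = (SELECT COUNT(*)
        FROM sys.all_columns AS ac
        WHERE ac.object_id = OBJECT_ID(N'sys.dm_exec_query_stats')
        AND ac.name IN (N'total_spills', N'last_spills',
                        N'min_spills', N'max_spills'))
    SET @new_columns = 1;
```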

Then your dynamic code becomes a bit simpler, too:
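For example, the dynamic SQL only has to care about one flag, not a pile of version numbers (again assuming the spill columns are the feature being checked for):

```sql
/* The flag comes from counting the columns in sys.all_columns,
   as described above -- inlined here so this runs on its own */
DECLARE @new_columns BIT =
    CASE WHEN 4 = (SELECT COUNT(*)
                   FROM sys.all_columns AS ac
                   WHERE ac.object_id = OBJECT_ID(N'sys.dm_exec_query_stats')
                   AND ac.name IN (N'total_spills', N'last_spills',
                                   N'min_spills', N'max_spills'))
         THEN 1 ELSE 0 END;

DECLARE @sql NVARCHAR(MAX) = N'SELECT qs.execution_count';

IF @new_columns = 1
    SET @sql += N', qs.total_spills, qs.last_spills, qs.min_spills, qs.max_spills';

SET @sql += N' FROM sys.dm_exec_query_stats AS qs;';
EXEC sys.sp_executesql @sql;
```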

Now it doesn’t matter if Microsoft adds or removes columns, or changes names or definitions down the line. That stuff would normally cause problems, or create a lot of busy work keeping dynamic SQL up to date. Now it doesn’t.

Hooray

Now, there’s still a whole lot of version checking in our code in the First Responder Kit. Cleaning it up isn’t a priority, but not writing more code like that is.

And this won’t work for everything, unfortunately. For example, if there’s a bug in certain versions and you need to skip those, you still need to rely on version numbers. There are likely other examples where it won’t work out there, too.

Thanks for reading!

Brent says – another thing I love about this approach is that when Microsoft backports a feature to an older SQL Server version by way of cumulative update, you don’t have to go revise your scripts right away. If the field is there, you’ll use it, whenever it happens to show up.


3 Comments

  • Steffan Rhoads
    June 14, 2018 11:27 am

Thanks for providing a much better way to account for the features you need. There will still be side-effects.

  • That is really useful. Thanks.

    However, I have found it more useful to version control my database objects, and store a DB version number (specific to the DB, not related to SQL Server version). The calling app or external DB code can get the DB version (or release #, if you prefer) with a very generic SP. Then the calling code can decide how to make calls depending on the version or release number.

    There are other advantages to versioning DB objects and keeping them in source control.

    QA can test your DB release against various SQL Server versions to verify the DB release works and publish what those compatible versions are.

    • Erik Darling
      June 24, 2018 6:07 pm

      Jeff — yes, that’s quite convenient when you’ve got full control over the environment, and the luxury of QA teams to test things.

      When you’re releasing scripts to the general public, like we do with the First Responder Kit, it’s quite a different story.

      Thanks for stopping by!
