Building SQL ConstantCare®: I Love Unit Tests

You know what my biggest regret is about sp_Blitz?

No unit tests.

Seriously. Back then, I wrote a bunch of T-SQL checks to see if a server had memory configuration errors, corruption, obscure trace flags, etc. Over time, sp_Blitz got bigger and bigger, and we started getting code contributions from strangers. Things got even more challenging when we open-sourced it under the MIT License and put it on GitHub – it was even easier for people to throw in more code.

And we did the best job we could of testing it, but…people make mistakes, including us. A bug fix for one issue would have an unforeseen side effect and break something else. Or even worse, it would silently break checks – making it so we no longer alerted someone if their server had an issue.

When we started building SQL ConstantCare®, I didn’t wanna make that same mistake. Every check we coded as a PostgreSQL function needed to be automatically testable going forward, every single time we had a commit. I have no idea how you accomplish something like that – I mean, I know the theory, but I wouldn’t even begin to know what tools to use. Fortunately, Richie does – so the way our arrangement works is that I write the query, and I supply him with test data to satisfy different edge cases that should pass or fail.

Example: the remote DAC rule

Take the remote DAC rule I blogged about last week. It sounds so simple: if it’s not enabled, you need to turn it on. Well, it’s just a little more complex than that (there’s a sketch of the logic after the list):

  • If sp_configure ‘remote admin connections’ reports 0, we want to advise you to turn it on
  • If sp_configure ‘show advanced options’ reports 0, then we need to have you turn that on to make the change (and possibly turn it back off afterwards if you want)
  • However, if any sp_configure options have value_in_use <> value, then we need to skip this advice for now, because we don’t want you running RECONFIGURE since it would put those other sp_configure changes into effect
  • Except for a few sp_configure options where certain variances are normal (like min memory, which seems to jump back and forth between 0 and 16 all the time)

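For the curious, here’s a rough sketch of that decision logic in JavaScript. The function name, row shape, and the exact option names are made up for illustration – the real rule lives in a PostgreSQL function:

    // Hypothetical sketch of the remote DAC rule's decision logic.
    // Each sp_configure row looks like: { name, value, value_in_use }.

    // A few options routinely drift between value and value_in_use
    // (like min memory), so variances there shouldn't block the advice.
    const NORMAL_VARIANCES = new Set(['min server memory (MB)']);

    function remoteDacAdvice(configRows) {
      const byName = new Map(configRows.map(r => [r.name, r]));
      const remoteDac = byName.get('remote admin connections');
      const showAdvanced = byName.get('show advanced options');

      // Remote DAC already enabled: nothing to recommend.
      if (!remoteDac || remoteDac.value_in_use !== 0) return null;

      // Any other pending sp_configure change makes RECONFIGURE unsafe,
      // so skip the advice for now.
      const pendingChanges = configRows.some(r =>
        r.value !== r.value_in_use && !NORMAL_VARIANCES.has(r.name));
      if (pendingChanges) return null;

      // Recommend enabling remote DAC, noting whether show advanced
      // options has to be flipped on first.
      return {
        enableRemoteDac: true,
        enableShowAdvancedFirst: !showAdvanced || showAdvanced.value_in_use === 0,
      };
    }
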
There are a bunch of different test cases in there – and hey, that’s one of the cool things about having all this data up in the cloud. It’s easy for me to go spelunking through diagnostic data (looking only at the folks who gave us permission to use their data to improve their recommendations, of course – more on that in another post). In a matter of minutes, I could find examples of all of these (sketched as test fixtures after the list):

  • Someone who had remote DAC turned on (so they shouldn’t get an alert)
  • Someone with remote DAC off, show-advanced on, and no pending changes
  • Someone with remote DAC off, show-advanced on, but pending changes to, say, MAXDOP
  • Someone with remote DAC off, show-advanced off, … you get the point

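To give you a feel for it, here’s roughly what those scenarios look like as test fixtures. The shape here is hypothetical – the real test data comes from actual (permissioned) diagnostic uploads:

    // Hypothetical fixtures mirroring the scenarios above. Each case
    // pairs a set of sp_configure rows with the expected outcome.
    const cases = [
      {
        name: 'remote DAC already on - no alert',
        rows: [
          { name: 'remote admin connections', value: 1, value_in_use: 1 },
          { name: 'show advanced options', value: 0, value_in_use: 0 },
        ],
        expectAdvice: false,
      },
      {
        name: 'remote DAC off, show advanced on, nothing pending - alert',
        rows: [
          { name: 'remote admin connections', value: 0, value_in_use: 0 },
          { name: 'show advanced options', value: 1, value_in_use: 1 },
        ],
        expectAdvice: true,
      },
      {
        name: 'remote DAC off, pending MAXDOP change - skip for now',
        rows: [
          { name: 'remote admin connections', value: 0, value_in_use: 0 },
          { name: 'show advanced options', value: 1, value_in_use: 1 },
          { name: 'max degree of parallelism', value: 4, value_in_use: 8 },
        ],
        expectAdvice: false,
      },
    ];

Each case can then be fed through the rule logic (like the remoteDacAdvice sketch above) and compared against its expectAdvice flag.
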
I can then run my Postgres function against these different clients, make sure it produces the right results, and then hand it over to Richie to build the automated tests. The testing is done with JavaScript, and I am positive of that for two reasons: Richie told me, and the unit test files end with .js. I found them as part of writing this blog post, opened a couple of them, saw that they weren’t database queries, and did my best Grandpa Simpson.

Unit testing costs us money today,
and saves us money tomorrow.

The very first time you build something, it takes longer. As a small business, that cost us more time (and money, in terms of salary) to get to minimum viable product (MVP). Each completely-brand-new feature (like the wait stats analysis we’re working on now) takes longer to build, too. However, when something goes out the door up to the cloud, we’re more confident that it works today, and that it’ll still be working five years from now despite tons of unrelated code changes.

Seven years ago, when we started this company, I wasn’t a believer in database unit testing. Today, I’m a raving fan.

If you wanna learn how to apply the same concepts in your SQL Server work, check out Alex Yates’ upcoming database DevOps class.


Richie says: In my opinion, when creating a new feature, that feature isn’t complete until tests are created. The tooling to write automated tests for application code is fairly simple – we’ve been doing it for years – but creating tests for database code is a horse of a different color. In the SQL Server world, we have tSQLt, and in the Postgres world there are similar tools, but for SQL ConstantCare, we built a small database testing engine using Node.js, Mocha, and Chai. These are the same tools that we use to test the serverless functions. We’re not doing anything super fancy like faking or mocking: we just load data, run a function, and verify the result. We even plugged these tests into our automated build system. And yes, we know that our database tests are technically integration tests and not unit tests, so you don’t have to blow up the comments explaining the difference to us.
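
To make that concrete, here’s a stripped-down sketch of what one of those tests might look like – the fixture loader, function name, and client ID below are invented for illustration, not our actual code:

    // A stripped-down sketch of a Mocha/Chai database test. The fixture
    // loader, function name, and client ID are stand-ins for the real thing.
    const { expect } = require('chai');
    const { Pool } = require('pg');

    const pool = new Pool(); // connection settings come from the environment

    // Hypothetical fixture loader: inserts a known set of diagnostic rows
    // for a fake client before the rule runs.
    async function loadFixture(db, fixtureName) {
      // INSERT the fixture's sp_configure rows here.
    }

    const FAKE_CLIENT_ID = '00000000-0000-0000-0000-000000000000';

    describe('remote DAC rule', () => {
      it('fires when remote DAC is off and nothing is pending', async () => {
        await loadFixture(pool, 'remote-dac-off-no-pending');

        // Run the same Postgres function production uses (name assumed).
        const { rows } = await pool.query(
          'SELECT * FROM check_remote_dac($1)', [FAKE_CLIENT_ID]);

        // The rule should produce exactly one recommendation.
        expect(rows).to.have.lengthOf(1);
      });
    });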

When I first joined the company, I begged Brent to let me write tests for sp_Blitz. Having tests around sp_Blitz would have been a huge undertaking, and we had other things to do, like build SQL ConstantCare. You might face the same situation. The decision to create unit/integration tests around your product might be a bigger investment than your company is willing to make. It’s a tough call. Just make sure to point out the testing and deployment risks to your stakeholders.


11 Comments

  • Who Tests The Tests? The Tests?

    • It’s a sarcastic question, obviously, but it turns out there’s a real answer. When we light up new rules, we gradually roll them out to clients that we know are facing the issue that the rule is designed to catch. Then, we progressively roll it out to more and more clients, monitoring to make sure it’s working as intended.

      For more on how it works, check out this post on feature flags/toggles:
      https://martinfowler.com/articles/feature-toggles.html

      • Or, as in the story of the mouse project, you have a project that’s designed to trigger your tests. You’re sort of doing that by cherry-picking data that you have and running your tests against it, but a quicker and safer method would be to have a test database that always has the same data in it, so you eliminate variables in your testing. You can compare today’s test suite output to yesterday’s test suite output, and if it’s not the same, you know you’ve broken one of your tests.
        http://www.thecodelesscode.com/case/182

        • Right, that’s exactly what Richie’s testing is doing. It loads test data, runs a rule, checks to make sure the output is correct, then loads a different set of test data, repeats the process. He has several different sets that should always pass, or always fail, each rule.

  • Do you have any information on the database testing engine? I am curious how you are performing these unit tests.

    • Sure, what kinds of questions do you have? (The more specific you can be, the easier it is for Richie to answer.)

      • I was just hoping for an overview of how Richie runs and set up the test engine. Do you have any blog posts on this topic?

        • Richard – ah, no, we tend to keep this a SQL Server-focused blog. If we ever wanted to spin up a PostgreSQL consulting side of the house, we’d probably start a different site & blog.

  • Andrej Kuklin
    May 15, 2018 5:00 am

    I use NBi http://www.nbi.io/docs/home/ for ETL and SQL tests. It’s easy to use. Devs familiar with NUnit will find a lot of similarities.

    • Richie Rump
      May 15, 2018 8:31 am

      Interesting. Does this just test the output of two queries or does it do more? How do you manage the data for the expected results?

      • Andrej Kuklin
        May 15, 2018 9:37 am

        It’s all in the docs.

        http://www.nbi.io/docs/compare-equivalence-resultsets/
        Expected results can be another SQL query, a CSV file, an embedded result set (if you expect just a couple of rows with a couple of columns), or an XML file.

        You can define key columns for comparing the result sets, and different comparison rules, like tolerances or columns to ignore.

