Automated Tuning And The Future Of Performance Tuners

Before you get too excited

This isn’t a dive into any of the new automated tuning features of SQL Server 2017.

I’m interested in them because Microsoft thinks they can put us out of business with them, but…

Fat chance.

See, a lot of the automated performance tuning features assume you EVER HAD GOOD PERFORMANCE IN THE FIRST PLACE.

I mean, sure, it may have seemed okay when your database was 5, 10, or 20 GB, but now it’s 200 GB and things have just been getting worse for months. Or years.

These features are cool, but they don’t go in and fix your crappy code, your wrong settings, or get you off that VM with 8 GB of RAM that your VM Admin swears by the grace of his body pillow girlfriend is all a SQL Server will ever need.

After all, he read a book once.

Best of the worst

Our typical customer is worried about…

Keeping the lights on:

  • If they’re on the right hardware
  • If they can or should move to the cloud
  • If they’re meeting RPO/RTO
  • If their HA/DR strategy is correct

Decoding hieroglyphics:

  • How do I validate server settings?
  • How do I figure out what my problems are?
  • How do I find my worst queries?
  • How do I tune my indexes?
  • Bonus points: How do I interpret output from your free scripts?
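For that "find my worst queries" question, the usual starting point is the plan cache. Here's a minimal sketch using `sys.dm_exec_query_stats` — ordering and column choices are illustrative, not prescriptive (and the free scripts mentioned above do this far more thoroughly):

```sql
-- Top 10 cached statements by total CPU time.
SELECT TOP (10)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

Keep in mind this only sees what's currently in the plan cache — anything evicted, recompiled, or run with `OPTION (RECOMPILE)` won't show up accurately.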

Some customers are more savvy, have already identified pains around parameter sniffing, and want help with those. They're tired of restarting the server, freeing the proc cache, or recompiling queries. But the thing is, so much other stuff usually gets uncovered that fixing parameter sniffing ends up being almost a side effect of the other changes.
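If you've lived with parameter sniffing, you've probably seen (or run) the usual band-aids. A quick sketch — `dbo.SomeProc` is a hypothetical name, and all of these are temporary fixes, not solutions:

```sql
-- Nuke the entire plan cache. Everything recompiles; CPU spikes follow.
DBCC FREEPROCCACHE;

-- Narrower: mark one procedure for recompilation on its next run.
EXEC sp_recompile N'dbo.SomeProc';

-- Or per-statement, trading recompile CPU for plan stability:
-- SELECT ... FROM dbo.SomeTable WHERE SomeCol = @Param OPTION (RECOMPILE);
```

None of these address *why* one plan is bad for some parameter values — which is usually the index, statistics, or code problem that gets found along the way.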

Some examples:

  • An index adjustment took the bad plan choice away
  • It wasn’t parameter sniffing, it was a scalar function
  • There was a blocking problem
  • Non-SARGable predicates buried four nested views deep
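To illustrate that last bullet: a non-SARGable predicate is one where a function wrapped around the column keeps the optimizer from seeking an index. A classic example, using a hypothetical `dbo.Orders` table:

```sql
-- Non-SARGable: YEAR() on the column forces a scan of every row.
SELECT OrderId
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2017;

-- SARGable rewrite: a range on the bare column can use an index seek.
SELECT OrderId
FROM dbo.Orders
WHERE OrderDate >= '20170101'
  AND OrderDate <  '20180101';
```

Now bury the first version four nested views deep, and you can see why nobody noticed.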

The future

So, while I’m totally keen on automated performance tuning, and SQL Server 2017 being able to compare and correct bad plan choices, I’m still not worried about my job.

We still talk to people on SQL Server 2008–2012 who are locked in because of vendor requirements, or other reasons, and who just aren't going to see those new features in the near future.

Plus, those features still assume performance was good in the first place.

Thanks for reading!


Comments

  • Hi Erik – gotta say, I really enjoy your posts. What do you think about systems like SAP Hana? I know that there is definitely a learning curve with a system like that (and large costs associated with it), but we have deployed a small instance and I gotta say…it’s a game changer for us. We’re able to deploy normalized tables with hundreds of millions of records and minimal tuning and get unbelievable response times.

    • Erik Darling
      June 8, 2017 9:15 am

      Hey James,

      You know, I’d love to have a reason to use HANA for something (I can’t just spin up an instance that size in our AWS lab without a good reason). It seems really cool, and I’ve talked to a couple people using it for GPS-enabled device coordinates ingestion. They’re really happy with it.

Judging by how quickly the major cloud providers have all introduced instances sized specifically for the technology, I'd assume it's catching on pretty well.

      Someday, perhaps!

  • Gavin Harris
    June 9, 2017 5:19 am

    I’m not going to fear Skynet until they can code a tool that can ask and understand the answer to “So, what are you *actually* trying to do with that code?”

  • ” VM Admin swears by the grace of his body pillow girlfriend”

    It’s like you can see him.

