How to Change SQL Server Settings with the Scientific Method

Ask your scientist if Service Broker is right for you.

When you’re trying to fix a problem (or even just researching something you’re curious about), do what scientists do.

1. Ask a question. “Why is this database so slow?”

2. Do background research. Hit Google, check Stack Exchange, maybe even read Books Online.

3. Construct a hypothesis. “I think this database is slow because I don’t have Lock Pages in Memory enabled.”

4. Test your hypothesis with an experiment. “On Saturday night during the change control window, I’ll enable Lock Pages in Memory. After that change, I believe that Page Life Expectancy will be higher during our normal business hour load periods – it’s currently 300, and I expect to see it go up.” Pick the exact metric you want to watch, and explain – in advance – how you expect that metric to change.

5. Analyze your data and draw a conclusion. “I measured PLE, but it still hovers around 200-300 during our weekday peak times. Therefore, enabling Lock Pages in Memory was not the solution I needed.”

6. If it didn’t get the result you expected, put it back. This is the part most folks miss – we make changes willy-nilly when we’re troubleshooting, and we leave behind all kinds of crazy sp_configure options, trace flags, and registry settings. The next time there’s a problem on the server, it’s hard to know which settings we changed on purpose – and which unrelated ones we just button-bashed.
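Steps 4 and 5 are easier to do honestly if the metric is scripted in advance rather than eyeballed. As a rough sketch, Page Life Expectancy can be read straight from the DMVs – the Buffer Manager object gives the server-wide number, and on NUMA boxes the Buffer Node object breaks it out per node:

```sql
-- One-off check of Page Life Expectancy (seconds), server-wide
SELECT counter_name,
       cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page Life Expectancy'
  AND object_name LIKE '%Buffer Manager%';
```

Logging this on a schedule during business hours gives you real before-and-after data for the conclusion in step 5, instead of a single anecdotal reading.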

I can hear the experienced DBAs in the crowd screaming out that we should never run experiments in production, and that’s somewhat true. In a perfect world, we’d all have exact replicas of our production environment, and the ability to run production-quality loads. We don’t. Let’s start by taking the first step – making sure our production changes have the effect we expect.
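One low-effort safeguard for step 6: snapshot the server’s settings before the change window, so “put it back” has something concrete to compare against. A sketch using sys.configurations – the table name dbo.SettingsBaseline is made up for illustration:

```sql
-- Before the change window: capture every sp_configure setting
-- (dbo.SettingsBaseline is a hypothetical table name)
SELECT GETDATE() AS captured_at, name, value, value_in_use
INTO dbo.SettingsBaseline
FROM sys.configurations;

-- After the experiment: show anything that differs from the baseline
SELECT c.name,
       b.value_in_use AS baseline_value,
       c.value_in_use AS current_value
FROM sys.configurations AS c
JOIN dbo.SettingsBaseline AS b ON b.name = c.name
WHERE c.value_in_use <> b.value_in_use;
```

This only covers sp_configure-level options – trace flags and registry settings still need their own checklist.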


5 Comments

  • I don’t often test my changes,
    but when I do I prefer to do it in Production.
    Stay on call, my friends.

  • Jack Whittaker
    January 7, 2015 7:53 am

    Before constructing a hypothesis, it would be a good idea to spend a few hours with Paul Randal’s online Pluralsight course about waits and queries, SQL Server: Performance Troubleshooting Using Wait Statistics. It covers how to begin troubleshooting SQL Server performance problems, applicable for developers and DBAs from SQL Server 2005 onwards.

  • 1. Calculate the cross product of all possible settings using sys.configurations (combinatorics is cool and kind of scientific)
    2. Run performance tests with each possible combination of settings
    3. Check whether the heat death of the universe has occurred yet
    4. If you made it to step 4, run tests of statistical significance against the performance test data (statistics are also cool and kind of scientific)
    5. Discard any uninteresting results
    6. Publish the rest
    7. Repeat step 3
    8. If you made it this far, hide
    9. Unread this list

  • Brent, you have written often about the (possible) evils of setting LPIM without proper cause or testing, but I haven’t found any writeups you’ve done about the potential benefits of Large Pages for the buffer pool, which is presumably another major benefit gleaned from using LPIM and T834. Could you touch on that? And I’d also like to emphasize my gratitude for all that you do for the SQL community! Your efforts and wisdom are greatly appreciated.

