Why Not Just Create Statistics?

Here at Brent Ozar Unlimited

We have a proud tradition of not blaming index fragmentation for everything. There are times when you should deal with it, but they’re probably not 5% and 30% and 1000 pages. But that’s not what this blog post is about. I’m hoping to clarify why we’re more interested in up-to-date statistics, and also why statistics outside of indexes aren’t really the most helpful thing. If they were, we’d all just create statistics and every query would magically blaze from select to offset without a care in the world.

Statistics: It’s What’s Inside That Counts

Statistics are what SQL Server uses to figure out how many rows to expect from accessing a data structure. You can do some things that fundamentally break this: using functions in joins or WHERE clauses, using local variables or OPTIMIZE FOR UNKNOWN, or using table variables without recompile hints. And a sniffed parameter can let SQL run a query without guessing at all. It already guessed once. No tag backs. As you may imagine, this is important information for SQL to have for running your queries optimally.
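To make that concrete, here’s a minimal sketch of two of those estimate-breakers. The dbo.Users table and its columns are hypothetical stand-ins:

-- Wrapping the column in a function hides it from the histogram,
-- so SQL Server falls back on a hard-coded selectivity guess.
SELECT u.Id
FROM dbo.Users AS u
WHERE UPPER(u.DisplayName) = N'BRENT';

-- A local variable isn't sniffed at compile time, so SQL Server
-- uses the density vector (an "average row" guess) instead of the histogram.
DECLARE @name nvarchar(40) = N'Brent';
SELECT u.Id
FROM dbo.Users AS u
WHERE u.DisplayName = @name;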

Indexes: Pride and Vanity

Indexes of the nonclustered variety contain subsets of your table or view’s data. Clustered ones are your table or view data ordered by the clustering key(s). Focusing on the nonclustered variety, they’re the “and the [band name]” to your clustered index’s “[lead singer name]”, and they’re great for providing SQL with a more appropriate data structure for your query.

If you don’t need to select all the columns, and you do need to filter, join, order by, or aggregate a column not in the key of the clustered index, nonclustered indexes get a solo after the chorus. Give the drummer some.

Nonclustered indexes will, under normal circumstances, get a statistics object created with rolling density information for the key columns going left to right, and a histogram on only the leftmost column.
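You can eyeball both pieces with DBCC SHOW_STATISTICS. Assuming a hypothetical index IX_DisplayName_Location on dbo.Users (DisplayName, Location):

-- Density vector: rolling densities for (DisplayName) and (DisplayName, Location).
DBCC SHOW_STATISTICS ('dbo.Users', IX_DisplayName_Location) WITH DENSITY_VECTOR;

-- Histogram: up to 200 steps, built on the leftmost key column (DisplayName) only.
DBCC SHOW_STATISTICS ('dbo.Users', IX_DisplayName_Location) WITH HISTOGRAM;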

I AM I SAID

With that out of the way

Why do we care more about statistics being updated than indexes being fragmented? Largely because reading pages with some empty space from a fragmented index is oh-so-very-rarely the root cause of a performance issue. Especially if those pages are already in memory. Out-of-date statistics can allow SQL to continue to make some really bad guesses, and keep giving you a lousy execution plan no matter which way you tune your query.

The bigger your table gets, the worse the problem gets. Prior to 2016, if you don’t turn on Trace Flag 2371, about 20% of your table’s rows need to change before an automatic statistics update kicks in. For a 100 million row table, that can be a long way off. Poor cardinality estimates here can really sink you. Rebuilding indexes for a 100 million row table is a B-U-M-M-E-R.
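If you want to see how far along a table is, sys.dm_db_stats_properties exposes the modification counter. A sketch, with dbo.BigTable as a stand-in name:

-- Pre-2016 only: make the auto-update threshold dynamic instead of ~20%.
DBCC TRACEON (2371, -1);

-- How stale are the stats, and how much has changed since the last update?
SELECT s.name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.BigTable');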

Log Shipping? Mirroring? Availability Group? Good luck with that.

Crappy server? Low memory? Slow disk? Dead man walking.

You may ultimately spend more time and expend more server resources defragmenting indexes than your queries will spend reading extra pages from fragmented indexes. Rebuilding or reorganizing large indexes can be a special kind of brutal.

Consider the process

Read a bunch of index pages with sys.dm_db_index_physical_stats to figure out if there’s fragmentation, then reorganize or rebuild based on what it tells you (a typical survey query is sketched after this list).

  • Reorganize is online but single-threaded, and can take FOREVER on big tables, especially if you’re compacting LOBs.
  • Rebuild is offline and single-threaded in Standard, online and potentially parallel in Enterprise, but you’d better hope you’re patched up so you don’t corrupt anything.
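For reference, the survey step usually looks something like this sketch. The thresholds are the oft-quoted defaults, not a recommendation, and note that even LIMITED mode has to read pages to answer the question:

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON  i.object_id = ips.object_id
    AND i.index_id = ips.index_id
WHERE ips.page_count >= 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;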

Is that worth it? Every night? For every index on every table in every user database? Only if you can prove it.

The one upside to rebuilding is that it also updates statistics with a full scan. Think about this the next time you say something like “rebuilding the index fixed the problem”; you may have an epiphany on the way.
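If the epiphany needs a nudge: the statistics half of a rebuild is available on its own, without touching a single index page. Again, dbo.BigTable is a stand-in name:

-- Same statistics quality you'd get from a rebuild, minus the rebuild.
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;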

Wait for it.

Wait for it.

Wait for it.

Statistics with no indexes

SQL Server doesn’t make easy work of getting information about your Statistics, or finding out which statistics get used. Even at the query level, you have to use a spate of Trace Flags to find out what gets loaded and looked at. Cached plans don’t fare much better.
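For the curious, here’s the sort of spate I mean. These are undocumented trace flags (and QUERYTRACEON needs elevated permissions), so treat this as a sketch for a test box, not production guidance:

-- Legacy cardinality estimator: which stats were considered interesting (9292)
-- and actually loaded (9204)? 3604 sends the diagnostic output to the client.
SELECT u.Id
FROM dbo.Users AS u
WHERE u.Reputation > 10000
OPTION (QUERYTRACEON 3604, QUERYTRACEON 9292, QUERYTRACEON 9204);

-- On the new (2014+) cardinality estimator, trace flag 2363 plays a similar role.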

No wonder everyone cares about indexes and their fragmentation. Microsoft has made information about them easy and abundant, while Statistics are kept hidden in the basement next to piles of soft bones and a bowl of hot blood.

Head rush moment: SQL may use information from histograms outside of the index it chooses for cardinality estimation.

Back to earth: If you just create a bunch of statistics instead of indexes, you’re (at best) using your Clustered Index for everything (which is still bad), or you’re using a HEAP for everything (which is usually worse). You’re still generally better off creating good indexes for your workload. They’ll get statistics objects created and associated with them, and if SQL thinks another column is interesting, it will create a single-column statistics object for it, as long as you haven’t turned off auto create stats.
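Those auto-created single-column stats are easy to spot, because they get a _WA_Sys_ name prefix. A quick inventory query:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS stats_name,
       c.name AS column_name
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
    ON  sc.object_id = s.object_id
    AND sc.stats_id = s.stats_id
JOIN sys.columns AS c
    ON  c.object_id = sc.object_id
    AND c.column_id = sc.column_id
WHERE s.auto_created = 1
ORDER BY table_name, stats_name;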

Sure, you can put on your black cloak and goat mask and create some multi-column or filtered statistics, but in the words of a wise man (Doug), you end up with more stats to maintain and understanding query behavior gets more difficult.
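If you do don the goat mask, the incantations look like this. Table, columns, and filter are all hypothetical:

-- Multi-column statistics: density across both columns, histogram on Location only.
CREATE STATISTICS s_Location_Age
ON dbo.Users (Location, Age);

-- Filtered statistics: histogram built over just the rows matching the predicate.
CREATE STATISTICS s_HighRep
ON dbo.Users (LastAccessDate)
WHERE Reputation > 10000;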

Filtered statistics suffer from a problem: they don’t automatically update based on the filtered row count, but rather the table row count. Imagine you have a 100 million row table, and your filtered index is on 1 million rows. All million of those rows might change, but the statistics on that index won’t update, because 1 million is not 20% of 100 million. You’ll have to update the statistics manually (see the sketch below), or rebuild the filtered index.
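The manual refresh ends up looking like this, using the hypothetical filtered statistics from the sketch above:

-- The table-level threshold won't notice churn inside the filter, so do it yourself.
UPDATE STATISTICS dbo.Users s_HighRep WITH FULLSCAN;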

Multi-column statistics are hardly a replacement for a multi-column index, and it’s not like you get an enhanced histogram that includes the second column. It’s just like a normal histogram. All you get is the density information for the columns you throw in there. Boo hiss.

Moral of the story (B-B-B-B-B-B-BENNY WITHOUT THE JETS)

Indexes are a really important factor for performance, but index fragmentation very rarely is. Statistics are super helpful when they’re not out of date, and getting them up to date is much easier on your server’s nerves. Though not perfect, I’d rather take my chances here. Updating statistics can also cause a bad execution plan to get flushed out of the cache. On their own they can sometimes help queries, but you should only end up here after you’ve really tuned your indexes.

Unless you can establish a metric that makes nightly index defragmentation worthwhile, don’t jump to it as the default. Try just updating statistics. You may find that nothing at all changes, and you now have many extra hours a night to do maintenance. Like run DBCC CHECKDB. If you think index fragmentation is a performance problem, try corruption sometime. That’s not what Brent meant when he said “the fastest query is one you never make.”
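A bare-minimum version of “try just updating statistics” is a one-liner, though dedicated maintenance scripts give you more control over sampling:

-- Uses default sampling, and skips statistics with no modifications since the last update.
EXEC sys.sp_updatestats;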

Thanks for reading!


16 Comments

  • 1.) not to mention: plus, you can save those limited write cycles on your $olid $tate $torage for doing more valuable work for a longer time.

    2.) not to mention: most routines that would purport to rebuild indexes “judiciously” start with a trip to sys.dm_db_index_physical_stats, which generates all kinda physical reads…iatrogenic performance problems, hello!

    3.) from the BOL for that same function:

    Fragmentation alone is not a sufficient reason to reorganize or rebuild an index. The main effect of fragmentation is that it slows down page read-ahead throughput during index scans. This causes slower response times. If the query workload on a fragmented table or index does not involve scans, because the workload is primarily singleton lookups, removing fragmentation may have no effect.

    https://msdn.microsoft.com/en-us/library/ms188917.aspx

    does your workload even warrant this activity?

    i point these out because these were the next steps in my particular journey of understanding, which started with reading the “proud tradition” article referenced in the first line. knowing how to think properly (or at least not improperly) about the kinds of things that can (and cannot) be attributed to index fragmentation allowed so many other things to fall right into place: it felt like putting on a pair of performance-tuning glasses for the first time. so great post, 2012-Brent!

    but the real reason for this long comment:

    great post, 2016-Erik! Statistics are the next phase in the epic journey chronicled above, yet they remain a pretty murky subject for me, admittedly, still. But you hit the nail squarely on the head (and then buried it in the basement) with:

    “No wonder everyone cares about indexes and their fragmentation. Microsoft has made information about them easy and abundant, while Statistics are kept hidden in the basement next to piles of soft bones and a bowl of hot blood.”

    which is why i can’t wait to check out the new course and grab me a pair of stats glasses! =)

    • Erik Darling
      July 14, 2016 3:28 pm

      Dear Mike,

      Your comments are better than my blog posts.

      Statistically yours,
      Erik

      • Between the basement comment, band analogy and goat mask suggestion I have to say this is one of the more entertaining posts you’ve done Erik, and you’ve set the bar pretty high for yourself since you joined. I’m interested enough in SQL Server generally that even the deepest-dive article is never really a chore, but you go that extra mile.

        mmiike has taught me the word “iatrogenic” though, which I’m now gonna be using every opportunity I get (and there are so many opportunities in IT!).

      • that’s because your blog posts are filled WITH (NOLACK) of inspiration!

        ps. nessie says hi.

  • Alberto Gastaldo
    July 15, 2016 2:02 am

    Great post Erik!
    I also consider statistics one of the most important factors, rather than index frags or other stuff like max degree of parallelism… :-)

    My doubt is: when auto create stats is set to ON, column statistics are created when a column is “touched” by a query.
    When I find many “WA_” statistics objects, I guess there are many “SELECT … WHERE = XX” queries with no useful index to satisfy them.
    In this case, what’s the use of this object? As far as I know, these statistics won’t help the query at all. They will only help if I JOIN this table with another, because the optimizer can guess how many rows will be returned and choose the appropriate join operator in the plan.
    Is my guess correct?

    • Erik Darling
      July 15, 2016 1:38 pm

      Not necessarily! There may be a useful index for the query, but SQL may not have a histogram built for EVERY column that a join or where clause touches, and it may want one for some further cardinality estimation. SQL can use whatever statistics objects it wants to do that, not just ones that are associated with the index it’s using.

      • Alberto – When SQL Server has to consider or hash a column, but discovers that column’s selectivity is no better than a sewage treatment plant’s, I think SQL Server should persist that column’s crappy statistics … somewhere…. That way SQL Server doesn’t have to waste time wading through a statistical cesspool, too many times…. WA_t about keeping those putrid stats there? Wouldn’t their existence help SQL Server reduce the number of times it has to get its toes all brown and stinky?

  • Learned something new while smiling the whole way through, rock on!

  • Brent had an article on not wasting time rebuilding indexes along with a couple of movies. I responded to the article and thought I’d respond to this one in a similar fashion.

    I thought Brent was drinking bong water through two straws when I first saw the suggestion of not bothering to rebuild indexes at all. I thought I proved that during one of my normal “restore from prod to refresh dev” evolutions. During such evolutions, I truncate 4 very large log tables (and, yes, I’m in the process of moving them to a different database) and that leaves a huge amount of empty space in the MDF. Being a Dev environment and knowing that everyone wants their own database for major projects, I have to be careful with space, so I normally do a shrink and then rebuild the indexes.

    To test Brent’s theory, I didn’t do the index rebuilds one of those times and almost immediately got complaints about performance. Proof positive (or so I thought) that Brent was, in fact, drinking bong water.

    Fast forward a couple of weeks and I finally put 2 and 2 together on a major blocking problem that I had every Monday. I remembered that we do index rebuilds on Sunday nights. With vivid memories of Brent bent over a full bong with two straws in his mouth, I decided to do a little experiment.

    I noticed that CPU and disk IO decreased a little bit each day after the index rebuilds and, after checking for differences in the number and types of jobs run on Mondays (there were none), as well as the fact that major blocking on Monday was followed only by occasional blocking on Tuesday and virtually no blocking from Wednesday on, I disabled the Sunday index rebuild job.

    I was up bright and early the following Monday morning and me and PerfMon became best of friends for most of the day. There was virtually no blocking and CPU and Disk IO had continued to drop.

    To make a much longer story shorter, I haven’t done ANY index rebuilds on my production box, which suffers both heavy batch jobs and high OLTP usage, since Sunday, 17 January 2016. I started off with CPU averaging 13% across 32 CPUs during “normal” daytime activity, and now it’s down to an average of 4%, and I did nothing else to improve the performance of any queries during that time.

    Yes, disk usage increased a bit owing to some of the page splits that have occurred over time but not nearly as much as I expected. I’m thinking the system formed what I’m calling a “natural fill factor” and it’s certainly less than 20% additional if I had spent the time to figure out which indexes needed a fill factor of 80%. It’s also opened up a good part of Sunday nights to have the system do other things.

    I’ll also stipulate that I DO maintain stats even if that particular system needs some work (my predecessors downloaded it from the internet) and that my Clustered Indexes for non-static tables DO follow the best practice of being narrow, unique, ever increasing, and immutable. If your tables aren’t in a similar condition, then you may get explosive growth of your clustered indexes by not maintaining them.

    But, for me, it’s been 7 months since I’ve done any index maintenance on my main and heavily used production box and I’ve seen nothing but good come out of it. Virtually no blocking. CPU dropped from an average of 13 to 4% across 32 CPUs, IO has dropped substantially, and my log files and MDF files are no longer exploding due to index maintenance.

    Hats off to Brent for the great and very tasty bong water! 😉 And thank you, Erik, for this great post on the incredible importance of maintaining stats!

    • Thanks, Jeff. That means a lot, considering how much your posts have helped me over the years. Bong water and beer popsicles for everyone.

    • Thought I’d post an update to all of this… all things change, such as 100GB databases growing to 1 and 2 Terabyte databases in the last 3 years. If for no other reason, you will need to do some index maintenance on those indexes that suffer from extremely low page density as a result of page splits. I’ve given a two hour presentation (a couple of times now) to introduce people to the problems that occur with standard “best practices” and introduce what can be done about it (using a different take on index maintenance) without running into all the problems I did with index maintenance.

      One of the cool things that I demonstrate is how to set up Clustered Indexes that have GUIDs as the key so that you can go literally MONTHS with ZERO page splits and ZERO fragmentation, as well as having them be more memory efficient than you (and it was a surprise to me, as well) ever imagined. The funny part is, they cover this in Books Online but everyone gets stuck on using the 10/30 recommendation (that people call “Best Practices”) and that’s the real reason why GUIDs seem to fragment so badly and quickly. If you do index maintenance on them the right way, they cause ZERO fragmentation for months! And, no… I’m NOT talking about shifting to NEWSEQUENTIALID… that actually fragments more than properly maintained random GUIDs do.

      And, no…. for other reasons, I’m not recommending that everyone rush out and change all of their index keys to GUIDs. I’m saying that if you’re stuck with GUIDs, they are actually one of the best datatypes to prevent page splits/fragmentation and don’t even suffer supposed “good” page splits when maintained properly.

      The next presentations will be at the 2019 Cincinnati and Colorado Springs SQL Saturdays and I’m working on setting up a remote presentation with the Baltimore PASS chapter.

  • srinivasa.korlakunta
    November 12, 2018 9:05 pm

    Why are the statistics on an index not used by the query, even while the index itself is being used?

  • Hi
    I got a script from a colleague, where they do an update statistics before a reorganize, and then an update statistics after. Does it make any sense to do an update statistics to start with?

    • Actually, it does. It turns out that index maintenance can work a bit faster if you update stats beforehand. Of course, I don’t generally use REORGANIZE because it’s not as lightweight as people think it is. For example, I have a 146GB Clustered Index that had only 12% logical fragmentation… when I did a REORGANIZE on it, it caused my relatively tiny log file to explode (IIRC) from 20GB to about 227GB and took an hour and 21 minutes to do so. I did the same using a minimally logged REBUILD on a restore of the database (prior to the Reorganize) and it only caused the log file to grow to about 37GB and completed in about 12 minutes.

      My recommendation is don’t use REORGANIZE as a part of your general maintenance. Use it only to compress LOBs and only when necessary. And move your LOBs out of row. MS started defaulting the location of LOBs (the pieces that would fit on a row) to in-row (i.e., in the Clustered Index), which kills your query performance (in all cases) and causes (unless you NEVER update your LOB columns and only do inserts) a huge amount of totally unnecessary page splits, the resulting massive fragmentation, totally unnecessary index maintenance, and totally unnecessary log file writes due to the massive amount of page splits, and then you end up using REORGANIZE to try to fix the mess.
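      The out-of-row setting described above is a table option. A sketch, with a hypothetical table name (existing in-row values only move the next time they’re updated):

      -- Push large value types (varchar(max), nvarchar(max), varbinary(max), xml)
      -- off the clustered index pages and into LOB storage.
      EXEC sys.sp_tableoption 'dbo.MyTable', 'large value types out of row', 1;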

