Stats Week: Only Updating Statistics With Ola Hallengren’s Scripts

I hate rebuilding indexes

There. I said it. It’s not fun. I don’t care all that much for reorgs, either. They’re less intrusive, but man, that LOB compaction stuff can really be time consuming. What I do like is updating statistics. Doing that can be the kick in the bad plan pants that you need to get things running smoothly again.

I also really like Ola Hallengren’s free scripts for all your DBA broom and dustpan needs. Backups, DBCC CHECKDB, and Index and Statistics maintenance. Recently I was trying to only update statistics, and I found it a little trickier than I first imagined. So tricky, in fact, that I emailed Ola, and got a response that I printed and framed. Yes, the frame is made out of hearts. So what?

What was tricky about it?

Well, the IndexOptimize stored procedure has default values built in for index maintenance. This isn’t a bad thing, and I could have altered the stored procedure, but that would be mean. I set about trying to figure out how to get it to work on my own.

First, I tried only passing in statistics parameters.
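The code sample didn't survive in this copy, so here's a reconstruction of that first attempt using Ola's documented IndexOptimize parameters (the @OnlyModifiedStatistics flag is my assumption about what a stats-only call would include):

```sql
/* First attempt (reconstructed): pass only statistics parameters.
   Because @FragmentationLow/Medium/High keep their default values,
   this still performs index maintenance. */
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y';
```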

But because of the default values, it would also perform index maintenance. Sad face. So I tried being clever. Being clever gets you nowhere. What are the odds any index would be 100% fragmented? I mean, not even GUIDs… Okay, maybe GUIDs.
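The "clever" idea, reconstructed: set both fragmentation thresholds to 100, so no index could ever qualify for maintenance. Something like:

```sql
/* "Clever" attempt (reconstructed): nothing is ever 100% fragmented, right?
   This errors out: 100 isn't a valid threshold value here, and the two
   fragmentation levels can't be the same. */
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLevel1 = 100,
    @FragmentationLevel2 = 100,
    @UpdateStatistics = 'ALL';
```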

But this throws an error. Why? Two reasons: first, 100 isn't a valid value here, and second, you can't use the same fragmentation level twice. That would screw up how commands get processed; the routine wouldn't know whether to use @FragmentationMedium or @FragmentationHigh. This makes sense.

Okay, so I can’t use 100, and I can’t set them both to 99. What to do? Let’s bring another parameter in: @PageCountLevel.
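Reconstructed, with illustrative threshold values (the 98/99 pair is my assumption; the integer-max page count comes from the text below):

```sql
/* Workaround attempt (reconstructed): legal thresholds, plus a page count
   no table will ever reach. It runs, but IndexOptimize still scans
   sys.dm_db_index_physical_stats looking for qualifying indexes. */
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLevel1 = 98,
    @FragmentationLevel2 = 99,
    @PageCountLevel = 2147483647,  -- integer maximum
    @UpdateStatistics = 'ALL';
```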

This seems safe, but it’s still not 100%. Even with the integer maximum passed in for the page count, it still felt hacky. Hackish. Higgity hack. The other part of the equation is that I don’t even want this thing THINKING about indexes. It will still look for indexes that meet these requirements. If your tables are big, you know, sys.dm_db_index_physical_stats can take foreeeeeeeeeeeeeeeeeeeeeeeeeeeever to run. That seems wasteful, if I’m not going to actually do anything with the information.

Hola, Ola

This is where I emailed Ola for advice. He responded pretty quickly, and here’s how you run stats only updates.
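The code sample was stripped from this copy; the answer, which a commenter also reproduces below, is to pass NULL for all three fragmentation actions (the @OnlyModifiedStatistics line is an optional addition I've included here):

```sql
/* Per Ola: NULL out all three fragmentation actions so index maintenance
   is never even considered, then ask for statistics updates only. */
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = NULL,
    @FragmentationHigh = NULL,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y';
```

With the fragmentation actions nulled out, IndexOptimize skips the sys.dm_db_index_physical_stats scan entirely, which was the whole point.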

Moral of the story

NULLs aren’t all bad! Sometimes they can be helpful. Other times, developers.

Thanks for reading!

Brent says: Subtitle: How many DBAs does it take to think of NULL as a usable option? Seriously, we all banged our heads against this one in the company chat room.


41 Comments

  • Oh, geez. This deserves a bazillion tweets just for the code to update stats only using Ola’s scripts.

  • I can’t unsee this post =)

  • I’ve added this to my growing list of scripts that I hope someday to make use of. Thanks Erik!

  • Erik. Can you comment on some of the common cases where you would want to update Stats much more often than reorganize and/or rebuild indexes? Thanks!

    • It would be easier to comment on the only two scenarios that I might prefer to reorganize or rebuild indexes: at 40% and 80% fragmentation, respectively.

  • Scott Caldwell
    April 22, 2016 3:16 pm

    Ola’s documentation is really good. It’s very cut n’ paste worthy. This may even be one of the examples 😉

  • Why not use @StatisticsSample = 100? This will use a full 100% sample on the table.

    • A fullscan sampling is good for testing just how low you can make production’s sampling without changing query plans. Beyond that, I think a fullscan sampling is overkill. Usually, SQL Server does a good job at determining an appropriate sampling rate for you. But in order for SQL Server to pick a good sampling rate, the column in a statistic’s or index’s first ordinal position should be the least dense (the most selective) column.

      • Yeah. There were rants from a mathematical statistician about that too (on Stack Overflow, IIRC). In short: a 100% sample is not needed most of the time. It might be useful only in a few rare situations.

        • Kimberly Tripp has some scripts on determining the skew of the data inside the indexes. In my experience, a higher sample size helps deal with large amounts of skew. I would use default sample size and pick a specific larger sample size for your worst skewed indexes.

  • Hello. Can you suggest how to exclude a single index from maintenance? I have one index that gets rebuilt and takes a lot of time; I will rebuild it later.

  • I’m laughing at myself here. I use Ola’s script, and when I encountered this dilemma, I came up with a separate script to just handle statistics. This solution is much more straightforward.

  • Alin Selicean
    April 27, 2016 5:37 am

    Hi Brent

    If I knew that this trick would have saved your tables (and your heads, of course), I would have made it a blog post 🙂

    I have been using Ola’s scripts this way for more than two years. Due to a limited time window, I also split the jobs into multiple ones based on the duration of the largest tables. This way, the largest ones have their own job and can run concurrently with the rest of them. Because of this, I also extended the info that’s stored in the XML column (ExtendedInfo) to include SPIDs, so I can differentiate between what I call threads.

    It’s all in the parameters 🙂

  • One thing I wish Ola would add to the statistics part of his code would be to update all stats on a table if he’s going to update any of them. It doesn’t take much longer, and it’s a lot better than updating many stats on a very large table individually.

    I’ve used his MaintenanceSolution for 6 years, and it has made my work much simpler.

  • I scripted out and added an UpdateStatistics job to Ola’s script from code on his website. I’ve been using it for over a year now on all of my production servers, and it works like a charm.

    TL;DR: RTFM

  • Mark Freeman
    July 14, 2016 1:22 pm

    Add @StatisticsResample = N’Y’, @StatisticsSample = NULL to do RESAMPLE instead of FULL.

  • “Restoring tempdb since GETDATE()”

    Oh man, this cracks me up every time. And yea, great information, thanks :D…Ola

  • Noob question: can maintenance such as this, or DBCC, be run against a remote database, and log the results to a local table? Ola’s scripts log to the CommandLog table in the target database. However, I’d like to designate the server/table to log to. You know… centralize.

    • You’d want to look into something like Multi Server Administration, I think. Some monitoring tools, like SQL Sentry, do job scheduling, though I’m not sure you’d get the centralized results you’re looking for.

      Another sort of clumsy option would be to set up linked servers, and use a view to union all your CommandLog tables together. Just be careful if there are XML columns — they have to be cast to NVARCHAR(MAX).

      Someone out there may mention PowerShell. I don’t object to that; it’s just not my thing.

        • Based on your answer, I guess this isn’t a typical approach. If we just run Ola-type scripting on each server independently, what would you suggest we do to centralize the viewing of the results of all the maintenance?

        • Let’s start by figuring out what you’re trying to get out of this!

          Do you just want success/failure, or are you trying to do some analysis? CommandLog only gives you start and end times for each command, which is valuable, but not necessarily actionable information.

          So like, I can’t imagine that knowing msdb took 2 seconds to back up is good information, but I can see knowing that it got backed up is good information.

          • We want to implement best practice for maintenance. Then centralize notification of success. If there are any problems, I want to set off the klaxon.

            We don’t have a dedicated DBA resource, so the tools/help I’ve received from you guys, and many around the interwebs have been much appreciated. Keep up the good work!

          • The best practice here would be to set up job failure alerts via email for all critical jobs, and have them sent to a distribution list email address. That way multiple people are notified and can react if one person is on vacation or whatever.

  • So, how would you know if DBCC reports any corruption? The emails from Agent would only send if the job fails, not if the job reported issues with the database, right? My guess is you would need to review the output files for reports of issues. Unless I’m missing something huge. That is where something like SQL Sentry makes its case, no?

    • Great question!

      No, Ola’s job will fail with (some of) the error text if you hit a database with corruption. You will want to make sure your job logs the output to a file for you to review (this is set up under the Advanced tab of the Job Step in the Agent job properties).

      I would encourage you to test stuff like this out. It’s easy enough to find example corrupt databases to download.

  • How can we use Ola’s script to update statistics for a single table? What parameter do we need to add to this command?
    EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = NULL,
    @FragmentationHigh = NULL,
    @UpdateStatistics = 'ALL'

    Generally we use “update statistics .;”. If you can help me out to run update statistics on a table.

  • I think LOB compaction is really not fragmentation. We could skip that when reorganizing indexes on large tables with some 3 billion rows.

  • Any idea how Ola’s program prioritizes which tables to run stats updates on first?
    Also, it seems to run in sequence, one table at a time.

    • I’d be doing you a disservice if I told you. You should read through Ola’s code and figure it out.

  • Brent: In my opinion, your honesty in this post gives you more credibility.

  • Nothing better than reading the “tried this… didn’t fly… tried that… didn’t fly… here’s what worked”. It is powerful to see we all go through trials before finding the solution. And thank you for the solution!

  • Gregory Loeffler
    August 22, 2019 7:16 am

    I know this is an old post, but it’s one I recently came across and implemented on a number of servers. I’ve dug into the CommandLog and came across something that I am not sure how to handle. Any ideas on best practices for the below?

    Currently I have a job that does just the index maintenance and no updating of statistics; I handle the stats updates separately, based on this article. Since index reorganizations do not update stats like rebuilds do, I am left with <1% sample rates on many indexes until my update stats job runs later in the week. If I add @UpdateStatistics = 'INDEX' and @StatisticsSample = 25 to my index maintenance job, will this just update the stats on the reorganized indexes and skip the rebuilt ones, since those get the equivalent of a fullscan when rebuilt? I don’t want to override a fullscan with a 25% sample if possible, but I also don’t want to leave a <1% sample on indexes until my update stats job runs. My update stats job is set to update ALL stats, not just column or index stats.

  • Niels Broertjes
    May 4, 2022 12:50 pm

    Great stuff! You saved me some hours trial and error. Thanks for posting this!

