How to Configure Ola Hallengren’s IndexOptimize Maintenance Script

If you’re a production database administrator responsible for backups, corruption checking, and index maintenance on SQL Server, try Ola Hallengren’s free database maintenance scripts. They’re better than yours (trust me), and they give you more flexibility than built-in maintenance plans.

However, the index maintenance defaults aren’t good for everyone. Here’s how they ship:

  • FragmentationLevel1 = 5%
  • FragmentationLevel2 = 30%
  • FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE'
  • FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE'

The defaults on some of these parameters are a little tricky:

When an index is between @FragmentationLevel1 (5%) and @FragmentationLevel2 (30%), the scripts will perform the actions listed in @FragmentationMedium, in order. First it’ll try an index reorganize, which is an online operation. If the index’s design doesn’t allow for that, IndexOptimize will try to rebuild the index online. If the index’s design or your SQL Server edition doesn’t allow for that, it’ll perform the last resort – an offline index rebuild.

When an index has higher fragmentation than @FragmentationLevel2 (30%), the scripts will try the actions listed in @FragmentationHigh – first an online rebuild, then if that’s not possible, an offline rebuild.

These defaults aren’t dangerous or deadly, but they do have drawbacks.

Index maintenance changes data pages and creates transaction log records. This means larger transaction log backups, which at first doesn’t sound too terribly harmful. However, if you’re using database mirroring, transaction log shipping, AlwaysOn Availability Groups, or anything else that copies your logged transactions around, you’ve put yourself behind. In one case, I saw an all-solid-state server generate ~25GB of transaction logs in under 3 minutes – all of which had to be copied out to the various reporting and disaster recovery servers. This creates an RPO/RTO problem without really solving a performance problem.

Index maintenance changes data pages, too. This also sounds harmless, but if you’re doing differential backups or storage replication, boom, you just inflated your storage and bandwidth costs. The larger your database becomes, the more you’ll start to investigate these recovery options, and the more index maintenance starts to backfire.

Offline index rebuilds cause blocking. Starting at just 5% fragmentation, the script defaults have the potential to take tables offline and block queries. I take kind of an extreme view on this: I would rather not take tables offline when there’s not a human being around to watch what’s happening.

So here are the defaults I prefer:

  • FragmentationLevel1 = 50%
  • FragmentationLevel2 = 80%
  • FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE'
  • FragmentationHigh = 'INDEX_REBUILD_ONLINE'

Note that I raised the fragmentation thresholds by a lot, and I removed the offline rebuild operations. I’m not saying I never rebuild indexes offline – I just don’t want to do that by default, with nobody around. When I’ve got tables that are heavily fragmented, and I’m convinced that a rebuild will solve the problem, AND I can only rebuild them offline, then I want to run the stored procedure manually when I can be around to keep an eye on it.
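If you call the stored procedure directly, that works out to something like this. Treat it as a sketch, not gospel – it assumes you installed Ola’s scripts in master (adjust the database name if you put them elsewhere), and @LogToTable is optional but handy:

    EXECUTE master.dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLevel1 = 50,
        @FragmentationLevel2 = 80,
        @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE',
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE',
        @LogToTable = 'Y';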

How to Change the IndexOptimize Defaults

You could just edit the IndexOptimize creation script. This way, whenever you execute it without any parameters, it will inherit the defaults you set at creation time. If you maintain IndexOptimize across a lot of servers, and you keep a single master version in a central location, this works.

But you have to be doggone sure to remember to change those defaults every single time you download a fresh version from Ola. (You won’t. You’re too busy reading blogs.)

The safer way is to leave his script the way it is, and instead change the parameters in the SQL Agent jobs that run IndexOptimize.
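Ola’s installer creates those jobs as steps that call sqlcmd, so the place to add your parameters is the query inside the job step. The step ends up looking something like this – the exact step text may vary by version, so treat this as illustrative:

    sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -b -Q "EXECUTE dbo.IndexOptimize @Databases = 'USER_DATABASES', @FragmentationLevel1 = 50, @FragmentationLevel2 = 80, @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE', @FragmentationHigh = 'INDEX_REBUILD_ONLINE', @LogToTable = 'Y'"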

You should probably be doing this anyway, because you should create two separate Agent jobs: one to do your normal lightweight optimizations, and a second one for deeper optimizations (like offline rebuilds). Some folks like weekday jobs that only use low-impact techniques on highly fragmented tables (if something manages to hit 80% fragmentation on a Tuesday, go ahead and rebuild it online), followed by more invasive techniques on the weekends – something like the sketch below.
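Here’s one way that pair of jobs might look. The thresholds here are illustrations, not prescriptions – tune them to your environment:

    -- Weekday job: light touch, online operations only
    EXECUTE master.dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLevel1 = 50,
        @FragmentationLevel2 = 80,
        @FragmentationMedium = 'INDEX_REORGANIZE',
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE',
        @LogToTable = 'Y';

    -- Weekend job: lower thresholds, more invasive work allowed
    EXECUTE master.dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLevel1 = 30,
        @FragmentationLevel2 = 50,
        @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE',
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @LogToTable = 'Y';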

Just keep your backup schedule in mind – if you do full backups weekly on Saturday, followed by differential backups the rest of the week, don’t do your index rebuilds on Sunday. That instantly has the effect of inflating your differential backups all week long. Do the index maintenance first (like Saturday morning) followed by the full backup Saturday evening.

128 Comments

  • Thanks for your expertise on this. I’m currently using his scripts on new 2012 and 2014 servers to which I’m migrating older, relatively small databases. One DB still in the testing stage on a new server is extremely volatile, as it is constantly rolling up detailed daily data to weekly and eventually to monthly. I love being able to head off the problems before they occur. The only issue I ever have with being proactive is that everyone sees me having few issues and uses that as a basis to assign more work. It’s a good position to be in, though.

  • At my current gig, there are configuration tables on each SQL Server instance for the parameters that Ola’s stored procedures take. These get deployed as part of a database project.

    A row for each DB gets inserted into the config tables by an Agent job with some sane default values.

    Then Agent jobs execute wrapper stored procedures that read the config tables and pass the parameters to Ola’s index defrag, backup, etc. stored procedures.

    In this way we can customize the index defrag and backup behavior for each DB as needed by updating the config tables without ever having to edit any Agent jobs (except in the case of changing the job schedule from the defaults) or modifying Ola’s scripts.

    It works pretty well, though I can’t say I personally deserve much credit for it! For someone with a small number of databases, it might be overengineering.
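    For illustration, here’s a stripped-down sketch of the pattern – the table, column, and procedure names are made up, not our real ones:

        -- Hypothetical per-database configuration table
        CREATE TABLE dbo.IndexMaintenanceConfig
        (
            DatabaseName sysname NOT NULL PRIMARY KEY,
            FragmentationLevel1 int NOT NULL,
            FragmentationLevel2 int NOT NULL,
            FragmentationMedium nvarchar(max) NULL,
            FragmentationHigh nvarchar(max) NULL
        );
        GO

        -- Hypothetical wrapper: walks the config table and passes
        -- each row's settings through to Ola's IndexOptimize
        CREATE PROCEDURE dbo.RunIndexMaintenance
        AS
        BEGIN
            DECLARE @DatabaseName sysname, @Level1 int, @Level2 int,
                    @Medium nvarchar(max), @High nvarchar(max);

            DECLARE config_cursor CURSOR LOCAL FAST_FORWARD FOR
                SELECT DatabaseName, FragmentationLevel1, FragmentationLevel2,
                       FragmentationMedium, FragmentationHigh
                FROM dbo.IndexMaintenanceConfig;

            OPEN config_cursor;
            FETCH NEXT FROM config_cursor
                INTO @DatabaseName, @Level1, @Level2, @Medium, @High;

            WHILE @@FETCH_STATUS = 0
            BEGIN
                EXECUTE master.dbo.IndexOptimize
                    @Databases = @DatabaseName,
                    @FragmentationLevel1 = @Level1,
                    @FragmentationLevel2 = @Level2,
                    @FragmentationMedium = @Medium,
                    @FragmentationHigh = @High,
                    @LogToTable = 'Y';

                FETCH NEXT FROM config_cursor
                    INTO @DatabaseName, @Level1, @Level2, @Medium, @High;
            END

            CLOSE config_cursor;
            DEALLOCATE config_cursor;
        END
        GO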

  • Firstly, thanks to Ola for publishing these scripts. They are now part of my standard config when I take over an environment.

    Thanks to you as well for giving this some thought. I know the published best practice values are somewhat arbitrary and I’ve had some questions about thresholds to use in practice.

  • Thanks for the article. I have been working with the IndexOptimize job from Ola and noticed the PageCountLevel parameter defaults to 1000. As a result many of my indexes are not getting rebuilt, offline or online. Any thoughts on this? I am planning on rebuilding some of these indexes manually during our scheduled downtime.

    • Hi Camille,

      Ola skips these by default because there’s typically no benefit to defragmenting small indexes. In many cases, rebuilding a small index leaves it with the same fragmentation it had before, which leaves your job just constantly rebuilding them over and over again. I personally think the 1,000-page limit is conservative, and on most systems I’d raise it a bit higher.
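      If you want to experiment with raising it, it’s just one more parameter on the call – for example (pick a threshold that fits your workload; 5,000 here is purely illustrative):

          EXECUTE master.dbo.IndexOptimize
              @Databases = 'USER_DATABASES',
              @PageCountLevel = 5000, -- skip indexes with fewer than 5,000 pages
              @LogToTable = 'Y';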

      Kendra

  • Jeffrey Langdon
    December 17, 2014 11:58 am

    I totally agree with the offline rebuilds. Unfortunately our 2008 R2 Editions are Standard. ;(

  • Along the lines of keeping a master version in a central location – I personally have a central “utility” SQL Server instance with my core DBADB (which every instance I administer has) to house the main versions of all the administrative scripts that I want in each and every instance. I then set up a publication with only these stored procedures and any time a new version or script comes out, I just reinitialize all my subscribers. Instantly all my instances have current versions.

    Works pretty slick!

  • >When I’ve got tables that are heavily fragmented, and I’m convinced that a rebuild will solve the problem, AND I can only rebuild them offline, then I want to run the stored procedure manually when I can be around to keep an eye on it.

    I’m in this situation. What does “keep an eye on it” really mean though? Other than stopping the operation if it isn’t done by the time my maintenance window closes, is there anything else to do?

    • Unfortunately, killing your offline rebuild is going to take at least as long to roll back as it took to get to that point in the first place. When I keep an eye on it, I want to review disk I/O, log growth, and CPU utilization during that maintenance window to determine if it’s worth the pain that I’m facing during business hours. If the pain is bad, and we can’t fix the problem during the maintenance window, that’s a great data point to take to management and say “Look, you need X to happen. I can’t make it happen with what we have today. Here are ways I can solve the problem and here’s what each one costs. Here’s what I would like to do. What can we budget for?”

      • Jeremiah, thanks for the info. So if things aren’t looking good, less than halfway through the maintenance window is the time to bail. After that, I might as well just let it do what it’s going to do?

        That’s a great point about documenting the situation, along with potential fixes and costs, for management.

        • Yeah, pretty much – if you reach the halfway point, a rollback might end up running outside of your maintenance window. This could just be something that you can survive with nothing but a re-org, too.

          Glad you like the document and proposal approach – it’s the best way to learn how much the business really needs you to solve a problem.

  • I’d like to see a post about sqlcmd. When I first played with the scripts, excluding DBs in the @Databases parameter wasn’t straightforward. Still learning about the command parameters in the jobs.

    What I have played with and tested regarding the Ola scripts is pretty sweet.

  • I’d also like to echo the thanks to Ola for publishing this great script. It has simplified our previously complicated scripts which had evolved over many years, and has generally been a resounding success.

    I do have one query, however. Where the dm_db_index_physical_stats DMV is used, the parameter “LIMITED” is sent for the mode. MSDN says “mode specifies the scan level that is used to obtain statistics. mode is sysname. Valid inputs are DEFAULT, NULL, LIMITED, SAMPLED, or DETAILED. The default (NULL) is LIMITED.” – but this doesn’t really explain what it means. When statistics are obtained, even with a FULLSCAN, does this mean that not all of the available data will be taken into account?

    In testing, I changed from LIMITED to DETAILED and the entire script ran only around 10% slower, but in DETAILED mode I see more indexes flagged as fragmented than in LIMITED mode. So in summary: if the maintenance window permits, is it worth switching this? Unfortunately it’s not something there’s a parameter for in the SP.

    • Re LIMITED/DETAILED/SAMPLED, scroll down to Scanning Modes under Remarks here:
      http://msdn.microsoft.com/en-us/library/ms188917.aspx

      Basically LIMITED just looks at the B-tree, not the leaf level, so it is the fastest and potentially least accurate mode. There’s a typo on that page (determined by comparing with previous versions of the article) that seems to indicate that limited mode against heaps scans the data pages, but as I understand it only scans the meta-data type pages (PFS and IAM). DETAILED mode reads the whole dang thing…for smaller indexes, not a big problem, but could be rather costly on your big indexes.
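      If you want to compare the modes yourself on a given index, something along these lines works (swap in your own database and table names):

          -- Fast, but potentially least accurate
          SELECT index_id, avg_fragmentation_in_percent, page_count
          FROM sys.dm_db_index_physical_stats
               (DB_ID('YourDatabase'), OBJECT_ID('YourDatabase.dbo.YourTable'), NULL, NULL, 'LIMITED');

          -- Reads the whole leaf level: accurate, but expensive on big indexes
          SELECT index_id, avg_fragmentation_in_percent, page_count
          FROM sys.dm_db_index_physical_stats
               (DB_ID('YourDatabase'), OBJECT_ID('YourDatabase.dbo.YourTable'), NULL, NULL, 'DETAILED');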

  • We’re in the process of deploying Ola’s scripts, and I’m curious about the @OnlyModifiedStatistics parameter. Ola defaults to ‘N’. Is there any benefit to rebuilding stats on a column that wasn’t modified at all? What’s the use case? We set it to ‘Y’ and saw huge gains on relatively quiet databases when testing it out.

    • Agreed, we set this value to Y when we adopted this solution a few years ago – it’s been deployed to hundreds of instances since. The biggest concern for me with respect to statistics and maintenance execution time was the sample size as our databases can be sensitive to it.

      Computing statistics with an auto-generated sample doesn’t always work well for large tables, so ideally we’d like to use a full scan (100). It takes longer (sometimes a really long time), drives more I/O, and can require more tempdb space, but the results are better. So you might decide to update ALL statistics with a full scan (100) once a week and comment out @StatisticsSample = 100 during the weekdays. We’ve also separated the statistics update from the re-indexing process so all the re-indexing completes quickly – do that first, since any index rebuilds update index statistics with the full-scan equivalent anyway.
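      As a rough sketch, a stats-only run along those lines (fragmentation actions switched off, full scan, skipping untouched stats) might look like:

          EXECUTE master.dbo.IndexOptimize
              @Databases = 'USER_DATABASES',
              @FragmentationLow = NULL,
              @FragmentationMedium = NULL,
              @FragmentationHigh = NULL,
              @UpdateStatistics = 'ALL',
              @OnlyModifiedStatistics = 'Y',
              @StatisticsSample = 100,
              @LogToTable = 'Y';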

      • Just curious what you mean by “could take a really long time”. I’ve seen statistics update with fullscan against all tables in a database take 6-7 hours easily because if you do this for column level statistics it has to scan the entire table for *each* column statistic. So I’m basically wondering if your statistics maintenance is taking even longer than 6-7 hours?

        How are you measuring that the results are better?

        Edit: Thinking back, I remember an incident where a stats update job took 15+ hours because of fullscan with column stats. The runtime resulted in some really nasty overlap with other maintenance jobs, which then blocked all the applications using the database. 🙁 So that’s why I’m wary of it.

        • Nice. It’s been on the order of hours across an entire instance. I’m unsure about any single database, though. We provide the solution to clients who run our software on their own gear, and the number and size of databases varies. In the past there would be maintenance-related tickets escalated that ranged from jobs running on top of each other to issues with the underlying storage or configuration.

          The main benefit of the job we provide for Relativity is that I look at our audit table within each database to see if there’s been any recent activity from anyone; if not, Ola’s stored procedure doesn’t waste time analyzing the database for work. This helped shave hours off of execution time in environments with hundreds or thousands of databases.

          We haven’t measured how much more accurate statistics have allowed for better execution plans, but experience dictates that bad stats have more often been the root cause of performance-related tickets than fragmented indexes. I’m not suggesting it’s never necessary to rebuild or reorganize, though. I understand your point about being cautious enabling the 100 sample since it can run into Monday – that’s why I don’t default to it 🙂

          Do you have any input on trace flag 2371 to lower the threshold for how often auto-update stats will run? Do you guys ever recommend it to any of your clients? Thanks in advance.

          • Mike – howdy sir, good to hear from you. I have this general hesitancy to recommend trace flags as a solution. We’ve recommended it when we’ve looked at somebody’s system closely and said, “Yep, this is the best way to solve a particular problem,” but I just wouldn’t recommend it as a default unless we’ve looked at a system.

    • I suspect that the reason Ola gives an option for @OnlyModifiedStatistics and defaults it to ‘N’ is that the methods for telling if data has been modified have changed a lot and aren’t always very reliable.

      In many versions of SQL Server, all he has to go on is ROWMODCTR, which has a lot of problems described here: http://msdn.microsoft.com/en-us/library/ms190283.aspx (If I recall correctly it was reaaaaally off on some versions of SQL Server 2005 and maybe even 2008.)

      For newer versions of SQL Server where sys.dm_db_stats_properties exists, he uses that, which is more reliable.

      I think there may be another exception with statistics on heaps where checking row modifications is particularly frustrating, but I’m not sure if I’m remembering correctly or not. I’ll write up some sample code and blog the results.

  • Some good suggestions, and food for thought on upping the default 5 / 30 threshold…Paul R. is the first to tell you those figures were basically arbitrarily chosen. I’ve monkeyed with it a lot over the past year…one thing I did to spread the load across databases (to avoid massive log traffic to one database) is to stage it…first all indexes over 100k pages, then all indexes over 50k pages, then over 20k, then everything, plus stats. This way we spread the load among the different database tlogs…has worked out OK.

    We also wanted a cap on how long it can run, so in the sqlcmd job step we compute how much time remains between now and the job’s start time plus a stored maximum run duration, and pass that in to Ola’s @TimeLimit parameter… if that value is reached, Ola’s code stops, quietly. A bit nicer than having a “watcher” job forcibly kill it if it runs late.
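    To make the staging and time-cap ideas concrete, here’s a rough sketch – the page counts and durations are invented for illustration:

        -- Stage 1: big indexes only, stop issuing new commands after 2 hours
        EXECUTE master.dbo.IndexOptimize
            @Databases = 'USER_DATABASES',
            @PageCountLevel = 100000, -- only indexes over 100k pages
            @TimeLimit = 7200,
            @LogToTable = 'Y';

        -- Stage 2: everything else, capped at 1 hour
        EXECUTE master.dbo.IndexOptimize
            @Databases = 'USER_DATABASES',
            @PageCountLevel = 1000,
            @TimeLimit = 3600,
            @LogToTable = 'Y';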

    And lastly, though I was reluctant to do so, I carefully added some well-documented code to IndexOptimize to stall in a WAITFOR loop under certain conditions: if the affected database is having a full backup at the same time, or if the (non-simple recovery) database in question has 90% or greater t-log usage and two or fewer autogrowths possible on the log drive. This helps in situations where a firehose of index reorg traffic is hitting your log but log truncation is prevented or delayed (full backup, CDC or replication log reader gets behind).

    Generally, though, we use it with minimal tweaking (and almost no twerking, I assure you) and it performs fine; it’s just those “special” instances needing a bit more tender care and attention where we start to get creative. It’s a fantastic bit of code – my hat is off to him.

  • Do you ever advocate running this with MAXDOP set to 1? I’m aware of the article http://blogs.msdn.com/b/psssql/archive/2012/09/05/how-it-works-online-index-rebuild-can-cause-increased-fragmentation.aspx but just wondered what your experience of this has been.

  • I’m on SQL2012 and my database is partitioned.

    IndexOptimize by default maintains index fragmentation at the partition level.

    If I remove the “INDEX_REBUILD_OFFLINE” option without setting the parameter @PartitionLevel = 'N', will there be an issue when it comes across an index that exceeds the rebuild threshold? As I understand it, SQL 2012 is unable to do online single-partition rebuilds.

  • We’ve been running Ola Hallengren’s index maintenance script and ran into a problem with some indexes that have a GUID as the index key, like the one below. They’re fragmented at 85% and have a page count of 6,309, but they don’t get scanned to be rebuilt or reorganized.

    Thank you in advance,
    /****** Object: Index [idx_GUID] Script Date: 01/14/2015 10:54:47 ******/
    CREATE NONCLUSTERED INDEX [idx_GUID] ON [dbo].[ckbx_InvitationRecipients]
    (
    [GUID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
    GO

    • Dave – for support questions with Ola’s script, you’re best off contacting Ola. Thanks!

    • Dave, in the meantime did you find out whether something was simply being used wrong, or whether there are issues with the solution itself? I am about to implement it and have lots of uniqueidentifier indexes in place…

  • Hi Brent, thanks for writing this article. Your suggestion makes online rebuilds the only option for high fragmentation. Does that mean that highly fragmented indexes in Standard Edition will have no action taken on them? Would there be a benefit in FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REORGANIZE' so that at least some action is taken on fragmented indexes in Standard Edition, or is a reorganize never a good idea for highly fragmented indexes?

    • Howard – correct, if I get to the point where I need to rebuild indexes offline, I only want that happening with a DBA on hand.

      • I see – so a reorganize is not a valid alternative to an offline rebuild in Standard Edition. If it’s more than 50% fragmented then it really does need an actual rebuild; a reorg won’t do.

  • Hi, what do you think about updating statistics with Ola? Is it possible to tweak index defrag and update statistics so that if an index is rebuilt, it’s not necessary to update statistics on it?

    Thanks, Magnus

  • Another thing we added as a slight tweak recently: a separate step to reorganize CDC system tables. For various reasons we’ve been seeing fragmentation accumulate on system-generated CDC _CT tables in some environments, and since we didn’t want to lump all system tables in, we created a second step that targets the CDC-enabled databases (and only cdc schema tables), just does reorganizes (I’ve occasionally had deadlock issues with online rebuilds due to schema modification locks, so I’m gun-shy about using them on system tables), and has the following set (assuming db1 and db2 are your databases):

    @Databases = 'db1, db2',
    @Indexes = 'db1.cdc.%, db2.cdc.%',
    @MSShippedObjects = 'Y'

    Works pretty well so far, YMMV.

  • Joseph Brown
    April 3, 2015 6:38 pm

    Would the following not be the most cautious setting for high fragmentation, so that if the script decides it cannot rebuild, it attempts a reorg?

    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REORGANIZE'

  • Ramakant Dadhichi
    June 15, 2015 5:02 am

    Hi,

    When I checked for fragmentation in one of my huge databases, the results were shocking: lots of indexes with fragmentation ranging from 90% to 99%. I used the IndexOptimize maintenance from Ola Hallengren’s free database maintenance scripts. Still, I can see that nothing much has changed in the fragmentation levels.

    Can you please advise me on this? Is it that tables with fewer rows have high fragmentation levels?

    • Ramakant – how are you checking for fragmentation, and are you using the same minimums for object size that you’re using with Ola’s scripts?

      • What should I do if I’ve got a clustered index (one column) that isn’t unique enough to get fully defragmented? It’s always at 15–20% fragmentation (on a HUGE table with 1,200,000,000 rows).

        • Magnus – when choosing clustering keys, you want to pick something that follows the NUSE guideline: narrow, unique, static, and ever-increasing. The more of these you can follow, the easier performance management will be.

          It sounds like you’ve got a 1.2b row table where the clustering key is not unique. Honestly, I wouldn’t be doing defrags on a regular basis on a table of this size. Let’s take a quick step back: what’s the problem you’re trying to solve? (Meaning, users aren’t running in complaining about high fragmentation – they’re complaining about something else. What is that something else?)

      • Hi Brent,

        And first thank you for your excellent scripts!

        Regarding Ramakant Dadhichi’s post, I have similar results, and here is what I have found:

        As an example, I have an index (page count > 45,000) that I was looking at.

        I ran Ola’s index maintenance and found that when running sys.dm_db_index_physical_stats with the DETAILED keyword afterwards, the index still had 100% average fragmentation!

        When checked using SSMS index properties, it shows only 1.10% fragmentation.

        So Ola uses sys.dm_db_index_physical_stats with the LIMITED keyword, and that returns the same 1.10% fragmentation as SSMS. For this index, SAMPLED returns exactly the same 1.10%.

        So that is why this index would not get processed using Ola’s default fragmentation thresholds.

        So I am confused: DETAILED says 100% fragmentation and the other options say only 1.10%.

        Is this a real problem, i.e. is my index really fragmented but only detectable with a heavy DETAILED scan? Should I trust only DETAILED and change Ola’s script to use that?

        • Vesa – for support questions with Ola’s scripts, head over to the DBA Q&A site http://dba.stackexchange.com.

          • Hi,
            Yes I understand that you do not provide support for Ola’s scripts.

            But what I was trying to ask is whether you have experience with sys.dm_db_index_physical_stats using the different scanning options, and which option’s result I should “trust”.
            In my example, is my index 100% fragmented (the result with the DETAILED option) or 1.10% fragmented (the result with the LIMITED option), in your opinion?

          • Vesa – you are writing a comment in a blog post about script defaults, and you’re asking a completely unrelated question.

            It’s like walking into a restaurant and asking for a haircut.

            If you have a question about an unrelated topic, go to a place that specializes in answering them, and that would be Stack.

          • Ok, sorry for wasting your time.

            I thought I could ask you more about this because you answered Ramakant Dadhichi’s June 15, 2015 | 5:02 am post, and my post has the same subject and actually gives some answer to that original post.

  • Thanks, Ola and Brent, for the great work.

    A quick question regarding this solution: is there any parameter we can set/use to rerun an index rebuild (for one particular index) if the index operation deadlocked with a user transaction and aborted?

    • SQLSyd – off the top of my head, I don’t remember, but I’d check the documentation. It’s really extensive!

    • I haven’t seen any “baked in” retry logic in IndexOptimize, but if it really is for one particular index, you could fairly easily build a second step into your Agent job to check the latest result for that particular index in master.dbo.CommandLog (assuming you’re running with logging to the table on) and then fire a retry based on the status there. Not perfect or elegant, but it may get you closer to what you want without actually editing IndexOptimize.
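      A rough sketch of that check – the database and index names are placeholders:

          -- Did the most recent attempt on this index fail?
          SELECT TOP (1) ID, Command, ErrorNumber, ErrorMessage
          FROM master.dbo.CommandLog
          WHERE DatabaseName = 'YourDatabase'
            AND IndexName = 'IX_YourIndex'
            AND CommandType = 'ALTER_INDEX'
          ORDER BY StartTime DESC;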

      Another thing we do in light of occasional deadlocks is to set deadlock priority to low when launching IndexOptimize…it’s a strategic decision we’ve made that we’d rather index maintenance fail on an index one night (and the DBAs get notified) than a more visible business process get deadlocked and a larger group of folks get concerned and involved. If an index waits two days for a reorganize life goes on just fine, much preferable to getting brought onto a telecom bridge to explain why a critical batch job failed overnight… 🙂

  • Brent, I am using Ola’s maintenance scripts in an Agent job.

    The schedule is:
    - Every 2 hours for the indexes on all the tables, except the 3 main system tables that cannot go offline during the day (I’m using Standard Edition)
    - Once a day on those 3 tables (15 minutes of downtime)

    Do you think every 2 hours is overkill? The job usually runs in 10 seconds. I’ve read a lot of people explaining how to use Ola’s IndexOptimize scripts, but not how frequently we should run them.

    • Auber – yeah, I’d probably back down to once a day. What’s the problem you’re trying to solve with reindexing every 2 hours?

      • Actually, I’m just trying to fix a lot of bad practices that were a pattern in our databases.

        As a developer turned tryhard “Accidental DBA,” I’m getting rid of old Agent jobs that were killing our servers’ performance.

        I inherited from the last “DBA” stuff like:
        - AutoShrink
        - Full recovery model without log backups
        - Rebuilding a lot of indexes without checking the fragmentation

        So I don’t have a specific problem – I’m just trying to figure out the best configuration for the servers.

        I will put this one together with the other indexing job (once a day).

        BTW, thanks a lot for all the information you guys post on the internet to help people like me. Keep up the good work!

        You (Brent Ozar and your team), Paul Randal and Gail Shaw are my references for my Accidental DBA career 🙂

        • Auber – OK, great. When in doubt, sometimes your best action is none at all. If you’re not fixing a specific problem, maybe hold off on running tasks on the database server. Give it some room to breathe. 😀

    • If it usually runs in 10 seconds, you’re probably not seeing much fragmentation anyway, so basically you’re cursoring through a bunch of sys.dm_db_index_physical_stats executions but probably not hitting a threshold for reorg/rebuilding.

      If you are logging to the CommandLog table, you can analyse that to see how much actual work you are doing and pinpoint the tables that are getting worked over regularly. Then you’ll want to do root cause analysis on those tables to figure out why they are getting fragmented so quickly…better to fix the source of the trouble, if possible. Even then, we have some tables that fragment heavily through the day due to factors we can’t change, and we let them. The cost of trying to keep it unfragmented throughout the day is more than the cost of just letting it fragment and fixing it overnight, but every scenario may be slightly different. There are some tables we just let fragment out to 99% and deal with the page splits, the white space, etc.

      But a good place to start would be to dial back on the frequency…once a day, for example…and then analyse the CommandLog table and figure out where you are seeing fragmentation building up.
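      If it helps, a starter query for that analysis might be something like:

          -- Which indexes is IndexOptimize actually working on, and how often?
          SELECT DatabaseName, ObjectName, IndexName, COUNT(*) AS TimesMaintained
          FROM master.dbo.CommandLog
          WHERE CommandType = 'ALTER_INDEX'
          GROUP BY DatabaseName, ObjectName, IndexName
          ORDER BY TimesMaintained DESC;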

      Just my 2c, cheers!

      • Hi Nic,
        I’ll adjust the schedule to once a day.
        I’ll analyse the CommandLog table as you said – thanks for the tip and explanations.

  • I have numerous replication subscribers that have to have backups done to keep log files under control. They do get a LOT of inserts/deletes during the day. I have been using simple recovery because the data does not really need to be backed up at all.

    I am switching to Ola’s very cool maintenance scripts because they’re easier and faster to deploy, and reindexing can be more selective.

    With Ola’s maintenance scripts – can anyone see a reason not to leave the recovery model simple? Or should I go with full and back up the logs every hour to keep them small? This is just for replication subscribers.

    (This is the only Ola’s Maintenance Scripts “forum” I could find.)

    Thanks

  • Hi Brent

    I was attending your Idera webcast on index maintenance, and while looking at my maintenance plan using Ola’s script, I have a quick question: when MAXDOP is set at the server level, do we need to set MAXDOP on the Ola index rebuild jobs?

    Let’s say I have an instance with MAXDOP set to 4. In the index rebuild script, if I leave the default MAXDOP = NULL, will the index rebuild use the MAXDOP setting at the server level, or do I need to set the parameter MAXDOP = 4 to take advantage of parallelism for the index rebuild?

    Thank you in advance

  • Juan Swanepoel
    January 5, 2016 7:12 am

    Hi Brent,

    Do you suggest running the index maintenance job on very large tables unattended? Should I manually edit this script to exclude large databases? I have limited FusionIO and I could end up in a sticky situation.

  • Hi Brent. In your own blog, you suggested we could use Maintenance Plans and not bother with Ola’s script:
    https://www.brentozar.com/archive/2012/04/maintenance-plans-roombas-suck-good-way/
    I know it depends on our own needs, but do you still suggest using Maintenance Plans if they fit our needs? Or do you now prefer Ola’s script all the way?

  • We have been using this script on our SQL Servers for months now, and the index optimize job is the only one I can’t figure out. I have the job running every day during the week, and the indexes that are fragmented are not actually being rebuilt or reorganized. I have checked the history of the job, and it shows that it has been successful, but I am not seeing any change. Any ideas?

  • Hi,

    I’ve had the script running for some time (in prod and QA), but now I’ve found something strange. I do a weekly full backup and a daily diff. When I tried to restore these files in QA, one of the diffs contained data for another database. Have you come across this before? Is this a database bug? Still on 2012 SP2 (Microsoft SQL Server 2012 – 11.0.5343.0 (X64)
    May 4 2015 19:11:32
    Copyright (c) Microsoft Corporation
    Developer Edition (64-bit) on Windows NT 6.3 (Build 10586: )
    )

  • I am running IndexOptimize on a SQL 2008 server. Based on the CommandLog table, I see the proc entirely skips processing some DBs – even ones with massive index fragmentation. Any idea what could cause this behavior? (For example, a database setting?) BTW: the proc defrags other DBs’ indexes on the same server.

  • In the defaults, there is a parameter for the number of pages, set to 1000. So it will skip indexes whose total page count is 1000 or less.

    Try modifying that value as per your need.

    Thanks

  • Hi,

    I assume the preferred defaults you mention are for lightweight index maintenance – do you have any suggested defaults for more invasive index maintenance?

  • Thanks for the post, Brent. Just came across this during another search. We use this on servers with hundreds of databases on them, and we run the commands many times against the same databases. For instance, if I have a critical job that runs for 12 hours, I want maintenance on that particular table to run outside those hours. Also, I may want to run stats at different times – another reason to create another job.
    I might have 20 stats jobs running on the same server with different parameters. I want to log to a table. We do alter his script slightly to install it in a different database and a different schema.

  • Is it possible to rebuild selected indexes on a single table?

  • Hi Brent,
    I always thought the reason you would do a reorganise instead of a rebuild was that the reorganise is faster, but this has not always been my experience. On large tables I find a rebuild is faster. I think one of the reasons is that you can use the MAXDOP parameter in a rebuild but not in a reorganise.
    On a 2TB database using Ola Hallengren’s script with MAXDOP = 8, most of the tables are quicker to rebuild than to reorganise.
    Anyone else experience this?

    • Stephen – correct, sometimes rebuilds are faster on different servers, different tables, different storage, different levels of fragmentation, etc.

  • What method do you use for limiting the maximum number of pages in an index before skipping it (i.e. I want to handle monster tables separately)? I see a minimum number of pages setting for skipping, but not a max.

    • Ed – instead of going by pages, go by table name on that. (I don’t worry about it because tables that large tend not to get that fragmented quickly, and I handle those by hand periodically during lower-load windows. For example, if I have a 1TB table, it’s not going to get 30% fragmented overnight.)

    • The current version of Ola’s script now supports a MaxNumberOfPages argument.

      MaxNumberOfPages
      Set a size, in pages; indexes with greater number of pages are skipped for index maintenance. The default is no limitation.

  • Bo Duholm Hansen
    September 22, 2016 2:30 am

    Hallo mr. SQL

    Ola Hallengren uses the old SQL 2000 best practice of only doing maintenance work on tables with 1000 pages and above. I believe that is still Microsoft’s best practice. But isn’t that antiquated, given today’s higher IO performance? Do you know why that limitation is in place?

  • Hi Brent,

    We have a 10TB database and we use Ola Hallengren’s maintenance solution for rebuilding the indexes. We have scheduled the maintenance job weekly through SQL Server Agent. But last week the maintenance job was shown as failed without any proper reason, and when we checked the output file, we could see that all the indexes were rebuilt and reorganized. My question here is: how do we determine the cause of the job failure?

    • Adrian Sugden
      October 24, 2016 5:48 am

      If you have deployed Ola’s SQL Agent jobs, then the output is written to a text file, provided you gave a valid path when deploying them. You should find that one of the rebuild steps failed. Ola’s scripts are good in that even when a rebuild fails, they continue with the other tasks and still report a failure of the job so you can investigate.

    • Well, I am not Brent, but assuming you have run the whole “installation” script, there should be a table called CommandLog. Query that for errors. If there is even one problem (a lock timeout, for example, when trying to run a rebuild/reorganize on an index), it will report a failure/error for the job.
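      For example, assuming you’re logging to the table, something like:

          -- Find the commands that failed, and why
          SELECT StartTime, DatabaseName, ObjectName, Command, ErrorNumber, ErrorMessage
          FROM master.dbo.CommandLog
          WHERE ErrorNumber <> 0
          ORDER BY StartTime DESC;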

  • Adrian Sugden
    October 28, 2016 6:08 am

    Ola’s script doesn’t include an “IndexOptimize – SYSTEM_DATABASES” job. What are your thoughts on performing index maintenance on your system Dbs? Do you regularly do this?

  • Thanks for the reply to my previous question. But now the optimization job runs for almost 3 days, and hence we need to optimize it. As our bigger tables are partitioned, for the parameter below, what will happen if we set it to 'N'? i.e., what is meant by the index level – will it not touch the partitions of the table? It would be great if someone could explain it.

    PartitionLevel
    Maintain partitioned indexes on the partition level. If this parameter is set to Y, the fragmentation level and page count are checked for each partition. The appropriate index maintenance (reorganize or rebuild) is then performed for each partition.
    Y = Maintain partitioned indexes on the partition level. This is the default.
    N = Maintain partitioned indexes on the index level.

    • I’m not Ola Hallengren (nor is Brent or the crew around here), but I read that as: it will evaluate the index as a whole instead of each partition. So if you have three partitions – one heavily fragmented, two not – it will calculate fragmentation across all three partitions, evaluate that against the threshold, and then rebuild or reorg the whole thing. If set to “Y”, it evaluates, and takes action against, each partition individually (so it would leave the other two partitions alone).

      • Thanks for your reply. In your opinion, will it shorten the maintenance job if we set it to 'N'? Currently it is the default, 'Y'. The database size is close to 10.5TB and there are 5 partitioned tables averaging 800GB each.

  • It depends, perhaps, on how your fragmentation is distributed. I think it would be optimal in most situations to leave it as Y, particularly if the partitions aren’t all fragmenting evenly. Say you have four partitions and the 4th one hits 10% while the others are at zero… with 'N' it will evaluate them all together, estimate 2.5%, and do nothing. Not until that partition hits 20% fragmentation would the whole index be considered 5% fragmented (assuming that’s the configured threshold) and get a maintenance operation – and it would run it against the other three partitions too, which don’t need it at all. Also, if you’re doing rebuilds, it’s nicer to handle things one partition at a time, out of concern for locking or similar issues. For your situation, with fairly sizable partitions, I think the “Y” option is best: it allows things to be treated at a smaller, more granular level, so you’ll likely have smaller operations more often rather than being hit by a huge full-table reorg or rebuild all at once. Just my opinion/guesswork, though.

  • Hi,
    We are seeing lock timeouts for indexes in the job history, and the job has been failing for 2 weeks now. I was looking for an option to exclude a system table where it gets stuck. Any directions? I will also try this to see if rebuilding indexes offline is causing the issue.

  • NAVNEET AGARWAL
    June 27, 2017 1:45 pm

    I’m using Ola Hallengren’s script below in my environment, and it fails with the error:
    USE DBA
    EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @UpdateStatistics = 'ALL',
    @PageCountLevel = 0,
    @LogToTable = 'N'

    Msg 50000, Sev 16, State 1, Line 153 : Msg 2706, Table '_abcd' does not exist. [SQLSTATE 42000]

    '_abcd' is actually a view, and the stored procedure is unable to update statistics on the view, so it fails.

    Please guide me on how I can solve this issue.

  • Thanks everybody

  • Hi Brent! Thanks for this. Would you still say that the 30%/50% rule is your go-to?

  • Carlos Alejandro De Los Reyes Rodriguez
    August 1, 2018 2:35 pm

    Hello Brent,

    Would this be the best approach in a DW database:

    EXECUTE [dbo].[IndexOptimize]
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE',
    @FragmentationLevel1 = 50,
    @FragmentationLevel2 = 80,
    @UpdateStatistics = 'ALL'

    Not sure, but I’ve heard some people don’t default to rebuilding indexes in a DW because the time and resources consumed are essentially never worth it.

  • Hi Brent,
    I’m trying the following on a table with avg_fragmentation_in_percent = 50, but nothing is happening. Could you please tell me what’s missing? I want this to only reorganize:

    EXECUTE dbo.IndexOptimize
    @Databases = 'AdventureWorks2016',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE',
    @FragmentationHigh = 'INDEX_REORGANIZE',
    @FragmentationLevel1 = 30,
    @FragmentationLevel2 = 80,
    @Indexes = 'ALL_INDEXES, -AdventureWorks2016.Person.StateProvince'

    Otherwise it works normally with this:
    --ALTER INDEX ALL ON Person.StateProvince
    --REORGANIZE;

  • Thanks for the response Erik, I will try with another table then!

  • Brent,
    Although somewhat new to index reorg/rebuild, I am in the process of setting up Ola’s maintenance scripts, and I was concerned about the page counts (high for my databases) and the percentages, when most web blogs – not just Ola’s – seem to ignore what I thought was more important: page density.

    Your GroupBy.org session on defragging, if I understood it correctly, just reinforced to me that page density is important – possibly more important than fragmentation. If my frag percent is high but my density is also high, is a rebuild really necessary? But if my page density is below 50–75 percent, no matter the fragmentation, it seems like a rebuild is in order, and then figuring out why it is fragmenting.

    Does anyone have rebuilds automated based on page density? I doubt I’m the first to care about it over fragmentation.

  • What does this parameter do? I’m not finding any details on this one:
    @FragmentationLow nvarchar(max) = NULL,

  • Hi Brent:
    You made recommendations on index optimization in this article and explained why; I am now rethinking the way I do those based on these recommendations. The other purpose of Ola’s script is running update statistics. What are your recommendations for doing these? Do these weekly, or do them more often with the index parameters turned “off” (set to NULL):
    EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = NULL,
    @FragmentationHigh = NULL,
    @UpdateStatistics = 'ALL'
    Or is this an “it depends” situation? The databases I administer are OLTP, not data warehouse.

  • I’m going to make the recommendation that if you’re using REORGANIZE in a general manner, then you are causing damage to your indexes, and it is dangerous. Consider this… why does a random-GUID index immediately start fragmenting after you do a REORGANIZE on it, even when you’ve assigned a low fill factor?

    The answer is that REORGANIZE doesn’t work the way most people think it does. It does NOT follow the fill factor like a REBUILD does… instead, it tries to compress the data UP to the fill factor… it cannot and will not reduce page density back down to the fill factor like REBUILD does. If you don’t use REORGANIZE on a random GUID index, you can go MONTHS before active inserts cause even 1% fragmentation.

    I have a shedload of information in the following video that will help folks begin to understand what’s really going on in indexes. Contrary to the title, it’s not just about Random GUIDs. I just use Random GUIDs in a lot of the demos because they’re the proverbial “Poster Child” of fragmentation. I also tell you that I destroy the myth of Random GUID fragmentation and lay waste to supposed “Best Practice” Index Maintenance.

    Here’s the video. Watch it past the “Q’n’A” section because there’s a pretty BIG surprise there, as well.
    https://www.youtube.com/watch?v=rvZwMNJxqVo

    And, I have to take my hat off to Brent. He started me on all this index stuff and he’s responsible for me adopting the stance of “It’s better to do no index maintenance than it is to do it wrong and, if you’re using REORGANIZE by rote, you’re probably doing it wrong and causing a lot of damage and performance issues, to boot”!

