What’s Better: Disabling vs. Dropping Indexes?

To improve your applications and your databases, they will need to change over time. The structure of the database changes, the structure of the tables changes, the data in the tables changes, the application changes, and the queries against the data change. Indexes that once helped performance can end up just bloating your database and causing extra work for inserts, updates, and deletes.

When index needs change and you want to test how removing an index will affect performance, you have two options – you can disable or drop the index. Which is better?

Disabling an Index

To disable an index, issue an ALTER INDEX…DISABLE command.
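
A minimal example looks like this (the index and table names are placeholders, not objects from any particular database):

ALTER INDEX IX_Customer_LastName ON dbo.Customer DISABLE;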

What happens when you do this? The metadata about the index is retained, as are the usage statistics in sys.dm_db_index_usage_stats. The index is not usable by the query optimizer, however. If you have an index hint in a query specifying the index, the query will fail. Also, if the index enforces a primary key or unique constraint, any foreign key constraints that reference it are disabled.

Disabling Nonclustered Indexes vs. Disabling Clustered Indexes

Disabling a nonclustered index will deallocate the index pages – the space is freed in the database.

Disabling a clustered index has additional effects. The data in the table still exists, but will be inaccessible for anything other than a drop or rebuild operation. All related nonclustered indexes and views are also unavailable. Foreign key constraints that reference the table are disabled. Queries against the table will fail.

I want it back! How to re-enable a disabled index

If you want to re-enable the index, you need to rebuild it – there is no ALTER INDEX…ENABLE command. The rebuild works like this:
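
A sketch, using the same placeholder names as above:

ALTER INDEX IX_Customer_LastName ON dbo.Customer REBUILD;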

When you rebuild an index, the usage stats will be reset in sys.dm_db_index_usage_stats.
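
You can see this for yourself by checking the DMV before and after the rebuild (placeholder table name):

SELECT index_id, user_seeks, user_scans, user_lookups, user_updates
FROM sys.dm_db_index_usage_stats
WHERE object_id = OBJECT_ID('dbo.Customer');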

Dropping an Index

To drop a clustered or nonclustered index, issue a DROP INDEX command.
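
For example (placeholder names again):

DROP INDEX IX_Customer_LastName ON dbo.Customer;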

When you do this, the metadata, statistics, and index pages are removed. If you drop a clustered index, the table will become a heap.

Once an index has been dropped, it can’t be rebuilt – it must be created again. That means that if you are going to drop an existing index, make sure you script the CREATE statement before you pull the trigger.
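
The saved script can be as simple as the index's original definition; a hypothetical index might be scripted like this:

CREATE NONCLUSTERED INDEX IX_Customer_LastName
    ON dbo.Customer (LastName)
    INCLUDE (FirstName, EmailAddress);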

What’s the biggest difference?

The biggest difference between disabling and dropping a nonclustered index is whether the index definition stays around. Make sure you carefully weigh your options before performing either action, and always have a way to recreate the index available.

LEARN MORE ABOUT INDEXES

Read all about it: We’ve got tons of free articles. Check out our top articles on indexes.

Free tools: Get our free tool to diagnose index insanity, sp_BlitzIndex

Online video training: Check out our one-day online course on How to Tune Indexes in SQL Server

Advanced training: Learn from us in our Mastering Indexing course

52 Comments.

  • Thanks for the article. I remember being taken aback when, while playing with these ideas myself some time ago, I created a table as a heap and could run a select on it just fine. Then I created a clustered index on it, and the select worked fine. Then I disabled the clustered index, and the select would not work anymore. I was baffled because I reasoned that even if the clustered index is disabled, SQL Server should still be able to access the table as a heap, right? Nope, no dice. This helps explain it, and reminds me why I concluded that a table doesn’t so much HAVE a clustered index as a table IS a clustered index (unless it’s a heap). Cheers.

  • Can you elaborate on the disabling of FK’s when a non clustered index is disabled?

    • Jes Schultz Borland
      March 1, 2013 8:14 am

      Take the example of AdventureWorks SalesOrderHeader and SalesOrderDetail. Disable the clustered index on SalesOrderHeader. The SalesOrderDetail table has a FK to SalesOrderID. When you try to insert a row into SalesOrderDetail, you will get an error stating that the index on SalesOrderHeader is disabled.
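
      A rough sketch of that repro (the index name below is the AdventureWorks default; adjust it if yours differs):

      ALTER INDEX PK_SalesOrderHeader_SalesOrderID ON Sales.SalesOrderHeader DISABLE;
      -- A subsequent INSERT into Sales.SalesOrderDetail now fails, because its
      -- foreign key references the disabled index on SalesOrderHeader.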

      • Mike Dimmick
        March 4, 2013 9:44 am

        I wouldn’t read that as ‘foreign key constraints are disabled’ – that’s too broad a statement. I would say that if you disable the primary key (rather than dropping it), inserts on tables referencing that key as a foreign key will no longer work.

        If, however, you disable some other index, the foreign key constraints will still work fine. For example, I just disabled IX_SalesOrderHeader_CustomerID on Sales.SalesOrderHeader, and could still insert into Sales.SalesOrderDetail. (Tested on SQL Server 2008 R2, AdventureWorks2008R2.)

  • Same as Hakim, when I tested this index disable feature a while ago, I didn’t see any use case, since ‘enabling’ the index back requires an index rebuild, which is similar to dropping and recreating the index. Now I can see at least one use case for this index disable feature: its metadata retention. Data warehouse operations often require indexes to be dropped and recreated after a load. A lot of the time I saw the index recreation step pre-scripted, which causes problems later when someone modifies the index and forgets to change the recreation script. The metadata of the disabled index could be useful to reduce this type of problem, as long as the foreign key concern is carefully examined.

    I am still wondering what the use case is for the statistics retained from the disabled index. Jes or anyone, have you come across any use case for this? Thanks.

    • Robert Carnegie
      September 12, 2014 10:34 am

      I assume that statistics are retained on a disabled index because they may as well be kept, but are not maintained better than statistics that you could create anyway without an index (and were they maintained better before?)

      A practical difference (in SQL Server 2005) of disabling and then rebuilding an index seems to be that rebuilding takes place without releasing space in the database where the old index is held. That space is released only when the rebuild is complete. So, rebuild requires space for both the old index and the new one, plus some (which can be partly in tempdb). And you may want not to take up extra space in the database, and maybe make the file grow, when you don’t need to.

      On the other hand, I presume that disabling a foreign key that references the index literally means disabling the foreign key constraint, silently – its definition still exists but it isn’t enforced and it isn’t trusted. You then need to enable it again, like this (if you want to):
      ALTER TABLE MyTableName WITH CHECK CHECK CONSTRAINT MyConstraintName

      (By the way, I’d like to know if this statement makes the server re-check a constraint that is already enabled and trusted, in which case, you’d usually want to avoid doing -that-.)

      From what’s been said, it seems that another impression of mine was wrong – “you should disable nonclustered indexes before disabling then rebuilding the clustered index” – but no(?). My thought was that the non-disabled nonclustered indexes would automatically and redundantly rebuild when the clustered index is disabled, just as if it were dropped. But I think I’m being told that disabling the clustered index just makes the entire table inaccessible, and rebuilding the clustered index doesn’t affect nonclustered indexes at all?…

      • Jes Schultz Borland
        September 12, 2014 2:00 pm

        Why does SQL Server retain statistics on disabled indexes? I don’t know. Perhaps it would be more work to remove them. The stats aren’t updated or changed in any way when the index is disabled, because the data in the index doesn’t exist any longer.

        Rebuilding a clustered index will not necessarily affect nonclustered indexes. From http://msdn.microsoft.com/en-us/library/ms188388.aspx: “Rebuilding a clustered index does not rebuild associated nonclustered indexes unless the keyword ALL is specified.”
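
        For example (placeholder names):

        ALTER INDEX PK_Customer ON dbo.Customer REBUILD;  -- rebuilds just that one index
        ALTER INDEX ALL ON dbo.Customer REBUILD;          -- rebuilds the clustered index and all nonclustered indexes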

  • Thank you Jes for this useful information. Question about table scans versus index scans. Many of my indexes are being used for scans, not seeks. I am experiencing performance issues not because of the index scans, but because the indexes continually need to be updated each time we add/modify/delete records.

    Question: If I dropped these indexes that are only used for scans (not seeks), can I expect performance to remain the same since SQL will just switch to table scans instead of index scans? Is there really a performance difference between table scans and index scans?

    Thanks again. John

    • John – quick clarifying question. Are your indexes the same size as the table, or narrower? (Think in terms of field quantity and size of each field.)

      • Hi Brent. Thanks for the quick reply. The indexes are narrower than the table. The table has 5 columns, but the indexes have one or two columns. The table itself has over 20 million records and growing. I believe the reason there are so many index scans and no index seeks is that the stored procedures that call on this table are doing counts.

        Since index scans and table scans both require a scan of every record, I just wondered if performance would matter between the two. If I am reading your question the way I think I should, I suppose much of it depends on the where clauses that are used to call on the table?

        • Jes Schultz Borland
          October 2, 2013 8:40 am

          If SQL Server does a table scan, it has to read all the rows AND columns into memory. If it does an index scan, it’s going to read all the rows – but only the columns in the index. In terms of I/O, a narrower nonclustered index can be less expensive than a clustered index or table scan.
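
          A quick way to see the difference yourself (table and index names here are placeholders):

          SET STATISTICS IO ON;
          SELECT COUNT(*) FROM dbo.BigTable WITH (INDEX(0));          -- clustered index or heap scan
          SELECT COUNT(*) FROM dbo.BigTable WITH (INDEX(IX_Narrow));  -- scan of the narrow nonclustered index
          -- Compare the logical reads reported for each query.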

  • Hi Jes,

    In the article, you mentioned that when indexes are rebuilt, the usage stats are reset. Does that mean that if our indexes are being rebuilt frequently (say, nightly right now), that I will not be able to get an accurate list of used/unused indexes?

    Thank you.

    • Kim – correct.

      • Thank you very much for your response.

        Will this affect how sp_blitzindex works? Does it depend on historical usage stats?

        I am a new SQL Server DBA (couple months) and I just wanted to comment that your team has been a great help to me. Your articles/blogs/videos are all very easy to follow and understand. I am focused on performance tuning and a lot of your articles and videos have helped me. Thank you!!

        • Jes Schultz Borland
          February 19, 2015 12:08 pm

          It will – sp_BlitzIndex depends on the index DMVs. If you’re rebuilding indexes nightly, I have two thoughts. First, you probably don’t need to do that! Second, if you need to do it for a specific reason, capturing the index usage statistics and putting them in a table before rebuilding would be a good idea.

          • Hi Jes,

            Yes, I’ve watched that video! I think it is one of the first ones I’ve watched. I do agree that we do not need to rebuild indexes so frequently; however, since it “fixes” the problem (temporarily, I think), this is what management prefers. It will be kind of difficult to sway them against that mentality for now. They see queries speed up after indexes are rebuilt (with fill factor of 90), so that is the thinking. But like what the other videos I’ve watched suggested, those could be due to other things like statistics getting updated or queries recompiling and using a different plan, etc.

            Which DMVs do I need to consider if I am to save the usage statistics for indexes? Are there others besides sys.dm_db_index_usage_stats?

            Also, is this the best way for me to kind of have a discussion with you, by posting a comment? I just wanted to make sure that I am not spamming anyone by doing that.

            I also am wondering if you have any resources for Parameter Sniffing. Believe me, I’ve read many many articles about it by now, but have not really been able to find a “real” solution for it. And I’m still confused on whether it is actually a problem or not… seems like a gift and curse…

            Thank you very much again!!

          • Jes Schultz Borland
            February 20, 2015 9:09 am

            Kim, one way to test the theory of updating stats fixing the queries – and not the index rebuilds – is to let stats update one night, without rebuilding the indexes. (In development, of course!) For capturing metrics, start with sys.dm_db_index_usage_stats. Create a SQL Server Agent Job that queries the DMV for the database and/or tables you’re interested in and inserts it into a table in a DBA admin database.
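
            A minimal sketch of that capture step (the destination database and table names are hypothetical):

            INSERT INTO DBAAdmin.dbo.IndexUsageHistory
                (capture_time, database_id, object_id, index_id,
                 user_seeks, user_scans, user_lookups, user_updates)
            SELECT GETDATE(), database_id, object_id, index_id,
                   user_seeks, user_scans, user_lookups, user_updates
            FROM sys.dm_db_index_usage_stats
            WHERE database_id = DB_ID('YourDatabase');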

            Oh, parameter sniffing! Have you read my blog about the elephant and the mouse? Is it a problem? You’ll know if it’s a problem when you hear this from a user: “This query (or part of the application, or report) runs fine when I run it for customer X, but it is unacceptable for customer Y.” That is a great sign of when parameter sniffing is smelling something bad!

  • I have not read that blog post from you but I just did and posted a question. =)

    That is a good idea with trying to gather stats without an index rebuild.
    I will recommend that to the team for sure. Do you know of a way that we can record workload in production and replay in the QA/dev instances? It has been very challenging to replicate the issues in our database. I don’t think I have successfully replicated any!

  • What are some scenarios where you would want to disable as opposed to dropping an index?

    • The classic one is nightly data warehouse loads – you disable the index before doing the load, and then enable it again after the load is finished. That way you don’t have to script out the definition.
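
      A sketch of that pattern (placeholder names):

      ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales DISABLE;
      -- ...run the nightly load here...
      ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales REBUILD;  -- rebuilding re-enables it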

  • I have a customer who nightly backs-up then restores a 1.2TB Production database on a Reporting server. There are useless indexes on Prod that are used in Reporting.

    Would it be better to disable the useless indexes in Prod (speeding up Prod), then rebuild them after the database is restored onto Reporting? Or (since the Prod stats will be of no use there) just create them afresh every time Reporting is restored?

    • Richard – if the customer is complaining about pains with insert/update/delete performance in production, then I’d drop the unused indexes there, and create them nightly in reporting.

      If the customer is complaining about pains with long restore times and they need to access the reporting system more frequently, then I’d leave the indexes in place in production.

      If the customer isn’t complaining, I would leave it as-is.

      Hope that helps!
      Brent

      • Thanks Brent – indeed the restore is the pain – sometimes delaying their working day. Saddens me not to be able to tune-up production but I guess that’s their call.

  • Dipesh Malani
    February 9, 2016 5:57 am

    Hi,

    We have a table with 2 million records. Every day, one process truncates the data and re-inserts the new data into it.
    While we re-insert the data, the process takes more than 4 hours to execute because we have one nonclustered index on the table.
    So to speed this up, we have changed the procedures to first drop the NC index and then create the index again after the insert completes.

    Please suggest whether the above solution is good; if not, please give us a more efficient solution.

    Thanks in advance !

    • Dipesh – personalized performance troubleshooting is a little beyond what we can do in a blog post comment. For free Q&A, head over to http://DBA.StackExchange.com, or for personalized consulting, click Contact at the top of the page and our sales pro can talk through what a consulting engagement looks like.

  • Hi, I recently dropped a list of indexes that were unused in the data warehouse, which were using around 240 GB. Before the drop of the indexes, the free drive space was at 64 GB, and after the drop there was no change. I would have expected to gain those 240 GB, but it did not happen.
    Does it take time for SSMS to free that space, or do I need to do something else?

    • Erik Darling
      April 18, 2016 9:56 am

      This won’t happen at all. The unused space will remain in the data file as white space that can be grown into. If this is a one time thing, you may want to shrink the data file down in small increments to reclaim space. If you expect the data to grow out again, just leave it alone. Shrinking data files is a rather painful operation, and you should probably rebuild indexes afterwards.
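
      If you do go the shrink route, a sketch of the incremental approach (the logical file name and target size are placeholders):

      DBCC SHRINKFILE (N'MyDatabase_Data', 250000);  -- target size in MB; repeat with smaller targets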

  • Hi,

    Thanks for such a nice article… 🙂
    I have a question: for example, say I have 1 heap table which has 100 pages. Now I create a clustered index. Once the index is created, it will create a new set of pages for the data, right?
    In that case, what happens to the old pages which were containing the heap table data?
    Also, after that, if I drop the index again, what happens to the data pages? And if I disable the index, what happens then?

    Kindly suggest…

    Many thanks in advance

  • I suspect that the answer to my question is to recreate the non clustered index but… I’ll ask the question anyway – silly as it may seem. If I truncate a heap table that had a non clustered index on it, then add completely different data to the table, is it better to recreate the index or rebuild the index?

    • Hey Mark! If you’re regularly truncating and reloading data, I’d probably want to drop all the nonclustered indexes beforehand to speed those processes up.

  • 2 questions that still come to mind:
    1. For non-clustered indexes: is there any known performance difference between disable-rebuild and drop-recreate?

    2. Consider the following case. A non heap table (1 clustered index plus few non-clustered).
    BulkCopy is performed on the (non empty) table which will add significant amount of records to the table (>25%). MS recommends dropping the indexes but recreating the clustered index can be very time consuming. Would keeping the clustered index, disabling only the non-clustered, and rebuilding the non-clustered be a good alternative to dropping all indexes?

    Thanks for any hints.

    • Guy – I’d recommend running experiments yourself in your own environment to get answers like this. It’ll really depend on your workloads and your hardware.

      • Thanks and agree, this is what we are currently doing. Was just wondering if there is some rule of thumb here. I’ll post back should we find anything worth posting.

  • I wish there were a way I could truncate and load data into a table by disabling all indexes and then rebuilding them after the load, without hardcoding index names in my SP.

  • Thanks for the really useful articles on and around this topic.
    Are there any clear differences between a drop/create and a disable/rebuild approach when it comes to the locks they take on system tables? I ask because we are trying to troubleshoot blocking of the system tables during some long-running stored procs that include index manipulation. These procs have to be run in an explicit transaction because of the interface we are using from Python to control SQL Server – thus any locks on system tables are in place for the duration of the sproc running and cause us contention issues.
    We want to adopt strategies that would allow us to pre-define the index in a quick-running sproc, then run a longer-running rebuild sproc later that would not block system tables when it was running in an explicit transaction – does this make sense?

  • Great article that answers most of my questions, so sorry if this is a really obvious question, but if I disable a nonclustered index in a production environment, will that lock access to the table while it is being disabled or not? I understand that there will be a cost if I rebuild the index. I ask because I have a high number of unread indexes with high writes that I want to remove.

  • Hello.
    When migrating data to a fresh server, is it advisable to DISABLE all the INDEXES in Target database schema or to DROP them all?

    Thanks.

  • Hi Brent-

    Amazing article. Lots of good information!

    One question I have is that I want to rebuild a clustered index on a huge table (500m rows), and as part of the rebuild I simply plan to drop and re-create the index (during the downtime) with the same definition. The reason for my work is that I will reclaim about 300 GB of space out of this rebuild.

    1) Do you think it can create any performance issues after the rebuild? I do not think so, but wanted to get your thoughts.

    2) The table also has a lot of referential integrity, and I am not touching the constraints, just dropping and recreating the index, so it should not impact any constraints, right?

    3) Should I first drop all indexes (including nonclustered) on the table and rebuild all of them? I was reading in an MS article that if you want to drop and rebuild a clustered index, make sure you also drop the nonclustered indexes first, then drop the clustered index, and rebuild them.

    https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms190691(v=sql.105)?redirectedfrom=MSDN

    Appreciate your feedback. Thank you !

  • David Tovee
    April 5, 2021 8:13 am

    Disabling indexes documents their existence and makes them easy to restore…
    My friend, a lazy DBA, thinks that disabling nonclustered indexes (if there is no referential integrity or hinting) would enable them to be easily identified and restored if there is an issue and the support contract doesn’t allow indexes to be changed.
    They could be rebuilt using a ‘generic’ script, and it would show which indexes were in place.

  • Hi Brent!
    Why is it possible to disable the clustered index?
    It is dangerous.
    All queries against the table will fail.

  • Hi Brent!
    Yes.
    According to a person, that is the purpose of disabling the clustered index. It becomes inaccessible, and many clients practice it for security reasons.
    But I do not like this feature.
    Thanks!

  • George Walkey
    March 9, 2022 3:51 pm

    Brent,
    if I have large nonclustered indexes and I drop them, it causes blocking. In SSMS I can hit cancel,
    but then that causes even more blocking. I ended up killing SSMS, and, voila, the index is still there.

    I can’t seem to find in the MS docs whether an implicit transaction is involved such that canceling a DROP INDEX causes a rollback of the drop action (i.e., the opposite of a drop is rebuilding the index), or whether the drop fully commits.

    AKA a junior DBA killed the drop when they noticed it caused a blocking chain.
    Does the DROP INDEX automatically get thrown into tempdb so that a cancel/rollback can be done?

