
As your applications and databases improve, they will need to change over time: the structure of the database changes, the structure of the tables changes, the data in the tables changes, the application changes, and the queries against the data change. Indexes that once aided performance now sit unused while new indexes are added.

When index needs change and you want to test how removing an index will affect performance, you have two options – you can disable or drop the index. Which is better?

Disabling an Index

To disable an index, issue an ALTER INDEX command.

ALTER INDEX IX_IndexName ON Schema.TableName DISABLE 

What happens when you do this? The metadata about the index is retained, as are the usage statistics in sys.dm_db_index_usage_stats. However, the index is no longer usable by the query optimizer, and if a query specifies the index in an index hint, that query will fail. Also, if the disabled index enforces a PRIMARY KEY or UNIQUE constraint, any foreign key constraints that reference it are disabled as well.
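
If you want to confirm what state an index is in, the is_disabled column in sys.indexes will show you. This is just a quick check, using the same placeholder schema, table, and index names as the examples above:

SELECT name, type_desc, is_disabled
FROM sys.indexes
WHERE object_id = OBJECT_ID('Schema.TableName');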

Disabling a nonclustered index will delete the index pages – the space is freed in the database.
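
If you'd like to watch that space being released, one rough way is to compare the index's page counts before and after disabling it. Here's a sketch using sys.dm_db_partition_stats, again with placeholder names:

SELECT i.name, SUM(ps.used_page_count) AS used_pages
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
    AND i.index_id = ps.index_id
WHERE ps.object_id = OBJECT_ID('Schema.TableName')
GROUP BY i.name;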

Disabling a clustered index has additional effects. The data in the table still exists, but will be inaccessible for anything other than a drop or rebuild operation. All related nonclustered indexes and views are also unavailable. Foreign key constraints that reference the table are disabled. Queries against the table will fail.
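
As a quick illustration, where PK_TableName is just a stand-in for whatever the clustered index is actually named:

ALTER INDEX PK_TableName ON Schema.TableName DISABLE;
-- Any query against the table now fails; the error states that the query
-- processor cannot produce a plan because the index is disabled.
SELECT TOP (1) * FROM Schema.TableName;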

If you want to re-enable the index, you need to rebuild it – there is no ALTER INDEX…ENABLE command.

ALTER INDEX IX_IndexName ON Schema.TableName REBUILD 

When you rebuild an index, the usage stats will be reset in sys.dm_db_index_usage_stats.
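
You can see this by checking the usage stats before and after the rebuild, using the same placeholder names as above:

SELECT i.name, s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
    ON i.object_id = s.object_id
    AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
    AND s.object_id = OBJECT_ID('Schema.TableName');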

Dropping an Index

To drop a clustered or nonclustered index, issue a DROP INDEX command.

DROP INDEX IndexName ON Schema.TableName 

When you do this, the metadata, statistics, and index pages are removed. If you drop a clustered index, the table will become a heap.

Once an index has been dropped, it can’t be rebuilt – it must be created again. Thus, if you are going to drop an existing index, make sure you also script out the CREATE statement. If you determine that you need the index again in the future, it will be much easier to open the script than to try to remember how it was set up!
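
For example, keeping a script like the following alongside the DROP means the index can be recreated exactly as it was. The index, table, and column names here are only placeholders; the real definition should include the actual key columns, included columns, and any options:

CREATE NONCLUSTERED INDEX IX_IndexName
ON Schema.TableName (Column1, Column2)
INCLUDE (Column3);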

Do you want to maintain statistics? 

The biggest difference between disabling and dropping an index is whether the metadata and statistics are persisted. If disabled, they are. If dropped, they are not. Make sure you carefully weigh your options before performing either action, and always have a way to recreate the index available.

  1. Thanks for the article. I remember being taken aback when, while playing with these ideas myself some time ago, I created a table as a heap and could run a select on it just fine. Then I created a clustered index on it, and the select worked fine. Then I disabled the clustered index, and the select would not work anymore. I was baffled because I reasoned that even if the clustered index is disabled, SQL Server should still be able to access the table as a heap, right? Nope, no dice. This helps explain it, and reminds me why I concluded that a table doesn’t so much HAVE a clustered index as a table IS a clustered index (unless it’s a heap). Cheers.

  2. Pingback: Something for the Weekend - SQL Server Links 01/03/13 • John Sansom

  3. Can you elaborate on the disabling of FKs when a nonclustered index is disabled?

    • Take the example of AdventureWorks SalesOrderHeader and SalesOrderDetail. Disable the clustered index on SalesOrderHeader. The SalesOrderDetail table has a FK to SalesOrderID. When you try to insert a row into SalesOrderDetail, you will get an error stating that the index on SalesOrderHeader is disabled.

      • I wouldn’t read that as ‘foreign key constraints are disabled’ – that’s too broad a statement. I would say that if you disable the primary key (rather than dropping it), inserts on tables referencing that key as a foreign key will no longer work.

        If, however, you disable some other index, the foreign key constraints will still work fine. For example, I just disabled IX_SalesOrderHeader_CustomerID on Sales.SalesOrderHeader, and could still insert into Sales.SalesOrderDetail. (Tested on SQL Server 2008 R2, AdventureWorks2008R2.)

  4. Like Hakim, when I tested this index disable feature a while ago, I didn’t see a use case, since ‘enabling’ the index again requires an index rebuild, which is similar to dropping and recreating the index. Now I can see at least one use case for this index disable feature: its metadata retention. Data warehouse operations often require indexes to be dropped and then recreated after the insert. A lot of times I’ve seen the index recreation step pre-scripted, which causes subsequent problems when someone modifies the index and forgets to change the recreation script. The metadata of the disabled index could be useful to reduce this type of problem, as long as the foreign key concern is carefully examined.

    I am still wondering what the use case is for the statistics retained from the disabled index. Jes or anyone, have you come across any use case for this? Thanks.

  5. Thank you Jes for this useful information. Question about table scans versus index scans. Many of my indexes are being used for scans, not seeks. I am experiencing performance issues not because of the index scans, but because the indexes continually need to be updated each time we add/modify/delete records.

    Question: If I dropped these indexes that are only used for scans (not seeks), can I expect performance to remain the same since SQL will just switch to table scans instead of index scans? Is there really a performance difference between table scans and index scans?

    Thanks again. John

    • John – quick clarifying question. Are your indexes the same size as the table, or narrower? (Think in terms of field quantity and size of each field.)

      • Hi Brent. Thanks for the quick reply. The indexes are narrower than the table. The table has 5 columns, but the indexes will have one or two columns. The table itself has over 20 million records and growing. I believe the reason there are so many index scans and no index seeks is that the stored procedures that call on this table are doing counts.

        Since index scans and table scans both require a scan of every record, I just wondered whether there is really a performance difference between the two. If I am reading your question the way I think I should, I suppose much of it depends on the WHERE clauses that are used to call on the table?

        • If SQL Server does a table scan, it has to read all the rows AND columns into memory. If it does an index scan, it’s going to read all the rows – but only the columns in the index. In terms of I/O, a narrower nonclustered index can be less expensive than a clustered index or table scan.
