Some of our clients have very high forwarded record counts and aren’t aware of it until they run sp_BlitzFirst and get an alert about high Forwarded Records per Second. Some of these clients use Ola Hallengren‘s IndexOptimize stored procedure to maintain their indexes. This raised the question of whether rebuilding a heap fixes forwarded records, and whether IndexOptimize is excluding heaps.
What is a heap?
A heap is a table without a clustered index. It is not stored in any kind of order. Think of a heap like a teenager who has been asked to clean their room. The teenager grabs everything off the floor and crams it into the closet. The room looks orderly at first glance, until you open the closet: everything was crammed into a pile randomly and without order. That pile is a heap.
What are forwarded records?
Forwarded records are rows in a heap that have been moved from their original page to a new page, leaving behind a forwarding pointer on the original page that points at the new page. This occurs when an UPDATE increases the size of a row so that it no longer fits on its page. The forwarding pointer keeps track of where the data went.
How do you fix forwarded records?
You have two options to remove the forwarded records.
1. Rebuild the heap: ALTER TABLE TableNameGoesHere REBUILD;
2. Add a clustered index to the table
Option 1 is a temporary fix. Forwarded records can still happen, so you should monitor for them and rebuild the table to remove them. Note that heap rebuilds were added in SQL Server 2008; you can’t rebuild heaps in SQL Server 2005 or earlier.
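If you go the rebuild route, you need a way to find the heaps that have accumulated forwarded records. Here’s a minimal sketch of a monitoring query; the thresholds are up to you, and DETAILED mode reads every page, so run it during a quiet window:

```sql
-- Find heaps with forwarded records in the current database.
SELECT OBJECT_SCHEMA_NAME(ps.[object_id]) AS schema_name,
       OBJECT_NAME(ps.[object_id])        AS table_name,
       ps.forwarded_record_count,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ps
WHERE ps.index_type_desc = 'HEAP'
  AND ps.forwarded_record_count > 0
ORDER BY ps.forwarded_record_count DESC;
```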
Option 2 is a permanent fix. Some people prefer heaps for performance reasons; I am not one of those people. Writes on heaps perform well, but reads do not. Think of the teenager cleaning their room: the teenager can “clean” the room quickly but can’t find things easily afterward. Finding one item might not take too much time, but imagine having to find 10 items in that pile.
Add a clustered index to all tables with the exception of staging tables or those used for ETL.
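For option 2, you can either create a clustered index directly or add a clustered primary key. A sketch, using a hypothetical dbo.Orders table with an Id column:

```sql
-- Option A: a plain clustered index
CREATE CLUSTERED INDEX CIX_Orders_Id ON dbo.Orders (Id);

-- Option B: a clustered primary key, if Id should also be unique and non-null
ALTER TABLE dbo.Orders
    ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (Id);
```

Either way, building the clustered index rewrites the table, which removes the existing forwarding pointers.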
Can you show me a demo of this?
Using the StackOverflow database, I created a heap by dropping the clustered index on the Posts table. Even though this isn’t a small table, the clustered index dropped in just over a minute. I could have used a smaller table or created a new one, but I was too lazy. I save my energy for hiking, plus I always start with Posts.
ALTER TABLE Posts DROP CONSTRAINT [PK_Posts__Id];
Examining the table, we see it has 0 forwarded records:
SELECT OBJECT_NAME(object_id) AS table_name, forwarded_record_count, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('Posts'), DEFAULT, DEFAULT, 'DETAILED');
Create some forwarded records:
UPDATE Posts
SET Body = Body + Body + Body
WHERE Id BETWEEN 6785 AND 7000;
Examining the table again, we see it now has 36 forwarded records:
Time to rebuild it:
ALTER TABLE Posts REBUILD;
Check the forwarded record count again:
Yippee, no forwarded records after the heap was rebuilt!
Ola Hallengren’s IndexOptimize does not rebuild heaps.
IndexOptimize does not rebuild heaps as of this writing. It specifically excludes them with “indexes.[type] IN(1,2,3,4,5,6,7)”, since type = 0 is a heap. If you want Ola’s script to rebuild heaps, edit the IndexOptimize proc and look for this area:
SET @CurrentCommand = @CurrentCommand + ' WHERE objects.[type] IN(''U'',''V'')'
+ CASE WHEN @MSShippedObjects = 'N' THEN ' AND objects.is_ms_shipped = 0' ELSE '' END
+ ' AND indexes.[type] IN(1,2,3,4,5,6,7)'
+ ' AND indexes.is_disabled = 0 AND indexes.is_hypothetical = 0'
And add 0 in the list of types, as in:
+ ' AND indexes.[type] IN(0,1,2,3,4,5,6,7)'
Just keep in mind that you’ll need to edit Ola’s proc each time you update it. To be alerted if/when Ola changes this, watch this Github issue.
Brent says: I was shocked when I learned this. I thought for sure Ola would take care of me. Turns out there’s a few things I still have to do for myself.
Does REBUILD take the table offline? Are there any other issues to be aware of for this? Thanks!
The rebuild will take the table offline unless you can rebuild online, which is an Enterprise Edition feature. Ola’s scripts handle this with options you can set, in order of preference, for what to do.
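For example, a minimal sketch of an online heap rebuild (ONLINE = ON requires Enterprise Edition; check your version and edition before relying on it):

```sql
-- Online heap rebuild: readers and writers can keep using
-- the table while it's being rebuilt.
ALTER TABLE dbo.Posts REBUILD WITH (ONLINE = ON);
```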
I wrote a short article last year, showing what DBCC PAGE shows in relation to forwarding pointers – http://wp.me/p3Vxvi-bO
This post is really eye-opening, yet another reason why every table should have a clustered index. Well done, Tara!
Hi Tara, it’s nice to see some posts written by you…
“Add a clustered index to all tables with the exception of staging tables or those used for ETL.”
Can you explain why adding a clustered index prevents forwarded records?
Does adding the index remove existing forward records too?
It’s because fragmentation on B-tree indexes (clustered or nonclustered) produces page splits instead; forwarded records are a phenomenon exclusive to heaps.
And yes, creating a clustered index will remove all forwarded records in your heap, as each row must be placed contiguously between the previous and next clustered key in the index.
Remember that a clustered table is always sorted by the clustering key and must stay that way to be able to perform seek operations on the key.
Hope this helps.
Just in case anyone hasn’t seen it, I thought I’d point out that the problem described in this KB can make heaps with frequent deletes very troublesome:
I’ve observed this causing a couple types of really bad performance problems (sometimes both at the same time):
1) Table scans on a heap taking minutes that only take a few seconds after a table rebuild
2) Buffer pool being full of empty or nearly-empty pages from a heap
The symptoms surprised me until I learned about that KB and that IndexOptimize doesn’t rebuild heaps.
OK, what do I do with a third-party DB that has over 6K tables, over 5K of them with no clustered index (they went GUID crazy, don’t get me started), and several of these tables have millions of rows? I use Ola’s stuff, which is good, but now I am depressed.
timc – you rebuild the heaps as we describe in the post. Why so sad? You’ve got the answer now! Rejoice!
Rejoice? 5k tables with no CI? You are genuinely a wrong one, Ozar.
FWIW, here’s a proc I wrote a few years ago that rebuilds fragmented heaps. Note that it uses DETAILED to find the “fragmentation level” (number of forwarded records)…
Testing it now; the problem is how to do this with 600 users on it.
Tested; I’ll have to do some research on the help it gives. Tibor, if you don’t mind, I am going to post a link to your stuff on the user group forum for this DB. I may become the users’ hero but the vendor’s villain.
Excellent blog topic. I learned heaps!
I see what you did there.
Absolutely, Tim. Go ahead. 🙂
Is there any advantage to adding a clustered index to small lookup tables that do not change? For example, a list of states or counties, or even statuses?
Jay – if you seek specific rows, or if you read them in order, yes, it’s better to sort the table once instead of every time you read it.
This kind of table does not change for years at a time and most fit in one or at most two pages. Is it still a good choice?
I would add a clustered index to a table that doesn’t change if you are seeking on rows or sorting them. Remember that heaps are great for writes.
I love the teenager analogy; it reminds me of my daughter, who’s only 10.
If you are interested, I built an additional stored procedure based on Ola Hallengren’s maintenance scripts to rebuild the heaps; I can provide the code.
It has been running in our environment for some time, with the same frequency as the IndexOptimize script from the maintenance plan.
It is only usable if Ola’s maintenance scripts are installed.
Feel free to share a link!
Can you share this script?
can you please share the script
Share what script? The demo scripts are in the post.
It seems Ludo created an additional stored procedure based on Ola Hallengren’s maintenance scripts to rebuild the heaps. That is the script I was asking to be shared 🙂
I don’t know about Ludo’s script, but you can find mine here: http://karaszi.com/rebuild-all-fragmented-heaps
Ludo Bernarert, would you share the additional stored procedure you built based on Ola Hallengren’s maintenance scripts?
Can you share the additional stored procedure based on Ola Hallengren’s maintenance scripts (HeapRebuild)?
What would one consider thresholds for heaps? The same as for clustered or nonclustered indexes: ignore anything less than 1,000 pages, and rebuild when >= 30%? Or 5%? I’m going with 5% myself and ignoring anything less than 1,000 pages.
I would be paying attention to tables with high forwarded records or those with lots of deletes. So I would have a custom script that checks for these 2 conditions and rebuilds them if either condition exists.
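A custom script along those lines might look like the following rough sketch, which rebuilds any heap in the current database with forwarded records; add your own thresholds, delete-activity checks, logging, and error handling:

```sql
-- Rebuild every heap in the current database that has forwarded records.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(ps.[object_id]))
    + N'.' + QUOTENAME(OBJECT_NAME(ps.[object_id])) + N' REBUILD;' + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ps
WHERE ps.index_type_desc = 'HEAP'
  AND ps.forwarded_record_count > 0;

EXEC sys.sp_executesql @sql;
```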
In my case at hand, believe it or not, all the forwarded record counts are zero, yet some tables are large (>50,000 pages) with 25% or more fragmentation. These are all auditing tables, meaning they are only a target for inserts, no updates. Deletes only occur when older data is purged (in batches). I suspect eliminating the fragmentation (or at least minimizing it) might help. I’ll soon find out.
Deletes are a problem for heaps too. Deletes leave the space behind unless a table lock is used during the delete.
So add a clustered index to those auditing heaps that have deletes occurring on them.
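For the table-lock point above, here’s a minimal sketch using a hypothetical dbo.AuditLog table; the TABLOCK hint lets SQL Server deallocate the pages the delete empties:

```sql
-- With TABLOCK, SQL Server can deallocate pages emptied by the delete;
-- without it, the empty pages stay allocated to the heap.
DELETE FROM dbo.AuditLog WITH (TABLOCK)
WHERE LoggedAt < DATEADD(YEAR, -1, GETDATE());
```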
Good point Tara! Thanks! Glad to see you’re part of the Ozar team!
Someone just brought this to my attention. I can’t imagine why Ola’s scripts would exclude heaps, but I guarantee you that Minion Reindex DOES reindex heaps. You can find it at http://www.MinionWare.net/reindex
From what I’ve heard from Ola, it’s because rebuilding a heap (alter table dbo.whatever rebuild) also rebuilds all the nonclustered indexes, too.
That can be messy if you’ve gone the path of deliberate heaps with nonclustered indexes, or if you’re using them accidentally and have indexed the table for other queries normally.
You may want to consider making that optional to avoid unnecessary rebuilds/maintenance time. Just a suggestion, though.
Good point! I’d be interested in knowing what method you guys use to detect fragmentation on heaps, since the only reliable method is forwarded_record_count (avg_fragmentation_in_percent will ‘only’ show extent fragmentation *), and it’s only displayed in ‘DETAILED’ mode, which can be kinda expensive if you have big tables.
Please, correct me if I’m wrong
*And I guess if you rebuild heaps in parallel you will never get rid of extent fragmentation.
Hi Brent, I have forwarded records on one of our heaps found by your BlitzFirst script, however it hasn’t provided me with the databaseID/Table Name. Is this by design? 3rd party applications database uses heaps :(. Thanks in advance
Hi Daniel, use sp_BlitzIndex to figure out which table has the high forwarded records.
How many forwarded records per second is bad?
Kausar – I don’t think I’ve ever seen a definitive answer on that. I’d step back and ask, “What’s the problem you’re trying to solve?”
If you forced me to pick a number, I’d say anything over 200 per second should be looked into. The servers that I’ve seen with high forwarded records per second have all sorts of issues. It’s just more evidence that the server needs attention from an experienced performance tuner.
But I’m just throwing that number out there. Don’t hold me to it! 😉
Tara, we have a SQL Server with two third-party apps running against the same SQL Server 2016 instance. After running sp_BlitzFirst (took 1 min 33 sec), my priorities included, at line 2 in the results, “Forwarded Fetches/Sec High”, and we had over 17,000 forwarded records. I am trying to determine which table has the high forwarded records, but I am not sure how to run sp_BlitzIndex, because the way I ran it shows “No Major Problems Found”. Glad to find your website. I am new to DBA tasks (IT generalist 10+ years).
Ran it a second time 45 minutes later, and this is apparently not the issue anymore… Looks like it may have been momentary. We do use Ola’s scripts; perhaps they’re set up to include heaps (implemented by another engineer)…
On the second run, our performance issues were CPU at 53% (ring buffer), LATCH_EX 805 sec, and CXPACKET 370 sec, followed by SOS_SCHEDULER_YIELD 158 sec and TRACEWRITE 92 sec.
Thanks for any insight. I plan to leave this here and go over to your other page for wait-stats to dig into LATCH_EX etc…. Thanks again!
Hi Ace, Ola’s index maintenance scripts don’t rebuild heaps, sadly. You can see which tables are the heaps with high forwarded records by running sp_BlitzIndex with @Mode = 4 (it will give you detailed findings of all issues, however minor).
sp_BlitzFirst only brings up the issues found at the time you run it, so I would recommend storing sp_BlitzFirst results in a table (shown in this link – https://www.brentozar.com/askbrent/) and then creating an SSRS report to alert you of any serious performance issues.
Run sp_BlitzIndex @Mode = 0 in each of the databases. Look for the forwarded records section. 17000 per second is the highest I’ve ever heard of. One of my clients had 5500 per second, and I thought that was outrageous.
I just ran it again (1min 13 sec) and got the high Forwarded Fetches/Sec again but this time it was at 156,944 !!! I will run sp_BlitzIndex @Mode = 0 in each db and find the database where this is occurring… It may take some time since we have 48 databases on this server. But in size our sum total of all DBs combined is around 70GB.
Thanks for your reply.
Hi Tara or Brent,
I ran sp_BlitzIndex @Mode = 0 against a suspicious DB and the top 3 results are: Row 2: “Aggressive Indexes: Total lock wait time > 5 minutes…” dbo.XYZ Row lock waits 47; duration: 6 minutes avg duration 8 seconds… Reads 192,137 it is 106MB
Row 3: “Multiple Index Personalities: Duplicate keys” Index Name: ABC123 Reads 424,897 it is 262MB
Row 4: “Indexaphobia: High value missing index with Low Impact” dbo.XYZ (again) 4,827 uses; Impact 99.1% avg query cost 4.0650 22NC indexes exist (188MB)
Actually Minion Reindex can do heaps. It’s a simple setting and you can even tell it how to handle them at the DB, or at the individual table level. You can get it at http://www.minionware.net/
Yup, you commented back in October with that info. 😉
Y/N. I commented that it could be done, and it could, but it required a little manipulation. This latest release turned it into a config option.
You wrote in October: “Someone just brought this to my attention. I can’t imagine why Ola’s scripts would exclude heaps, but I guarantee you that Minion Reindex DOES reindex heaps.” But I guess you were mistaken and have now added the feature. That’s cool. Brent wants to add the feature to IndexOptimize too now that it’s open source.
I never said i was mistaken. In fact, I just said that it was possible back then, but it took some manipulation. Now I’ve simplified it as a setting. I wanted it to be easier. There are a few things you can do with Minion Reindex that require a little manipulation. They allow you to do things that you can’t do with any other tool. And I’m looking at making them settings in the next major release. That doesn’t mean that I’m mistaken about whether you can do it now.
Cool. So sounds like you’ll be back here again next year to talk about it, eh? See you again next fall! Nice meeting up with you here in the same blog post comments every year.
Oh I’ll try to jump on it every chance I get… just for you.
It says to not add clustered indexes to fix staging and ETL tables with forwarded records. Is there a reason for this? Do we have to live with forwarded records on stage and ETL tables or is there a solution specific to them?
It’s in the first paragraph of the fix section. Schedule a job to run ALTER TABLE TableNameGoesHere REBUILD on those staging-type tables since it doesn’t make sense to add a clustered index to them.
Thanks, Tara. I wasn’t sure if there was any other way than rebuilding the heap.
Using sp_BlitzFirst, I get over 12K forwarded fetches in “Forwarded Fetches/Sec High: Temp Table”. I’m using sys.dm_db_index_physical_stats on tempdb to identify the problem tables, but it does not return any tables with forwarded fetch problems. How is sp_BlitzFirst getting this result? And I’m guessing the fix is to go to the developers and tell them to stop using heap temp tables where clustered ones should be.
Andy – one of the awesome things about T-SQL scripts like sp_BlitzFirst is that you can open the code and read it. Go ahead and search for that string, and you can see exactly how it’s calculated.
The response email says this answer is “probably wrong, too”. The correct response was, “quit being so lazy and find the answer yourself”. But seriously, thanks for the kick in the pants.
Hahaha, you’re welcome.
Question about editing Ola’s scripts: do you only edit the one instance of indexes.[type] IN(1,2,3,4,5,6,7) to add the 0, or do you add the 0 to all locations where that code appears? It seems like indexes.[type] IN(1,2,3,4,5,6,7) exists in more than one place.
+1 on this. I’d like to make the change but am not sure if all the sections need to be updated.
I think all of them, but see my comment in the thread. I think this may not work, as Ola’s proc will never generate an ALTER TABLE statement, only ALTER INDEX, and the latter will not work for heaps. But I could be wrong.
Presume this phenomenon happens with temporary tables and table variables without clustered indexes too? How many forwarded records is bad before performance really suffers?
James – if you’re trying to find a threshold that you can personally live with for your own workloads, your best bet would be to run an experiment to find out. I can’t tell you what level of slowdown you’re okay with. 😉
I would assume, given the nature of this issue, that no, forwarded record counts are not a concern for temp tables and table variables.
I just tested on both temp tables and table variables, and forwarded records happen on them in exactly the same way as on regular tables. Whether one suffers from them is a different matter. But it isn’t unlikely that you use such a construct as a “scratchpad” and modify data heavily in the temp table / table variable, ending up with all those forwarded records.
sys.dm_db_index_operational_stats is a nice DMV geared towards finding heaps with forwarded fetches (whether they live in a user database or TempDB).
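Something along these lines, as a sketch; note the counters are cumulative since the object’s metadata was last cached, not per-second values:

```sql
-- Heaps (index_id 0) with forwarded fetches, across all databases.
SELECT DB_NAME(os.database_id)                    AS database_name,
       OBJECT_NAME(os.[object_id], os.database_id) AS table_name,
       os.forwarded_fetch_count
FROM sys.dm_db_index_operational_stats(NULL, NULL, NULL, NULL) AS os
WHERE os.index_id = 0
  AND os.forwarded_fetch_count > 0
ORDER BY os.forwarded_fetch_count DESC;
```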
Hi Brent / fellow DBAs. Brent, since you know this stuff infinitely better than me, I suspect your suggestion that Ola’s IndexOptimize proc could be adapted by adding index type 0 (i.e., changing occurrences of indexes.[type] IN(1,2,3,4,5,6,7) to indexes.[type] IN(0,1,2,3,4,5,6,7)) must be right. But when I look through the proc, I cannot see that it would generate an ALTER TABLE statement, only ALTER INDEX, and I think to rebuild heaps you need the former? Apologies if I am wasting your time here. Thanks.