Base Table
When you choose to forgo putting a clustered index on your table, you may find your queries utilizing forwarded fetches — SQL Server’s little change of address form for rows that don’t fit on the page anymore.
This typically isn’t a good thing, though. All that jumping around means extra reads and CPU that can be really confusing to troubleshoot.
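If you want to see forwarded records piling up without reaching for a stored procedure, one way (a sketch, using the demo table from below; `DETAILED` mode matters because the counter comes back NULL in `LIMITED` mode) is to ask sys.dm_db_index_physical_stats, which exposes a forwarded_record_count column for heaps:

```sql
-- Sketch: count forwarded records on a heap.
-- The table name is the one from this post's demo; swap in your own.
SELECT ps.forwarded_record_count,
       ps.page_count
FROM sys.dm_db_index_physical_stats
     (
         DB_ID(),                     -- current database
         OBJECT_ID('dbo.el_heapo'),   -- the heap
         0,                           -- index_id 0 = the heap itself
         NULL,                        -- all partitions
         'DETAILED'                   -- forwarded_record_count is NULL in LIMITED mode
     ) AS ps;
```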
All sorts of things might get blamed, like parameter sniffing, out of date stats, eye tax frog man station, and more common bogeypersons that take the blame for all SQL Server performance issues.
Dummo
CREATE TABLE el_heapo
(
    id INT IDENTITY,
    date_fudge DATE,
    stuffing VARCHAR(3000)
);

INSERT dbo.el_heapo WITH (TABLOCKX)
(
    date_fudge,
    stuffing
)
SELECT DATEADD(HOUR, x.n, GETDATE()),
       REPLICATE('a', 1000)
FROM
(
    SELECT TOP (1000 * 1000)
           ROW_NUMBER() OVER (ORDER BY @@SPID)
    FROM sys.messages AS m
    CROSS JOIN sys.messages AS m2
) AS x (n);

CREATE NONCLUSTERED INDEX ix_heapo
    ON dbo.el_heapo (date_fudge);
EXEC master.dbo.sp_BlitzIndex
    @DatabaseName = N'Crap',
    @SchemaName = 'dbo',
    @TableName = 'el_heapo';

When we run a simple query against it, we get a plan with a bookmark lookup.
SELECT *
FROM dbo.el_heapo AS eh
WHERE eh.date_fudge BETWEEN '2018-09-01' AND '2019-09-01'
AND   1 = (SELECT 1)
OPTION (MAXDOP 1);
The MAXDOP 1 hint is just to make reading the tea leaves easier.

When we update the table, the fun starts. By fun I mean terribleness.
UPDATE eh
    SET eh.stuffing = REPLICATE('z', 3000)
FROM dbo.el_heapo AS eh
WHERE eh.date_fudge BETWEEN '2018-09-01' AND '2019-09-01'
OPTION (MAXDOP 1);
Now sp_BlitzIndex turns up something new!

We racked up ~6300 forwarded fetches just during the update.
If we re-run our original select query, that number doubles.
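The forwarded fetch counter that sp_BlitzIndex reads lives in sys.dm_db_index_operational_stats, so you can watch the number climb yourself. A minimal sketch (again using the demo table; the counters are cumulative and reset when the metadata for the object is evicted or the instance restarts):

```sql
-- Sketch: cumulative forwarded fetches against the heap.
-- Re-run this before and after the select to watch the count grow.
SELECT OBJECT_NAME(os.object_id) AS table_name,
       os.forwarded_fetch_count
FROM sys.dm_db_index_operational_stats
     (
         DB_ID(),
         OBJECT_ID('dbo.el_heapo'),
         0,     -- index_id 0 = the heap
         NULL
     ) AS os;
```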

If we run Profiler to capture some query metrics, because it’s late and I’d rather enjoy this wine than use XE, the issues show themselves a bit more.

I think the change in metrics from before and after the update speaks for itself.
This is all on a relatively small heap, with queries that touch a small number of rows.
My select only returns a little under 9000 records, but it takes ~6000 extra reads to get them with the fetches involved.
CPU doesn’t do much, but it does show up where it hadn’t before.
Fear Of Endings
Heaps have their uses. Some of my favorite people love heaps.
But you have to be really careful when choosing which tables you’re going to leave clustered indexes off of.
Typically, if any part of the workload involves updates or deletes, a heap is a bad idea.
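If you're already stuck with a heap full of forwarded records, the usual cleanup options are rebuilding the heap in place or just giving it a clustered index. A sketch, with the demo table standing in for yours:

```sql
-- Option 1: rebuild the heap in place. This removes forwarded
-- records, but takes a heavy lock for the duration
-- (Enterprise Edition can soften that with WITH (ONLINE = ON)).
ALTER TABLE dbo.el_heapo REBUILD;

-- Option 2: add a clustered index, which removes forwarding
-- permanently -- clustered index rows are never forwarded.
CREATE CLUSTERED INDEX cx_heapo ON dbo.el_heapo (id);
```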
Thanks for reading!
4 Comments
Yes, deletes on heaps without a table lock are bad: they don’t deallocate the space in the table, so new rows will not get added there. You have to rebuild the table or temporarily add a clustered index to reclaim it. We had tables with gigs of unused space in them.
Rob – when you say “new rows will not get added there,” I assume you mean new rows from *other* indexes or tables? The space is available for reuse for the same object. You can prove it by creating a heap, loading it with 1GB of rows, delete without a table lock, and then insert another 1GB of data. The heap won’t be 2GB in size. (Just clarifying to make sure we’re on the same page.)
“eye tax frog man station” ?
Wypo (Wine typo), or just an old person’s reference that I don’t get cause I’m too darn young? 😛
Heh heh heh, yeah, deep cut there – it’s kinda like Cockney rhyming slang for index fragmentation.