Last week, Erik showed two queries that were aaaaalmost identical, with only one extra column – and the execution plans were dramatically different. Adding just one eensy column made all the difference in the world.
Now, check out these two queries – the first asks for top 100, and the second asks for top 101:
SELECT TOP 100 *
FROM dbo.Posts
ORDER BY CreationDate DESC;
GO

SELECT TOP 101 *
FROM dbo.Posts
ORDER BY CreationDate DESC;
GO
They produce estimated plans that seem identical (PasteThePlan), even down to the estimated costs – note that they’re both 50%:

But one of these has a couple of very, very bad performance issues, and if you look very closely in the plan’s XML, you’ll discover the gotchas.
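If you want a head start on spelunking through that XML, here's one approach: save each estimated plan from SSMS as a .sqlplan file (it's just XML under the hood) and dump the MemoryGrantInfo element from each. This is a minimal Python sketch, and the file names are hypothetical – save your own plans first:

import xml.etree.ElementTree as ET

# Showplan XML uses this namespace in every .sqlplan file.
SHOWPLAN = "{http://schemas.microsoft.com/sqlserver/2004/07/showplan}"

# Hypothetical file names - save your own estimated plans from SSMS first.
for plan_file in ("top100.sqlplan", "top101.sqlplan"):
    root = ET.parse(plan_file).getroot()
    for grant in root.iter(SHOWPLAN + "MemoryGrantInfo"):
        # Prints attributes like SerialDesiredMemory - compare the two files.
        print(plan_file, grant.attrib)

Exactly which attributes show up depends on your SQL Server version, but comparing the two outputs points you straight at one of the gotchas.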
13 Comments
Looks like the second one (101 rows) is requesting a huge SerialDesiredMemory grant, which would make sense if SQL had to pull in all the rows, sort them descending, and then filter to the top 101. The first query only requested 856 KB. My guess is that the first query requested too small a memory grant and spent a lot of time doing the actual work of scanning the Posts table, so the performance issue would be with the first query.
Arthur – interesting! So just to make it clear: you’re expecting the TOP 100 to take longer, and the TOP 101 to be faster.
(Not saying you’re wrong or right just yet – letting other folks try it with the database to see what happens in the actual plans.)
Yep, that’s my guess. I’m interested to see the other responses.
The max available memory is a lot lower than what the 101-row plan is estimating, though, so either plan is going to wind up spilling to tempdb for its sort. That said, I think the higher memory grant for the 101-row plan should cause it to spill less and thus run faster.
Noah – you wrote, “so either plan is going to wind up spilling to tempdb for the sorts”
Question your assumptions on that by asking, “How much memory does each one need? And what would cause that to be different?”
Yeah, clearly my assumption that “oh, SQL must have no freaking idea what it’s doing with the TOP 100 and it’s under-requesting memory” was a little off 😀
I’m just too used to technology letting me down, I guess 😉
Noah – don’t feel bad! It’s amazing how many surprises you keep finding in query plans the more you play around. I like to joke in my classes by saying, “SQL Server is amazingly smart when I least expect it to be, and also amazingly…not-so-smart when I expect it to be a genius.”
The TOP 101 plan desires a much larger memory grant because it actually needs that memory! The implementation of Top N Sort for 101 (or more) rows is to first sort all of the rows (very memory and computationally expensive) and then take the first 101 rows.
For 100 (or fewer) rows, the Top N Sort operator uses an alternative algorithm that does a single pass over all the rows and maintains a separate data structure (likely a heap, as described in https://en.wikipedia.org/wiki/Heap_(data_structure)) that contains the top 100 rows observed thus far. Once all the rows have been processed, only the (up to 100) rows in that separate data structure need to be sorted to compute the final result. Therefore, the required memory grant is quite small, since only the top 100 rows observed so far need to be retained in memory at any given point.
In practice, this alternative algorithm nearly always provides far better performance (and avoids spills) when selecting the top 100 rows (vs. 101) from a much larger number of rows. It’s unfortunate that the row threshold for the optimization cannot be configured or determined dynamically (it’s always 100), but it’s still an incredibly powerful optimization!
Paul White covers the topic in more depth, including an interesting contrived example in which the top 100 optimization can actually be detrimental to performance:
http://sqlblog.com/blogs/paul_white/archive/2010/08/27/sorting-row-goals-and-the-top-100-problem.aspx
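To make Geoff's explanation concrete, here's a toy Python sketch of the two algorithms – emphatically not SQL Server's actual implementation, just the same idea: a single pass with a bounded min-heap versus sorting everything and slicing. The heap version's memory use stays proportional to N no matter how many rows stream through:

import heapq
import random

def top_n_heap(values, n):
    """Single pass: keep only the n largest values seen so far in a min-heap."""
    heap = []
    for v in values:
        if len(heap) < n:
            heapq.heappush(heap, v)
        elif v > heap[0]:
            # Beats the smallest of the current top n: swap it in.
            heapq.heappushpop(heap, v)
    # Only these n survivors need the final (cheap) sort.
    return sorted(heap, reverse=True)

def top_n_full_sort(values, n):
    """Sort everything, then slice: memory is proportional to ALL the rows."""
    return sorted(values, reverse=True)[:n]

data = [random.random() for _ in range(1_000_000)]
assert top_n_heap(data, 100) == top_n_full_sort(data, 100)

Swap the floats for CreationDate values and you've got the same shape as the TOP 100 query: the heap only ever holds the 100 newest dates seen so far, which is why its memory grant is tiny.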
Great stuff! I had no idea this could happen, and it certainly refutes the idea I had.
Arthur – don’t feel bad! Most people don’t know, thus the post, heh.
As soon as I saw the word seem (“They produce estimated plans that seem identical (PasteThePlan)”), I pasted the plans into a diff tool, which called attention to the memory grant described by Geoff and Arthur. I’ll note SSMS shows this grant on the properties of the select node within the plan UI, but as far as I can tell this information is not displayed within the PasteThePlan UI.
I can confirm that the 100 vs 101 behavior explained by Geoff occurs when querying my own database.
Brian – yep, exactly! It’s neat how you have to kinda know that the problem is happening in order to take it to the next level and use a diff tool. The different sort algorithms aren’t obvious from the plan.
Geoff – very good! It’s so fun to see this in action, and it’s one of those things that really catches people by surprise when they pick a number for their SELECT TOP ___. There’s nothing in the visual plan that indicates it’d be an issue, and you have to drill down pretty far to discover it in the memory grants.