How Much Can One Row Change A Query Plan? Part 1


Last week, Erik showed two queries that were aaaaalmost identical, with only one extra column – and the execution plans were dramatically different. Adding just one eensy column made all the difference in the world.

Now, check out these two queries – the first asks for top 100, and the second asks for top 101:

They produce estimated plans that seem identical (PasteThePlan), even down to the estimated costs – note that they’re both 50%:

One of these things is much like the other

But one of these has a couple of very, very bad performance issues, and if you look very closely in the plan’s XML, you’ll discover the gotchas.

For the details, read on in Part 2.

13 Comments

  • Looks like the second one (101 rows) is requesting a huge SerialDesiredMemory grant, which would make sense if SQL had to pull in all the rows to sort them descending and then filter to the top 101. The first query only requested 856 KB. My guess is the first query requested too small a memory grant, so it spent a lot of time doing the actual work of scanning the Posts table, and the performance issue would be with the first query.

    • Arthur – interesting! So just to make it clear: you’re expecting the TOP 100 to take longer, and the TOP 101 to be faster.

      (Not saying you’re wrong or right just yet – letting other folks try it with the database to see what happens in the actual plans.)

    • Noah Engelberth
      September 12, 2017 9:07 am

      The max available memory is a lot lower than what the 101-row one is estimating, though, so either plan is going to wind up spilling to tempdb for the sorts. I still think the higher memory grant for the 101-row plan should cause it to spill less and thus run faster.

      • Noah – you wrote, “so either plan is going to wind up spilling to tempdb for the sorts”

        Question your assumptions on that by asking, “How much memory does each one need? And what would cause that to be different?”

        • Noah Engelberth
          September 13, 2017 11:20 am

          Yeah, clearly my assumption that “oh, SQL must have no freaking idea what it’s doing with the TOP 100 and it’s under-requesting memory” was a little off 😀

          I’m just too used to technology letting me down, I guess 😉

          • Noah – don’t feel bad! It’s amazing how many surprises you keep finding in query plans the more you play around. I like to joke in my classes by saying, “SQL Server is amazingly smart when I least expect it to be, and also amazingly…not-so-smart when I expect it to be a genius.”

  • Geoff Patterson
    September 12, 2017 10:47 am

    The TOP 101 plan desires a much larger memory grant because it actually needs that memory! The implementation of Top N Sort for 101 (or more) rows is to first sort all of the rows (expensive in both memory and CPU) and then take the first 101 rows.

    For 100 (or fewer) rows, the Top N Sort operator uses an alternative algorithm that does a single pass over all the rows and maintains a separate data structure — likely a heap, as described in https://en.wikipedia.org/wiki/Heap_(data_structure) — that contains the top 100 rows observed thus far. Once all the rows have been processed, only the (up to 100) rows in that separate data structure need to be sorted to compute the final result. Therefore, the memory grant required is quite small, since only the top 100 rows observed thus far need to be retained in memory at any given point.

    In practice, this alternative algorithm nearly always provides far better performance (and avoids spills) when selecting the top 100 rows (vs. 101 rows) from a much larger number of rows. It's unfortunate that the row threshold for the optimization can't be configured or determined dynamically (it's always 100), but it's still an incredibly powerful optimization!
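
    To make the two strategies concrete, here's a minimal Python sketch (purely illustrative: the function names are made up, and SQL Server's actual Top N Sort internals aren't exposed) of a full sort versus a single pass over a bounded min-heap:

    ```python
    import heapq
    import random

    def top_n_full_sort(rows, n, key):
        # The TOP 101-and-up strategy: sort every row first
        # (memory proportional to len(rows)), then keep the first n.
        return sorted(rows, key=key, reverse=True)[:n]

    def top_n_bounded_heap(rows, n, key):
        # The TOP 100-and-under strategy: one pass, retaining only the n
        # best rows seen so far (memory proportional to n). heap[0] is
        # always the weakest of the current keepers.
        heap = []
        for i, row in enumerate(rows):
            item = (key(row), i, row)  # i breaks key ties so rows never compare
            if len(heap) < n:
                heapq.heappush(heap, item)
            elif item[0] > heap[0][0]:
                heapq.heapreplace(heap, item)  # evict the current weakest keeper
        # Only the n survivors need the final sort.
        return [row for _, _, row in sorted(heap, reverse=True)]

    # Both return the same rows; only the memory profile differs.
    rows = random.sample(range(1_000_000), 100_000)  # distinct values, no ties
    assert top_n_full_sort(rows, 100, key=lambda r: r) == \
           top_n_bounded_heap(rows, 100, key=lambda r: r)
    ```

    Python's standard library ships the bounded-heap approach as heapq.nlargest(n, rows, key=key), which makes it easy to experiment with the trade-off described here.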

    Paul White covers the topic in more depth, including an interesting contrived example in which the top 100 optimization can actually be detrimental to performance:
    http://sqlblog.com/blogs/paul_white/archive/2010/08/27/sorting-row-goals-and-the-top-100-problem.aspx

    • Great stuff! I had no idea this could happen, and it certainly refutes the idea I had.

    • As soon as I saw the word seem (“They produce estimated plans that seem identical (PasteThePlan)”), I pasted the plans into a diff tool, which called attention to the memory grant described by Geoff and Arthur. I’ll note SSMS shows this grant on the properties of the select node within the plan UI, but as far as I can tell this information is not displayed within the PasteThePlan UI.

      I can confirm that the 100 vs 101 behavior explained by Geoff occurs when querying my own database.
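
      Since the grant isn't surfaced in the PasteThePlan UI, a scriptable alternative to a diff tool is pulling MemoryGrantInfo straight out of the saved plan XML. A minimal Python sketch, with hypothetical .sqlplan file names:

      ```python
      import xml.etree.ElementTree as ET

      # Showplan XML uses this namespace; MemoryGrantInfo hangs off the
      # statement node and reports its values in KB.
      NS = {"sp": "http://schemas.microsoft.com/sqlserver/2004/07/showplan"}

      def desired_memory_kb(sqlplan_path):
          root = ET.parse(sqlplan_path).getroot()
          info = root.find(".//sp:MemoryGrantInfo", NS)
          return None if info is None else int(info.get("SerialDesiredMemory"))

      # Hypothetical file names for the two saved plans.
      for path in ("top100.sqlplan", "top101.sqlplan"):
          print(path, "->", desired_memory_kb(path), "KB desired")
      ```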

      • Brian – yep, exactly! It’s neat how you have to kinda know that the problem is happening in order to take it to the next level and use a diff tool. The different sort algorithms aren’t obvious from the plan.

    • Geoff – very good! It’s so fun to see this in action, and it’s one of those things that really catches people by surprise when they pick a number for their SELECT TOP ___. There’s nothing in the visual plan that indicates it’d be an issue, and you have to drill down pretty far to discover it in the memory grants.
