Finding Froid’s Limits: Testing Inlined User-Defined Functions

This week, I’ve been writing about how SQL Server 2019’s bringing a few new features to mitigate parameter sniffing, but they’re more complex than they appear at first glance: adaptive memory grants, air_quote_actual plans, and adaptive joins. Today, let’s talk about another common cause of wildly varying durations for a single query: user-defined functions.

Scalar UDFs and multi-statement table-valued functions have long been a bane of performance because as your data quantity grows, they still run row-by-agonizing-row.

Today, SQL Server hides the work of functions.

Create a function to filter on the number of Votes rows cast by each user, and add it to the stored procedure we’re working on this week. Eagle-eyed readers like yourself will notice that I’ve changed the TOP to just a TOP 100 and removed the ORDER BY, and the reason why will become apparent shortly:
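The original code block didn’t survive here, so this is a minimal sketch of what the function and proc might look like against the Stack Overflow database – the names dbo.VotesCast and dbo.usp_UsersByReputation are my placeholders, not necessarily the exact originals:

```sql
/* A scalar UDF that counts the Votes rows cast by a user.
   Object names are placeholders, not the post's exact originals. */
CREATE OR ALTER FUNCTION dbo.VotesCast(@UserId INT)
RETURNS BIGINT
AS
BEGIN
    DECLARE @VotesCast BIGINT;
    SELECT @VotesCast = COUNT(*)
      FROM dbo.Votes
     WHERE UserId = @UserId;
    RETURN @VotesCast;
END
GO

/* The stored procedure, with a plain TOP 100, no ORDER BY,
   filtering on the function: */
CREATE OR ALTER PROC dbo.usp_UsersByReputation
    @Reputation INT
AS
SELECT TOP 100 u.Id, u.DisplayName
  FROM dbo.Users u
 WHERE u.Reputation = @Reputation
   AND dbo.VotesCast(u.Id) > 0;
GO
```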

Without the function, this query takes under a second. Add in the function, and it flat out kills performance: in compatibility level 140 (2017), the query takes about a minute. SET STATISTICS IO ON doesn’t show anything about reading from the Votes table:
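For reference, the measurement setup is just this (the proc name and parameter value are my placeholders):

```sql
SET STATISTICS IO, TIME ON;

EXEC dbo.usp_UsersByReputation @Reputation = 2;

/* STATISTICS IO only lists objects the outer query touches directly -
   the scalar UDF's reads stay hidden from this output. */
```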

Note that CPU time is much, much higher than elapsed time – indicating that something in the query went parallel. (That’ll be important later.)

As query tuners know by now, the query plan doesn’t show the work involved with the function:

Thankfully, sp_BlitzCache shows the ugly truth (and you’ll have to click to zoom for sure on this one):
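sp_BlitzCache ships in the open source First Responder Kit; sorting by CPU is what surfaces the function’s hidden cost:

```sql
/* From the First Responder Kit - sorting by CPU brings the
   scalar UDF's per-execution cost to the top of the results: */
EXEC sp_BlitzCache @SortOrder = 'cpu';
```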

The function runs 102 times, each time burning 2.5 seconds of CPU time in ~600ms of duration, doing 248K logical reads. That’s a lot of row-by-agonizing-row work. Okay, well, that’s not good, and that’s why user-defined functions have caused much gnashing of teeth.

SQL Server 2019 inlines the work, so it goes slower. Yes, you read that right.

In the Froid white paper, Microsoft talked about how they were working on fixing the function problem. When I read that white paper, my mind absolutely reeled – it’s awesome work, and I love working with a database doing that kind of cool stuff. Now that 2019 is getting closer to release, I’ve been spending more time with it. Most of the time, it works phenomenally well, and it’s the kind of improvement that will drive adoption to SQL Server 2019. Here, though, I’ve specifically picked a query that runs worse only to show you that not every query will be better.

To activate Froid, just switch the compatibility level to 150 (2019), and the query runs in 1 minute, 45 seconds – almost twice as slow. Dig into the actual plan to understand why:
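The switch itself is a one-liner (the database name here is a placeholder for wherever you restored the Stack Overflow database):

```sql
/* Scalar UDF inlining (Froid) turns on with the 2019 compat level: */
ALTER DATABASE StackOverflow SET COMPATIBILITY_LEVEL = 150;
```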

There’s something present that we DON’T want: an eager index spool. Like Erik loves to say, that’s SQL Server passively-aggressively building a sorted copy of the data in TempDB, accessing it over and over, and refusing to build a missing index hint. “No no, it’s okay, you’re busy – I’ll take care of it for you. I know you don’t have time to build indexes or call your mother.”

There’s something missing that we DO want: parallelism. Remember how I kept harping about the 2017 plan going parallel, using more CPU time than clock time? Yeah, not here – this query is single-threaded despite a cost over 1,000 query bucks. Even if you clear the plan cache and try again with @Reputation = 1, you still get a single-threaded plan:
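Sketching that retest – the proc name is my placeholder, and clearing the whole plan cache like this is fine on a test box, not something you’d do in production:

```sql
/* Wipe cached plans, then retry with a different parameter
   to rule out parameter sniffing - still single-threaded: */
DBCC FREEPROCCACHE;

EXEC dbo.usp_UsersByReputation @Reputation = 1;
```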

And it’s not that you can’t get a parallel plan with a scalar UDF – that limitation has been lifted for Froid-inlined functions, but there’s something wonky happening here in the plan generation. Sure, I understand that the eager spool is still single-threaded, but… what about the rest? Hell, even a COUNT(*) from Users goes parallel here, and that finishes in under a second:

<sigh>

So, to recap: the query runs slower, doesn’t generate missing index hints, and is still single-threaded. For this particular query, Froid isn’t getting us across the finish line.

Let’s try inlining the function ourselves.

Take the function and inline it:
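Inlining by hand means replacing the function call with its logic – here, as a correlated subquery. As before, the object names are my placeholders for the originals that didn’t survive the formatting:

```sql
/* Same proc, but with the scalar UDF's COUNT(*) logic
   inlined as a correlated subquery instead of a function call: */
CREATE OR ALTER PROC dbo.usp_UsersByReputation
    @Reputation INT
AS
SELECT TOP 100 u.Id, u.DisplayName
  FROM dbo.Users u
 WHERE u.Reputation = @Reputation
   AND (SELECT COUNT(*)
          FROM dbo.Votes v
         WHERE v.UserId = u.Id) > 0;
GO
```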

And the difference is staggering: it runs in under a second, and everything in the plan goes parallel:

So, to recap: am I saying Froid is bad? Absolutely not – I’m loving it. Just like adaptive memory grants and air_quote_actual plans, Froid is one of my favorite features of SQL Server 2019 because when it helps, it REALLY helps. The example in this blog post is a fairly unusual case in my testing so far.

However, like I wrote yesterday, you’ve really gotta test your code to be able to say with confidence that you’re going to see a difference, and that the difference won’t be adverse. It’s just not fair to expect Microsoft to deliver an update to SQL Server that makes every single edge case go faster. When a few queries regress, you’ve gotta have a history of their prior plans in Query Store, and the knowledge to use a past plan to get the older behavior in place long enough for you to fix the query for the long haul. (Remember, as developers deploy a tweaked version of the query, like adding a field to it, the old Query Store forced plan will no longer apply because the query text changes.)
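Forcing a prior plan from Query Store looks roughly like this – the query_id and plan_id values are made-up examples, and in real life you’d pull them from the Query Store reports or catalog views:

```sql
/* Force the older, faster plan for a regressed query.
   The IDs here are examples only - get yours from Query Store. */
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

/* Later, once the query itself is fixed, undo the force: */
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;
```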

And again – this is just like the 2014 release when 99% of your code ran faster, but 1% ran slower, and that 1% came as a really ugly surprise that required hard, hard troubleshooting to narrow down. Joe Sack giveth, and Joe Sack taketh away. (Well, I guess even when he taketh, he giveth to uth consultanth.)

Updated April 12 – had the wrong inlined function in the “after” results, and was showing a manually-tuned version as running in 13 seconds. My bad – that was from a prior draft of the post. Props to Bryan Rebok for catching that in the comments.

