Cool Query Hints


The SQL Server documentation has a pretty cool list of query hints:

Yeah, I surf in dark mode. I’m a dark kinda guy. But wait – what’s that colored box? ENHANCE!

I had to open this page in incognito mode because that warning doesn't show up when *I* log in, baby

I could make jokes here, but … every single thing in that caution is true. SQL Server really does typically select the best execution plan for a query, like 99.999% of the time, and if you use a query hint, you’re probably making a mistake.

However, having said that, if you’re reading this blog, you’re already more experienced than most SQL Server users. No seriously, I know you’ve got impostor syndrome, but you’re doing alright, and I think it’s high time you considered using a few of these when your query tuning situations get really dire.

For more details about each of these, including the versions where they came in, read the query hint documentation. (And if you think you’ve found an error in the documentation, or you want to improve it by making it more specific, don’t forget that you can contribute changes to the documentation.)

OPTION (MIN_GRANT_PERCENT = 5) or OPTION (MAX_GRANT_PERCENT = 10) – when the execution plans for your queries are asking for way too much (or not enough) memory, and you’ve already tried tuning the queries and the indexes, this hint is a useful last resort.
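As a sketch, both hints can go in the same OPTION clause; the numbers are percentages of the maximum workspace memory a query can get. (Table and column names here are made up for illustration.)

```sql
-- Hypothetical sort-heavy query that keeps over-asking for memory:
-- floor the grant at 5% and cap it at 10% of max workspace memory.
SELECT CustomerId, OrderDate, TotalAmount
FROM dbo.Orders
ORDER BY TotalAmount DESC
OPTION (MIN_GRANT_PERCENT = 5, MAX_GRANT_PERCENT = 10);
```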

OPTION (FORCE ORDER) – if you’re struggling with an execution plan that doesn’t quite understand which table is the most selective or should really be processed first, this hint forces SQL Server to process the tables in the same order you wrote ’em in the query. I like this better than index hints because it still gives SQL Server flexibility in the way it accesses each table.
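For example, if you know the lookup table filters the result set way down but the optimizer keeps joining it last, you can write it first and pin the order. (Schema here is hypothetical.)

```sql
-- We know dbo.SmallLookup is far more selective than the optimizer
-- estimates, so list it first and force the join order.
SELECT o.OrderId, o.OrderDate
FROM dbo.SmallLookup AS l
INNER JOIN dbo.Orders AS o
    ON o.StatusId = l.StatusId
WHERE l.StatusName = 'Urgent'
OPTION (FORCE ORDER);
```

SQL Server still picks the join types and index access methods on its own; only the table order is locked in.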

OPTION (MAXDOP 0) – if you’re stuck on a third party vendor’s server where they demand that you set MAXDOP = 1, did you know you can set MAXDOP higher for your own query by using a hint? Well, now you know: this query-level hint overrides the server’s MAXDOP setting. Great for building indexes or reporting tables.
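A quick sketch, with a made-up reporting query: the hint value overrides the server-level max degree of parallelism setting for just this statement, and 0 means “use all available schedulers.”

```sql
-- Server-level MAXDOP might be 1 (per the vendor's demands), but this
-- one query can still go parallel. 8 is an arbitrary illustrative cap;
-- MAXDOP 0 would remove the cap entirely.
SELECT ProductId, SUM(Quantity) AS TotalSold
FROM dbo.OrderDetails          -- hypothetical large table
GROUP BY ProductId
OPTION (MAXDOP 8);
```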

The documentation has a lot more, and you certainly won’t use ’em often – but today’s probably a good day to go back in and check to see what new options are available in the version of SQL Server you’re running these days. I’m starting to cover ’em in Mastering Query Tuning, too.


12 Comments

  • Hugo Kornelis
    January 23, 2020 8:39 am

    The one hint I would add to your list of “useful hints for those who know what they’re doing” is OPTION (RECOMPILE). For naughty procedures that suffer from bad parameter sniffing, this hint is an easy fix for those cases where recompile time multiplied by number of executions per time unit doesn’t get too high.
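    As a sketch of what Hugo describes (procedure and table names are hypothetical): the hint makes SQL Server build a fresh plan for the actual parameter value on every execution, trading CPU spent recompiling for immunity to sniffed plans.

    ```sql
    -- Hypothetical parameter-sniffing victim: recompile per execution
    -- so each call gets a plan built for its own @CustomerId value.
    CREATE OR ALTER PROCEDURE dbo.GetOrdersByCustomer
        @CustomerId INT
    AS
    BEGIN
        SELECT OrderId, OrderDate
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId
        OPTION (RECOMPILE);
    END;
    ```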

  • Agree, for the reason stated by none other than Sherlock Holmes: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” (Sir Arthur Conan Doyle)

  • Bill Goetschius
    January 23, 2020 11:45 am

    Code can be created that is too complex for the optimizer. I have read an article on exactly that, and we have experienced it first hand. The impact of a few SQL statements can be devastating to the system. For example, we had parameter sniffing causing huge CPU issues once or twice a month. We update our statistics nightly, but at times parameter sniffing crippled us. Although SQL Server experts warned against it, we had done enough testing to know that turning parameter sniffing off was a gain for us. The past 3 years have been 100% free of parameter sniffing pain, and most of our code now runs efficiently all the time.
    The optimizer is very good and created by people light years ahead of me, but optimizers are not perfect, never have been, and probably never will be. If it cannot accurately handle a few SQL statements, it may not matter that 2,000 others work correctly; the impact of the few can be devastating to the system. For example, when upgrading to SQL Server 2016, we found that using OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION')) to fall back to the 2012 cardinality estimator saved us far too many times for us to trust any optimizer’s perfection. I still have not found out whether SQL Server 2019 will allow this statement to revert back to the 2012 cardinality estimator. Funny how the exact same SQL and statistics work wonderfully under the 2012 cardinality estimator and crawl under the 2017 one.
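    For reference, the hint the commenter describes looks like this in context (tables are hypothetical); it reverts only the hinted query to the pre-2014 (“legacy”) cardinality estimator while the rest of the workload keeps the current one:

    ```sql
    -- Fall back to the legacy CE for this one query only.
    SELECT o.OrderId
    FROM dbo.Orders AS o                       -- hypothetical tables
    INNER JOIN dbo.OrderDetails AS d
        ON d.OrderId = o.OrderId
    WHERE o.OrderDate >= '2020-01-01'
    OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
    ```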

    • Noah Engelberth
      January 23, 2020 1:33 pm

      In my experience, FORCE_LEGACY_CARDINALITY_ESTIMATION is less useful for fixing a genuinely bad estimate from the 2016 CE, and more necessary for monkey-patching bad queries (ones with poorly constructed self-joins or other WHERE clause constructs that cause bad estimates in BOTH CEs, where the 2016 CE just turns the bad estimate into a more obviously worse plan). I’ve done several upgrades, and I’ve never had CE problems except on very complex queries that, when we finally took the time to dig into them, had hidden self-joins (often from nested view chains) or other really bad practices that the 2012 CE had basically been papering over because it produced a barely good enough plan.

      • My experience with the legacy cardinality estimator has been quite the opposite. I’ve seen incredibly basic joins, with a single WHERE constraint and adequate indexes, get completely destroyed by the newer CE. It’s truly a magic fix in those head-scratcher scenarios.

  • I’ve used the typical tricks of breaking queries down with temporary tables, switching between LEFT OUTER JOINs and EXISTS, using OUTER APPLYs, and all the little tricks to wring speed out of the machine, but sometimes you just have to use a hint.
    We have partitioning on poor columns on some extremely big tables, and hinting a FORCESEEK once or twice has made a massive difference.
    So many times the memory grant is entirely wrong, and usually too large. I wish I could move up to the latest SQL Server to get memory grant feedback, so that hint may get used, but the one I really like in this post is FORCE ORDER. I’d love to see if it works things out better for me.

    More and more I wish I could mark parts of a plan to recalculate on the fly: like a partial RECOMPILE, or not a full plan RECOMPILE, but an estimates-only RECOMPILE based on actual row counts. I wonder whether our servers are becoming fast enough for that to be a thing, or whether the query engine could tune multiple optimal paths based on the size of estimates and cache a multi-plan optimisation rather than a single plan with parameter sniffing issues. Probably what the Query Store does, right? This is where I grumble about needing to upgrade again.
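    The FORCESEEK trick the commenter mentions is a table hint rather than a query hint, so it goes in the FROM clause. A minimal sketch, with a hypothetical table and index:

    ```sql
    -- Force a seek on a specific index when the optimizer insists on
    -- scanning a huge partitioned table. Index name is made up here.
    SELECT OrderId, OrderDate
    FROM dbo.Orders WITH (FORCESEEK (IX_Orders_CustomerId (CustomerId)))
    WHERE CustomerId = 42;
    ```

    The bare form WITH (FORCESEEK) also works if you want to demand a seek but leave the index choice to SQL Server.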

  • Oh – I like the prospect of FORCE ORDER. Total flashback to the mid-90s query writing against Oracle 6 (or Oracle 7 in compatibility mode) when humans did know better than the optimiser.

    (Not totally fair – our tuned application ran appallingly against the first statistical optimiser. However our untuned application with user base of 1-2 users ran 10 times faster)

  • Douglas Zane Coats
    February 3, 2020 7:23 am

    IDK if I am a glutton for punishment but I really do enjoy a good reason to go read documentation.

  • Great stuff Brent. Pure gold, you sharing all this with us. Don’t know if you’ll find time to have a look at my question; if you can’t, perhaps your readers can share ideas?

    A simple query with 5 tables performs better if we change the table ordering in the FROM clause. There is no ORDER BY in the statement.
    Logical reads from the table we shift up in the FROM clause order decrease by a factor of 150,000.
    Why would the SQL 2016 engine not notice there is a better query plan in the first place?
    This behaviour happens with a lot of queries on our DB. It looks like there is an unwanted, ghostly OPTION (FORCE ORDER) by default in our system.
    What could be the cause?

    • Blog comments aren’t a great place for free personal consulting, unfortunately. If you’d like consulting, feel free to click on that at the top of the site. Thanks!

    • Hugo Kornelis
      March 6, 2020 1:07 am

      As a generic reply:
      1. Optimization starts at the current query, tries all kinds of changes, and then stops based on time spent vs estimated cost (“good enough plan in short enough time”). This means it won’t see all possible permutations, and which ones it sees depends on the starting point (in a way that is effectively unpredictable).
      2. If some joins are outer joins, changing the order CAN change the query results if proper care isn’t taken. At that point performance becomes irrelevant. (If all joins are inner joins, this doesn’t apply.)
      3. If, for a five-table join, an order change decreases reads by a factor of 150,000, I’m 100% sure there is a bad cardinality estimate in play. That’s your real problem.

      (Sorry if I just cost you a consulting gig, Brent 😉 )

