2 Features of SQL Server You Should Avoid

Crystal meth helps you lose weight and feel better about yourself.

It’s true.  Wikipedia says so.  Sure, the side effects of methamphetamine are more than just anorexia and euphoria, but if you’re looking to be an upbeat, skinny person, meth gets the job done.  SQL Server has a couple of similar features that might make your queries faster today – but they’ll make you sing the blues tomorrow.

Forcing Index Use with Index Hints

If you’re not happy about the indexes SQL Server chooses to use when executing your query, you can show it who’s boss by forcing it to use specific indexes like this:
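A minimal sketch – the table and column names here are hypothetical; only the index name matters:

```sql
SELECT LastName, FirstName
FROM dbo.People WITH (INDEX(IX_Lname_Fname))  -- force this specific index
WHERE LastName = 'Ozar';
```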

This forces the SQL Server engine to use the index IX_Lname_Fname as part of the execution plan rather than looking at all of the available indexes and picking what it thinks is the best choice.  You’re telling SQL Server that you know more about the schema and the data than it does.

Whenever you hard-code an index into a query, you open up a can of performance tuning worms:

  • If you add a better index, the engine won’t use it – you’ll be tied down to that one index until you touch every hard-coded query
  • If that index gets dropped, your query will be in a world of hurt – you can’t consolidate indexes when you run into performance problems
  • DBAs can’t do performance tuning without the developer recompiling the app
  • When SQL Server is upgraded, it may be able to build a better query plan – but not if you forced a specific index

You might get a faster query today – but you can’t react as quickly to the problems of tomorrow.  The engine has another query plan trick, too.

Forcing Execution Plan Guides

Plan guides don’t just force SQL Server to use a specific index; the ambitious DBA can tweak all kinds of things about how the query is executed.  You can force things like table scans.
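For example, here’s a sketch of forcing a clustered index scan on a query you can’t edit. Every name and the statement text are hypothetical, but sp_create_plan_guide is the real mechanism, and @stmt has to match the submitted batch exactly:

```sql
EXEC sp_create_plan_guide
    @name    = N'ForceScanOnPeople',   -- hypothetical guide name
    @stmt    = N'SELECT LastName FROM dbo.People WHERE LastName = @Lname;',
    @type    = N'SQL',
    @module_or_batch = NULL,
    @params  = N'@Lname varchar(50)',
    @hints   = N'OPTION (TABLE HINT(dbo.People, INDEX(0)))';  -- INDEX(0) = scan the clustered index
```

Disabling or dropping the guide later is done with sp_control_plan_guide – again without touching the application code.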

The nice thing about plan guides is that you can change them without recompiling your code and changing the queries.  The DBA can do performance tuning without getting developers involved every time.

Problem is, when you’re troubleshooting a query that isn’t behaving the way you’d expect, you can spot index hints just by looking at the query.  Plan guides, on the other hand, are invisible – they’re applied inside the engine, behind the scenes, and you have to know to look for them.

But wait – it gets worse.  Plan guides live on the server, inside each database – not in your code.  When you’re troubleshooting the same query on development, QA, and production servers and getting wildly different results, do you stop to think whether there’s a plan guide working behind the scenes?  Are your plan guides set the same across all three servers?  Are you sure?
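One sanity check when you suspect this: query the sys.plan_guides catalog view in the relevant database on each server and compare the results.

```sql
-- Any rows returned mean a plan guide is quietly shaping plans here.
SELECT name, scope_type_desc, is_disabled, query_text, hints
FROM sys.plan_guides;
```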

SQL Server 2008 Query Performance Tuning Distilled

A Better Solution: Learn More About the Engine

Rather than telling SQL Server who’s boss, consider picking up Grant Fritchey’s excellent book, SQL Server Query Performance Tuning Distilled.  It explains:

  • How statistics influence the accuracy of the execution plan
  • How to build better, more selective indexes that the engine will be more likely to use
  • How to understand if the execution plan is the problem, or just a symptom

Yes, we also cover this in our book SQL Server Internals and Troubleshooting, but if you’re the kind of person who’s considering index hints and plan guides in production, Grant’s book dives into more detail on these features.


34 Comments

  • Out of curiosity: is there anything good that may come out of these features? And if not, why were they ever implemented by MS in SQL Server?

    • Feodor – two questions there. First, yes, good things can come out of these features but only if you really know what you’re doing, and if you have the time to continually follow up on your work to make sure it’s still correct.

      Second, I don’t have inside info as to why they were implemented, but I bet it was done by user request. There are high-end DBAs who can actually put these features to good use, but they’re very, very, very few and far between.

  • Both these features come in handy when you have to deal with bugs in the engine. You have to agree there are flaws in the engine – not exactly bugs, but flaws in the algorithms the cost-based optimizer uses. Every greedy algorithm is fast but suboptimal. That’s where I’m using hints.

    Just this week I had to hint a query with FORCESEEK for a 1,000-fold speedup, only to find out that some clients use SQL 2000, where this hint is not available, so only an INDEX hint could speed it up. Yes, I could probably revise the indexes on the base tables, because updating stats (with fullscan) did not yield anything, but this is a trial-and-error task that usually takes a lot of development time.
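    Roughly what that looked like – table, column, and index names changed, obviously:

    ```sql
    -- On SQL Server 2008, FORCESEEK turns the scan into a seek directly:
    SELECT OrderID
    FROM dbo.Orders WITH (FORCESEEK)
    WHERE CustomerID = 42;

    -- On SQL Server 2000, FORCESEEK doesn't exist, so naming an index
    -- is the only way to push the engine toward the same plan:
    SELECT OrderID
    FROM dbo.Orders WITH (INDEX(IX_Orders_CustomerID))
    WHERE CustomerID = 42;
    ```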

    cheers,
    </wqw>

    • WQW – true, but what happens when a fix comes out and there’s a new version? If there’s an improved query plan, you won’t get it if you’ve forced a particular one. It means you can never stop rechecking your code with every new SQL Server service pack and hotfix.

      • Your point about the possibilities of better optimizations in future releases is true, BUT …

        if you have to choose between using hints to produce stable and efficient plans versus hoping that a future version of SQL Server will fix the problem, which would you really choose?

        I’d rather be efficient (ie, nearly optimal) all the time and get to sleep at night rather than having to live with inconsistent performance for queries that waffle between plans when they recompile. And revisiting hinted stored procedures upon upgrade shouldn’t be that monumental of a task if you don’t use hints (or plans) too much.

        • Chris – unfortunately, if you use index hints to produce stable and efficient plans, you won’t be sleeping at all after you bring in a DBA. See, the DBA needs to build new indexes, consolidate existing ones, and remove the ones that aren’t performing. If you hard-coded index names into hints, you’ll be constantly revisiting that and rereleasing your code every time the DBA needs to tune indexes.

          It sounds great – right up until you learn that it doesn’t, and by that point, you’re in a world of hurt. (And sleeplessness.)

          • None of your worst-case scenarios are inevitable – they are just things you would have to deal with if you implemented this strategy stupidly.

            For example, some places may have workflow division like that, and they would need to address your concerns with good communication between teams, and you’d hope that the system DBAs could be taught to scan the T-SQL code base for index references before doing drops. That’s easy, so your point is valid, but only to say that people need to do this right, not that people shouldn’t do it.

            In our case (and other places I’ve worked – 7 jobs), we recognize that the application DBAs (the T-SQL developers) are the best experts for tuning and indexing decisions. Our system DBAs (which it sounds like you are referring to) could get fired if they did what you are suggesting without talking to us first. We run our own missing index / unused index reports, and before we drop an index we have a due-diligence checklist that goes beyond “it hasn’t been used in 6 months” and also includes checks for hints.

            So, if you don’t mind me saying so, you’re presenting a straw man, and one that’s very easily overcome. And if the proof is in the pudding? I’ve been working with SQL Server since 4.21a, and I think that the number of problems we’ve permanently solved with hints is probably over 100, and the number of problems we’ve introduced is 0. Not because we’re smarter than everybody else, just because we replaced the FUD-y “never use hints” with the much wiser “use hints judiciously and safely.”

            By the way, in my current job, I have 5 T-SQL developers reporting to me, and another 4 at least in our office, with probably 20 at another site. And I would say that about one-third to one-half of them understand how to use hints wisely and effectively. Not quite as “few and far between” as you might think?

            Cheers,
            Chris

          • Chris – just as you say you’ve never run into that problem, I can say that I’ve very frequently run into this problem. Just because it’s always worked for you doesn’t mean it’s always worked. Best of luck, though!

  • Gotta love FUD. Thanks, Brent!

  • I actually won a copy of Grant’s book at the last HASSUG meeting and am about 100 pages into it. I can’t believe I never picked it up before that. Incredibly useful information.

  • Hi! Can you help-me?

    My English is basic.
    Today, I have this situation:
    select ColumnA, ColumnB, ColumnC, ColumnD, ColumnE,
    ColumnF, ColumnG
    from TableA
    where Data >= getdate()-5 and Data < getdate() and Req = 45

    This table is very large and has a clustered index on the column Data. This query uses the index on the Req column and is very slow.
    When I force it to use the index on the Data column, it is very fast.
    How can I resolve this without forcing the index?

    Thanks

  • And if you explicitly use an index name in your code and then some fellow with good intentions comes along and removes a duplicate index or renames one from the DTA 100 character default rubbish that someone blindly implemented, it will break the code.

    Today I was that DBA 🙁

  • I started to actively consider a plan guide today (this sounds a bit like a confessional, or like we’re at an AA meeting). It’s a stored proc that goes massively, painfully sideways every few weeks or so because of a classic parameter sniffing issue. Sure, a better option would be fixing the query or redesigning indexes, but it’s a vendor database and we’re not allowed to touch that sort of schema – the concept of plan guides, I think, is considered detached enough to be permissible. We’ve band-aided it by scheduling sp_recompile and an execution of the proc with known good params after the index/stats maintenance, but that’s a messy solution, and the data could change, potentially making the “known good” parameters give a bad plan. So that’s why a plan guide seemed to hold some promise.

    Now all that said, it didn’t work because of a curious syntax issue: the @stmt parameter was passed the exact statement, but it was “IF EXISTS(select something)” without the other half of the IF statement, and it fails syntax checking. Weird issue – I tried it any number of ways, including with the statement gleaned straight from Profiler, and it looks like it isn’t going to work in this situation.

    But I do agree, keep things simple. I think things like this can be beneficial in limited scenarios, but you have to be very, very deliberate about throwing that switch. And if you do implement “nonstandard” solutions like this, document the hell out of it for your fellow and future DBAs!

  • Hi, Brent. Curious if you’ve seen this sort of thing before:
    We’re running into deadlocking issues on very small tables (like, 200K and below with around 1,650 rows).
    I’m pretty well versed at troubleshooting deadlocks, but the customer isn’t letting me at their db, which is a lot like fixing a car in someone else’s garage without being able to actually ENTER the garage. Frustrating.

    At any rate, we’re seeing a clustered index scan in the plan that we’re being given, and I’ve determined that the users table is so small, SQL Server just isn’t bothering to use the indexes that are there.

    The developer is insisting that plan guides might be the answer.

    Just curious to hear your general feedback. Forcing an index might lock fewer rows and alleviate deadlocks, I suppose, especially when we’re selecting based on a unique PK.

    • Joe – I’d rather see the execution plans and the deadlocks involved. Generally speaking, plan guides would be the wrong way to solve deadlocks, though.

  • Jacob Margulis
    October 26, 2014 11:55 am

    Hi Brent. I have to respectfully disagree with you, and here is why: while in most cases SQL Server picks the best execution plan, there are plenty of instances in challenging and complex environments when it cannot. There are instances when rewriting SQL or tuning statistics simply does not help. In those instances, using both index and join hints improves performance 10- and 100-fold. …And very often it is critical to the business.

    You make a comment saying that “If that index gets dropped, your query will be in a world of hurt”. Come on – we both know that any half-decent DBA should never drop indexes that are being used without making sure that dropping them won’t break anything. And commenting on your other point: can a new release of SQL Server provide even better performance on its own, without using the hints? Possibly, but that is speculative. What is not speculative is that right now we have a performance problem and SQL Server cannot solve it on its own. Of course, what goes without saying is that using the hints should be done only as a last resort and left to highly skilled DBAs. Regards, Jacob Margulis, Senior SQL Server DBA.

    • Jacob – I’ll pick just one line in there to question. You wrote:

      “Come on – we both know that any half-decent DBA should never drop indexes that are being used without making sure that dropping them won’t break anything.”

      There’s a world of difference between “should” and “doesn’t.” I know I shouldn’t eat donuts, I should exercise more, and I shouldn’t have bought this iPhone 6 Plus. But back here in reality…

      • Jacob Margulis
        October 27, 2014 11:04 am

        Hi Brent. Thank you for the quick response! I know you are a busy man. I do not believe, though, that I follow you on the philosophical level: the things that you mentioned (eating donuts, exercising, etc.) are personal choices that we are free to make in a free society. Being a DBA, on the other hand, is a job that comes with responsibilities. One of them is to make sure that you do not drop an index that is being used (and we both know that it is very easy to check which indexes are being used and which are not). If a DBA does not fulfill his or her basic responsibilities, then he or she should simply be dismissed. In my very long career as a DBA I do not recall ever making that mistake. My main point as a DBA remains: sometimes there is no realistic alternative to using forced hints other than suffering degraded performance at business expense. Best regards, Jacob Margulis.

        • Jacob – being a DBA is also a job that comes with choices. Some DBAs have more work than they can possibly take care of during their limited work hours. Sounds like you’ve got plenty of time and are able to attend to even the smallest of details, though, and that’s great for you!

  • I just had a resume come across my desk that listed two things they had done in a recent position.

    While googling for an easy article to explain to management why plan guides are bad – and why this DBA might not be the best hire – I found this lovely article.

    Can you guess what #2 on the list was? You got it, INDEX Hints.

    Thank for the awesome article, two explanations in one!!!

  • Brent,

    Thanks as always for your excellent advice. I find myself in the unenviable position of taking over a SQL 2012 (Ent)-based database that is heavy-OLTP for a commercial sales website. My problems begin with the company’s choice of Sequelize for ORM, which sends EVERYTHING in as strings (the mssql driver doesn’t support parameterization?!) – so all I see by the end of the day is 500K+ single-use plans whose query hashes group into a couple dozen buckets; I flush the cache nightly after index maintenance. Creating plan guides seems like the way to go, or perhaps forced parameterization. Degree of difficulty: we’re migrating to AWS RDS this summer and will no longer have the luxury of making most of these decisions.

    Thoughts?

    P.S. – the dev team tells me there is no chance they can support stored procedures in the short term, the amount of js is just too great.

  • Today I was confused why my query wasn’t using a particular index, so I ran the query twice, once with the WITH Index hint and once without. Sure enough, the engine was not using this index, but it was faster without!

    So this just reinforces testing and benchmarking. I used the hint in an ad-hoc manner to understand the situation, and decided that it was clearly a bad idea from a performance standpoint, in addition to the maintenance nightmare.

    • Oh, and the important part was looking at the Actual execution plan for each and comparing. Should have said that in my OP.

  • I’m interested in your opinion on forcing a recompile via a plan guide. I have 3 specific queries on a Dynamics database that generate query plans that perform well for a while (3-5ms), then suddenly one of them will generate a new plan that runs for hours. Our initial fix was to kill the query, drop the plan from the cache, and re-run the query. The re-run generates the good plan, which once again runs in 3-5ms, but this relies on the user contacting our helpdesk, logging on, … etc. Since we can identify the query, I’ve tested a plan guide to force a recompile. Being Dynamics, this is the only option we currently have, and our testing shows this is working, but we are not ready to try it in production.

    • Hi Leo,

      How often do these queries run? If it’s quite often, forcing them to recompile all the time may be hell on your CPUs. It’s not exactly a lightweight task. Make sure you think that part through before forcing a recompile.

    • One thing we have done is create a job that runs every few minutes or so, examines the dm_exec_procedure_stats dmv for a few known procs, and when average logical reads breaches a known/stored threshold, we fire off an sp_recompile automatically. That limits the bleeding for very high volume proc executions that are parameter-sensitive, getting a bad plan. It’s not perfect, but it’s not 2016 either and we don’t have access to Query Store yet! 🙂 Also if your problem is ONE user getting a long running bad plan, that goes for hours (instead of ours where a wide amount of users get a plan that runs many seconds, completely bogging down the system) this may not help you as the logical reads won’t be logged if that user is still running the query. Query Store in 2016 definitely seems to have some promising potential for this kind of thing, though.
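      A rough sketch of that job step – the proc name and threshold here are made up:

      ```sql
      -- Hypothetical: fire sp_recompile when a known proc's average logical
      -- reads blow past a known-good threshold for that proc.
      DECLARE @avg_reads BIGINT;

      SELECT @avg_reads = total_logical_reads / NULLIF(execution_count, 0)
      FROM sys.dm_exec_procedure_stats
      WHERE object_id = OBJECT_ID('dbo.usp_GetOrders')
        AND database_id = DB_ID();

      IF @avg_reads > 100000  -- stored threshold for this proc
          EXEC sp_recompile 'dbo.usp_GetOrders';
      ```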

      Also make sure you dig in and try to find out exactly why that query gets a slow plan…differences in parameters it is calling, differences in the execution plan, and why the “bad” execution plan was generated. I found one instance that I had chalked up merely to the bad luck of parameter sniffing, that was an issue relating to the default sampled statistics on an index making it seem like values didn’t exist on a table. So instead of trying to react/bandaid the bad plan issue, I was able to solve it by scheduling a fullscan stats update for that index overnight. So sometimes you can manage to fix the root cause, but sometimes you just have to bandaid it…

    • All very good points. There are three queries causing a problem, and they are part of the Dynamics core, so we can’t change them in any way. I strongly suspect the reason they cause a problem every now and then is this piece of code in the WHERE clause: AND (T4.QTY=(@P21-T1.QTY))). Classic parameter sniffing on some very large tables with a lot of cross joins.
      The queries are run about 20 – 80 times a day, so not a big recompile load, but a major problem is they run for hours compared to a few seconds.
      The question is still: Is this a reasonable time to use the Plan Guide? I think it is.

      • I realise this is necro posting but I’ll throw my experiences into the ring (FWIW my credentials go back to SQL Server 1.1 on OS/2, in 1991, so I guess I’ve been milking this donkey for over a quarter century).

        Rule 1. Don’t use plan guides.
        Rule 2. Don’t use plan guides.
        Rule 3. System is running like a dog, the business cannot work, and the Pointy Headed Boss (PHB) demands a quick result, and a plan guide hack might just fix it.

        In the real world, Rule 3 always overrides Rule 1 and Rule 2, no matter what any DBA says, even Brent Ozar!

        Plan Guides are indeed very much a last resort, and the only cases where I’ve ever implemented them are when an application and its SQL are machine generated and cannot be manipulated, or when it’s vendor-supplied code that cannot be changed. Neither is a good situation to be in. But when you’ve asked at least a dozen times if the code can be tweaked at source, and the answer is always “No”, there are not always many options.

        If the end-to-end code is within your control, changing the query is always the way to fix performance problems. If not, as a DBA you have to come up with other methods. Sometimes it might be as simple as fiddling with indexes, but this can have unwanted side effects on other areas of the system, so be prepared for regression and non-functional testing. Indexes, for example, are far more maintainable than a plan guide. On the other hand, a plan guide is very targeted and is extremely unlikely to affect other parts of your system.

        Regretfully, and respectfully, it is a utopian world where you have the luxury of not having to resort to a less than perfect solution. It’s also a fallacy that the optimizer always makes the best decision, as any DBA who’s been around for a while will know. I remember the marketing mantra from Microsoft when SQL Server 2000 was released, it was “self tuning”, and, well, it was, sort of. What it actually meant is that the edge cases where the optimizer failed were fewer, but it also meant it became harder to fix the edge cases that remained.

        In these rare edge cases, the onus is always on the DBA to make it absolutely, and patently, clear that as a solution Plan Guides are a sticking plaster and are very difficult to maintain. For example, they don’t even survive a database name change. They will always be weak points in regression testing. In some scenarios, extracting the plan is quite a fiddly and time-consuming process, necessitating end users reproducing the scenario so you can sniff out the offending query in a production environment and do your tweaking. That tweaking can take many hours. The worst offenders, and the most difficult to resolve in my experience, are complex machine-generated queries that use API server cursors; these can take several days to come up with an acceptable and documented fix. You definitely need to know what you are doing, and have a large bag of patience, when presented with a machine-generated 100+ table join!

        To reiterate, if you’ve always been in a situation where you’ve had control over the queries, then you shouldn’t ever need plan guides. If, however, you’ve had to support third party “database agnostic” products, or so-called RAD environments that produce machine-generated code, then all bets are off, but even then plan guides should be your last resort, they are almost completely unmaintainable.

        In closing, a bit of history for you: before Plan Guides, very occasionally I used to have to fiddle the stats to get the “right” result for the PHB. That was even worse than plan guides. At least plan guides are very targeted.
