Query Tuning Week: What’s the Difference Between Locking and Blocking and Deadlocking?

If Erik takes out a lock on a row,
and no one else is around to hear it,
that’s just locking.

There’s nothing wrong with locking by itself. It’s perfectly fine. If you’re the only person running a query, and you have a lot of work to do, you probably want to take out the largest locks possible (the entire table) in order to get your work done quickly.
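
For example, if you really are the only one in the table, you can ask SQL Server for that big lock up front with a table hint. Here’s a minimal sketch, using a made-up Sales.Orders table and an archive job as the excuse:

    -- Take an exclusive lock on the whole table up front instead of
    -- accumulating (and escalating) thousands of row locks along the way.
    BEGIN TRAN;

    UPDATE Sales.Orders WITH (TABLOCKX)
    SET    Status = 'Archived'
    WHERE  OrderDate < '2015-01-01';

    COMMIT;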

You don’t want to monitor for locking – it just happens, and that’s okay. Until…


If Erik takes out a lock on a row,
and shortly thereafter Tara wants that same row,
that’s blocking.

Blocking is bad, but note that you won’t see blocking until you see concurrency. This is one of the many reasons why that application runs so well in development, but then falls over in production – in development, there’s not enough concurrency to cause blocking.

Blocking will solve itself whenever Erik’s query releases his locks. Tara’s query will wait forever for Erik’s query to finish. SQL Server won’t kill or time out Tara’s query automatically because there’s no reason to. SQL Server, sweet naive fella that it is, believes Erik’s query will eventually finish.

(Note that I’m not specifying the difference between readers and writers here – by default in the boxed product of SQL Server, readers can block writers, and writers can block readers.)
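
If you want to see blocking for yourself, here’s a minimal sketch using two query windows and a made-up dbo.Users table:

    -- Session 1 (Erik): update a row and leave the transaction open.
    BEGIN TRAN;
    UPDATE dbo.Users SET Reputation = Reputation + 1 WHERE Id = 1;
    -- ...no COMMIT yet, so the exclusive row lock is still held.

    -- Session 2 (Tara): under the default READ COMMITTED isolation level,
    -- this SELECT needs a shared lock on that same row, so it just sits there.
    SELECT DisplayName, Reputation FROM dbo.Users WHERE Id = 1;

    -- The moment Session 1 commits (or rolls back), Session 2 comes back.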

Users hate blocking because it shows up as very slow query performance.


If Erik takes out a lock on a row,
and Tara takes out a lock on a different row,
and then Erik wants to add a lock on the row Tara already locked,
and then Tara wants to add a lock on Erik’s locked row,
that’s a deadlock.

Both queries want something that the other person has, and neither query can finish on its own. No query will automatically relinquish its lock – both of them are going to wait forever for the other query to finish because neither one is really aware of why it can’t get the lock it wants.

(I’m keeping this one simple by referring to rows of the same table, but this can happen with lots of different objects scattered through different databases.)

SQL Server fixes this one for you by waking up every few seconds, looking around for deadlocks, and then killing the query that it believes will be the cheapest to roll back.
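
Here’s a minimal sketch of that dance, using the same made-up dbo.Users table and two query windows:

    -- Session 1 (Erik)
    BEGIN TRAN;
    UPDATE dbo.Users SET Age = 40 WHERE Id = 1;   -- locks row 1

    -- Session 2 (Tara)
    BEGIN TRAN;
    UPDATE dbo.Users SET Age = 30 WHERE Id = 2;   -- locks row 2

    -- Session 1: now reaches for Tara's row, and waits...
    UPDATE dbo.Users SET Age = 41 WHERE Id = 2;

    -- Session 2: ...and now reaches for Erik's row. That's the deadlock.
    UPDATE dbo.Users SET Age = 31 WHERE Id = 1;

    -- Within a few seconds, the deadlock monitor picks a victim, rolls its
    -- transaction back, and hands it error 1205.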

Users only notice deadlocks if they happen in a user-facing application that doesn’t automatically retry deadlock victims. More often than not, deadlocks happen silently to service applications, and nobody even knows they’re happening – except the DBA. The developers just assume that their queries fail sometimes for no apparent reason, and it’s probably their own code, so they keep quiet.
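
If the application can’t be changed, one workaround is a retry loop on the database side. This is just a minimal sketch with the same made-up dbo.Users table – most shops put this logic in application code instead:

    DECLARE @Retries int = 0;

    WHILE @Retries < 3
    BEGIN
        BEGIN TRY
            BEGIN TRAN;
            UPDATE dbo.Users SET Age = 41 WHERE Id = 2;   -- the real work goes here
            COMMIT;
            BREAK;                     -- success: stop retrying
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK;

            IF ERROR_NUMBER() = 1205   -- we were picked as the deadlock victim
                SET @Retries += 1;     -- count the attempt and go around again
            ELSE
                THROW;                 -- anything else is a real error: re-raise it
        END CATCH;
    END;
    -- A fuller version would also wait a moment between retries and raise
    -- its own error if all the retries get used up.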


How to Fix Locking, Blocking, and Deadlocks

Here’s the short version:

  1. Have enough nonclustered indexes to support your queries, but no more than that – typically, I recommend that you aim for 5 or fewer nonclustered indexes, with 5 or fewer fields each
  2. Keep your transactions as short and sweet as possible – don’t do a BEGIN TRAN, and then lump in a whole bunch of SELECT queries that grab configuration values from various tables
  3. Use the right isolation level – by default, SQL Server uses pessimistic concurrency (read committed), but there are better optimistic concurrency options like snapshot isolation and RCSI
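
For example, turning on RCSI and snapshot isolation is a pair of database-level switches. A minimal sketch with a placeholder database name – note that the first command wants exclusive access to the database, so run it when you can briefly kick everyone out:

    -- Under RCSI, readers see the last committed version of each row instead
    -- of waiting on writers' locks. The version rows live in tempdb.
    ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;

    -- Optionally, also allow full snapshot isolation on demand:
    ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- A session that wants snapshot isolation then asks for it explicitly:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;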

Want the long version? We’ve got a whole ton of blocking resources to help you identify whether you’ve got lock waits, see which tables have the worst blocking problems, and find the lead blockers in blocking firestorms.
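
As a quick starting point, here’s a minimal sketch that lists who’s blocked and who’s doing the blocking right now, using the sys.dm_exec_requests DMV:

    SELECT r.session_id,
           r.blocking_session_id,            -- the session holding the lock
           r.wait_type,
           r.wait_time,                      -- milliseconds spent waiting so far
           t.text AS query_text
    FROM   sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE  r.blocking_session_id <> 0;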


15 Comments

  • Wait, since when do developers “…assume that their queries fail sometimes for no apparent reason, and it’s probably their own code…”???

    It’s *ALWAYS* a problem with the server until you show them that it isn’t…
    🙂

    Seriously, though, a very nice, simple explanation of locking / blocking / deadlocks.

    Reply
  • I always like cookies and milk.

    If I have the cookies and I want the milk, and if you have the milk and you want the cookies, then we are deadlocked over cookies and milk.

    Reply
  • (Writing this “from db developer point of view”).
    Sorry if it sounds harsh, but sometimes I hate to hear advice like that. 🙁

    > Keep your transactions as short and sweet as possible – don’t do a BEGIN TRAN, and then lump in a whole bunch of SELECT queries that grab configuration values from various tables
    I presume the correctness of my transactions depends on these “configuration values”; otherwise I just won’t be reading them… So, what’s the point (is it in using stale “configuration values”)?

    > Use the right isolation level – by default, SQL Server uses pessimistic concurrency (read committed), but there are better optimistic concurrency options like snapshot isolation and RCSI
    IMHO, bad advice, but frequently heard from DBAs… ;(
    I suspect it’s given due to the simple fact that any problems that occur afterwards are the _developer’s_ fault, not the DBA’s. So, my default is SERIALIZABLE (+SNAPSHOT for R/O transactions, ok), thanks, and if the DBA can’t deal with the performance… it’s _his_ problem. 😉

    Reply
    • Petr – interesting thoughts! Thanks for stopping by.

      Reply
    • Developers marry it, DBAs live with it 😀

      Reply
    • “and if the DBA can’t deal with the performance… it’s _his_ problem”
      Always the DBA having to provide the answers to the developers’ lack of ability or thoroughness 🙂

      “So, what’s the point (is it in using stale “configuration values”)?”
      The application configuration shouldn’t really be changing that much during a working day, so a read should be fine without the BEGIN TRAN or the SERIALIZABLE isolation level. Configuration values really only change during deployments, which are generally performed outside of customers’ working hours, which reduces or removes object contention.

      Reply
      • > Always the DBA having to provide the answers to the developers’ lack of ability or thoroughness.
        Why not? If our code is just _perfect_, who should answer, then? 😉

        > The application configuration shouldn’t really be changing that much during a working day…
        So, users should navigate carefully around the swamp created due to “solutions” like this? 😉
        Put the other way: why should developers introduce _correctness_ hazards in this case at all? Just to make the DBA’s life easier? 😉

        Reply
  • Jyoti P. Sankaran
    August 19, 2016 1:04 am

    Apropos your suggestion, in the context of avoiding locking/blocking/deadlocks, to restrict the number of non-clustered indexes to a max of 5: is that to be understood as 5 per table, or 5 involved in a single query?

    Reply
    • 5 per table.

      Reply
      • Jyoti P Sankaran
        August 19, 2016 11:45 am

        Thanks. Sheer common sense suggested to me 5 per table, as 5 per query would mean dynamically dropping indexes for different queries at execution time, which would be well-nigh impossible. Wanted to confirm nevertheless, since database performance can be an arcane subject!

        BTW, any chance that you may have softened your stubborn opposition to the use of schemas (introduced in v 2005)? I read your blog discussion on the subject and talked with you about it during a break in SQL Saturday 2015 at Microsoft Washington DC 🙂

        Kudos for doing such a stellar job educating the SQL Server community on performance!

        Reply
  • Good summary. One solution that I recommend to business users who are typically querying the database during the day is to use WITH(NOLOCK) in their queries. Of course this is for users and their queries, not for applications. Letting applications use uncommitted data is not a good idea.

    Reply
