NOLOCK Is Bad And You Probably Shouldn’t Use It.

[Photo: “I’m waiting for the YOLOck option.”]

When you put NOLOCK in your query, SQL Server will:

  • Read rows twice
  • Skip rows altogether
  • Show you changes that never actually got committed
  • Let your query fail with error 601, “Could not continue scan with NOLOCK due to data movement”

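Under the hood, NOLOCK is just shorthand for the READ UNCOMMITTED isolation level, applied to a single table. Here’s a minimal sketch using a hypothetical dbo.Orders table – both forms below read uncommitted data, so both are exposed to every symptom on that list:

    -- dbo.Orders is a hypothetical table, for illustration only.
    -- Form 1: the table hint
    SELECT OrderId, OrderTotal
    FROM dbo.Orders WITH (NOLOCK);

    -- Form 2: the equivalent session-level isolation setting
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT OrderId, OrderTotal
    FROM dbo.Orders;
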
This is not a bug. This is by design. To understand what happens, read these blog posts in order:

  1. Using NOLOCK? Here’s how you’ll get the wrong query results.
  2. “But NOLOCK is okay when my data isn’t changing, right?”
  3. “But surely NOLOCK is okay if no one’s changing data, right?”

After reading those, you’re going to ask, “So what am I supposed to do if I need to avoid blocking problems?” There are 3 solutions:

  1. Have enough indexes that your queries go fast, but not so many that your deletes/updates/inserts take forever. I cover this in the locking module of my Mastering Index Tuning class.
  2. Keep your transactions short and sweet, and use batching to avoid lock escalation. I cover this in the deadlocking module and the batching module of my Mastering Query Tuning class.
  3. Use read committed snapshot isolation (RCSI) or snapshot isolation (SI) so that readers and writers can coexist. I cover this in the isolation levels module of my Mastering Server Tuning class. (There’s a quick sketch of options 2 and 3 right after this list.)
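
For flavor, here’s a rough sketch of what options 2 and 3 can look like in T-SQL. The object names (dbo.ArchiveLog, YourDatabase) are placeholders, and you’d want to test the RCSI switch carefully – it changes read behavior database-wide and adds row-versioning load to tempdb:

    -- Option 2 sketch: delete in small batches so each transaction
    -- stays short and under the ~5,000-lock escalation threshold.
    WHILE 1 = 1
    BEGIN
        DELETE TOP (1000)
        FROM dbo.ArchiveLog
        WHERE CreatedDate < DATEADD(YEAR, -1, GETDATE());

        IF @@ROWCOUNT = 0 BREAK;
    END;

    -- Option 3 sketch: with RCSI on, readers see the last committed
    -- version of each row instead of blocking behind writers.
    ALTER DATABASE YourDatabase
        SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;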

I know: NOLOCK sounds so much easier because you can just slap it into a query and it feels like it’s going faster. Just remember the demos in those first 3 blog posts up above: sooner or later, your users are going to see inaccurate query results.

If it’s so bad, why is NOLOCK an option?

Because in some applications, accuracy really doesn’t matter. I know it sounds like a joke, but I’m being serious.

For example, one of my clients is a high-volume online shopping site. Their admins want to see that data is moving through the shopping cart process – that users are adding new shopping carts to the database. They use monitoring queries to check the 10 carts most recently created and most recently checked out. They don’t really care whether the results are transactionally accurate – they just wanna see that new orders are moving through the system. NOLOCK works fine there.
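
A monitoring query in that spirit might look something like this – the table and column names are made up for illustration:

    -- Hypothetical schema. Accuracy doesn’t matter here; we just
    -- want a rough signal that carts are still flowing through.
    SELECT TOP (10) CartId, CreatedAt, CheckedOutAt
    FROM dbo.ShoppingCarts WITH (NOLOCK)
    ORDER BY CreatedAt DESC;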

Another example is the open source First Responder Kit scripts like sp_BlitzIndex. I don’t need transactional accuracy when I’m checking the usage statistics on an index – we’re just getting rough numbers. If the same index happened to be shown twice, you would simply go, “Well that’s odd,” and you’d rerun it to check.

But if your application involves money, health care, or anything else vitally important, you better think twice before using NOLOCK.


9 Comments

  • Whenever 99% of ppl say something is wrong, I tend to think it might have merit. Yours is one of the few blogs that says, “well, there might be times when this is OK.” Good for you. Dogma is bad. As you mentioned, “is data flowing” is one use case…anywhere else where a “fuzzy” answer is acceptable might be ok too. Data scientists like to look at the distribution of data in tables…we don’t need to be precise, but we need a feel for the data: nulls, outliers, etc. The best we have is nolock (which will still muck up your buffer cache unfortunately). As a data scientist often forced (or strongly told) to query transactional systems (against my advice)…NOLOCK is essential. Often no one knows if RCSI is set up, and I want to avoid causing problems as much as I can.

    Often folks _think_ they need precise answers, and they rarely do. Tell your accountant the DR != CR and that’s career suicide. Tell your Chief Marketing Officer that the quarterly sales are off by $200K and you’ll get a diatribe about data quality and garbage in/garbage out and lack of faith in the data. If, after the harangue, you tell her, “well, that’s $200K over $200M (i.e., less than 1% variance) and we are certain the QoQ numbers are trending in the right direction”…well, suddenly everyone calms down. Yeah, NOLOCK will skip and double-count…meh…I’m cool with that (in some cases).

    Reply
  • Yup, checking status of a live web server is a great example of where nolock is totally appropriate. How many people are using the website? If the result is 1227 when it’s really 1228, nobody will know or care. What everyone will care about is if you hold a lock on the sessions table for a second or two!
    Same for ‘people who bought this also bought’… a light touch trumps accuracy.

    Reply
  • NOLOCK = DONTCARE

    Reply
  • Alexander Speshilov
    November 23, 2021 6:36 pm

    Another case for nolock – service/technical queries to “spy” on what a transaction in another spid is doing right now. But anyway – no one should build critical business logic or reporting on nolock since 2008 and RCSI.
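
    A sketch of that spy pattern, with a hypothetical dbo.Orders table – while one session holds an open transaction, a second session can peek at the uncommitted change instead of blocking:

        -- Session 1: open transaction, not yet committed
        BEGIN TRAN;
        UPDATE dbo.Orders SET OrderStatus = 'Shipped' WHERE OrderId = 42;
        -- (no COMMIT yet)

        -- Session 2: NOLOCK shows the uncommitted 'Shipped' value
        -- instead of waiting on session 1's locks
        SELECT OrderId, OrderStatus
        FROM dbo.Orders WITH (NOLOCK)
        WHERE OrderId = 42;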

    Reply
  • Gustav Swanepoel
    November 23, 2021 9:13 pm

    Great blog post as ever. Forwarded it on to my colleagues.

    Reply
  • I admit we’ve used NOLOCK in a couple of places where things were working fine for years, then it feels like one day suddenly the stored procedure just ground to a halt with blocking. We never really worked out why and the data issues are fine.
    Another case – we have some analysts using SAS from another server. When they query our Warehouse data, it seems that the command it sends to our server is SELECT * FROM <table>, no matter what they actually write. That was causing issues on large tables that we update throughout the day. So my manager worked out how to at least get SAS to use SELECT * FROM <table> (NOLOCK).

    Reply
  • Where I’ve seen NOLOCK used (or misused) most frequently is when data analysts are running ad-hoc queries or month-end reports against a live production database. This is the scenario where it’s not really bad SQL coding habits – it’s that they feel a legitimate NEED to use nolock or read uncommitted isolation for performance reasons. For example, the query just won’t run to completion without it. They typically don’t need up-to-the-moment fresh data, and in fact would prefer the data frozen as of a certain date.
    So, if setting up a readable replica or restoring from backup is too much trouble or expense, then a quicker and less expensive option is to create a snapshot database or live mount a database backup. This solves all of the problems mentioned above.
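
    Creating a database snapshot looks roughly like this – the database name, logical file name, and path are all hypothetical, and NAME must match the source database’s logical data file name:

        CREATE DATABASE SalesDW_Snapshot
        ON ( NAME = SalesDW_Data,
             FILENAME = 'D:\Snapshots\SalesDW_Data.ss' )
        AS SNAPSHOT OF SalesDW;

        -- Analysts then query the frozen, point-in-time copy:
        SELECT COUNT(*) FROM SalesDW_Snapshot.dbo.FactSales;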

    Reply
  • Hello,

    I have some thoughts about NOLOCK usage… First of all, I advocate against NOLOCK, and here I’m trying to collect arguments against this bad practice. I can’t rely only on technical arguments, e.g., “you are going to read dirty data”… Why? Here’s the reason: if you have software that shows data periodically, how could an end user know or identify that they’re not reading a perfect result? And even if they suspect an inconsistent result, they could generate it again and get a legit result, right? So advocating against NOLOCK really comes down to one question: “How confident are we that we’re providing the right result?”

    One final question: how could I trace the number of failed NOLOCK queries, to show the statistics to the technical team?

    Reply
    • Howdy. For personalized advice on production systems, click Consulting at the top of the screen. (I can’t do unlimited free Q&A here in the blog post comments – hope that’s fair, and hope you found the post useful.)

      Reply
