“But Surely NOLOCK Is Okay If No One’s Changing Data, Right?”

Some of y’all, bless your hearts, are really, really, really in love with NOLOCK.

I’ve shown you how you get incorrect results when someone’s updating the rows, and I’ve shown how you get wrong-o results when someone’s updating unrelated rows. It doesn’t matter – there’s always one of you out there who believes NOLOCK is okay in their special situation.

The latest NOLOCK defender left me an angry comment:

I challenge you to show us one example of using nolock on a static database with incorrect results.

No problemo – I assume you mean tables that aren’t getting inserts, updates, or deletes. Well, folks can do index reorganize operations during business hours – it’s a completely online operation, even in Standard Edition – and that hoses NOLOCK too.

I’ll create a table and populate it with 100K rows, but do it in a way that makes it highly fragmented:
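The original script isn't shown here, so here's a minimal sketch of that kind of setup (the table and column names are my own, not the exact demo script). Using a random GUID as the clustered key forces page splits on every insert, which leaves the index heavily fragmented:

```sql
-- Hypothetical demo table: a NEWID() clustered key guarantees page splits,
-- so the index ends up badly fragmented after the load.
CREATE TABLE dbo.Fragged
(
    Id     UNIQUEIDENTIFIER NOT NULL,
    Filler CHAR(200)        NOT NULL,
    CONSTRAINT PK_Fragged PRIMARY KEY CLUSTERED (Id)
);
GO

-- Insert 100K rows; the random GUIDs land all over the index,
-- splitting pages as they go.
INSERT dbo.Fragged (Id, Filler)
SELECT TOP (100000) NEWID(), 'x'
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;
```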

Then, in one window, I’ll reorganize the index:
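Against the hypothetical table above, that's just:

```sql
-- Window 1: reorganize the fragmented clustered index.
-- REORGANIZE is always an online operation, even in Standard Edition.
ALTER INDEX PK_Fragged ON dbo.Fragged REORGANIZE;
```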

While in another window, I’ll run a COUNT(*) repeatedly:
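Again sketched against the same hypothetical table, a loop along these lines:

```sql
-- Window 2: count the rows over and over with NOLOCK while the reorg runs.
-- The dirty read can see a page twice (or miss it) as pages move.
WHILE 1 = 1
BEGIN
    SELECT COUNT(*) AS RowsSeen
    FROM dbo.Fragged WITH (NOLOCK);
END
```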

And presto: the results change, going up and down as the pages are moved around:

Even if you think no one’s doing index maintenance at your shop, the answer is still clear here: if you want to build safe, reliable code that produces accurate query results at all times, NOLOCK should not be involved.

I get why you use NOLOCK. You read a blog post somewhere once that said, “NOLOCK is faster because it does allocation scans instead of navigating those big hefty index pages.” But you never actually bothered to measure whether that’s a significant problem for you – and you never bothered to ask your users if they need accurate query results.

Sometimes, you really don’t need accurate query results, and in that case, NOLOCK can be fine. For example, one of my clients runs a monitoring query every few seconds just to see if any new carts have been started in their online store. They get the latest row, check to see whether it’s higher than the previous check, and if it is, they’re happy campers. If it isn’t, they don’t mind checking again to see whether they got bad data, and they check a couple of times before they sound an alarm. They don’t need accuracy on exactly how many carts have been started – they just want a rough idea that the latest value is going up. NOLOCK works fine for their needs.
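That kind of check is roughly this shape (table and column names are invented for illustration, not the client's actual schema):

```sql
-- Rough monitoring check: grab the newest cart id with a dirty read.
-- If it isn't higher than the last check, the monitor simply re-checks
-- a couple of times before sounding an alarm - accuracy isn't the goal.
SELECT MAX(CartId) AS LatestCartId
FROM dbo.Carts WITH (NOLOCK);
```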

But for the rest of y’all, when you’re ready to do things the right way, I welcome you to my SQL Server training classes.

You should probably take one of them quickly, before your boss reads this post and starts asking you why you were so vigorously defending your constant use of NOLOCK.


32 Comments

  • Nice demonstration of the unpredictable consequences of the NOLOCK hint!
    I have another use case though: sometimes I load a huge amount of data into a staging table. Load time varies and depends on available system resources. Using a query with NOLOCK, I can take a peek into the target table to watch progress.

  • I have argued over and over with “Architects” about this. I have shown them the data movement error, and still they didn’t care, but this just adds more ammo as to why not to use NOLOCK.

  • BWAAAA-HAAAAA-HAAAAA!!!! Brent, you wrote in the article, “Some of y’all, bless your hearts…”. I know what that means down South! 😀 It’s almost as bad as the old Asian curse of “I wish you an interesting life” and the doctor’s comment of “You must have a very smooth brain”. 😀

  • Heh… and I was laughing so hard, I couldn’t even type my 4 letter name correctly. Ok… moving away from the keyboard! 😀

  • Andrew Gothard
    January 27, 2021 10:03 am

    “of using nolock on a static database ”
    Just, WHAT, would you be trying to achieve with this anyway? “Hey, look at me, I’ve found a really pointless justification for doing something absolutely asinine – yay me!”

  • If you think you know 1001 ways NOLOCK can ruin your life, Ozar comes in and shows you the 1002nd way 😀 😀 😀 I love this

  • Kenneth Fisher
    January 27, 2021 12:17 pm

    My favorite is when I was told that it’s perfectly ok to use NOLOCK with temp tables and that it was a best practice because it reduced the number of locks used.

    I’m going to have to remember this example. I’m sure it’s going to come up at some point. Probably sooner rather than later.

  • Hi Brent,
    Terrific, informative and fascinating post, as usual.

    I do have a use case, where I think NOLOCK is necessary:
    We have some long running processes that take several hours. They regularly write to several log tables. I’ve written procedures to monitor them by querying those log tables. If some of the results are a little off due to dirty reads, that’s not a problem.

    My fear is that querying one of the log tables will put a lock on it, slowing down the procedures I’m monitoring. It’s my understanding that NOLOCK on those queries will prevent that from happening.

    Am I thinking about that problem correctly?

    Thank You,

  • radityo prasetianto wibowo
    January 27, 2021 2:15 pm

    Hi Brent,
    how about a non-clustered table?

  • Many, many years ago I did produce a report using NOLOCK
    The top of every page had the disclaimer of “Approx values only. Duplicate or missing pallets possible due to warehouse operations”. It was “good enough” for what they wanted to do. And they were happy with the explanation (slight lie) as to why it wasn’t 100% accurate.

  • NOLOCK on a “select-only database” is as helpful as a swim suit on a whale.
    Surely it saves <1ms of his select query, while the engine puts out the shared locks, eh.

  • Does “WITH (READUNCOMMITTED)” have the same issue?

  • Himanshu Sharma
    January 28, 2021 10:31 pm

    This is so true, I have run into this problem (NOLOCK dirty data issues, not exactly the Alter Index thing) many times in the past, and recently, I mean like 20 minutes before writing this comment (the moment I saw the anomaly, I thought of this post and hence I am here).
    Here is the scenario –
    Loading the data via an SSIS package from Server A to Server B, checking the count at the destination with a NOLOCK hint, and here are the results


    --2021-01-29 00:47:06.493    206,430
    --2021-01-29 00:47:12.487    1,059,951
    --2021-01-29 00:47:18.363    1,818,028
    --2021-01-29 00:47:24.817    2,725,660
    --2021-01-29 00:47:30.260    3,460,784
    --2021-01-29 00:47:37.300    4,349,991
    --2021-01-29 00:47:45.387    5,409,394
    --2021-01-29 00:47:51.197    5,917,658
    --2021-01-29 00:47:57.380    6,615,011
    --2021-01-29 00:48:02.847    7,180,866
    --2021-01-29 00:49:57.200    18,854,468
    --2021-01-29 00:50:37.803    23,606,184
    --2021-01-29 00:53:08.057    31,038,418
    --2021-01-29 00:57:23.673    61,736,722
    --2021-01-29 00:59:50.100    81,208,540
    --2021-01-29 01:03:26.310    101,975,887
    --2021-01-29 01:04:24.180    106,743,843
    --2021-01-29 01:06:49.130    122,535,775
    --2021-01-29 01:09:30.817    139,971,685
    --2021-01-29 01:10:18.697    147,757,216 <- here
    --2021-01-29 01:10:41.220    147,025,469
    --2021-01-29 01:12:16.420    147,027,168

    My source table only had 147,027,168 rows, yet at one instant the destination count showed 147,757,216, which is more than the source.

  • Why are you updating indexes on a table that never changes is data? See, it’s exactly because of people like you that I have to put nolock everywhere.

    • “I’m the only person who writes to this table, and I’ve just finished staging the next batch – trust me, nobody will be writing to it at the same time”

  • What a great write up and that demo is AWESOME, so much better than what I had come up with for my old blog post/session I gave couple times! Thanks so much for sharing it!

  • Use of NOLOCK or READUNCOMMITTED is ALMOST indefensible, and anyone excusing it because “the data isn’t changing” is undebatably wrong. This conversation has been going on for entirely too long. In the few cases where its use is justifiable, it isn’t because “the data isn’t changing.” Those folks are NOT getting the performance improvement they imagine in fantasyland, probably can’t explain why an allocation order scan happens, and can’t answer the question: if the data isn’t changing, why not use a TABLOCK hint instead?

  • Hey Brent,
    what would be the appropriate solution if live reporting wants to show some (accurate) data, but doesn’t want to risk that the live production application hangs or is blocked because of the reporting?
    If the reporting takes a few moments longer now and then, or even skips a refresh (let’s say the queries run every 5 minutes, and when the production application is under heavy load it’s okay for the reporting to fail a refresh and keep displaying the old data), that would be fine for me. But I have to admit I have used NOLOCK in the past because I wanted to ensure that the live application never runs into blocking / wait time just because of the reporting queries :/…

  • 2 main cases:
    1. Write activity exists – NOLOCK is not blocked by writers and does not block writers;
    this can also be accomplished with SNAPSHOT, at a 12-byte overhead per row.
    2. No write activity.
    For a query without a lock hint, there are 3 cases:
    a) the query starts and stays with row locks
    b) the query starts with row locks and then escalates to a table lock
    c) the query starts with a table lock
    Back in the SQL Server 2000 days, on early 2000’s hardware, I looked at the actual costs of the various lock levels – see http://qdpma.com/CBO/s2kCBO.html
    a. The cost of row locks is highest. For loop joins it is noticeable. For hash joins, it could be 50% higher than NOLOCK. In a single table/index scan, row lock cost is much higher than NOLOCK cost.
    b. Another really bad scenario is when the query starts with row locks, then escalates to a table lock.
    c. If the query starts with a table lock, there is only an imperceptible difference from NOLOCK.
    For those of you who used SQL Server 7/2000, recall that the single-page scan IO plan cost was 6 or 12 times more than for an index seek or key lookup, depending on whether your system had less than 1GB or 1GB+ of memory.
    This was most likely because whoever came up with the model did not know that small table scans use row locks while large scans start with a table lock.

