NOLOCK Is Bad And You Probably Shouldn’t Use It.

I’m waiting for the YOLOck option.

When you put NOLOCK in your query, SQL Server will:

  • Read rows twice
  • Skip rows altogether
  • Show you changes that never actually got committed
  • Let your query fail with an error
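
Here’s what that looks like in a query – a minimal sketch with a made-up table. The NOLOCK table hint and the session-level setting below do the same thing: they run the read at the READ UNCOMMITTED isolation level.

    /* NOLOCK as a table hint (dbo.Orders is hypothetical): */
    SELECT OrderID, OrderStatus
    FROM dbo.Orders WITH (NOLOCK);

    /* Same effect for every table in the session: */
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT OrderID, OrderStatus
    FROM dbo.Orders;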

This is not a bug. This is by design. To understand what happens, read these blog posts in order:

  1. Using NOLOCK? Here’s how you’ll get the wrong query results.
  2. “But NOLOCK is okay when my data isn’t changing, right?”
  3. “But surely NOLOCK is okay if no one’s changing data, right?”

After reading those, you’re going to ask, “So what am I supposed to do if I need to avoid blocking problems?” There are 3 solutions:

  1. Have enough indexes that your queries go fast, but not so many that your deletes/updates/inserts take forever. I cover this in the locking module of my Mastering Index Tuning class.
  2. Keep your transactions short and sweet, and use batching to avoid lock escalation. I cover this in the deadlocking module and the batching module of my Mastering Query Tuning class.
  3. Use read committed snapshot isolation (RCSI) or snapshot isolation (SI) so that readers and writers can coexist. I cover this in the isolation levels module of my Mastering Server Tuning class.
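
If you go the RCSI route (#3), turning it on is a single database-level change – shown here with a hypothetical database name, and you’ll want to test it somewhere safe first because it changes locking behavior for every query in the database:

    /* Turn on RCSI (rolls back any open transactions with ROLLBACK IMMEDIATE): */
    ALTER DATABASE YourDatabaseName
        SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

    /* Or let individual sessions opt in to snapshot isolation instead: */
    ALTER DATABASE YourDatabaseName
        SET ALLOW_SNAPSHOT_ISOLATION ON;

Both options keep row versions in TempDB’s version store, so keep an eye on TempDB afterwards.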

I know: NOLOCK sounds so much easier because you can just slap it into a query and it feels like it’s going faster. Just remember the demos in those first 3 blog posts up above: sooner or later, your users are going to see inaccurate query results.

If it’s so bad, why is NOLOCK an option?

Because in some applications, accuracy really doesn’t matter. I know it sounds like a joke, but I’m being serious.

For example, one of my clients is a high-volume online shopping site. Their admins want to see that data is moving through the shopping cart process, that users have been adding new shopping carts in the database. They use monitoring queries to check for the last 10 carts that were created recently and checked out recently. They don’t really care whether the results are transactionally accurate – they just wanna see that any new orders are moving through the system. NOLOCK works fine there.
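
Their monitoring queries look roughly like this – table and column names changed to protect the innocent, so treat this as a sketch rather than their actual code:

    /* Last 10 carts created, and last 10 carts checked out: */
    SELECT TOP 10 CartID, CreatedDate
    FROM dbo.ShoppingCarts WITH (NOLOCK)
    ORDER BY CreatedDate DESC;

    SELECT TOP 10 CartID, CheckedOutDate
    FROM dbo.ShoppingCarts WITH (NOLOCK)
    WHERE CheckedOutDate IS NOT NULL
    ORDER BY CheckedOutDate DESC;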

Another example is the open source First Responder Kit scripts like sp_BlitzIndex. I don’t need transactional accuracy when I’m checking the usage statistics on an index – we’re just getting rough numbers. If the same index happened to be shown twice, you would simply go, “Well that’s odd,” and you’d rerun it to check.
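
The kind of query I mean looks something like this – a rough illustration of reading index usage counters, not the actual sp_BlitzIndex code:

    SELECT object_id, index_id, user_seeks, user_scans, user_lookups, user_updates
    FROM sys.dm_db_index_usage_stats WITH (NOLOCK)
    WHERE database_id = DB_ID();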

But if your application involves money, health care, or anything else vitally important, you better think twice before using NOLOCK.

15 Comments

  • Whenever 99% of people say something is wrong, I tend to think it might have merit. Yours is one of the few blogs that says, “well, there might be times when this is OK.” Good for you. Dogma is bad. As you mentioned, “is data flowing” is one use case…anywhere else a “fuzzy” answer is acceptable might be OK too. Data scientists like to look at the distribution of data in tables…we don’t need to be precise, but we need a feel for the data: nulls, outliers, etc. The best we have is nolock (which will still muck up your buffer cache, unfortunately). As a data scientist often forced (or strongly told) to query transactional systems (against my advice)…NOLOCK is essential. Often no one knows if RCSI is set up, and I want to cause as few problems as I can.

    Often folks _think_ they need precise answers, and they rarely do. Tell your accountant the DR != CR and that’s career suicide. Tell your Chief Marketing Officer that the quarterly sales are off by $200K and you’ll get a diatribe about data quality and garbage in/garbage out and lack-of-faith in the data. If, after the harangue, you tell her, “well, that’s $200K over $200M (ie, less than 1% variance) and we are certain the QoQ numbers are trending in the right direction”…well, suddenly everyone calms down. Yeah, NOLOCK will skip and double-count…meh…I’m cool with that (in some cases).

    Reply
  • Yup, checking the status of a live web server is a great example of where nolock is totally appropriate. How many people are using the website? If the result is 1227 when it’s really 1228, nobody will know or care. What everyone will care about is if you hold a lock on the sessions table for a second or two!
    Same for ‘people who bought this also bought’… a light touch trumps accuracy.

    Reply
  • NOLOCK = DONTCARE

    Reply
  • Alexander Speshilov
    November 23, 2021 6:36 pm

    Another case for nolock – service/technical queries to “spy” on what your transaction in another spid is doing right now. But anyway – no one should build critical business logic or reporting on nolock since 2008 and RCSI.
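
    To illustrate the “spy” case (hypothetical table names, two separate sessions):

      /* Session 1 – transaction still open, nothing committed yet: */
      BEGIN TRAN;
      UPDATE dbo.Orders SET OrderStatus = 'Shipped' WHERE OrderID = 42;

      /* Session 2 – peek at the in-flight change without being blocked: */
      SELECT OrderID, OrderStatus
      FROM dbo.Orders WITH (NOLOCK)
      WHERE OrderID = 42;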

    Reply
  • Gustav Swanepoel
    November 23, 2021 9:13 pm

    Great blog post as ever. Forwarded it on to my colleagues.

    Reply
  • I admit we’ve used NOLOCK in a couple of places where things were working fine for years, then it feels like one day the stored procedure suddenly just ground to a halt with blocking. We never really worked out why, and the data accuracy issues are fine for our purposes.
    Another case – we have some analysts using SAS from another server. When they query our Warehouse data, it seems that the command it sends to our server is SELECT * FROM <table>, no matter what they actually write. That was causing issues on large tables that we update throughout the day. So my manager worked out how to at least get SAS to use SELECT * FROM <table> (NOLOCK).

    Reply
  • Where I’ve seen NOLOCK used (or misused) most frequently is when data analysts are running ad-hoc queries or month-end reports against a live production database. This is the scenario where it’s not really bad SQL coding habits – it’s that they feel a legitimate NEED to use nolock or read uncommitted isolation for performance reasons. For example, the query just won’t run to completion without it. They typically don’t need up-to-the-moment fresh data, and in fact would prefer the data frozen as of a certain date.
    So, if setting up a readable replica or restoring from backup is too much trouble or expense, then a quicker and less expensive option is to create a snapshot database or live mount a database backup. This solves all of the problems mentioned above.
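
    A minimal sketch of the database snapshot option, with made-up database and file names (the logical NAME must match the source database’s data file):

      CREATE DATABASE SalesDW_Snapshot
      ON (NAME = SalesDW_Data,
          FILENAME = 'D:\Snapshots\SalesDW_Data.ss')
      AS SNAPSHOT OF SalesDW;

      /* Analysts then point their reports at the snapshot instead of the live database: */
      SELECT COUNT(*) FROM SalesDW_Snapshot.dbo.FactSales;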

    Reply
  • Hello,

    I have some thoughts about nolock usage… First of all, I advocate against nolock, and here I’m trying to collect arguments against this bad practice. I can’t rely only on technical arguments, e.g., “you are going to read dirty data,” etc… Why? Here’s the reason: if you have software that shows data periodically, how could an end user know or identify that he is not reading a correct result? And even if he suspects an inconsistent result, he could just run the query again and get a legit result, right? So advocating against nolock really comes down to a question of conscience: “How comfortable are we that we are providing the right result?”

    One last question: how could I trace the number of failed nolock queries, so I can show the statistics to a technical team?

    Reply
    • Howdy. For personalized advice on production systems, click Consulting at the top of the screen. (I can’t do unlimited free Q&A here in the blog post comments – hope that’s fair, and hope you found the post useful.)

      Reply
  • IMHO, (deprecated?) NOLOCK is a holdover from the days when SQL Server was still trying to catch up with Oracle.
    Why is that?
    In the eighties I worked with MS Access, which accessed Oracle databases.
    And even back then, Oracle delivered versioned transactions. By default!

    It took MS decades to implement READ_COMMITTED_SNAPSHOT, which now does the trick for OLTP.
    And a few more years to refrain from stress testing tempdb.
    Even with SQL Server 2022, the previous state of a versioned transaction is copied to tempdb in order to be available for readers.
    This will change in the future by applying ‘copy on write’ patterns, such as those used in transactional file systems, e.g. Apple APFS.
    This has the advantage that write operations are avoided.
    Instead of ‘copying the previous state somewhere else’, the previous state is simply left intact (it was and is already there).
    The new state then occupies completely new locations within the db.
    Instead of performing two write operations (one for the previous state and one for the new state), only one is required.

    Reply
    • I hate to break this to you, but pretty much everything in your comment is incorrect.

      RCSI arrived in 2005, 16 years after SQL Server 1.0. That’s not a plural of decades.

      If keeping versions in the user database instead of TempDB is important to you, look into Accelerated Database Recovery, introduced back in 2019.
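
      If you want to try ADR, it’s a single database-level switch (hypothetical database name, SQL Server 2019 or later):

        ALTER DATABASE YourDatabaseName
            SET ACCELERATED_DATABASE_RECOVERY = ON;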

      I’ll stop there, but suffice it to say you should click the Training link at the top of this page. Cheers!

      Reply
  • Many thanks for that!

    IMHO it still took MS 21 years to catch up with Oracle.

    Many of my colleagues still use NOLOCK today. Out of habit and without thinking. And it was only 10 years ago that I had great difficulty convincing a CTO to use READ_COMMITTED_SNAPSHOT instead of dirty reads (= NOLOCK) to avoid deadlocks.

    Oracle v4 introduced multiversion read consistency in 1984.

    SQL Server 1.0 (a port of Sybase Data Server) was introduced on OS/2 in 1989. The word Sybase still appeared in a few places in the spiral-bound documentation. When I switched from the file-based dBase to SQL, I immediately fell in love with transactions. I can still remember pulling the plug on the IBM AT that was running OS/2. And when I rebooted, the data consistency was still guaranteed.

    Snapshot isolation was introduced with SQL Server 2005.
    See also https://www.brentozar.com/training/fundamentals-of-tempdb/1-how-the-version-store-affects-tempdb/:
    ‘ … Things that use TempDB. At the top of the list are two of my favourite features: Read Committed Snapshot Isolation (RCSI) and Snapshot Isolation (SI) help you quickly mitigate blocking issues.’

    Thanks for pointing out Accelerated Database Recovery!
    As this feature has been improved in SQL Server 2022, I will advise our administrators to evaluate ADR + PVS and consider migrating from older SQL Server versions. And also to take advantage of your training offers 🙂

    Cheers!

    Reply
  • ScottPletcher
    March 25, 2025 7:39 pm

    Even READ COMMITTED (RC) can get most of the errors you’ve noted (except for dirty reads). Specifically, “[A] statement running under locking read committed isolation: (1) Can encounter the same row multiple times; (2) Can miss some rows completely” from https://sqlperformance.com/2014/04/t-sql-queries/the-read-committed-isolation-level by Paul White.

    So, sure, be careful how you use NOLOCK (NL), but if your transaction is critically dependent on 100% accurate results, be careful of how you use RC also.

    Also, note that NL *should routinely be used* for lookup tables (such as state tables, code/status lookups), since it saves overhead with no real risk.

    Oracle *forced* versioning for the longest time. That’s why you’d often get failures related to long-running transactions / rollback segments. That was just part of the huge overhead inherently involved in versioning rows. Therefore, SQL Server’s RCSI has huge overhead as well. Be sure to read up carefully before implementing RCSI. I’ve seen many people recommend “always” using RCSI, but that could be a very bad idea. As one example, RCSI adds 14 bytes to every modified row, which could cause massive numbers of page splits.

    Reply
  • Use a WHERE clause with a date time stamp and that solves your problems. Too textbook approach here with this question.

    Reply
