Do Foreign Keys Matter for Insert Speed

Do you have the key?

Do foreign keys hurt the performance of inserts? Okay, we all know that foreign keys do require some work, but the crazy people of the internet have wildly varying guesses as to just how much work is involved. Estimates varied from “barely any” to “as much as 50% slower”.

I figured somebody had to actually do the work, so it might as well be me.

How did the test work?

I created two tables – parent and child. The parent table has an int primary key and a fixed-width filler column. I varied the size of the filler column across multiple test runs, just to make sure I wasn’t biasing the test by using an artificially large parent table. I ran the tests with parent tables holding 2 rows per page, 4 rows per page, and 8 rows per page.
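To make the setup concrete, here’s a rough sketch of what the two tables look like. This is an illustration with made-up names and a single filler width, not the exact script (that’s in fk.sql, linked below).

-- Illustrative only: the real definitions live in fk.sql, and the filler width
-- was varied across runs to get 2, 4, and 8 rows per page.
CREATE TABLE dbo.parent
(
    parent_id INT NOT NULL PRIMARY KEY CLUSTERED,
    filler    CHAR(1000) NOT NULL DEFAULT ('x')
);

CREATE TABLE dbo.child
(
    child_id  INT IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED,
    parent_id INT NOT NULL
        CONSTRAINT fk_child_parent FOREIGN KEY REFERENCES dbo.parent (parent_id)
);

-- A small parent table (2,048 rows is the count that comes up in the comments below);
-- filler takes its default.
INSERT dbo.parent (parent_id)
SELECT TOP (2048) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns;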

You can find the test code in fk.sql.

[Chart: Testing SQL Server insert speed with foreign keys]

The test code inserts 1,000,000 rows in batches of 5,000 rows. Although this isn’t strictly realistic, it’s better than timing a single batch of 1,000,000 inserts.
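A rough sketch of that loop, reusing the hypothetical table names from above rather than the exact code in fk.sql:

-- 200 batches x 5,000 rows = 1,000,000 child rows.
SET NOCOUNT ON;

DECLARE @i INT = 0;

WHILE @i < 1000000
BEGIN
    INSERT dbo.child (parent_id)
    SELECT TOP (5000)
           (ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) % 2048) + 1  -- spread the inserts across the parent rows
    FROM   sys.all_columns;                                         -- any convenient row source with 5,000+ rows

    SET @i += 5000;
END;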

What happens during the test?

It turns out that, no matter how many rows fit on a page in the parent table, it takes approximately 40ms to insert 5,000 rows with the foreign key in place. When we remove the foreign key, inserting 5,000 rows takes ~26ms.

Although the difference looks big at a 5,000-row batch size, each insert takes, on average, 0.0072ms with the foreign key and 0.0052ms without it. These differences are hardly worth noting. And, let’s face it, 2 microseconds is a pretty minor price to pay for data integrity.

How much do foreign keys affect single row inserts?

I ran the test one last time with a batch size of 1 row, just to see how much difference there was in insert performance.

FK present?    Duration
yes            253,896 ms
no             241,195 ms

When it comes down to single-row performance, the difference is negligible. We’re spending all of our time waiting for other parts of the system.

How real is this test?

It’s not terribly real, but it’s real enough to say “Yes, foreign keys have overhead” and follow that up with “No, you shouldn’t care, just add the keys.” The tests were done in VMware, and the hardware in question is a mid-2012 Retina MacBook Pro.

In the real world, we’ve got a lot more than inserts happening, but it’s worth quantifying the cost of a foreign key lookup and recognizing that the cost is small enough to make foreign keys worth having.

Kendra says: Wanna know if your foreign keys are ‘trusted’? Check out our script here.
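If you don’t have that script handy, untrusted or disabled foreign keys are also easy to spot straight from the metadata. This is just a generic query against sys.foreign_keys, not the script Kendra is referring to:

-- Foreign keys that exist but that the optimizer won't trust (or that are switched off).
SELECT  OBJECT_SCHEMA_NAME(fk.parent_object_id) AS schema_name,
        OBJECT_NAME(fk.parent_object_id)        AS table_name,
        fk.name                                 AS foreign_key_name,
        fk.is_not_trusted,
        fk.is_disabled
FROM    sys.foreign_keys AS fk
WHERE   fk.is_not_trusted = 1
        OR fk.is_disabled = 1
ORDER BY schema_name, table_name, foreign_key_name;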


Comments

  • Well, the perf cost depends on the query plan. You pay as much as a join to the referenced table costs. If the referenced table is huge and uncached, then each validated row might cost you a disk seek (10ms) if a loop join plan is chosen. Or cheaper, if it is a hash or merge join, which depends on the number and sorting of the DML row stream.

    If the referenced table is small then the perf costs will be minor (a cheap join).

    Reply
    • Yup, table and row size matter. That’s why I tested with multiple row sizes in the parent table.

      Reply
      • …but you didn’t test with larger referenced tables. You say “just to make sure I wasn’t biasing the test by using an artificially large parent table” – what you actually do is bias it with an artificially small table. Whether you use 1 or 60 rows per page, it stays small at only 2,048 rows: it’s all cached. I guess that’s what ‘tobi’ was complaining about.
        I think your test only says something about insert performance with a single, cached foreign key. Not much about the cost of foreign keys in general. Welcome to ‘the crazy people of the internet’…

        Reply
        • One of the really cool things about putting the code up on GitHub is that helpful people could make changes, submit pull requests, or even publish their own results.

          Another great thing about a blog is that those helpful people can suggest changes to make a blog post better and do it in positive and constructive ways.

          Reply
          • Sorry Jeremiah, if I insulted you, but I think as a (blog) writer you have a responsibility to do your research well before you write. Certainly if you write on a highly regarded blog like this one.
            Hence my (ok, too sarcastic, sorry) welcome to the ‘crazy people of the internet’.
            But I am not going to do your homework.

          • Mario – it sounds like you’re looking for deeply detailed information in a blog post, and I think you’re going to be disappointed. We do offer a lot of great free material here, but we can’t test every edge case for a blog post.

            I noticed that your email address is SQLinternals.com – but your web site is empty right now. I’m really looking forward to seeing what blog posts you put together over there! It sounds like you’re the kind of detail-oriented person who will be making great contributions to the community. Come on board and get started! Get your blog together with these instructions: https://www.brentozar.com/archive/2008/12/how-start-blog/

    • Right, but we should only be seeing merge or hash joins when we’ve got a lot of data being modified in one table; otherwise the loop join lookup should be fast enough. The loop joins that SQL Server uses are block-optimized loops – hopefully you won’t get multiple disk hits in the parent table unless the database server is under enough memory pressure to push the parent table out of memory.

      But yeah, I agree – the performance depends on the cost of the query plan. For many people, and in many situations, FK lookups are largely in OLTP applications with single row lookups being performed as child objects are added to a parent.

      Reply
    • Nicolas Leveque
      October 17, 2018 7:58 am

      Are you talking about read performance or write performance? For writing, the unique index on the referenced table will be used to validate that the value exists, no?

      Reply
  • Dave Wentzel
    May 26, 2015 9:05 am

    “but the crazy people of the internet have wildly varying guesses as to just how much work is involved. Estimates varied from “barely any” to “as much as 50% slower”

    Perhaps the misconception stems from the fact that people aren’t properly indexing the keys or are picking a suboptimal datatype.

    More interesting is why people would NOT want to have DRI at the data tier. I’ve learned a lot about SQL Server by studying how other database managers do things. Vertica for instance does not enforce FKs during data loading but instead reports key violations at query time. Supposedly the reason is data loading performance would suffer. In my experience the Vertica pattern I see most often is 1)load the data 2)run a dummy query to see the key violations 3)fix key violations. So I’m not sure what Vertica’s paradigm is saving you. Seems like the better use case is to squirrel away DRI failures somewhere and deal with them later vs failing the whole batch. And this logic seems better handled in your ETL tool.

    Probably the real reason folks say that DRI at the data tier is 50% slower is because they have a vested interest in removing all logic from the data tier. ORM guys are but one example…preaching that all logic should be in other tiers and the database should be a dumb persistence mechanism. The FK performance argument, which it seems you debunked, is often used as a reason not to have DRI in the database. And most of us have seen the net result of these systems…garbage data everywhere. Performance arguments are no reason to remove keys from the data tier. Fact is…these fanbois can of course mock up some set of circumstances where FKs do actually lead to a 50% performance hit…I’ve no doubt about this. But does that really lead to the conclusion that DRI is bad?

    Reply
    • I agree – for bulk loads, the ETL tooling should be handling FK validation. And, let’s face it, many people who need ETL to go faster will usually throw some hardware dollars, or subject matter expert time, at the problem to get it fixed.

      OLTP is a different story, but I think we can preliminarily show that OLTP performance isn’t as hurt as you’d think, at least not with arbitrarily sized rows.

      Reply
    • “ORM guys are but one example…preaching that all logic should be in other tiers and the database should be a dumb persistence mechanism. The FK performance argument, which it seems you debunked, is often used as a reason not to have DRI in the database”
      What? Not that “The foreign keys are causing errors in the application!”. I’ve had that put forward as a reason for a request to have them removed / not enforced before.

      Reply
    • 'No Bugs' Hare
      October 21, 2018 1:02 am

      “Probably the real reason folks say that DRI at the data tier is 50% slower is because they have a vested interest in removing all logic from the data tier.” – well, my 20+ years of experience as an architect of rather serious systems shows that: (a) “all logic at the data tier” is clearly NOT sufficient to maintain the data integrity in a sense which business needs it (even the most basic requirement of “sum of all the money in the database must be zero”) – (a1) that is, unless write accesses except for stored procedures are outright prohibited – but this has its own major drawbacks starting with 200% vendor lock-in and very serious scalability issues of such architectures (sic!); (b) hence, at least part of ensuring data integrity belongs to the DB-app layer; (c) having TWO teams to be responsible for data integrity means “no data integrity” (with fingerpointing being only one of the problems); (d) from (b) and (c) it follows that ALL the data integrity is better to be kept with the DB-app team (DBA team cannot ensure it anyway per (a)); (e) as soon as it belongs to the DB-app team – it is up to them what to use to enforce it (it becomes an implementation detail); (f) in practice, referential integrity happens to be the least of DB-app team worries integrity-wise – it is more complicated stuff such as “zero sum of money” which is complicated and difficult to track. (g) the only realistic case when FKs do happen to help – is to protect against ad-hoc command-line modifying SQL requests, but these have to be very thoroughly pre-tested for any serious operation anyway (it is very easy to kill serious DB even with a reading request causing cache poisoning, not to mention any modification).

      With (a)-(g) in mind, the additional performance gain from implementing the same referential restrictions without FKs becomes a side bonus (though it IS quite an important one for systems with tens of billions of transactions per year).

      NB: the whole logic above relies on the understanding that there is a separate DB-app team, which is the only one dealing with SQL, and is SEPARATE from the web-app team (whoever thinks that webdevs can be allowed to issue modifying SQL requests over a database with some integrity requirements, is in a state of sin).

      NB2: I have to note that in all my 20+ years I did NOT see a valid use case for ORM 😉 .

      Reply
      • NBH – about that last line – the reason you probably haven’t seen it is that you’re an architect. As a small business owner, I saw a valid use case for an ORM in my very first year: we need to ship products quickly so my team can get paid. 😉

        Reply
  • I agree with the article. Define and enforce the FK relationship. If other best practices (e.g., using a narrow ever-increasing PK such as a bigint identity) are followed, it is well worth the overhead.

    I find that some developers bristle at enforced FK relationships because the developer wants to insert the rows into the tables a certain way, but if they want to transitively generate orphans, then that is even more reason to insist on enforced FK relationships. Inevitably I find orphans and widows on any database where the FKs aren’t enforced.

    Thanks,
    John.

    Reply
  • Out of curiosity, as I am also a Mac user for home systems, have you used Parallels as a VM environment and if so, how does it compare against VMWare for what you’re doing? Currently I only run Parallels/Win 7/SQL 2014 from my iMac as my MacBook Air has too small of an HD and RAM to really be viable for virtualization.

    Reply
    • I used to use Parallels, but I’ve switched over to VMware. Performance comparisons between the two vary from release to release and even patch to patch. I settled on VMware because I can take a VMware VM and easily run it anywhere else or even move it into the cloud.

      Reply
      • I went back and forth on VMware’s ability to move the image to a server to run it. I love that functionality, but at that time I wasn’t working in a shop that ran VMware, and I still don’t. If I were a consultant I think I’d change my tune.

        I was at PASS in ’08 and was quite pleased at seeing the number of MacBooks there.

        Reply
      • Curious as to the specs on the MacBook used in the test. I5/I7 ? RAM? SSD or HD?

        Reply
        • OS X Yosemite (10.10.3)
          MacBook Pro (Retina, Mid 2012)
          Processor: 2.7 GHz Intel Core i7
          Memory: 16 GB 1600 MHz DDR3
          Disk: 512GB SSD (Model #: APPLE SSD SM512E) with a queue depth of 32

          HDD buffering is set to “Automatic” in VMware.

          Reply
  • So those ‘50% slower’ dudes were right according to your figures – but slower than a very fast thing is still a very fast thing.

    Reply
    • 50% slower on batched inserts of 5,000 records at a time. That’s not a normal use case.

      When you break it down to single row inserts, which is far more normal, the difference between the two all but disappears.

      Reply
  • Nice article.

    Presuming that you’ve got foreign keys and you want to keep them, would removing constraints before a bulk update and then re-adding them with check give you any performance gain?

    Reply
  • Alex Friedman
    May 27, 2015 4:22 am

    With FKs I really worry more about locking and deadlocking. As much as I think that RI is important, sometimes it comes down to a business decision of “we can live with some bad data that we’ll fix later, just don’t make the application crash”.

    Reply
    • That’s interesting. So, if I understand correctly, you’re saying that it was provably foreign key verification that caused locking and deadlocking. And no other efforts you tried – like indexing, query tuning, or looking at the snapshot isolation level for long running queries – were able to alleviate the problems. And someone in the business actually said “Meh, screw the data.”

      Is that right?

      Reply
      • Alex Friedman
        May 27, 2015 9:26 am

        I appreciate the sarcasm, and basically, yeah.
        We tune the crap out of queries and indexes, and synchronize order of operations between different SPs — but we can’t control the exact plan of FK access, in which order objects are locked.

        While the “screw the data” decision was made before my time here, and it does grate on my DBA nerves, I do see the business point of view.

        Reply
    • I’m with Jeremiah on this one. I’d be careful on declaring that FK verification caused a deadlock.

      If the FK is the clustered index on the referenced table and the key is narrow, I would be hard pressed to imagine that situation. However, if it happens, try creating a non-clustered index on the clustered index column(s); the key lookup in the NCI may be faster than in the CI. Tim Wiseman has a good article about the speed issues on mssqltips.com.

      Finally, “Is fast and wrong data better than slow and correct data?” — Pinal Dave

      Reply
  • My absolute favourite part of the article was:

    > No, you shouldn’t care

    I love it.

    Too often discussions get bogged down with “it depends” followed by a few extreme edge cases that completely ignore the most common situation at hand. While it’s fun to share humblebrags about our knowledge and experiences, it can also be misused and cause grief with management instead of simply telling them what they wanted to know – that this is generally safe and good practice so let’s go ahead unless we discover a really good reason not to 🙂

    Reply
  • I object to the test results. I was once in a shop where there were no FKs defined, because it was faster. They got data corruption (really?) and spent *weeks* trying to weed that out. So the formula could be No_FK_gain = (253,896 ms – 241,195 ms) – a_few_weeks_of_hacking.

    Your mileage may vary.

    Reply
    • Unfortunately, when running tests we’re stuck reporting the results that we observe and can definitively quantify. An ASCII frowny face isn’t a test result I can compare to other numbers.

      Also, I’d be willing to bet that the data corruption didn’t come from a lack of foreign keys but from really crappy storage.

      Reply
      • Well, they didn’t define any referential integrity as that ‘would slow the database down’, and R.I. was ‘handled in the application layer anyway’. The corruption wasn’t in the datafiles, but in the data: reports started to show inconsistent information.

        Reply
    • I think there’s a small terminology issue here. Maybe you meant data integrity rather than data corruption. Unless there’s an even better word for this that I’m missing.

      Reply
  • Great article!

    As a side note, one of the worst arguments I’ve heard for not wanting to use foreign keys is that the application doesn’t validate data and a user might enter bad data and bad data is better than no data.

    Reply
    • Great article indeed. Equally bad an argument, my friend Robert, is the opposite one 😉 Not “let the bad data in in case – or when – the app hiccups and creates bad data.” Instead, the “We don’t need DRI in our databases, the application and code will enforce it” (or the triggers will enforce it, in some horrible edge cases 😉).

      Reply
  • Barnaby Self
    May 30, 2015 2:56 am

    In data warehousing, FKs can REALLY impact performance in a negative way when loading large fact tables. If you trust your ETL, it can be a lot quicker to NOCHECK constraints and then check them after the load. I do agree that OLTP should have RI though. I think I have finally got this through to our developers.

    Reply
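    A minimal sketch of that NOCHECK-then-recheck pattern, with hypothetical table and constraint names; the WITH CHECK CHECK step is what re-validates the data and leaves the constraint trusted:

    -- Before the load: stop checking the constraint (existing rows are untouched).
    ALTER TABLE dbo.FactSales NOCHECK CONSTRAINT fk_FactSales_DimDate;

    -- ...run the fact table load here...

    -- After the load: re-validate every row so the constraint is enabled AND trusted again.
    ALTER TABLE dbo.FactSales WITH CHECK CHECK CONSTRAINT fk_FactSales_DimDate;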
    • I tend to avoid FKs in data warehouses, myself, unless I can prove that the foreign key helps produce better execution plans. Even then, I totally agree with you on getting rid of them for ETL – they slow ETL down considerably. They also slow down re-snapshotting replication.

      Reply
      • Completely disagree. I leave my constraints in and I do not find they have a significant impact. Warehouses without RI will always have issues. The ETL cannot be trusted to catch everything any more than triggers can. In principle I have nothing against temporarily dropping and recreating / disabling and re-enabling with check in certain circumstances. The execution plans show that the joins are very efficient. I’ve read that the optimizer benefits from understanding the relationships between the tables. I don’t know this through some technical tests, but having worked with ETLs with and without RI on the tables, I’ll take the RI’d tables every time.

        Reply
        • Ron – that’s interesting to hear. Can you post your A/B testing results that you’ve done to check the impact?

          Reply
          • With Oracle databases, it is considered bad practice to remove the referential integrity in the lower levels of the DWH (e.g. data marts). As Ron said, this is because RI is taken into account by the Oracle optimizer. See http://aberdave.blogspot.co.uk/2011/04/referential-integrity-and-optimizer.html for a table pruning example.
            I have no experience in this field with the SQL Server optimizer. For more info, Google for Tom Kyte, one of the Oracle gurus.
            “Warehouses without RI will always have issues.” – that is my (limited) experience as well. Behind the eagerness to disable RI I often found a lack of performance tuning skills.

  • Corey lawson
    May 31, 2015 1:37 am

    Hmm… major ERP & accounting apps are doing it wrong then? Many of those (JD Edwards, PeopleSoft, Microsoft Dynamics et al) don’t use FKs… Just saying…
    Did you test the time to enter records in a table that has several foreign keys?

    Reply
    • I did not test insert performance on tables that have multiple foreign keys. Dynamics does let the user create foreign keys (see How to: Add a Relation to a Table [AX 2012]), but it also has relatively strict rules about how data is modified – basically, don’t do it unless you’re using the application. IIRC, a lot of these products use lock hints in the SQL they generate while they’re performing data modification. This certainly helps avoid the pain you can run into without FKs. Dynamics has a sordid history, too, depending on which Dynamics product we’re talking about. One of them used to run on both Oracle and SQL Server; C5 even ran on DB2.

      But, then again, none of them are what I would call the pinnacle of database design. They serve a particular niche exceptionally well and they break certain rules.

      Reply
  • “but the crazy people of the internet have wildly varying guesses as to just how much work is involved. Estimates varied from “barely any” to “as much as 50% slower” … and the experiment showed 46%, slap between barely any and 50%. Seems the people of the internet are not so crazy? 🙂

    Reply
    • That’s an interpretation. How often do you perform batch inserts of 5,000 rows vs inserts of 1 row? My experience tends to the latter being more common.

      Are you perhaps reading for the outcome you’d prefer?

      Reply
  • Late contribution, but having read this post and its comments, my recent experience may give some actual vs. theoretical/contrived numbers – your mileage, naturally, may differ:

    Target table comprises an InventoryID, PK, Clustered, and 20 GroupNNID columns, each of which has a trusted FK constraint to its parent, and each of which is indexed (20 Indexes). The GroupNN tables can contain a highly variable number of records, and each group is dynamically assigned its use when the App is configured. Row counts are from 3 to over 100K. Every ID is set after validation, so we are in full control of parent-child relationships – no orphans.

    For one client, they cannot provide deltas from their system, so we daily import 400K rows that expand at the Inventory level to over 38 million rows.

    Options are, 1. UPDATE, 2. TRUNCATE–INSERT

    If we don’t DISABLE the 20 GroupNNID Indexes, UPDATES and INSERTs take over 55 minutes each.
    Disabling INDEXes takes the runtime down to 35 minutes.
    Disabling FKs on top of disabled Indexes reduces the runtime to under 2 minutes for UPDATE, under 1 minute for truncate-insert.
    WITH CHECK CHECK on the 20 FKs takes 90 seconds (yes, on 38M rows, 20 times).
    REBUILD all 20 Indexes takes 10 minutes.

    INSERT had TABLOCKX in an attempt to induce minimal logging (Bulk_Logged recovery).
    This process is run once per day, never concurrent with anything affecting the GroupNN tables, nor the Inventory table.

    We have seen far better query plans overall with trusted FK constraints, and selecting based on GroupNNID value for particularly the higher-volume groups is helped by indexing.

    The downside of having all those FKs on large volumes, where large deltas or full replacement is involved, is the huge amount of IO incurred because SQL rightly verifies all parents. 160M Reads for the INSERT with all Indexes enabled but disabled FKs; 600M Reads for UPDATE with disabled indexes but enabled FKs; 900M Reads with Indexes and FKs enabled; under 1M reads for UPDATE with disabled indexes and FKs; 500K reads for TRUNCATE-INSERT with disabled indexes and FKs.

    As you can see, disable-uncheck insert/update then check-check-rebuild can have enormous time and IO benefits.

    Hope this puts some more meat on the subject’s bones.

    Reply
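    The index half of the disable-load-rebuild pattern described above looks roughly like this (index and table names are hypothetical); disabling a nonclustered index throws away its data, and the rebuild recreates it in a single pass after the load:

    -- Before the load: disable each of the GroupNNID nonclustered indexes.
    ALTER INDEX ix_Inventory_Group01ID ON dbo.Inventory DISABLE;
    -- ...repeat for the remaining GroupNNID indexes...

    -- ...truncate/insert or update, then re-check the FKs with WITH CHECK CHECK as described above...

    -- After the load: rebuild the disabled indexes.
    ALTER INDEX ix_Inventory_Group01ID ON dbo.Inventory REBUILD;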
  • What happens to the timings if the table has 20-30 foreign keys?? It is a data warehouse application, but we’re seeing an excessive amount of CPU usage when inserting/updating the 90,000 row table.

    Reply
  • Let me clarify. These 20-30 foreign keys are identified in the table as foreign keys, so that’s my main question. I understand if they are being used as foreign keys, but I am not sure about the performance of SQL having to verify the constraints on every insert/update.

    Reply
  • 99% of the inserts I know of come in one of two scenarios: (a) loads, (b) OLTP-style loads.

    For heavy loads (such as “load 100M rows”), the best practice performance-wise (well, if we can afford to do things offline) – is to (i) drop all the indexes, (ii) load, (iii) rebuild indexes. The reduction in time due to using this drop-load-rebuild procedure (compared to simplistic load-with-indexes procedure) can easily reach as much as 5x (!!!) – and as we’re often speaking about DAYS spent on loading, these 5x often mean the difference between “will be completed today” and “some time next week” . Oh, and believe me, you do NOT want to have an FK without an underlying index (well, unless the table is <1000 rows or so).

    As for heavy OLTP-style loads (like 10B write transactions/year) – FKs can _easily_ cause a 2x penalty (especially if you're properly 3NF normalized, and even more so if your FKs force you to create indexes which are not necessary otherwise). IMNSHO – it is not even clear which of "the time to check the FK" or "the time to maintain the index for the FK" hurts more (my guess is that usually it is the latter, but it is pretty difficult to generalize).

    On the other hand, IF we're not speaking about loads or OLTP – there are no inserts, so there is no penalty ;_). Which can be rephrased as "for MOST of the people out there – FK performance hit doesn't matter at all, but for serious writing loads – FKs (especially when they cause excessive indexes) DO hurt performance significantly".

    Just my $0.02 based on personal observations and anecdotal evidence ;-).

    Reply
    • “No Bugs” – you wrote:

      For heavy loads (such as “load 100M rows”), the best practice performance-wise (well, if we can afford to do things offline) – is to (i) drop all the indexes, (ii) load, (iii) rebuild indexes.

      This note isn’t for you – I get the feeling you already know this – just mentioning it for other readers. If the table in question has a lot of data while you’re loading it – say, you’re only loading 10% more (and there’s already 1B rows) – then the advice would be different.

      Otherwise good stuff, and I agree – just wanted to have that out there.

      Reply
      • > If the table in question has a lot of data while you’re loading it – say, you’re only loading 10% more (and there’s already 1B rows) – then the advice would be different.

        Right (I should have said “load 100M rows into an empty table” from the very beginning).

        Reply
  • Wow, he ran one query and bestowed the wisdom that FKs don’t impact performance. How scientific!

    Reply
