Store Files in a File System, Not a Relational Database.


Since the dawn of time, database vendors have called out to developers, “Store everything inside the database. You know you want to. Here, we’ll make it easy for you by adding data types like binary and features like filestream.”

Ah, but this siren song leads you to your eventual doom.

Users wanna store historical versions of files. In the beginning, the design requirements might agree to only save the most recent version of a file, but that’s gonna change. Users are going to wanna save every version over time – and even when they don’t, they’re going to create “new” files by saving them under an ever-so-slightly different name. You know that 2022_Budget_Final_v2_with_Changes.xlsx file, and all its grandparents? Yeah, they’re all living in the database, forever.

Users create new versions frequently. You know how you click File, Save all the time because you’re used to crappy applications that crash and lose your progress? Now think about that hitting a relational database server every time, multiplied by all of the users who are working in this system.

Trust me, I have the scars

Users don’t purge anything. Over time, the files look like an episode of Hoarders. Nobody can clean things out because nobody knows what’s safe to delete, so “we’d better keep it all just to be safe.”

Files are huge. Relational databases work great with tables broken up into rows and columns. When we insert rows or update columns, we’re only affecting a relatively small amount of data each time, making it easy to track in transaction logs and write to storage quickly. Not so with files, which usually contain incompressible things like images.

A bigger database means slower backups. Not just your full backups, either – we’re talking slower transaction log backups, too. As your users constantly save new versions of files, those versions get written into the transaction log, and that’s gotta get backed up. (Filestream users: before you get all excited that your feature is “different”, go double-check your log backup sizes, because filestream doesn’t work the way you think it works.)
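
Don’t just take my word for it, either – SQL Server keeps backup history in msdb, so a rough sketch like this (swap in your own database name) will show how much transaction log you’re backing up every day just to keep up with the file churn:

-- Rough sketch: transaction log backup volume per day for one database.
-- YourFileHeavyDatabase is a placeholder - use your own database name.
SELECT CAST(backup_start_date AS date) AS backup_date,
       COUNT(*)                        AS log_backups,
       SUM(backup_size) / 1048576.0    AS total_log_backup_mb
FROM msdb.dbo.backupset
WHERE database_name = N'YourFileHeavyDatabase'
  AND type = 'L'                       -- L = transaction log backup
GROUP BY CAST(backup_start_date AS date)
ORDER BY backup_date;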

A bigger database means tougher HA & DR. Not only do we have to back up the transaction log, but we have to push those transactional changes over to our other replicas. This huge log volume means it’s very challenging to do synchronous replicas in other data centers, and it’s very easy for other data centers to get way behind on transactions. When a user adds a 100MB file, and then another user tries to insert a single row, that row has to wait until the 100MB file synchronizes first. The low-value file storage can cause much worse recovery time objectives for our more valuable data.

Database servers suck at re-synchronizing data. When replication breaks, or you need to re-add an Availability Group replica (or log shipping or replication or whatever), SQL Server isn’t smart enough to only synchronize the different data, or sync data from the closest replica. This isn’t a big deal with small transactional tables, but throw terabytes of file data in, and resynchronization is a nightmare.

Finally, database servers are an expensive, challenging bottleneck. If you make a list of all the servers in your company, and rank them by things that make you tear your hair out, the relational database servers are probably at the top of every list. Their licensing and support costs are expensive, they’re hard to performance tune, and you just don’t have that many people on staff who are truly experts on them.

All of this might come as a surprise if you’ve only been reading the brochure from the database vendors. That’s because they want you to store files in the database. Can you think about why they might be motivated to sing you that siren song? They make more money when you’re locked into their system for everything.

There’s a better way:
store files in a file system.

File systems have been around for a really long time, and they have tools to help address these challenges. Best of all, they’re insanely, insanely inexpensive, especially compared to relational databases.

When you think that storing files in a relational database is the right answer, stop for a second, write up your list of requirements, and go talk to your systems administrator.

Because if you’re not doing a relational join with it,
it doesn’t belong in a relational database.


35 Comments

  • Dave Wentzel
    July 1, 2021 1:51 pm

    Your last sentence is quite insightful. Might I add a corollary for our NoSQL friends:

    If you’re doing a relational join with it, _it doesn’t belong in NoSQL_.

    –dave

  • David Wiseman
    July 1, 2021 2:56 pm

    If you are in the cloud, use a blob store like S3 or Azure Blob Storage. Cheaper and more reliable storage.
    Full-text search is probably the most compelling reason to store files in the DB, but there are alternatives like Elasticsearch you should consider.
    Filestream is *not* a best-of-both-worlds fix for storing files in the database.

  • Anon Blob Abuser
    July 1, 2021 4:14 pm

    11 years in on a solution that stores documentation files as blobs. It has lived from SQL 2008 R2 to 2019 currently. Around 700GB, but we do regular purges of what we can, and the blobs themselves are immutable. Has worked great. No scalability problems or DR issues with a warm-failover Always On availability group standby. Average blob size is < 10KB, but we have the occasional large one. We are talking a single table and a single blob field, and it’s only accessed directly in one spot in SQL, but it gets pretty heavy use.

    • Then you’re probably paying for more SQL licenses than you need to. I worked for an online retailer a few years ago that used lots of customer images. They stored them in SQL. Over 95% of the data going through the server was images. If they had used a file system, they could have run with far fewer CPU cores/SQL licenses.

  • I’m not going to argue that storing files in a file system is a bad idea, because nothing is that simple… like so many method decisions, it depends on your situation and needs. But…

    What about the issue of file formats changing over time? That can be massive. Even if you store file layouts with the files, you still need ETL processes that are set up for those specific schemas to read the data.
    But if you load the contents into database tables, you get 1) a historically consistent format over time, and 2) the ability to easily join data across what was in the files.

    Yes, I know you can use Spark SQL to join files in a data lake, but dude… that’s SO much more complicated than a simple SQL join inside of SSMS, and data lakes and Spark pools ain’t free. Why does it seem like so much of our industry is overly complicating things… like it’s OK to spend a lot more effort to do super simple things? Pretty soon vendors will start selling us that JCL and COBOL will solve all our problems… I’ve been in this industry too long…

  • I briefly worked for a company in the real estate industry that had a database full of pictures of properties. All in one table with limited metadata. Their DR solution was nearly instantaneous on failover, and would require over two weeks of downtime to failback to primary. They were hitting the storage limits on the VMs they hosted SQL on. Queries against that database were horrifically slow. And they didn’t want to convert to storing the image files directly and keeping pointers in the database. It was a disaster.

  • Kelley Prebil
    July 1, 2021 4:57 pm

    Another reason to not store files unnecessarily in your database is that the bigger your database is, the longer (HOURS) it takes to come back online after a reboot (hello Microsoft Patch Tuesday parties!).

    • David Wiseman
      July 1, 2021 7:12 pm

      The time it takes to bring a DB online shouldn’t increase with the size of the DB. I’ve worked with DBs up to 160TB in size and that hasn’t been the case.
      Depending on what was running before the shutdown, the redo/undo of transactions could take a while. Newer SQL versions and Enterprise Edition can help here.
      The number of VLFs can impact this – that can occur with a large transaction log that has grown in small increments.
      Query Store can impact this – limit the size of QS, and there are trace flags that can help.
      Patching Windows is slow and painful, but not impacted by the size of the DB. A failover cluster or availability groups can help there.
      Worth looking into if your DB takes hours to come online.
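
      For example, on newer versions (2016 SP2 / 2017 onward, if I remember right) a quick sketch like this counts VLFs per database so you can spot the worst offenders – on older versions you’d fall back to running DBCC LOGINFO in each database:

      -- Sketch: VLF count per database, highest first.
      SELECT d.name, COUNT(*) AS vlf_count
      FROM sys.databases AS d
      CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
      GROUP BY d.name
      ORDER BY vlf_count DESC;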

      • Kelley Prebil
        July 1, 2021 10:52 pm

        I’ve got a S2K8R2 1.15TB database that takes 4 hours to come back online after a reboot. It’s a building security database/app so can’t get upgraded since they’re still on *old* technology. So advancements in SQL Server versions since then to get databases back online quicker aren’t applicable.

        • David Wiseman
          July 2, 2021 6:35 am

          Taking 4hrs to come online after a reboot isn’t normal, and I don’t think it’s related to the size of the DB – even on older instances like 2008R2. If there is a lot of transaction log data to roll forward/back, that might explain it. If it always takes this long, there could be something else going on.
          First I would run this command on the DB:
          DBCC LOGINFO()
          The number of rows returned gives you the VLF count. If it’s more than 1000 you might want to take steps to reduce your VLF count – it might need to be much higher than this to explain the 4hr startup time.
          Query store isn’t available on 2008R2 so that’s not impacting anything. The next time you reboot I’d check the SQL error log and see if that gives any clues as to what’s going on. Also, maybe run sp_whoisactive @show_system_spids=1 to see what’s running the next time you reboot.
          http://whoisactive.com/
          A DB taking 4hrs to come online following a restart would make me nervous. Hope you get to the bottom of it.

        • Thomas Franz
          July 2, 2021 8:28 am

          We are using filestream as temporary storage (for a few days, until it gets cleaned up) for our ETL processes (all files that should be imported are stored in the filestream “directories”).

          It works well, and yes, it affects the backup time, but not the log backups, because the filestream database is in simple recovery mode.

          The main problem with the filestream feature is that the built-in functions such as GetFileNamespacePath() are scalar and VERY slow when you have more than a few files (I had directories with 40k small files before cleaning up), so I had to write my own inline table-valued functions using self-referencing CTEs etc. to speed up things like listing the files in a directory.
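
          Roughly, the replacement looked like this – just a sketch, the FileTable name dbo.ImportFiles and the function name are made up, and this version only lists one directory level (the real one used a recursive CTE for nested folders):

          -- Inline TVF sketch: list the files directly under a root-level directory
          -- of a FileTable, without calling the scalar GetFileNamespacePath().
          CREATE FUNCTION dbo.ListFilesInDirectory (@DirectoryName nvarchar(255))
          RETURNS TABLE
          AS
          RETURN
          (
              SELECT f.stream_id, f.name, f.file_type, f.cached_file_size
              FROM dbo.ImportFiles AS f
              WHERE f.is_directory = 0
                AND f.parent_path_locator =
                    (SELECT d.path_locator
                     FROM dbo.ImportFiles AS d
                     WHERE d.is_directory = 1
                       AND d.name = @DirectoryName
                       AND d.parent_path_locator IS NULL)  -- directory sits at the FileTable root
          );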

          Moving files from one folder into another (as long as it is the same FileTable) is much (!) faster with (optimized!) SQL than with Windows Explorer / Total Commander etc., but still slow, because filestream uses the HIERARCHYID data type and you have to work with the (scalar) GetDescendant() function to get a valid path locator for the target directory.

          PS: Another big point I missed: TDE (Transparent Data Encryption) does not work for filestream data, so your CSV files with credential information are saved unencrypted on the disk, even if you encrypt all your databases. On the other hand, this would be true for regular file storage too, unless your disk controller / RAID system includes hardware encryption.

  • I’m also the backup admin, which I’m OK with, because who really trusts someone else to do that anyway…
    Our backup app is tier-priced for storage. DB storage is a premium-tier price.
    1TB of file storage is < 1/3 the cost of 1TB of DB storage.
    So it’s possible you have a hidden cost in the backup itself.
    So yeah, those bill images going back to 2002 are costing you in more ways than one.

  • J.D. Walker
    July 1, 2021 6:01 pm

    I did some work for a law enforcement office prosecuting kiddie porn cases. They had a Microsoft Access application that showed pictures for evidence. The pictures were stored in the .mdb (yes, it was that long ago, I’m old, shut up). One case took up 7 full double sided DVDs worth of space.

    I converted it to store the pictures in the file system, and have the database point to the location. It went down from 7 full double sided DVDs to one regular CD with the Access application on it, plus all the files.

    Don’t store files in the database.

  • Brandon Forest
    July 1, 2021 9:29 pm

    I absolutely agree. I’ve been preaching this to the developers and managers forever. When a file is stored as a BLOB, the overhead that SQL adds to each record is almost three times the size of the BLOB on disk. The FileStream service keeps it all working.

  • pareto priniciple
    July 2, 2021 4:39 am

    I rarely post and reply in brentozar, but this article hit me.

    I inherited a production application that used filestream technology instead of persisting user- and system-generated files on a separate file server and just keeping metadata at the database level to point to the actual files. The previous architect(s) just used all the shining and shimmering features of SQL 2012, including TVFs and SVFs, and implemented our application using these technologies. These “architects” then left the application to us to maintain and extend, so all the burden is now on the database personnel to maintain a clunky and poorly designed architecture, while these architects’ LinkedIn credentials boast nothing but “BROUGHT X # OF APPLICATIONS INTO PRODUCTION INCORPORATING SQL SERVER’S NEWEST FEATURES AND TILL NOW ARE ONLINE”

    Yep, you are correct – though your design sucks, and it was only good for a few users, and the stakeholders didn’t listen to us blowing all the whistles, saying that we can’t afford a rearchitecture! Boy oh boy, years later, it has now become a ticking time bomb and the application is almost at its technological limits.

    We eventually encountered the same problems this article states – huge and slow backups and restores, terribly large data files, etc. – and our application has been in production for so many years now that it will be difficult to move the tens of thousands of binary files out of filestream. I also remember that any network-related error would bring the whole filestream service down, and we also had to clean up orphaned entries in the filestream because of it.

    The filestream service is great for applications that need to generate maybe a few hundred to a thousand “static” files every now and then, but if you have tens of thousands of users generating files, each generating them multiple times, FILESTREAM is not the way to go. The answer is still a file service plus metadata/mapping.

    And to the “architects” out there who are obsessed with using the LATEST and GREATEST features of whatever database tech or software is out there: HOLD YOUR HORSES and think about maintainability and sustainability in the long run.

    Sure, your LINKEDIN says you’re an architect that does this and did that, but no one really knows your design sucks.

    • David Wiseman
      July 2, 2021 6:58 am

      I feel your pain. The more data you have, the harder it is to migrate out of the database. I’ve gone through a couple of these migrations – the largest one involved moving 160TB out of the DB. It can be done, but it has risks associated with it and needs to be handled with care. To scale BLOB data on the DB side, it can help if the data is partitioned so you can use filegroup backups. If the data is immutable, old partitions can be marked read-only – you take a one-off backup and can exclude them from your regular backups. On the restore side you can look at online piecemeal restores. HA/DR becomes pretty essential for a VLDB, and you will want more than one additional copy of the DB.
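
      A rough sketch of that pattern (the database and filegroup names are made up):

      -- Freeze a filegroup that only holds old, immutable partitions...
      ALTER DATABASE DocumentStore MODIFY FILEGROUP Documents2019 READ_ONLY;
      -- ...take one final backup of it...
      BACKUP DATABASE DocumentStore FILEGROUP = N'Documents2019'
          TO DISK = N'X:\Backups\DocumentStore_Documents2019_final.bak';
      -- ...then regular backups only have to cover the filegroups that still change.
      BACKUP DATABASE DocumentStore READ_WRITE_FILEGROUPS
          TO DISK = N'X:\Backups\DocumentStore_rw.bak';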

    • Was the architect hired in a permanent role at your company, or was he a consultant?

      A couple of years ago we had a discussion (I was on L3 support) with my colleague (a service manager) during the transition of a new piece of software done by a project team. The developers and testers were consultants; fortunately, the architects were permanent employees. And the conclusion was that a consultant (developer, tester) will do only the part he/she is paid for. There will be no thinking about how it can impact the service in 1 month, 1 year, or longer. A consultant will focus only on the now, because that’s what they’re paid for. No future… happens 🙂

      • pareto priniciple
        July 3, 2021 5:14 am

        The architects were permanent employees during that time, BUT eventually got poached/ went to other companies.

  • Hello,
    I manage an application that saves documents in the database (usually between 100KB and 1MB each). At the moment it’s 5 TB, constantly increasing (about 2 TB/year).

    Full weekly backups with 15-day retention (long and expensive), daily differentials, log backups every 15 minutes. Always On (asynchronous) database replication between the primary site and the DR site.
    Everything is OK as long as there is no need for a complete restore (e.g. after an accidental deletion of data), which means hours of unavailability…

    Table partitioning in our case would not bring benefits, due to how the data is structured and updated…

    I’ve used filestream in the past, without benefits (indeed, some backup tools, such as Microsoft DPM, had problems managing millions of files through SQL Server, because each time it internally generates an XML index file with the list of physical files…).

    The solution of DB + documents on the file system would be much more convenient… but what about the guarantee of consistency between metadata and the file system?
    In case of recovery I need to have a consistent state, or at least have a MEASURE of the level of inconsistency (easy for the DB, much more difficult for a separate backup/restore of millions of files…).
    And what about replication to the DR site? A file sync tool?

    I’d suggest to those who develop these applications: make document management flexible and scalable, with the possibility of having N archives and adding more when the size grows too large (the metadata records which archive each document is saved in).

  • Alexandre Jobin
    July 5, 2021 12:55 pm

    We have a project that stores files in BLOBs. We did this because it was easier to back up/restore and have consistency between the metadata and the files themselves. Everything in one place! But I understand the concerns and the cons of doing it that way. Now, what’s your strategy to make sure that your database backup is consistent with your file system backup? How can you be sure that your database is always in sync with the file system storage?

    • Great question! The answer has two parts.

      First, in the database, save pointers to the files, not the files themselves. A common technique is to use UNC paths, and store the path name in a field in the database.

      Second, when you save files to the file system, always store new revisions in new paths or as new names.

      That way, if you need to restore the database to a prior point in time, you can do that without having to restore the file system. The file system will have extra new files – but that won’t affect the database since they’re not stored in the database.
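
      Something like this, roughly – the table and column names here are just made up for illustration:

      -- Store a pointer (UNC path) to the file, not the file itself.
      CREATE TABLE dbo.Documents
      (
          DocumentId bigint IDENTITY(1,1) PRIMARY KEY,
          CustomerId bigint NOT NULL,
          FilePath   nvarchar(400) NOT NULL,  -- e.g. a UNC path on the file server
          UploadedAt datetime2(0) NOT NULL DEFAULT SYSUTCDATETIME()
      );

      -- Each new revision gets a brand new path, so restoring the database to an
      -- earlier point in time never leaves it pointing at an overwritten file.
      INSERT INTO dbo.Documents (CustomerId, FilePath)
      VALUES (42, N'\\fileserver\docs\42\2022_Budget\20210705_1255_budget.xlsx');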

      If you want to discuss architectures like that more deeply for an upcoming project, feel free to click Consulting above. Hope that helps!

      • Alexandre Jobin
        July 5, 2021 7:48 pm

        This works if you have to restore the database. You might have more files in the file system than the database really needs. But if you invert the problem and you have to restore the file system to another point in time, now you need to do something to your database, because it has data that points to files that no longer exist. I guess I will have to restore the database to the same point in time too.

        • Alexandre – read the second to last paragraph in the post. I can’t emphasize that enough.

          I know you’re completely confident that you have a good handle on all of the possible backup and recovery tools for file systems. However, you’re working with outdated knowledge. It’s time to sit down with your sysadmins and get a fresh look at what’s out there. Hope that helps!

          • Alexandre Jobin
            July 6, 2021 1:21 pm

            Great. You are surely right when you say that I’m outdated 🙂 One month passes and you are already outdated anyway, so…

            When the time comes, I will check that for sure.

    • … and (see my previous comment):
      1) How do you guarantee sync of the file system between data centers (DR site)?
      2) How do you manage consistency in case of failure and restore of the entire file repository?

  • cough cough *SharePoint* cough cough!

    Kept our storage vendors grinning ear to ear as we added TBs of storage just about on a monthly basis.
    All that batch automation spitting out .xls files daily everywhere, among other stuff, and everyone wants to retain every copy forever.
    Thank god they moved it all to that MS cloud-based SharePoint service.

  • Josef Richberg
    July 7, 2021 8:47 pm

    We stored PDFs in blobs. It keeps the myriad of locations and hierarchical structures, and the infrastructure required to manage those, out of the equation. When migrating from one storage location to another (this was many years ago), files were lost, because the link stored in the database (just the full path to the file) was never updated.

    The ingest was simple and efficient, and ran once nightly (customer service invoices).

    Yes, it can cause the size of your DB to increase tremendously. Yes, it makes certain things like DR and synchronization an issue, but it guarantees the data is where it needs to be (linked to the ancillary data necessary to make sense of the PDF). It was a data element just like every other int, varchar, or date column.

    There are exceptions to every rule, outliers to all cases.

  • Perry Whittle
    July 10, 2021 5:20 am

    Get what you’re saying, but filestream does not store files in the database – they’re stored on the file system, and a link to the file is stored in the database.
    The filestream file system uses NTFS ACLs to secure the files.

    • Perry – when the database is backed up, do you think the backup excludes the files?

      When files are added, do you think they’re copied to other database servers that are replicas of this server, like with Always On Availability Groups, Database Mirroring, Log Shipping, etc?

  • In the case of storing files outside the database, how do you ensure the integrity of your data when backing up?
    I mean, how do you ensure your database backup is in sync with your file backups?

    • Backup architecture is outside of the scope of this blog post, but for personalized help, click Consulting at the top of the screen. I’ve helped lots of companies with easy answers for this one.

  • Peter Kulek
    June 15, 2023 5:19 pm

    If you need to store blobs, then SQL Server is not the database to use.
    Use an ISAM database: MySQL or MariaDB with MyISAM, ADS, or even xBase with Harbour.

