
Geeks love duct tape.  Sure, we pride ourselves on building a rock-solid solution the right way, but when the brown stuff hits the moving metal stuff, we love to show off our ingenuity.  We carry our money in duct tape wallets, wear duct tape shirts, practice ductigami, and hang our duct tape from custom metal brackets.

I’m about to discuss a feature of SQL Server that’s probably going to horrify you at first, but take a step back and think of it as duct tape.  I don’t want you to build a solution from scratch with it, but it’s a killer tool that deserves a prominent spot in your bag of tricks.  Since I’ve started using SQL 2012 in my lab, not a week has gone by that I haven’t used this feature to do something faster and easier.

A Brief History Lesson on Trace Flag 1807

In the past, Microsoft SQL Server required its database files to be stored on local drives or SAN storage.  The SQL Server 2000 I/O Basics whitepaper explains in stern terms why not just any storage will do – we have to be sure our data will hit the disk in a calm, orderly fashion.  Server vendors got the point, and everybody’s local storage and SAN storage followed the guidelines.  Over time, we stopped buying hardware from a Windows Hardware Compatibility List because, well, everything worked.  If you could buy it off the shelf from a major vendor, you were fine.

For ambitious people who wanted to gamble with their data, trace flag 1807 let you store your database files on network shares or mapped drives.  This was a pretty bone-headed idea – heck, it was hard enough to get our local and SAN storage to be reliable enough – so few people used it.  Database administrators are usually way too paranoid to let their databases get corrupted just because somebody tripped over a network cable or a $200 NAS box failed.  Even if you didn’t mind unreliability, performance was a major problem – network file shares just weren’t fast enough to handle database access.

Network attached storage gained mainstream credibility over the last few years, and it’s gained widespread use thanks to virtualization and the Network File System (NFS) protocol.  Under virtualization, each drive on your virtual server is really just one big file (VMDK or VHD), and it’s easier to manage big files on a file share than to map a bunch of LUNs to VMware hosts.

With tuning, NFS performance is fine for virtualization, and this has a bit of a hidden meaning: if we’re running our entire SQL Server in VMware, then we’re already doing database access over NFS.  So why can’t we let physical SQL Servers access their own databases via NFS too?

SQL Server 2008 R2 and 2012 <3 Your NAS

SQL Server 2008 R2 did away with the trace flag and lets you put your data and log files anywhere.  This statement works fine:

CREATE DATABASE [DestinedForGreatness]
ON  PRIMARY
( NAME = N'DestinedForGreatness', FILENAME = N'\\LittleBlackBox\SQL2012data\MSSQL\Data\DestinedForGreatness.mdf' )
LOG ON
( NAME = N'DestinedForGreatness_log', FILENAME = N'\\LittleBlackBox\SQL2012data\MSSQL\Data\DestinedForGreatness_log.ldf')
GO

And just to make things really clear, that statement doesn’t just work fine – it doesn’t give you a single warning.  Just like that, you’ve got a database relying on a network share, along with all the gotchas that entails.  Its database icon looks just like the rest in SSMS, and there’s nothing to suggest that your hosting strategy just got a lot riskier.
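SQL Server won’t flag these databases for you, but you can find them yourself.  Here’s a minimal sketch using the sys.master_files catalog view — the LIKE pattern simply matches any file path that starts with two backslashes:

```sql
-- List every database file on this instance whose physical path is a UNC share.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       physical_name
FROM sys.master_files
WHERE physical_name LIKE '\\%';   -- UNC paths begin with \\
```

Run it periodically, or after an emergency, so network-attached files don’t linger unnoticed.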


File share access goes over the same network paths as your client communications (queries, results, RDP sessions, etc).  A heavily accessed database could saturate your network cards, thereby slowing down everything else – or vice versa.  An unrelated file copy or backup could bring your network-attached database to its knees.

You also have to start monitoring the remote file share’s performance.  It can get slowed down by CPU or memory issues, or just by other activity happening on that same file share.  This means we have to capture Perfmon data on that remote file server, including the physical disk counters.  I don’t see this as a drawback – after all, I need to do this same level of performance monitoring on my SAN, too.

Use Cases for Databases on NAS

When I first read about this SQL Server feature, I had the same reaction you’re probably having: nobody in their right mind should use this, right?  As it turns out, though, I keep using it to make my life easier.

Low log file space emergency?  Add a file on a NAS.  With a few mouse clicks, disaster is averted.  Let your transactions continue (albeit slowly), then go back later to remove that extra log file.
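As a sketch, reusing the database from the CREATE DATABASE example above (sizes and file names are just illustrations):

```sql
-- Emergency: add a second log file on a network share so transactions can continue.
ALTER DATABASE [DestinedForGreatness]
ADD LOG FILE
( NAME = N'DestinedForGreatness_log2',
  FILENAME = N'\\LittleBlackBox\SQL2012data\MSSQL\Data\DestinedForGreatness_log2.ldf',
  SIZE = 10GB );
GO

-- Later, after the crisis: empty the extra file, then remove it.
DBCC SHRINKFILE (N'DestinedForGreatness_log2', EMPTYFILE);
ALTER DATABASE [DestinedForGreatness] REMOVE FILE [DestinedForGreatness_log2];
```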

Need to restore a database fast?  Attach it from anywhere.  I’ve often needed to restore just one table from a big database backup, but I haven’t had the room to restore the entire database in production.  No problem – just restore the database on another server (like development), but use a network share name like \\MyFileServer\TempFolder\DatabaseName.mdf as the target.  All of the disk-intensive and network-intensive work happens on the dev server.  When the restore finishes, detach the database in development, and the production server can attach the database while it sits on the network share.  Copy out the tables you need, detach the database, and you’re off.
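Sketched out with hypothetical names (BigDatabase and the logical file names are placeholders — check your backup’s actual logical names with RESTORE FILELISTONLY first):

```sql
-- On the dev server: restore the backup, but target a network share.
RESTORE DATABASE [BigDatabase]
FROM DISK = N'\\MyFileServer\Backups\BigDatabase.bak'
WITH MOVE N'BigDatabase'     TO N'\\MyFileServer\TempFolder\BigDatabase.mdf',
     MOVE N'BigDatabase_log' TO N'\\MyFileServer\TempFolder\BigDatabase_log.ldf';
GO
-- Detach it from dev...
EXEC sp_detach_db @dbname = N'BigDatabase';
GO
-- ...then on the production server, attach it straight off the share.
CREATE DATABASE [BigDatabase]
ON ( FILENAME = N'\\MyFileServer\TempFolder\BigDatabase.mdf' ),
   ( FILENAME = N'\\MyFileServer\TempFolder\BigDatabase_log.ldf' )
FOR ATTACH;
```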


Hate drive sizing hassles?  Use NFS.  I run a lot of SQL Servers in virtualization, and I move databases around from server to server.  Rather than worrying about whether a server has enough free drive space, I’ve just created a file share on my NAS, and all my SQL Server 2012 instances attach their databases from that share.  I can manage free space as a pool, and I never have to worry about an individual server running out of free space again.

Should You Run Databases Over the Network?

The amateur question is, “Do you feel lucky, punk?”

The professional question is, “Do the performance and reliability of your network file shares match up with the performance and reliability needs of the database?”

When I’ve asked myself the professional question, I’ve been surprised at some of my answers.  Network storage probably shouldn’t be your first default choice for new databases and files, but give it a chance.

The key to success is getting everybody to agree on both the capabilities of the NAS and the requirements of the database.

  1. Great article Brent! I’m constantly amazed at how many system admins will ding a solution “just because” or without really understanding how it works and the pros and cons, but then elevate another solution as “the answer” but don’t have any greater understanding of that solution either.

    In the end, it’s “what they know” even though they don’t really understand why things work the way they do.

  2. Heresy, complete unabridged Heresy.

    Oh wait, the NetApp is calling…….

  3. One issue that I ran into is if you are backing up your databases with DPM 2010, it cannot handle the underlying database files being on UNC paths.

  4. Very nice read on SQL 2012 and virtualization.
    Thanks for the informative writeup and Nice blog by the way.

  5. Great article.

    A thought, though: would I be able to run a DB in SQL 2000 compatibility mode using a network share? It could enable quick DR data testing without changing the application, DNS server names, or ODBC type definitions.

    Cheers
    Paul

  6. Good read. Thanks again Brent, adding a much-needed additional LDF to a network share is a great temporary fix.

  7. Excellent article! See, SQL server is never boring. There is always something interesting to do and learn.

    Cheers!

    Rudy

  8. Thanks for pointing out some new nice tricks to have in your toolchest, I have at times been restoring to a secondary server and moving SAN LUNs, but this approach makes it even easier when I don’t have to involve the SAN admin.

  9. Hi Brent,
    What about configuring an iSCSI target on the storage server rather than mapping a network drive? It’s free and might be more resilient

    http://www.microsoft.com/download/en/details.aspx?id=19867

    Regards
    Paul

    • Paul – it’s only free if you’ve got iSCSI set up, and most folks don’t. The reason this feature is attractive to me is its instant flexibility. You don’t need anything set up in order to do it. Tell your network admins that you’re running iSCSI over the regular Windows network, though, and you’re liable to have a mutiny on your hands. ;-) If you want more resiliency, I’d definitely go the iSCSI route but with a separate network for the traffic (which will also help the resiliency angle.)

      • Brent,
        I wrote a technical article for sqlservercentral about ‘How to configure a virtual SQL Server 2012 cluster using an iSCSI SAN’ which they are publishing on Monday 13th. I really wish I’d put a footnote in the article saying ‘Think carefully before routing your SAN related Disk IO over your virtual IP network’.
        Best wishes
        Paul

  10. This will give an awful lot of flexibility for on-the-fly “what the? …” moments when you discover the BAs really badly underestimated database growth and activity.

    In our situation, would I run a db for any period of time off the network? Unlikely – the frequency with which our network breaks or does odd things ( yes we’d like to sack a few people, but I digress ) makes it inadvisable.

    A case in point: we had so many outages on a network segment with mirrored SQL 2008 servers that when the proverbial dust settled, we had half our databases on the principal server and half on the mirror, because the main switch went up and down that many times in one night.

    But in general, yes very useful. Thanks for bringing it up.

  11. Pingback: SQL2012 Clustering – the drive letter is gone! « Rabin's Ramblings

  12. Great Article. For sure , someone will come along and be critical of work you do to save resources.
    But it’s great to see the tape solutions.

    In fact, the older guys used a lot of tape in IT.
    /:>

  13. I am running SQL Server 2008R2 on NFS because we are (re)building iSCSI solution with our crappy HP MSA SAN. I did not have room on the ESX host or on physicals for the data, so I provisioned tiny 4TB NAS and then attached databases to it from the SQL (running physical on small 67GB 1U pizza box).

    To minimize the network footprint I attached a separate 1GB switch between the SQL host and NAS, and used a separate 172.28.0.0/24 network to connect them. So no database traffic is ever going over the public switch, which I was pretty sure would eliminate disk probs for me.

    I’m not in the office right now but I just got an email from coworker saying that SQL reported “LogWriter: Operating system error 64 (the specified network name is no longer available)”.

    Any perf tuning I should do at the SQL Server level to account for NFS drives?

    Thanks for the article…

    • Andrew – if you get an error saying the network name for the shared storage is no longer available, that’s actually an availability problem, not a performance problem. It’s failing to perform period, heh. (I shouldn’t be laughing, but…) This is not an unusual problem with low-end (sub-$5k) network-attached storage devices. Depending on how much load you’re trying to put on them, they may just fall over and reboot – which of course is bad news for SQL Server.

      I don’t have a quick and easy silver bullet here. I would avoid running SQL Server on a NAS unless you’ve load tested the NAS under extreme conditions and you know that it won’t reboot no matter what.

  14. Hi,

    I have a client that generates a fair volume of data, running SQL 2008R2, requesting a storage device. From your article, it seems a NAS will work just fine, rather than investing in a SAN server….right?
    Nice read.

  15. Thank you Brent for your response.

    Well, they need a high availability server for SQL server 2008 backup. I would have recommended a NAS but really cost is not an issue to my client. So I am considering either HP Storageworks x1400 or X3400. Does this make sense?

  16. Hi All, Finally got my first SQL 2008 R2 cluster built (VMs under ESX 3.5 as well). Cluster is running on sub-par after-market dedicated iSCSI switches using DataCore SANsymphony-V to wrap the two legacy HP MSA1XXX SANs I had available (I know, I know…what a mess).

    The original SQL drive share problem I had with the NAS device was that with databases over 11GB: DBCC and other calls would fall over mid-processing. The NAS was connected to the (physical) SQL server using dedicated 1GB switch, Windows Updates turned off, and problems were observed even in simple test scenarios (e.g. running dbcc and other maintenance commands from query manager).

    In my shiny new SAN setup where the network connections are even worse due to slowness imposed by ESX vswif and general PE2650 crappiness as an (x86!) ESX host, the same failing functions…work great!

    I can only assume that the iSCSI protocol, in conjunction with the full 60-second Windows timeout at the guest OS level, is able to handle many “hiccups” on the iSCSI fabric that a standard Windows share is not.

    Based on this admittedly small set of empirical data, I have to say I won’t be trying a NAS again for SQL database of any size (small databases were working fine over the share). Instead, a proper SAN – albeit low end – appears more reliable.

    Setting up the cluster is easy – once you’ve done it a few times. That first time was a great way to burn late-night oil, though…

  17. I would like to know what happens to a database actively running on an NFS fileshare when the NAS service goes down. I have a HA NAS environment (failovers do occur) and would only consider running databases on an NFS share if a NAS service failure doesn’t result in database corruption.

    • Wayne – hmm, let’s take a step back here. In any shared storage system, there’s a possibility that the shared storage will suddenly disappear. This is true with even the most redundant storage systems. I’ve got not-so-fond memories of a SAN vendor coming onsite to do a firmware upgrade (because they said I’d done the last one incorrectly) and they proceeded to accidentally bring down the entire storage network before all of the servers were down. Anytime there’s humans, there’s the chance for human error.

      Anytime storage disappears without warning, there’s a chance for bad things to happen to the storage. But again, let’s take a step back – even when the storage doesn’t disappear, there’s a chance for corruption. You can never just say, “We’re okay, I don’t need to worry about corruption.”

      • Thank you for the reply. I probably didn’t word the question well – I’m a Linux system administrator, not a DBA. Would a NAS service failure lead to a need for restoration from overnight dumps?
        I’m trying to decide whether to spend $15K to upgrade storage hardware (add controllers etc.) or present an NFS share to the person running the database.

  18. hi

    I have run a database from a network share on Windows Server 2008, and after I restarted the OS, I found the database can’t be accessed – Object Explorer shows an error. Do you know the reason?

    Thanks.

    Stacey

  19. Hey Brent,

    Know where this is super useful? The cloud, where you run VMs that have all kinds of hard drive limitations put on by the cloud vendor.

    Ike

    • Ike – that’s interesting. So you’ve found networks to be very reliable in cloud VMs?

      • Reliability from cloud vendors? Ugh..you ask the tough questions.

        Who’s to say that they aren’t doing that underneath when they offer locally attached storage on the cloud? It’s all obfuscated from us.

        • You know what would be crazy? If the network-attached was FASTER than whatever layer they’re using with local attached. In fact, I’m testing that tonight.

        • Heh – well, it’s actually fairly easy to tell with cloud providers like Amazon when you get a machine with local SSDs. If you load test it, it’s quite clear that it is indeed local storage.

          • On a Windows Azure VM at the bottom, you have an “Attach Device” button that allows you to attach up to 16 VHDs to an Azure VM. That gives you 500 IOPS per VHD. Those VHDs are stored in Azure Blob Storage. They have a unique driver that allows them to mount those blob store files as local drives. As far as I know, Azure Blob Storage only has a REST interface….super slow.

  20. Brent, good explanation on why one should not use NAS with SQL Server. However, I was thinking about your view on storing just SQL Server database backups onto the NAS, rather than the actual data and log files.

    Do you see any issues there? I understand that we have to use the UNC path into the backup command and provide the access to the SQL Server service account (a domain account) to a backup folder on NAS.

  21. hi brent,

    Could you run SQL Server Express Edition on a NAS with trace flag 1807 on?
