Storage Protocol Basics: iSCSI, NFS, Fibre Channel, and FCoE


Wanna get your storage learn on?  VMware has a well-laid-out explanation of the pros and cons of different ways to connect to shared storage.  The guide covers the four storage protocols, but let’s get you a quick background primer first.

iSCSI, NFS, FC, and FCoE Basics

iSCSI means you map your storage over TCP/IP.  You typically put in dedicated Ethernet network cards and a separate network switch.  Each server and each storage device has its own IP address(es), and you connect by specifying the IP address where your drive lives.  In Windows, each drive shows up in Computer Management as a hard drive, and you format it.  This is called block storage.
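
Because iSCSI is just block storage running over TCP/IP, you can sanity-check connectivity to a storage portal with ordinary networking tools.  Here's a minimal Python sketch – the portal address is hypothetical, but TCP port 3260 is the standard iSCSI port:

```python
import socket

# Hypothetical storage array address -- substitute your own portal IP.
PORTAL_IP = "192.168.10.50"
ISCSI_PORT = 3260  # standard iSCSI portal port

def portal_is_reachable(ip: str, port: int = ISCSI_PORT, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the iSCSI portal succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    state = "reachable" if portal_is_reachable(PORTAL_IP) else "unreachable"
    print(f"iSCSI portal {PORTAL_IP}:{ISCSI_PORT} is {state}")
```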

NFS means you access a file share like \\MyFileServerName\MyShareName, and you put files on it.  In Windows terms, it's like a mapped network drive: you access folders and files there, but you don't see the share in Computer Management as a local drive letter.  This is called file storage, and you don't get exclusive access to NFS shares.  You don't need a separate network cable for NFS – you just access your file shares over whatever network you want.
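
The practical difference shows up in how applications see the storage: with file storage there's no drive to format, just paths.  A tiny Python sketch to illustrate – the share path is the hypothetical one from above, so substitute one you actually have mounted:

```python
from pathlib import Path

# Hypothetical share path -- a UNC path on Windows, an NFS mount point on Linux.
share = Path(r"\\MyFileServerName\MyShareName")  # e.g., Path("/mnt/myshare") on Linux

# To the application, the share is just a directory tree: no block device,
# no formatting, and the file server handles locking and sharing.
target = share / "scratch" / "hello.txt"
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text("written over the network, not to a local block device\n")
print(f"Wrote {target.stat().st_size} bytes to {target}")
```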

Fibre Channel is a lot like iSCSI, except it uses fiber-optic cables instead of Ethernet cables.  It's a separate dedicated network just for storage, so you don't have to worry as much about performance contention – although you do still have to worry.

Fibre Channel over Ethernet (FCoE) runs the FC protocol over Ethernet cables, specifically 10Gb Ethernet.  This gained niche popularity because you can use just one network (10Gb Ethernet) for both regular network traffic and storage network traffic, rather than having one set of switches for fiber and one set for Ethernet.

Now that you’re armed with the basics, check out VMware’s PDF guide, then read on for my thoughts.

What I See in the Wild

1Gb iSCSI is cheap as all get out, and just as slow.  It's a great way to get started with virtualization because you don't usually need much storage throughput anyway – your storage is constrained by multiple VMs sharing the same spindles, so you're getting random access, and it's slow anyway.  It's really easy to configure 1Gb iSCSI because you've already got a 1Gb network switch infrastructure.  SQL Server on 1Gb iSCSI sucks, though – you're constrained big time during backups, index rebuilds, table scans, etc.  These large sequential operations can easily saturate a 1Gb pipe, and storage becomes your bottleneck in no time.
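
The back-of-the-envelope math makes the point.  In the Python sketch below, the 80% usable-bandwidth figure and the 500GB database size are rough assumptions for illustration, not measurements:

```python
# Why large sequential operations saturate 1Gb iSCSI: quick throughput math.
GBIT = 1_000_000_000          # bits per second on the wire
USABLE_FRACTION = 0.8         # assumed fraction left after TCP/iSCSI overhead
DB_SIZE_GB = 500              # hypothetical database size

for link_gbps in (1, 10):
    usable_bytes_per_sec = link_gbps * GBIT * USABLE_FRACTION / 8
    mb_per_sec = usable_bytes_per_sec / 1_000_000
    backup_seconds = DB_SIZE_GB * 1_000_000_000 / usable_bytes_per_sec
    print(f"{link_gbps}Gb link: ~{mb_per_sec:.0f} MB/s, "
          f"{DB_SIZE_GB}GB backup in ~{backup_seconds / 60:.0f} minutes")

# 1Gb link:  ~100 MB/s, 500GB backup in ~83 minutes
# 10Gb link: ~1000 MB/s, 500GB backup in ~8 minutes
```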

NFS is the easiest way to manage virtualization, and I see a lot of success with it.  It's probably an easy way to manage SQL clusters, too, but I'm not about to go there yet.  It's just too risky if you're using the same network for both data traffic and storage traffic – a big stream of sudden network traffic (like backups) over the same network pipes is a real danger for SQL Server's infamous 15-second IO errors.  Using 10Gb Ethernet mitigates this risk, though.

Fibre Channel is the easiest way to maximize performance because you rule out the possibility of data traffic interfering with storage traffic.  It's really hard to troubleshoot, and it requires a dedicated full-time SAN admin, but once it's in and configured correctly, it's happy days for the DBA.

Want to learn more?  We've got video training.  Our VMware, SANs, and Hardware for SQL Server DBAs Training Video is a 5-hour video series explaining how to buy the right hardware, configure it, and set up SQL Server for the best performance and reliability.

Buy it now.


9 Comments

  • [Disclosure, I’m a product manager for FCoE for Cisco]

    Hi Brent,

    I like the idea of putting together a primer of storage protocols for people who are just coming into the subject and want to know the basics. Good luck on tackling that behemoth task. 🙂

    As I mentioned on Twitter, I’d be careful using the *cabling* types as a differentiator between protocols. The physical cable really does not make a difference to the protocols that run on top of it. For instance, Ethernet uses both Copper and Optical cabling, as does Fibre Channel, as does Fibre Channel over Ethernet (FCoE).

Perhaps another way to think of it is to identify the reasons why people use them. Fibre Channel SANs are considered deterministic. That is, we know the relationship between a host/storage (called target/initiator in storage parlance) *before* we connect them up. This gives the best, most predictable performance for storage traffic.

    iSCSI, on the other hand, is assigned a path once the initiators and targets are already in place. In other words, the relationship is not pre-determined, but established “on the fly.” This means that (generally speaking) iSCSI has been easier to deploy because the network does the heavy lifting (e.g., establishing the addresses of the nodes, setting up the path, preventing loops, etc.). Historically this has been a very quick and inexpensive way to get block storage connectivity.

    FCoE has more in common with FC than iSCSI, though. Like FC, it is deterministic. Fundamentally, it uses the same considerations and has the same requirements as FC. However, one of the advantages is that you can *also* run other types of traffic – everything from iSCSI to your Facebook traffic to sending this blog over the Internet – without any of them interfering with each other.

    Like iSCSI, you can get hardware or software initiators for FCoE for free, which can help bring down the cost considerably as well. I’m not sure if I’d qualify it as a ‘niche’ technology, as it’s still in the early growth cycle and hasn’t plateaued yet.

    Nevertheless, I hope this may help in some way.

    Best,
    J

  • (Ooops. I may have risked misleading people: host = initiator, storage = target. I reversed it above. Sorry!)

  • Thank you for that update… I’m just learning this stuff, and read that and thought ‘that’s backwards from what I thought’ – darn, I thought I was starting to understand slightly. After your update, I think I’m good.

  • When you say “NFS”, don’t you mean CIFS? NFS is a Unix protocol; it’s hardly ever used in Windows, whereas CIFS is far more commonly used in Windows environments than in Unix environments.

  • Any thoughts on 10Gb iSCSI?

    From what I am reading, this is comparable to most FC solutions. The general suggestions I have seen so far have been “If you have FC, then extend. If you have nothing, then iSCSI.”

    Any comment?

  • Do you have any tips on gathering metrics to identify what solution is best needed?

