
Do you know how much bandwidth you have from your SQL Server to your storage? Are you clear on what all the options are and how they compare?

If you’re not sure, don’t worry. In this blog post I’ll explain the basics and link you up with a poster that will make everything more clear.

How Fast is That Connection?

Check out the Bandwidth Reference Poster

The first thing to know is that “Gigabit” is NOT the same as “Gigabyte”.

When I talk to clients using SAN storage with their SQL Servers, I find that usually the connection speed and type is one of these:

  • 1Gb iSCSI
  • 4Gb Fibre Channel

Notice the small ‘b’ in each of those? That means ‘Gigabit’. This isn’t obvious to most people because of the way this is commonly said out loud. Most people say “One Gig Eye-Scuzzy” or “Gig-E” (the E is for Ethernet) for the first option, and “Four Gig Fibre” for the second option. Based on the “gig” in there, it’s easy to think that this connection type can transfer 1 gigabyte of data per second.

That’s a long way from the truth. We’re talking one gigabit per second. That’s 1 billion bits. That sounds like a lot until you do the math. There are 8 bits per 1 byte. If we translate 1 billion bits per second into megabytes, we have a theoretical maximum of transferring only about 125 megabytes/sec through a 1Gb iSCSI connection. That’s also theoretical— in the real world, we aren’t going to get that much. Even if we could, when it comes to modern databases 125 megabytes/sec is a pretty tiny straw to slurp gigabytes and gigabytes of data through!
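
If you want to sanity-check that math, here's a minimal sketch (plain Python, purely illustrative) that converts a link's rated gigabits per second into megabytes per second. It only does the naive conversion, so it ignores encoding and protocol overhead, and real-world throughput comes in lower still.

```python
# Naive conversion from a link's rated gigabits/sec to megabytes/sec.
# This is the theoretical maximum only: it ignores encoding and protocol
# overhead, so real-world throughput comes in lower.
def gbps_to_mb_per_sec(gbps):
    bits_per_sec = gbps * 1_000_000_000      # "giga" = one billion bits
    return bits_per_sec / 8 / 1_000_000      # 8 bits per byte, then bytes to MB

for name, gbps in [("1Gb iSCSI", 1), ("4Gb Fibre Channel", 4), ("10Gb iSCSI", 10)]:
    print(f"{name}: ~{gbps_to_mb_per_sec(gbps):,.0f} MB/sec theoretical max")

# 1Gb iSCSI: ~125 MB/sec theoretical max
# 4Gb Fibre Channel: ~500 MB/sec theoretical max
# 10Gb iSCSI: ~1,250 MB/sec theoretical max
```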

Why is 1Gb Ethernet So Common?

I find clients using 1Gb iSCSI a lot— a few are using 10Gb iSCSI, and others use Fibre Channel, but there’s still a lot of 1Gb iSCSI out there. Looking at the comparatively tiny size of this pipe on the chart, that might seem pretty strange.

The prevalence of 1Gb connections comes down to cost. Historically, 10Gb Ethernet was super pricey. In 2002, the per-port cost of 10Gb was around $39K, compared to $1K per port for 1Gb. These days, you’re looking more at around $750 per 10Gb port— but lots of companies still have older equipment around. Change, like 1Gb iSCSI, is slow.

There’s Often More to the Story

If you have performance problems, don’t go lighting a fire in your storage administrator’s office immediately after looking at the Bandwidth Reference. There are lots of things to factor in to get to the bottom of a performance problem, and the connection speed to storage is just one of them.  How many connections do you have? Can you use multi-pathing? How fast is your storage? Is storage really your big bottleneck? There’s lots to dig into.

Also, this is only the interface bandwidth.  If you’re using a single magnetic hard drive doing random reads, it doesn’t really matter what interface you’re using, because you can’t fill a USB2 pipe.  This chart matters most when you’re using multiple drives in a RAID array or when you’re using SSDs. (Not using RAID for your databases in production? Maybe the fire isn’t such a bad idea.)
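
To make the single-drive point concrete, here's a back-of-the-envelope sketch. The IOPS and read-size figures below are assumed round numbers for illustration, not measurements of any particular drive:

```python
# Back-of-the-envelope only: the IOPS and read-size figures are assumed
# round numbers for illustration, not measurements of any particular drive.
random_iops = 150            # assumption: rough ballpark for one 7,200 RPM spindle
read_size_kb = 64            # assumption: 64KB reads
single_drive_mb_per_sec = random_iops * read_size_kb / 1024

usb2_mb_per_sec = 480 / 8    # USB2 is rated at 480 megabits/sec, ~60 MB/sec theoretical

print(f"One spindle, random reads: ~{single_drive_mb_per_sec:.0f} MB/sec")
print(f"USB2 theoretical max:      ~{usb2_mb_per_sec:.0f} MB/sec")
print("1Gb iSCSI theoretical max: ~125 MB/sec")
# With those assumptions, a single drive delivers under 10 MB/sec of random
# reads -- nowhere near filling even the USB2 pipe, let alone faster links.
```

With multiple spindles in a RAID array, or with SSDs, those per-drive numbers multiply, and that's when the size of the pipe starts to matter.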

To get started, maybe just strike up some friendly conversations to figure out what your infrastructure is like.

Why is Memory on the Poster?

I like the Bandwidth Reference because it illustrates something else besides how slow some of the commonly used connection types are.

Check out the theoretical maximum bandwidth for memory. The bandwidth for memory is so big that 40% of its pipe needed to bend to fit on the page! This is staggering to think about in terms of application performance. We know logically that performance is better if we don’t have to read from disk, but this graphically shows how much faster that can be.
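
For a rough sense of scale, here's a quick sketch. The memory figure in it is an assumed round number for illustration: one channel of DDR3-1600 moves about 12.8 GB/sec, and servers usually have several channels, so the real gap is even bigger.

```python
# Rough scale comparison. Assumption: one channel of DDR3-1600 memory moves
# about 12.8 GB/sec; real servers have several channels, so the gap is bigger.
memory_mb_per_sec = 12_800
iscsi_1gb_mb_per_sec = 125        # theoretical max from the math earlier

ratio = memory_mb_per_sec / iscsi_1gb_mb_per_sec
print(f"Memory moves roughly {ratio:.0f}x more data per second than 1Gb iSCSI")
# Memory moves roughly 102x more data per second than 1Gb iSCSI
```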

Remember one more thing, too– you can get these same speeds reading data from memory in more places than just your database server. You can also read from data cached in memory on your application servers without even talking to the database. Those are the fastest queries of all.
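
As a purely illustrative sketch (the names below are hypothetical, and this isn't any particular caching product), the app-tier version of that idea boils down to checking local memory before ever talking to the database:

```python
# Minimal read-through cache sketch. The function and argument names are
# hypothetical, just to illustrate the pattern -- not a specific product.
cache = {}

def get_customer(customer_id, fetch_from_database):
    if customer_id in cache:                  # hit: served from app-server RAM,
        return cache[customer_id]             # no network trip, no storage read
    row = fetch_from_database(customer_id)    # miss: only this query hits SQL Server
    cache[customer_id] = row                  # remember the row for next time
    return row
```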

Download the Poster

To get the full-sized goods, you just need to create a free login to our site. We promise not to share or sell your information. Download the poster here.

  1. Pingback: T-SQL Tuesday #028 – Jack of All Trades, Master of None? | Jason Crider

  2. Getting a “members only” error when I try to download the zip

    • Hi Dave,

      You need to be signed into the community to download the poster. Apologies if it’s a little confusing. You create a free sign-in here: http://www.brentozar.com/community/

      Then, once you’re signed in, it should let you grab it. I just tested it with my user-level account (not an admin) and it worked for me. Let me know if it gives you problems.

      When you set up a community account you will receive our weekly newsletter with links from around the internet. We don’t sell your name or anything like that.

      Thanks!
      Kendra

  3. I have seen that some database systems are starting to use InfiniBand. How does InfiniBand compare?

  4. My company has the dreaded 1Gb iSCSI, and currently we are looking at migrating SQL databases to our VMWare infrastructure. What makes this situation a bit unique is that the last database server was installed as a physical box with Microsoft iSCSI LUNs; when we do IO tests (Iometer), it seems that our VMWare boxes get higher throughput. We have a VMWare server farm, HA, SRM, etc., and our SQL Server is fairly low on the transaction scale (less than 15-20 concurrent users making minor changes).

    Does this seem like an environment that would be prime for virtualizing SQL? We are looking at trying to increase our throughput as much as possible on our VMWare boxes and hope to have multipathing up with 4Gb.

    • Alex – well, giving customized advice like this is beyond what we can do in a comment. I’d want to step back and ask about user satisfaction with the current performance, your goals for RPO/RTO, and the versions/editions of SQL Server for licensing purposes. These questions – plus the storage – all combine to get to the right answer. If you’d like to get help from us, shoot us an email at help@brentozar.com.

    InfiniBand is a more mature technology than Ethernet for a robust database storage layer, thanks to its inherently high bandwidth. Ethernet protocols are fine for obtaining decent throughput, but they just do *not* scale up in terms of IOPS. As a consequence, choosing Ethernet over IB prevents any decent physical consolidation of multiple applications on a single platform while still ensuring decent IOPS concurrency quotas for each application.

    InfiniBand prices have decreased considerably in the last few years: one can get a pretty decent 8-port switch for 5,000 dollars and a 1,000-dollar card per server, at 56Gb/sec per port, for storage systems that can easily go well above 60K IOPS at a price tag below 20,000 dollars (including disks).

    Unfortunately, Ethernet vendors are pushing 10Gb Ethernet infrastructure so hard that most gullible storage and system architects buy into the bandwidth claims but totally ignore the importance of IOPS for databases. As a consequence, I consider Ethernet (with few exceptions) a less competitive storage network medium than IB. I have tried and conducted POCs on both for major customers, and I have no doubt about this. But I would love to change my mind if somebody has a different experience.

    Hope this helps.
