Do you know how much bandwidth you have from your SQL Server to your storage? Are you clear on what all the options are and how they compare?
If you’re not sure, don’t worry. In this blog post I’ll explain the basics and link you up with a poster that makes everything clearer.
How Fast is That Connection?
The first thing to know is that “Gigabit” is NOT the same as “Gigabyte”.
When I talk to clients using SAN storage with their SQL Servers, I find that usually the connection speed and type is one of these:
- 1Gb iSCSI
- 4Gb Fibre Channel
Notice the small ‘b’ in each of those? That means ‘gigabit’. This isn’t obvious to most people because of the way this is commonly said out loud. Most people say “One Gig Eye-Scuzzy” or “Gig-E” (the E is for Ethernet) for the first option, and “Four Gig Fibre” for the second option. Based on the “gig” in there, it’s easy to think that this connection type can transfer 1 gigabyte of data per second.
That’s a long way from the truth. We’re talking one gigabit per second, which is 1 billion bits. That sounds like a lot until you do the math: there are 8 bits in a byte, so 1 billion bits per second works out to a theoretical maximum of only about 125 megabytes/sec through a 1Gb iSCSI connection. And that’s theoretical. In the real world, protocol overhead means we won’t get even that much. Even if we could, 125 megabytes/sec is a pretty tiny straw for a modern database to slurp gigabytes and gigabytes of data through!
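The math above is simple enough to sketch in a few lines of code. Here’s a quick sanity check that converts advertised link speeds into theoretical maximum throughput (these are raw line rates; real-world numbers are lower due to protocol and encoding overhead):

```python
# Rough conversion from link speed (gigabits/sec) to theoretical
# maximum throughput (megabytes/sec). Real-world throughput will
# be lower due to protocol and encoding overhead.

def gbps_to_mb_sec(gigabits_per_sec):
    """Convert gigabits/sec to megabytes/sec (8 bits per byte)."""
    bits_per_sec = gigabits_per_sec * 1_000_000_000
    bytes_per_sec = bits_per_sec / 8
    return bytes_per_sec / 1_000_000

for name, speed in [("1Gb iSCSI", 1), ("4Gb Fibre Channel", 4), ("10Gb iSCSI", 10)]:
    print(f"{name}: ~{gbps_to_mb_sec(speed):.0f} MB/sec theoretical max")
# 1Gb iSCSI: ~125 MB/sec theoretical max
# 4Gb Fibre Channel: ~500 MB/sec theoretical max
# 10Gb iSCSI: ~1250 MB/sec theoretical max
```

Notice that even 4Gb Fibre Channel tops out at roughly half a gigabyte per second on paper.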
Why is 1Gb Ethernet So Common?
I find clients using 1Gb iSCSI a lot. A few are using 10Gb iSCSI, and others use Fibre Channel, but there’s still a lot of 1Gb iSCSI out there. Looking at the comparatively tiny size of this pipe on the chart, that might seem pretty strange.
1Gb connections stick around because of cost. Historically, 10Gb Ethernet was super pricey: in 2002, the per-port cost of 10Gb was around $39K, compared to $1K per port for 1Gb. These days you’re looking at more like $750 per 10Gb port, but lots of companies still have older equipment around. Change, like 1Gb iSCSI, is slow.
There’s Often More to the Story
If you have performance problems, don’t go lighting a fire in your storage administrator’s office immediately after looking at the Bandwidth Reference. There are lots of things to factor in to get to the bottom of a performance problem, and the connection speed to storage is just one of them. How many connections do you have? Can you use multi-pathing? How fast is your storage? Is storage really your big bottleneck? There’s lots to dig into.
Also, this is only the interface bandwidth. If you’re using a single magnetic hard drive doing random reads, it doesn’t really matter what interface you’re using, because you can’t fill a USB2 pipe. This chart matters most when you’re using multiple drives in a RAID array or when you’re using SSDs. (Not using RAID for your databases in production? Maybe the fire isn’t such a bad idea.)
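To see why a single spinning disk can’t fill even a slow pipe, consider a back-of-the-envelope estimate. The IOPS and block-size figures below are illustrative assumptions (a rough ballpark for a 7200 RPM drive and a common SQL Server read size), not measurements from any particular system:

```python
# Back-of-envelope check: can one magnetic disk doing random reads
# saturate its interface? These figures are illustrative assumptions.

RANDOM_IOPS = 150      # ballpark random read IOPS for a 7200 RPM drive
BLOCK_SIZE_KB = 64     # SQL Server commonly reads 64KB extents

throughput_mb_sec = RANDOM_IOPS * BLOCK_SIZE_KB / 1024
print(f"~{throughput_mb_sec:.1f} MB/sec of random reads")
# ~9.4 MB/sec of random reads
```

Under these assumptions, one disk’s random reads deliver under 10 MB/sec, nowhere near the ~125 MB/sec of 1Gb iSCSI, let alone anything faster. That’s why the interface bandwidth only becomes the limiting factor once you stripe many drives together or move to SSDs.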
To get started, maybe just have some friendly conversations to figure out what your infrastructure looks like.
Why is Memory on the Poster?
I like the Bandwidth Reference because it illustrates something else besides how slow some of the commonly used connection types are.
Check out the theoretical maximum bandwidth for memory. The bandwidth for memory is so big that 40% of its pipe needed to bend to fit on the page! This is staggering to think about in terms of application performance. We know logically that performance is better if we don’t have to read from disk, but this graphically shows how much faster that can be.
Remember one more thing, too: you can get these same speeds reading data from memory in more places than just your database server. You can also read from data cached in memory on your application servers without even talking to the database. Those are the fastest queries of all.
Download the Poster
To get the full-sized goods, you just need to create a free login to our site. We promise not to share or sell your information. Download the poster here.