SQL Server on a SAN: Dedicated or Shared Drives?


A reader wrote in and asked:

We’re running SQL Server with blades and a NetApp SAN. We have a few hundred databases ranging from 100 MB to 200 GB. All data, logs, tempdb, etc. are located in the same 30-disk pool.

Apparently this was set up using NetApp’s guidelines. NetApp’s recommendation is to put everything in one aggregate, or a couple of aggregates, with the caveat that performance will suffer if you don’t use all disks in the same aggregate. They actually recommend putting data and log files together!

Have you ever seen anything like this before? Is this a stable system or is it craziness?

By the way, your blog is absolute genius!  I love everything you’ve ever written, and your SQL Server expertise is surpassed only by your good looks!

(Okay, so maybe I added that last paragraph myself.)

First off, yeah, I’ve seen that before, and it’s not craziness.  There are times when this is the right way to do a SAN configuration, and there are times when it’s wrong.  Some SAN vendors (like EMC and NetApp) recommend this type of configuration as a default, and they only recommend changing it when you can justify better performance with a dedicated drive setup – meaning physical drives are dedicated to specific tasks, like a six-disk RAID 10 array for your transaction logs.

Before I give an answer for this one specific scenario, let’s talk about the decision factors.

When you’re choosing between two things, you want to be as specific as possible about the options.  If someone asks you what’s better, a Chevy or a Mercedes, you can’t just say Mercedes.  They may be asking about a late-model Corvette versus a clapped-out 1970s Mercedes wagon.  Or that Mercedes might be pristinely restored, whereas the Corvette’s been in an accident.  You get the idea.  If you were choosing between cars, you’d want to know the age, the mileage, and what the person wanted the car for – performance, family travel, reliability, etc.

When you’re choosing between SQL Server storage options, here’s what you need to know about each of your choices:

  • The number of drives – sounds simple, but try to get that information out of your SAN administrator in a shared-drive environment.  Then, just for laughs, ask them, “If I were going to take my drives out of the shared pool and switch to my own dedicated spindles, how many spindles would I get?”  It’s probably going to be a pretty small number, and you’ll need to know this before you decide to switch.
  • The RAID level – some SAN vendors will say that the RAID level doesn’t matter anymore, and in huge shared pools, that’s vaguely true.  A 30-drive RAID 5 and a 30-drive RAID 10 are going to overwhelm your HBAs anyway.  However, in dedicated drive setups, we’re probably talking about much lower numbers of drives, and then it starts to matter.
  • The peak load windows for other apps on those drives – if you’re sharing drives with a couple of tiny servers, and you’re the biggest load in the group, then this probably isn’t an issue.  On the other hand, if you’re sharing drives with a large ERP system with hundreds of users who all log in around 8am, you’re going to want to plan around that.
  • The backup method – if you’re using SAN snapshots, you want as many spindles as possible to make your backups less invasive.  I’m not saying SAN snapshot backups are invasive, but if you suddenly present that snapshot to another server on the SAN, your load on those hard drives just doubled.  That changes your SAN bottleneck, and for this, you’ll want shared spindles.
  • The current performance bottleneck – we could write a whole training course on this, and we have.  To condense it into a single bullet point: you need to find out where your performance bottleneck is, because it may not be your hard drives.  You want to focus on your bottleneck first, eliminate it, and then move on to the next bottleneck.  If the current one is the drive arrays, then even moving to dedicated spindles may not be the answer if you don’t get enough of them (see our first bullet point, and the sample query after this list).
  • And there’s more – like the amount of time you want to spend managing the SAN, whether you want to use SQL Server partitioning, and so on – but these are a good start.
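
If you want a quick read on whether the drives are even a suspect, SQL Server keeps cumulative I/O statistics per database file. Here’s a minimal sketch – not a tuning tool, just a starting point – showing average read and write stalls per file since the last restart. Sustained double-digit read stalls are usually where storage earns a closer look:

    -- Average I/O stall per database file since the last SQL Server restart.
    -- Works on SQL Server 2005 and later.
    SELECT  DB_NAME(vfs.database_id) AS database_name,
            mf.physical_name,
            vfs.num_of_reads,
            vfs.num_of_writes,
            CASE WHEN vfs.num_of_reads = 0 THEN 0
                 ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_stall_ms,
            CASE WHEN vfs.num_of_writes = 0 THEN 0
                 ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_stall_ms
    FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN    sys.master_files AS mf
            ON  vfs.database_id = mf.database_id
            AND vfs.file_id = mf.file_id
    ORDER BY avg_read_stall_ms DESC;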

Now that we know the questions, let’s look at some of the answers for this one scenario.

The number of drives – right now you’ve got 30 drives in a single pool for a cluster.  If you carved it up into dedicated drives, you might do something like this:

  • 12 drives in a RAID 10 for data
  • 10 drives in a RAID 10 for logs
  • 8 drives in a RAID 10 for TempDB

I’m just pulling these numbers out of the air to illustrate that you’ve got a lot of drives, and you can play around with the config here.  To make a good design decision, you’d want to know the read/write mix and the activity on TempDB.  Bottom line, you’ve got enough drives that you could get real performance gains with dedicated drives instead of shared ones.  If you were running, say, 10 drives altogether in the pool, then the decision changes.
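
That read/write mix, by the way, is something you can measure rather than guess. Here’s a rough sketch using the same file-stats view as above – the numbers are cumulative since the last restart, so sample them during a representative business period:

    -- Read/write byte mix per database (TempDB included), cumulative since restart.
    SELECT  DB_NAME(database_id) AS database_name,
            SUM(num_of_bytes_read) AS total_bytes_read,
            SUM(num_of_bytes_written) AS total_bytes_written,
            CAST(100.0 * SUM(num_of_bytes_read)
                 / NULLIF(SUM(num_of_bytes_read) + SUM(num_of_bytes_written), 0)
                 AS DECIMAL(5,1)) AS read_pct
    FROM    sys.dm_io_virtual_file_stats(NULL, NULL)
    GROUP BY database_id
    ORDER BY SUM(num_of_bytes_read) + SUM(num_of_bytes_written) DESC;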

The backup method – NetApp’s snapshot solution has some great SQL Server integration, so just generally, if you’re using NetApp, I would use a shared drive config.  You may not be using their snapshots now, but as your databases get bigger, it’s nice to have the snapshot option available.  If your backup window gets out of control, call your NetApp guys about getting a demo.
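
You don’t have to wait until the backup window feels out of control, either – msdb records every backup, so you can trend it. A quick sanity-check sketch (the history only goes back as far as your msdb cleanup jobs allow):

    -- Full backup durations and sizes over the last 30 days, newest first.
    SELECT   database_name,
             backup_start_date,
             DATEDIFF(MINUTE, backup_start_date, backup_finish_date) AS duration_minutes,
             backup_size / 1048576 AS backup_size_mb
    FROM     msdb.dbo.backupset
    WHERE    type = 'D'  -- 'D' = full database backup
      AND    backup_finish_date >= DATEADD(DAY, -30, GETDATE())
    ORDER BY backup_start_date DESC;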

The current performance bottleneck – it takes a lot of work to figure this out, but the key word in the question was “blades”.  I love blades, but I’ve seen a few implementations where people have shoved a bunch of SQL Servers in a single blade chassis, only put two HBAs in each blade, and only used two fiber cables to connect the entire blade chassis to the SAN.  That means all of the servers are choking on a small amount of bandwidth.

When we look at these answers together, I’d say that the shared drives are not holding back this server’s performance.  Before changing from shared to dedicated drives, I’d add HBAs in each SQL Server, enable & test multipathing software, and connect the blade chassis to the SAN infrastructure with as many fiber cables as feasible.  Otherwise, changing from shared to dedicated spindles for this setup won’t make a performance difference.

At the same time, I would only make these changes if you’re seeing disk performance bottlenecks on the SQL Server.  (This reader mentioned that he used Quest Spotlight, and it’s easy to see this in Spotlight.)
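
If you don’t have Spotlight handy, SQL Server’s wait statistics tell a similar story. A minimal sketch: PAGEIOLATCH waits are sessions waiting for data pages to come off disk, and WRITELOG is transaction log flushes. If these dominate your top waits, storage is worth investigating:

    -- Cumulative I/O-related waits since the last restart (or a manual clear).
    -- Compare these against your overall top waits before blaming storage.
    SELECT   wait_type,
             waiting_tasks_count,
             wait_time_ms
    FROM     sys.dm_os_wait_stats
    WHERE    wait_type LIKE 'PAGEIOLATCH%'
        OR   wait_type IN ('WRITELOG', 'IO_COMPLETION', 'ASYNC_IO_COMPLETION')
    ORDER BY wait_time_ms DESC;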

Does that mean NetApp’s original recommendation was right?  Not necessarily, but here’s why they do it: most people have setups just like this.  People will throw hundreds of thousands of dollars into a storage controller and a bunch of drive enclosures, only to kill the performance by not giving it any bandwidth.  The more time and effort you put into the setup around the edges, the more it’ll pay off in performance.


36 Comments

  • When we shopped for a new SAN last year, we looked at NetApp, and they said this is the way they do it. EMC uses smaller numbers of drives to create their pools, and they have tiering, where faster, more expensive drives are for tier 1 and slower drives with no RAID are for lower tiers of storage.

    EMC has a 200-page document where they say that you can run different apps on the same drives as SQL. The only caveat is to mix high-I/O and low-I/O apps together.

  • Alen – yep, every SAN vendor does tiering. That’s not anything special, despite what the sales folks tell you.

    About EMC’s 200 page document – I’m not saying anything about EMC specifically, but every SAN vendor has reams of paper describing how their system and their standard config is the Best Thing Since Sliced Bread(TM). Take that with a grain of salt.

  • I like the large volume strategy. A lot of the confusion comes from the old-school books that recommended separate drives for everything. The same books recommend RAID 1 for t-logs. Nowadays, SQLCAT recommends RAID 5 for data if you are on a budget, but always use 0+1 for logs.
    Unless you can load test your app, get the test usage patterns right, and confirm that the usage pattern will not change over time, you are basically guessing when you split up physical disks. It is pretty painful to fix if you get it wrong.
    Now, separate physical drives can help in a disaster recovery situation. However, a lot of enterprise SANs can lose a whole enclosure of drives before the OS starts losing drives these days.

  • Jason – can you elaborate a little more on this statement: “It is pretty painful to fix if you get it wrong”? In the SANs I’ve worked with (IBM, NetApp, EMC), it’s really easy to add drives to a dedicated disk setup – in fact, you can do it online, without the host even knowing. Of course, this assumes that you start with fewer disks than you need – always a safe way to get started, adding disks as performance merits. Nobody wants to overspend and then have spindles sitting around idle. But I want to make sure we’re both on the same page, and that I’m not misunderstanding your comment.

  • Here is an extreme example to illustrate the idea. Say you set it up like this because you heard it was best to separate everything:
    4 drives 0+1 for data
    4 drives 0+1 for indexes
    4 drives 0+1 for tempDB
    4 drives 0+1 for LOB data
    4 drives 0+1 for logs

    Fast forward six months and 100 GB later: the data drive has double-digit queues while everything else is idle. While a 20-drive 0+1 array would probably support the app, it is hard to get there with the existing hardware without a major outage.

    I have actually seen something similar, except everything was RAID 1. :O

  • Jason – in this post (and indeed, in my entire site) I’ve never advocated separating indexes and LOB data onto a separate array from the SQL data on such a small number of drives. We would both agree that would be a problem, and if that information is out on the web anywhere, I definitely want to help find and eradicate that assumption. Have you seen any SQL Server sites that recommend splitting data, indexes and LOB data onto separate arrays for a new application? That would definitely lead a DBA astray!

    Now, back to your 20-drive example: a better recommendation would be to combine data, indexes and LOB data and use those 12 drives together. (Ideally, maybe use 8-10 drives for the data, and more for the logs as well.)

    You’re right in saying that a complete extreme would be bad, but the complete extreme doesn’t justify hitting the other extreme. Dedicated spindles are not always right, but that doesn’t mean that shared spindles are always right.

    • Hudson Hawk
      June 10, 2010 4:27 pm

      Brent, I’m a developer playing the DBA role for the time being because we’re having a hard time finding a DBA with the skill set we need (somebody with the experience to help with hardware specs, including SAN for a VLDB). Anyway, shortly after reading these comments, when I was looking for more SAN information, I came across an article suggesting exactly what you want to eradicate: http://www.extremeexperts.com/sql/articles/StoragePractice2.aspx (search for LOB).

      • Yes, that’s the downside of the Internet. There’s plenty of information out there, but it’s up to you, the reader, to figure out which source you should trust.


  • “Now, back to your 20-drive example: a better recommendation would be to combine data, indexes and LOB data and use those 12 drives together. (Ideally, maybe use 8-10 drives for the data, and more for the logs as well.)”

    I am not saying shared is always right either. Just most of the time 🙂 The problem is how you determine whether data gets 8 or 10 drives and logs get 10 or 12 drives. If you knew for sure what the sequential log-write I/O load vs. the random 80/20 read/write load would be, then you could make this decision and it would perform better. Most of the time this is a guess.

    There are lots of well-known sites that make blanket recommendations of putting tempdb or indexes on separate drives. For some apps, putting tempdb on its own drives could be like throwing them out the window. I won’t call them out, but Google finds them.

  • For everybody other than Jason who might be reading this – Jason and I agree on almost everything. We disagree on a few small points, and it might look like we’re fighting like bulldogs, but it’s only because we love to talk tech. I read his blog, follow his Twitters, and hey, he’s a Texan, so even when we disagree, at least we’re drinking the same beers and eating steak.

    Now, back to the point – I agree 100%. Shared is right most of the time for exactly the right reasons that you’re giving here, plus more – it’s easier to manage (assuming you’re using a good SAN), easier to grow in small increments, easier on the wallet, on and on. There are still situations where I love dedicated drives, but it’s the minority of setups, not the majority.

    • In a shared drive configuration (data and log on the same volume), what about tempdb? Should it be on the same volume? My main concern would be SAN snapshots. Tempdb does not need to be part of the snapshot, right? If not, should it be on its own volume?

      Thanks!!

      • Mike – it all depends on the SAN make/model, SQL Server loads, HA/DR needs, and more. It usually takes me about a day of working with a client to come up with the right SAN configuration for their SQL Server. If you’re interested in a consulting engagement, feel free to email me at help@brentozar.com. Thanks!

        • Thanks! I guess my main question was, do the temp database files need to be part of the snapshot for the backup/snapshot to be valid?

          If not, then we could put them on another volume.

          Thanks!

          • Mike – again, it depends on your SAN make/model and HA/DR needs. If you want to be able to quickly recover from disaster without figuring out where TempDB should go or reallocating new space for it, then you may want them on the same LUNs. In most cases, you want it on separate LUNs, but it does depend.
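
            If you do end up putting TempDB on its own LUN, the move itself is simple. A purely illustrative sketch – the T:\tempdb path is a made-up example, and tempdev/templog are the default logical file names (check yours with sp_helpfile while connected to TempDB):

                -- Point TempDB's files at the new volume. The change takes effect
                -- when the SQL Server service restarts and recreates TempDB there.
                ALTER DATABASE tempdb
                MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf');

                ALTER DATABASE tempdb
                MODIFY FILE (NAME = templog, FILENAME = 'T:\tempdb\templog.ldf');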

  • Good post and good discussion! I still enjoy the look on the storage guy’s face when I start asking about the SAN configuration instead of just saying I need 500 GB like everyone else.

  • Thanks for the post. I remember back to what you said about SAN configuration at SQLBits in November.

    We are currently purchasing an HP 2312FC SAN with 24 450 GB drives, and I have been speaking with our people about installation. They have proposed that it should be populated with disks, left to level, and then split into different “Virtual RAID 1” and “Virtual RAID 5” sections. My experience in the past led me to believe that we should set up a series of separate LUNs to ensure that TempDB is on a different set of spindles from a data file, or a log file, etc. I am told that this doesn’t matter, since the SAN will provide the redundancy by splitting data over as many disks as it likes, and that there are no guarantees about what may be stored on the same physical disk as anything else.

    Should I trust the SAN to do this, or am I being misguided, given that the SAN is being used purely for SQL Server?

    Many, many thanks for any suggestions or help that you can offer.

  • Hi, Nigel! Good question. Jason Massie and I did a “steel cage blogmatch” on this topic here:

    https://www.brentozar.com/archive/2008/09/steel-cage-blogmatch-how-to-configure-a-san/

    The short story is that the only time I want to separate out my LUNs and arrays is when I have a full-time, dedicated SAN administrator monitoring the performance. The problem with doing fine-grained configurations manually is that if you get them wrong, you won’t know unless you’re constantly monitoring and reallocating resources. Most of the time, if you don’t have the time, then you’re better off using a shared-spindles environment like what you’re describing. Letting the SAN manage it is better than nobody managing it at all.

  • Hi,

    We have an MS SQL 2008 server running on a Windows 2003 R2 x86 server (VMware) connected to an IBM DS3400.

    We have been running the server on a RAID 5 array and have had huge problems.

    We now have an expansion box for this SAN with 12 disks (one hot spare).

    What is your best suggestion for creating new arrays for the SQL Server?

    I’m thinking of creating a 6-disk RAID 10 and running log/data on it, then adding 4 more disks if I need more performance later on.

    • Hi, David. It all depends on your server’s load patterns and the specifics of the SAN gear. It usually takes me a day of work to find the right setup for a server and its storage. If you’d like to work together on a consulting project, shoot me an email at Brent@SQLskills.com. Thanks!

  • Miguel Angel Sanchez
    November 7, 2011 5:32 pm

    What exactly is a spindle? What does this mean?

    Thank you in advance

  • Miguel Angel Sanchez
    November 8, 2011 2:09 pm

    Thank you very much; as always, you rock.

    Another quick question: what could be considered normal for the PLE counter and for compilations? I can see around 148 compilations in my system.

    Thank you

  • Apologies for coming very late to the party on this (as always) excellent post, but we are just in the process of migrating from an HP EVA to a NetApp FAS3240. I remember pretty much following the Microsoft rule book for partition alignment and allocation unit size when I wrote our company configuration standards for SQL Server on HP arrays. I notice that NetApp guidelines suggest there is no discernible advantage in using 64KB over 4KB allocation units. I can understand that this makes sense because the underlying WAFL file system operates with 4KB blocks, but just wondered how you normally recommend configuring NetApp arrays?

    We are going to be running baseline tests with different allocation sizes and will be happy to share the results.

    Many thanks for another scintillating blog series.

    donaldc

    • DonaldC – thanks, glad you liked the post! For configuration, I recommend following Ronald Reagan’s SAN configuration tip: “Trust, but verify.” Test it with different allocation sizes and your own load patterns, and find what works best for you. There’s no one universal guideline that works for everyone all the time. Hope that helps!

      • Thank you – that does indeed help and is what we plan to do. Apologies for the earlier shameless flattery, but it was well meant!

  • Hi Brent. Terrific, very helpful post. We are setting up a virtual environment that will have 11 VMs on the host. The SAN has two RAID 5 arrays of seven 300 GB disks each. The 11 VMs will be of every possible variety – file servers, application servers, an Exchange server, and so on – plus one will be a little SQL Server (one 25 GB heavily used OLTP database). The OS, data, logs, etc. can be on one array in the SAN or divided between the two.

    Even if the drives are blazing fast, we don’t know yet what all the various VMs will be when they grow up, so suppose several are read/write intensive and SQL has to wait in line behind them. Methinks this is a disaster waiting to happen.

    Have you ever seen this kind of setup? Would you be screaming for a dedicated RAID array for SQL, added to or carved out of the SAN?

    Thanks so very much. My company has agreed to let me purchase your SQL on VM training vids.

  • Brent, we migrated from an HP XP SAN to a 3PAR, similar to what Donald C. was mentioning. My question is: since the 3PAR is similar to a NetApp (in the single-disk-pool configuration aspect), is there any difference between allocating a single 2 TB disk to a server for data (the data being one database consisting of 8 data files of 200 GB each), not including the log, vs. 8 individual drives with one data file each?

    • John – this is a little beyond something I can address quickly in a blog post comment. There can be a lot of tricky answers here around multipathing. If you’d like custom consulting help, shoot me an email at help@brentozar.com. Thanks!

  • Nice write-up. I have recently set up a new SQL install on our SAN (EqualLogic 6100). This is for a new data warehouse initiative, so we had limited specs from the vendor on what is needed.
    Our EqualLogic setup is 24 SAS drives in a RAID 50 config: one dedicated LUN for SQL that has 3 VHDXs (1 for O/S, 1 for logs, 1 for data). It has been running great. I am not a fan of chopping the SAN into different RAIDs 🙂

  • We’ve just added some new disk shelves to our NetApp filers and created a second aggregate. Are we wise to move our ‘Logs’ LUN to the new aggregate and keep our ‘Data’ LUN on the first one?

    Cheers! 🙂

    • Richard – rather than give you a fish, I’d rather teach you how to fish. Have you hit up NetApp’s best practices documents at https://www.brentozar.com/go/san to see what they say?

      • Damn! I must have run out of free BrentO advice passes!! Haha! 😉

        Thanks. Yes. Sorry. Mired in investigations as well as a hundred other things! Forgot about that guide. It implies yes, so I will give it a whirl.

        Thank you once again! 🙂

  • Brent – thanks for all the replies and explanations. It’s great to see such good questions being asked… and replied to…

    Anyway, have you tried or implemented anything with SQL 2014’s in-memory capabilities?
    I have set up a few servers in-house but haven’t really gotten a breather to work on it.

    • Rohit – thanks, glad you like it.

      I haven’t found a scenario yet where the in-memory OLTP (Hekaton) features of SQL Server 2014 were the most cost-effective and practical solution.

  • Dear Brent,
    Excellent post. I was given the responsibility of setting up the hardware specs for a new SharePoint server using SQL Server and RBS (FILESTREAM). We are using HP 3PAR, and the existing server uses a shared SAN where all of the files for the database were placed on one LUN. Is this the recommended route, or do we need to segregate the files (database, logs, tempdb, RBS) into separate LUNs?

