A few years ago, Erika and I went to the VW dealership to trade in her old Jetta and get a new one.  The choice of a Jetta was already a foregone conclusion – she loved VWs at the time – and it was just a matter of picking out colors and options.

Bringing the Jetta Home

We took a test drive of the base model, and then the sales guy asked, “Do you wanna also test drive the 1.8L Turbo version?”

Me (immediately): “No.”

Erika: “Sure, why not?”

Me: “You were just saying how this one is so much peppier than yours.  You – and by you I mean we – don’t need a turbo.”

Erika: “Come on, let’s try it.”

As soon as she accelerated onto a highway on-ramp, felt the power surge, and heard the turbo whistle, that was the end of that.  Suddenly, I felt like a SAN administrator and Erika was the DBA.  “No no no,” I was saying, “You don’t need that.  You’re never going to go that fast, and I know because you yell at me when I take highway on-ramps that fast.  Let’s not spend the extra money if we’re not getting extra capacity.  Besides, let’s bring it back to numbers – let’s measure how fast the base version is to 60mph, and then measure the turbo version.”

She couldn’t hear me because I was in the back seat and she was negotiating price with the sales guy.  The seat-of-the-pants feeling of speed was enough for her, and often it’s good enough for us DBAs too.

Measuring Your SAN the Easy Way

I’ve written about how to test your SAN’s performance with SQLIO, but I’ll be honest with you: that’s the hard way.  It takes knowledge and time, and you only have one of those.  (I’ll be charitable and not tell you which one.)

Instead, let’s get seat-of-the-pants numbers for your storage.  Go download the portable edition of CrystalDiskMark (NOT CrystalDiskInfo) and put it on a network share.  Run it on an idle server (not your live SQL Server, because it’ll slow things down while it runs).  It’ll look like this:


Across the top, there are three dropdowns:

  • 5 – the number of test passes you want to run.  If you want a fast seat-of-the-pants guess, do 1, but keep in mind the results can vary wildly between passes if something else happens to be going on in the SAN.
  • 4000MB – the test file size.  I like using 4000MB to reduce the chances that I’m just hitting cache and getting artificially fast numbers.  Smaller test file sizes may look fast but don’t really reflect how a large database will work.
  • E: – the drive letter to test.  Keep an eye on the free space there – you don’t want to create a test file that can run your server out of drive space.

After making your choices, click the All button.  While it runs, here’s an explanation of each row’s results:

  • Seq – long, sequential operations. For SQL Server, this is somewhat akin to doing backups or doing table scans of perfectly defragmented data, like a data warehouse.
  • 512K – random large operations one at a time.  This doesn’t really match up to how SQL Server works.
  • 4K – random tiny operations one at a time.  This is somewhat akin to a lightly loaded OLTP server.
  • 4K QD32 – random tiny operations, but many done at a time.  This is somewhat akin to an active OLTP server.

The more astute readers (and by that I mean you, you good-looking charmer) will notice that 4K operations don’t really measure SQL Server’s IO.  SQL Server stores stuff on disk in 8KB pages and, zooming out a little, in groups of eight 8KB pages (64KB extents).  We’re not looking to get an exact representation of SQL Server’s IO patterns here – we’re just trying to get a fast, one-button-click-easy measurement of how storage performs.  Usually I find that during the first round of storage tests, it’s not performing well, period – and it doesn’t make sense to bring SQL Server into the game just yet.
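Since CrystalDiskMark reports MB/sec while storage admins often quote IOPS, it helps to remember the two are tied together by block size.  Here’s a quick back-of-the-napkin conversion – just arithmetic on my part, not anything the tool itself outputs:

```python
def iops_from_throughput(mb_per_sec, block_kb):
    """IOPS = throughput / block size.  CrystalDiskMark-style MB means
    10^6 bytes; block sizes here are binary KB (1024 bytes)."""
    return (mb_per_sec * 1_000_000) / (block_kb * 1024)

# 100 MB/sec of random 4K I/O is a lot of individual operations...
print(round(iops_from_throughput(100, 4)))    # ~24,414 IOPS
# ...but the same 100 MB/sec in 64K extents is far fewer.
print(round(iops_from_throughput(100, 64)))   # ~1,526 IOPS
```

That’s why a drive can post big sequential MB/sec numbers and still fall over on tiny random operations – the operation count is what hurts.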

Interpreting CrystalDiskMark Results

For magnetic hard drives (individually or in RAID arrays), sequential operations (the top row) are often 10x-100x the rest of the results.  This metric is often limited by how the computer is connected to the storage, and you can get those numbers from the bandwidth rates in Kendra Little’s “How Big Is Your Pipe?” bandwidth reference poster.  Keep in mind that the MB/sec numbers on the poster are theoretical limits, and in practice, we’ve got 5%-20% overhead involved.

For solid state drives, the difference between sequential and random operations isn’t always as dramatic, but it can still be 2-3x.  If there’s no difference, then I’d look even closer at the connectivity method – the SSDs are probably outperforming the connection (like 3Gb SATA, 1Gb iSCSI, or 2/4Gb FC).
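To sanity-check whether your sequential numbers are bumping into the pipe, divide the measured throughput by the connection’s rough ceiling.  The ceilings below are approximate theoretical rates – assumptions on my part, since encoding and protocol overhead vary by connection type – and the 5%-20% overhead figure is the same rule of thumb from above:

```python
# Rough theoretical payload ceilings in MB/sec for common connection types.
# (Approximate: real-world numbers vary with encoding and protocol overhead.)
LINK_CEILING_MB_S = {
    "3Gb SATA": 300,
    "6Gb SAS": 600,
    "1Gb iSCSI": 125,
    "4Gb FC": 400,
    "8Gb FC": 800,
}

def link_saturation(measured_mb_s, link):
    """Return what fraction of the link's theoretical ceiling we're hitting.
    With 5%-20% overhead, ~0.80-0.95 is about as good as it gets."""
    return measured_mb_s / LINK_CEILING_MB_S[link]

frac = link_saturation(118, "8Gb FC")
print(f"{frac:.0%} of the pipe")  # way under the ceiling
```

If you’re at 80%+ of the ceiling, look at upgrading the connection before blaming the drives; if you’re way under it, the bottleneck is probably the storage itself.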

So what’s a good or bad number?  If your server boots from a mirrored pair of local drives, and stores its SQL Server data somewhere else (like on a larger array or on a SAN), then test the local mirrored pair too.  Compare the numbers for where you’re storing the valuable, high-performance data to where you’re storing the OS, and you might be surprised.  Often I find that the OS’s drives perform even better because we just haven’t configured and tuned our storage.
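If you’re on a locked-down box where you can’t even copy CrystalDiskMark over, a crude script can still give you a seat-of-the-pants sequential number.  This Python sketch is nowhere near as rigorous as the real tool – the path, file size, and block size are just assumptions to adjust, and small files will mostly measure cache rather than disk:

```python
import os
import time

def rough_sequential_test(path, file_size_mb=64, block_kb=1024):
    """Time one sequential write pass and one sequential read pass.
    Returns (write_mb_per_sec, read_mb_per_sec) -- rough numbers only:
    small test files mostly measure cache, not the drives underneath."""
    block = os.urandom(block_kb * 1024)          # one reusable chunk of data
    blocks = (file_size_mb * 1024) // block_kb

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())                     # push the writes to the device
    write_secs = time.perf_counter() - start

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_secs = time.perf_counter() - start

    os.remove(path)                              # clean up the test file
    return file_size_mb / write_secs, file_size_mb / read_secs

if __name__ == "__main__":
    w, r = rough_sequential_test("cdm_scratch.tmp")
    print(f"write: {w:.0f} MB/s, read: {r:.0f} MB/s")
```

Run it once against the OS drive and once against the data drive, same as above, and compare the two.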

Keep these original CrystalDiskMark screenshots in a shared folder for the group to access, and then challenge everyone involved to do better.  Simple tuning techniques like tweaking the read/write bias on the RAID controller’s cache, right-sizing the NTFS allocation units, and working with different stripe sizes can usually yield double the storage performance without spending a dime.
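For the NTFS allocation unit piece, you can check where you’re starting from with standard Windows commands – the drive letter here is just an example, and the format line wipes the volume, so treat this as a sketch for an empty test drive only:

```shell
:: Show the current allocation unit size -- look for "Bytes Per Cluster".
:: 4096 means 4K; 65536 is the 64K commonly recommended for SQL Server data drives.
fsutil fsinfo ntfsinfo E:

:: Reformat with 64K allocation units (DESTROYS everything on E:,
:: so only do this on an empty test volume):
format E: /FS:NTFS /A:64K /Q
```

Rerun CrystalDiskMark after each tweak and keep the before/after screenshots side by side.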

Want to learn more? We’ve got video training. Our VMware, SANs, and Hardware for SQL Server DBAs Training Video is a 5-hour training video series explaining how to buy the right hardware, configure it, and set up SQL Server for the best performance and reliability. Here’s a preview:

Buy it now.

  1. So did you get the turbo Jetta?

    Thanks for sharing this performance-measuring technique.

    • The key to a successful relationship is … actually, I have no idea what it is, but it’s on the same keyring as the car your girlfriend wants. We got the Turbo.

  2. Brent, thanks for the helpful articles – I really appreciate them. I just ran this on the data drive of a new Win2K8R2, SQL 2008 R2 OLTP test cluster. Here are the numbers. I think they look OK, but I’m not 100% sure. Thoughts?

    Thanks in advance.

    seq 211.3 244.5
    512K 214.6 192.7
    4K 5.113 13.77
    QD32 366.9 142.8

    • Hi, Bryan. Something about that seems off – you shouldn’t usually see higher speeds for 4K QD32 reads than you see on sequential reads. Make sure you follow the instructions in the post carefully with the number of test passes and the size of the test file. If the results repeat, I’d be interested to hear more about the server and the storage involved.

      • I ran it a few more times with 5 passes at 4000MB and hit a range of 350 to 400MB/s. Sequential reads were 50-100MB/s faster.

        We are using HP ProLiant servers and an IBM SAN. Looking in Device Manager, I can see that we have QLogic fiber HBAs with a driver date of 11/17/2010, which seems kind of old. I have a meeting later today with the storage admins and will ask more questions :-)



        • Actually that isn’t too bad – I too use QLE2460s, and the driver date is 12/2/2009. No issues to report – they are performing as expected. Do you have the SANsurfer HBA Manager installed? If so, you can glean some pretty good performance-related information from it. Since I’m tied into an EMC, I use PowerPath as well.

          • Hi Allen, SANsurfer is not installed. We have an IBM DS5300 with 300GB 15k disks. Here is what our storage admin has to say about the configuration. This is from an e-mail that was sent to me.

            Test has 13 raid-5 raid groups of 7 drives each.

            Prod has 22 raid-5 raid groups of 7 drives each

            Each raid-5 raid group is presented to the SVC as a lun. Then the SVC combines all of these luns into a managed disk group (or a pool in version 6) which then vdisk are “carved” out for each host. The vdisk is striped across multiple luns in a managed disk group.

            Hard to picture, but the disk that the host “sees” is actually striped twice (once by the SVC and once by the RAID-5 raid group on the actual disk array).

            I don’t know if that is good or bad for database storage.

          • Bryan – sorry I was out for a while, and I’m rereading this, but I’m not sure if there’s a question in here or if you’re just sharing knowledge?

  3. This is a great little tool! I’ve already run some tests with it.

    I’ve used your SQLIO tutorial to test my servers / SAN before.

    Do you have any thoughts or guidelines on how to compare / contrast the results from both methods?

    • Steven – I don’t usually compare/contrast the results from the two tools. I use CrystalDiskMark as a fast “easy button” for people to do quick throughput testing, and then if time allows, I use SQLIO to do much more in-depth testing.

  4. I actually took your SQLIO post and automated it a bit more, putting a nice SSRS front-end on it so I can easily identify saturation and things of that nature, but I’m all for a tool that can get me some quick numbers as well. Thanks, Brent.

  5. I have an 11 year old Turbo Jetta. It makes my wife’s Honda Accord seem like a golf cart or a BlueArc SAN.

  6. How useful is this tool for testing I/O performance on virtual machines with all their storage on virtual disks on the SAN?

    • Jeff,

      IOPS are IOPS regardless of whether the storage is local or a SAN. If you drop a file on to a drive and read/write to it, the speed at which you can read/write is what you are testing regardless. The one kicker with a SAN – and Brent alluded to it – is the cache. You want the file to be big enough to know you aren’t just hitting the cache, but rather are hitting the disks.

    • Very! It gives you insight into what the disk performance is like for that guest with its given storage configuration. Now, whether or not your virtual disk configuration is what’s slowing you down is a different story – you’d need to investigate to confirm how much of a difference that was making.

  7. I really thought you were going to post some numbers here but it turns out you’re just a tease :]

    I’ve been considering starting a “san users anonymous” since these vendors hate it when people talk actual numbers. Maybe they have a point but the response shouldn’t be to shut people up, it should be to help them get the best performance.

    For VMware there is a beta tool, “IO Analyzer,” that people are posting all kinds of results for here:

    And for the question, “Does this matter in VMs?” I would say it matters even more, because that is another layer where an ill-advised setting could be holding you back.

    • Dustin – yep, ideally there’d be some kind of Geekbench-style repository for storage data. When I was a SAN admin, I found that to be extremely difficult, though, because it depends on so many variables. There’s the number of drives, the make/model of drives, the RAID controller or storage processor, how the storage is connected, the stripe size, the NTFS allocation unit size, multipathing software, and of course version numbers for everything involved. It’s hard to compare apples to apples with complete strangers in the storage world.

      • I would agree that while there are a lot of variables, almost all of them have metrics and theoretical expectations – from the drives, controllers, ports, cables, switches, HBAs, etc. A baseline has to be established by collecting and analyzing the theoretical capabilities, taking into account the configuration. I spent nearly three days reading up on every piece of literature I could find on my hardware before I even started to run tests, because the tests mean nothing without some sort of expectations to match the results up against.

        • Allen – yep, but to share benchmarks publicly, you’d have to include all of the variables. Otherwise, it just dumbs down to, “I have storage and it can read 400MB/sec.” While that *is* useful to you, it’s not terribly useful to the public.

          • While I share what I’ve gathered, I mostly do that so I can be corrected in the event I interpreted something wrong. This was my first ever attempt at using SQLIO, using your post as a guideline (I believe you saw it, but so others know what I’m talking about):


            I personally do that because I’m a numbers/math person and feel that’s the only way to make sense of the output using these sorts of tools. Knowing I’m getting so many IOPS or throughput does me absolutely no good personally unless I can confirm or deny that it’s meeting the theoretical expectations. If I’m getting 400MB/sec, but the theoretical throughput is 600MB/sec, I know something isn’t right and will dig until I find out why I have a gap in expectations and results.

  8. Something worth mentioning is that using CrystalDiskMark in this manner only tests the maximum throughput of your SAN. If you’re using iSCSI, you may only see a maximum of 165MB/s or so due to the limits of 1Gb Ethernet.
    That doesn’t necessarily mean your SAN is slow, though. If you’re using MPIO and have lots of fast disks, you can still serve a very large number of IOPs.


  10. I use CrystalDiskMark quite a bit. In my experience, the main numbers you want to look at / optimize for are sequential reads for table/index scans, and something in between the 512K and 4K random reads for the rest of an OLTP workload.

    Although there are exceptions of course, most apps running in SQL Server don’t seem to generate a queue depth of anywhere near 32 on a single LUN, so I prefer the QD=1 numbers as a first-cut guide.

    BTW, there’s a text box at the bottom of the form that you can use to describe the test environment/date, etc — that way you can capture that info in your screenshots, rather than relying on file names.

  11. Okay, Correct me if I’m wrong or if my question makes no sense.

    So Linchi Shea blogged about the importance of fewer and defragmented VLFs for transaction log files (especially for dbs that support a lot of large bulk operations). Is that because we want the write behavior of the TLog file to be as close to the number reported by crystal disk mark as “Sequential” writes? (MB/s).

    As a follow up question. Can we use slow average write stalls (calculated from info in sys.dm_io_virtual_file_stats) as a symptom of too-many-VLFs?

    Maybe the questions don’t make sense. Any help or links are appreciated.

    • Michael – well, no, not really. The VLF thing has less to do with disk fragmentation and more to do with the way SQL Server manages virtual log files. We do want the write behavior of log files (and data files) to be as high as possible, but realistically, it’s pretty hard to get SQL Server to crank out the same kind of pure sequential throughput that we can get with synthetic tests. If we have exactly one database, and we’re not doing t-log backups as we’re loading, then it’s possible to get close. Problem is, multiple databases’ log files on a shared log file drive, plus transaction log backups going on at the same time, mean that we end up doing a lot of random access anyway.

      I wouldn’t directly draw a line between too many VLFs and slow write stalls. That’s totally possible – I just haven’t done that measurement. The two things are easy enough to measure independently – you can easily check VLF count with Dave Levy’s scripts, and if it’s high, then we shrink & regrow the log files to correct it. I measure storage stalls with sys.dm_io_virtual_file_stats, but if it’s slow (even for log files), VLFs aren’t the first place I look. I usually check for things like 4K NTFS allocation units, RAID 5, writing too much to storage, etc.



  14. Hello Brent, I am having trouble reading the results from CrystalDiskMark. I am using a NetApp for the SQL Server data files and a RAID 10 for the OS. I am running SQL Server 2008 R2 Enterprise x64 on Server 2008 R2 Enterprise (this is a cluster). I went through the bandwidth reference poster and I still don’t understand the results.

    Below are my Crystal Disk Mark results, are these good or bad?
    How do I compare the results with the bandwidth reference poster?

    CrystalDiskMark 3.0.2 (C) 2007-2013 hiyohiyo
    Crystal Dew World :
    * MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

    Sequential Read : 112.708 MB/s
    Sequential Write : 87.305 MB/s
    Random Read 512KB : 41.564 MB/s
    Random Write 512KB : 81.893 MB/s
    Random Read 4KB (QD=1) : 0.857 MB/s [ 209.2 IOPS]
    Random Write 4KB (QD=1) : 10.148 MB/s [ 2477.5 IOPS]
    Random Read 4KB (QD=32) : 27.839 MB/s [ 6796.7 IOPS]
    Random Write 4KB (QD=32) : 118.059 MB/s [ 28823.1 IOPS]

    Test : 4000 MB [I: 28.9% (578.4/2000.1 GB)] (x5)
    Date : 2013/05/10 7:52:15
    OS : Windows Server 2008 R2 Enterprise Edition (Full installation) SP1 [6.1 Build 7601] (x64)

    • Rick – howdy sir! The MB/s numbers are megabytes per second, and that compares to the megabytes per second number on the poster. When you compare those, what’s the fastest number that you’re getting in CrystalDiskMark, and does that correspond to a cable type that Kendra shows on the poster? Does that happen to match up to the cable type that you’re using?

      • Hello Brent, thank you for responding so quickly! The fastest number I see in CrystalDiskMark is 4K QD32 – 118.1 Write MB/s. That number does not correspond to the cable we are using, we use 8 Gb Fiber Channel cable.

        • Yep, that would be a sign that you’re probably not getting the performance you really want.

          • Is the perfmon counter “Disk Read Bytes/sec” and “Disk Write Bytes/sec” the same? Because I see the maximum number as 750 MB in the perfmon counter.

          • Rick – no, those can be different.

          • Hello Brent, I updated the HBA drivers and firmware, and now I see sequential reads at 205 MB/s. That is a huge difference; here is my question.

            Is 205 MB/s a good number if we are using a NetApp?

          • Rick – well, here’s probably the order I’d check:

            First, ask the NetApp sales staff (including their presales engineering) to see if that matches up with the model that they sold you along with the number of drives.

            Next, run the same tests against other drives – like the C drive on the server, and the C drive on your own laptop. Use those numbers for comparison.

            Finally, compare it to common desktop hard drives using Tom’s Hardware charts.

            That’ll give you a good idea of where you’re at relative to a single SATA drive.

          • Hi, I was wondering in what way the Perfmon counters are different than the CrystalDiskMark figures, as Rick was asking, because I was noticing the same thing here, but I can’t explain it. Is there any correlation between them? Thanks for any insights!

          • Bert – unfortunately, this isn’t something that I can answer. You may consider posting questions in a place like or Thanks!

  15. Hi Brent

    Couldn’t help noticing that you did a CrystalDiskMark test on the Samsung 840 Pro (512GB) in various RAID configs some time ago.
    We are getting a paltry 87MB/s for sequential writes for the same Samsung disk in a RAID 1 config vs 377MB/s without RAID.
    Would this be normal, or would it have your alarm bells going off?!

  16. The version linked above currently installs OpenCandy adware, as described in the EULA.

    A portable version without this can be found here:

  17. The link to CrystalDiskMark doesn’t seem to have a valid program. Do you have another link location for this software?

  18. Hello… can you please take a look at my results?
    Seq –> Read: 162.2 MB/s Write: 153.3 MB/s
    512K –> Read: 43.01 MB/s Write: 117 MB/s
    4K –> Read: 0.718 MB/s Write: 4.458 MB/s
    4K QD32 –> Read: 13.42 MB/s Write: 16.58 MB/s
    These are on a brand new EQ 4100x… 2 x 1Gb iSCSI… nothing is connected except the testing volumes.
    Dell told me that the SAN is running just fine, but I just can’t accept that this is the best 22 HDDs in RAID 50 can deliver…
    I tried to point them to these poor results from CrystalDiskMark and HDTune, but I hit a brick wall: they said those aren’t validated and only IOmeter is good…
    Take note that when IOmeter ran with 64 threads I saturated the network bandwidth, but when I reduced the threads to 1 I got around 30 MB/s… to me it seems bizarre.

  19. Ok, dumb question – I just saw your video, where you referenced downloading the portable version. I copied it to my SAN drive on my SQL Server (I was RDP’d into it), and it shows no disk drives.

  20. I googled the error, and it looks like it’s common that the program doesn’t work with certain storage devices.

  21. In case anyone’s interested.

    Dell Compellent SC4020 with 24 x 15K 300GB, 24 x 1TB 7200, 8 x 8Gb FC backend (2 x 8Gb FC to each server), Auto Storage Tiering, Dell Fast Track.

    9 runs of 4GB in CrystalDiskMark; each SAN controller has 16GB cache, so that may be affecting results.
    The test server is Hyper-V 2012 R2 (running on a 3-node Hyper-V 2012 R2 cluster). The test drive is a dynamic .vhdx of 100GB dedicated to the test. No other SAN activity is occurring at the time of these tests as it is pre-prod.

    Also, CSV cache has been increased from 0GB (default) to 16GB.

    CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
    Crystal Dew World :
    * MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]

    Sequential Read : 1979.379 MB/s
    Sequential Write : 238.530 MB/s
    Random Read 512KB : 1878.255 MB/s
    Random Write 512KB : 182.255 MB/s
    Random Read 4KB (QD=1) : 71.724 MB/s [ 17510.8 IOPS]
    Random Write 4KB (QD=1) : 6.002 MB/s [ 1465.4 IOPS]
    Random Read 4KB (QD=32) : 385.019 MB/s [ 93998.8 IOPS]
    Random Write 4KB (QD=32) : 29.038 MB/s [ 7089.5 IOPS]

    Test : 4000 MB [X: 0.1% (0.1/99.9 GB)] (x9)
    Date : 2014/11/02 16:39:33
    OS : Windows Server 2012 R2 Server Standard (full installation) [6.3 Build 9600] (x64)

    • Mike – that’s interesting, but I think the number of variables (VM, CSV cache, tiering) makes things a little unpredictable. I’m also rather disturbed by how low those random writes are – there’s something wrong there. I’ve seen much higher with Compellent gear, so I’d say it’s probably time to start troubleshooting. Thanks though!

  22. So I did a few runs of SQLIO just to compare.
    Hard to compare exactly but these are much better results.

    sqlio v1.5.SG
    using system counter for latency timings, 2539059 counts per second
    parameter file used: param.txt
    file C:\ClusterStorage\Volume2\Test\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
    2 threads writing for 120 secs to file C:\ClusterStorage\Volume2\Test\testfile.dat
    using 4KB random IOs
    enabling multiple I/Os per thread with 32 outstanding
    buffering set to not use file nor disk caches (as is SQL Server)
    using specified size: 50000 MB for file: C:\ClusterStorage\Volume2\Test\testfile.dat
    initialization done
    throughput metrics:
    IOs/sec: 36100.69
    MBs/sec: 141.01
    latency metrics:
    Min_Latency(ms): 0
    Avg_Latency(ms): 1
    Max_Latency(ms): 50
    ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
    %: 3 73 23 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

  23. Quick note: CrystalDiskMark doesn’t work with mount points.

  24. Sadly, CrystalDiskMark installs sh*tware.
    Be careful.
