It was a dark and stormy night…

and I was hunched over my trusty, battered MacBook Pro, sipping whiskey by the backlit keys when the email came in. I got worried as soon as I saw the sender, because Big Jimmy May is what they call a “Performance Consultant”. Before I even opened the email, I called my bookie to make sure my checks had all cleared. I’ve only got two kneecaps, and they’re already bad enough just from poundin’ the street looking for info. Thankfully, I was in the clear, so I read the email.

Jimmy said a fella by the name of Christian Bolton was knocking on doors looking for help with a particularly nasty job: exposing the dark, hidden secrets of SQL Server engine internals. To make matters worse, Christian wanted to combine two of my least favorite words: trouble and shooting. Jimmy would have taken the case himself, but he had a job he had to handle in Turkey, and you know how those Turkish jobs go.

Next thing you know, there’s a contract going around out there with my name on it – not to mention things like deadlines.  It’s a dirty business, I tell you.

The book authoring process stands in stark contrast to my normal blog entry process: instead of banging out a few ill-advised (but often spell-checked) paragraphs in half an hour and posting it for the world to see, I have technical editors and technical reviewers. They’re like blog commenters, only they have the authority to modify my work BEFORE you see it instead of AFTER. And of course, to see that work, you’re going to have to fork out some moolah.

I’m writing two chapters, covering storage and the Performance Dashboard Reports, and I wanted to ask you, dear reader – what would you want to see in a Storage chapter? Rather than showing 35 full-page diagrams of the Container Store’s goodies, I figured I’d cover these topics:

Types of Storage

There are several types of storage that affect the way we make architecture decisions and perform troubleshooting. I’ll cover the basics of each and what their ramifications are:

  • Locally Attached Storage
  • Storage Area Networks (and touch on multipathing)
  • Virtual Servers (how VMware and Hyper-V change storage designs)
  • Project Madison (scale-out, shared-nothing nodes)
  • Using SQLIO to test storage throughput

Things that Change Storage

  • Compression
  • Transparent Data Encryption
  • Partitioning (of objects like tables & indexes, not disk partitions)
  • Partition Alignment (that’s where the disk stuff comes in)

What We Store

  • Data
  • Full Text Indexes
  • Filestreams
  • Logs
  • TempDB

Any requests? I still have to arrange this stuff in a way that flows, and don’t be surprised if I left off something huge.  I came up with this list on a cruise ship, people.  Just be glad it doesn’t include “margarita on the rocks” or “cigars”.

Don’t look for a fast turnaround – the book won’t be out until early 2010. Like I said, this is a lot different than publishing a blog entry! Along the way, I’ll be blogging about the process so you can get an inside peek at how it works – things like the contract, my brilliant coauthors, the writing, the reviewers & editors, and so on.  That way, when you see it on the shelves of your local bookseller, you’ll be able to explain to the proprietor exactly why they should burn it.

So if you’ll excuse me, I need to go buy my bookie a Christmas present. Now that I know Jimmy May has my email address, I can’t be too careful.

5 Comments

  • Great post as usual. I only wish the book was coming out sooner as I need to learn about most of the topics listed, especially TempDB and Full Text Indexes.

    I assume you were referencing the old Peanuts bit:

    http://www.snoopy.cpilgrim.com/images/books/darkstrm.jpg

    Right?

    When you sell the movie rights to your SQL thriller, can I play the psycho killer “Dead Lock”?

  • Yep, you nailed it with the Snoopy reference. I can’t think about writing a book without thinking of Snoopy at the typewriter!

  • Hi Brent,

    Coming from a sys admin background in platforms and SAN, I have some experience tweaking performance on mid- to high-end arrays.

    The most bang-for-the-buck tweak, barring correcting any horrid misconfigurations, has been tuning spindle counts and stripe element/stripe sizes.

    Not many people outside of the upper-end SAN crowd have seen, first-hand, the performance implications of spindle and stripe modifications – usually because they’re the ones who have access to the SAN equipment to test these configurations.

    There is open debate on the effect of stripe sizing, and many sources of information provide conflicting info. The most useful document on disk/stripe tweaking I’ve found has been Microsoft’s Disk Performance whitepaper. MSFT’s SQL Server Predeployment IO best practices whitepaper, though less storage-specific than the first, also mentions good practices.

    For example, I was able to reduce a DW/DM load time by a factor of 7 (to 3 hrs) by following, and extensively testing, MSFT’s spindle and striping recommendations. Not many sources address disk stripe element and full stripe length in sufficient detail. They should, because when it comes to IO concurrency, load distribution is huge.

    Major Storage Points
    -Concurrent IO
    HBA queue depth settings. Set them to the max. Most SAN array vendors have custom HBA firmware that will max this setting. Also, Windows has a concurrent IO setting for physicaldisk. SQL might bypass this setting, but it’s something of interest.
    -Network paths
    For SAN, attempt to supply as many end-to-end paths as needed to meet the desired IO concurrency. If using a CDP facility, and performance is a major concern, enable SCSI write-ack spoofing on the SAN switches to avoid added latency. Also, if SAN extension via IP is in place, make sure to evaluate TCP performance using vendor-supplied utilities. Tweak as necessary.
    -Operation fragmentation
    Avoid it. As usual, never fragment a request/response when performance is a major concern. Size the disk allocation unit, Layer 2 frame size, Layer 3 MTU (for IP transports), array RAM cache page size, and array disk stripe element size as a succession of encapsulations able to carry the IO request/response in its entirety* – see the sketch after these points.
    -Critical Array Operations
    Critical meaning scary. Try not to overload a SAN with maintenance jobs such as snaps, clones, and replication while also trying to support an extremely high-performance DB. Simple IO math. If extensive use of CDP/replication to remote arrays is in place, and the organization has 50-100K to spend ;), consider offloading that by investing in SAN fabric-based replication.
    Also, review the array’s RAM cache management policies and tweak as necessary. Exhaustion and the subsequent all-stop flush of cache to disk is obviously horrible for performance. Set a more aggressive cache flush policy, usually by percentage used. If spindles and disk stripes were sized appropriately, this aggressive cache policy shouldn’t hurt… much.
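
    To make the “succession of encapsulations” idea concrete, here’s a minimal, hypothetical Python sketch (the sizes and names below are illustrative assumptions, not recommendations from the whitepapers) that checks whether a partition offset, file system allocation unit, and stripe element size nest cleanly, and computes the full stripe width across the data spindles:

    # Hypothetical sizing check: do the layers of the storage stack nest cleanly?
    # All values below are illustrative assumptions, not tuning advice.
    KB = 1024

    partition_offset   = 1024 * KB   # where the Windows partition starts on disk
    allocation_unit    = 64 * KB     # NTFS allocation unit (cluster) size
    stripe_element     = 64 * KB     # per-spindle stripe element on the array
    data_spindles      = 8           # spindles carrying data in the RAID set
    typical_io_request = 64 * KB     # e.g. one read-ahead chunk

    # Partition alignment: the partition must start on a stripe-element boundary,
    # otherwise a single IO can straddle two spindles.
    aligned = partition_offset % stripe_element == 0

    # Encapsulation: each layer's unit should divide evenly into the layer below,
    # so a request is never split mid-layer.
    nests_cleanly = (stripe_element % allocation_unit == 0
                     and typical_io_request % allocation_unit == 0
                     and typical_io_request <= stripe_element)

    # Full stripe size: what one pass across all data spindles can carry.
    full_stripe = stripe_element * data_spindles

    print(f"partition aligned:   {aligned}")
    print(f"layers nest cleanly: {nests_cleanly}")
    print(f"full stripe size:    {full_stripe // KB} KB")

    Whether you size for the average IO request or max out the allocation unit and stripe element size (the MSFT approach mentioned in the footnote below), the divisibility checks stay the same – only the numbers change.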

    All I can think of at the moment. Good luck on the book!

    * This is a point of contention. Some sources say to size to accommodate the average IO request size. Some say to instead max out the file system alloc units (MSFT) and the disk stripe element size (MSFT, et al). I’ve tested this, and I have to agree with MSFT that this method is excellent for high throughput when a high number of spindles is involved.

  • Great set of recommendations! In fact, I’m going to subcontract the chapter out to you. 😉 Sounds like we’ve had a lot of the same experiences!

    Thanks, Brent. In all of my experiences I’ve seen disconnects between storage and DBA teams. The extent of those disconnects can have negative implications for IO-intensive apps like SQL. I really do wish to see a book that attempts to bridge the knowledge gap where storage IO management is concerned – one that goes beyond scaling paths and adding cache. Both, although important, are intermediaries and require appropriate endpoint configurations to be effective under heavy loads.

    Btw, probably being a SAN guy at heart, I’m truly impressed with HP’s EVA data layout. Gone are contiguous data stripes, replaced with independent blocks that can be relocated by policy to satisfy load balancing / IO assignment needs. I think HP’s taken the next logical step in the high-concurrency data layout of the Symm DMX. I haven’t been following DMX, so they might be moving in the same direction, albeit more slowly considering the scale and install base of DMX.

    If you don’t mind emailing me I have an unrelated question I’d like to ask you offline. Thanks.
