Local Backups Are The New Offsite Backups

Years ago, Microsoft’s Jim Gray said that memory is the new disk, and disk is the new tape.  (You can read more in his 2006 “Tape is Dead, Disk is Tape, Flash is Disk” PowerPoint presentation or the Google Quick View.)  Many of the things he predicted have come to pass, and it’s time we explored this a little more, because it has so many implications for today’s database administrator.

When I first started in database administration, I did regular fire drill restores because I was paranoid.  (Chalk that up to dropping a table when I was a teenager working with SCO XENIX.  Damn, I feel old.)  As often as I could, I’d restore databases from production over to our development servers and time how long they took.  That way, I could report up to my managers and say, “If we lose production, it’ll take X hours to restore onto dev and go live.”  Every now and then – never as often as we should – we shipped backups offsite to the boss’s house.  As I moved up in life, I started working for companies that sent backups offsite via storage services.

My primary method of recovery was to restore a local backup first (which took a known length of time), but if things really went to hell in a handbasket, I could call the offsite storage service and get them to courier a backup over.  I had no idea how long that would take, and when it arrived, it’d take another unknown amount of time to read the backup off tape and get it onto the live servers.

In today’s multi-terabyte world, most of us don’t test our restores as often as we should.

Heck, most of us don’t even do backups as often as we should.  We have clients who have even abandoned the thought of doing multi-terabyte data warehouse backups because it’s faster to reload the data from source, and because they’ve got schema changes they’ve always wanted to make along the way anyway.

Local Disk is the New Tape

The HP DL580 G7 holds up to 2TB of memory, but even if you cheap out and only get 1TB, you’ll pay under $50k for that memory.  While that might sound expensive at first, keep in mind that SQL Server Enterprise Edition is roughly $30k per socket, so you’ll spend $120k just licensing the four CPU sockets in that DL580.  The memory isn’t the biggest expense in that server – 1TB of memory is cheaper than the SQL Server licensing.  And if you need four CPU sockets’ worth of power, you’re probably going to be caching a great deal of your database in memory.

That is, as long as the server’s up.

Two ugly things happen when that server restarts.  First, it needs to check the memory upon booting.  One of our clients with a 1TB memory system reports that it takes over 15 minutes for the server to boot up into Windows.  That’s some pretty ugly downtime if you need to do Windows patches or firmware upgrades, and this is why clustering starts to be a no-brainer when you go past the 256GB memory mark.  It’s just so much faster to have the second node already up and running – you lose much less time.

Second – and clustering doesn’t fix this – is that your server is starting from scratch with nothing cached in memory whatsoever.  The data’s all sitting on disk.  Even with whiz-bang solid state drives, it takes a long time to read 1TB of data off disk to warm up SQL Server’s cache.

One of our performance-freak clients had us write scripts to do a SELECT * across every table just to make sure all of the data would be brought into cache after a restart.  If you’d like to attempt a similar exercise, keep in mind that you’ll also need index hints for every index in the database, because a plain SELECT * will only hit the clustered index.  If you’ve got filtered indexes, you’ll need a WHERE clause that matches each index’s filter, or else DMV tricks that will hit the filtered index.
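
If you want to try something similar, here’s a rough sketch of the warm-up idea in Python.  The pyodbc driver, the connection string, and the server and database names are all illustrative assumptions – this isn’t the client’s actual script – but the core trick is the same: scan every rowstore index once, with an explicit index hint, so its pages land in the buffer pool.

```python
# Buffer-pool warm-up sketch.  Assumptions: pyodbc is installed, the ODBC
# connection string below is adjusted for your environment, and your login
# can read every table.  It scans each clustered and nonclustered index once
# so those pages get pulled into SQL Server's cache after a restart.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=YourServer;DATABASE=YourDatabase;Trusted_Connection=yes;"
)
cursor = conn.cursor()

cursor.execute("""
    SELECT s.name, t.name, i.name
    FROM sys.indexes i
    JOIN sys.tables  t ON t.object_id = i.object_id
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    WHERE i.type IN (1, 2)    -- clustered and nonclustered rowstore indexes only
      AND i.has_filter = 0    -- filtered indexes need a WHERE clause matching their filter
""")

for schema_name, table_name, index_name in cursor.fetchall():
    # The index hint forces a scan of this specific index; a plain SELECT *
    # would only touch the clustered index.
    cursor.execute(
        f"SELECT COUNT_BIG(*) FROM [{schema_name}].[{table_name}] "
        f"WITH (INDEX([{index_name}]))"
    )
    cursor.fetchone()
```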

For these kinds of companies, a SQL Server restart is a serious outage-causing event.  Their systems will perform below par for up to an hour as the database server recovers.  To a manager, this is akin to restoring from backup – it’s an event they don’t want to see more than once a quarter.

Caching Servers are the New Memory

Database administrators think of SQL Server’s memory as the cache.  That is completely and utterly wrong.  The database is the persistence layer – the place data goes when it needs to be saved permanently.

The first place your applications should be querying isn’t the database: it’s a caching server tier like Redis, Memcached, Couchbase (aka Membase), and Windows AppFabric Caching.  Only if your apps can’t find the data in cache should they ask the database, and then they should store the database results in the cache to lessen load on the database.  The fastest query is the one you never make.
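
In code, that pattern is usually called cache-aside, and it’s only a few lines.  Here’s a minimal sketch in Python against Redis – the redis-py client, the five-minute TTL, and the load_user_from_db() helper are illustrative assumptions, not a recommendation of any particular library or expiration policy:

```python
# Cache-aside sketch.  Assumptions: a Redis instance on localhost, the redis-py
# client installed, and a hypothetical load_user_from_db() that runs the real
# query against SQL Server.
import json
import redis

cache = redis.Redis(host="localhost", port=6379)
CACHE_TTL_SECONDS = 300  # assumption: five minutes of staleness is acceptable here

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)      # cache hit: the database never sees this request
    user = load_user_from_db(user_id)  # cache miss: only now do we ask SQL Server
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(user))  # save it for the next caller
    return user
```

Picking the TTL is a judgment call – too short and the database still takes the load, too long and users see stale data – but on the hot path, the query never happens at all.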

Don’t think the caching tier doesn’t need to be reliable just because it doesn’t have original data in it.  When this caching tier goes down, your database server will feel it.  As your loads grow, your app can get to the point where the database tier can’t survive the load if the cache servers are completely flushed, and you have to build cache-server-warming code.  That’s why I consider caching to be the new buffer pool – it’s the database’s first line of defense, long before queries hit the database server’s buffer pool.

When You’re Performance Tuning Queries, Zoom Out

When someone brings me a query to tune, the first question I ask is, “Why are we running this query?”  It’s not just that there may be a faster way to get the data, but there might be a reason not to run the query at all.

Are we running the same get-configuration-data query over and over?  Maybe that should be cached inside the application itself.
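
For truly static stuff like that, you don’t even need a cache server – an in-process cache is enough.  A tiny sketch, assuming a hypothetical fetch_setting_from_db() helper that runs the real query:

```python
# In-app caching sketch for configuration data that almost never changes.
# fetch_setting_from_db() is a hypothetical helper standing in for the real query.
from functools import lru_cache

@lru_cache(maxsize=None)
def get_setting(name: str) -> str:
    return fetch_setting_from_db(name)  # hits SQL Server once per setting, per process
```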

Are thousands of users asking for their dynamic home page over and over?  Maybe we should use a cache tier like Redis that every web and app server can query.

It can take your developers months to fully implement and test a good caching solution like Redis, so think like Jim Gray.  The time to prepare for tomorrow’s massive architecture changes is now.  Start asking the caching question so your developers can start building out your new buffer pool.


3 Comments

  • Did you just type “SCO?” You did. Yup, you’re a dinosaur.

  • Do you think it would be a good idea for Microsoft to somehow make the buffer cache its own service? SQL Server’s executables could then restart for updates etc without dropping all that cached data. Of course, if the buffer service itself needs to be updated then you’ll still lose your buffer, but that wouldn’t happen as often as restarts for other reasons.
