SQL Backup Software: Part 2 – Quad Cores are Changing the Game

In my last post, Why Native SQL Backups Suck, I talked about the weaknesses of native SQL Server backups. Today, I’m going to extend that a little by talking about one of the greatest surprises for DBAs in recent history: the advent of dirt-cheap multi-core processors that don’t cost extra for SQL licensing.

How SQL Server Licensing Is Affected by Quad Cores

Microsoft SQL Server is licensed by the CPU socket, not the core, so it costs the same to license a single-core CPU as it does a quad-core CPU. I’ve used that logic to convince executives to upgrade older single-core database servers to new multi-core hardware, because the license savings can often pay for the server hardware. Licensing a brand-new 2-socket quad-core box costs half as much as licensing a 4-socket single-core box, and the license savings completely pay for the cost of a new server.
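To put rough numbers on it, here’s a back-of-the-envelope sketch in Python. The license and hardware prices are made-up placeholders, not real quotes, so plug in your own figures:

```python
# Back-of-the-envelope licensing math. Per-socket licensing means a box
# with fewer sockets (but more cores) costs less to license. The prices
# below are hypothetical placeholders, not actual quotes.

LICENSE_PER_SOCKET = 20_000  # hypothetical per-socket SQL Server license
NEW_SERVER_COST    = 15_000  # hypothetical 2-socket quad-core server price

old_license = 4 * LICENSE_PER_SOCKET  # 4 sockets x 1 core  = 4 cores total
new_license = 2 * LICENSE_PER_SOCKET  # 2 sockets x 4 cores = 8 cores total

savings = old_license - new_license
print(f"License savings:             ${savings:,}")
print(f"Net after buying the server: ${savings - NEW_SERVER_COST:,}")
```

With these placeholder prices, the two sockets you stop licensing cover the new hardware with room to spare, and you end up with twice the cores.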

Most of the time, quad-core CPUs aren’t really a compelling feature for database administrators because SQL Server hits I/O bottlenecks long before it hits CPU bottlenecks. We pour money into drives and HBAs, and we preach the benefits of RAID 10, but we don’t spend a lot of time comparing processors in great detail. I/O is the big bottleneck, and that’s especially true during the backup window. Backing up a SQL Server database consists of reading a lot of information from drives, then writing that same information to another set of drives (either local or across the network).

So during the backup window, we have all these extra cores sitting around idle with nothing to do.

Let’s Do Something With Those Cores!

Why not use that extra idle CPU power to compress the data before we send it out to be written?

The users won’t notice because they’re already waiting on I/O anyway, especially during backup windows when we’re taxing the I/O subsystems.

If we dedicate this extra CPU power to data compression, a smaller amount of data gets sent out for writes. Our backup gets smaller, which in turn decreases our I/O load. In effect, we’re trading CPU power for I/O power: the more CPU power we have for data compression, the more I/O we free up.

The equation gets interesting when we start to relate how much I/O speed we buy with each additional processor core. Going from a single-core CPU to a quad-core CPU enables a massive amount of backup compression power, which means much less data needs to be written to disk. If less data is being written to the backup target, then we have two options: our backup windows become shorter, or we can use cheaper/slower disks.
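To see how that plays out, here’s a toy pipeline model in Python. It assumes backup throughput is limited by the slowest of three stages: reading from the data drives, compressing, and writing to the backup target. Every throughput figure and the 4:1 compression ratio are illustrative assumptions, not benchmarks from any real product:

```python
# Toy model of a compressed backup pipeline: read -> compress -> write.
# All throughput numbers and the compression ratio are made-up
# assumptions for illustration, not measurements.

DB_SIZE_GB             = 500
READ_MBPS              = 400  # read speed from the data drives
WRITE_MBPS             = 100  # write speed to the backup target
COMPRESS_MBPS_PER_CORE = 60   # how fast one core can compress data
RATIO                  = 4    # 4:1 - the backup is 1/4 the database size

def backup_minutes(cores: int) -> float:
    """Backup time when compression shrinks the write stream."""
    compress = cores * COMPRESS_MBPS_PER_CORE
    # Each MB written to the target represents RATIO MB of database,
    # so the write stage is effectively RATIO times faster.
    effective_write = WRITE_MBPS * RATIO
    bottleneck = min(READ_MBPS, compress, effective_write)
    return (DB_SIZE_GB * 1024) / bottleneck / 60

print(f"No compression: {(DB_SIZE_GB * 1024) / WRITE_MBPS / 60:5.1f} min")
print(f"1 core :        {backup_minutes(1):5.1f} min")  # compression-bound
print(f"4 cores:        {backup_minutes(4):5.1f} min")  # I/O-bound again
```

Under these assumptions, a single core actually makes the backup slower because compression itself becomes the bottleneck; it takes the quad-core’s extra horsepower to turn compression into a net win. That’s exactly why cheap quad cores change the game.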

Using Backup Compression To Save Money

Choosing the latter option means the shiny new quad-core database server may pay for itself. I’ve been able to say, “You need more drives for your new project? I’ll sell you my RAID 10 of high-end 73GB 15K SAN spindles, because I’m downsizing to a RAID 5 SATA array.” Trading away those expensive drives let me buy more quad-core database servers, which could compress the backup files better, and I could live with the SATA drives as a backup target. My backup window stayed the same, and I gained faster CPU power outside of my backup window because I had more cores.

Cheap quad-core processors let a database administrator trade CPU power for I/O speed in the backup window, but only when those newfound cores are actively compressing the backup data. SQL Server 2000 & 2005 can’t natively do that, and that’s where backup compression software comes in.

The same quad-core power works in our favor at restore time, too. During a restore, the SQL Server has to read from the backup file and then write that data out to disk. With backup compression software, the server reads less from the backup file because the backup is smaller. That means faster restores with less I/O bottlenecking, and fast restore times are important to a DBA’s career success. The faster we can restore a database in an emergency, the better we look.
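Here’s the restore side of the same toy model, again with made-up numbers. The server reads only a quarter as many bytes off the backup target, and decompression is generally cheaper than compression:

```python
# Restore side of the toy model: read backup -> decompress -> write data.
# All numbers are illustrative assumptions, not benchmarks.

DB_SIZE_GB           = 500
BACKUP_READ_MBPS     = 100  # read speed from the (cheap) backup target
DATA_WRITE_MBPS      = 400  # write speed to the data drives
DECOMP_MBPS_PER_CORE = 150  # decompression runs faster than compression
RATIO                = 4    # same 4:1 ratio as the backup model

def restore_minutes(cores: int) -> float:
    """Restore time when the backup file is 1/RATIO the database size."""
    # Each MB read from the backup expands to RATIO MB of database pages.
    effective_read = BACKUP_READ_MBPS * RATIO
    decompress = cores * DECOMP_MBPS_PER_CORE
    bottleneck = min(effective_read, decompress, DATA_WRITE_MBPS)
    return (DB_SIZE_GB * 1024) / bottleneck / 60

print(f"Uncompressed restore:        {(DB_SIZE_GB * 1024) / BACKUP_READ_MBPS / 60:5.1f} min")
print(f"Compressed restore, 4 cores: {restore_minutes(4):5.1f} min")
```

It’s the same trade in reverse: the extra cores buy back the I/O we’d otherwise spend reading a full-size backup file off slow disks.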

Old Servers Trickle Down to Dev & QA

This pays off in another (albeit obscure) way: development & QA servers. At our shop, we’re constantly replacing big, multi-socket (but single-core) servers with smaller quad-core servers. As a result, we have a lot of 4-way and 8-way servers lying around that are relatively expensive to license in production. They make absolutely perfect development & QA SQL Servers, though, since SQL Server Developer Edition isn’t licensed by the socket, but at a flat rate. I’ve been able to claim these 8-way servers by saying, “No one else can afford to license these for their applications, but I can use them for development.” Then those 8 cores pay off in faster restores from our production backups: I’m able to refresh development & QA environments in shorter windows because these servers can decompress the backups faster than a smaller box could.

If faster backup & restore windows were the only tricks available in backup compression software, those alone would be a great ROI story, but there’s more. In the next part of my series, New Features for SQL Backup and Restore, we’ll look at ways backup software vendors are able to jump through hoops that native backups can’t.


4 Comments

  • So did SQLsafe 4.6.1 fix the bugs? Can you blog about the MemToLeave error you were receiving?

    Thanks!
    -Another interested DBA…

  • I haven’t been able to test whether the new SQLsafe patch fixed the bug because we ended up focusing on our MTC lab preparations. I’m going to test it when I get back, though.

    I wasn’t getting a MemToLeave “error” – Idera support wanted me to change the MemToLeave settings. Since this is on an SAP BW data warehouse, all of our SQL changes have to be vetted by the SAP staff first. Changing a setting like that would affect how the application performs, and that’s not something we’d consider on a production box without testing it extensively first.

    Changing that setting may have “fixed” Idera SQLsafe’s memory leak by allowing it to continue leaking longer without running out, but when I got that instruction, I realized we were going down the wrong path for a product that I have to bet my job on.

  • Most people (especially us software guys) still don’t realize that those 3 extra cores wait for core one to stop using those same data and address pins on that single socket. (Socket 1133 doesn’t mean each core gets 64 data pins and 48 or more address pins *each*.) Technically, 4 single-core sockets would truly be able to do independent work, unless the memory bus gets frozen as each core does I/O. Another reason why linear scaling will remain unobtainium…

    • Jack – that’s an interesting thought, and there’s a grain of truth to it, but that’s not really how today’s processors work. Two things let cores work simultaneously: L3 cache (which lives inside the same CPU) and the slow speed of things happening outside the chip. It doesn’t take long for CPUs to send requests and get data back over the data & address pins, but it takes a *very* long time (relatively) for the disk subsystems to respond.

      By your logic, even 4 single-core sockets can’t do independent work because they rely on the same controller chipsets.

      While linear scaling is indeed unobtainium, I just wanted to make it clear that additional cores do enable scaling.

