DBA Days: Money for Nothing, Chips for Free

Throwing hardware at it

We gotta move these E5-2670 v3s

We gotta move these E5-2670 v3s

These were just sitting around Dell. On a table. Not doing anything. They might have been broken; I’m not sure.

But that’s not the point.

The most meager of server blades hanging out here had 128 GB of RAM in it.

One-hundred-and-twenty-eight. Gigabytes.

Let that sink in for a minute, then we’ll talk about why it matters to you.

Cache is king

How big is your database?

How much RAM do you have?

If the first number is way bigger than the second number, ask yourself why. The most common answers I get from clients are:

  • We’re on Standard Edition
  • We had way less data when we built this box
  • We already have 32 GB in here, how much more do we need?
  • WE NEED HOW MUCH MORE?

I’m not saying you need a 1:1 relationship between data and memory all the time, but if you’re not efficiently caching the stuff users are, you know, actually using, you may wanna rethink your strategy here.
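If you want a quick gut check on how much of each database is actually living in memory right now, something like this works — a rough sketch against the buffer descriptor DMV, with the math assuming SQL Server’s 8 KB pages:

```sql
-- Rough sketch: how much of each database is cached in the buffer pool?
-- Each buffer descriptor is one 8 KB page, so page count * 8 / 1024 = MB.
SELECT d.name AS database_name,
       COUNT_BIG(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors AS b
INNER JOIN sys.databases AS d
    ON d.database_id = b.database_id
GROUP BY d.name
ORDER BY cached_mb DESC;
```

Compare those numbers to each database’s actual data size. If your hot databases can’t come anywhere close to fitting, that’s your conversation starter.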

  • Option 1: Buy some more RAM
  • Option 2: Buy an all flash array

You’ll still need to blow some development time on tuning queries and indexes, but hardware can usually bridge the gap if things are already critical.

If you need help figuring out if memory will help for now, head over here and here.

No favors from Redmond

This is something I have real beef with, and I’ve written about it before. It takes Microsoft near-zero development time to let you cache more than 128 GB of data. Why do they charge you $7k a core for it? The features that are hard — Availability Groups, for instance — I totally get why they charge Enterprise Edition licensing for them. Lots of people spent lots of time getting them to you, along with many other features in Enterprise Edition.

No vendor is perfect on this. Oracle doesn’t allow for parallel query processing in Standard Edition, Postgres is just getting the hang of parallelism period, and MySQL… Uh… Exists.

This isn’t something that you can escape in the cloud, either. Azure’s biggest box packs 448 GB (and costs about $15k a month), and Amazon’s biggest box that’s compatible with SQL Server has 224 GB. You can go bigger if you want to run SAP’s Hana, but it’s pretty expensive.

And then you’d be on SAP Hana, and I’d miss you reading my blog posts.

What’s the point?

Staring at the hardware that was literally sitting on a table doing nothing all week was simultaneously refreshing and frustrating.

Refreshing because some people really get what hardware is helpful for SQL Server performance, and frustrating because I could solve so many client problems with just a few sticks of the RAM in one of those idle and unplugged blades.

And no, they wouldn’t let me take any. And no, they didn’t think my “Ozar’s Eleven” joke about stealing it was funny. But who doesn’t want their own security escort?

Yeah buddy that’s his own RAM

Caching data doesn’t solve every problem, though. If your problem is blocking or deadlocks, memory won’t necessarily make it go away. It doesn’t fix your bad design choices, lack of referential integrity, cursors, scalar functions, or 16 column clustered indexes.

For that you’ll need some young and good looking consultants.
