Database Design Choices To Consider When You’re Worried About Scale


Tipping the Kilter

I’m intentionally avoiding physical/hardware choices in this post, like how much hardware to buy, and scaling out vs. scaling up.

It’s not that I don’t care about those things (I do, and they’re important), but most new apps these days start in the cloud, or on an on-premises VM. In the cloud, you’re not making terribly specific hardware choices, and the choices you make for VM hosts are different from the ones you’d make for a bare metal server.

CPU choice is at the top of that list, for a lot of reasons. A CPU’s clock speed is a serial query’s speed limit, which is why the typical wisdom for a bare metal server was to buy fewer cores at a higher clock speed. That saves you licensing bucks, and still lets your server function reasonably well.

That line of thinking doesn’t work on a VM host, where you need a bunch of different things cohabitating and co-scheduling. You’re gonna sacrifice core speed for girth.

Up in the cloud, most published CPU speeds are in the 2.4-2.6 GHz range. That’s because the cloud’s really all VMs where providers had to sacrifice core speed for girth.

See? This is why I’m not talking about hardware. Any advice I give you is gonna be outdated in a quarter, and I have to keep saying “girth”.

Stick To The Ribs

With that out of the way, let’s talk about some more logical (ha ha ha) choices you can make that might save your tail later on.

Be More Optimistic

I can’t begin to tell you how many terrible things you can avoid by starting your apps out under an optimistic isolation level. Read queries and write queries can magically coexist, at the expense of some tempdb (that’s where the version store lives).

Yes, that means you can’t leave transactions open for a very long time, but hey, you shouldn’t do that anyway.

Yes, that means you’ll suffer a bit more if you perform large modifications, but you should be batching them anyway.

Just think of all the folly you’ll save yourself: falling into the NOLOCK trap, and the countless nights you’d otherwise spend wondering if READ UNCOMMITTED is any different.

(It’s not.)
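
If you want to flip the switch, here’s a minimal sketch, assuming a database named YourDatabase (hypothetical, obviously):

    -- Enabling Read Committed Snapshot Isolation (RCSI):
    -- readers use row versions in tempdb instead of taking shared locks.
    ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE; -- boots open transactions so the change doesn't wait forever

And since I brought up batching those large modifications, here’s a rough sketch of the idea against a hypothetical dbo.Events table:

    -- Delete in small chunks to keep each transaction (and the version store) short.
    DECLARE @rows INT = 1;

    WHILE @rows > 0
    BEGIN
        DELETE TOP (5000)
        FROM dbo.Events
        WHERE event_date < DATEADD(YEAR, -2, SYSDATETIME());

        SET @rows = @@ROWCOUNT;
    END;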

Partition It Just A Little Bit

No, not because of some magical performance unicorn that you’ll never saddle, but because eventually someone will say: “Hey, we should archive some data.”

And you can say: “No problem.”

Because moving partitioned data out in chunks is a lot easier than coming up with your own strategy to move, verify, and delete data.
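
For a concrete flavor, here’s a minimal sketch, assuming a hypothetical dbo.Sales table partitioned by a date key, and an empty dbo.Sales_Archive staging table with a matching structure on the same filegroup:

    -- Switching a partition out is a metadata-only operation:
    -- no data moves, so it's nearly instant regardless of row count.
    ALTER TABLE dbo.Sales
    SWITCH PARTITION 1 TO dbo.Sales_Archive;

From there you can verify, archive, or drop the staged data at your leisure, without touching the live table.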

Partitioning isn’t a fit for every table. Things like lists of customers or users don’t often have a good partitioning key to swap data around by. I really mean it when I say that partitioning should be considered a data management feature. If you’re going to implement it, make sure you can benefit from it.

Bite The Four Bytes Bullet

“Hey, you’re close to running out of INTs for this identity column.”

“Yeah, we don’t wanna use BIGINTs though, they’re four bytes bigger than INTs.”

“I also noticed all these first and last name columns are NVARCHAR(MAX)…”

Stop with the four byte drama. If you’re worried about a database that’s going to grow into the terabytes anyway, you’re going to run out of INTs faster than you think (they top out around 2.1 billion).

I don’t mean you should throw out domain knowledge about the range of values you’re allowing in a column, but if you’re defining identity columns or sequences that don’t have any particular relational value or business meaning, just go big. It’s not worth the pain of changing later, or having to monitor and reseed.
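
A minimal sketch of what I mean, with a hypothetical table (the column names and sizes are just for illustration):

    CREATE TABLE dbo.Orders
    (
        -- Surrogate key with no business meaning: go big from day one.
        order_id BIGINT IDENTITY(1, 1) NOT NULL
            CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED,
        -- Columns with real domain knowledge still get sized sensibly.
        customer_name NVARCHAR(100) NOT NULL,
        order_date DATETIME2(0) NOT NULL
    );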

Compress To Impress

Disk is cheap, but it’s not infinite. Unless you’ve got an Infinite Storage Repository®, of course. While not as good as columnstore, I think row and page compression are still legit choices to smush more data onto your disks.
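
A minimal sketch, using a hypothetical dbo.Sales table; it’s worth running the estimate procedure before committing to a rebuild:

    -- Estimate how much space page compression would actually save.
    EXEC sys.sp_estimate_data_compression_savings
        @schema_name = N'dbo',
        @object_name = N'Sales',
        @index_id = NULL,
        @partition_number = NULL,
        @data_compression = N'PAGE';

    -- If the numbers look good, rebuild with page compression.
    ALTER TABLE dbo.Sales
    REBUILD WITH (DATA_COMPRESSION = PAGE);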

This does cost you some CPU when compressed data gets read, but row and page compressed pages stay compressed in the buffer pool, so you can squeeze more data into memory, too. And if you’ve got tiered storage, it can help you keep more hot data on the good stuff.

Speaking of which…

Files and Filegroups

There’s a hard limit of 16TB for a single data file in SQL Server. I’ve had to add files on a few occasions, because the PRIMARY filegroup was filling up. Again, if you’re concerned about the future size of your data, this is important to consider: don’t put user data in PRIMARY, and have a well-defined strategy for what-goes-where.
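
That setup is cheap to do early and painful to retrofit. A minimal sketch, assuming a database named YourDatabase and a made-up file path:

    -- Add a filegroup for user data, with one file to start.
    ALTER DATABASE YourDatabase
    ADD FILEGROUP UserData;

    ALTER DATABASE YourDatabase
    ADD FILE
    (
        NAME = N'UserData1',
        FILENAME = N'D:\Data\YourDatabase_UserData1.ndf', -- hypothetical path
        SIZE = 10GB,
        FILEGROWTH = 1GB
    )
    TO FILEGROUP UserData;

    -- Make it the default, so new objects land there instead of PRIMARY.
    ALTER DATABASE YourDatabase
    MODIFY FILEGROUP UserData DEFAULT;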

This isn’t really a performance thing, it just gives you some freedom with how you back up and restore data, how you run DBCC CHECKDB, and whether you can switch archive data to read-only filegroups. You can also do some neat tricks with moving data around with no downtime.

At larger database sizes, you’re likely going to flip from taking native backups to taking SAN snapshots. I’ve heard from people on both sides of the fence, here. Some people swear they can make native backup and restore scale to be as fast as snapshots, and that snapshots have caused issues with corruption, or have silently failed in odd ways.

I haven’t personally run into either of those, but it goes to show: no matter what your backup strategy is, make sure it fits your RPO and RTO goals, and make sure it works.
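
“Make sure it works” deserves more than lip service. A minimal sketch, with a hypothetical backup path; VERIFYONLY only proves the file is readable, so periodic test restores are still the real test:

    -- Sanity-check a native backup without restoring it.
    RESTORE VERIFYONLY
    FROM DISK = N'D:\Backups\YourDatabase.bak'
    WITH CHECKSUM; -- validates page checksums if the backup was taken WITH CHECKSUM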

Left Out

On top of hardware choice, I also left out SQL Server version and edition choices. I did that for a couple reasons.

  1. If you’re serious about scale, Enterprise is kind of a given
  2. Everything I mentioned here is also available in Standard Edition, starting with 2016 SP1

If you’re starting work on an application today, it doesn’t make sense to use a version of SQL Server prior to that, full stop.
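
If you’re not sure what you’re running, a quick check:

    -- Report edition and version for the current instance.
    SELECT
        SERVERPROPERTY('Edition') AS edition,
        SERVERPROPERTY('ProductVersion') AS product_version;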

I also didn’t go into different architecture choices, like database per customer, schema per customer, etc. Finding the right fit there can be tough, and if you’re on PaaS like Azure or RDS, you have to abide by their individual limitations (30 database limit in RDS, no cross-database stuff in Azure SQL DB, etc.).

Thanks for reading!
