When I look at a SQL Server, one of the first things I check is, “How much memory does this thing have relative to the amount of data we’re hosting on here?” I’ve long used some seat-of-the-pants numbers, but armed with data from SQL ConstantCare® users who opted into public data sharing, let’s do a little deeper analysis.
I took last Thursday’s data for about 1,400 servers, then excluded stacked instances (multiple instances installed on the same OS), Azure SQL DB, Managed Instances, and servers with less than 10GB of data.
How much memory do SQL Servers have?
The median SQL Server has RAM equal to 19% of its data size. Meaning, if it's hosting 100GB of data, it has 19GB of RAM in the server. (Max memory size is a discussion for another blog post.)
A few examples right around that median ratio:
- 84GB data hosted on a SQL Server with 16GB RAM
- 166GB data hosted with 32GB RAM
- 289GB data hosted with 55GB RAM
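Just to make the arithmetic concrete, here's a quick Python sketch using the (data, RAM) pairs from the bullets above:

```python
# RAM as a percentage of hosted data size, using the example servers above.
servers = [(84, 16), (166, 32), (289, 55)]  # (data_gb, ram_gb)

for data_gb, ram_gb in servers:
    pct = round(100 * ram_gb / data_gb)
    print(f"{data_gb}GB data, {ram_gb}GB RAM -> {pct}% of data size")
# Each of these lands right at the 19% median.
```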
Let’s slice it a different way, though: let’s look at the median data volume in this sample, 219GB. (If I sort servers by how much data they host, the middle value is 219GB.) Among servers in that range:
- 219GB data hosted on a SQL Server with 15GB RAM (RAM is 7% of the data size)
- 222GB data hosted with 128GB RAM (58%)
- 222GB data hosted with 24GB RAM (11%)
These numbers are thought-provoking, but keep one thing in mind: data size vs hardware size doesn’t by itself tell you whether the users are happy. There are a lot of other factors at play, like query volume, tuning that’s been done, wait stats, etc.
A few examples from the extreme ends of memory-vs-data-size provisioning:
- 4.2TB of data hosted on 16GB RAM (HAHAHA)
- 20TB hosted on 64GB
- 10GB (not TB) hosted on 64GB RAM
- 61GB hosted on 256GB RAM
As data size grows, memory doesn’t.
I split the servers into quartiles based on the size of data they’re hosting:
- Servers hosting 10-59GB data: median RAM size is 74% of the data! (example: 27GB data, 20GB RAM)
- 60-224GB data: 23% RAM size (example: 210GB data, 48GB RAM)
- 225-600GB data: 13% RAM size (example: 488GB data, 64GB RAM)
- >600GB data: 6% RAM size (example: 2.1TB data, 128GB RAM)
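If you want to reproduce that quartile-style breakdown yourself, here's a minimal sketch. The server list is just the four example servers from the bullets above, standing in for the real ~1,400-server dataset:

```python
from statistics import median

# (data_gb, ram_gb) samples -- just the example servers from the bullets above,
# not the real dataset.
servers = [(27, 20), (210, 48), (488, 64), (2100, 128)]

# Data-size tiers from the split described above (GB of data).
tiers = [
    ("10-59GB", 10, 59),
    ("60-224GB", 60, 224),
    ("225-600GB", 225, 600),
    (">600GB", 601, float("inf")),
]

for label, low, high in tiers:
    ratios = [ram / data for data, ram in servers if low <= data <= high]
    if ratios:
        print(f"{label} data: median RAM is {median(ratios):.0%} of data size")
```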
The low-end percentages are a little skewed, since in the 10-59GB tier, the memory reserved for the OS means a lot. In our SQL Server Setup Guide, we tell folks to leave at least 4GB for the OS, and I think most sysadmins would consider 2GB the bare minimum. But still, the dropping percentages as data grows – that’s pretty steep.
Do Enterprise Edition servers have more memory?
Overall, Enterprise Edition servers handle larger volumes of data, and they are configured with more memory to handle it:
- Standard Edition median data size is 168GB, median RAM size is 32GB
- Enterprise Edition: median data size is 358GB, median RAM size is 54GB
- Developer Edition: data 111GB, RAM 16GB (poor developers)
So in terms of pure server size, yes, Enterprise servers are larger, but as a percentage of data, something kinda interesting happens:
- Standard Edition median: RAM is 23% the size of the data
- Enterprise Edition: RAM is 17% of the size of the data
- Developer Edition: 11% (c’mon, man, let them get some memory!)
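Note that these percentages don’t match what you’d get by dividing the median RAM by the median data size (32GB / 168GB is 19%, not 23%) – that’s because the median of the per-server ratios isn’t the same as the ratio of the medians. A sketch of that grouped calculation, with hypothetical sample servers invented purely to illustrate the grouping:

```python
from statistics import median

# Hypothetical (edition, data_gb, ram_gb) samples -- invented for
# illustration, not the real SQL ConstantCare dataset.
servers = [
    ("Standard", 168, 38), ("Standard", 150, 35),
    ("Enterprise", 358, 61), ("Enterprise", 400, 68),
]

# Group the per-server RAM/data ratios by edition.
by_edition = {}
for edition, data_gb, ram_gb in servers:
    by_edition.setdefault(edition, []).append(ram_gb / data_gb)

# Take the median ratio within each edition.
for edition, ratios in by_edition.items():
    print(f"{edition}: median RAM is {median(ratios):.0%} of data size")
```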
Do older SQL Servers have less memory?
Y’all are starving the dinosaurs:
- SQL Server 2008 and 2008R2 median RAM is 15% of the database size
- SQL Server 2012: 18%
- SQL Server 2014: 19%
- SQL Server 2016: 22%
- SQL Server 2017: 24%
That might be due to a few factors, like not caring as much about performance on older servers, or old servers having been built back when memory sizes were much smaller.
Where to go from here
My next thoughts are:
- Which servers are more likely to experience PAGEIOLATCH waits?
- Which servers are more likely to experience RESOURCE_SEMAPHORE poison waits when they run out of query workspace memory?
- Can I build a sizing tool that predicts user happiness based on data size? (Meaning, if you put 3TB of data on a 64GB RAM VM, how bad will the wait stats be?)
- Then, can we give that same advice to customers? For example, showing them a chart of where they rank for data size vs memory vs happiness?