How Much Memory is “Normal” for SQL Servers?

When I look at a SQL Server, one of the first things I check is, “How much memory does this thing have relative to the amount of data we’re hosting on here?” I’ve long used some seat-of-the-pants numbers, but armed with data from SQL ConstantCare® users who opted into public data sharing, let’s do a little deeper analysis.

I took last Thursday’s data for about 1,400 servers, then excluded stacked instances (multiple instances installed on the same OS), Azure SQL DB, Managed Instances, and servers with less than 10GB of data.

How much memory do SQL Servers have?

The median SQL Server has 19% of the data size as RAM. Meaning, if it’s hosting 100GB of data, it has 19GB RAM in the server. (Max memory size and file sizes are discussions for another blog post.)

To take a few examples near that median:

  • 84GB data hosted on a SQL Server with 16GB RAM
  • 166GB data hosted with 32GB RAM
  • 289GB data hosted with 55GB RAM

Let’s slice it a different way, though: let’s take the median data volume in this sample, 219GB. (If I sort servers by how much data they host, the middle value is 219GB.) Among servers in that range, a few examples:

  • 219GB data hosted on a SQL Server with 15GB RAM (RAM is 7% of the data size)
  • 222GB data hosted with 128GB RAM (58%)
  • 222GB data hosted with 24GB RAM (11%)
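If you want to see where your own server lands, here's a rough sketch of that ratio. It assumes you measure the allocated size of data (ROWS) files, the same way this analysis does; `sys.master_files` reports sizes in 8KB pages, and `sys.dm_os_sys_memory` requires VIEW SERVER STATE permission:

```sql
-- Rough sketch: this instance's physical RAM as a percentage of its
-- allocated data file size. Log files are excluded to match the
-- allocated-data measure used in the post.
SELECT m.total_physical_memory_kb / 1024.0 / 1024.0 AS ServerRamGB,
       d.DataGB,
       m.total_physical_memory_kb / 1024.0 / 1024.0
           / NULLIF(d.DataGB, 0) * 100 AS RamPercentOfData
FROM sys.dm_os_sys_memory AS m
CROSS JOIN (SELECT SUM(size) * 8.0 / 1024.0 / 1024.0 AS DataGB
            FROM sys.master_files
            WHERE type_desc = N'ROWS') AS d;
```

A server hosting 219GB of data with 15GB RAM would come back at roughly 7%, matching the first example above.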

These numbers are thought-provoking, but keep one caveat in mind: data size versus hardware size doesn’t by itself tell you whether the users are happy. There are a lot of other factors at play, like query volume, tuning done, wait stats, etc.

A few examples from the extreme ends of memory-vs-data-size provisioning:

  • 4.2TB of data hosted on 16GB RAM (HAHAHA)
  • 20TB hosted on 64GB
  • 10GB (not TB) hosted on 64GB RAM
  • 61GB hosted on 256GB RAM

As data size grows, memory doesn’t.

I split the servers into quartiles based on the size of data they’re hosting:

  • Servers hosting 10-59GB data: median RAM size is 74% of the data! (example: 27GB data, 20GB RAM)
  • 60-224GB data: 23% RAM size (example: 210GB data, 48GB RAM)
  • 225-600GB data: 13% RAM size (example: 488GB data, 64GB RAM)
  • >600GB data: 6% RAM size (example: 2.1TB data, 128GB RAM)

The low-end percentages are a little skewed, since in the 10-59GB tier, the OS’s memory needs account for a big share of the box. In our SQL Server Setup Guide, we tell folks to leave at least 4GB to the OS, and I think most sysadmins would consider 2GB to be the bare minimum. But still, the dropping percentages as data grows – that’s pretty steep.
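One way to sanity-check that OS headroom point on your own server is to compare the instance's max server memory setting against the physical RAM. A sketch – `value_in_use` is in MB, and the default 2147483647 means "unlimited":

```sql
-- Rough sketch: how much of the box's RAM SQL Server is allowed to
-- take, and (by subtraction) what's left over for the OS.
SELECT c.value_in_use AS MaxServerMemoryMB,
       m.total_physical_memory_kb / 1024 AS PhysicalMemoryMB
FROM sys.configurations AS c
CROSS JOIN sys.dm_os_sys_memory AS m
WHERE c.name = N'max server memory (MB)';
```

If the two numbers are less than 2-4GB apart (or max server memory is still unlimited), the OS is getting squeezed.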

Do Enterprise Edition servers have more memory?

Overall, Enterprise Edition servers handle larger volumes of data, and they are configured with more memory to handle it:

  • Standard Edition median data size is 168GB, median RAM size is 32GB
  • Enterprise Edition: median data size is 358GB, median RAM size is 54GB
  • Developer Edition: data 111GB, RAM 16GB (poor developers)

So in terms of pure server size, yes, Enterprise servers are larger, but as a percentage of data, something kinda interesting happens:

  • Standard Edition median: RAM is 23% the size of the data
  • Enterprise Edition: RAM is 17% of the size of the data
  • Developer Edition: 11% (c’mon, man, let them get some memory!)

Do older SQL Servers have less memory?


Y’all are starving the dinosaurs:

  • SQL Server 2008 and 2008R2 median RAM is 15% of the database size
  • SQL Server 2012: 18%
  • SQL Server 2014: 19%
  • SQL Server 2016: 22%
  • SQL Server 2017: 24%

That might be due to a few factors, like nobody caring about performance on older servers, or old servers having been built back when memory sizes were much smaller.

Where to go from here

My next thoughts are:

  • Which servers are more likely to experience PAGEIOLATCH waits?
  • Which servers are more likely to experience RESOURCE_SEMAPHORE poison waits when they run out of query workspace memory?
  • Can I build a sizing tool that predicts user happiness based on data size? (Meaning, if you put 3TB of data on a 64GB RAM VM, how bad will the wait stats be?)
  • Then, can we give that same advice to customers? For example, showing them a chart of where they rank for data size vs memory vs happiness?
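Those first two questions can be eyeballed on any one server today. A sketch that pulls the cumulative waits since the last restart from `sys.dm_os_wait_stats` – note the numbers reset whenever the instance restarts:

```sql
-- Rough sketch: cumulative memory-related waits since last restart.
-- High PAGEIOLATCH% time suggests the buffer pool is too small for
-- the working set; RESOURCE_SEMAPHORE means queries queued up
-- waiting for workspace memory grants.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms / 1000.0 AS wait_time_seconds
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'PAGEIOLATCH%'
   OR wait_type = N'RESOURCE_SEMAPHORE'
ORDER BY wait_time_ms DESC;
```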

34 Comments

  • I know you wouldn’t have this, but I’d really love to hear more about that “4.2TB of data hosted on 16GB RAM” system…
    Are the users happy?
    Is the OS happy?
    How long does it take to boot up and shut down?
    How *OLD* is the server? (Somehow, I’m picturing a 10-year-old PowerEdge that hasn’t been replaced because “it ain’t broken.”)

    Reply
    • Yep, all great questions. I’ve seen one case in particular where folks restored a ~10TB data warehouse onto a 16GB RAM server just for periodic reporting & auditing purposes. The queries would take days to run, but they didn’t care because there wasn’t an urgency on those particular queries. The server was on fire, though, hahaha.

      Reply
  • Are your numbers based on the allocated size of the databases, or the amount of space used in the databases? In my capacity management work I forecast based on the space used, figuring that unused space doesn’t contribute to buffer pool demand.

    In general, it would be great to know more about these findings. I am in the process of making recommendations for how much data we can put on servers based on the amount of memory they have. It’s been pretty hard to get information on the responsiveness our systems are giving from a user point of view, so the big question you ask about user “happiness” often is unanswered.

    Reply
    • Len – here, I’m using allocated size to keep things simple. But yep, absolutely, when I’m doing detailed space forecasting, I do the same thing.

      We’ll definitely be sharing more about the findings with the clients & training students. My big plan for 2019 is to start feeding this data back to clients in a way that helps them make better decisions. There’s a huge value in having this kind of diagnostic data to improve their experience.

      Reply
  • Interesting findings. I normally allocate 75% of the data storage. Is there such a thing as allocating too much memory?

    Reply
    • That’s a great question. The only reason I’d say yes is that if you give too much memory to one server that doesn’t need it – when you could give that memory to another server that does need it. (Or if you lose credibility with a client where someone else comes in and says, “No no, you’re wasting money here, there’s no need for that.”)

      Reply
  • We’re right in there I spose. 4TB on 100GB. We actually have more memory, but the guys read something at some point that made them fearful of giving more RAM and getting worse performance (something about plan generation).
    When I first started I was all in favour of giving it more RAM, but now it feels fine. Most of the data is older and we’re in need of an archiving strategy, or actually using our partitioning to offload older data to less performant disk.
    The main issues we have are a few queries that hit a very active part of the database with poor design and larger memory grants – apart from that, things are pretty smooth, and memory is something I don’t worry about so much.

    Reply
  • Ivan Argentinski
    November 20, 2018 5:32 am

    The perfect RAM:DB ratio is application dependent. For our purposes, 1:10 usually proves adequate.

    On one of our systems, we have 4TB db with 384GB RAM.

    Reply
  • Really interesting thanks. I’m currently responsible for a server where the total db size is 4.2TB. The server has 256GB of RAM but is running SQL Server 2016 Standard Edition, therefore limited to 128GB. That equates to just 3% of the total db size in memory, roughly along the lines of Brent’s findings.

    What’s interesting is that if I dig into what’s eating up my buffer pool, I can see that the largest database (ReportServer – 2.6TB!) is only consuming 1.2GB of memory. So my db size vs memory stats may look bad, but the biggest databases aren’t the ones consuming memory. The highest consumer of the buffer pool is an OLTP database which is 500GB in size, consuming 90GB of memory (just under 20%).

    Conclusion? Increasing memory is unlikely to dramatically improve performance. I monitor Page Life Expectancy as my primary indicator of memory pressure, and currently it’s a fairly healthy 3.5 hours, although that does drop sharply during maintenance windows.

    Interested to hear other people’s thoughts on this.

    Query to return the buffer pool size per database in case it’s useful:

    SELECT database_id AS DatabaseID,
           DB_NAME(database_id) AS DatabaseName,
           COUNT(file_id) * 8 / 1024.0 AS BufferSizeInMB
    FROM sys.dm_os_buffer_descriptors
    GROUP BY DB_NAME(database_id), database_id
    ORDER BY BufferSizeInMB DESC;
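    For the Page Life Expectancy number mentioned above, a companion sketch pulls the counter straight from the DMV – on named instances the object_name prefix differs, hence the LIKE:

```sql
-- Sketch: current Page Life Expectancy, in seconds, from the
-- performance counter DMV (the same number PerfMon reports).
SELECT [object_name],
       cntr_value AS PageLifeExpectancySeconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy'
  AND [object_name] LIKE N'%Buffer Manager%';
```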

    Reply
  • Roger Plowman
    December 7, 2018 7:50 am

    I guess we’re really lucky. We’re running < 10GB of data with 80GB of RAM. 🙂

    Of course this is a terminal server that's been press-ganged into having a SQL Server Std. instance on it for budget reasons but 800% of data size means our buffer page life expectancy is answered by "when was the last time the server was rebooted" and our buffer cache hit ratio rarely drops below 100%.

    Hey, RAM is cheap, right? 🙂

    Reply
  • Always good to see stats.

    In our case we’re dealing with a 200GB+ legacy vendor database that tends to scan whole tables of data prior to presenting a dialog for users to search for what they need.

    Historically, our performance point for memory has generally been closer to 70-80% of data. Lately we’ve been struggling with performance as we’ve migrated from physical to virtual and the infrastructure team don’t like the SQL Server memory hog, so they’ve halved the memory we used to have.

    Cheers
    Phil

    Reply
    • Yikes, that’s a familiar horror story – “We don’t like how much memory this uses, so we’re going to feed it less. Why is the performance worse?”

      If your tables are doing a lot of scans though, is it possible to add indexes? In the past I would have been reluctant to change anything from a vendor… but if performance is terrible, then… at the very least I’d be asking the vendor why that is and if they have a set of indexes they could apply.

      Reply
      • It continually amazes me that people who don’t know are able to make those decisions, regardless of the professional support and guidance that is available nowadays.

        We do have indexes where our main performance hits are. They only help marginally.

        We’re talking tables with a couple million rows and 150-200 columns that are either char or datetime. Most of the tables have compound primary keys across 5-6 char columns …

        And to top it off, if we make structural changes outside their custom toolkit, we have to be very careful. They keep track of all the object_ids in the database. If there is a difference, the application itself starts throwing errors, even if the part we changed was in a totally different area.

        Cheers
        Phil

        Reply
  • 4TB total databases volume (only a small fraction is active in usage in terms of current processing and reporting), with 256GB RAM/16 cores. I know this was based on throwing hardware at a poorly tuned/optimized code base. Looking at going to AWS RDS, and based on the total amount of memory by database (vs free/empty space), it’s 104GB used with 34GB empty. The RDS instance RAM sizes are quite limited when looking at 8 and 16 cores (8 has 32 and 61GB; 16 has 64 and 122GB). I’m thinking an 8 core 61GB (based on what is actually used) instance is probably sufficient (to help keep costs down and also get multi-AZ). How do you know otherwise?

    Reply
  • Well, using a similar script to what Shaun posted, and trusting in first principles/concepts, I’m down to 61GB (used that target as it is an option in RDS) from about 230GB with no discernible degradation of anything. Took larger incremental drops in size in the beginning; one was too much (from 180 to 140) and dumped everyone out. Had to increase for a bit, let things settle down, then started dropping 10GB at a time. Every once in a while PLE would nose-dive, but come back up, and the plan cache went from a few hundred MB to several GB in size, but as far as what Solarwinds shows on most panels, you can’t tell the difference in buffer cache size. In case this helps anyone.
    https://ibb.co/VDPtBtt (image of memory usage over time, graph is from a Grafana SQL Server monitoring dashboard plugin, actually has some more useful things than what Solarwinds DPA shows)

    SELECT (CASE WHEN [database_id] = 32767
                 THEN N'Resource Database'
                 ELSE DB_NAME([database_id]) END) AS [DatabaseName],
           COUNT(*) * 8 / 1024 AS [MBUsed],
           SUM(CAST([free_space_in_bytes] AS BIGINT)) / (1024 * 1024) AS [MBEmpty]
    FROM sys.dm_os_buffer_descriptors
    GROUP BY [database_id]
    ORDER BY 2 DESC;

    Reply
  • TechnoCaveman
    March 25, 2019 12:16 pm

    Psych barriers. Back in 2008, 16 gigs was a lot of memory. Now my PC has 16, and it’s easier to ask for – and easier to get – 24GB of server memory for a 200GB PeopleSoft database.
    Some storage admins think that SQL Server will use whatever memory is given to it – that SQL would use 16GB for an 8GB database. I know this is not the case.
    What is the real rule of thumb? Add memory till the performance curve flattens out?

    Reply
    • SQL Server will indeed use more memory than the data size – queries need workspace, you cache execution plans, etc.
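      To see where that memory beyond the data cache actually goes, a rough sketch against the memory clerks DMV (pages_kb exists on SQL Server 2012 and later; on older builds the column names differ):

```sql
-- Sketch: top memory consumers inside SQL Server.
-- MEMORYCLERK_SQLBUFFERPOOL is the data cache; CACHESTORE_SQLCP and
-- CACHESTORE_OBJCP hold cached execution plans.
SELECT TOP (5)
       [type] AS clerk_type,
       SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY SUM(pages_kb) DESC;
```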

      Reply
      • Techno Caveman
        March 25, 2019 3:23 pm

        SQL does need “room to think”. I guess it is my ignorance, but I’ve seen SQL take memory and give it back for small databases. (Still waiting for Santa to drop off my server with 640GB of memory to test one- and two-hundred-gigabyte databases with.)
        Anyway, with the smaller database only needing part of the data in memory, SQL left free memory. Yes, I need a picture “or it did not happen”.
        And when SQL started up, it did not suck up all available memory. One can watch SQL grow as additional data pages are loaded. Yes, additional queries will take up memory, but data is loaded 64KB at a time, so it takes bigger chunks.

        Reply
  • Does the 20% metric of memory for 100gig of data apply to a SQL Server 2014 Virtual machine in VMWare?
    Thanks for any help.

    Reply
  • TechnoCaveman
    April 30, 2019 12:20 pm

    I’ll see your answer and raise you one:
    Brent – so VM systems do not need a correction factor? My limited experience has shown network attached storage (NAS) and using the network interface card (NIC) for disk I/O and communication makes the database I/O bound. Mo disks – mo better. The disk queue may not get above 5, but disk busy time can reach 80% or more. [or this could be a local anomaly experienced by me and no one else]
    So no additional CPU or memory needed for physical to VM transition as a performance correction factor ?

    Reply
    • When you say “correction factor” – you’re overthinking it there. When you move from physical to virtual, you have all KINDS of changes: different CPU families, different memory speeds, different storage network, different storage, etc. You can’t fix all those with “correction factors.”

      Reply
  • TechnoCaveman
    May 1, 2019 7:40 am

    (hmmmm) I trust you.
    Reformatting tempdb and data disks from 8K clusters to 64K clusters yields a predictable increase in performance. More than one storage admin used the 8K Microsoft recommendation.
    Switching from HDD to SSD to NVMe also produced a predictable performance increase. (Both Azure and Oracle offer NVMe SSD drives – too busy to check AWS.)
    Hence, when using a VM, add X amount to offset NAS and the additional overhead.
    Nuff said. Too many factors for a correction factor.

    Reply
  • Very interesting. My question is: what happens when you have 20TB of data? We are working with 256GB RAM with reasonable performance. According to the rule, I should have between 1.5 and 4TB of RAM!!! Any feedback will be appreciated.

    Reply
    • Hector – sure, if you need help with your system, shoot me an email and we can talk about what a consulting engagement would look like. (Obviously you wouldn’t wanna evaluate and architect a 20TB system in a blog comment.)

      Reply
  • Just want to clarify, when you are looking at “data” size are you referring to total of all files for an instance or full server? Also “RAM” is it referring to max memory assigned or full OS?

    Reply
  • Just curious, did these percentages take into account the tempdb space as well?

    Reply
  • Luis Agustin Azario
    October 2, 2019 12:46 pm

    Brent, do you have any information about the Minimum server memory setting? I am very familiar with, and learned a lot about, MAX memory and other parameters like MAXDOP, etc., but not about the MIN memory parameter configuration.
    Thank you

    Reply
  • Luis Agustin Azario
    October 11, 2019 1:30 pm

    The thing is that one of my clients is running a very “needy,” resource-hungry SQL Server 2016. He is the developer, and he likes to go beyond his role and change parameters for his application. We changed other parameters like CTP, max memory, etc. At one of his clients, another DBA configured min memory + lock pages in memory, so I agreed to make the same change on our server, given that the other server runs a lot of processes much faster than ours.
    After changing both parameters, a process that took 3 hours went down to approximately 30 minutes.
    I am not sure if the min memory made any difference. I am almost sure that lock pages in memory did make a difference.

    Thank you

    Reply
