How to Tell If Your SQL Server Has Too Much Memory

Sounds impossible, right? The saying goes that you can never be too rich or too thin or have too much memory.

However, there is one good indication that your SQL Server is probably overprovisioned, and to explain it, I need to cover 3 metrics.

1. Max Server Memory is set at the instance level: right-click on your SQL Server name in SSMS, click Properties, Memory, and it’s “Maximum server memory.” This is how much memory you’re willing to let the engine use. (The rocket surgeons in the audience are desperate for the chance to raise their hands to point out different things that are or aren’t included in max memory – hold that thought. That’s a different blog post.)
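
If you’d rather check it with a query than click through SSMS, here’s a minimal sketch that reads the same setting from sys.configurations (values are in MB):

-- Read the configured and running values of max server memory.
SELECT name,
       value        AS configured_value_mb,
       value_in_use AS running_value_mb
FROM sys.configurations
WHERE name = N'max server memory (MB)';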

2. Target Server Memory is how much memory the engine is willing to use. You can track this with the Perfmon counter SQLServer:Memory Manager – Target Server Memory (KB).

Generally, when you start up SQL Server, the target is set at your max. However, SQL Server doesn’t allocate all of that memory by default. (Again, the experts wanna raise hands and prove how much they know – zip it.)

3. Total Server Memory is roughly how much the engine is actually using. (Neckbeards – seriously – zip it until the end of the post. I’m not playing.) After startup, SQL Server will gradually use more and more memory as your queries require it. Scan a big index? SQL Server will start caching as much of that index as it can in memory. You’ll see this number increase over time until – generally speaking – it matches target server memory.
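
If you’d rather pull both counters with T-SQL instead of Perfmon, here’s a minimal sketch against sys.dm_os_performance_counters (both counters report in KB):

-- Compare Target Server Memory to Total Server Memory.
-- cntr_value is in KB; dividing by 1024 twice converts to GB.
SELECT counter_name,
       cntr_value / 1024.0 / 1024.0 AS value_gb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE N'%Memory Manager%'
  AND counter_name IN (N'Target Server Memory (KB)', N'Total Server Memory (KB)');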

What if Total Server Memory doesn’t go up?

Say you have a SQL Server with:

  • 64GB memory
  • Max memory set to 60GB
  • 1GB of total database size (just to pick an extreme example)
  • Infrequent query workloads (we don’t have hundreds of users trying to sort the database’s biggest table simultaneously, or do Cartesian joins)

SQL Server might just not ever need the memory.

And in a situation like this, after a restart, you’ll see Total Server Memory go up to 2-3GB and call it a day. It never rises to 10GB, let alone 60GB. That means this SQL Server just has more memory than it needs.

Here’s an example of a server that was restarted several days ago, and still hasn’t used 4GB of its 85GB max memory setting. Here, I’m not showing max memory – just the OS in the VM, and target and total:

[Chart: OS memory in the VM, Target Server Memory, and Total Server Memory over time]

In most cases, it’s not quite that black-and-white, but you can still use the speed at which Total Server Memory rises after a reboot to get a rough indication of how badly (and quickly) SQL Server needs that memory. If it goes up to Target Server Memory within a couple of hours, yep, you want that memory bad. But if it takes days? Maybe memory isn’t this server’s biggest challenge.
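
If you want to eyeball that on your own server, here’s a rough sketch that pairs those two counters with the instance start time from sys.dm_os_sys_info, so you can see uptime next to how close total is to target:

-- Uptime vs. how close Total Server Memory is to Target Server Memory.
-- A server that has been up for days but still sits far below target
-- may be overprovisioned; interpret with the caveats above.
WITH mem AS (
    SELECT MAX(CASE WHEN counter_name = N'Target Server Memory (KB)'
                    THEN cntr_value END) AS target_kb,
           MAX(CASE WHEN counter_name = N'Total Server Memory (KB)'
                    THEN cntr_value END) AS total_kb
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE N'%Memory Manager%'
)
SELECT si.sqlserver_start_time,
       DATEDIFF(HOUR, si.sqlserver_start_time, SYSDATETIME()) AS uptime_hours,
       mem.target_kb / 1024.0 / 1024.0 AS target_gb,
       mem.total_kb  / 1024.0 / 1024.0 AS total_gb,
       CAST(100.0 * mem.total_kb / mem.target_kb AS decimal(5, 1)) AS pct_of_target
FROM sys.dm_os_sys_info AS si
CROSS JOIN mem;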

Exceptions obviously apply – for example, you might have an accounting server that only sees its busiest activity during financial close periods. Its memory needs might be very strong then, but not such a big deal the rest of the month.

How we watch that in SQL ConstantCare®

In our SQL ConstantCare® mentoring, we build automated rules for things we think are no-brainers. For example, it’s easy to find good starting point advice on how you should configure max memory.

For more advanced things, we track metrics over time and give manual advice instead. This is definitely one of those manual issues because there simply aren’t hard-and-fast industry-wide best practices yet. We’re building them as we go.

So for issues like this, we let the admins know as an FYI: if you want to save some money with a smaller cloud VM, or move some of this memory over to a different virtual machine that needs it more desperately, here’s something to consider.


15 Comments

  • This is an interesting PoV. I am a DBA and SE. I have a Master’s in Database System Design, and I do agree you can have too much memory, aka un-needed. We have an app which creates databases, tables, and stored procedures. Weird thing is the memory maxes out at 70%, even though we have it set for more and left less to the OS. Any ideas why?

  • martyn downs
    May 29, 2018 7:43 am

    Those poor neckbeards.

  • Paul Maguire
    June 8, 2018 5:59 am

    One thing we always look for is memory over-allocation at the host in a virtualised environment, just to be sure. This can produce a similar-looking set of counters. Take a similar setup where you have a VM with 64GB allocated and SQL allowed, say, 54GB, with large busy databases whose working data set is much larger than 54GB. The OS “thinks” it has 64GB, and Target Server Memory in SQL sits at 54GB, but while running, Total Server Memory stays around 40GB (for example). You might think SQL doesn’t need it, but the VM hypervisor could be limiting the amount of actual memory, or worse, asking the VM to release memory back to the hypervisor…

  • I once worked with a production database that had a Page Life Expectancy measured in weeks 🙂 Wish I could have that again.

  • My server is running SQL 2016 Standard Edition, which, as we know, is limited to 128GB for the buffer pool memory. However, the ‘Total Server Memory’ counter returns a little over 194GB.

    I’m assuming all that memory above the 128GB limit is the rest of SQL Server’s caches (procedure cache, thread stacks, backup buffers, etc.). Does that sound reasonable? It seems like a huge amount (nearly 70GB!). If I run the following to gather the size of the plan cache:

    SELECT SUM(CAST(size_in_bytes AS BIGINT)) AS size_in_bytes
    FROM sys.dm_exec_cached_plans

    …it returns around 12GB.

  • Is there actually a downside (performance hit) to having unused RAM?

    Thanks for the info on how to monitor this.

  • Bill Kline
    May 2, 2019 9:58 am

    Hi Brent, we have an instance configured with a max server memory of 46080 MB. Yet when I run the query against sys.dm_os_performance_counters to get the target server memory, it returns 26857952. Dividing that by 1024 yields 26228 MB. Shouldn’t the target generally equal the max configured value?

    The query for Total Server Memory returns 25498832, which equates to 24901 MB. The server itself has 64 GB of memory.

    • Bill – generally, yep, unless you’re under memory pressure. Troubleshooting that is beyond the scope of what I can do quickly for free in a blog post comment though.

  • Hi Brent, we have a production SQL 2016 Always On cluster with a total of 62GB of RAM, 6 instances, and 95 databases in total. Now the server architect is saying they can’t see any RAM usage on SQL Server, so the RAM needs to be reduced. Is there any way I can get one month of RAM usage per SQL Server instance and give that report to them?

  • Hi Brent

    I’m not a SQL expert and I would like to know the SQL query you ran to generate the table above.

    Thanks

    • Thomas – that’s from a monitoring system that collects metrics daily. It’s not stored in SQL Server’s dynamic management views over time, unfortunately.

