How to Prove Your SQL Server Needs More Memory [Video]

Open up Performance Monitor (aka Perfmon), then:

  • Windows 2008 & newer – expand Monitoring Tools, click on Performance Monitor
  • Windows 2003 & older – click System Monitor

This brings you to the graph. Click on the metrics at the bottom and click the red X at the top to remove each of them. We want to start clean.

Then click the + button to add a new counter. At the top, where it says “Select counters from computer”, put in the name of your SQL Server starting with two backslashes, like \\MYSQLSERVER and hit the Tab key. Perfmon will pause for a few seconds while it gets the list of Perfmon counters on that server, and then shows them in a list below.

Add these counters:

  • Memory – Available MBytes
  • SQLServer: Buffer Manager – Page Life Expectancy
  • SQLServer: Memory Manager – Memory Grants Pending
  • SQLServer: Memory Manager – Target Server Memory
  • SQLServer: Memory Manager – Total Server Memory
  • SQLServer: SQL Statistics – Batch Requests/sec
  • SQLServer: SQL Statistics – SQL Compilations/sec

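If you'd rather sanity-check the same numbers from a query window instead of Perfmon, most of these counters are also exposed in the `sys.dm_os_performance_counters` DMV. A rough sketch (counter names as they appear in the DMV):

```sql
-- Rough equivalent of the Perfmon counters above, read from inside SQL Server.
-- Note: the "/sec" counters here are cumulative totals since startup, not
-- per-second rates; sample twice and diff to get a rate.
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN
      (N'Page life expectancy',
       N'Memory Grants Pending',
       N'Target Server Memory (KB)',
       N'Total Server Memory (KB)',
       N'Batch Requests/sec',
       N'SQL Compilations/sec');
```

Available MBytes is an operating system counter, so it doesn't show up in the DMV – Perfmon is still the right tool for that one.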
While data starts coming into the graph, watch our 30-minute video on How to Prove Your SQL Server Needs More Memory. I explain what the counters mean and how to interpret them:

The takeaways, arranged from easy to hard:

If your server is slow and you’ve got less than 64GB of memory, learn how to frame the business costs, as I describe in the video. It’s smarter to buy $500 worth of memory than to spend days struggling with a problem and making risky changes to your server. Sure, the business is going to argue against you – that’s their job, and it’s your job to clearly explain the pros and cons of each side. You have to persuade.

If Memory Grants Pending is averaging over 0, queries can’t even start because they can’t get enough memory. You may have queries that have incorrect memory grants, or you may just desperately need more RAM.
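You can also catch grant waits in the act. This sketch uses the `sys.dm_exec_query_memory_grants` DMV – while a query is still waiting for its grant, `grant_time` stays NULL:

```sql
-- Queries currently stuck waiting for a memory grant.
-- grant_time is NULL until the grant is actually given.
SELECT session_id,
       requested_memory_kb,
       wait_time_ms,   -- how long this query has been waiting so far
       query_cost
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;
```

On a healthy server this returns zero rows most of the time; rows that linger here line up with a nonzero Memory Grants Pending counter.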

If Compiles/Sec is over 10% of Batch Requests/Sec, SQL Server may not be able to cache execution plans. This causes increased CPU use and slower queries. There are a lot of gotchas with this one, though – now the hard work starts, because you have to spend time analyzing your workload to see if the execution plans can even be reused.
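As a quick first look at that ratio, you can divide the two counters straight out of the DMV. Keep in mind both “/sec” counters store cumulative totals since startup, so this gives an overall average since the last restart, not a point-in-time rate – a sketch:

```sql
-- Rough compile-to-batch ratio since the server last started.
-- Over ~10% is the red-flag threshold discussed above.
SELECT CAST(100.0 * c.cntr_value / NULLIF(b.cntr_value, 0)
            AS DECIMAL(6, 2)) AS compile_pct_of_batches
FROM sys.dm_os_performance_counters AS b
JOIN sys.dm_os_performance_counters AS c
  ON c.[object_name] = b.[object_name]
WHERE b.counter_name = N'Batch Requests/sec'
  AND c.counter_name = N'SQL Compilations/sec';
```

For a live rate, sample both counters twice a minute apart and diff the values.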

If Page Life Expectancy is low, you may need memory to cache more data pages. In the video, I explain how to start with tuning indexes and queries first (since that’s typically a lower cost than buying more memory once you’re over 64GB). I mention these tools:
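One wrinkle worth checking: on NUMA hardware, the Buffer Manager PLE is an average across nodes, so one memory-starved node can hide behind a healthy-looking overall number. A sketch that shows both the overall value and the per-node Buffer Node values:

```sql
-- Page Life Expectancy overall (Buffer Manager) and per NUMA node (Buffer Node).
-- A single low node can be masked by a healthy-looking average.
SELECT [object_name],
       instance_name,           -- NUMA node number for the Buffer Node rows
       cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy';
```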


26 Comments

  • Great video – I watched it live but was unable to follow in PerfMon. I have found that I cannot get any results back from named instances in PerfMon. I am using the counter sets prefixed with the instance name like: “MSSQL$MyInstance”. The only column that is in the resulting csv file is the time. If I connect to the default instance, I do get values back. Any ideas? This problem is common across all servers in my company, even though I am in the local Administrators group, and have the “SysAdmin” role on the SQL Server side.

    • Hi, Drew, glad you liked the video. Troubleshooting that is a little beyond the scope of something I can do quickly in a blog post comment. You could be connected to the wrong node in the cluster, for example. Make sure that you put the server name (without the instance name) in the Perfmon box for server name, and nothing else.

  • Nice article and video – it really helps me out, and I’m looking forward to reading more. I also want to share this: previously, I used the Buffer Cache Hit Ratio (BCHR) Perfmon counter to check for memory bottlenecks. If BCHR is high, SQL Server is efficiently caching data pages in memory, reads from disk are relatively low, and so there is no memory bottleneck. Conversely, if BCHR is low, that’s supposedly a sure sign that SQL Server is under memory pressure and doing lots of physical disk reads to retrieve the required data.
    But one shouldn’t use this counter to diagnose memory bottlenecks. After reading this blog a long time back, I stopped using BCHR. It’s just another counter, and it shouldn’t be relied on as proof of a memory bottleneck.
    Anil Maharjan

  • Hi Brent.
    As always, just a quick question.
    Have you ever killed a SPID session on SQL Server, and why?
    I just killed someone’s SPID that was using a database on the C drive, where tempdb is also located.
    In just 5 minutes it took 45GB of space, and it was only going to take it all anyway. My colleague was not happy about it, but I did not want to risk running out of space on the C drive, where the OS and binary files are.
    I need to know what you think. Thank you.

  • Hello,
    After following this podcast, I realized my SQL Server needed more memory, as we were experiencing an average of 2 for Memory Grants Outstanding. We bumped up the server RAM from 24 to 32GB but everything looks the same (memory was increased 12 hours ago, maybe I need to be more patient?)

    My question is, after adding more memory to the Server itself, do I need to do anything to make SQL Server start using that additional memory? My “max server memory (MB)” is 2147483647 which I believe is the max it can be.

    Thanks for the tips and for helping me understand how to get SQL Server to start using the additional memory.
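For anyone checking the same thing, the configured ceiling and what SQL Server has actually committed can be inspected with a quick query – a sketch (2147483647 MB is indeed the “unlimited” default):

```sql
-- Check the configured max server memory (2147483647 MB = unlimited default)
-- and compare what SQL Server wants vs. what it has actually committed.
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';

SELECT counter_name, cntr_value / 1024 AS memory_mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Target Server Memory (KB)',
                       N'Total Server Memory (KB)');
```

When Total Server Memory has grown up to Target Server Memory, SQL Server has taken all the RAM it intends to use.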


    • Well, I’ll answer my own question … apparently the additional RAM on the server is automagically picked up by SQL Server when needed. sqlservr.exe was using 7GB yesterday; after the new memory it is now using 16GB, so that part of my question is answered.

      However, according to the Podcast, adding more memory should push the “Memory Grants Outstanding” to a zero average – assuming that shortage of RAM was the problem. But my memory grants outstanding stats are about the same as before I allocated more RAM.

      So now I would like to know what is meant by “You may have queries that have incorrect memory grants” from the statement “If Memory Grants Pending is averaging over 0, queries can’t even start because they can’t get enough memory. You may have queries that have incorrect memory grants, or you may just desperately need more RAM.”

      What the heck is an “incorrect memory grant”? (A Google search turned up info for “memory grants” but nothing for “incorrect memory grants”.)


      • I guess no one (not even Brent) knows what is meant by “You may have queries that have incorrect memory grants”. Is it for real?

        • This sucks, I followed this podcast and took the advice about adding more memory, but I still have Memory Grants Outstanding events that average 1 or more.

          So either this website gives bad advice, or no one at this web site knows what “incorrect memory grants are” and it’s all just a bunch of smoke and mirrors.

          I should have known better, there was no real proof in this article, just a bunch of BS. Lesson learned.

          • Aps – there are plenty of reasons why your queries can have incorrectly high memory grants. That is outside the scope of this blog post, though. I do wish we could answer everyone’s questions and take an unlimited amount of time for Q&A, but unfortunately we do have day jobs. If you’d like to contact us for personalized consulting help, feel free to click Contact at the top of this page. Thanks!

  • It was easy to find at the top of the Google search 🙂

    it just needs a small correction:

    instead of •SQLServer: SQL Statistics – Compilations/sec
    needs to be
    •SQLServer: SQL Statistics – SQL Compilations/sec

    please delete my comment after perfmon counter correction

  • I can’t find these counters in Windows Server 2012 Standard running SQL Server 2012 (SP1) – 11.0.3000.0 (Intel X86) Enterprise Edition on Windows NT 6.2 (Build 9200:) (WOW64) (Hypervisor)

  • Josh Strauss
    May 6, 2014 5:17 am


    Are you aware of any SQL Server implementation planning templates available from Microsoft? An example would be a template that asks about database size, growth, I/O, jobs, etc., which in turn outputs requirements such as recommended memory, CPU, and storage.


    • Josh – sure, there’s the Fast Track Reference Architecture and the Fast Track Data Warehouse Reference Architecture. These are appliance designs for hardware manufacturers that you can use to design your own hardware.

      • Josh Strauss
        May 6, 2014 8:26 am

        Hi Brent,

        Thanks for the prompt reply. I’ve had a look and it seems to target hardware. Is there anything that specifically looks at VMware, or could I use HP as a base and then just convert to a VMware specification?

        I’m using HP DL380s and an HP 3PAR as the storage and physical layer, then VMware as the hypervisor layer.


  • What significance does Memory Grants Outstanding have, if any?

  • Hello Brent, I am extending myself from a traditional BI developer to also wearing the hat of a DBA for performance tuning purposes.

    I tried the link above, “Our 6-hour index training videos,” however, the link does not take me to your six-hour video(s). Would you mind having a look to see if you can resolve the link problem? I actually scheduled time today to sit, watch, take notes, and learn from you. Now I am just going to have to watch the ball game, darnit! 🙂

    Anything that you can do would be of great help. Please do let us know when the link is fixed by posting an update on the page.

  • Hi Brent, when you say to compare Batch Requests/sec and Compilations/sec,
    does this mean that plans are created and stored at the batch level (1 batch – 1 query plan)?

    A batch is a set of SQL statements (DML) submitted together and executed as a group, one after the other. GO separates batches, so 3 SELECT statements (without GO) are one batch and one query plan, right?

    And why, then, does Management Studio tell you the plan size in cache for each SQL statement?

    • Nick – yep, but this is really beyond what I can do justice to quickly in a blog post comment. For further learning, check out our training classes, or pick up Grant Fritchey’s book on execution plans.
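As a starting point for digging into that yourself, you can peek at the plan cache with the DMVs – a rough sketch (each GO-separated batch that was compiled shows up as its own cached entry, and the text column shows everything that was compiled together):

```sql
-- Each cached entry corresponds to a compiled batch (objtype Adhoc/Prepared)
-- or a module (Proc, Trigger, ...); sql_text shows the whole compiled batch.
SELECT TOP (20)
       cp.objtype,        -- Adhoc, Prepared, Proc, ...
       cp.usecounts,      -- how many times this plan has been reused
       cp.size_in_bytes,
       st.text AS batch_text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;
```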

  • Joseph StClair
    April 22, 2019 9:20 am

    Brent, we are moving to the Azure cloud, and I want to properly size that database compared to our on-prem physical databases. We want to make sure to take advantage of options in the cloud to keep costs in line with what is actually needed. We are on SQL Server 2016, with a Dell R730, 2×20 cores, 768GB RAM, and 8×800GB SSDs.

