Why Core-Based Licensing Matters for Performance Tuning

If you don’t license all of your SQL Server’s CPU cores, you need to pay particular attention to your server hardware.

Say you’re using an HP DL580 with four CPU sockets, each of which has a 10-core Xeon processor plugged in. That’s a total of 40 real cores, or 80 logical processors if you’ve enabled hyperthreading.

If you’re using an upgraded license that’s capped at 20 cores, you’ll see a message like this in SQL Server’s error log at startup:

SQL Server detected 4 sockets with 10 cores per socket and 20 logical processors per socket, 80 total logical processors; using 40 logical processors based on SQL Server licensing. This is an informational message; no user action is required.
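If you want to check your own server’s log for this message, one quick way is to search the current error log. A minimal sketch using xp_readerrorlog (undocumented, but it’s been stable for years):

-- Search the current SQL Server error log for the licensing message.
-- First parameter: 0 = current log. Second parameter: 1 = SQL Server error log.
EXEC sys.xp_readerrorlog 0, 1, N'logical processors based on SQL Server licensing';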

To understand the performance implications, it helps to look at a diagram of the server’s system board:

[Figure: HP DL580 system board]

The 4-3-2-1 across the top are the four Xeon processors, each of which has 10 cores (20 logical processors with hyperthreading).

The 8-7-6-5-4-3-2-1 across the bottom are the memory banks. Memory is not shared across all CPUs – each CPU has its own local banks of memory, a critical concept in NUMA – non-uniform memory access. To learn more about NUMA, check out the virtualization module in my Senior DBA Class.

Here’s what that concept means in practice for our server – say we’ve got 1TB of total memory:

[Figure: system board color-coded to show each processor’s local memory]

Each processor gets its own local memory.

And when SQL Server says, “Hey, you’re only licensed for half of these cores, so I’m only going to turn on half of them,” it will only use the memory in half of the NUMA nodes. In this case, we’re licensed for 40 logical processors, so we have to dig into DMVs and logs to figure out which NUMA nodes got turned on, and whether our memory is even available to us. See, if memory is hooked up to a NUMA node that isn’t enabled, that memory won’t be accessible to SQL Server.
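One way to do that digging is to query the NUMA DMVs directly. Here’s a minimal sketch, assuming SQL Server 2012 or later (earlier versions split pages_kb into separate single-page and multi-page columns):

-- One row per NUMA node: is the node online, how many schedulers does it
-- have running, and how much memory is committed on it?
SELECT n.node_id,
       n.node_state_desc,
       n.online_scheduler_count,
       m.pages_kb / 1024 AS node_memory_mb
FROM sys.dm_os_nodes AS n
LEFT JOIN sys.dm_os_memory_nodes AS m
  ON n.memory_node_id = m.memory_node_id
WHERE n.node_state_desc NOT LIKE N'%DAC'; -- ignore the dedicated admin connection node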

Similar problems exist when you run Standard Edition on a server with more than 4 CPU sockets. Sounds crazy – most of us don’t use servers that large – but this situation pops up a lot when VMs are misconfigured with 8 virtual CPUs, each of which is presented to the guest as its own CPU socket.

How to Know If Your SQL Server is Affected

Run sp_Blitz® and look for the priority 50 warning of “CPU Schedulers Offline.”
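If you’d rather query it yourself, that warning boils down to schedulers SQL Server can see but isn’t allowed to use:

-- Schedulers that exist but won't run work, typically due to licensing:
-- a status of VISIBLE OFFLINE means the CPU is there, but SQL Server
-- isn't using it.
SELECT parent_node_id, scheduler_id, cpu_id, status
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE OFFLINE';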

How to Fix It

There are lots of ways to fix it, and the right solution depends on your version/edition of SQL Server, your licensing, and your hardware. Here’s the big picture:

Make sure you installed the right binaries. Believe it or not, there’s actually an install for SQL Server Enterprise Edition floating around that’s capped at just 20 cores. Even though you might have paid for more, if you run the wrong installer, sad trombone. Here’s more info about the infamous 20-core-limited installer.

One option is to license all of your sockets and cores. (Hey, listen, you’re the one who wanted a solution.) Don’t buy hardware with more cores than you actually need, because CPUs with more cores typically have slower clock speeds (often by 1/3 or 1/2) compared to CPUs with fewer cores. If you’re limited by the 16-core cap in SQL 2014 Standard Edition and prior, you could consider upgrading to SQL 2016 Standard Edition, which will allow you to license up to 24 cores. More than that, though, and you’re looking at Enterprise Edition.

Another option is to use affinity masking to tell SQL Server, “These are the specific cores I want you to use,” and manually load balance the work across all NUMA nodes. Say you’re using SQL Server 2014 Standard Edition, which is limited to 4 sockets or 16 cores, whichever comes first. In the example above, where we have 4 sockets with 10 cores each, you could design your affinity mask to use just 4 cores per socket (4 x 4 = 16), thereby staying within the 16-core limit.
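Here’s a sketch of what that could look like. The CPU numbers below are hypothetical, assuming CPUs 0-9 live on node 0, 10-19 on node 1, and so on; check sys.dm_os_schedulers to see how your cpu_ids actually map to NUMA nodes before running anything like this:

-- Use the first 4 cores of each of the 4 sockets (4 x 4 = 16 cores).
-- CPU numbering varies by hardware; verify yours in sys.dm_os_schedulers.
ALTER SERVER CONFIGURATION
SET PROCESS AFFINITY CPU = 0 TO 3, 10 TO 13, 20 TO 23, 30 TO 33;

-- To undo it and let SQL Server manage affinity again:
-- ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;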

If it’s a VM, tweak the cores/sockets configuration. Some folks accidentally configure an 8-socket, 1-core-per-socket VM, which has 8 cores, but only 4 of them are usable because Standard Edition stops at 4 sockets. Just shut your VM down, change your VM cores-per-socket settings, and boot ‘er on back up.
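From inside the guest, you can sanity-check how those CPUs are being presented. A minimal sketch; cpu_count / hyperthread_ratio is the classic way to approximate the socket count, and newer builds (SQL Server 2016 SP2 and later) also expose socket_count and cores_per_socket columns directly:

-- How many logical CPUs does SQL Server see, and how are they grouped?
SELECT cpu_count,                                -- total logical processors visible
       hyperthread_ratio,                        -- logical processors per socket
       cpu_count / hyperthread_ratio AS sockets  -- approximate socket count
FROM sys.dm_os_sys_info;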

For custom advice on which one is right for you, check out our consulting services.


38 Comments

  • Hi Brent,
    You hit an itch with this post. I have been wondering about something for a long time.
    I have an HP DL380 with two 6-core processors and 256GB RAM. I am using SQL 2008R2 Std Ed.
    What I’ve been wondering is: if I put two Standard Edition instances on the server and use affinity masking to assign each instance to a particular socket (and presumably a NUMA node), would that hinder or enhance performance?
    Unfortunately I don’t really have sufficient assets to set up a good test environment for this.
    Will you offer an opinion or advice on this?

    • Ray,

      The DL380 does not support NUMA.

    • Ray – 2008R2 Standard Edition is limited to just 64GB of memory anyway, so that’s a little bit irrelevant here. You can do instance stacking to use more resources, but generally that isn’t something I’d recommend without a test environment first.

      • Hi Brent,

        I’ve seen you discourage instance stacking in several comments, as well as live at training events. Do you have a blog post or other resource that dives into why that is?

        If not yet, and if it seems like an interesting topic for a future blog post, what I’m wondering is what kind of issues have you run into with that? Are there any ways to mitigate those problems? Why is virtualization better than instance stacking? Are there exceptions to the rule, where instance stacking might actually be the better alternative?

        Not looking for a specific answer in a comment, but to give you a bit of background of why I’m interested in the topic: Where I work, we use instance stacking extensively because we have the need to support multiple versions of SQL Server. We supply a centralized backend for about 100 COTS products, and they all introduce support for new versions of SQL Server on their own schedules. None of them require enormous resources, so generally we’ve found Standard Edition sufficient to support them. We license all the physical cores, and run the instances on the physical operating system. We’re careful to set the max memory on each instance so that they’re not gobbling up each other’s RAM, at least. We’re reluctant to virtualize, because then we’d be licensing the same physical cores for each virtual operating system environment, unless we go to enterprise+sa and license for maximum virtualization. But that gets very expensive.

        • Matt – yeah, that’s something we cover in our Senior DBA Class of 2016 (and prior years) but it’s kinda bigger than a blog post. Consider attending one of our classes for architecture and licensing guidance like that. Thanks!

  • What would you suggest looking at to find a balance between fewer/faster and more/slower cores?

  • I had to explain VMware CPU options to one of my Windows admins who was unaware of the option to set up multi-core CPUs in a VM. This changes how the guest OS sees the CPUs. Very good article.

  • So what happens when I virtualize the server? I only put one VM on it with 8 virtual cores and license SQL with 8 Enterprise core licenses. My understanding is VMware would give me full access to ALL memory and cores of the host. However, the guest only sees 8 active cores, and more importantly, I haven’t violated licensing. Has anyone looked at this?

    • Kevin – cores are different than sockets. Sockets are Xeon CPUs, and cores are the number of simultaneous “workers” inside one of those sockets. You can license as many cores as you’d like for Enterprise.

  • Hi,

    Everything you’re saying is totally correct, but I did want to throw out another option. I don’t know if it’s specific to Dell, but in our Dell servers (R720/R820) we have an option to only enable “x” cores. For example, I have an R820 with quad 8-core procs, but we’re only licensed for 16 cores, so I went into the BIOS and set each CPU to only 4 cores. That means I only use 4 cores per socket and still have access to all our memory.

    Just wanted to pass that along for others…

    • Hey Eric,

      You should talk with a Microsoft licensing specialist if you’re relying on this. I know several folks have talked to Microsoft and have been told that you have to license all processors that are physically in the machine, regardless of whether or not they are enabled in the BIOS settings. This is not clear in the published licensing guides, but I have heard this from several sources, so I can’t recommend it.

      Kendra

      • While I appreciate your feedback, I’m going to err on the side of “I’d like to see MS argue that legally”. It’s no different than licensing a vCPU in a virtual environment. I could have a 32-core VM host and a dual-vCPU SQL guest. I only need to license the vCPUs, not the physical ones (if I don’t want to run an unlimited number of SQL VMs, that is…). When you disable the cores in the BIOS, the OS can’t see / use the cores and thus, SQL can’t see / use the cores.

        IMHO, I think it would be worth you guys going out and doing a bit more “official” research (I’m sure you guys have some licensing contacts) and maybe writing a follow-up blog post.

        If I’m correct, you could make a much more future-proof SQL Server purchase. Adding a few extra cores to the cost is nothing compared to needing to purchase an entirely new SQL Server. Not to mention, deciding to go from 8 cores to 12 cores would be as simple as shutting down, going into the BIOS, and presenting 3 cores per socket instead of 2. That not only makes for a cheaper, future-proof solution, but one that is also much quicker to upgrade.

        Then again, I would argue why not just virtualize it and be done with this nonsense. VMware would allow me to have dual vCPUs and span NUMA nodes for memory if I really wanted. Just another win for virtual over physical.

        • Eric – that’s an interesting choice. Most folks would rather not get into a legal argument with MS, but if that’s your take, I understand. I’d rather not pick that fight myself, but good luck to you! It’d be great to read a blog post from you about that process.

          • I’m pretty used to the taste of shoe by now. I checked in with MS and they do, as you’ve stated, require all physical cores to be licensed even if they’re disabled in the BIOS.

            Just another reason in my book to virtualize the server, since we’re now going to basically waste two 8-core procs and 384GB of RAM. Fortunately we should be able to rip out the processor daughterboard; now I just need to see which PCI slots will be disabled.

          • Brent and Kendra are spot-on.

            I had this discussion with a local MS license rep with regard to licensing half of an HP SuperDome with ENT, as we’ll only use half the cores on guest VMs, and was informed this is not the way it works: all cores need licenses.

            The reason I mention the SuperDome is that it has CPU assignment/limiting technology built in, which wasn’t deemed good enough even in MS’s eyes.

        • Intel creates a line of processors with the same number of cores, then they disable cores for those that are a lesser model. So technically, the cores are present but are disabled. Therefore, if Intel can disable them, then why shouldn’t I be able to disable them? Microsoft did not say anything about “disabled” cores, so assumptions have to be made. Since Intel does it, and Microsoft accepts it, why shouldn’t I believe that I can do the same?

  • Am I right that this only affects upgraded core-licensed systems, where I will have the 20-core cap?

    If I buy new dedicated hardware today and want to use SQL Server 2014 Enterprise, then I have to license all cores visible to the OSE.
    And if I run a VM and the bare metal isn’t licensed at all, then I have to pay for all v-cores that are visible to the virtual OSE.
    So you can say: What you see is what you have to pay for.
    That is my interpretation from the SQL Server 2014 licensing guide (page 5 and 9).

    Regards
    Dirk

  • I’m a little bit confused. The article starts off by saying, “If you don’t license all of your SQL Server’s CPU cores […],” which implies it is possible to license only some of your physical server’s CPU cores. But then in the comments Kendra, Brent, and EricCSinger seem to conclude that you MUST license all of your cores even if you’re not using them.

    This is relevant for me because I’d like to run two instances of SQL Server 2012 Enterprise on one physical server that has 16 cores. My plan is to use affinity masking to tell one instance to only use 8 cores and half the RAM. The other instance will also use affinity masking to use the other 8 cores and the other half of the RAM. The question for me is, in this scenario do I need two sets of 16-core licenses, or will two sets of 8-core licenses cover me?

    • Kendra Little
      January 8, 2015 11:31 am

      Hi Elijah,

      It’s totally confusing. The article is basically saying you might not have all the cores licensed, and it usually happens by accident. It’s usually one of two ways:

      * A virtual machine running SQL Server Standard Edition which is configured with more than 4 sockets (something like 6 vCPUs configured as six sockets with one core each)
      * A special SQL Server 2012 Enterprise Edition CAL based license that’s limited to a specific core count (was available with upgrades for some customers with Software Assurance). Sometimes people download this edition without realizing it.

      If you think you might fall into either of these categories, you can run our free sp_Blitz script and look for its diagnosis that some schedulers are offline. https://www.brentozar.com/blitz/schedulers-disabled/

      For your question about affinity masking, I would not recommend doing that if performance is important (and it usually is). If you need to run multiple SQL Servers on the same host, virtualization is a much better bet than instance stacking.

      Hope this helps!
      Kendra

      • Thanks Kendra. I still wanted to get my exact question answered of whether I needed two sets of 8-core or two sets of 16-core licenses. I followed your advice from https://www.brentozar.com/archive/2014/12/ask-microsoft-sql-server-licensing-question/ and did a chat with MS. They said I need two sets of 16-core licenses. Here’s the transcript:

        Tiffany P: Hello, how may I assist you today?
        Elijah W. Gagne: Hi Tiffany
        Elijah W. Gagne: I’d like to run two instances of SQL Server 2012 Enterprise on one physical server that has 16 cores. My plan is to use affinity masking to tell one instance to only use 8 cores and half the RAM. The other instance will also use affinity masking to use the other 8 cores and other half of RAM. The question for me is, in this scenario do I need two sets of 16-core license or will two sets of 8-core licenses cover me?
        Tiffany P: Good Morning Elijah. I will be more than happy to assist you.
        Tiffany P: Give me a moment to check this.
        Tiffany P: After researching this, you will need two sets of 16-core licenses.
        Elijah W. Gagne: Thanks! Have a good day.

        • Ben Pomicter
          May 4, 2015 9:20 am

          Actually, both of those options are incorrect. You don’t need two 8-core licenses or two 16-core licenses; what you need is one 16-core license [and note that it’s physical cores; if the machine you describe actually has two 4-core CPUs with hyperthreading, then you need one 8-core license].
          What you’re licensing is the physical server (OSE), not the instance(s). You can then install as many instances as you want, and this is unaffected by affinity masking.
          “For each server to which you have assigned the required number of licenses as provided in the “Physical Cores of a Server” section above, you may run on the licensed server any number of instances of the server software in one physical and any number of virtual OSEs”

  • Here’s a strange one for you. We just upgraded from SQL Server 2012 Standard to SQL Server 2014 Enterprise. The box we are running on has 4 CPUs of 6 cores each. SQL Server will only let me use 20 cores. I was under the impression that Enterprise Edition would use all the CPUs in the box…

    • Joseph – read the above article carefully. The answers you seek are hidden in the text.

      • If you upgrade and are only using 20 cores in Enterprise Edition, you installed the wrong bits. There are two in VL: the Core edition (core licensing) and the regular (Server-CAL) version. Server-CAL, per our documents, limits EE to 20 cores. This is only available to enterprises upgrading Server-CAL or that still have it on their contract. Do a SKU upgrade and use the core bits. An INI file change might have to occur to make this work.

        • George – just wanna clarify your wording there a little. If you upgrade and you are using Server+Cal licensing, and you’re limited to 20 cores, you are indeed using the correct bits.

          • I agree that upgrading Server-CAL to Server-CAL is valid for EE. I also have had many customers inadvertently download the “Server-CAL” bits when trying to upgrade or do a new install of SQL 2012 EE or later. Suddenly their 32-core box is stuck at 20 cores, and it’s not always obvious, and they are licensing by core.

  • Toby Geffryes
    January 6, 2017 9:58 am

    Hi Brent, regarding the memory available, your information may be incorrect. I have recently worked on an upgrade from SQL 2008 R2 EE to SQL 2014 EE (Server/CAL) on a BL460 with 4 CPU sockets, each with 10 cores, and 512GB memory.
    In both instances the max server memory was set to 460GB. In the SQL 2008 R2 instance 4 NUMA nodes were visible (000, 001, 002, 003), and on the SQL 2014 instance there were only 2 (000 and 003).
    On running a reindex of the main database, memory usage initially increased on node 000 up to 128GB. Utilisation on node 003 then increased to about 100GB before usage switched back to node 000.
    At the end of the reindexing, the buffer pool usage was 220,421 MB on node 000 and 223,425 MB on node 003.
    I know this is an old post but before the upgrade I was looking for any information I could find as I had the same concerns.

    • Toby – I can’t really troubleshoot this in a blog post comment, but I can verify that I’ve seen it repeatedly in the wild (and MS has confirmed). The short answer there is that you should be seeing all of your NUMA nodes, not just 2.

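For readers who want to check this on their own servers, here’s one way to see how the buffer pool is spread across NUMA nodes. A sketch only; scanning sys.dm_os_buffer_descriptors can be slow on servers with a lot of memory:

-- Buffer pool pages grouped by NUMA node. Nodes that never show up here
-- may be offline.
SELECT numa_node,
       COUNT(*) AS page_count,
       COUNT(*) * 8 / 1024 AS buffer_mb   -- pages are 8KB each
FROM sys.dm_os_buffer_descriptors
GROUP BY numa_node
ORDER BY numa_node;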
  • Hi Brent,
    I have a 2014 Enterprise on a Datacenter OS. It’s a VM with 3 sockets at 10 cores each. Why in the world would I get this message: “SQL Server detected 3 sockets with 10 cores per socket and 10 logical processors per socket, 30 total logical processors; using 20 logical processors based on SQL Server licensing. This is an informational message; no user action is required.” If it’s the dreaded 20-core binary installation, how could I tell? If not, what are your thoughts?

    • Nicholas — your answer is right there in the message you pasted:

      using 20 logical processors based on SQL Server licensing

      Time to SKU upgrade, buddy!

    • It sounds like you have the Server/CAL-licensed EE. That one is indeed limited to 20 cores. If you paid for 30 cores of core-based licensing of EE ($225,000), then you might just need to reinstall with a different PID. If you have TechNet or MSDN, you can download the media from there.

  • Great explanation! I have a similar setup: we use Enterprise core-based licensing with 4 sockets, 20 cores, and HT enabled, making a total of 80 logical processors available, with 1TB of memory in the server. Max server memory is 656GB, with NUMA nodes 0, 1, 2, and 3 having 164GB each. But why do I still see average free node memory differ across the NUMA nodes, i.e., node 0 having 128MB free, node 1 with 280MB free, node 2 with 122MB, and node 3 with 30MB free? Also, the PLE across all 4 nodes varies on average. Does that mean something is wrongly configured with memory, or is this how it’s supposed to work?

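One way to see per-node PLE like this commenter describes is through the Buffer Node perfmon counters, which SQL Server exposes in a DMV. A minimal sketch:

-- Page life expectancy per NUMA node. The Buffer Node object has one
-- instance per node; instance_name is the node number (e.g. '000').
SELECT object_name, instance_name AS node, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE N'Page life expectancy%'
  AND object_name LIKE N'%Buffer Node%';

Some variance across nodes is normal; it’s the big, sustained differences that are worth investigating.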
