Should I Install Multiple Instances of SQL Server?

Marina Towers in Chicago

Instance stacking is the technique of installing multiple instances of SQL Server on the same Windows instance. For example, you might have a VM or server named SQLPROD1, and have:

  • SQLPROD1 – default instance of SQL Server, say SQL 2016
  • SQLPROD1\SQL2014 – for our older apps
  • SQLPROD1\SharePoint – because it’s supposed to be on its own instance
  • SQLPROD1\development – for our QA & testing
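
As a side note, clients reach these through the instance name in the connection string – a named instance is addressed as SERVER\INSTANCE, the default instance by the server name alone. A quick sketch (the helper function and database names are illustrations, not anything from the post):

```python
def connection_string(server, instance=None, database="master"):
    """Build an ODBC-style connection string for a default or named instance."""
    # A named instance is addressed as SERVER\INSTANCE; the default
    # instance is addressed by the server name alone.
    target = f"{server}\\{instance}" if instance else server
    return f"Server={target};Database={database};Trusted_Connection=yes;"

print(connection_string("SQLPROD1"))             # default instance
print(connection_string("SQLPROD1", "SQL2014"))  # named instance
```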

The Benefits of Instance Stacking

Lower SQL licensing costs – you only have to pay for one license, and even Standard Edition lets you install dozens of instances (SQL Server supports up to 50 per computer) on the same Windows base.

Lower Windows licensing costs – you only have to pay for one Windows.

Easier Windows patching – since you only have to patch one OS install.

The Drawbacks of Instance Stacking

Much harder performance tuning – all of the instances share the same CPU, memory, network pipe, and storage pipe. While SQL Server does offer tricks like affinity masking and max memory settings to alleviate the first two, it has no answers for the last two. A backup on one instance will knock out performance on the other instances regardless of how much tuning work you put in. If none of the instances are performance-sensitive, this doesn’t matter – but how often does that happen? And how do you figure out the “right” memory or CPU settings? It takes so much human work and experimentation that it really only makes sense when your DBAs have plenty of free time per server.
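
For illustration only – the post doesn’t prescribe a formula – here is one way you might sketch that memory-splitting experiment: reserve headroom for the OS, then split the rest across instances by a weight you assign. The 8 GB reservation and the weights are assumptions, not recommendations:

```python
def split_max_memory(total_gb, instance_weights, os_reserve_gb=8):
    """Divide a server's RAM among stacked instances proportionally to a
    weight per instance, after reserving some headroom for Windows itself."""
    available = total_gb - os_reserve_gb
    if available <= 0:
        raise ValueError("not enough RAM left after the OS reservation")
    total_weight = sum(instance_weights.values())
    return {name: round(available * weight / total_weight, 1)
            for name, weight in instance_weights.items()}

# 128 GB box; production gets twice the weight of the other two instances
print(split_max_memory(128, {"MSSQLSERVER": 2, "SQL2014": 1, "development": 1}))
# {'MSSQLSERVER': 60.0, 'SQL2014': 30.0, 'development': 30.0}
```

Even then, this only divides memory – it does nothing about the shared network and storage pipes, which is the point above.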

Much harder reboot planning – you have to get all of the customers on all of the instances to agree on a time to patch Windows.

Security challenges – sometimes, we get those awful folks who insist on being able to RDP into the Windows instance that hosts their databases. If they insist on being sysadmin on the box altogether, then they can make changes that wreak havoc on the other running instances.

The Alternative: Virtualization

Whenever you think about carving up a single server into smaller pieces, think virtualization instead. It’s a great default place for new SQL Servers.

Every SQL Server deserves its own Windows instance. Yes, this does mean higher licensing costs – you’ll need to license SQL Server Enterprise Edition at the hardware host level – but then you can pack as many VMs as you like into the host.

Then, each VM gets its own performance management, patch schedule, and security. Plus, surprise bonus: every VM, even the tiniest one, gets all of the features of Enterprise Edition.


40 Comments

  • Interesting. Do you have to install SQL to license the host? I.e., can we license an ESXi host?

    • Virtualization licensing is a little outside of the scope of this post, but yes, you can license an entire VMware ESXi host and run a bunch of guests on it. (Windows licensing is separate though.)

      • (Clarification: that requires SQL Server Enterprise Edition)

        • Further clarification for 2017 Enterprise: You can run as many SQL Server VMs *as you have core licenses.*

          I.e., if you have a 6-core Enterprise license that covers your entire host, you are limited to running 6 SQL Server VMs (or containers).

          • Unless you have Software Assurance, in which case you can run as many as you want:

            “With the addition of Software Assurance (SA) coverage on all Enterprise Edition core licenses (for a fully licensed server), customers’ use rights are expanded to allow any number of instances of the software to run in any number of OSEs (physical or virtual). This valuable SA benefit enables customers to deploy an unlimited number of VMs to handle dynamic workloads and fully utilize hardware computing capacity.”

            -SQL Server 2017 Licensing Guide p.18, Licensing for Maximum Virtualization

  • 100% agree on virtualization – can’t wait till containerization reaches the same point for SQL Server.

  • meh :). My systems push various performance limits over time, as our developers release new app versions and new db versions (~6/yr, and we are moving to continuous deployments). For me, VMs mean their performance is an additional worry, for the same reasons that having multiple instances on the same HW is a worry (and something I will never do). And now that my VMs are also in the cloud, finding those performance constraints is even more of a nebulous exercise. My only solace is that my environment is not all vaporware – there is some hardware, somewhere, doing something, hopefully for me.

  • Please clarify “every VM, even the tiniest ones, get all of the features of Enterprise Edition.”
    Are you talking about EE for SQL or the Windows OS?
    Is this true for any way of getting a VM – Hyper-V, VMware, Azure VM, …?

    • Noah Engelberth
      December 8, 2016 12:34 pm

      He’s talking about EE for SQL. Though for VMWare or Hyper-V hosts, a similar functionality exists with the Datacenter Edition licensing (taken from the Windows Server 2016 licensing datasheet – http://download.microsoft.com/download/7/2/9/7290EA05-DC56-4BED-9400-138C5701F174/WS2016LicensingDatasheet.pdf)

      – Standard Edition provides rights for up to two OSEs (operating system environments – in other words, VM guests) or Hyper-V containers when all physical cores in the server are licensed. For every two additional VMs, all cores in the server must be licensed again.
      – A minimum of 16 core licenses is required for each server (regardless of the actual number of cores)
      – A minimum of 8 core licenses is required for each physical processor (e.g. a 4-socket server with 4 cores per socket would require 32 core licenses, not just the server minimum of 16).
      – Datacenter Edition includes an unlimited number of OSEs or Hyper-V containers, once the physical cores in the server have been licensed per the above guidelines.

      Using Windows Hyper-V on the physical host doesn’t consume any licensing above what’s required for the guests, so the licensing rules are basically agnostic to whether you’re using Hyper-V or VMWare. If you’re running 1-2 Windows Server 2016 Standard VMs on a physical machine, you need to license every core on the physical machine (subject to minimums of 8 cores per physical processor and 2 physical processors / 16 cores total). If you’re running 3-4 Windows Server 2016 Standard VMs on a physical machine, you need 2 Standard Edition licenses per core (subject to the same minimums). And so forth. If you are running any number of Windows Server 2016 Datacenter VMs, once you’ve licensed all the physical cores on the server you’re good to go (again, subject to minimum core licensing).

      In a cluster of hosts, hopefully you’re running Datacenter to keep the licensing headache sane (and to get the additional features for Datacenter that make Hyper-V clustering better, if you’re using Hyper-V). Otherwise, you should probably talk to a Microsoft licensing professional to figure out what the Standard Edition licensing requirements will be.
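
The Standard Edition rules quoted above can be turned into a quick back-of-the-envelope calculator. This is just a sketch of the datasheet rules as described in this comment, not official licensing guidance:

```python
import math

def ws2016_standard_core_licenses(sockets, cores_per_socket, vm_count):
    """Core licenses needed to run vm_count Windows Server 2016 Standard
    guests on one host: license every physical core (minimum 8 per
    processor, minimum 16 per server), and relicense all cores again for
    every 2 additional VMs."""
    per_socket = max(8, cores_per_socket)      # 8-core minimum per processor
    per_pass = max(16, sockets * per_socket)   # 16-core minimum per server
    passes = max(1, math.ceil(vm_count / 2))   # each pass covers 2 OSEs
    return per_pass * passes

print(ws2016_standard_core_licenses(4, 4, 2))  # 32 – the datasheet example
print(ws2016_standard_core_licenses(2, 8, 4))  # 32 – two passes of 16
```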

  • Interesting – I haven’t seen much discussion about stacking SQL instances. In our situation, we put anything that requires performance onto its own VM, usually as a default SQL instance. But we make the rest of them named instances and stack them together. Each Business Unit has test VMs and prod VMs, etc. That way, if they reboot or stomp something, they are only hurting themselves – not the other Business Units.

    By using CName aliases for the server name portion of the instance name and handing that out as the connection string, we upgrade and/or move instances around, quite transparently to the end users. Because we have so many instances and VMs, moving them around easily is important to us.

  • Could you please elaborate on The Alternative: Virtualization?
    Are you saying that the Enterprise Edition actually needs to be installed on the host?
    And then any Enterprise Editions installed separately on each of the VMs on that host would be covered by the single licensed copy that was installed on the host?

  • For the virtualization questions: with SQL EE, you can license the physical cores of a server. Once you license all physical cores in a given physical server, you can then run an unlimited number of SQL VMs on top of it. You don’t install SQL onto the physical server, only the VMs, but you license the physical server.

    As for the comment about performance, that’s environment/admin specific. VMware is not the issue in those cases; it’s the admins of those systems that are.

  • We use VMs, but for HA, we’ve taken your concerns about Availability Groups to heart and have chosen clustering. You also warn about clustering in the virtual world, so we’ve stuck to physical hardware for our HA solutions, even though we stack because of your pro-stacking reasons. To me, it all depends on the goal, so “it depends” more than not.

  • Instance stacking is working great for us! To make stacking better, we do the following:
    1. Only the same versions of SQL Server – never mixed, like 2008 R2 and 2016
    2. Properly allocate memory (MAX and MIN settings) and CPUs per instance
    3. Keep them on the same patching levels
    4. Assign different disks to each instance

    By watching these 4 things, you will find that instance stacking can work great. Just remember that there are many ways to skin SQL Server, so pick the best one for your needs – be it instance stacking, virtualization, and/or cloud.

  • Chris Fournier
    December 8, 2016 1:44 pm

    In years past I always liked stand alone or clustered default instances, but in my current role I’ve embraced instance stacking.

    I currently manage SQL clusters, typically 3 node, that support 15 to 20 SQL instances each and are configured in an A/A/A or A/A/P type of approach. They work great for HA and other needs (such as GoldenGate replication from Oracle). I have a mix of instances from 2008 R2 to 2016 and haven’t had any conflicts. Each SQL instance gets its own IP (so port 1433 for each), its own drive letter, and however many mount points it needs. The physical nodes are fairly burly with 28 cores / 512GB RAM, so balancing them out for performance has been simple. Just be sure to set max memory as Rudy mentioned in the above post. To stave off any potential network contention, each node is multi-homed with 10Gb NICs for the prod network and the backup network, so each SQL instance listens on two IPs and keeps traffic segregated. Also, each node has multiple HBAs for SAN traffic.

    The servers are licensed for EE, so they are expensive upfront, but just adding additional instances has made that investment worth it for us.

    • Chris – just a quick sanity check: 28 cores per server * $7k each licensing = $196k per server * 3 nodes = $588k just for licensing.

      I sure hope they’re working out well. I hope someday I can afford a house that nice. Wow.
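
      That back-of-the-envelope math, sketched out (the $7k/core figure is the round number used above, not an official price):

      ```python
      def sql_ee_license_cost(cores_per_node, nodes, price_per_core=7_000):
          """Rough SQL Server Enterprise Edition licensing cost: every core
          on every licensed node, at a round per-core price."""
          return cores_per_node * price_per_core * nodes

      print(sql_ee_license_cost(28, 3))  # 588000 – 28 cores x $7k x 3 nodes
      ```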

      • Chris Fournier
        December 8, 2016 2:19 pm

        Brent,
        Yeah, these servers were definitely expensive to license up front – we’re actually licensed for A/A/P so each cluster was closer to 400k to license (with SA for “mobility”).

        They work really well, and it’s quite a shift from my previous role, where companies would scrape pennies and balk at Standard Edition pricing! I feel spoiled now.

        • We have a similar setup, but only with 2 failover nodes. We stack instances (only the same server version). Each SQL version goes into its own VM. However, we ended up licensing the VMs with Standard, since it would have been way more expensive to license the host with Enterprise. Our VM admin only allows us to overprovision the CPU by 50%.

  • We’ve been running SQL on VMware for about 5 years now and it’s worked extremely well for us. The things I love the most are the scalability (adding more CPU or RAM only takes minutes), and hardware upgrades/replacements become almost invisible.

    • We’re completely virtual as well. We have two VM servers running 16 cores per blade, which requires 16 license packs for the total of 32 cores (2 blades x 16 cores). License packs are sold as 2 cores per pack. The unlimited virtualization allows us to move SQL VMs between the two licensed blades… provided you’ve got Software Assurance.

      I have a couple of SQL Servers in this environment which have multiple instances (2008 R2, 2012, 2014). Memory management can get a bit confusing when you try to allocate X amount of your memory to instances x, y, and z, like Rudy mentions above. I would love to run some clustering, but our SysOps folks can’t support the OS clustering at this time. That would sure make it easier when you need to patch.

  • Don’t have a problem with stacking, but with certain considerations:
    – same application per instance, i.e., we won’t mix applications’ DBs on the same instance. So we have multiple sites, each with its own DB in a separate instance.
    – management of jobs and workloads – not a problem if you put the time in initially
    – IO: follow best practices
    – memory: appropriately divided out
    – CPU: you need the appropriate number of cores to meet your needs
    – do not mix SQL versions/editions/SP/CU levels
    – do not mix development/test with production
    (all operational DBA stuff here…)

    Outages to multiple instances are not a concern. You are running HA, right? What outages?

    Separating into distinct VMs may give you more fine-grained control, but the extra overhead and cost of more OSes should also be considered.

    This is with EE, not SE – with SE, I do not see the benefit.

  • Is it another way of getting around the 128 GB limit on Standard Edition of SQL Server?

  • Donald Kennedy
    December 9, 2016 4:34 am

    We’ve historically followed a similar pattern to some other commenters – stacked clustered instances. The main reason was a difficulty with obtaining the capital to buy dedicated hardware, and some limitations on physical capacity in our data centres.

    We’re now moving away from this towards the sort of virtualisation that Brent advises. Not because we think Brent is wonderful and knows everything (though of course he does!), but for 2 reasons:
    1) virtualisation of SQL instances is no longer looked on as the dangerous rantings of a madman, but is much more mainstream and “trusted” as a technology choice now
    2) because a number of our apps are moving towards things like Availability Groups, where there’s a clustering requirement to support SQL functionality – which makes incorporating failover clusters into the same cluster “tricky” at best.

  • Phew… glad to see at least some people use stacking. For some time I thought we were doing it wrong.
    We have a few 2-node clusters, each running more than a dozen named instances, some of them quite heavily used. We use mount points for each instance to avoid running out of drive letters (only the root has a drive letter and hosts 3 mount points for data, log, and tempdb). Since we moved the disks to XtremIO recently, the “bottleneck” shifted towards the CPUs, but still no real issues.
    We also use a dedicated backup network and load-balanced NICs for production traffic.
    We once ran the instances each with dedicated CPUs using SQL’s affinity settings, but that didn’t really make a difference, so we reverted the setting to the default.
    Recently enabled hyperthreading. No issues… yet 🙂

  • I didn’t see any mention of how the licensing differs for Azure VMs. If we bring our own SQL EE license (BYOL) for an Azure VM, does that mean we lose the ability and benefits of licensing the physical host?

  • Our VAR is suggesting that with Standard Edition core-based licensing, you are only entitled to one SQL Server instance per physical node. Each license requires a minimum of 4 cores.

    I referenced the SQL Server 2016 Licensing guide, which isn’t very clear in this respect. I contacted the Microsoft Licensing centre and they concur with our VAR. This didn’t sit right with me.

    Buried in a link in the licensing document is the ‘Commercial Licensing brief: Introduction to Per Core Licensing and Basic Definitions’, where on p. 6 it states:

    “With the Standard Edition, for each server to which you have assigned the required number of licenses, you can run on the licensed server any number of instances of the server software in the physical OSE.”.

    I’m attempting to get MS to make the multiple Instances per node statement clear in the source Licensing document.
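
    The per-core Standard Edition math mentioned above – a 4-core minimum, sold in 2-core packs – can be sketched as a quick calculator. This is illustration only, not licensing advice:

    ```python
    import math

    def se_core_licenses(physical_cores, min_cores=4, pack_size=2):
        """Per-core Standard Edition: license every physical core, subject
        to a 4-core minimum, purchased in 2-core packs. Returns the number
        of licensed cores and the number of packs to buy."""
        cores = max(min_cores, physical_cores)
        packs = math.ceil(cores / pack_size)
        return cores, packs

    print(se_core_licenses(2))   # (4, 2) – the 4-core minimum kicks in
    print(se_core_licenses(10))  # (10, 5)
    ```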

  • Can I have two default instances using two versions? For example (using example above):
    SQLPROD1 – default instance of SQL Server, say SQL 2014 (MSSQLSERVER)
    SQLPROD1 – default instance of SQL Server, say SQL 2008 R2 (MSSQLSERVER)

    The application expects a default instance name to connect to. Thanks in advance!

  • Thanks Brent – will have to see if application can live with named instance. Kinda-sorta is right, technique not for the “faint of heart”.

  • One of our important systems (let’s call it System1) has been hosted on a massive physical box (1 TB of memory, 24 cores, Enterprise Edition – call it Server1). We were then asked to provide DR, for which we are using AlwaysOn. This required another beefy box at our other site for DR (Server2). We felt that the DR box sitting dumb was a bit of a waste of such a big box, so we decided to add another SQL instance for another system (call it System2), but criss-cross – System2’s production instance is on Server2, synchronizing to a DR instance on Server1, while System1’s production runs on Server1, synchronizing to a DR instance on Server2. Because both System1 and System2 require AlwaysOn, we are licensing all 24 cores on both servers, which means that both System1’s and System2’s secondary replicas can be used for read-only reporting. I have carved up the memory between the instances on each server, but no processor affinity is set. I’m not really worried at this stage because one instance is prod and one instance is DR on both servers. Of course, in a DR scenario that changes, but there are other things to worry about in a DR scenario – and these are async replicas anyway, providing DR, not HA.

    I think this will work for us nicely as it makes the best use of two beefy boxes!

    • Doodles – assuming you were under software assurance, you didn’t need to license the DR box when you weren’t querying it. For every 1 primary you buy with SA, you get an equivalent passive instance for HA or DR with the same number of cores – but only as long as you don’t query it.

      By querying it, you just spent an extra 24 cores ($168k USD) on licensing.

      So, uh, maybe don’t tell the boss about your great job. 😉

      • With regards to “but only as long as you don’t query it”… how does that relate to CHECKDBs? Everything I’ve read seems to indicate you would need to license the passive instance to run CHECKDBs – is this correct? It just seems strange that you could be failing over from an issue or corruption right back into corruption with no way to check for it proactively… well, no way without spending huge chunks of cash, but I guess that’s the price you pay for that peace of mind.

  • Hi Brent – no, don’t worry, we were aware of the licensing costs of querying DR. We only decided to go with it since we put the production instance of System2 on the DR box, so we had to license all the cores anyway (because we wanted to use AlwaysOn for System2 anyway). We had thought of using processor affinity and not licensing the unused cores, but that’s not legal apparently, so we had to license all 24 cores. So now querying the replicas becomes available, which is nice!

  • Yes… but the systems are only loosely related. It just felt neater to separate them. We should be OK because it’s a prod and a DR instance hosted on the same server, as opposed to prod and prod.
    Generally I would virtualize (Hyper-V), but Server1 existed when I started, and I was asked to put in DR as well as System2.

  • Thanks. What I understand is that it’s basically a performance issue, especially if the DB sizes are huge (during backups, as you said).
    Now, how sensible is it to have, say, 3 instances on one node with Always On configured, with the secondaries on Node 2?

    From what I see, three different AGs with 3 listeners, one per instance, will be harder to manage, have performance issues, and don’t seem to offer any added benefit.

    Is it not a better design to have one instance with one AG and one listener hosting the databases of all three instances?

    • Atsung – for personal advice like that, that’s where our consulting comes in. Cluster and AG architecture design is pretty far outside of what I can do for free in a blog post comment. Thanks!

  • Understood. Thanks

  • My company has used instance stacking for years with large databases without performance issues. However, you do need lots of memory and lots of CPUs to get the performance. We are not using clusters or Always On technology; instead, we use CA ARCserve RHA for replication, which provides HA and DR. My point here is that instance stacking is not necessarily slow – it depends on your needs and configuration.

