My Thoughts on SQL Server 2012’s Licensing Changes

A few weeks ago, Microsoft announced that SQL Server 2012 will no longer be licensed by the CPU socket, and will instead be licensed by the core.  You can read more at Microsoft’s recap or Denny Cherry’s analysis.

After contemplating the changes and discussing them with companies, I think of this as a few related announcements.

SQL Server is getting more expensive overall.

This shouldn’t come as a surprise to anybody: good things get more expensive over time, and crappy things get cheaper.  This change simply doesn’t affect me because I’ve never paid a single penny for SQL Server myself anyway, and I don’t plan to start now.  If I had to pay my own money for a database platform, I’d have switched to PostgreSQL long ago.

When I talked to my clients about this change, they all had one of two reactions:

  • “OMG, THIS IS UNBELIEVABLE!”  These people were more worried about their individual careers than their budgets: DBA managers wanted to know if SQL Server would start waning in popularity, thereby reducing the value of their skills.  (If the high salaries of Oracle DBAs are anything to go by, bring on the licensing price increases.)
  • “No big deal, but no new 2012 projects next year.”  These people understood that they were already licensed for their existing servers anyway, and only their ongoing maintenance fees would be impacted.  However, they immediately crossed Availability Groups off their list of 2012 projects, and that really sucks for me.  I’d been so excited about the ability to scale out with multiple read-only replicas, but the pricing just makes this a no-go for most of my clients.  Between the licensing changes and the traditional hesitation to deploy before Service Pack 1, 2012 is dead in the water for them.

Nobody decided to throw SQL Server out of the shop altogether, but some of them did start asking tough questions about their future projects.  It’s really hard to hire a good production SQL Server DBA right now (email us your resume if you’re looking in the Chicago, Los Angeles, or Portland areas, but no remote workers) so many of our clients are running understaffed.  One client said to me, “As long as I’m relying on outsiders for my database administration, what difference does it make whether it’s MSSQL, Oracle, DB2, Postgres, or the cloud?”

SQL Server used to be seen as the middle ground between expensive-but-awesome Oracle and free-but-limited open source.  Those days are gone – SQL’s pricing is higher, and open source platforms have gotten pretty darned good.

Limits kick in quickly on Standard Edition.

Standard Edition’s limits haven’t really changed significantly – but today’s hardware has, and Standard Edition isn’t keeping up.  If you’re struggling with application performance on Standard and you survive by throwing hardware at the problem, your options run out once you hit 16 cores and 64GB of memory.  At that point, you have to throw hardware and licensing at the problem by upgrading to Enterprise Edition.
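To make the ceiling concrete, here's a quick back-of-the-napkin sketch in Python. The 16-core and 64GB figures are the Standard Edition caps discussed above; the helper function and sample configurations are just illustrative:

```python
# Does a proposed server fit under SQL Server 2012 Standard
# Edition's compute caps (16 cores, 64GB of memory)?
# The two limits come from the discussion above; everything
# else here is illustrative.

STD_MAX_CORES = 16
STD_MAX_RAM_GB = 64

def fits_standard_edition(sockets: int, cores_per_socket: int, ram_gb: int) -> bool:
    """True if the box stays within Standard Edition's limits."""
    total_cores = sockets * cores_per_socket
    return total_cores <= STD_MAX_CORES and ram_gb <= STD_MAX_RAM_GB

# A 2-socket, 8-core box with 64GB squeaks in...
print(fits_standard_edition(2, 8, 64))   # True
# ...but throw more memory at the problem and you're shopping for Enterprise.
print(fits_standard_edition(2, 8, 128))  # False
```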

I don’t think 16 CPU cores is really all that limiting.  CPU-intensive SQL Server queries tend to be the easiest ones for me to tune, and I’ve always argued that database servers aren’t app servers anyway.  The core-based licensing change just gives me more ammo to tell developers not to do string processing in the database server.

While 10-core Xeon CPUs are already available, they're not typically deployed in 2-socket configurations.  You can technically buy a 4-socket box like the HP DL580 and populate it with two 10-core CPUs, but that configuration just doesn't make sense for Standard Edition due to the high cost.  With Intel's tick/tock roadmap, the next couple of Xeon families are still slated to be in the 6-10 core range, so I don't think the 16-core limitation is going to have a dramatic impact in 2012/2013.

The 64GB memory limitation, on the other hand, is frustratingly small given today’s memory prices.  32GB of server memory runs around $1,000, and memory can hide a lot of sins.  Standard Edition just doesn’t let you hide sins – you’re forced to spend manpower to keep tuning applications, and unfortunately, that’s not an option with third party applications.  I do a lot of work with independent software vendors (ISVs), and they’re frustrated that their customers can’t just buy $1,000 worth of memory to get awesome performance quickly.

There’s an opening for DBAs who love performance tuning.

It’s just you and me here, so let’s be honest: I make money when companies are in pain, and SQL Server’s licensing changes will inflict some pain.  Companies can’t just throw more CPU or memory at the problem anymore without writing a big check to Microsoft for additional licensing.  As a consultant, I can say, “I’ll fix that problem for less money than Enterprise Edition costs, let alone the cost of a server with more CPU sockets.”

If Microsoft had raised the 64GB memory limit, then companies could afford to mask problems longer by throwing memory at the problem.  They can’t, so I win.

The bummer for me is that I just can’t make the math work for scaling out with Availability Groups now.  I was really, really hoping for a lower-tier license cost for the read-only replicas (after all, they only support a subset of the work) but the cost to run a 5-node Availability Group is staggering.  Even if we use just 2-socket, 6-core servers across the board, that’s $412,440 of licensing – simply unthinkable, even if we throw in discounts.  It kills me, because I’m a huge fan of this feature, but right now companies have to be under tremendous pain in order to write a check that large to scale one application.  It’s just easier to tune SQL Server, and that’s where I come in.
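The arithmetic behind that staggering number can be sketched out quickly. The roughly $6,874 per-core Enterprise Edition list price below is what the $412,440 total implies for this configuration; no volume discounts are applied:

```python
# Back-of-the-envelope licensing math for the 5-node
# Availability Group above: 2-socket, 6-core servers
# across the board, licensed per core at Enterprise
# Edition's ~$6,874/core list price (implied by the
# $412,440 total; discounts not included).

ENTERPRISE_PER_CORE = 6874  # USD per core

nodes = 5
sockets_per_node = 2
cores_per_socket = 6

total_cores = nodes * sockets_per_node * cores_per_socket
total_license_cost = total_cores * ENTERPRISE_PER_CORE

print(total_cores)         # 60
print(total_license_cost)  # 412440
```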

It’s hard to be upset about that.

67 Comments

  • The changes in licensing for SQL Server make scaling Standard Edition over many commodity servers much more appealing. Admittedly, there are some aspects of scaling out that are non-trivial, but when you combine the hardware and licensing costs of a 4-socket server, the man-hours to make that work suddenly don’t seem so pricey. I’ve spoken with a few people using 4-socket, 8-core servers who are easily going to be doubling their licensing costs in short order.

  • What concerns me about the changes in cost is that it essentially cuts AMD right out of the SQL Server market. When you compare the two processor sets and the fact that AMD has far more physical cores, licensing SQL Server 2012 becomes prohibitively expensive. With this move, Intel becomes the natural option for SQL Server and there’s a part of me that is annoyed that Microsoft took that choice away from me.

  • I wonder if this will be the future trend, charging by the core. Or, if the 2012 feedback is negative, will Microsoft tune the SQL 2014 licensing again?

    I’m surprised there’s no phased-in hybrid licensing scheme, offering a choice of either core- or socket-based licensing but just making both pricier.

    It’s sad that people now only focus on SQL 2012’s cost/licensing, and not the new features.

    +ve: people will now have less hardware to work with and will really have to fix problem queries

    • Jerry – my guess is that as processors continue to get higher numbers of cores, it will make sense for everyone to move toward that model. Oracle’s already there, VMware’s there, and so on.

  • SQL Server 2012 — a.k.a. “Microsoft vRAM”

    Eric

  • SQL Server is a great DBMS, but I prefer PostgreSQL. We used PostgreSQL for a couple of years, and I never had a problem I couldn’t resolve in less than 15 minutes, partly due to the complete transparency of their documentation (I can’t count the number of circles Books Online has run me around in) as well as the access to developers on the discussion group. Unfortunately, I couldn’t sell my organization on it because it was open source. My colleague and I still mourn it now that it’s completely gone from our organization.

  • Joe Fleming @muaddba
    December 13, 2011 12:15 pm

    One of my chapter’s board members approached our MS TSE at our last meeting. His company uses SQL Server under the SPLA, and he said, “You better come to our shop and explain to all my bosses why we shouldn’t switch to a free platform after these licensing changes.” Apparently their licensing cost is going to go up by nearly 80%.

    For us, it’s not so big of a hit, we’ve got over 300 SQL Servers deployed already, and most of our licensing will only be marginally affected in the immediate future. But for a small, rapidly growing company, this is a huge invitation to ditch SQL and go with a free solution.

  • Agree with the statement that SQL Server is becoming an expensive solution. We have been working towards consolidating and will continue to do so, taking advantage of better hardware and the Software Assurance that we got in place.

    I was looking to POC Availability groups for DR, and if MSFT wants to charge for replicas, then mirroring sounds more appealing to me.

  • I don’t understand why they’ve ditched Server+CAL for EE. I’d hoped for a cut-down version of Availability Groups in Standard Edition, limited to a small number of databases, but alas, no. Given that we can’t afford EE on the per-core basis, I guess I’ll stick with mirroring on 2005. Sure, it’s not as good, but I can’t buy nice things if I haven’t got the money, simple as that.

  • Dang Brent – it’s like you’re in my head. I just submitted a “SQL Server Licensing in a Virtual Environment” session to both SQLRally and SQLSat #119, and then this article popped up on Twitter. Weird.

    Great article, btw. We just went through a huge licensing audit and we are trying to make the transition from a 32,000 CAL subscription (don’t laugh) to a processor based licensing model in a virtual environment. I’ve seen things that would make your skin crawl and I’m having trouble sleeping at night.

    In one of the licensing meetings we were discussing the cost of SQL Server and I had an Executive Director look in my eyes from across the table and say “we need to start moving everything to mySQL”. I snickered a little and then went to my cube to polish up my resume.

  • Hi Brent,

    Wow! You’ve summarised it in a very realistic way. I was working at a client’s site where money was not a real obstacle at the time, and we always jumped on the latest and greatest version of SQL Server (within reason). So life was really sweet. But now that I have moved on and have been exposed to more clients, especially smaller ones, they were really disappointed with the new licensing model and hesitant to embrace SQL Server 2012 simply because of the cost.

    I can predict a few clients would slowly stop using Analysis Services and buy cheaper in-memory BI third party products to do their analysis straight from the relational database.

    Having said that though, one can always dream to work on a project where they have the ideal infrastructure and environment to support the full Microsoft BI stack.

    Julie

  • Even for BI, they have features that we will never see in production,
    like Power View! It’s only available in Enterprise Edition (nothing in BI Edition), and it requires SharePoint Enterprise licenses... yes, you have to add the SharePoint Enterprise price plus SQL Enterprise just to get a good ad hoc in-memory dashboard. I think I will definitely switch to QlikView or Spotfire.

    The original goal was to compete with QlikView, but the fight is over before the product’s release...
    Even some simple DB connectors for SSIS require Enterprise Edition (the DB2 connector from the Feature Pack, for example).
    The biggest change for me is that 50% of the usual pre-sales arguments we had are gone, like “license per CPU, not per core” or “complex needs but few users: Enterprise Edition + CALs.”
    And for SaaS providers, I see a lot of them with SPLA contracts for Enterprise Edition with CALs. Removing CALs from Enterprise Edition could just crash their business...

    Just a tweet from Donald Farmer, a well-known person from the Microsoft BI team who went to QlikView this year:
    “The new licensing for SQL Server BI Edition feels like an early Christmas present from MSFT to #qlikview – thanks!”

    Even worse: Microsoft is abandoning the successful strategy they have followed for a long time. Usually, Microsoft brings very big features to companies that never used them before, mainly because of the software price. That was the case with OLAP cubes, ETL, full-text search, reporting...
    With SQL 2012, these companies will use some of the amazing new features... with products from other software vendors, because of the price...

  • Svetlana Golovko
    December 18, 2011 12:14 pm

    Honestly, I think that instead of charging per core, Microsoft probably needs to charge “per feature”. I am sure that many companies don’t use Reporting Services or Analysis Services but would like to use Enterprise Edition for performance reasons. With the new licensing model, we may actually see a shift back to Oracle.

    • Svetlana – to some extent, they do: Standard, BI, and Enterprise Edition all have different feature sets (and prices to match). I agree that I’d like to see BI decoupled completely, though, and I think Microsoft’s BI sales story would suddenly be a lot different if those features had to stand on their own.

  • I’ve used MSSQL in development for a decade-plus, and licensing was always an issue.
    Time to look around. 🙂
    Over the weekend I installed PostgreSQL and EMS SQL Manager on a sandbox. Today I had a meeting with my team, planning to migrate a sample DB and stress-test it.

  • Why PostgreSQL?

    • Claire – it seems to have the longest established code base with the most features to meet the needs of RDBMS users. I’m not wild about picking up Database Du Jour if I don’t know that it’s going to be developed for the next decade. PostgreSQL has a nice combination of features and track record.

  • Well, I for one will not be going to SQL 2012 because of this; I may even keep on supporting SQL 2000 even though MSFT doesn’t.

    The real problem is that many companies ran to processor-based licensing without really thinking it through. It only makes sense for the largest public web-facing sorts of companies. In my current position, my predecessor paid for CPU licenses for a peak connection pool of about 300 connections. CALs would have been so much cheaper, about $160,000 cheaper. It was this lack of understanding that validated Microsoft’s new licensing model. And they also see that guys like me, who can spot the economical approach to licensing, need to be shut down. So they are eliminating the CALs on the Enterprise-level products.

    So I am not sure where the future is going to go, but I am thinking about wringing every ounce of capability from 2008 R2 while I possibly say goodbye to 17 years as a SQL DBA and investigate Oracle.

  • Hi Brent,

    No, I am saying that if MSFT does this, I am going to investigate going to Oracle. Since the costs shown thus far aren’t that far off, it may well be worth getting up to speed with Oracle. Otherwise, I may be languishing with SQL 2005/2008 for the rest of my career.

    Not a single DBA I know, including myself, can justify the move to 2012. With some of the tools out there like ERWin, it isn’t as difficult to migrate from one platform to the other as it used to be.

  • I have been looking over pricing since I saw your tweet on this, Brent, due to concerns about future upgrades and what Software Assurance can do. I read Denny’s link above and was concerned by what I read. I just got off the phone with Microsoft, which pointed me directly to: http://download.microsoft.com/download/5/9/5/59527629-ABD3-4C12-8117-DFABB86E2CFA/SQL2012_Licensing_Datasheet_USA_Dec2011.pdf

    Very important information about the Microsoft “one-time election” is on page 5. On Denny’s page, he expressed that:
    “When you upgrade from SQL Server 2008 R2 (or below) to SQL Server 2012 using your Software Assurance rights you can continue to use your existing license model until the end of your Software Assurance cycle. This means that if you have CPU licenses under SQL Server 2008 R2 you can continue to use those CPU licenses under SQL Server 2012 until your Software Assurance expires. Once it expires you will need to true up on the number of CPU Cores.”

    Scary. I was at a PASS MN meeting the other day, and someone there stated that they were scared, as they had just upgraded, but were then told that their assurance covers the CPU-to-core change without anything about renewing assurance.

    I just got off the phone with Microsoft, where I was told there would be a (not yet announced) one-time “election” for the old servers, to “capture” how they are running now and have those licenses upgraded without additional cost.

    Page 5 is very important for people who are looking at assurance upgrades now. I do not have the details on when the audit must be completed by and solidified on servers. Sorry.

    Hope this is useful and hope I captured everything correctly, as I am going to be making a push on our budget for a server CPU upgrade with assurance before it’s too late. Thanks for your tweet!

  • With the new SQL licensing model, at the company where I am employed as the DB Architect, I am now recommending we true up our Server+CAL licenses on SQL 2008 R2 under the current EA agreement and ride out the SQL 2012 storm... We have no current business need for SQL 2012, and given the direction of MS, I expect the Server+CAL model to be phased out completely going forward... I am expecting that the savings we can make by stopping payment of the SA on our SQL Servers and 1,000+ CALs over the next 3-year cycle will be money in the bank (OK, it never works that way, as the money gets spent on other things!) to pay for core licenses in a VM environment in 2015! Of course, it also buys time to investigate a (partial) move to an open source DB...

  • How is SQL Server active/passive clustering affected by the new SQL Server 2012 licensing? Will you now have to license the passive server?

  • Been following you for years. Just stumbled on this and noticed that you linked to crappy things getting cheaper (Blackberry Playbook “You Save: $296.01”)! How do you find this “crap”? Funny.

  • Brent,
    My name is Doron, and I am a DBA from Israel. I know this reply is kind of late, but I just read your article now. I must say that I have been (and still am) a SQL Server DBA for a bit more than five years. However, I started using PostgreSQL three years ago, and I think Microsoft has some more to learn. Regarding the pricing strategy Microsoft has published, well... they say it takes a long time to teach an old dog new tricks... Well, after six months of intensive marketing, I managed to make one of my customers switch to PostgreSQL. Linux costs no money at all... PostgreSQL costs no money at all... and I can surely say that performance is greater than MSSQL’s. Just FYI, my customer is a medical facility serving about 3,000 patients per day (24 hours).

    I have a 1U server with a 4-core Xeon and 16GB of RAM, and you would not believe the performance I am getting with PostgreSQL!

    Thanks for the article buddy, you just made my day !

    Doron Yaary,
    Israel.

    • Vasiliy Goncharenko
      November 20, 2012 9:01 am

      Hi, Doron,

      It’s great to hear that your transition from MSSQL to PostgreSQL worked well.
      I also use mostly SQL Server and just started using PostgreSQL in web apps. So far, it is powerful enough for my needs. I miss CLRs, but there are many different ways to get the job done in PostgreSQL.

      Couple of questions:
      – What front-end do you use? What language?
      – based on your experience, any cons with PostgreSQL comparing to MSSQL?

      Thanks,
      Vasiliy

      • Hi Vasiliy,
        If you mean the scripting language as the front-end, then I am using PL/pgSQL. However, one of the powerful features of PostgreSQL is that you can install more languages as plugins. As an example, I can use PL/sh to run Linux shell commands from a function or a trigger within my database.

        Cons? Well... there are a few...
        1. Backups must be done via the shell – no “Maintenance Plan” here...
        2. pgAgent is your alternative to SQL Agent (it’s a plugin)
        3. Linked servers are easier in MSSQL, but surely possible here using Slony-I
        4. Umm... you MUST love Linux... I mean it. PostgreSQL runs on Windows as well...
        5. If you have any experience with Oracle (sorry for the bad word), you can use it here. Basically, PL/pgSQL has an amazing similarity to PL/SQL.

        By the way... there is an old rumor saying that Oracle copied PostgreSQL to build their database. I don’t know if it’s true or not, but there are some error codes which are exactly the same, as far as I heard.

        In the long run, it is hard to make a customer believe in PostgreSQL, because it does not have the backing and support that Microsoft has. However, performance is 10x better, and you can REALLY run background processes via PL/sh, which makes integration commands easier to run.

        If you need any more help, I’ll be happy to answer, my friend!

        • Vasiliy Goncharenko
          November 20, 2012 10:24 am

          Doron,
          Thank you for a detailed response.

          By front-end, I meant the application that works with the database. I mostly use .Net and had difficulties working with PGSQL from .Net.

          When it comes to performance, I have a stress-test DB on SQL Server that I ported to PGSQL. For sure, PGSQL is less resource-hungry, but I did not notice a 10x performance improvement.
          On what kind of tasks (reading or writing) did you get a significant performance gain?

          Thanks again,
          Vasiliy

        • “there is an old rumor saying that Oracle copied the PostgreSQL to built their database.”

          I heard it was based upon DEC Rdb, which Oracle bought out years later and renamed Oracle Rdb; it’s still on the go as far as I know.

          http://www.oracle.com/technetwork/database/database-technologies/rdb/overview/index.html

  • Yeah, I know this is an old thread, but one thing you missed (or maybe you didn’t) is that this move is really going to put the grind to those people who are using Enterprise 2008 R2 in the SPLA program, e.g., the company I work for. 🙂 Our primary DB server is an HA 2008 R2 cluster (active-passive). We are running on 2 CPU licenses with a dual 12-core system; our secondary/passive server is a quad 6-core machine. We are getting by with only 2 CPU licenses for SQL Server 2008 R2 Enterprise... our SPLA term ends in October. This means after October we either pay about 20 times the cost for SQL Server or we migrate over... and I likely find a new gig. 🙂 Haha. Actually, I think the last time I ran the numbers, if we were to pay for every core, it would put the company out of business.

    Anyway, I think MS is really shooting their own foot off with this one. It’s no wonder they are pushing tablets and whatnot; I think they accept that they will be losing market share in the SQL department. Thinking back, I should have taken an Oracle course instead of the SQL 2008 R2 MCDBA track. Oh well.

    • Vince – dumb question, probably, but do you really need all those cores? Most of the time when I do performance tuning engagements, we drop CPU use by a heck of a lot just by doing index and T-SQL tuning. What do you use the CPUs for?

  • It’s powering a pretty large group of websites. It’s been tuned, tweaked, and creaked. We have caching servers, search servers, and more, all working to take load off of the SQL Server. Cutting the DB into more DBs is an option, but not a pretty one, given it runs OK right now at a consistent 75% CPU utilization 24×7. To break things up and split the load among more instances would be a major overhaul. I think what we will likely be doing is moving to a NoSQL or MySQL solution.

  • Vince,
    You can use partitions and filegroups instead of several servers or instances. Regarding the 75% CPU utilization... I had a client who had about 80% CPU utilization, and after I examined the server, I found out that his hard disk ran at only 7,200 RPM. Technically, the server stacked up more requests via TCP/IP than it could handle. Changing the drive to a 10,000 RPM one got the CPU load down to 44%!

    In SQL Server, and any other RDBMS for that matter, each request is handled and monitored for two main reasons. The first is the timeout, which WILL close the request and clear it. The second (and more critical) reason is that the RDBMS has its own engine, which may create a lock if several requests to the same table have failed!

    Now, if your server is a VM, then I would strongly suggest checking how many “neighbors” you have with you on the same host. Is your VM host a private cloud, or is your server being hosted in a public cloud? I know the world is all crazy about virtualization, but it is NOT always the right solution.

    Finally, another thing which may take your CPU time is ETL package runtime. That means that if you run an ETL package (SSIS) on the same SQL Server, the package will normally get performance priority. You can work around it by creating a user which is connected to a certain resource pool. I have a client which is a local bank, and they run an ETL package every night. The last time they had bad performance, we noticed that the ETL XML file itself was about 500MB! We then split the ETL process into several parts, and it turned out right.

    Hope this helps a bit. Tell me if it does. 🙂

    Doron Yaary, Israel.

    • The server is using a dedicated SAN with a RAID 10 array of 24 10K 6Gb/s disks, so short of going to SSD, disk I/O is up as high as it can go. 🙂 A lot of work has been done to get this beast to the point where it would even run on a single machine; now, with the change, we would have been better off leaving it on different DB servers.

  • I’m facing this challenge now. Most new servers come with 20 cores or more; therefore, SQL Server becomes too expensive.
    Now I’m considering installing Oracle or another database management system on my servers.

    • Eduardo – interesting. So you believe you’ll save money by going to Oracle?

      • I bet the cost will come out pretty darn close, if not cheaper, on Oracle. I know when I did my comparison it was a wash. Big multi-CPU MS SQL servers are being taken out of the market by the core licensing scheme.

        • So if it’s the same price, and you already have SQL Server in house (especially the expertise), then it will cost you more on Oracle – even if Oracle is slightly cheaper. Priced out an Oracle DBA lately? Then factor in the expenses of moving software from one database back end to another, and it doesn’t make sense to change at same price levels.

          • In my experience,

            Oracle product = Good.
            Oracle licensing machine = Bad.

            While there are many facets to the argument, I was at a company who got caught by the Oracle maintenance agreement fine print. They wanted to keep their current Oracle 11g production database systems fully covered for future patches and upgrades (equivalent to “SA” for SQL Server), but didn’t require this for a number of their old, legacy Oracle systems that were heading out of use.

            The Oracle licenses were owned outright by the company, so continuing to operate the Oracle systems was not an issue.

            However, despite lengthy and frustrating pleas with the Oracle licensing dept. the message was that, in no uncertain terms, ALL Oracle systems in-house had to have an active maintenance agreement or none at all.

            Paying Oracle maintenance fees for those legacy systems, systems that were never going to be upgraded ever again, meant finding a six-figure amount in a budget that simply didn’t have anything available.

            Oracle licensing turned the thumb-screws, with their no-negotiation style, in a way that I have never experienced with Microsoft.

            With Microsoft, it seems, when you have an EA agreement and are paying them for more than just your SQL Servers, they are a pretty forgiving bunch to deal with... whereas Oracle, on the other hand, hmm...

            Oracle didn’t get their money. The maintenance went out to a third-party company, for a substantial cost saving.

    • Right now I have a scenario where I need to provide a database solution for a 16-core server. The price of installing SQL Server with the SA option is US$50,709.00. The same server running Oracle will cost US$35,000.00. Which solution do you think my costumer will prefer?

      • Oh absolutely – if you get to pick which platform you deploy on a brand new app that is compatible with both database servers, and there’s no staff involved, then you should pick the cheapest one. Seems odd that you’d have a figure down to the dollar for SQL Server, but Oracle gets nice round numbers. 😉 Best of luck with the project! I’m sure your costumer will be very happy. What kinds of costumes does he make for you?

  • Hello,

    I posted it before and my heart tells me to repeat myself: go OPENSOURCE!

    While MSSQL or Oracle will cost tens of thousands of dollars, here is a good scenario for you, Eduardo:

    OS: Ubuntu 12.04 Server Edition x64 – cost: $0!!!!
    DB: PostgreSQL Database Server 9.2 – cost: $0!!!!

    PostgreSQL has drivers for almost any language (PHP, .NET, Java, etc.), so you won’t be lost here.
    I can tell you that I had a project for a commercial bank in which I advised them to drop Oracle and move to PostgreSQL (due to pricing, especially the maintenance fees!).

    I am sure that if you search the web for comparison between the two (Oracle/PostgreSQL) you will find answers that will blow your mind 🙂

    Hope this helps 🙂

  • Vasiliy Goncharenko
    January 29, 2013 2:52 pm

    +1
    I’m not crazy about open source, especially when it comes to delivering solutions to mid-size and larger companies, but PGSQL + .Net is my preference for personal development.

  • Vasiliy,

    I am sure that if you install PGSQL and pgAgent (which is the equivalent of SQL Server Agent), you will have an enterprise-level database which supports clustering and mirroring (via Slony, which is the equivalent of SQL Server Replication).

    By the way, did you know you can run shell commands from your stored procedures? You can run C/Java code as well.

    I’ll give you an example from one of my clients. I needed to retrieve some financial info from a mainframe (yeah... they still exist!). I executed a shell command from a stored procedure and called a web service without any 3rd-party tools at the DB tier.

    Mail me if you need any help 🙂

    • Vasiliy Goncharenko
      January 30, 2013 8:05 am

      Yes, I know that I can include my code in stored procedures and triggers, and based on my experience with SQL Server, that does not make me feel comfortable. I have not used this feature so far.

  • We bough sql server licenses using a three year SA agreement. The SA agreement states clearly that these are perpetual licenses and that we own them. However as our SA is expiring, Microsft is claiming that eventhough we have perpetual licenses that we have to renew our SA to use these licenses to host an web applications we developed.

    When did Microsoft change perpetual licenses to term licenses? Is this something you have encountered, and what did you do?

    • Licensing can be really confusing. Let’s take a step back for a second.

      In most cases, the licenses are permanent. The license gives you the right to run SQL Server.

      Software Assurance (SA) is an agreement that for a period of time, you can get free upgrades to your existing license. Let’s say in January 2010, you bought a license of SQL Server 2008 R2 (the newest out at the time) along with a 3-year SA agreement. That means until January 2013, you can continue to upgrade your SQL Servers to whatever Microsoft brings out. After that agreement expires, you don’t get free upgrades to new versions – although you can just re-up and buy another 3 years of SA.

      There are exceptions to both the license and SA terms, though. For example, there are specialized licensing agreements like Bizspark and SPLA that are not permanent. There are also benefits that you only get when you’re covered with SA, like License Mobility, which gives you the right to move your SQL Servers around from place to place.

      Your best bet is to work with a licensing reseller that’s well-versed in Microsoft licensing and can help you navigate these challenges.

  • Brent,

    PostgreSQL is maybe the most powerful database engine that exists (in my humble opinion). It can do anything that MS SQL can (with respect, of course). Regarding BizSpark, you only get a “time bomb” with that: you WILL pay the money after a year or so… it’s not really free as they may say. (Correct me if I’m wrong here, buddy.)

    More than that, and I’m sure many will agree, a Linux server is more stable than a Windows one.

  • Without digging into the “Why could you possibly need 32 CPU” end of the spectrum here, here’s my problem with the new licensing model:

    “In a virtualized environment, the compute capacity limit is based on the number of logical processors – not cores, because the processor architecture is not visible to the guest applications. For example, a server with four sockets populated with quad-core processors and the ability to enable two hyperthreads per core contains 32 logical processors with hyperthreading enabled but only 16 logical processors with hyperthreading disabled. These logical processors can be mapped to virtual machines on the server with the virtual machines’ compute load on that logical processor mapped into a thread of execution on the physical processor in the host server.”

    The reason I see this as contradictory is that with a Windows 2012 / SQL 2012 VM on the latest ESX host, Windows 2012 does see the processor architecture based on our config. Windows 2012 is touted as being “fully NUMA aware” in a VM, and SQL loves NUMA too, which I have seen to be the case here. Since we are mapping this particular VM to the physical architecture of the processors, and Windows and the applications see that correctly, it should be held to the stated rule for SQL 2012 Standard (“limited to the lesser of 4 sockets or 16 cores”), not the implied rule that “the compute capacity limit is based on the number of logical processors – not cores, because the processor architecture is not visible to the guest applications.” So if I have an ESX host running 4 Opteron 6267 processors (2-on-die config – 1 socket / 2 NUMA nodes / 8 cores per node = 16 cores per socket), and I specifically allocate my VM 2 sockets for 32 cores / 32 CPUs, then technically I am within the licensing restrictions: I am only using 2 sockets, and all 32 CPUs should get online schedulers in SQL.
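    For what it’s worth, you can check how many schedulers SQL Server actually brings online under a given configuration with a quick DMV query – a sketch against the standard `sys.dm_os_schedulers` view:

    ```sql
    -- VISIBLE ONLINE schedulers run user workloads; VISIBLE OFFLINE ones
    -- exist but are excluded, e.g. by licensing core limits or affinity masks.
    SELECT status, COUNT(*) AS scheduler_count
    FROM sys.dm_os_schedulers
    WHERE status LIKE 'VISIBLE%'
    GROUP BY status;
    ```

    If the edition’s core limit is kicking in, the extra CPUs show up as VISIBLE OFFLINE rather than getting online schedulers.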

    I think that as the abstraction from the host hardware to the VM allocation becomes more transparent (if configured as such), it should be treated as physical hardware, and the new licensing model starts to seem shady.

    Just my two cents mind you.

    • Josh – sorry, might be because it’s late, but I’ve read this a couple of times and I’m not quite sure what your core (no pun intended) complaint is. What do you want Microsoft to do differently?

  • Hello,
    I work for one of the leading financial banks in North America. There has been significant growth in our SQL Server footprint, and Microsoft’s recent licensing change doesn’t impress our executives. We are now migrating as much as we can from Microsoft SQL Server to other DBMS platforms like Oracle, MySQL, and DB2. Unfortunately, they have started letting go SQL DBAs as well.

