SQL AZURE LOST ITS LEASE! EVERYTHING MUST GO!

Crazy Ballmer’s Database Hut is having the fire sale of the century!  SQL Azure pricing is sliced in half or more. We must be crazy, they say!  Check out these discounts:

SQL Azure Pricing Reductions

When SQL Azure first came out with its absurd limitations (no backups, tiny database sizes), I viewed the pricing the same way I view cocaine pricing.  You’d have to be pretty stupid to do cocaine, but if you do, then you’re not really the kind of person who’s deterred by astronomically high prices anyway.  (You’re probably proud of the high prices, come to think of it.)

Today, it’s a different story.  We’ve got:

  • Databases up to 150GB
  • Federations to make sharding easier (not easy, but easier)
  • Backup capabilities
  • Reasonable pricing

Suddenly, SQL Azure isn’t cocaine anymore.  I’m not saying you’ll want to start sniffing it today, but for starters, why not run a query to find out how much your own server would cost to host in SQL Azure?

[code lang="sql"]
-- Estimates each database's monthly SQL Azure cost from its data file sizes.
-- size is in 8KB pages, so size * 8.0 / 1048576 converts pages to GB.
SELECT  @@SERVERNAME AS ServerName ,
        DB_NAME(database_id) AS DatabaseName ,
        SUM(( size * 8.0 ) / 1048576) AS SizeGB ,
        SQLAzurePrice = CASE WHEN SUM(( size * 8.0 ) / 1048576) > 150
                             THEN 999999.000 /* too big for SQL Azure */
                             WHEN SUM(( size * 8.0 ) / 1048576) > 50 THEN ( 125.874 + ( ( SUM(( size * 8.0 ) / 1048576) - 50 ) * .999 ) )
                             WHEN SUM(( size * 8.0 ) / 1048576) > 10 THEN ( 45.954 + ( ( SUM(( size * 8.0 ) / 1048576) - 10 ) * 1.998 ) )
                             WHEN SUM(( size * 8.0 ) / 1048576) > 1 THEN ( 9.990 + ( ( SUM(( size * 8.0 ) / 1048576) - 1 ) * 3.996 ) )
                             WHEN SUM(( size * 8.0 ) / 1024) > 100 THEN 9.990 /* over 100MB, under 1GB */
                             ELSE 4.995
                        END
FROM    sys.master_files
WHERE   type_desc <> 'LOG'
GROUP BY DB_NAME(database_id)
ORDER BY DB_NAME(database_id)
[/code]

Disclaimer: that query is probably wildly inaccurate because I write horrendous T-SQL and barely tested it.  For exact pricing, check the Azure pricing page and scroll down for SQL Azure, or use the Azure pricing calculator.
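If you'd rather sanity-check the tier math without touching a server, here's a rough sketch of the same brackets in Python. The dollar figures simply mirror the query above, so they're only as accurate as that query – check the official calculator for real numbers:

```python
def sql_azure_monthly_price(size_gb):
    """Rough monthly SQL Azure price for a database of size_gb gigabytes.

    Mirrors the tier math in the T-SQL query above: a base price per tier
    plus a per-GB rate for space above the tier's floor. Not official
    pricing -- use the Azure pricing calculator for real quotes.
    """
    if size_gb > 150:
        return 999999.0            # sentinel matching the query's "too big" marker
    if size_gb > 50:
        return 125.874 + (size_gb - 50) * 0.999
    if size_gb > 10:
        return 45.954 + (size_gb - 10) * 1.998
    if size_gb > 1:
        return 9.990 + (size_gb - 1) * 3.996
    if size_gb * 1024 > 100:       # over 100MB but under 1GB
        return 9.990
    return 4.995

print(round(sql_azure_monthly_price(5), 3))   # 25.974
```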

I’m praying that you buy SQL Azure.

My query lists the data file sizes – not the actual data sizes, so if you’ve comically overprovisioned your files with lots of empty space, you’ll get artificially high SQL Azure costs.  That’s how I like to do my projections, though – if I’m worried that the data will grow to a certain size, then I size the database for it, and that’s a reasonable projection number for my SQL Azure costs as well.

That query also doesn’t warn you about features you’re using that aren’t available in SQL Azure, nor the security challenges of having your customer data offsite, nor does it cover how your applications might be impacted by a database that lives outside of your network, yadda yadda yadda.  These aren’t unsolvable challenges – especially when the price is right, and today, it’s gotten a lot closer to right.

Here’s the much harder query to write: we need to be ready when management asks, “So how much does it cost to host these databases on-premise, and do we have three replicated copies like SQL Azure?”  SQL Server 2012’s AlwaysOn Availability Groups get us much closer to that solution, but it ain’t cheap.


25 Comments

  • I wonder what the purpose of the price cut really is? For a regular business, whether it’s $50 or $500 won’t have much influence on a technology decision – where the data is kept, SLAs, and integration with other services will always matter far more.

    All I can think of is that a massive scale-out capability is being planned with the federated database feature – taking a database and hosting it over 1,000 SQL Azure instances. Only then would such pricing make a difference.

    • I’m curious about that too – especially in light of Microsoft’s approach of jacking up prices for on-premises SQL Server. Enterprise Edition costs are absolutely skyrocketing for folks with >4-core CPUs. Are we seeing a case where Microsoft’s happy with SQL Server market penetration, so it’s time to jack up prices there, but SQL Azure hasn’t caught on, so it’s time to cut prices until adoption takes off (at which point prices can go back up again)? Or is Microsoft taking the CA approach, where they abandon development on the on-premises product, jack up prices, and make money off maintenance for decades while they switch development over to a new product? All kinds of opportunities here for the tin foil hat crowd.

      My guess – and this is just a guess – is that Microsoft’s learned a lot over the last year about how much hardware they need to support XGB of data, and they’ve been able to drive down their own costs of database hosting with code improvements, resource governance, and maybe even cut-rate datacenter-oriented hardware that the major vendors have been bringing out. I’d also guess that they want to pass this savings on to consumers in order to eliminate barriers to customer adoption because, like you noted, there’s plenty of other barriers.

      • I was advised during a recent SolidQ course that the code base for on-premises SQL Server and Azure is now the same – it’s just that some features (heaps…) are made unavailable in Azure. If that’s the case, they aren’t abandoning development on one product in favor of the other, so I’m thinking the notion of jacking up on-premises pricing sky high while simultaneously making huge cuts to Azure is all about moving people to Azure as fast as possible.

        What is unnerving about that is that once you’re relying on it, there is nothing else. There’s no other vendor you can switch your Azure-built databases to, and if you no longer have any servers in house for this kind of need, you’re stuck. That means MS can jack the price of Azure not just back up to cocaine prices, but beyond. Frankly, such wild price swings don’t leave me with the warm and fuzzies at all. Instead of leading people to Azure, this might make them think twice about what might happen in ten years.

  • That’s just the cost for data at rest, but did they also change the $/GB rate to pull data back out? Of course the implied strategy is to put the apps in Azure also and co-locate them to avoid those charges, and then return minimal data to the client, but how many apps really do that well? While they’re making it easier to move, my guess is it won’t be long before the accounts payable people start showing up at programmers’ desks telling them to rewrite their apps to be more cost efficient.

  • From experience I can say that it’s pretty pricey for a start-up to get going in Azure, and I think MS will continue to drive down prices so cost will be taken out of the equation when thinking about deploying php/MySQL on Amazon vs .Net/SQL Azure on Azure. Microsoft needs the next generation of apps to be built on their platform.

  • Here’s a pretty cool blog post for you, called “My laptop is faster than SQL Azure”.
    Well worth reading.

    http://sqlservice.se/sql-server/sql-server-performance/a-sql-azure-tip-a-day-17-my-laptop-is-faster-than-sql-azure/

    • But is your laptop faster than 100 database shards in Azure?

      The strength of a solution like Azure isn’t about the raw computing power of a single instance. It’s about flexibility and distributing load across many instances when you need to.

      • The mentioned article talks about a test which inserts N number of rows in a table on a laptop and in SQL Azure.
        Even if you have 100 databases in Azure, you will still get mediocre insert performance, right? (Yep, sharding has serious issues with the data ingestion and bulk data loading.)

        • Feodor – actually, no. In the linked test, 10,000 inserts took about 120 seconds on Azure and about 30 seconds on the local laptop. Now imagine using just four shards in Azure. Inserting 10,000 records will still take 120 seconds for each shard, but do your inserts in parallel, and you can load 40,000 records in 120 seconds – thereby matching the speed of the laptop with just four shards.

          This is the difference of the cloud – you have to learn to think in parallel, doing tasks simultaneously across multiple machines.
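The parallel-loading idea above can be sketched in a few lines of Python. This is a toy illustration, not real SQL Azure code – the “shards” here are just in-memory lists standing in for databases, and every name is made up:

```python
from concurrent.futures import ThreadPoolExecutor

# Four stand-in shards; a real app would hold one connection per SQL Azure database.
NUM_SHARDS = 4
shards = [[] for _ in range(NUM_SHARDS)]

def shard_for(customer_id):
    # Simple modulo routing: rows with the same key always land on the same shard.
    return customer_id % NUM_SHARDS

def insert_batch(shard_id, rows):
    # Stand-in for a batched INSERT against one shard.
    shards[shard_id].extend(rows)
    return len(rows)

rows = [{"customer_id": i, "amount": i * 1.5} for i in range(40000)]

# Bucket the rows by shard on the app tier...
batches = [[] for _ in range(NUM_SHARDS)]
for row in rows:
    batches[shard_for(row["customer_id"])].append(row)

# ...then load all four shards simultaneously instead of one after another.
with ThreadPoolExecutor(max_workers=NUM_SHARDS) as pool:
    inserted = sum(pool.map(insert_batch, range(NUM_SHARDS), batches))

print(inserted)  # 40000
```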

          • Alfaiz Ahmed
            July 1, 2016 8:21 am

            Hi Brent,
            Actually I don’t know how to do shards. Can you please teach me? I’m new on Azure.

        • Also, to be clear – when you say “sharding has serious issues with the data ingestion and bulk data loading”, that comes down to the local application’s design for choosing the right shard. If you code well, you can pick the right shard for inserts on the app tier and insert blazing fast.
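That app-tier shard choice can be as small as one function. A minimal sketch, assuming hash-based routing (the function name and scheme are made up for illustration):

```python
import hashlib

NUM_SHARDS = 4

def shard_for(customer_id):
    # Stable hash routing: md5 gives the same answer on every app server and
    # every process, unlike Python's built-in hash() for strings.
    digest = hashlib.md5(str(customer_id).encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Every insert for a given customer goes to the same shard,
# with no cross-shard coordination needed.
print(shard_for(12345) == shard_for(12345))  # True
```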

          • Brent, what you say is bang on the money; however, it has confirmed a long-held belief of mine: you should never put anything with low-latency requirements in a cloud. At least not until solid-state storage, or whatever replaces it, becomes ubiquitous in the cloud.

          • Chris – just to clarify your statement a little, I’ll add a line:

            “You should never put just parts of a system with low-latency requirements in a cloud – you have to put the whole system there.”

            After all, if the requirement is low latency for your customers, then putting the entire database *and* app tier in the cloud is an extremely good idea. If the requirement is low latency for internal applications that all live in the same datacenter (not globally distributed) then of course the cloud is a bad idea.

  • Besides the problem that this product is still premature, the biggest problem I have with Azure is that I don’t trust (in this case) Microsoft. Who would store their confidential company data in a “public” location like the internet?
    Also, you don’t have any influence over where the data is stored (you can only select the continent). There’s no guarantee the data isn’t stored in a country where you don’t want it.

    • Wilfred – that leads to an interesting question. If you could pick any 3 companies that you’d trust to host your data, who would they be? Or is it only your own internal staff?

      • Brent, I dare you to keep your Social Security number, birthday, and name in a nameless database somewhere on the internet. And to make it even more fun, I dare you to take a picture of your passport and keep it there as well.
        I bet you won’t.

        • Feodor – that’s not what I asked. I asked, “If you could pick any 3 companies that you’d trust to host your data, who would they be? Or is it only your own internal staff?” I’d be interested in hearing your answer.

          Companies already host my Social Security number, birthday, and name – my banks, my credit card companies, my health insurance company, and to make things even more crazy, all of my *past* companies too. My old employers, my old mortgage brokers, any phone company I’ve ever used, etc. I wish I could trust that all of them kept perfect computer systems, never lost a backup, and never had an employee tempted by the urge to copy the company’s data in order to make a quick buck, but like you, I don’t put a lot of trust in blind hope like that. Security through obscurity doesn’t work. We’ve proven that over and over.

  • Are your clients outsourcing their DBA teams? Their dev teams?
    I’m currently working for a Fortune 100 client – sure, all their servers are hosted locally (they have their own data centers), but the monitoring, development, etc. are outsourced to other companies in India. Basically, their most important data can be accessed by Indian contractors and potentially competitors.
    Said like this, it sounds scary, doesn’t it?
    And I could say the same for many other clients.
    From my point of view, the cloud would be about as secure as this current situation.

    • I know that your confidential data can be accessed by Indians, but if your organization has outsourced some work, they (and you) need to trust the people doing it.

      It doesn’t sound scary to me, because if someone wants to misuse the data, they will do it, irrespective of nationality.

      Regards
      Chandan

  • Brent.

    Are the system databases & tempdb charged for?

    Assuming you’d want to take a backup of the databases at least every day and transfer the data to a different location, wouldn’t you then hit transfer charges of $0.12 per GB? That doubles the cost.

    Would you want all of your SQL databases hosted in the cloud? You can make a business case for your important “top dollar production systems”, but often you have other SQL databases that ‘just do the boring stuff’ – important in their own right, but not revenue generating. Where would you host them? Locally?

    But once you have bought a local SQL server, surely it then becomes more cost efficient to properly utilise it.

    I don’t think I will move my stuff over just yet…

    • Guy – no, no charge for system & tempdb.

      About the offsite backups – that same extra cost would apply internally to your own SQL Servers as well. Offsite backups aren’t free.

      About “Would you want all of your SQL databases hosted in the cloud.” – no, and I certainly hope nothing in my post suggested that I would. Let me know where you got that idea and I’d be glad to clarify it.

      About “But once you have bought a local SQL server, surely it then becomes more cost efficient to properly utilise it.” – are you saying you put all of your databases on one server? What happens when you outgrow that server? I’d agree that SQL Azure isn’t targeted at people who have one and only one database server – or is it? If you’ve only got one SQL Server, you probably can’t afford to build in the kind of HA that SQL Azure has.

      • Offsite backups? Small shops – backup to USB and take home. No incremental costs.
        Large shops – probably already have bought and paid for a WAN (between sites) so no additional cost.
        Azure? Want a copy of your database and it is $$$ every time.

        100% cloud: Didn’t want to imply that you’re advocating moving all your databases to the cloud. It’s horses for courses, as we say in Blighty.

        Full HA in Azure is attractive, and as you say, if you need all the 9’s and the performance and the flexibility then it’s going to be more cost effective with Azure – at least in the short term. If it’s that important to your business then you still need a good DBA/sys admin by your side.

        An ‘only slightly less available’ solution can be done with clustering (3 to 5 second failover?), and that doesn’t cost extra for the SQL licence (IIRC) – hardware is cheap, comparatively.

  • This is a little tangential, but the UI design is horrible. It took me minutes to figure out how to work the calculator.

    https://www.windowsazure.com/en-us/pricing/calculator

