No no, April Fool’s was yesterday.
Hear me out.
Azure SQL DB’s feature limitations are a good thing. It doesn’t support everything that SQL Server supports, like CLR, cross-database transactions, database mail, linked servers, and OPENQUERY, but…frankly that feature-limitation page reads like a list of my favorite things to tell developers to avoid. You shouldn’t be using most of that stuff in a new application today. You might need it down the road, but…if you don’t, don’t get started using it on day 1. Stick with the basics, and Azure SQL DB supports those, plus more.
Azure SQL DB has optimistic concurrency. You get Read Committed Snapshot Isolation (RCSI) by default, which avoids most common readers-blocking-writers and writers-blocking-readers problems. Sure, you can still get into blocking problems – but this just avoids a lot of ’em right out of the box.
Azure SQL DB will force you to handle errors. A good developer will listen for things like deadlock errors and build retry logic into her application – but let’s face it, most of us aren’t that diligent. Azure SQL DB kinda forces you to do it by giving you transient errors when your database is under too much load. The sooner our developers realize that yes, even the best databases can throw transient errors, the better.
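Here’s a minimal sketch of that retry pattern in Python. The error numbers are a subset of the ones Microsoft documents as transient for Azure SQL DB; `TransientSqlError` is a stand-in for whatever exception your driver raises, and `run_with_retry` is a hypothetical helper, not anyone’s official API:

```python
import random
import time

# Illustrative subset of error numbers Azure SQL DB documents as
# transient (service busy, database temporarily unavailable, etc.).
TRANSIENT_ERRORS = {4060, 40197, 40501, 40613, 49918, 49919, 49920}

class TransientSqlError(Exception):
    """Placeholder for a driver exception carrying a SQL error number."""
    def __init__(self, number):
        super().__init__(f"SQL error {number}")
        self.number = number

def run_with_retry(operation, max_attempts=5, base_delay=0.5):
    """Run operation(), retrying transient errors with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientSqlError as err:
            if err.number not in TRANSIENT_ERRORS or attempt == max_attempts:
                raise  # permanent error, or out of retries
            # Backoff with jitter: roughly 0.5s, 1s, 2s, ... plus noise,
            # so a herd of retries doesn't hammer the database in lockstep.
            time.sleep(base_delay * 2 ** (attempt - 1) * (1 + random.random()))
```

The point isn’t this exact code – it’s that the retry loop exists at all, so a throttled database turns into a brief pause instead of an error page.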
Azure SQL DB confines the blast radius of bad queries. In a traditional shared SQL Server environment, a poorly written application can wreak havoc on the performance of unrelated applications that share the same guest or host. With Azure SQL DB, your performance really is confined to just your database – so poorly-written queries show up on the radar, fast. Managers get a much better idea of how much bad code will cost on a monthly basis.
In fact, if you’re a database administrator reading this, I bet Azure SQL DB is exactly the kind of system you’d want to force your developers to use when building a new app.
Stop thinking of Azure SQL DB as a limited product, and think of it as a database platform managed by strongly opinionated DBAs.
DBAs just like you.
“And stop seeing the official icon as a data trash can where the inside contents are on fire”
I see it as a cup of delicious warm soup.
I really like Azure. As Brent is saying, the limitations are almost all based on things that we shouldn’t be doing anyway.
I get a well-administered DB server that I don’t have to worry about patching and updating, and I can concentrate on functionality.
It’s possible that a full-time DBA with an in-house server could tweak things a little better, but most people don’t need that kind of performance tweak – and if you don’t have that full-time DBA, the baseline SQL install is likely to be worse than what Azure can give you.
The only real downside is if you need to push large amounts of data around and internet speeds start getting in the way, but if you are using a full Azure stack you don’t have that issue either.
One of the lead engineers on the Azure team was boasting to us how AzureSQL is the best price/performance DB system on the planet, and I have come to agree with him. With 60+ different client databases, our leadership pushed the move to AzureSQL in order to balance 15-20 databases on each elastic pool and massively reduce the costs of managing 60 separate VMs. This is not a different scenario scalability-wise than if you put 15-20 databases on one SQL Server box, with one exception – which is where we do have blast radius issues (confined to our own databases though).
As you mentioned in an article last month, Log I/O is capped for a pool on Azure (96 MB/s on Gen4, 48 MB/s on Gen5). Because of this, doing a large SELECT INTO or running the Hallengren index rebuild script on one database can make every other DB in the same elastic pool unusable. Our poor devops manager has a nightmare trying to schedule monthly loads and index rebuilds on each pool.
This low Log I/O cap highlights one of the biggest things I miss from SQL Server – filegroups. There is no way to take one database and isolate its I/O from every other database in the pool by putting it on a different disk subsystem.
Our largest pain point is trying to move data between databases. We converted cross-database queries to elastic query, but performance nosedived – it is similar, performance-wise, to using OPENQUERY to move data between servers. Elastic query isn’t supported in SQL Server, which makes local development tricky: we have to create synonyms/linked servers locally to mock the behavior of the external tables – which makes migration scripts really fun. We have found it faster to use BCP to move data between databases. I hate BCP.
Hello, Brent! It’s my first comment on your website, so that’s my helloworld.comment. I have been sniffing around Azure SQL DB for a couple of months, but I work in the public sector, so I’m sniffing only in my own exercises. Moving governmental data to a cloud solution is practically unacceptable here. (Maybe you have some other point of view for public-sector folks?) But not sharpening the saw is also unacceptable. I analyzed your “don’t do this” list in the first paragraph and found linked servers on it. Here is my question: what’s wrong with linked servers, in your view? Is it the permission issue described in your great https://www.brentozar.com/blitz/linked-servers/ post?
Gerard – hi, welcome to the club! The way we handle comments here is that comments need to be related to the post they’re on. I don’t take general Q&A in the comments – just due to the volume of readers I get, I can’t afford to stop and answer every question for free.
You have a couple of ways to get options: for a quick free answer, you can ask a question on a Q&A site or forum like https://dba.stackexchange.com or https://sqlservercentral.com, or for paid answers, you can hire me for consulting.
Thanks again, and welcome to the club!
That was unintentional awkwardness – in my mind, I expected a link to another appropriate post of yours. Thank you for the tips.
But without CLR, how do I write my geospatial functions?
For that matter, how does geospatial work at all without CLR?
Geospatial datatypes, while CLR, don’t require CLR procs to be enabled.
Just hope that MS doesn’t have another massive outage that takes out an entire region of services… and then doesn’t have the capacity to fail said region over to what is supposed to be the hot site.
Yeah, they really need to catch up to the quality and reliability of the servers that the rest of us manage, am I right? Those never go down.
So far I have found two problems with Azure SQL DB.
1) GETDATE() returns a date and time in UTC.
2) sp_blitz does not work.
I’ve tried to fix the latter, but I’m looking for testers.
The GetDate() issue is another one of those blessings in disguise that Brent was talking about. We really shouldn’t be relying on the DB server to be in the right time zone. When you do that, the second you put a user in a different time zone, you have problems. IMHO time zones need to be handled in the UI, and having UTC in your DB makes it easier to do that. Of course, if you’re migrating an existing DB, you have a lot of work to do. 🙂
Microsoft has a workaround for converting to a specific time zone – the AT TIME ZONE clause.
Robert – I’m with you there – I love referring folks to this post: http://yellerapp.com/posts/2015-01-12-the-worst-server-setup-you-can-make.html
@Robert, Yes, storing date and time as UTC is the right solution to the getdate() problem. I’ve just inherited a database, and that change will take a bit of time to implement.
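The split these comments describe – UTC in the database, conversion at the display layer – can be sketched in Python. This is just an illustration of the pattern, not anyone’s production code; `to_user_local` is a hypothetical helper, and the zone name would come from the user’s profile:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

def to_user_local(stored_utc: datetime, user_zone: str) -> datetime:
    """Convert a UTC timestamp from the database to the user's time zone.

    Treats naive datetimes as UTC, which is how they'd come back from a
    column populated with GETUTCDATE()/SYSUTCDATETIME().
    """
    if stored_utc.tzinfo is None:
        stored_utc = stored_utc.replace(tzinfo=timezone.utc)
    return stored_utc.astimezone(ZoneInfo(user_zone))

# Same stored instant, rendered per user: only the display layer
# knows the zone, so users in different zones each see local time.
row = datetime(2018, 6, 1, 14, 30)  # naive UTC value from the database
print(to_user_local(row, "America/Chicago"))  # 09:30 CDT on that date
```

Because every stored value is UTC, comparisons and sorting in the database stay correct no matter where the users (or the server) sit.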
I took on being my current employer’s first and only DBA. That work was previously handled by the Infrastructure Team (server and network admins), and almost everything was already in Azure SQL Database. I spend more time on our dozen SQL Server instances than I spend on our 400+ Azure SQL Databases. There is still plenty of performance tuning work (mostly index work and finding incomplete joins and other code issues), but not having to deal with patching and backups and errors in the log is wonderful. If a developer introduces a performance problem in production over a weekend, the admins can just scale up the database until I find and fix the problem on Monday, then scale it back down. In almost two years, I’ve been called outside normal work hours only twice — and this is for a global company where it’s prime work hours for some databases no matter what time it is.
On the other hand, I no longer have visibility into a lot of things I need to better find issues (only Microsoft Support Engineers or Product Teams have access to all the log data), your apps absolutely need good retry logic, there’s no good way to copy databases across subscriptions, Agent Jobs are much easier to deal with than Runbooks and Schedules (IMHO), and everything costs a lot more than was expected.
On balance, I’m glad we have most of our eggs in the Azure SQL Database basket.
Mark – absolutely, the “crank up the knob for a while” strategy is fantastic. I’ve advised customers to do that when they’ve hit a code problem that will require some time to refactor, too – like a proc that needs to be fixed, but will take a few days or more to refactor & get through testing. Nothing wrong with throwing hardware at it for the short term.
Yeah, as an ISV employee, I’ll pass on Azure SQL DB. I’ll start by admitting this observation is a couple years old, but the performance of Azure SQL DB was atrocious when I checked. Now, my application architecture is such that it runs LOTS of small queries (<10ms execution times) vs fewer larger queries, but when I ran a copy on Azure SQL DB, the performance was a fraction of what it is on a copy of SQL Server Express (assuming the workload can fit in the 10GB/db size limit, which it usually can’t). They might have fixed some things in the past couple years, but your recent post on import performance with Azure SQL DB makes me think that they didn’t fix that much.
Just curious, in your situation were you testing with a local application running against the Azure DB or was it an application that was running in Azure as well? Depending on your broadband connection, network latency could easily account for slow overall response time on short queries. Running your application on Azure reduces this considerably.
I definitely notice the latency when developing and running against the DB from my development box.
No, I was on a VM in Azure. That level of latency would have killed my application.
I agree with all those points.
I want to add one – as a DBA, no upgrades for me, MSFT doing it – this is so good!