Richie and I attended the AWS re:Invent conference in Vegas last week. Here are some of my favorite takeaways about Amazon Aurora, their homegrown relational database with MySQL compatibility.
1. AWS has a grudge against Larry Ellison. Andy Jassy’s keynotes made repeated jokes about Oracle’s cloud product (or lack thereof), the high cost of proprietary databases, and expensive sailboat hobbies. Larry’s big, bold personality makes for an easy target, but I couldn’t help but wonder if AWS will come after SQL Server’s costs next.
2. Amazon’s database products are lucrative. AWS says they’re the fastest-growing products in the company’s history. The cynic in me suspects that they’re measuring growth in terms of revenue, not customer count, and since databases are expensive, voila.
3. Amazon’s Aurora product is ambitious. Aurora is their home-built relational database product. They manage backups, availability, and patching. In 2014, they announced full MySQL compatibility – you could point any MySQL app at Aurora, and it’d work just fine. This year, they announced PostgreSQL 9.6.1 compatibility, too.
4. Aurora PostgreSQL was 3 years in the making. That’s gotta make you wonder: are they trying to build SQL Server compatibility? Granted, it’d be a lot of work – it’s a closed source database. Feature compatibility would be an arms race: Microsoft would rapidly try to add features to entice users onto newer versions, while at the same time Amazon would have to race to keep up. (Hey, interesting coincidence – Microsoft’s been suddenly shoving features in – even in service packs.)
5. I’m guessing Aurora Oracle compatibility will come next. In the PostgreSQL announcement session, the speakers pointed out that PostgreSQL is the most Oracle-compatible database, with some 60-70% of applications being able to switch over. Theoretically, AWS could fund open source development of further Oracle compatibility, but given AWS’s lack of contributions back to open source projects, I wouldn’t bet on that. Instead, I’d bet on them doing something internally that only Aurora would offer.
Before going to re:Invent, I would have bet against AWS aiming for Oracle compatibility. However, during the keynotes, I kept hearing enterprise stories over and over. AWS really wants to be the default data center for enterprises, and enterprises run Oracle (and SQL Server of course).
Amazon won’t have an easy road. Their edge is that they’re essentially getting two database platforms for free: MySQL and PostgreSQL. Amazon has developers, sure, but they don’t have to build a query optimizer, for example. Thing is, those platforms aren’t Oracle.
6. Amazon’s throwing in free performance monitoring tools. The new Performance Insights tool shows wait stats, top resource-intensive queries, lock detection, execution plans, and 35 days of data retention. It’s designed by a former Oracle (there we go again) performance tuning consultant to match his workflow. You don’t have to install an agent, configure a repository, or keep the thing running. It’s in preview for Aurora PostgreSQL today, and will be rolled out to all RDS databases (including SQL Server) in 2017.
7. You don’t need to learn Aurora today. This isn’t going to be one of those doom-and-gloom posts that says, “The cloud is coming for your job!” The cool thing about Aurora is that it works just like the MySQL and PostgreSQL you know and love (okay, fine: don’t know anything about and aren’t particularly fond of) – it just takes away the crappy parts of database administration, like backups, corruption checking, and HA/DR failovers.
If you were a MySQL or PostgreSQL query/index tuner, you could keep right on working in Aurora – only now, your skills translate into direct cost improvements. If you can tune queries and indexes well, you can cut your company’s cloud bill directly – and see the improvement in your very next bill.
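You can see that payoff in miniature with any relational engine’s plan output. Here’s a toy sketch using Python’s built-in SQLite driver (not Aurora itself – the table and index names are invented for the demo): adding one index turns a full table scan into an index search, which is exactly the kind of I/O reduction that shows up on a metered cloud bill.

```python
# Toy sketch: show how an index changes a query plan from a full scan to an
# index search. SQLite stands in for Aurora MySQL/PostgreSQL here; the
# "orders" table and "ix_orders_customer" index are made up for the demo.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
con.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(10_000)],
)

query = "SELECT total FROM orders WHERE customer_id = 42"

# Before: no index on customer_id, so the engine scans every row.
for row in con.execute("EXPLAIN QUERY PLAN " + query):
    print(row[-1])  # plan detail mentions a full SCAN of orders

con.execute("CREATE INDEX ix_orders_customer ON orders (customer_id)")

# After: the same query seeks straight to the matching rows via the index.
for row in con.execute("EXPLAIN QUERY PLAN " + query):
    print(row[-1])  # plan detail mentions SEARCH ... USING INDEX
```

Same query, same results, a fraction of the pages read – and on a pay-per-use platform, fewer pages read means real money.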
The same thing will happen when Amazon comes for Oracle, and eventually, SQL Server. Your performance tuning skills will still work – and they’ll even be worth more.
As they say in Vegas, place your bets.
I think the greater challenge will be to get enterprise customers to move to Aurora, and for Amazon to prove that Aurora is a better proposition, both in database query performance and in lowering total cost of ownership (TCO). I suspect existing AWS customers will be the first to be targeted. As long as we don’t hear big horror stories about failed Aurora implementations, the trend is likely to continue as AWS sweetens the pot for an ever larger number of customers to move to their cloud. Nothing wrong with that, if both conditions are met.
Hamad – that’s a great point. I’ve never heard a horror story about conventional on-premises database servers, so I don’t know how Amazon could possibly –
Wait – wait, hold on a second – I’m being told that yes, there have actually been horror stories about on-premises SQL Server. Whoa.
Brent, indeed there have been horror stories, where companies have gone MIA (“Mad and Intense Arguments”) when their pixie-dust cloud databases have gone down. These outages usually revolve around these basic things:
1. Network outages or lags
2. Server downtime with no failover
3. Performance problems due to poor hardware specs
Your blogs have more than sufficiently covered all of these things in the past.
It is going to be an exciting time to be in databases. Let me grab my helmet and gear… where’s the paddle shifter? Let’s GO! I would rather race than be a spectator. Although I am a SQL DBA, it might be smart to look into AWS, because you know the boss is going to call you into his office and ask about it.
If you’re lucky the boss will ask you about it before some hotshot dev decides to run the migration tool on their own (so they can brag how awesome they are that they migrated everything to the cloud without anyone’s help), and before that bill comes in. More likely you’ll get called into the principal’s office after the fact because they figured going to the cloud meant they didn’t need a DBA anymore and now you have to swoop in and save them.
Seems to me that much of SQL Server query tuning involves understanding the physical database processing. Would this knowledge of the physical processing layer really translate well to AWS tuning?
Ken – oh absolutely. I find a lot of people think they’re query tuning Einsteins when in reality, their tables have no indexes and they’re calling multi-statement table-valued functions. (sigh)
This is still going to present issues for those with bulky result sets or high transactions per second. Network infrastructure just isn’t there yet, and can’t keep up with demand. Some corps will [have to] stick with on-site/data centers.
Of course, those who house their entire apps within the AWS space may feel no pinch whatsoever. 😛
Graeme – can you rephrase that? It sounds like you’re saying your company’s network is better than Amazon’s, which, while entirely possible, raised my eyebrows higher than I thought physically possible. 😉
My initial image of this relationship was not an application that resides entirely within the AWS space, but one that used AWS for remote database services only. FYI, most people are unaware as to how high eyebrows may travel on the face. I have a feeling that the next four years in the US may cause some of us to encounter that limit and perhaps set new records.
Graeme – oh yeah, absolutely. I’m not a fan of using remote databases – your app should be as close to the database as possible.
You got it about the next four years too, hahaha!