Databases Five Years from Today

Five years from now, in March 2018, what will be different about databases?  I’m a Microsoft SQL Server guy, so keep that in mind when reading my thoughts.

Looking Back at 2008-2013

The big story here for me was virtualization.  In mid-2008, Microsoft added support for SQL Server running under Hyper-V, and later they added support for VMware as well.  Today, almost all of my clients have at least one virtual SQL Server.  They typically start with development/QA/test instances, then disaster recovery, then production.

For the remaining physical servers, cheaper memory and solid state storage meant you could get insane performance out of commodity hardware.  Microsoft got caught without a good scale-out story, but cheap scale-up prices meant it didn’t matter.  I don’t know whether this was luck or strategy, but good on ’em either way.  In 2008, I never would have believed that 2013 Brent would have a dozen clients with 2TB of memory per server.  Over the last couple of years, I’ve had a few companies think they needed to switch database platforms in order to get the performance they need, but the real answer has always been quick (and relatively cheap) changes to code, configuration, and hardware.

Management tools didn’t really change at all, and that’s a story in and of itself.  Microsoft made several attempts to change how DBAs and developers interact with databases – Visual Studio Data Dude, DACPACs, Utility Control Points.  When SQL 2008R2 came out, I wrote about why these features wouldn’t really have an impact, and today, I’m not surprised that they’re just not getting any serious adoption.  Microsoft threw in the towel on Data Dude and replaced it with SQL Server Data Tools, but didn’t include all of the functionality.  I don’t see a lot of developer confidence in Microsoft’s short attention span here, so tooling hasn’t been a big story.  (We did get an all-new SQL Server Management Studio under the hood, but Microsoft went to great pains to ensure it looked/worked basically the same as the old one, so…yeah.)

Business Intelligence (BI) got a lot of headlines, but here we’ve got another hyper-distraction story.  Microsoft threw so many different tools against the wall that the naming even became a joke – does PowerPivotPointPro use the XVelocitySuperMart in v3.14?  I don’t envy the BI pros who have to keep up with this jumbled mess of licenses, features, and names, but I do think Microsoft is heading in the right direction.  The combination of Excel, SharePoint, columnar storage, and hella fast laptops means Microsoft is in a good spot to give insight to managers.  It just wasn’t a huge revolution in 2008-2013 because the stories and products kept changing.

Looking Forward at 2013-2018

When our servers were physical, they had a built-in expiration date.  Hardware support would come to an end, and we’d be under pressure to migrate them onto more reliable hardware.  We often included a SQL Server version upgrade in that same project.

Those days are over.  The combination of virtualization and SQL 2005/2008 will leave an interesting legacy challenge for DBAs.  Once your SQL Server is virtualized, it’s really easy to get it off old hardware – just VMotion or LiveMigrate it to another host.  You can do it even while it’s still powered on.  Does that old version need some more horsepower?  Shut it down, add a couple of virtual CPUs and more memory, and power it back on.  What used to be a big ugly maintenance project is now a matter of just a reboot.

This means you’ll be supporting SQL Server 2005 and 2008 forever.

SQL Server 2000 has thankfully (mostly) already been exterminated from serious production work.  Its lack of management tools and mainstream support means it’s painful to troubleshoot, so most of us have already migrated production work to 2005 and 2008.  Support for those newer versions doesn’t end for years, so settle in and get comfy.  Sure, SQL Server 2005 and 2008 have bugs, and they’re missing cool features like backup compression in Standard Edition, but for the most part, they just work.  Businesses will stick with ’em for most applications because they don’t see enough compelling features in 2012.
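
For reference, native backup compression showed up in SQL Server 2008 Enterprise Edition and trickled down to Standard Edition in 2008 R2, and it’s a one-word change to the backup command. A quick sketch, with a made-up database name and path:

    -- Native backup compression (SQL Server 2008 Enterprise; Standard Edition from 2008 R2 on).
    -- Database name and backup path are placeholders.
    BACKUP DATABASE [SalesDB]
        TO DISK = N'D:\Backups\SalesDB_Full.bak'
        WITH COMPRESSION, CHECKSUM, INIT;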

In SQL Server 2012 and beyond, we’ve got:

- AlwaysOn Availability Groups with readable secondaries for scaling out reads
- Columnstore indexes
- Hekaton, the in-memory OLTP engine coming in the next version

Call me maybe crazy, but I don’t see really widespread adoption for any of these.  To do them right, we’ve gotta make changes to application code.  The changes won’t pay off for the majority of customers, so it’s risk without much reward.  Don’t get me wrong – when you need this kind of speed, then you need it, and the features are fantastic.  I do see widespread adoption coming in 2013-2018 for AlwaysOn, but only for high availability and disaster recovery, not the scale-out reads part.
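
To give one concrete example of what those code changes look like, here’s a rough sketch of the server-side setup for readable secondaries, using hypothetical availability group and replica names; even then, the application still has to change its connection string:

    -- Allow read-intent connections on the secondary replica (hypothetical names).
    ALTER AVAILABILITY GROUP [SalesAG]
    MODIFY REPLICA ON N'SQLNODE2'
    WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

    -- Point read-intent traffic at the secondary (a routing list on the primary is also needed).
    ALTER AVAILABILITY GROUP [SalesAG]
    MODIFY REPLICA ON N'SQLNODE2'
    WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://sqlnode2.contoso.com:1433'));

    -- The application must also add ApplicationIntent=ReadOnly to its connection string,
    -- and that is exactly the kind of code change most shops never get around to.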

The virtualization/SQL2005-is-good-enough combination also means we’re not going to see massive, widespread migrations from on-premise SQL Servers to cloud services like SQL Azure.  (We’re also not going to see people call it by its official product name, Microsoft Windows Azure SQL Database.)  Your app would require code changes to make that switch, and code changes are risky.

New development, on the other hand, means you can pick the features and compatibility you want.  In those environments…I still don’t see a lot of widespread SQL Azure adoption coming.  If I’m a developer building a new app from the ground up, I’m going to pick the cheapest, easiest database possible.  These days, that’s probably PostgreSQL.  Like SQL Server, it’s a platform that is available in a lot of different delivery mechanisms – on-premise on bare metal, on-premise virtualized, colo boxes, cloud-based services, etc.  It’s open source, and it has all kinds of cool features we don’t get in SQL Server.  I’m not changing horses mid-stream, but if I was starting again as a developer, I’d go that route.

What Do You See Changing?

When you look at your company’s needs, the job you do, and the tools you use, what do you see coming down the pike?  In 2018, what will be the defining thing that changed your work?

Update 2018/03 – hey, I did pretty well! Here are my thoughts on what happened, and what’s coming next.

35 Comments

  • I agree on the limited adoption for Availability Groups for scaling out reads. I think people love the sound of it, but once they learn that it means the passive node has to be fully licensed, they abandon the idea. It’s just not worth buying an extra Enterprise License for.

  • I love your honest way of writing, Brent, very nice read.

  • Well, taking into consideration the fact that my company still uses SQL 2008R2 as its main version (we even have a client with SQL 2000 on Win 2000, which will finally receive an upgrade to SQL 2008R2 later this year) and plans an upgrade to 2012 only in the distant future (1-2 years), I might find myself working with SQL2012 only in that distant future :-).

    So 2018 here I come with the latest technology from 6 years ago :D. There’s a small advantage to me though… this allows lots of time to play with all the 2012 toys. Yey! Yeah, sure.. how do you convince a company that an upgrade is due? 🙂

    Care to make a list of bad things that Karma can do to you if you don’t upgrade?

  • Are you referring to yourself in the third person or just using the company name? Anyway, that is the fifth time I have read a post with you, Brent, stating that PostgreSQL is the future. Hmmmm, interesting.

    Anywho great post as always.

  • I feel like an era of massive adoption of IT is coming – not that this was not the case up until now, but in the next 5-10 years, businesses will really understand the value of enterprise systems and what they can offer! That said, when I start thinking of what databases will look like, I am (strangely, but still) thinking of what other technologies they will be surrounded with. What I think will happen with the database world is that it will become a lot more complex and at the same time a lot more automated. Virtualization is just part of the equation, but it will add enormous complexity to any environment. Manageability will go in the “automate that also” direction, because there will be so many systems, which, if not automated, will just add to the nightmare of the whole enterprise IT environment. All of this leads me to technologies that I was talking about years back to my friends and teammates – PowerShell and the System Center products. I believe these two, together with SQL Server, will be part of our everyday life, and if we are not seeing and realizing that already, we definitely need to, because otherwise we will not be relevant very soon…

    • Boris – it’s always a safe bet that we’ll see more complexity and automation, that’s true. However, I’m not worried about being irrelevant – after all, even the most basic tasks like user administration in Windows still aren’t really automated, and there’s plenty of systems administrators.

      • Agreed. However, one sysadmin today is doing a hell of a lot more stuff than the same guy years ago and I definitely believe he will do even more after 5 years. Sure, it is not only because of automation, but because of just the products changing, but we are still seeing the sysadmins and DBAs start being responsible for more and more servers and more and more technologies (which, oh by the way, is great!).

  • Hi Brent.
    Brent, are you saying that DBAs should move to development just to have a job in the future? Or will developers have more job opportunities than DBAs? I am a mid-level DBA with 4-5 years of experience, but I hate (am not good at) programming.
    Thanks Brent, good article as always.

  • Nice to see you’re throwing up the Kool-Aid 🙂

  • Hella fast laptops YEAHHHHHH

  • Great article as usual Brent. I totally agree with your assessment of the slow adoption of 2012. It has a lot of bells and whistles (Enterprise only) that distract from the fact that MS has chosen to ignore the little guys. I have done DBA work for 10 years and have still not been at a company that will spring for the cost of Enterprise, so I am in the boat of getting something duct-taped together with Standard and some creative scripting to try to emulate what Enterprise can do. I keep hearing more and more that PostgreSQL is something we need to look into, but I have not found that community as helpful and dependable for a little guy like me. When I run into an issue I can always find a helpful blog post, either from yourself or from the many great bloggers in the SQL community. I don’t see that with PostgreSQL, and until that changes I am reluctant to dive into that world.

    As far as the future is concerned I see more of the same from little guys like me. Trying to get the most out of SQL Server Standard and creative scripting. Thanks for all the help in the past and FUTURE.

  • QUOTE: “Sure, SQL Server 2005 and 2008 have bugs, and they’re missing cool features like backup compression in Standard Edition, but for the most part, they just work.” SQL Server 2005 Enterprise Edition does not have a compression option either… or does it??

    • Hi, Sudhir. Nope, 2005 doesn’t have compression, but it’s really easy to add in the form of third party backup programs like Quest LiteSpeed, Red Gate SQL Backup, and Idera SQLsafe. Way cheaper than Enterprise Edition, too!

  • Thanks Brent for the reply. I was telling my manager for the last couple of months or so that we need backup compression on SQL 2005 as it does not have one natively. Finally, we budgeted to buy one just for the sake of compression. I had sweaty palms for a bit when I read the quoted statement, and more so because I know you could not be wrong……Anyways, huge thanks to you and your team for the wonderful content you guys always provide (and free :)…I can only imagine how much more you guys can offer in your paid classes..:)).

  • We still have 8 production databases on SQL2000. Over the last 12 months we have migrated 20 databases to SQL2008/2008R2. Like you, I see SQL2005 and 2008 systems continuing on for years with the advent of VMware. Just migrate it off whatever hardware to newer hardware… no changes to worry about. There is good and bad with that. I believe we still have 8 Win2000 application servers here and a LOT of Win2003 still.

  • Dony van Vliet
    March 19, 2013 12:29 pm

    Brent, I can’t tell you what the future will bring us, but I can tell you some nice features I’d like to have:

    – A database without explicit index definitions. Let the engine decide which indexes are needed and offer the best query performance. Run a scheduled job that creates these indexes and removes indexes that are no longer needed. Maybe this could all be done by a background process on the fly.

    – Defragmentation on the fly. No need to schedule jobs to rebuild indexes for optimal query performance.

    – Table inheritance. If you have an Employees and a Clients table, you can create a Persons view with a union on their common fields to accomplish this, but if you need an extra field in Persons it requires a lot of maintenance on both tables and the view. This feature would close a gap between the object oriented world of programmers and the relational world of databases.

    I think this wish list will suffice for the next five years to come. That we may live in interesting times …

    • Dony – hmm, interesting. There are databases that accomplish the first two, but they’re in the multi-million-dollar range. It’s tough to accomplish those in the $2k-per-core range. 😉

      • Good article Brent. Interested to know which mm dollar db systems you are referring to in this response?

        As for 2012, I think there will be good adoption of columnstore in warehousing environments. I think the cost of storage will be a big inhibitor to widespread AlwaysOn adoption, but costs are coming down I guess.
        Cheers
        Des

    • Jeremiah Peschka
      March 30, 2013 4:33 pm

      Hi Dony,

      I’m not sure if this is exactly what you mean, but PostgreSQL has had table inheritance for a while (http://www.postgresql.org/docs/current/static/ddl-inherit.html). If that’s a critical feature for you, that might be worth a look.
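
      For anyone curious, a minimal sketch of what that inheritance looks like in PostgreSQL, with made-up table names:

          -- Parent table with the shared columns.
          CREATE TABLE persons (
              person_id  serial PRIMARY KEY,
              full_name  text NOT NULL
          );

          -- Child table adds its own columns and inherits the parent's.
          CREATE TABLE employees (
              hire_date  date NOT NULL
          ) INHERITS (persons);

          -- Selecting from the parent also returns employee rows.
          SELECT full_name FROM persons;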

  • In the next 5 years I see updatable, clustered columnstore indexes – so yeah they WILL be used a lot.

    And Hekaton is supposed to be mostly a config change. If true, who *wouldn’t* use it if the data fits in RAM?
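
    For reference, the updatable clustered columnstore index that did arrive in SQL Server 2014 really is a one-liner, shown here against a hypothetical fact table; Hekaton, on the other hand, needs a memory-optimized filegroup and tables recreated as memory-optimized, so it turned out to be a bit more than a config change:

        -- SQL Server 2014+: converts the table's storage to updatable columnstore (hypothetical table name).
        CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
            ON dbo.FactSales;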

  • Brent,

    I have been reading your posts about virtualization of SQL Server, both why and why not, and I’m curious if you have any updates to those articles from 2009. Considering many more companies of varying type and industry are moving their production data into the virtual world, I’d be interested in hearing any new thoughts you might have on the subject.

    I’ve run into more than a few situations where virtual database servers ended up being bad for many of the reasons you have already cited, but I don’t want to make any sweeping generalizations either.

    Cheers!
    PS. Truly enjoy your writing style. Very entertaining and informative.

    • Jeff – thanks, glad you like my work! I don’t have any new thoughts on the pros, cons, and company challenges of virtualization. I do have some procedural posts I’ve been working on, like how to choose when you should do a physical cluster versus a standalone physical machine, but the basic pros & cons are still the same. Hope that helps!

  • don conrad
    May 7, 2013 3:36 pm

    I believe SQL Server will start being a report data-staging db for Hadoop databases. Those databases contain far more data than can be effectively processed by relational databases. However, data usually has to be organized differently than Hadoop stores it to be useful for reporting. Witness MS’s recent Hadoop interface.
    It won’t replace the standard storage role for SQL Server; it will supplement it.
    Thoughts?

    • Don – hmm, that’s interesting. So if you were going to pick any database in the world to stage report data for Hadoop, which database would it be, and why?

      I like SQL Server a lot, but I don’t think I’d use the expensive SQL Server licenses to stage report data. Something open source like PostgreSQL might be a better choice.

  • *Bump – Staring down the barrel of 2017 – what do you reckon now, Brent? Think we need an updated view as per the World of Ozar 😉

    • Gerhard – well, I’ll pose that to you first: looking at the post, do you believe it still stands, or is there anything that I was way off on?

      • Yeah, I think your assessment is still accurate – though seeing a bit more push to migrate off 2005/2008 to 2012 at least. It’s still driven by IT rather than business (it helps when you wave around an MS bulletin about end of life… but even then there is little real drive to do so, depends on the company culture). And as you said – no real demand for all the nice toys and features – they simply cost too much. Budgets ain’t what they used to be…

        I’m just trying to get my head around what my role as a consultant will look like in 5-10 years and how to stay relevant. At the moment, my answer is to actively hone my “basic” skills so they never atrophy, use PowerShell more (gosh, this stuff is really cool!), expand into what Boris (in another comment above…) mentioned as “supporting tech” (so virtualization, SAN and NAS technologies, storage coupled with new funky ways SQL interacts with them) and actually dip into BI a bit more (I see WAY WAY **WAAAAY** more roles available for BI than straight-up prod support, in the U.K. at least).

  • Recently reviewed the in-memory features of SQL Server 2016, which I believe will be the biggest feature in the next five years, but beyond that, within 10 years, arrays of NVMe drives over PCIe storage networks will allow for completely new database architectures. After reviewing the in-memory feature, I think it is still a bit too limited to be useful for my current project. After migrating a few features to this technology, I found the learning curve for natively compiled procs, memory-optimized tables, and their limitations is also higher than expected. But thankfully 2016 added outer join support! I hope MS keeps improving support for memory-optimized tables.
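
    A minimal sketch of what a memory-optimized table looks like in SQL Server 2014/2016, with illustrative names and bucket count; the database needs a MEMORY_OPTIMIZED_DATA filegroup first:

        -- Illustrative only; requires a MEMORY_OPTIMIZED_DATA filegroup on the database.
        CREATE TABLE dbo.SessionState
        (
            SessionId   INT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Payload     VARBINARY(4000) NULL,
            LastTouched DATETIME2 NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);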

  • “Five years from now, in March 2018, what will be different about databases? ”

    It is in 3 weeks! 🙂

