The 6 Best Things Microsoft Ever Did to SQL Server
This entire blog post is driven by the #1 feature in this list. I think about the #1 feature a lot, like at least once a week. I think about it so much that I had to stop and think about what other similar great things Microsoft has done over the years, and be thankful for what a nice platform this is to work with. Let’s go through 6 of my favorite Microsoft decisions.
I have to warn you: some of my takes are weird.
#6: English Query – see, I told you some were weird. Decades ago, you could ask the database server for what you wanted in plain English, and get results back. Microsoft was way ahead of their time when they brought this feature out in SQL Server 6.5, and eventually removed it by 2005. The technology just wasn’t capable enough yet, but the idea was so good that it has to make this list. I applaud their efforts to be really cutting edge back then: they did the best they could with the technology of the time, and I’m actually thankful that they didn’t try to revisit this in SQL Server 2025. While it was a good idea to attempt this in the data layer 25 years ago, it doesn’t make sense today: rapidly evolving LLMs are better suited to the application layer than to being bolted directly into the database engine.
#5: Always On Availability Groups (AGs, BAGs, DAGs) – when Availability Groups (AGs) came out in 2012, they aimed to replace a whole slew of high availability and disaster recovery features, each of which was based on different technologies and had different controls. Failover clustered instances, database mirroring, log shipping, bidirectional replication, and SAN replication all still survive today, but Microsoft’s continued investments in AGs meant that they could serve as the foundation for a lot of cloud implementations, including Managed Instances. I do wish they’d invested more in making the tech easy to implement, configure, and troubleshoot, but I still gotta include AGs in this list. It was the right idea, at the right time, and I was really excited when they came out.
#4: SSAS, SSIS, and SSRS – it’s wild to think back to 2000-2005 when Crystal Reports dominated the data visualization business. Everybody used it everywhere. It was such a de facto standard amongst small to midsize businesses that Microsoft had to bundle SQL Server Reporting Services for free with your purchase of SQL Server just to be competitive. Similarly, they had to give away Integration Services for free to compete with tools like Informatica. For about a decade, Microsoft kept pouring improvements into these products, and if you bet your career on ’em, you did pretty well for a long period of time. Today, they’re falling out of popularity, but I’d be remiss if I didn’t include them in this list because they did so well for so long.
#3: Standard & Enterprise Edition Feature Parity – up until about a decade ago, it was a real pain in the ass to advise developers on which database features to use. Stuff like Transparent Data Encryption, partitioning, columnstore indexes, auditing, change data capture, and compression were only available in Expensive Edition. SQL Server 2016 SP1 brought those features to Standard Edition, making developers’ lives easier. I bet this decision involved a lot of meetings and arguing, but the result paid off.
#2: Dynamic Management Views (DMVs) – we take them for granted now, but back in SQL Server 2000, you couldn’t just run SELECT queries to get diagnostic data about the health and performance of your database server. Today, we use a single simple language to access our user data and our server’s metadata the same way, selecting it out, processing it for reporting, and taking actions on it. It’s not perfect, but it sure beats the DBCC commands we had back in the day.
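To make that concrete, here’s a minimal sketch of the kind of check that used to take DBCC gymnastics and is now just a plain SELECT (the session filter is only illustrative):

-- What’s running right now, what it’s waiting on, and the query text,
-- straight from the DMVs (available since SQL Server 2005):
SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;  -- skip our own session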
And #1: Only Making Additive Changes to T-SQL
You probably read those six words seven times, and you’re only beginning to understand the best thing Microsoft ever did for SQL Server. To really get it, read through the tumultuous history of .NET, which went through a rollercoaster of changes. There was .NET, .NET Core, .NET Framework 4.0, back to plain .NET, with crazy changes along the way for Windows Forms, Windows Presentation Foundation, Universal Windows Platform… I can’t even begin to do justice to how much work developers have had to do to just keep shipping the same app, but with current best practices.
On the other hand, queries you wrote in 2000 still just work today, period.
Sure, you might worry about query performance due to execution plan changes, but by and large, you’ve been able to take the exact same database application and just keep connecting it to newer and newer versions of SQL Server without having to rewrite the application itself. There’s no T-SQL 2000, T-SQL Core, or T-SQL Framework 4.0. There have been deprecated features, but those are almost all management-related, not changes that would affect app code.
That’s not to say T-SQL stood still: Microsoft has been gradually adding new capabilities over the years. Not to say all of those additions have been improvements or problem-free, either – but at least they’ve been continuously additive rather than requiring application changes to maintain the same level of functionality.
Next up, we’ll revisit this topic and talk through the worst things Microsoft ever did to SQL Server. Buckle up!
46 Comments
Looking forward to next week’s list. I have opinions 🙂
That’s gonna be the one that causes controversy, hahaha.
Completely agree with #4.
I’d set up many shops for a Microsoft-centered BI data platform, which paved the way for Power BI and Fabric.
Looking forward to the worst things 🙂
Looking back, #4 was a really bold risk at the time, but boy did it pay off.
#4
Can’t say much about SSAS (we are using SSAS in Azure) and SSRS, but SSIS is still going strong and used by lots of companies… my two cents. Completely agree with Brent’s statement that it paid off.
If SSIS weren’t such a diva, I guess it would be much more loved.
It’s fine once you finally have a working package and nothing changes for the next 20 years, but even loading a CSV can turn into a pain when it throws tons of useless error messages…
What a great list and great idea for a blog post!
I think #2 is far more critical than most folks think. Prior to DMVs, you really only had sysprocesses (and the views riding over it) to look at perf metrics in real time. And back then, I can’t remember any DBAs who actually used it to look at _waits_. Most DBAs didn’t know what _waits_ were or why they were important. Instead, everyone was focused on using Profiler/SQL Trace, looking for long-running queries, and then attempting to fix the _symptoms_ of the long-running query. And that was pure intuition and guesswork.
Nowadays, almost nobody does it this way. In the Oracle world they always had the notion of v$ and sys views, and they taught their DBAs to use the YAPP method to troubleshoot, which is basically to figure out why the database was _waiting_ and then fix that issue. Profiler/SQL Trace doesn’t do this. A few folks realized this was the wrong way to get to root cause and would snapshot sysprocesses every x seconds, then look at the wait columns and try to deduce what was wrong and fix that, but again, it was usually done using Profiler. It was difficult for many years because you would see things like PAGEIOLATCH and didn’t really know what was going on until you dug deeply into the internals. Eventually I think MSFT realized that sysprocesses and the wait columns weren’t enough and finally gave us the DMVs, but it still took years before the old-school DBAs changed their mentalities. And the thing MSFT really missed the boat on was giving actual guidance on _how_ to look at the DMVs, decode them, and a _method_ to do all of this that was repeatable and a _standard_ everyone could get behind… like Oracle’s YAPP method.
The DMVs are soooo important that when SQL DW came out around, whatever, 2014, it had new DMVs but very little documentation on what an MPP was or how it behaved. It took years and a number of brave souls who would look at the DMVs and deduce the real performance killers like the DMS (data movement service). Then you could deduce counterintuitive things about SQL DW (PDW is/was similar), like… in many cases you get better performance using fewer nodes. Huh? Seems counterintuitive. This is STILL undocumented and lacks official guidance from Microsoft, but is very much still true today with Fabric DW.
Nowadays we take the _concept_ of DMVs for granted, but there are still DBMSs like Spark where you don’t have the luxury of DMVs to diagnose weirdness, so you’re stuck piecing together items from logs and whatnot to try to figure out where the _wait_ is. What the DMVs really gave us performance tuners was the understanding that _the wait_ is really the most important thing, and the ability to find the waits.
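For the curious, a minimal sketch of that wait-based approach against the DMVs – the excluded wait types below are illustrative, not an exhaustive list of benign waits:

-- Cumulative waits since the instance started, biggest first:
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0 AS wait_time_seconds,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;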
Thanks, glad you liked it! I agree – DMVs were so magical at the time, and today, I love that new folks can just rely on them without thinking.
If data folks had to deal with the amount of change that we devs have had to deal with, there would be a lot fewer data folks.
One of the reasons I’ve transitioned from combined dev & DBA to primarily DBA is the stability of T-SQL. Brent only brought up .NET flavors – I don’t know how any dev can keep up with the continual churn of frameworks, packages, language dialects, and multiple ways to do the same thing (SPA, Node, React, MVC, Razor, Blazor, EF, Dapper, NHibernate, etc.). And now there’s Power Apps (which I’m really hoping are NOT the FrontPage of the 2020s).
Actually, that’s the reason why I shifted to the world of data. I got tired of the upheavals in the front-end world. I think it’s where original thoughts like “Fix it ’til it’s broken” and “Just deliver it… the users will tell you what’s wrong” came from. It’s also why I’ve had a bit of a soft spot for the developers at the companies I’ve worked for in the past. Before they (we, in the past) could even learn the basics of a new tool, either the tool went away or changed enough that you had to relearn it, only to have it be changed again… and again… and again.
I can’t speak for others but, in the data world, new stuff frequently comes out in a crippled or less than adequate state and so we learn to hate it so much that, when they finally do fix it, we’re skeptical and don’t want to spend a whole lot of time testing supposed “improvements” because, in a lot of cases, the improvements just didn’t hack it. My favorite example of that is “Extended Events” and I still can’t figure out why in the bloody blue blazes they store stuff as XML. They’ll have to pry SQL Profiler from my cold, dead hands for most things.
Cascading Style Sheets pushed me over to the “dark side” 100% in 2002.
re: #1, I wish MS would add a trace flag that prevents legacy syntax from compiling – like SQL-89 joins, double-quoted strings, single-quoted object names, 4-part column naming, adjacent aliasing (this drives me absolutely nuts) – and let new platforms build on a clean code base.
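For anyone who hasn’t bumped into those, a rough sketch of a couple of the offenders next to the modern equivalent (table and column names are made up, and “adjacent aliasing” is shown here as an alias with no AS keyword):

-- Legacy style: SQL-89 join in the WHERE clause, aliases with no AS:
SELECT o.OrderId OrderNumber, c.CustomerName
FROM dbo.Orders o, dbo.Customers c
WHERE o.CustomerId = c.CustomerId;

-- Modern equivalent: explicit INNER JOIN and explicit AS:
SELECT o.OrderId AS OrderNumber, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
  ON o.CustomerId = c.CustomerId;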
I think about #1 all the time too and I agree wholeheartedly 😀
Wayyyyyy back in the olden days, the deprecation of the older RAISERROR syntax bit me, but it only took like an hour to fix database-wide. So I chalk that up to a good early-career lesson learned about doing research before upgrading to a new version of SQL Server 😀
Just reading your sentence mentioning “T-SQL 2000, T-SQL Core, or T-SQL Framework 4.0” made me shudder. So glad to not be a programmer 😀
As a DBA who deals with multiple AO groups every single day, the implementation is incomplete (to be polite). Without built-in tools to sync logins between nodes, to set scheduled job priority (which node to execute on – especially for backups), etc., the management of AO groups leaves MUCH to be desired.
Backup preference is configurable in an AG. It can be set to primary, secondary only, prefer secondary, or any replica; when it’s set to anything other than primary, the replicas’ backup priorities tell the job whether it should use that node to back up.
You can also use it to dynamically offload some other jobs onto secondaries by setting specific priorities that don’t interfere with your backup replica selection but that jobs can key off of to designate which node they run on.
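A minimal sketch of that pattern – the database name and backup path are made up, and the built-in function returns 1 only on the replica the AG currently prefers for backups:

IF sys.fn_hadr_backup_is_preferred_replica(N'SalesDB') = 1
BEGIN
    -- This node is the preferred backup replica, so do the work here.
    BACKUP LOG SalesDB TO DISK = N'X:\Backups\SalesDB.trn';
END
ELSE
BEGIN
    PRINT 'Not the preferred backup replica; skipping.';
END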
In my experience (your mileage may vary), backup priority failed spectacularly when a node was drifting out of sync. For example, management *thought* log backups were still being done on an outdated secondary – but transactions were piling up on the primary node.
Some interesting takes there. I’ll admit I was thinking “English Query” as I’ve seen all of these tie-ins to LLMs to ask about what’s in our data. That was definitely _way_ ahead of its time and took a little too much effort to be efficient.
I think there’s still a lot to be said for SSIS/SSRS. Sometimes you just want a quick report or ETL and they still just work. I know I’ve wanted some basic reports to go out for things and don’t really want to pay the Power BI markups to get that going. SSRS is included and can be lightweight for simple reports. We had a need recently for some ETL to get data out/in for a table, and SSIS can be a great solution for that with larger data sets. I’m all for some of the more modern options, but there are times that you can set up a package, tweak the batches, and just let it run. I wish those tools got a little more attention, though I understand why MS is pushing more to Azure services than on-prem services from a business standpoint.
I’d never considered the consistency of T-SQL being a feature, but that’s a good call. We’ve had some rewrites over the years for efficiency, but most of that code just keeps going.
Great post, but it begs the question: “What are the 6 worst things about SQL Server?”
I’ll start with making SSIS no longer integrate with SSMS (without registry hacks).
LINQ improves over SQL on compositionality, type safety, etc. You should translate your 2000-era queries.
Reread the #1 part slowly, out loud.
WinForms apps didn’t break. It’s just that we can write better code. The same is true of SQL -> LINQ.
Again, you’re not hearing what the post is saying. You may want to set it aside, go on about your life for a day or two, and then come back to it with a fresh set of eyes and see it differently. Cheers!
#1: Only Making Additive Changes to T-SQL
MS in 2008: Although the semicolon isn’t required for most statements in this version of SQL Server, it will be required in a future version.
SQL Devs: [gathering pitchforks and torches]
MS: Umm, JK?
Great list.
I would add INFORMATION_SCHEMA views, but that’s probably because I use them a lot to generate SQL code.
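For instance, a minimal sketch of that kind of code generation – the output is just text you’d review before running it:

-- Generate a row-count query for every user table:
SELECT 'SELECT ''' + TABLE_SCHEMA + '.' + TABLE_NAME + ''' AS table_name, COUNT(*) AS row_count FROM '
       + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME) + ';' AS generated_sql
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_SCHEMA, TABLE_NAME;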
Excellent list. #4 resonates with me. A 2-week engagement creating SSRS reports for a customer led to a 3-year engagement working on a large data warehouse project with the same customer.
#5 enabled a lot of consulting opportunity. While it could be a pain to configure, with a lot of nuances, it worked great.
Wow, what an awesome list. And the mention of Crystal is so true. To this day it remains the only certification I have after 14 years working in MSSQL and Oracle, and I still leave it off my resume on purpose.
Can’t wait for more!
I am not sure which of the 5 I would displace, but Temporal Tables for me are pretty huge. Such a giant improvement over all kinds of kludgy trigger code sprinkled everywhere in a database.
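A minimal sketch of what replaces that trigger code, assuming SQL Server 2016 or later (table and column names are made up):

-- System-versioned temporal table: history is maintained automatically.
CREATE TABLE dbo.Customers
(
    CustomerId   int           NOT NULL PRIMARY KEY,
    CustomerName nvarchar(200) NOT NULL,
    ValidFrom    datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo      datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomersHistory));

-- Point-in-time query, no triggers required:
SELECT * FROM dbo.Customers FOR SYSTEM_TIME AS OF '2024-01-01';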
Very informative, Brent. I am working closely with Microsoft at the moment with respect to 3-node high availability. One pitfall we have encountered with upgrading CUs in an HA environment: the C$ share is required by Microsoft on each node. As fallout from the cyber attack on our company, this is no longer permitted. It will be interesting to see how our conversation with Microsoft evolves (as the skip-node-discovery override when applying a CU also does not work). From experience to date, I think high availability patching has a bit to go yet. Great articles – thanks Brent. Derek
C$ could be a honeypot too.
Instead of your real C: drive, you could share another folder – either empty or, better, filled with something that looks like the real installation – so any attacker would be fooled into modifying stuff there (where you could theoretically monitor it for changes).
Surely columnstore is worth a shout-out, and batch mode a worthy mention.
Sadly Service Broker’s absence is due to its firm spot at #7 on this week’s countdown.
The rollercoaster of .NET changes led me to quit being a C# developer years ago. I had always dabbled in PL/SQL or T-SQL work as well, but that got me to switch entirely over to being a DBA. I just felt like I was having to relearn the libraries once or twice a year instead of being productive and deepening my skills. Since then I keep building my knowledge instead of continually starting over, I’m less frustrated, and I make a LOT more money.
#4: SSAS, SSIS, and SSRS
I can’t count how many times I have had contempt for whoever at MS was responsible for SSIS, and to a degree SSRS. They are certainly a necessary evil, but they were implemented as poorly as you could get. If only they were done differently, how MUCH more useful they could be. For how many years did I hope the next version would improve or get rid of all the flaws, but it never happened. They are still a bane to whoever has to do anything mildly complex with them.
I would add columnstore indexes to this. Have made a huge difference in our data warehouse.
It’s like an index on every column (just kidding 😛)
For the sake of completeness: the index on every column is unsorted. Or randomly sorted (by default it orders the whole rowgroup (~1 million rows) by best compressibility, which is usually equal to “random”).
In newer versions you can force a specific order, and in older versions you could create a regular clustered index first and replace it with a columnstore (MAXDOP = 1, DROP_EXISTING = ON) to force an order, but even with that, e.g. your name, street, or sex column will end up randomly ordered.
But it knows the min/max value of each column for every segment (part of the rowgroup), so it can do segment elimination for many columns (works best on incremental columns, only sometimes on names, and very well when you are looking for very high / low values, e.g. salary > 1 million).
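A rough sketch of the ordering tricks described above – index, table, and column names are made up, and the two options are alternatives, not steps to run together:

-- Option A (SQL Server 2022+): specify an order column directly.
CREATE CLUSTERED COLUMNSTORE INDEX cci_Sales
    ON dbo.Sales
    ORDER (SaleDate);

-- Option B (older versions): build a rowstore clustered index first,
-- then convert it, keeping MAXDOP = 1 so the order roughly survives.
CREATE CLUSTERED INDEX cci_Sales ON dbo.Sales (SaleDate);
CREATE CLUSTERED COLUMNSTORE INDEX cci_Sales ON dbo.Sales
    WITH (DROP_EXISTING = ON, MAXDOP = 1);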
Alongside #1, I think a fair mention is the ability to upgrade databases from one version of SQL Server to the next with barely any effort. It’s almost reasonable to take a 20-year-old database running on SQL 2005 and restore it on a recent SQL Server instance with very few tweaks. Just try doing that on some other database platforms…!
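A minimal sketch of the usual two-step move – the paths, logical file names, and target compatibility level are made up, and the source version does have to be within the range the newer instance still accepts:

RESTORE DATABASE OldApp
    FROM DISK = N'X:\Backups\OldApp.bak'
    WITH MOVE N'OldApp'     TO N'D:\Data\OldApp.mdf',
         MOVE N'OldApp_log' TO N'L:\Logs\OldApp_log.ldf';

-- Optionally move the database onto the newer engine's behavior:
ALTER DATABASE OldApp SET COMPATIBILITY_LEVEL = 160;  -- 160 = SQL Server 2022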
AGs are the only reason we didn’t crash and burn in AWS, even though the AWS money meter gets us on the cross-AZ data transfer.
I was SUCH a fan of English Query. I wrote so many articles about it….that I’m not sure anyone read 🙂
At Microsoft, when I was on the SQL team, I met with my (then; now he’s at Google making them great) boss and told him, “We should call the next release the ‘There, I fixed it’ edition. It will have ZERO new features, but we fix everything that is broken.” He laughed and said, “It would be our most popular release ever, and I would get fired the next day.” For senior leadership it’s Features over Fixes – that’s the way it is sometimes. Well, now it’s AI over Everything, but that’s another post. 🙂
Heh heh heh – the only video game I play, Dead by Daylight, announced that for 6 months, they were going to stop shipping new content and instead only work on patches to fix bugs and gameplay issues. The live streamers were really excited, except one streamer, Scott Jund, who also happens to be a software developer.
Scott put out a video saying, “What makes you think that the same people who created all these issues, and did the initial designs that have gameplay issues, will suddenly be in a position to FIX all of them? Why would the same designers and coders suddenly understand gameplay better, in a way that their changes won’t have new problems?”
Sure enough, that was exactly what happened. After 3 months, the game developers put a new version out into beta, and it immediately had huge, massive problems – so many that they rolled all their “fixes” back! Then they did the same thing 3 months later.
So the 6 months just finished yesterday, and the game is in exactly the same state it was 6 months ago, because almost all the work was rolled back and never deployed, hahaha. (Only now we didn’t get any content for 6 months either!)
Well, that’s a thing. “The things that got us here probably aren’t going to get us out of here” is sadly true. On the other hand, with all the folks leaving the SQL team maybe it’s a chance to get new, exciting bugs! 🙂
I concur with SSIS and SSAS as worthwhile features. SSRS is a potent tool for pixel-perfect – printable – output, but quite a nasty tool to work with. It has so many nooks and crannies that, for casual developers, it really pales next to more dedicated tools like SAS, Business Objects, Cognos, and the like.