30 years ago, back in 1988, Microsoft, Ashton-Tate, and Sybase got together to start building SQL Server 1.0 for OS/2.
I wanted to take a trip down memory lane to celebrate, but I didn’t wanna go back quite that far. I wanted to go back to what would probably be fairly recognizable – say, over 20 years ago: SQL Server 6.5 as of 1996.
I picked up half a dozen used books about SQL Server 6.5, then spent a delightful weekend reading them. Seriously delightful – lemme tell you just how into it I was. Erika and I eat all weekend meals out at restaurants, but she saw me so happily curled up in my chair reading that she insisted on going out and getting tacos for us just so I wouldn’t have to get up. I was having that good of a time READING BOOKS ABOUT SQL SERVER 6.5. (Also, Erika is amazing. Moving on.)
To bring you that same fun, I wanna share with you a few pages from Inside SQL Server 6.5 by Ron Soukup, one of the fathers of SQL Server:
(And please do NOT pay full price for that. Used ones pop up for sale all the time, or support your local Goodwill or used book store. It’s not like it really matters which one of these 6.5 books you buy, hahaha – they’re all a hoot to read.)
This book was written long before Microsoft Press descended into bland marketing material. Back then, Ron could still pepper the book with all kinds of personal opinions and stories, bringing the material to life. Forgive the pictures – my scanner’s got a big line going down the middle, and I’ll be doggone if I’m going to buy even MORE old technology!
Now, let’s move on to choosing the right hardware:
That’s right: SQL Server used to support Intel x86, MIPS R4000, DEC Alpha, and Motorola PowerPC. When I first read that, I thought, “Wow, I forgot how much of a Wild West hardware used to be.” But you know what? Today’s really no different – cloud vendors are the new processor choices. (And just like you, dear reader, don’t usually make the decision about the cloud vendor, you wouldn’t have been the one making the hardware decision either. That was done by big executives on golf courses as they talked to vendors.)
Once you’ve picked hardware, now it’s time to fill that box up with memory:
Choosing storage is important too, of course:
As antiquated as this stuff sounds, DBAs are still dealing with the same problems today – they just have different names now. The work you have to do today to get maximum throughput from Azure VMs is really no different from what these people were monkeying around with 20 years ago. It’s just that instead of 6 drives per SCSI channel, now you’re dealing with how many Premium Disks you have to stripe together to saturate the network throughput of a given VM type. Same math, different names.
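Literally the same math, too. Here’s a back-of-the-napkin version – the throughput numbers below are invented for illustration, so substitute your VM size’s documented uncached disk cap and your disk tier’s rated throughput:

```sql
-- Hypothetical caps for illustration only: a VM limited to 1,200 MB/sec
-- of uncached disk throughput, striped from disks rated 200 MB/sec each.
DECLARE @vm_cap_mbps int = 1200,
        @disk_mbps   int = 200;

SELECT CEILING(1.0 * @vm_cap_mbps / @disk_mbps) AS disks_to_stripe;  -- 6
```

Six drives per SCSI channel then, six disks per stripe now – Grandpa would feel right at home.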
Then, how are we going to license our 1996 server?
Hold on, hold on, that looks cheap, but it’s not as cheap as you think. When you translate it into today’s dollars due to infl….uh, actually, yeah, that’s still pretty cheap.
Alright, moving on. It’s purchased – let’s start installing and configuring it. Take memory, for example:
Twenty years ago, way back in the dark ages, we had to use formulas in order to tell SQL Server – this massively powerful database system – how much memory it was allowed to use.
Today…we still do.
Today, software that costs $7,000 per core still can’t suggest a default amount of memory to take. These are the things that really blow me away as I read these books – sure, the terms have changed, but so much of the work we’re doing today is still exactly the same. I keep hearing about these magical robots that are going to take our job – so are we going to have to tell the robots what their max memory is, too? Or set their MAXDOP so it isn’t unlimited by default? (That’s how the world is going to end, by the way: Microsoft is going to bring out a robot, tell us to set its MAXDOP as part of the unboxing process, and a bunch of people are just going to open the box, let it breed, and we’re all going to die. Seriously, Microsoft, nobody reads the manual. Try not to kill us.)
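And in case you’ve somehow avoided the ritual, here’s roughly what it still looks like today. The OS headroom number below is a common rule of thumb, not an official formula – which is exactly my point:

```sql
-- On a hypothetical 64 GB server, leave the OS ~4 GB of headroom and
-- cap SQL Server at the rest. (The 4 GB reservation is a rule of thumb,
-- not something the product will ever suggest for you.)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 61440;  -- 60 GB, specified in MB
RECONFIGURE;
```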
Moving on – let’s create a database. Back in SQL Server 6.5, you had to create a “device” (not a file) on disk, then put the database inside the device:
And I can almost hear you giggling, ha ha ho ho, Grandpa Database Administrator was so backwater, but stop for a second to think about what you do when you build a brand new, state of the art Azure Managed Instance.
You have to specify how big the storage is going to be.
For a database that doesn’t exist yet.
And when it runs out of space, you have to deal with it – just like Grandpa did.
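For the youngsters, the 6.5-era two-step went something like this – reconstructed from the syntax of the day, so treat the details as approximate, and definitely don’t run it anywhere (DISK INIT is long gone):

```sql
-- Step 1: create the device. SIZE is in 2 KB pages, so 51,200 pages = 100 MB.
DISK INIT
    NAME     = 'DATADEV1',
    PHYSNAME = 'C:\MSSQL\DATA\DATADEV1.DAT',
    VDEVNO   = 10,
    SIZE     = 51200

-- Step 2: carve the database out of the device - size in MB this time.
CREATE DATABASE inventory
    ON DATADEV1 = 100
```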
Not so fancy now, are you, kiddo? Well, I’ve got good news: in this book, Ron gives us a vision into The Future:
THE FUTURE IS HERE, RON, AND IT’S AWESOME, BUT IT’S NOT ALL THAT DIFFERENT
Alright, moving on – let’s start storing some data:
Twenty years ago, you could buy a book off the shelf – just one book – and it gave you the history of the database system, licensing, purchasing, configuration, internals, how to write a query:
That part about the comments down at the bottom is kinda funny – there was a 64KB limit for stored proc comments. However, “neither of these limitations is much of a worry.”
Oh, Ron, the future is terrible. Microsoft lifted that limit, and people build stored procedures that make Shakespeare look like an essayist.
That performance tuning checklist is still viable today. I could totally see this pinned up on a DBA’s cubicle wall. I’ll be tickled pink when the robots master any of this stuff. Take it. Please. Get it under control.
This performance tuning snippet intrigues me:
I’ve heard this “groups of 4” thing repeatedly from different people, and I wonder if this is where it got its start. I’ve noticed that on really complex queries (20+ joins), if most of the filtering is done on tables far down the join list, I get a massive speed boost by moving those up to the first 4 tables joined and filtering on them as early as possible. It just seems to shape which query plan SQL Server considers first. It’d be neat to know whether it really is a hard-and-fast rule of 4, though.
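If you want to run that experiment yourself, the shape of it looks like this – the tables here are hypothetical, and the FORCE ORDER hint is only there so you can test whether the written join order changes the plan, not something to ship to production:

```sql
-- Hypothetical schema: the point is the experiment's shape, not the tables.
-- The most selectively filtered table goes first in the join list.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o                              -- filtered hardest, joined first
JOIN dbo.Customers  AS c  ON c.CustomerID = o.CustomerID
JOIN dbo.OrderLines AS ol ON ol.OrderID   = o.OrderID
/* ...another dozen-plus joins here... */
WHERE o.OrderDate >= '2018-01-01'                 -- the selective filter
OPTION (FORCE ORDER);                             -- test only: pins the written order
```

Compare the plan with and without the hint – if the forced order wins, you’ve learned something about where the optimizer was starting its search.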
The “Watching the optimizer’s decision process” section is mesmerizing, seeing how they used to work through this stuff:
Man, Grandpa had it rough. Moving on.
They’ve been telling you Priority Boost is a bad idea for over TWENTY YEARS, but I still find it in a disturbing number of servers.
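If you’d like to check your own servers, here’s a quick look-and-fix – note that this particular setting doesn’t take effect until the service restarts:

```sql
-- Is priority boost on? (0 = off, which is what you want.)
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'priority boost';

-- Turn it back off; takes effect after the next service restart.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'priority boost', 0;
RECONFIGURE;
```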
Whew. What did I learn from this monster book? Well, Ron could have finished writing this book, jumped into a time machine, walked into any modern database shop, and his 1990s skills would have translated pretty well into what we’re doing today. The data is bigger, the servers are faster, and the costs are way higher – but the basic plumbing is all still the same. SQL Server has made some things easier – but it’s added a whole lot more toys in the box, and those toys still aren’t easy to manage.
Reading these books, I’m more convinced than ever that I picked the absolute perfect career. I love working with databases – really, really love it – and there’s always so much more to learn. Part of me wishes I could have gotten started earlier to see more of the original genesis, but the rest of me knows that so much awesome stuff keeps coming into view, and I wouldn’t wanna miss that either.
If you liked this one, holler – I bought a few others and read ’em for fun, and I can share those as well if there’s interest.