The 20th Anniversary of the SQL Slammer Worm

SQL Server

Twenty years ago this month (next Wednesday to be exact), sysadmins and database administrators started noticing extremely high network traffic related to problems with their SQL Servers.

The SQL Slammer worm was infecting Microsoft SQL Servers.

Microsoft had known about the vulnerability and patched it six months earlier, but people just weren't patching SQL Server. There was a widespread mentality that only service packs were necessary, not individual hotfixes.

The problem was made worse because back then, many servers were directly exposed to the Internet, publicly accessible with minimal protection. Since all the worm needed was access to UDP port 1434 on a running SQL Server, and many folks had their servers exposed without a firewall, it spread like wildfire.
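
As an aside (my illustration, not anything from the original post), here's a minimal Python sketch of how you might audit whether a host still answers on UDP 1434, the SQL Server Resolution Service port the worm abused. The probe byte follows the documented SSRP client request; the host in the usage comment is a placeholder:

```python
import socket

def ssrp_ping(host: str, port: int = 1434, timeout: float = 2.0):
    """Probe the SQL Server Resolution/Browser service over UDP.

    Sends a CLNT_UCAST_EX request (a single 0x03 byte); an exposed
    SQL Server answers with its instance list. Returns the raw
    response bytes, or None if the port is filtered, closed, or silent.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(b"\x03", (host, port))
        try:
            data, _ = s.recvfrom(65535)
            return data
        except (socket.timeout, ConnectionRefusedError):
            return None

# Usage (host name is hypothetical):
# if ssrp_ping("sql01.example.com") is not None:
#     print("UDP 1434 is reachable from here -- is that intentional?")
```

If this returns anything at all from outside your network, that server is one buffer overflow away from a very bad day.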

Even if only one of your corporate SQL Servers was hooked up to the Internet, you were still screwed. When that server got infected, it likely had access to the rest of your network, so it could spread the infection internally.

So what have we learned in 20 years?

In terms of network security, a lot. I don’t have raw numbers, but it feels like many, many more client servers are behind firewalls these days. But… like with the original infection, all it takes is just one SQL Server at your shop to be infected, and if that one can talk to the rest of the servers in your network, you’re still screwed if something like Slammer strikes again.

In terms of patching SQL Server, to be honest, I don’t think we’ve learned very much. Most of the SQL Servers running SQL ConstantCare still aren’t patched with the latest Cumulative Updates, and many of them are several years behind in patching.
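
To make that concrete, here's a small sketch (mine, not the tooling behind SQL ConstantCare, whose internals aren't public) of how you might flag a lagging build from the string that SELECT SERVERPROPERTY('ProductVersion') returns. The "latest CU" build numbers below are placeholders, not real CU data:

```python
# Flag SQL Server builds that lag the newest known Cumulative Update.
# These build numbers are illustrative placeholders -- look up the real
# current builds before using anything like this.
LATEST_CU = {
    "15.0": "15.0.4999.9",  # SQL Server 2019 (placeholder build)
    "16.0": "16.0.4999.9",  # SQL Server 2022 (placeholder build)
}

def build_tuple(version: str) -> tuple:
    """'15.0.4261.1' -> (15, 0, 4261, 1), so builds compare numerically."""
    return tuple(int(part) for part in version.split("."))

def is_behind(product_version: str) -> bool:
    """True if this build is older than the latest known CU for its branch."""
    branch = ".".join(product_version.split(".")[:2])
    latest = LATEST_CU.get(branch)
    if latest is None:
        return False  # unknown major version: can't judge
    return build_tuple(product_version) < build_tuple(latest)
```

In practice you'd feed this the ProductVersion value collected from each server and report the ones where is_behind comes back true.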

We’re just hoping that the worst bugs have been found, and no new security bugs are getting introduced.

Hope is not a strategy. Patching is. Patch ’em if you’ve got ’em.


8 Comments

  • Allen Shepard
    January 18, 2023 6:41 pm

    Slammer worm – thanks for giving me something to bring up at work (IT) and look a tad brighter.
    Will have to find several other infamous days: the “I Love You” virus (May 5, 2000), MyDoom (2004 – who was Andy in the email text?), CryptoLocker (September 2013), et al.

  • I personally am always wary about applying updates from Microsoft, as you never know when they are going to break some feature, either by accident or deliberately.
    Just today I lost a load of application shortcuts because, in the latest Windows update, Windows apparently now sees them as security risks and deletes them – i.e., shortcuts to SSMS and VS!
    I like to wait a few weeks to let someone else discover any issues before me.
    But I agree they are important to do.
    I can’t tell you how many times I encounter old versions of SQL still at RTM.

  • There were a few painful events in different companies. It took a while to get the various teams to agree to patch dev/test systems on a regular basis, to ensure things went well in production once everything was vetted as much as we could – with the SPs back then, and with the CUs now. This is to ensure the updates are good with **our current usage of the system**. OS patches are applied before and after; SQL is on its own schedule. Since data is our true gold, the CUs do try to keep the data shiny, or at least less crusty.

  • I guess the nature of consultancy is that the people who don’t patch are more likely to engage you, though – as they are more likely to have problems? This may skew your stats.

  • I find most organizations get caught up in unfounded, irrational fear of hypothetical bad patches without doing the due diligence to research and test them – to the point that fixes for broken features never get applied, and they live with something broken for weeks or months after the patch has been thoroughly vetted, all out of fear of a bad patch. I just can’t understand this train of thought: being so fearful of a hypothetical that you won’t even test, while living with exposure to threats and active bugs impacting your workloads.

    With hundreds of millions of SQL Servers and close to 1.5 billion Windows devices, it doesn’t take long for patch problems to be found. Organizations with limited resources can implement an arbitrary policy such as “Don’t patch anything until 21 days after release with no problems discovered in the wild,” and then roll back on the rare occasions there’s a problem no one else has discovered yet.

    Some sort of process and procedure has to be developed and adhered to. The hardest and most time-consuming part of patching for me is always getting through the red tape. Doing the actual patching of the workloads has always been trivial by comparison, even when patching dozens of servers manually without the assistance of patching tools.

  • Thankfully we patch monthly. We hold back one CU unless it is 90 days old or another CU has been released since the one being held back (let someone else test the CU). Security updates are applied without delay. We wrote a great process that scans the release list and applies the algorithm to determine what goes out in the current month’s patch cycle.
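
    The hold-back rule described above could be sketched roughly like this (my sketch, not the commenter’s actual process; CU names and dates are invented for illustration):

```python
from datetime import date, timedelta

def cu_to_apply(releases, today: date, hold_days: int = 90):
    """Pick the newest CU that satisfies the hold-back policy.

    A CU qualifies if it is at least `hold_days` old, OR a newer CU
    has been released since it (i.e. someone else has been testing it).
    `releases` is a list of (name, release_date) in any order.
    Returns the qualifying CU name, or None if nothing qualifies yet.
    """
    ordered = sorted(releases, key=lambda r: r[1])  # oldest -> newest
    eligible = [
        name for i, (name, released) in enumerate(ordered)
        if (today - released) >= timedelta(days=hold_days)
        or i < len(ordered) - 1  # a newer CU exists after this one
    ]
    return eligible[-1] if eligible else None
```

    So a brand-new CU sits in the queue until it either ages past 90 days or gets superseded, while security updates would bypass this check entirely.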

