Contest: What’s Your Biggest Database Regret?

Company News
60 Comments
PAST ME, WHAT WERE YOU THINKING

We all make mistakes.

I certainly have.

I’ll give you a recent one, actually: when we designed the database for SQL ConstantCare® to store diagnostic data like wait stats, we thought these 3 columns would be good for identification:

  • user_id
  • server_name
  • instance_name

Because a single user would never send in data for the same SQL Server name, same instance name, but different data, right? Well, wrong-o, because it turns out:

  • Some companies reuse the exact same server name in different places (like they have SQLPROD1 in NYC, and another one in Atlanta, and another one in Taiwan). Even worse, at some of these companies, even IP address isn’t unique. Bananas.
  • Microsoft Azure SQL DB doesn’t group together data at the server/instance level – instead, you have to send in data for each database, separately.

Argh. To solve the first one, we would need to add a column for a user-defined server name, like “NYC SQLPROD1”. To solve the second one, we would need to add a column for database name. (You can’t use database ID because that can change when the database is restored.) But in both cases, we’re talking about a seriously widespread change to the database and all of the app code involved, and … I’m not wild about that.
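For illustration, the kind of change described above might look something like this. The table and column names here are hypothetical, not SQL ConstantCare®’s actual schema:

```sql
-- Hypothetical sketch of the widened identification columns:
-- a user-defined server name disambiguates reused names like SQLPROD1,
-- and database_name (not database_id, which changes on restore)
-- supports Azure SQL DB's per-database reporting.
ALTER TABLE dbo.wait_stats ADD
    user_defined_server_name NVARCHAR(128) NULL,  -- e.g. 'NYC SQLPROD1'
    database_name            SYSNAME NULL;
```

The painful part isn’t the DDL, of course; it’s touching every piece of app code that assumed the old three-column identity.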

What about you? What has been your biggest regret in your data career? It can be anything from a design mistake, to an architecture goof, to a production oopsie. Share it in the comments this week. The more entertaining the story, and the more I laugh out loud, the better your chances are to win. On Sunday, September 29th, I’ll pick my favorites and award winners.


60 Comments

  • Writing code to help a team dynamically create sequences. They didn’t tell me they were going to generate millions of them, so now the DB is super hard to open in Object Explorer, run SQL Compare, or use autocomplete tools.

    Reply
  • I had to refresh a dev database with new data often and one of the steps involved dropping all sequence objects in that tablespace. Well, I did it as the sys user. No more logins for that Oracle instance. Thankfully it was dev.

    Reply
  • Benjamin Dittmann
    September 17, 2024 4:12 pm

    Ah, yes… data mistakes… I recall dropping the entire audit table for a customer in prod when I thought I was in dev… and of course their backups weren’t present either… good times

    Reply
    • Glad to know I’m not the only one who has done an accidental production delete. I accidentally deleted an entire production database on my second day on the job (I was the only DBA). And of course, they didn’t actually have backups, because the application they were using for backups did not have the SQL Server plugin; they didn’t even know they needed one.

      Reply
  • As a co-op working at my very first development job (feels like a previous lifetime now), I was at a shop that had a nice “dry run” feature for certain kinds of scripts, that would show you the results it would obtain if run live. Well some of us got to feeling pretty adept at using it and started skipping the dry run step. One day when I did that, I saw to my horror that my script was deleting the entire database! Whoops. I had to go sheepishly to my boss, fess up, and restore the entire thing from tape.

    Reply
  • No pure SQL compatibility between different products (MSSQL, Oracle, PostgreSQL, etc.):
    TOP(100), LIMIT(100), FIRST(100).
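For example, grabbing the first 100 rows of a hypothetical orders table looks different in each product:

```sql
SELECT TOP (100) * FROM orders ORDER BY id;                  -- SQL Server
SELECT * FROM orders ORDER BY id FETCH FIRST 100 ROWS ONLY;  -- Oracle 12c+ / ANSI standard
SELECT * FROM orders ORDER BY id LIMIT 100;                  -- PostgreSQL / MySQL
```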

    Reply
  • A 4TB database with a hard limit of 255 files. The operations desk had been adding 2GB files and almost maxed it out. (Of course I had been adding 128GB files which accounts for the size.) With this system you can delete the small files from the database via the DB Studio which runs a series of commands to relocate the data elsewhere (the 128GB files) but never removes the files at the OS level.

    Guess who deleted an active file by mistake.

    So it only took 10 hours to restore and another 2 hours to rebuild the indexes (It’s a German database from the 80s).

    At the next meeting they asked what disciplinary action was taken. They were told I was beaten with whips. A few took it seriously and emails floated around for a few days.

    Reply
  • I have so many, where do I begin? The largest one I have is a process where we updated rollup aggregations of dollar values after the data was summarized. I initially told my product team I could adjust it at the detail level, but it would be 10x faster to update the summarized data only. My adjustments have grown over the years, and now every week I am re-summarizing and re-adjusting my data for hours. Ugh. I should have followed my gut in the beginning and adjusted it one time only at the detail level to set it and forget it.

    Reply
  • I haven’t had anything too crazy so far as I recall (knock on wood), but I do have one from someone else. We use an application that uses SQL on the back-end, with a production and a testing environment that have a nightly data sync. Just after a server migration in 2021, the DBA who set everything up for us was asked to get the data sync working. But he configured it to restore the data on the production server (from an older production file), and overwrote a few days’ worth of live details. It took a number of days and a lot of work to get as close as we could back to what was lost!

    He’s a good guy who works long, late hours, so I feel for him, and understand the ease of making a mistake like that (one or two settings were off). Cut forward 3 years and we’re migrating again tonight(!), with the same guy leading the work. Definitely have our eyes on double-checking things…

    Reply
  • I worked for an alarm company back in 2000. I deleted the contents of the alarm queue. I thought I was on pre prod but I was on production. Luckily, I got it back in about 10 minutes and ran to the central station with the alarms printed out. Customer service reps said, “I thought it was weird the alarm queue monitors went empty”. Yikes. I never made that mistake again!

    Reply
  • My first ever FORTRAN computing assignment at college (’67) just to input and store name, age, sex.
    Report on the invalid data.
    Next day I went back to the lecturer and said there’s nothing in the data to determine if they are an invalid or not.
    Cue red face when he explained it to mean incorrect data, as in not valid.
    Not data about invalids.
    Lesson learned on requirements gathering at a very early stage !

    Reply
  • Installing it was my biggest regret! 🙂

    Reply
  • Set up an automated workflow to deliver business critical data to senior management via SQL tables, everything was going so well until suddenly it wasn’t… A manager reported not receiving their data and made quite a fuss. On closer inspection, in my laziness/haste I’d coded the user’s name (which was subsequently being used as a parameter on which the whole process depended) as text and unbeknownst to me, they had decided to get married.

    I had to fix quite a few linked processes and remind myself to use something less volatile, like their employee ID next time!

    I think they got their data a week late.

    Reply
  • My best regret is learning MSSQL instead of Oracle. I still don’t have a Ferrari.

    Reply
  • One of my first jobs as a consultant: a critical production SQL Server with performance issues (slow queries, blocking) and a high workload (thousands of queries/s). To study the problem I decided to run a server-side trace. The idea was to capture a lot of event types and not lose any records…
    Result: total blockage.
    No way to stop the task. A flurry of calls from customers… Not knowing what to do, I had to restart the server remotely.

    Reply
    • Andrea Hardesty
      September 17, 2024 6:23 pm

      One time I tried to perform a CHECKDB repair on a poorly maintained OLTP database that was stuck in recovery pending. It was on a windows cluster and the CPU affected every instance on the node. I was both villain and hero that day.

      Reply
  • Andrea Hardesty
    September 17, 2024 6:11 pm

    I’m being vulnerable for your amusement LOL. We used to have this archive database that would grow to 4 TB every few weeks. But then at 4 TB, it wouldn’t allow any more file growth. (It was a 3rd party tool we had to have, for reasons… but I digress.)

    It wasn’t a SQL file growth setting. After some digging, I finally figured out that it was the version of the disk that had a 4TB limit on file size.

    If I knew then what I know now, I would have added new files to the database and let it have a few 4TB files. But no, instead we went through this horrible process of spinning up a new archive database every time it reached 4TB. It’s no longer an issue, but the trauma, regret, and embarrassment linger on.

    Reply
  • That I didn’t join Vault-Tec Corporation as their DBA before they dropped the bomb.

    Reply
  • My two biggest regrets are:

    Naming columns with the “Date” suffix (like CreatedDate) instead of DateTime, since they included the time, which made it hard to later add columns that held only the date portion with no time.

    Implementing auditing of changes to data in tables with a revisioning strategy where a new row was inserted into the table with an incrementing revision number every time a change was made. This meant the tables ballooned in size as no updates or delete DML statements were ever used, and it was hard to performance tune with a lot of stale audit records needing to be skipped every time.

    Reply
  • This took place in the year of our lord, 2002. I was responsible for all LAN administration. That included MS Exchange, which at the time ran a flavor of SQL, as I recall, for the mailbox storage. One night, doing some server patching and work, I was having difficulty getting it to come up on the KVM for me to log into. It wasn’t displaying the video properly. So I erroneously held the power button until shutoff, waited a moment, and then powered back on.
    Older IT folks like myself may already know the result. Exchange didn’t like being forcefully shut down, it wants to do so gracefully. So the Windows OS came back up fine, but the Exchange services refused to start. A quick event history review revealed that the data file for the mailboxes was corrupt. Many attempts to repair and much searching yielded no results.
    Backup restore, you say? As it turns out, part of that work I was doing was getting Arcserve set up on that server for just such a reason, and it was not ready yet. So no restore.
    A co-worker spent the better part of the next day on the phone with MS support and they recovered most of the lost content. Suffice to say, I did not remain at that company long after.

    Reply
  • Using a 32-bit signed integer as an auto-increment key. Should have gone with 64-bit.
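In SQL Server terms, the difference is one word at design time: INT identity values run out at 2,147,483,647, while BIGINT effectively never will. A sketch with a hypothetical table:

```sql
CREATE TABLE dbo.events (
    event_id BIGINT IDENTITY(1,1) PRIMARY KEY,  -- 64-bit: max ~9.2 quintillion
    payload  NVARCHAR(MAX) NULL
);
```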

    Reply
  • Joined Microsoft as a contract Database Developer with two VERY senior database developers rounding out the team. The schema had the PK of each table as a uniqueidentifier (GUID). This went against everything I thought I knew, but every time I questioned it I was given a vague non-answer. I eventually stopped asking and just got on with my job, thinking I must be in the wrong, as two senior Microsoft employees must know what they are doing. Fast forward a few months to load testing, and it was the disaster you would expect (this is back in the days of spinning disks as well). Still haunted by the fact I didn’t keep asking questions, as it could have avoided months of pain in that death march of a project.

    Reply
  • My biggest regret is not starting an online presence. For over a decade I’ve followed the careers of many in the SQL community and thought I could do that, and had grand plans to do so, but never pulled the trigger. Like planting a tree, the best time to start a SQL blog is 20 years ago. The second best time is today. I don’t know that it would have changed my career in any meaningful way, but I do know it would have solidified my own knowledge in a very tangible way. “The best way to learn is to teach,” as the old saying goes.

    For a work-related database regret: I inherited a homegrown ETL about 8 years ago that works decently but has its own logging built in. After learning the system I thought, well, it can’t be that bad, and I left it alone. The problem is the logging system logs each step in sequence, and the beginning row for each step isn’t tied to the ending step, and that’s just the first of its many problems. The amount of work I would have saved over the years troubleshooting the ETL if I had just taken the time to rework it a bit. Over time it’s just gotten worse, and since we have been trying to retire the system lately, we have just dealt with the pain.

    Reply
  • Dropped the AG when I meant to drop the Replica from the AG, and brought down the Doctors Portal for a large hospital chain in Dallas. It was about 2 AM, I was super tired, but I quickly got the primary back up after completely freaking out. Total downtime: about 15 minutes.

    Reply
  • "Brian" with an R
    September 17, 2024 7:07 pm

    Off the top of my head:
    – Supporting a linked server to send data to another SQL server, which replicates it back to the SQL server it came from.
    – Someone (not me) changed a single configuration in a core SQL system thinking they were in dev, causing months of downstream impacts.
    – There are still a handful of 2000/5/8 servers (please don’t ask!).
    – When I consulted for a small business many years ago (as general support, before my DBA days), they had merge replication between on/off-prem SQL instances. While troubleshooting a problem of something missing in the application, I started pushing buttons and turning those dials, happily thinking kicking off the merge would let me go home for the day in a good way!

    Reply
  • I try to live by the motto that if you break the database, you darn well better be able to fix what you broke. I’ve presented myself with many wonderful learning opportunities that way. For instance, who has never had to recover from a table update without a where clause? Which leads to why can’t I restore a single table? Do I have enough disk space to restore another copy of the entire database? And so on …

    Or the time you teach your co-worker about the joys of BEGIN TRAN and then later finding out they weren’t too careful with matching up the BEGIN and COMMIT or ROLLBACK, and tempdb is strangely growing.

    Or for that matter, having your monitoring tools telling you that tempdb is consuming more and more disk space, and you keep putting off the investigation into why until it is too late.
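One defensive habit that covers both the missing-WHERE update and the dangling BEGIN TRAN mentioned above: wrap ad-hoc changes in an explicit transaction and eyeball the row count before committing. A sketch with hypothetical names:

```sql
BEGIN TRAN;

UPDATE dbo.customers
SET    status = 'inactive'
WHERE  last_order_date < '20200101';

SELECT @@ROWCOUNT AS rows_touched;  -- sanity-check before deciding

-- COMMIT;    -- only if rows_touched looks right
-- ROLLBACK;  -- otherwise
```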

    Reply
  • I was trying to run an update that required taking the database offline, but our application was constantly sending requests, so when I set it to single user, the app grabbed the one spid and I couldn’t get back in to set it to multi user. Talk about an “oh crap!” moment! I was frantically googling how to get it back, and ended up running kill spids while trying to set to multi user, just hitting the execute button until it ran successfully.
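For what it’s worth, the usual way to win that race is to take the database to single user and kill existing connections in one statement, staying connected in the same session the whole time. A sketch with a hypothetical database name:

```sql
ALTER DATABASE MyAppDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- boots everyone else
-- ... do the offline work from this same session ...
ALTER DATABASE MyAppDb SET MULTI_USER;
```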

    Reply
  • Linked Servers. Started using them years ago without understanding the impact. And it became so embedded that years later still have to deal with them.

    What is the Cher song: “If I could turn back time…if I could find a way…”

    Cue the next biggest regret then: I should have started following brentozar.com years ago for all the knowledge and insight and experience you share with the SQL community.

    Reply
  • Daniel Peterson
    September 17, 2024 10:29 pm

    I worked at a software company where we had an executive insist that we go through and apply every index recommendation to our database under the assumption that it would make everything faster. It took quite a while for us to dig ourselves out of that hole.

    Reply
  • I supported an Oracle system which was highly visible to customers. This used two-way replication between primary production and secondary DR replica, and ran on Unix. The theory was that failover would happen without DBA intervention when the DR application was brought up and started to update the secondary. Under normal circumstances, this worked extremely well.
    Arriving at work one Friday I was met by the management welcoming committee which no DBA ever wants to see, saying that something was wrong with the system and data was missing. On investigating, I saw that data was disappearing from production as I watched, apparently at random.
    To cut a long story short, one of my colleagues needed a large database for testing purposes and had restored a production backup to a test server, which he then proceeded to ‘clean’ by deleting the data which he didn’t need, prior to anonymizing what remained. Unfortunately he hadn’t amended the server info used by replication, so the deletes from the restored database were replicated to the DR replica, which faithfully replicated them to the production primary…
    My colleague was unfortunately not available over the weekend to repair the damage which he had inadvertently done. My biggest regret was that I was available, so I spent that weekend repairing the database. Due to the convoluted data structures used by that application, this was not straightforward.
    Needless to say, processes were promptly amended so that this could not happen again.

    Reply
  • I had the brilliant idea to change a view just before the holidays; it was a race against the clock to do a few things before going on vacation. There was a problem with a missing record in tableB that resulted in a view giving back wrong results for a few students. There was a LEFT JOIN tableB ON … AND bitfield = 0 WHERE tableB.Id IS NULL. Adding the missing record in tableB resolved the problem, but I thought I’d better change the view so this problem wouldn’t occur again. I changed it to INNER JOIN ON … AND bitfield = 1, which returned good records when tested. I quickly changed the view in production, put in the INNER JOIN, but forgot to delete the tableB.Id IS NULL condition… result: no more results, and all students lost their access. Got a telephone call at home an hour later; it got repaired quickly, but lots of students lost access for a few hours. Ouch.
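A minimal sketch of that failure mode, with hypothetical table and column names:

```sql
-- Original anti-join: find students with NO matching tableB row
SELECT s.*
FROM students s
LEFT JOIN tableB b ON b.student_id = s.id AND b.bitfield = 0
WHERE b.id IS NULL;

-- The broken edit: switched to INNER JOIN but kept the leftover IS NULL
-- filter, which can never be true after an inner join, so zero rows return.
SELECT s.*
FROM students s
INNER JOIN tableB b ON b.student_id = s.id AND b.bitfield = 1
WHERE b.id IS NULL;  -- always false here: every student lost access
```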

    Reply
  • So in the early 2000s I was asked to move a SQL Server database from Company A to Company B, and shut down the source server.
    No problem, thinks the very junior (toddler?) version of me!! How hard can it be? I mean, I see both database files in the SQL something folder.

    First issue: not able to move the files to the new server… aaah, let’s shut down the service thing for the source SQL, not gonna use it anyway!
    Yeay, move files to the SQL folder of the new server. But the database did not show up in SSMS?!
    Restart the SQL Service, no luck.. database not showing up.

    Had to do the hard thing, talked to the System admin of the database.
    – “Sorry, the database is corrupt, there is absolutely nothing I can do about it”
    – “Oh that’s ok says system admin (she was too nice), create a new database for me and I’ll bring in my daughter to add all the data again”
    The adding of data took about 3-4 days for the daughter to manually type in.

    Fast forward a couple of months, I have asked my boss to get some SQL education since I was practically a DBA and had gotten even more SQL Servers to look after (what a great choice of DBA!).

    So sitting there in the classroom with SQL Server Legend and MVP Tibor Karaszi teaching! Not so long into class we get into the concept of detach/attach and I start thinking, this could have helped me when.. oh shit..!! I’m keeping this a secret!

    But hey, at least I learned about it, and Tibor also taught me that data and log files should be separated!! Wow, now I can solve the issue we have at my current consultant engagement: the Navision/Axapta databases (Dynamics today) should have separated files. YEAH I CAN DO THIS!

    After class I go home and get to work, no need to tell anyone, it’s so simple!! Connect the VPN, onto the server, detach db (yeah!), copy log file to new disk and then attach, hmm, error message. Attach… still the same error!! WTF, noo, I’m panicking now, what should I do?! It’s 9 PM and I’m going back to class tomorrow, but the super important database/system is down!

    I do the only reasonable thing: start looking for Tibor’s phone number. AND I FIND IT, call Tibor… Hey Tibor, this attach thing is broken, explained in detail. Tibor takes about 10 seconds – “dbname_log is not the complete file name, it’s dbname_log.LDF, try again.”
    IT WORKED! Wow he was truly an MVP in my mind!!

    So it all worked out, Tibor became my hero and I’m still doing this 24 years later (and loving it)!

    Reply
  • On a previous job my coworker decided to not allow NULLs in from/to columns and to use a default value instead. Sadly he decided it to be 1899-12-31 for both from and to, so you always had to write something like
    GETDATE() BETWEEN t.valid_from AND IIF(t.valid_to = '18991231', '20991231', t.valid_to)
    or

    WHERE (t.valid_to = '18991231' OR t.valid_to >= @date)

    And in my current company someone decided to enable In-Memory tables on the biggest database (~15 TB) on prod. We never used this feature, but there is no way to disable it again, so we are stuck with some nasty directories (where it would store the DLLs for data access etc.). It doesn’t really hurt in the daily work, but you have to be aware of it in your database restore scripts etc., and it disturbs my inner Monk.
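For contrast, allowing NULL to mean “no end date” keeps the predicate simple; a sketch using the same hypothetical columns:

```sql
SELECT *
FROM dbo.t
WHERE t.valid_from <= @date
  AND (t.valid_to IS NULL OR t.valid_to >= @date);  -- NULL = still valid
```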

    Reply
  • Joseph Peechatt
    September 18, 2024 9:07 am

    One of my more memorable SQL incidents happened during a PACS software installation for a healthcare client. We have a standard procedure at my company: after installing SQL, we disable the Windows administrator account and use an SQL login with SA privileges for database access.

    I followed the usual steps—installed SQL, added an SQL user for the software, and configured the ODBC connection. Believing the SQL login only needed read and write access, I granted just those permissions. Confident everything was set, I disabled the Windows administrator login and started the PACS software.

    That’s when things went sideways. The software threw a database connection error, refusing to run. After a bit of investigation, I discovered that the SQL login also needed permissions to create tables, which I hadn’t granted. Since I had already disabled the administrator login and the SQL login lacked privileges to re-enable it, I was stuck.

    The only way out was to redo the SQL installation from scratch—a hard lesson learned in permissions management. From that point on, I made sure to double-check privilege requirements before finalizing SQL configurations!

    Reply
    • You are leaving systems where nobody has advanced permissions?

      From a security perspective, this is both incredibly genius and stupid at the same time.

      When nobody has advanced permissions, theoretically nobody could harm the system in any unforeseen way.

      But when something happens, or someone finds a way into it, or you just have to apply some bugfixes (I know, your software is perfect, but not Windows), nobody can do anything to improve or restore the health of the system.

      Reply
  • Working under a CTO who believed in speed over quality.
    I found that the accepted process for amending commission amounts was just to update and change the amount in the Commissions table.
    There was no audit trail, no record of the changes anywhere.
    Except in emails asking for numbers to be changed, coming from the Marketing Director.
    Apparently this was acceptable to everyone.

    Soooo I went to the Global Operations Director (neat abbreviation, eh?) who said “what are you, nuts?”.
    I had to point out that I wasn’t nuts, but our CTO who seemed to be unaware of what would happen to a company that didn’t have an audit trail.

    I did what ought to have been done in the first place, added transactions for the addition or deduction amounts, with an audit table for the person authorising and the reason for the change.
    CEO having been through similar audits was happy, GOD was happy, I was happy.
    CTO was upset that we overrode his decision, eventually developing his career elsewhere.

    Reply
  • I worked at a school software company; I once caused an outage that impacted my mom (a schoolteacher). She complained that she had a bad day at work, and said that it wasn’t made any better by their testing software not working.

    I couldn’t really explain that “y’know… at the time I didn’t know that I shouldn’t reinitialize all subscribers to that replication publication”.

    Sorry mom.

    Reply
  • I once ran a DELETE query without a WHERE clause. I basically turned my entire database into a blank slate in seconds… That was the day I learned that backups aren’t just a suggestion!

    Reply
  • René Westerveld
    September 19, 2024 6:04 am

    Truncated the customers table. Had to do a backup restore to a new database and copy the data from there again.

    Reply
  • On my blog I wrote an article about how to set up mail on SQL Server.
    It’s still the most viewed and most commented post.

    Reply
  • My first day at a cyber security firm I pushed a bad update to a sensor configuration.
    Thankfully only 1% of the user base got the bad update, so it only caused BSOD for some 8.5 million Windows devices – just a few airlines, banks & healthcare providers affected. Phew!

    Reply
  • The worst was several years ago at a company that made rugged LCD displays for military use and also some for commercial aircraft. The company had a home-grown MES system and a commercial ERP system. I was hired as a DBA and ERP admin. In the process of auditing/familiarizing myself with the SQL servers after I was hired, I ran across a database for the MES system that was running in compatibility mode for an older version of SQL Server. I asked the person responsible for the system why the database was in an older compatibility mode. He didn’t know. I reset it to current. The MES system stopped working. My phone and email lit up like the 4th of July. I had a “fun” conversation with the higher-ups regarding that one.

    Reply
  • Not really a regret so much as a ‘what were the IT people thinking?’ story. If that disqualifies it from the contest, so be it:
    My first job as a DBA I was reviewing the database backup strategy in place. There was no DBA before me and backups (SQL and network) were done by the IT guys. They informed me that the SQL databases were backed up to a tape drive overnight. In the morning, the first guy in would eject the tape and put it in a mailer envelope to be sent to the disaster recovery site some 100 miles away. My question was this: “What happens if a database fails AFTER the mailman has picked up the outgoing mail? How long will it take for the ‘most recent backup’ to be back on site so the database can then be restored from tape?” Our dinky little 10 GB databases had a Recovery Time measured in days or weeks.
    The solution was to configure SQL backups to get stored to a file server some time BEFORE the entire network was backed up to tape and shipped off site.

    Reply
  • During a migration process, didn’t select the where part of an update and updated all the financial accounts to zero (instead of the ones that were inactive). Thankfully, had a backup of the database before the migration and the system was offline, so it wasn’t visible and only delayed the migration process by a couple of hours.

    Other stories (not mine):
    – A co-worker dropped a table in production thinking he was in dev, instantly stopping all the users from working. Learned really quickly the importance of full backups, transaction log backups, and having drive space to do restores…

    The developers of an application that we maintain thought that the best way to do locking is to custom develop it. So they created a table with the locks held by a user on which table/records and before every operation they search and insert a lock record on the table (and delete it when the operation ends). Curiously, the solution isn’t very scalable even though the application only has a dozen users…

    Reply
  • I was asked to add a column a table in our Data Warehouse for a big report due one week into my planned 2 week vacation. I loaded it into the existing table and set up the load job right before I left, thinking it would be easier for me to do than my coworker. In my haste, I aggregated on the wrong level and got a panicked call from my backup my first day of vacation saying the numbers were wrong and didn’t know how to fix it. My husband was super bugged that I got a call on our vacation and was upset at me for even answering my phone. So in order to keep the peace with my husband, I waited until he was asleep that night and worked in the dark for half the night and fixed the issue without my husband knowing. Six years later, I still get asked if I’ve made any changes to the database before I take time off.

    Reply
  • As a DBA: once a developer provided a script to me which contained updates to his SPs; I was supposed to just run it. I did, but on the wrong database… thankfully, there were no identical SP names, so I just created a bunch of new SPs in the wrong database, which I had to delete afterwards (but no one was hurt 🙂).
    As a DB developer: the thing was, I did not know the underlying data well enough. I created a query to get values from a ticketing system, which worked well… until a specific point. That point was that new info was added to a ticket in a different language (different from the system’s default language). Because of this, the affected tickets were pulled in multiple times, making all my calculations wrong. I had to filter out the language-specific stuff…

    Reply
  • I have made many mistakes in my professional life. Here’s a recent one!
    We host our customers’ databases using Azure Database where we use failover groups to protect against regional outages. One of the steps taken when we decommission a customer is to rename the production database, essentially parking it off to the side. Well, you cannot rename a database when it’s a member of a failover group. You must first remove it from the failover group which you do thru the Azure portal. So I navigate to the failover group window which lists eight ‘tabbed’ buttons across the top reading, left-to-right,

    [Save] [Discard] [Add databases] [Edit configuration] [Remove databases] [Failover] [ Forced Failover] [Delete]

    I click [Remove databases] and select the database I want to remove which is then displayed in the window, below the menu. So there’s the database I want to delete (remove).

    Dear reader, your challenge is to guess which button I clicked.

    What’s that? You’d like a hint? Of course!

    Think “Goodbye failover group! Hello customer support switchboard!” which I was later told “Lit up like a Christmas tree!” Yup! That’s right! I clicked [Delete]. It took about 40 minutes to recreate the failover group and add the ~100 databases to it. Remember the saying: “If you’re not breakin’ anything, you’re not doin’ anything.”

    Reply
  • Joan Coll i Ossul
    September 23, 2024 8:14 pm

    varchar(max) everywhere. Developers as designers, using EF Code First. And then, one day, I join the company.

    Reply
    • Joan Coll i Ossul
      September 23, 2024 8:24 pm

      Oops! My mistake? Confusing regrets with rants. Sorry!

      My regret? As GW says, linked servers. A quick and convenient way… to end up making your life difficult.

      Reply
  • My biggest DBA regret is not thinking the career through properly early on and not having a decent outlet to vent my anger towards MS. At least nowadays in the cloud there is dedicated support.
    https://www.azuretherapy.com/

    Reply
  • Biggest regret was choosing a 12-letter string instead of a 7-letter one.

    Back in the day I worked for a telecoms company on the software that connected the calls, tracked call length, etc. One of the tables was a lookup for telephone area codes. The relevant column was a 16-wide varchar in FoxPro (yup, that long ago!).

    For a particular feature I needed a magic string that wasn’t an area code number. I was torn between LOCAL_CALL (with underscores) and just LOCAL. I went with the former as it’s more descriptive.

    16-wide database column, 12-wide string, done; commit to source control, FTP the data changes up so they could be applied to another system. Pack bag, go home …

    … 3 hours later, a somewhat irate phone call from work telling me the CRM system was crashing. Turns out their area code column was only 10 varchar wide, and the data got truncated :facepalm:
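    The mechanism behind that crash is easy to reproduce outside FoxPro. Here’s a hedged Python stand-in for the two varchar columns (the column widths come from the story; the `SENTINEL` value itself is illustrative, not the original string): a magic value that fits the source column gets silently cut down in the narrower downstream column, so any code comparing against the full magic string stops matching.

```python
def store_varchar(value: str, width: int) -> str:
    """Mimic a varchar(width) column that silently truncates on insert."""
    return value[:width]

# Hypothetical 13-character magic string, standing in for the one in the story.
SENTINEL = "SPECIAL_VALUE"

# Source system: the column is 16 wide, so the sentinel round-trips intact.
assert store_varchar(SENTINEL, 16) == SENTINEL

# Downstream CRM: the column is only 10 wide, so the sentinel is silently cut.
truncated = store_varchar(SENTINEL, 10)
print(truncated)              # SPECIAL_VA
print(truncated == SENTINEL)  # False -- lookups for the magic string now miss
```

    The shorter candidate would have survived both columns, which is exactly the regret: pick magic values that fit the narrowest schema they will ever travel through.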

    Reply
  • Tami O (DBArchitectChick)
    September 25, 2024 12:12 am

    I was a wet-behind-the-ears DB developer on site for a system upgrade for a customer. I had been a DBA, and I would use Profiler to show the developers how they were mistreating the SQL Server with “everything should be a temporary object” coding practices while they told me that “indexed views” as an index strategy was the new black.

    I wanted to make double-double sure that my code was high and tight and suitable for this specific production environment, so I started up Profiler to run some tests before we hit the “go” button for the upgrade … and forgot to turn it off. Oops. They hit “go,” and the SQL Server started to spike and sputter as Profiler frantically tried to write all the “upgrading” activity to my output table. Everyone was panicking, so I went to my laptop, discovered my mistake, and ended the trace. SQL Server recovered, and I fessed up to my boo-boo. I got teased about it on every upgrade after that!

    Reply
  • This one may not qualify because it was only indirectly database-related (plus I’m a day late), but some may find it humorous. Back in 1984 I was maintaining a series of FORTRAN reports, one of which consisted of one page for each of the nearly 2,000 part numbers in our “database” (which was actually a series of flat files) with a form feed to start each new page. One day a user tells me she just wants to print the information for one part number. Simple (or so I thought). I’ll just modify the code to include the part in the report if it’s the one she wants, and skip it otherwise. Well, she runs the report, and immediately the paper starts FLYING out the back of the line printer at about 100 mph and goes all over the floor! Yep – I had forgotten to suppress the form feeds on the skipped items, so I had inadvertently sent the line printer over 1000 consecutive form feeds! The operator had to sprint over to the printer and shut it off, and then I received a “concerned” call from him, wondering what in the world my program was trying to do.

    Reply
  • […] recently asked you to leave a comment with your biggest database regret, and the comments were great! Here were my […]

    Reply
  • […] What’s Your Biggest Database Regret? […]

    Reply
  • Using a web app that did not distinguish between various windows. Logged into test DB in one window and ran some tests. Then logged into production in another window, did the tasks I had just tested. All okay, so I switched back to the test window and deleted everything from my test DB – I thought. Turns out the app was written so that there was only ONE login, and when I logged into production in the second window, my test login window was now also pointed at production.

    Okay, no problem, go get the backups, right? Wait, what? We’re supposed to be making backups? Who decided that? When?

    The production DB was not being used much, and it was a fairly new migration, so the management discussion was whether we should simply re-import the original data provided by the customer and play dumb, or ask the customer whether they had actually started working with the DB and created any new data since the import, or whether they had only been looking at the data and hadn’t yet started working with it.

    Don’t know how it was decided – that was out of my hands.

    Reply
  • Tim Cartwright
    January 2, 2025 3:42 pm

    SQL Server 6.0:

    Was working on integrating SQL Server with MAPI so that we could send emails from SQL Server. Was working on the server, and ran the XP to read mail. I was confused when it ran for a minute or two. Next thing I know, I heard a blood curdling scream.

    Little did I know that my boss had set up his work email address for the MAPI account on the server. For those unfamiliar, MAPI used POP3 by default, which downloads emails locally to the machine. So when I ran the command, it downloaded ALL of my boss’s emails locally to the SQL Server. LMAO, the scream was from him watching all of his inbox emails disappear in front of him. Boy, was he mad. 😀

    Reply
