#TSQL2sday: System-Maintained LastUpdatedDate, LastUpdatedBy Columns

For this month’s T-SQL Tuesday, we’re talking about interesting Connect requests to fix or change SQL Server behavior.

T-SQL Tuesday

In our Senior DBA Class, one of the exercises involves figuring out changes to a table after an Always On Availability Group failover. In every class, several students say the same thing: “I really wish I had a LastUpdatedDate column on all of my tables that just updated automatically.”

Whaddya know – Adam Machanic filed Connect #3105516 for that exact thing. He writes:

“It is extremely common for databases to contain columns like “UpdatedDate,” “UpdatedBy,” etc. These columns work based on a trust model: As a database designer, I must trust that all of my downstream consumers will follow the intended rules, remembering to update the columns every single time any transaction occurs. And of course I have to trust that they’ll put in the correct values. The only alternative to trust is to put ugly and potentially slow triggers in place to make sure everything happened the right way.

This situation needs to change, and this is especially prevalent now, thanks to SQL Server 2016 temporal tables. Many customers I’ve spoken to regarding these tables would really like to be able to audit who made the change. Some of them would like to audit which host name the change came from. And so on and so forth. The bottom line is that we need a flexible solution to allow users to declaratively model these situations.”
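Until there’s a declarative option, the trigger-based workaround Adam describes looks roughly like this. This is a minimal sketch against a hypothetical dbo.Orders table – the table, column, and trigger names are all made up for illustration:

```sql
-- Hypothetical table with audit columns defaulted for inserts.
CREATE TABLE dbo.Orders (
    OrderId         INT IDENTITY PRIMARY KEY,
    OrderTotal      MONEY NOT NULL,
    LastUpdatedDate DATETIME2(3) NOT NULL
        CONSTRAINT DF_Orders_LastUpdatedDate DEFAULT SYSUTCDATETIME(),
    LastUpdatedBy   SYSNAME NOT NULL
        CONSTRAINT DF_Orders_LastUpdatedBy DEFAULT SUSER_SNAME()
);
GO
-- AFTER UPDATE trigger stamps the audit columns no matter who ran the
-- UPDATE, overwriting whatever value (or no value) the caller supplied.
CREATE TRIGGER dbo.trg_Orders_Audit
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE o
       SET LastUpdatedDate = SYSUTCDATETIME(),
           LastUpdatedBy   = SUSER_SNAME()
      FROM dbo.Orders AS o
      JOIN inserted    AS i ON i.OrderId = o.OrderId;
END;
```

The trigger’s own UPDATE doesn’t re-fire itself directly because RECURSIVE_TRIGGERS is off by default – but that’s exactly the kind of extra write per statement, and extra thing that can be disabled or forgotten, that Adam is calling “ugly and potentially slow.”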

I love it! Head over there and give it an upvote.
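For context, here’s what SQL Server 2016 temporal tables already give you out of the box: the engine maintains the period columns (the “when”) for you, but there’s no built-in “who” or “from where” – which is exactly the gap the Connect item is about. A minimal sketch, with made-up table names:

```sql
-- The system maintains ValidFrom/ValidTo automatically on every change,
-- but there is no declarative "changed by" column -- hence the request.
CREATE TABLE dbo.Customers (
    CustomerId INT PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomersHistory));
```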


13 Comments

  • An ideal workaround would be to just use Master Data Services as your database platform. There’s even a webservice and everything…

  • You mean to tell me that you let people update your tables directly????
    No stored procedures for inserts or updates????

    Oh woe is me!!!! (grin)

    • I get the humor in your reply, but yours is in fact a typical response to a request like this one. And what annoys me (when it is made seriously, as an argument against such an enhancement) is that it ignores the complexities of many apps. The project I’m working on right now, for example, has a core table that’s updated by probably 25 different stored procedures, each in slightly different ways. Should I really have to copy that code around all over the place? This can be solved so much more elegantly!

      • If I got a dollar for every time the DBA said: “Only the app and stored procedures touch the tables, so the update and insert date columns can be trusted”. Except for all those times there was a bug and someone ran a custom SQL script on the table to fix it. And forgot to update those columns. *le sigh*

        • And then when it’s time to build the data warehouse and you’re told that those columns can ABSOLUTELY be trusted for ETL change detection purposes… and six months later users start to ask questions about why the data in the warehouse doesn’t match up and you discover that the team who runs the main database not only runs manual updates but does so ON A DAILY BASIS and never bothers with those updated_date columns … and the entire DW project has in fact been ****ed from Day One…

          Purely hypothetical situation, of course. Oh wait, except not at all.

          • Indeed. I once had a source system for the DWH (Siebel) where “the history was kept at the source”. There was a 3-page explanation on how to handle all of the exceptions…

  • This would save so much overhead at my shop. Upvoted and told my entire team to upvote. I’d also like it to come with a history: some table showing the last X changes or something.

  • I was just thinking about this the other day since we have some triggers for this behavior that are starting to cause problems on our larger tables. We don’t have that many users changing data but enough to warrant having the tracking columns.

  • Ooh, looks like we got a couple of downvotes.

    Downvoters, are you out there? I’d love to understand your point of view. What about the idea do you not like?

  • The “row-version” column tells you the row has changed, so that coupled with a trigger that updates a “date time” column is probably good enough for most cases. Alternatively there’s also the option of converting the “row-version” to an actual datetime value. Not sure how to exactly, BUT it must be possible… hopefully

    • Just add a new field and a trigger to every table?

      Yeah, can’t see how that could possibly be any kind of work or backfire. Especially with third-party apps. By Jove, Tony, I think you’ve solved it! 😉

      • LOL one does what one can, and all that 😉 MySQL has had this feature for a while, but I think it’s the only DBMS that does. I’m actually a developer, not a SQL expert like you (All hail Brent!… No seriously, you’re awesome), so I tend to tackle problems slightly differently, and what people are describing could be viewed, in my opinion, as time series data. So you could just add a line to the UPSERT procedures to populate some new tables, with as much data about the change as you like, and get a much richer solution as a result. I know it’s more work, but it’s more flexible than anything Microsoft could provide “out-of-the-box”. What’s that saying again: “When all you have is a hammer…” 😉
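The audit-table approach in the last comment might look something like this – a sketch with entirely hypothetical table, column, and procedure names:

```sql
-- Hypothetical audit table populated explicitly from each stored procedure.
CREATE TABLE dbo.OrderChangeLog (
    ChangeId   BIGINT IDENTITY PRIMARY KEY,
    OrderId    INT           NOT NULL,
    ChangedAt  DATETIME2(3)  NOT NULL DEFAULT SYSUTCDATETIME(),
    ChangedBy  SYSNAME       NOT NULL DEFAULT SUSER_SNAME(),
    HostName   SYSNAME       NOT NULL DEFAULT HOST_NAME(),
    ChangeNote NVARCHAR(200) NULL
);
GO
-- Inside each insert/update procedure, one extra line after the main
-- statement records who, when, and from where:
-- INSERT dbo.OrderChangeLog (OrderId, ChangeNote)
-- VALUES (@OrderId, N'Updated from usp_UpdateOrderTotal');
```

It’s richer than a single LastUpdatedDate column, but it inherits the same trust problem Adam describes: anyone who updates the table outside those procedures leaves no log row.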