[Video] You shouldn’t dynamically create/alter/drop triggers.

T-SQL

Let’s get one thing out of the way first: this isn’t about the relative good or evil of triggers. I kinda like triggers, in fact – they’re good for enforcing business logic in a hurry when you can’t redo code across an entire application. Sure, in a perfect world, we’d always have the code do whatever the code needs to do – but there are times when you can’t, and triggers come in handy then.

Just as you need to be judicious about what you do inside a trigger, you also need to be careful about when you actually deploy one, and you should probably never dynamically create/alter/drop a trigger. See, when you create, alter, or drop a trigger, you’re changing the table’s structure, which requires a schema modification (Sch-M) lock on the table – a higher level of locking than you might expect, and one that blocks everybody else’s queries against that table while it’s held (or while it’s waiting to be granted).

To demonstrate it, I’m going to create a trigger on the Votes table in the Stack Overflow database while end user activity is happening, and show the blocking:

Here’s the demo code that I use in the video:
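The demo code itself isn’t reproduced in this post, but a minimal sketch of that kind of repro – assuming the Stack Overflow database and a hypothetical dbo.Votes_Audit trigger – looks something like this, run across three separate sessions:

```sql
/* Session 1: simulate end user activity holding locks on dbo.Votes */
BEGIN TRAN;
UPDATE dbo.Votes SET BountyAmount = BountyAmount WHERE Id = 1;
/* Transaction intentionally left open: it holds an IX lock on the table */

/* Session 2: try to deploy a trigger. This needs a schema modification
   (Sch-M) lock on dbo.Votes, which conflicts with session 1's IX lock,
   so it waits - and everyone who arrives after it queues up behind it. */
CREATE TRIGGER dbo.Votes_Audit ON dbo.Votes
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    /* placeholder business logic */
END;
GO

/* Session 3: even a simple read now queues behind the pending Sch-M lock */
SELECT TOP 1 * FROM dbo.Votes;
```

Once session 1 commits or rolls back, the trigger deploys and the queue drains – but until then, every new query against the table sits blocked.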

So what should you do instead? Treat trigger changes just like any other deployment of a table change, like adding or dropping a column:

  • Try to schedule it off-hours during a low-load period.
  • Consider the blocking implications of changing the object.
  • Only do it when there’s a human being around to watch what’s happening, and cancel the deployment if it’s causing a huge disruption.
  • And maybe most obviously, only do it once. Don’t automate the same change over and over if you can avoid it – it’s just too disruptive.
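When you do deploy by hand, you can add a safety net so a busy table makes the deployment fail fast instead of blocking everyone behind a pending Sch-M lock. A sketch (the trigger name and timeout value are illustrative, not from the post):

```sql
/* Session-level safety nets before touching the trigger: */
SET LOCK_TIMEOUT 5000;      -- give up after 5 seconds (error 1222) instead of queueing forever
SET DEADLOCK_PRIORITY LOW;  -- if a deadlock happens, sacrifice this session, not user queries
GO
/* Hypothetical trigger deployment - if the Sch-M lock can't be
   granted within 5 seconds, this errors out and you try again later */
CREATE OR ALTER TRIGGER dbo.Votes_Audit ON dbo.Votes
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    /* business logic goes here */
END;
GO
```

Both SET options persist for the session across batches, so they still apply when the CREATE OR ALTER runs – and if the timeout fires, nobody behind you was blocked for more than those 5 seconds.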

10 Comments

  • Phil Dietrich
    January 7, 2020 9:16 am

    Is the table locking behavior the same for enable/disable on triggers as well?

  • I can only imagine the events that took place that prompted you to write this one, Brent. 😀 I never even considered doing such a thing.

  • Kevon Houghton
    January 7, 2020 10:57 am

    Given the locking nature of altering triggers, are you suggesting applying the changes during “low usage” times, or scheduling maintenance windows to ensure no activity, or???

  • I learned this the hard way a couple of years ago. Deployed an update trigger on a busy table (took a few attempts using a low deadlock priority and a short lock timeout). However, once it eventually deployed, it fell into a bad plan, and 4ms updates turned into 5,000ms or higher. Active sessions went from ~20 to ~1,000. 32 CPU cores maxed out. Customer traffic was still being served, but latency was massive. Every time I tried to drop the trigger, it would block everything writing to the table. After 30 seconds of waiting, customers would start getting timeouts, so I couldn’t let it wait for too long.

    Our server ran at 100% for 5 hours until load naturally reduced at the end of the day – enough to eventually let us drop the trigger.

    The trigger is back in live, this time with query hints to prevent the bad plan, and a circuit breaker (checking for the presence of a global temp table) that lets us stop the trigger from doing anything if it ever goes bad in future.
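    The circuit-breaker idea described above can be sketched like this (the trigger and temp table names are hypothetical, not from the comment):

    ```sql
    /* Circuit-breaker pattern: the trigger checks for a global temp
       table and exits early if it exists, so its logic can be switched
       off instantly - without an ALTER/DROP, and without taking a
       Sch-M lock on a busy table. */
    CREATE OR ALTER TRIGGER dbo.Votes_Audit ON dbo.Votes
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        /* Kill switch: run CREATE TABLE ##DisableVotesTrigger (i INT);
           from any session to disarm the trigger on the fly */
        IF OBJECT_ID('tempdb..##DisableVotesTrigger') IS NOT NULL
            RETURN;

        /* trigger business logic goes here */
    END;
    ```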

