Can you prevent deletes and updates without a WHERE clause from running?

File this under bad idea jeans.

In the tradition of Klaus Aschenbrenner wanting to prevent people from writing SELECT * queries, I thought it might be nice to prevent people from running deletes or updates without a WHERE clause.

In a vaguely scientific way.

Now, I don’t really condone this. It just seemed funny at the time.

There’s no way to intercept a query and check for a WHERE clause, but we can… Aw, hell, let’s just do the demo.

Here’s a table, and a 100 row insert.

Why 100 rows? Because I’m awful at math and using bigger numbers would confuse me.
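The setup screenshot didn't survive, so here's a minimal stand-in. The table and column names (dbo.TriggerTest, id, StringCol) are borrowed from the code in the comments below; the original script may have differed.

```sql
/* a stand-in for the missing setup script; names are assumed */
CREATE TABLE dbo.TriggerTest
(
    id int PRIMARY KEY,
    StringCol varchar(10) NOT NULL
);

/* 100 rows, all with the same string value */
INSERT dbo.TriggerTest (id, StringCol)
SELECT TOP (100)
    ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
    'A'
FROM sys.messages;
```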

And uh, here’s a trigger. Oh boy.
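The trigger screenshot is gone too, so here's a rough sketch of the idea as described below, not the original code. The table name and the 98% threshold come from the post; everything else (trigger name, error text, the use of sys.dm_db_partition_stats for the row count) is an assumption.

```sql
CREATE OR ALTER TRIGGER dbo.WhereClauseReminder
ON dbo.TriggerTest
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    /* deleted is populated for both updates and deletes */
    DECLARE @AffectedRows bigint = (SELECT COUNT_BIG(*) FROM deleted);
    IF @AffectedRows = 0 RETURN;

    /* rows in inserted too means this was an update */
    DECLARE @IsUpdate bit =
        CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 1 ELSE 0 END;

    /* current row count from a system view */
    DECLARE @TableRows bigint =
    (
        SELECT SUM(ps.row_count)
        FROM sys.dm_db_partition_stats AS ps
        WHERE ps.[object_id] = OBJECT_ID(N'dbo.TriggerTest')
        AND   ps.index_id IN (0, 1)
    );

    /* for a delete, add the deleted rows back to approximate
       the row count before the statement ran */
    IF @IsUpdate = 0
        SET @TableRows += @AffectedRows;

    /* system views aren't guaranteed to be up to the second,
       hence the 98% fudge factor */
    IF @AffectedRows >= @TableRows * 0.98
    BEGIN
        ROLLBACK TRANSACTION;
        RAISERROR('Use a WHERE clause next time.', 16, 1);
    END;
END;
```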


If you’re looking at the code, and you hate me, I don’t blame you.

We have an after trigger that takes the row count of the update or delete, and checks it against the number of rows in the table.

If the number of rows affected is within 98% of the rows in the table, the transaction is rolled back, and our end user is reminded to use a WHERE clause.

Why 98%? Well, system views and functions aren’t guaranteed to be up to the second reliable, and I needed to pick a number.

There’s no flag inside a trigger to tell you whether rows were inserted, updated, or deleted, so when you write a trigger that handles more than one action, the only way to tell what happened is to look at the inserted and deleted virtual tables.

If there’s stuff in inserted and deleted, it’s an update. If there’s just stuff in one or the other, it’s… one or the other.
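In code, that check boils down to something like this inside the trigger body:

```sql
/* inside a multi-action trigger: figure out which statement fired it */
DECLARE @Action varchar(6) =
    CASE
        WHEN EXISTS (SELECT 1 FROM inserted)
         AND EXISTS (SELECT 1 FROM deleted) THEN 'UPDATE'
        WHEN EXISTS (SELECT 1 FROM inserted) THEN 'INSERT'
        WHEN EXISTS (SELECT 1 FROM deleted)  THEN 'DELETE'
    END;
```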

Does it work?

Like a charm!

If a charm had no charm whatsoever.
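The query screenshots are missing; the statements that set it off would be the obvious ones, assuming the dbo.TriggerTest demo table:

```sql
/* both hit every row, so the trigger rolls them back */
UPDATE dbo.TriggerTest
SET StringCol = 'B';

DELETE dbo.TriggerTest;
```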

Results in:

As we can see here: Your Mom.



But these two queries run just fine.
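Those screenshots went missing too; something along these lines, with TOP keeping each statement at 97 of the 100 rows:

```sql
/* 97 rows is under the 98% threshold, so these sail through */
UPDATE TOP (97) dbo.TriggerTest
SET StringCol = 'B';

DELETE TOP (97)
FROM dbo.TriggerTest;
```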

Why? They’re only 97% of the table.


Because that’s cool, right?

Is there a downside?

Hell yeah. If you’ve got really big tables, the rollback can take a while.

Rollbacks are single threaded, dontcha know?

And there’ll be blocking.

But it might be better than having to restore the whole database.


But what if I need to hit the whole table?

Batch your modifications, you animal.
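A sketch of what batching might look like here; the batch size is arbitrary, and note that against this particular trigger the final small batch is still nearly 100% of what's left, so it'll get rolled back unless you bypass the trigger or truncate.

```sql
/* delete in small batches instead of one giant statement */
DECLARE @rc int = 1;

WHILE @rc > 0
BEGIN
    DELETE TOP (50)
    FROM dbo.TriggerTest;

    SET @rc = @@ROWCOUNT;
END;
```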

Where am I wrong?

Brent and Tara really kicked the tires hard on this one, and I appreciate it.

My half-baked idea would have been cold and soggy without them.

Now you, dear reader, get to tell me where I’m still wrong.

Thanks for reading!


40 Comments

  • When I saw the headline, “TRIGGER” was my first thought. Horrible idea… but technically it CAN work.
    But now I’m wondering if one could get even more “evil” and, in the trigger, look at a DMV, get the running query from that, look for WHERE in the query, and abort if it’s not there.

    No, I’m not evil enough (right now) to try this.

  • As I’m reading this, I’m mentally compiling a list of things that could go wrong with this idea (would go wrong!). But I see the “Bad Idea Jeans” tag on this post…
    Carry on.

  • Not that I would recommend doing this… but here we go 🙂

    CREATE OR ALTER TRIGGER dbo.UseAWhereClauseNextTime
    ON dbo.TriggerTest
    AFTER UPDATE, DELETE
    AS
    BEGIN
        CREATE TABLE #test (EventType varchar(max), Parameters varchar(max), EventInfo varchar(max));
        INSERT #test
        EXEC ('DBCC INPUTBUFFER (@@SPID)');

        IF NOT EXISTS (SELECT 1 FROM #test WHERE EventInfo LIKE '% WHERE %')
        BEGIN
            RAISERROR('Use a WHERE clause next time.', 16, 1);
            ROLLBACK TRANSACTION;
        END;
    END;


    delete TriggerTest where 1 = 0
    delete TriggerTest where id = 1
    delete TriggerTest
    UPDATE TriggerTest SET ID =1 where ID = 2
    UPDATE TriggerTest SET StringCol = 'B'
    DELETE TriggerTest where id = 1

    select * from TriggerTest
    order by id

    • HAHAHA, I love it.

    • What if the filter is in the ON clause of an UPDATE/DELETE with a JOIN? Or the WHERE clause is WHERE 1 = 1?

      Also, an INSTEAD OF trigger might alleviate some (but not all) of the blocking / rollback concerns. Though a lot of people have a nasty hate on for INSTEAD OF triggers…

  • Apex SQL Complete (one of Apex SQL’s free tools for SSMS) includes a feature that will check for this before executing the SQL Statement, and prompt you to Proceed or Cancel. Works for both UPDATE and DELETE with no WHERE clause. I’m not affiliated with them at all, I’ve just been trying a few of their tools recently.

    • Andy — sure, but that doesn’t help in a lot of situations, you can also click “go ahead and run it” by accident 😀

      • True, it’s not 100%, but until SQL Server starts reading my mind (I’m assuming that feature will show up soon), I’ll take advantage of something that means I have to be stupid twice instead of just once.

  • Without a lot of thinking or examining the trigger closely, can’t the trigger be beaten with…
    Delete from my.table where 1=1

    • Sure, but we’re really only trying to protect against accidental queries. You could even add a manual override like this: NOT LIKE '%WHERE%' AND NOT LIKE '%ALLOW_NO_WHERE%', then just include --ALLOW_NO_WHERE in your code comments.

  • Or you can just use SQL Prompt and that will block update/delete queries without WHERE and even TRUNCATE TABLE statements. 🙂

  • Quote of the Day – “Batch your modifications, you animal.”

  • So why not use the query governor to catch bad stuff before it runs?
    No rollback problems.

    • Brian — interesting. What’s the query governor, and how would you use it here?

      Is there a query constable we can call to arrest the developers, too? 🙂


    • If the estimated execution cost exceeds the 'query governor cost limit' value (set with sp_configure), SQL Server will not run the statement.
      I assume the motivation behind this trigger is to catch bad SQL and stop it.

      Thanks for the posts, Gubner.

    • No, not exactly. The point is to stop data modifications that would potentially hit the whole table.

      That would also stop an expensive enough SELECT.

  • To bypass triggers without disabling them, I use SET CONTEXT_INFO 0x1000 and start my triggers with a check along the lines of IF CONTEXT_INFO() = 0x1000 RETURN.
    This way the trigger is only “disabled” for my current session (until I set CONTEXT_INFO to something else).

  • How do you delete the last row from your table?

    • Counterquestion: when would you want to delete the last row in your orders or customers table? I believe this trigger-based approach is only useful for some important tables, not for every tiny lookup table. And when you really have to delete all customers (because you’re rolling out your development database to production), you could either disable / bypass the trigger or just truncate the table.

    • You have to define “last row”. If you mean “make sure no rows are left” I’d do a truncate table. I believe that will avoid the trigger.
      If you mean last as in it implies an order, keep in mind w/o an order by on a select, a table has no inherent order.

      But also importantly, neither of the triggers suggested are really practical on a production system.
      The real answer on production is never allow anyone to do ad-hoc updates!

      • Greg — you’re right, this isn’t very practical.

        If anything, it’s a good exercise to show just how impractical it is.

    • Slawek — you can still truncate the table. Heh.

  • Randy in Marin
    December 15, 2017 1:27 pm

    How about parsing the input buffer in the trigger?

    ALTER TRIGGER [dbo].[CheckSyntax]
    ON [dbo].[Name]
    AFTER UPDATE, DELETE
    AS
    BEGIN
    DECLARE @rc int = (SELECT COUNT(*) FROM deleted)
    IF @rc = 0 RETURN

    DECLARE @m1 binary(32) = master.dbo.fn_get_request_metric() -- 1st measurement

    DECLARE @buf varchar(max) = (SELECT REPLACE(REPLACE(event_info, char(13), ' '), char(10), ' ') FROM sys.dm_exec_input_buffer(@@SPID, 0))

    DECLARE @isDelete bit = CASE WHEN LEFT(@buf, 7) = 'DELETE ' OR CHARINDEX(' DELETE ', @buf) > 0 THEN 1 ELSE 0 END
    DECLARE @isUpdate bit = CASE WHEN LEFT(@buf, 7) = 'UPDATE ' OR CHARINDEX(' UPDATE ', @buf) > 0 THEN 1 ELSE 0 END
    DECLARE @isWhere bit = CASE WHEN LEFT(@buf, 6) = 'WHERE ' OR CHARINDEX(' WHERE ', @buf) > 0 THEN 1 ELSE 0 END
    DECLARE @isTop bit = CASE WHEN CHARINDEX(' TOP ', @buf) > 0 THEN 1 ELSE 0 END

    DECLARE @m2 binary(32) = master.dbo.fn_get_request_metric() -- 2nd measurement
    DECLARE @delta binary(32) = master.dbo.fn_get_request_metric_delta(@m1, @m2, NULL) -- difference

    DECLARE @block bit = CASE WHEN (@isDelete = 1 OR @isUpdate = 1) AND @isWhere = 0 AND @isTop = 0 THEN 1 ELSE 0 END

    IF @block = 1
    RAISERROR('Deleting or updating while going TOP-less with no WHERE in mind is a bad idea!', 16, 1) WITH NOWAIT;

    EXECUTE master.dbo.sp_request_metric_msg 'check buffer', @delta; -- message

    IF @block = 1
    ROLLBACK TRANSACTION;
    END



    DELETE FROM [dbo].[Name]

    Msg 50000, Level 16, State 1, Procedure CheckSyntax, Line 25 [Batch Start Line 2]
    Deleting or updating while going TOP-less with no WHERE in mind is a bad idea!
    METRIC [check buffer]: elapsed 0 ms, cpu 0 ms, logical 0 pgs, reads 0 pgs, writes 0 pgs
    Msg 3609, Level 16, State 1, Line 4
    The transaction ended in the trigger. The batch has been aborted.

    I’m not doing much for my poor attempt at the syntax check, so the overhead is too small to measure using my metric code. Time for RegEx? (Remove the metric message stuff if you want to try this. I should post about metrics. EF or ORM products could include such a metric with ease so that the dev can get metrics with no work.)

    The new DMF to get the buffer only worked after I applied the latest SQL 2016 CU. Otherwise, it did not seem to exist. Good thing, because I was not going to use DBCC.

    I would hate to have to support code like this. It feels wrong to poorly parse a query that SQL has already efficiently parsed. But hey, “it depends.” It might be the thing to do in a desperate situation.

  • Randy in Marin
    December 15, 2017 1:52 pm

    If you are stuck using backup-restore as undo, then is a horrid trigger any worse? A few custom error messages might be fun. Be ready to explain what a SQL Server “idiot detection” error is to the boss with a straight face.

    • Randy – I don’t disagree, and I’d bet a whole stack of Query Bucks that object level restore is one of the most requested features to bring back.

      • Oh, when you REALLY want it, you can do it. Just create a file / filegroup per (important) table and use filegroup restores :-). Furthermore, some 3rd party tools support reading directly from backup files too (at least from their own format or uncompressed native backups).

        Or you set up a log shipping server (with delayed restore) and use standby restores to be able to read from the second server.

        • Randy in Marin
          January 2, 2018 12:11 pm

          If you already offload a nightly backup-restore-checkdb, then you might have a day old copy of the data that’s good enough.

          • Randy in Marin
            January 2, 2018 12:17 pm

            BTW, I’ve learned to ask if the user/clerk can fix the error by re-entering the deleted data. I’m not amazed when somebody thinks an hour of my time compares to a minute or two of manual entry.

  • As the first officer in “The Hunt for Red October” says to Captain Tupolev: “The Chief Engineer replies that it is possible to go to 110% on the reactor, but not recommended” 😉

  • Is this better than using the row count?
    Adapted from MSSQLTips site. We also set 100 rows as the stage where we no longer allow table clearing, in order to allow it during DB setup:

    -- capture @@ROWCOUNT first; SET statements reset it to 0
    DECLARE @Count int = @@ROWCOUNT;

    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    IF @Count > 100 AND @Count >= (SELECT rowcnt
                                   FROM sysindexes
                                   WHERE name = 'PK_MYTABLE')
    BEGIN
        RAISERROR('MYTABLE: Cannot delete all rows. Disable trigger if required.', 16, 1);
        ROLLBACK TRANSACTION;
    END;

  • Yes, querying sysindexes would be faster than a SELECT COUNT(*); on the other hand, that number can be a little out of date, so it’s not 100% protection. And you may have to query sys.dm_db_partition_stats instead, at least when it’s possible you’re using partitioning.

  • Peter Vandivier
    October 27, 2018 8:14 am

    Just sensibly dropping in here a year later to say that I tried to be lazy and _not_ batch my modifications this week. I’ve had “Batch your modifications, you animal.” stuck in my head for the past 3 days. I had to look this post up just to remember where the damn quote was from. So… thanks for that… ?

    • Peter — sometimes the point of a post isn’t the most important point of a post 🙂