Fun With Logging Dynamic SQL


While working on a demo, I thought this might make a fun aside to share on the blog, because working with dynamic SQL can be challenging. If you don’t have a monitoring tool or application profiling to tell you who ran what and when, most of the time you’ll have no idea what went wrong.

For example, the dynamic SQL that my query produces (sometimes) looks like this:
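The original post showed the generated query as a screenshot, which didn’t survive here. A sketch of what it might look like — the table and parameter names are hypothetical, borrowed from the Stack Overflow-style names discussed in the comments below:

```sql
SELECT TOP (@i_Top)
       p.Id, p.Title, p.CreationDate, p.Score
FROM dbo.Posts AS p
WHERE 1 = 1
AND   p.PostTypeId = @i_PostTypeId
AND   p.OwnerUserId = @i_UserId
ORDER BY p.Score DESC;
```

The `WHERE 1 = 1` is a common dynamic SQL convention: every optional predicate can be appended as `AND ...` without tracking whether it’s the first one.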

Notice those two variables in the WHERE clause? Part of how parameterized dynamic SQL protects you is by not sticking whatever stinky nonsense users submit directly into your dynamic SQL block. That’s also what makes it challenging to troubleshoot. Sure, you can head to the plan cache, but you may not find anything there.

Logging Tables

If your company is too cheap to get you monitoring tools, well, that’s another discussion. If you want to figure out why dynamic queries are slow anyway, here’s one option. Without showing you all of what my dynamic SQL does, I’ll show you how the logging works. It starts with a table!
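The table definition was an image in the original post. A minimal sketch of what such a logging table could look like — the table and column names here are my own invention, covering just the basics (when it ran, who ran it, and the command text):

```sql
CREATE TABLE dbo.DynamicSQLLogger
(
    Id bigint IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED,
    RunDate datetime2 NOT NULL DEFAULT SYSDATETIME(),  /* when it ran */
    RunBy sysname NOT NULL DEFAULT SUSER_SNAME(),      /* who ran it */
    Command nvarchar(max) NOT NULL                     /* the dynamic SQL text */
);
```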

You can add in more stuff if you want; this is just the bare minimum I’d collect.

Now, in the stored procedure, you can do something like this.
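The original code block is missing here. Assuming the logging table sketched above, the simplest version is to insert the command text right before executing it — again, the parameter names are hypothetical:

```sql
/* Inside the stored procedure, after @SQL has been assembled: */
INSERT dbo.DynamicSQLLogger (Command)
VALUES (@SQL);

EXEC sys.sp_executesql
    @SQL,
    N'@i_Top int, @i_PostTypeId int, @i_UserId int',
    @i_Top = @Top,
    @i_PostTypeId = @PostTypeId,
    @i_UserId = @UserId;
```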

But all that will log is your dynamic SQL with the variables. What I prefer to do is spin up a placeholder that I can replace values in without messing with the dynamic SQL to be executed. Since this command will never be executed, I can use REPLACE to figure out what my values are.
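The REPLACE code was also an image. A sketch of the idea, assuming the same hypothetical parameter names: copy the command into a printable variable, swap each parameter placeholder for its runtime value, and log that copy instead. The commenters below mention the ISNULLs, so NULL handling is shown here too:

```sql
/* A copy that will never be executed -- safe to mangle with REPLACE */
DECLARE @SQLPrint nvarchar(max) = @SQL;

SET @SQLPrint = REPLACE(@SQLPrint, N'@i_Top',
    ISNULL(CONVERT(nvarchar(11), @Top), N'NULL'));
SET @SQLPrint = REPLACE(@SQLPrint, N'@i_PostTypeId',
    ISNULL(CONVERT(nvarchar(11), @PostTypeId), N'NULL'));
SET @SQLPrint = REPLACE(@SQLPrint, N'@i_UserId',
    ISNULL(CONVERT(nvarchar(11), @UserId), N'NULL'));

INSERT dbo.DynamicSQLLogger (Command)
VALUES (@SQLPrint);
```

The executed command still goes through `sp_executesql` with proper parameters; only the logged copy has values substituted in.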


I double logged my stored proc calls so you can get a better idea of what happens. Here’s what my logging table looks like:


You can already see one spot where an optional @Top was substituted with the actual number “the user” asked for. Don’t give me any application security guff here, I log in as sa because I am the app and the app is me.

Zooming in on where the other variables live in the WHERE clause, now we can see which ones were called.

And these are such happy days!

This can pose some security risk. If some malicious person were to realize that you’re logging these calls, they could change your code to execute the printable version instead. If security is a concern, you may want to offload the logging code to other stored procedures.
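One way to do that offloading, sketched with made-up names: keep the INSERT behind its own procedure, so the main proc never touches the logging table directly and permissions can be managed separately.

```sql
CREATE OR ALTER PROCEDURE dbo.LogDynamicSQL
    @Command nvarchar(max)
AS
BEGIN
    SET NOCOUNT ON;
    INSERT dbo.DynamicSQLLogger (Command)
    VALUES (@Command);
END;

/* In the main procedure, instead of inserting directly: */
EXEC dbo.LogDynamicSQL @Command = @SQLPrint;
```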

If you want to really ball out on this one, you could even frame the replaced values in special delimiters like ‘||’ and then parse those out as well. But I’ll leave that as an exercise for the reader.

That’s you.

Thanks for reading!


9 Comments

  • Hi Erik,

nice blog post. I especially love the statement “I log in as sa because I am the app and the app is me”. Maybe I should rephrase it to “I am the DBA and the DBA is me” while dealing with stubborn users :-).

    All the best

    • Erik Darling
      April 12, 2017 7:18 am

It’d be even more impressive if you were on a DBA team and did a whole Rockettes style high-kick dance while singing “we are the DBAs and the DBAs are we” — mind your tights don’t rip though.

  • Patrick Dadey
    April 16, 2017 9:06 pm

Hey Erik, sounds like you are @one with the app :-).
    On another note it is good to see someone else use WHERE 1 = 1. Is that an artefact of dynamic SQL or do you use it out of convention?

  • I did similar things about 15 years ago. It helped me prove that users almost never supplied input for any of the dozens of parameters on the “Swiss Army Knife” proc; and by adding a call at the end to sp_user_counter1 we could directly expose that procedure’s execution time through perfmon rather than keep a profiler trace running on prod all the time. Good times.

    • Erik Darling
      April 27, 2017 7:37 am

      Hahaha, isn’t it funny how almost nothing has changed or improved around how dynamic SQL gets used in… forever?

  • Not sure about the REPLACE, but maybe I’m over reacting … I mean, it must take some measurable time to do a REPLACE on Ahem! 60K of dynamic SQL, and maybe the ISNULLs cost some too … although I have never tested how much CONCAT costs 🙂 but I favour CONCAT so that I can store @parameters separately to @command in my Logger table

@Parameters = CONCAT('@i_PostTypeId=', @PostTypeId, ', @i_UserId=', @UserId, ', @i_Top=', @Top)

    in part because I would like to see the dynamic SQL as-was to avoid possible confusion over any REPLACE modifications, I do have to watch loss of precision on some datatypes using CONCAT

    On the occasions when I care about whether a string parameter is EMPTY or NULL I can either do

CONCAT('@s_Name=', '''' + @s_Name + '''')

    and realise that EMPTY means NULL, given that is what CONCAT is going to do for the non-string params, or

CONCAT('@s_Name=', ISNULL('''' + @s_Name + '''', 'NULL'))

    but if I want oven-ready parameters for sp_executesql I would need to use ISNULL for all of them which, unless you have a crafty idea, means doing type conversion casts for all non-strings

@Parameters = CONCAT('@i_PostTypeId=', ISNULL(CONVERT(varchar(20), @PostTypeId), 'NULL'), …

Hi! I’m not sure if you’re asking a question – if this is a question, can you post it on a Q&A site? Thanks!

      • Appreciate you taking the time to reply, thanks, and sorry my comment was badly worded.

        It was a suggestion to LOG Parameters separately from Command as I think only storing the Command (with REPLACE) may make solving some problems more challenging due to loss of detail when checking the LOG later.

