The Worst Database User

SQL Server

“Can you tell me a little about this code?” I asked, keeping my best poker face on.

“I’m really proud of it. It gathers a whole bunch of stuff from accounting, human resources, manufacturing, you name it. I want to be able to track how we’re doing over time.” He nearly hovered in his chair with excitement, eagerly pointing out the different places he was gathering data from. “It’s like having my finger on the pulse of what matters most to the company.”

“That sounds really cool. And there’s some really fine-grained stuff in here – it looks like you really are taking the pulse of the company. How often do you run these queries?”

“It depends – some of them every day, some of them every minute because I need better sampling on those.”

I started to ease into the tough questions. “Every minute, wow. Are you concerned about the overhead?”

“No no, they’re really small queries, only took me a few minutes to write. And it’s not like I’m doing SELECT *,” he laughed.

“Oh, good. And did you index the tables involved? Sometimes adding indexes can really help ease the load of frequent queries.”

He paused for a second. “I never thought about that – but I can’t, right? I can’t add indexes on that kind of thing, can I?” Suddenly he realized he was in unsafe territory.

“No, I guess not. I wonder why that is. But who cares – it’s gotta be worth the cost to get such valuable information, right?”

Your job isn’t to gather metrics you won’t use.

“Oh yeah! You bet.”

“So how often do you query that data?”

He paused again. “Well, not very often. I’m pretty busy.”

“Did you query them today?”

“No.”

“Yesterday, or last week, maybe? Like before you called me in?”

His enthusiasm was gone, and he eyed me with suspicion.

“You’re way too busy to query this data, aren’t you? You’re busy putting out fires, fixing people’s mistakes, and keeping the lights on. You don’t really need this data gathered every minute, across all of your servers, do you?”

See, that user was the DBA, and when I showed him the top resource-using queries in the plan cache, they were all his monitoring queries. Queries that hit tables he couldn’t possibly index, because they weren’t actually tables at all – they were dynamic management views that SQL Server had to compute each time he accessed them. Queries that faithfully stored away data in tables he’d never bother to review.
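
If you want to check whether the same thing is happening on your servers, the plan cache will tell you. Here’s a rough sketch – not the exact query from this engagement – that sorts sys.dm_exec_query_stats by worker time to surface the heaviest CPU consumers, monitoring queries included:

    -- Rough sketch: top CPU consumers currently sitting in the plan cache.
    SELECT TOP (20)
           qs.execution_count,
           qs.total_worker_time       AS total_cpu_time,
           qs.total_logical_reads,
           qs.creation_time,
           qs.last_execution_time,
           SUBSTRING(st.text, 1, 200) AS query_snippet
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;

If your own data-collection queries dominate that list, you’re the user this post is about.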

And now you know the rest of the story.

30 Comments

  • Great point – don’t gather data you’re not going to use, and make sure you’re considering the overhead. Did this person need to be able to reference the data after the fact, so that if something did happen he could review the historical stuff and see what caused it?

    I can think of a few better ways to do it (notably by letting something else do the heavy lifting, like an actual vetted third party tool), but it makes the situation a little more understandable.

  • Dave Wentzel
    March 6, 2014 8:23 am

    Sounds like you consult at my company. We have two third-party perf monitoring tools running on each of our servers, and then a customized version of sp_WhoIsActive running every 15 seconds. It’s customized because our DBAs thought they knew more than Adam Machanic.

    What’s more…if our sp_WhoIsActive finds a “blocking storm” (I still don’t know what that is, and neither does Google…our DBAs defined it), it goes into “hyper-logging” mode (the DBAs’ term as well), where it runs sp_WhoIsActive every second. Well, whatever a blocking storm may be, we’ve been experiencing them almost hourly for years now. And there’s no budget to fix the code, of course. So we monitor and do nothing about it.

    Due to how the “custom” sp_WhoIsActive is written (WHILE 1=1 looping and temp tables), it can consume 50GB of tempdb space unless someone restarts it occasionally.
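
    For anyone who hasn’t run into that pattern, here’s a purely hypothetical sketch of that kind of collector – not the actual customized procedure, just an endless WHILE 1=1 loop stuffing rows into a temp table until someone kills the session:

    CREATE TABLE #ActivityLog
    (
        capture_time        DATETIME2(0) NOT NULL DEFAULT SYSDATETIME(),
        session_id          SMALLINT     NOT NULL,
        wait_type           NVARCHAR(60) NULL,
        blocking_session_id SMALLINT     NULL
    );

    WHILE 1 = 1  -- never exits on its own
    BEGIN
        INSERT #ActivityLog (session_id, wait_type, blocking_session_id)
        SELECT session_id, wait_type, blocking_session_id
        FROM sys.dm_exec_requests;

        WAITFOR DELAY '00:00:15';  -- or every second in "hyper-logging" mode
    END;

    Nothing ever truncates or drops the table, which is how it ends up eating tempdb.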

    Putting this all together means the monitors turn most relatively benign concurrency “events” into rather major performance-sucking issues. I call our DBAs the “HUGs” (Heisenberg Uncertainty Group) in meetings. Those of us in the know chuckle at the joke because we lowly developers can’t change this bizarro monitoring policy. But the DBAs don’t get the joke…one of them got angry once and asked not to be compared with Walter White. Ha ha ha.

    • HAHAHAHAHA, Heisenberg Uncertainty Group, that’s so awesome.

    • A blocking storm? And then increasing the logging in such a situation? I liken that to hosing off my driveway with hot water when it’s -20 outside.

    • Tell me more about this “‘custom’ sp_WhoIsActive”? Is it a modification of my code? Something completely different but with the same name? Something else entirely?

      • I hope they at least had the decency to rename it to something else – something more along the lines of sp_WhoIsSoonToBeInactive. Curious to see what they did too – sure doesn’t sound good.

        • Yeah, just to avoid confusion. Unlike Mr. Ozar I didn’t trademark the name (yet? I’m not sure if it’s worth the effort), but it is commonly enough known that creating your own version and using my name is kind of an a-hole move to your fellow and/or future DBAs.

          • It did cost us around $1k of lawyer time & government fees for the trademarks. There’s probably a cheaper way to do it yourself, but it just came down to time constraints on our part.

            Unfortunately, it’s not common knowledge that adapting your code is an a-hole move. (sigh) Even though it’s trademarked, we still hear from users who take our code, change it (often introducing bugs), and then deploy it on other people’s servers. We even had a couple of small monitoring software companies email us to say they were including “Powered by sp_Blitz” in some of their future code, and thankfully the trademark/copyright stuff helped us out there.

            It’s definitely an a-hole move – but there’s a lot of a-holes out there. (And of course trademarking doesn’t fix that either.)

          • Dave Wentzel
            March 6, 2014 1:15 pm

            I should probably watch what I say lest I get myself in trouble, but they call it sp_whoIsActive2. They kept your header, and the guts look fairly different, but close enough that yours was clearly the inspiration. Never had problems with yours making perf worse…can’t say the same for ours.

            Superbly
            Qualified
            Litigator
            for
            Dysfunctional
            Brainless
            Associates

          • Can you send it to me? I don’t want to take any action or get anyone in trouble. Just want to see how my work was re-interpreted. Maybe I’ll be inspired by something 🙂

  • One time we created a table for some application and asked the devs if we needed to create an index. The dev told us to create a non-clustered index on every column of the table.

    • Dave Wentzel
      March 6, 2014 9:49 am

      I was kidding when I told you to index every col.

      Seriously though, my comment was not meant to foster a “devs vs DBAs” flame war…I would tend to agree with you…devs make plenty of boneheaded decisions too. It was just meant to add to Brent’s allegory on monitoring.

      • The key to the Dev/DBA relationship is knowing your role and what each of the team members is best at, and then staying in your lane. The problems arise when you have a gap in experience or expertise – that’s when “fit hits the shan”. Nobody really knows, but they take their best crack at it – at which time the database server dies a slow death.

    • Eric Russell
      March 8, 2014 9:08 pm

      The next time someone tells you to index every column, just tell them:
      “Thanks for the advice, but I’ve got it covered. It has a clustered index that includes every column.”

  • OK, that was a little confusing. At the beginning he talked about gathering data from accounting, etc., then it switched to talking about DMVs… Maybe it’s just me. Still, I get the point.

  • Eric Russell
    March 8, 2014 9:47 pm

    I’ve seen the “WITH RECOMPILE” hint added to DMV queries. This forced plan recompilation can itself cause high CPU utilization if used on any complex statement that’s executed frequently (like every second). At a SQL Saturday event, I asked a presenter why he had added the hint to every one of his DMV queries, and he said… “Good question, I can’t recall”.
    So, DMV queries and the RECOMPILE hint – does anyone here have thoughts on this, or know the back story of how it became fashionable in certain circles?

    • People add the WITH RECOMPILE hint so that their monitoring query doesn’t show up as a high resource-using query in the plan cache. Recompiled queries do not pile up metrics in dm_exec_query_stats.

      I have mixed feelings about that. Sure, if you’re a consultant, you don’t want your own queries to look like the worst ones. However, if they ARE the worst ones – like I discussed in the post – then you probably need to ease off and run them less often.
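
      Just to illustrate the pattern being discussed – this is a generic sketch, not anyone’s actual monitoring query – the statement-level form of the hint looks like this; because the plan isn’t cached for reuse, the query never accumulates totals in sys.dm_exec_query_stats:

      -- Generic example: a lightweight activity check that keeps itself
      -- out of the plan cache via the statement-level recompile hint.
      SELECT r.session_id,
             r.status,
             r.wait_type,
             r.blocking_session_id,
             r.cpu_time,
             r.total_elapsed_time
      FROM sys.dm_exec_requests AS r
      WHERE r.session_id <> @@SPID
      OPTION (RECOMPILE);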

  • Brent, I am sorry to say this was a very disappointing article, and it has a tone I am not used to hearing from you. I get that there were some monitoring queries the DBA was running that were causing a performance problem.
    However, unlike the blog posts and other things you have made in the past that are invaluable to me, you never once mentioned what they were or how it could be done better.
    I can feel a strong hint that the biggest issue was that they were run too often.
    I feel an even stronger one that you did not like his responses or attitude.
    There is not even a small hint of any recommendation for resolution.
    What are your recommendations for queries against DMVs?
    Which ones will cause pain, and how can you do good monitoring without feeling this pain?
    How can I make the built-in queries in the SSMS monitoring tool stop showing up in the top queries on a server?

    One last thing I would like to point out as a missed opportunity for anyone.
    There are always going to be queries in the top ten longest-running or most resource-using lists.
    How do you prove that they are causing a real issue with performance?
    The first argument from most vendors and product support teams is that something has to be at the top.

    • Howdy! Sorry to hear that you didn’t like the post.

      You’ll be happy to know that this conversation didn’t happen in real life. I would never have a conversation, then turn around and transcribe it on my blog. Instead, I’m using a storytelling technique to teach a lesson here: think of yourself as just another user. Do you really need the data you’re gathering? Do you need it at that frequency? Are you being mindful of system resources when you gather it?

      Sometimes my posts just have questions, not answers.

      And it turns out that’s a great way to teach, too.

  • Brent, I completely disagree with the last comment here, and I found this very helpful to keep in mind. On my blog, someone just asked me how often I check sys.dm_os_performance_counters and, along with giving similar advice, I gave them a link to this post.

    This goes beyond the generic incomplete “it depends” answer everyone hates about DBAs and gives a complete answer instead. It depends on how expensive it is and, more importantly, at what level you’ll actually use it.
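
    For what it’s worth, a typical check of that DMV is cheap in itself – something along these lines, where the counter names are just examples:

    SELECT object_name, counter_name, instance_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (N'Page life expectancy', N'Batch Requests/sec');

    The real cost question is how often it runs and whether anyone ever looks at what it collects.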

    Thank You

  • Nice article. One time I was checking the performance of a customer database, and the heaviest operations during the day were four instances of Spotlight for SQL being executed by the IT crew. Quite funny.

    • Don’t be fooled by cumulative statistics. I’ve been using Spotlight for many, many years, and I’ve heard that from developers who don’t really understand the statistics. Spotlight, for the most part, is pretty light, as it just queries DMVs and does WMI calls.

      • Allen – I hear you, and I hate to well-actually you, but I used to work for Quest, and that wasn’t always the case. For example, there was a Spotlight version before Enterprise that did all kinds of polling from each workstation running the software – including running traces. The impact was actually really high.

        • I don’t endorse developers (or an “IT Crew”) leaving it open indefinitely, but I personally have it open all day long. It’s a DBA tool. It has saved me light years of DBA-hours, as I’m sure you know. I was actually the only IT person to get three monitors – one for Spotlight alone. It’s my left arm, for the most part. I realize it contributes to OLEDB waits and all that jazz, but for the most part it’s a solid tool. Did you contribute to its development, by chance? If so, thanks 😉

