Consultants: want a utility to gather SQL Server data?

When you’re a consultant, you need to get diagnostic data from your clients.

When you’re talking to a new sales prospect – do you waste a lot of time asking basic investigation questions like what version they’re on, how much data they have, what kinds of workloads they have? You know they’re not really giving you accurate answers because often they just flat out don’t know.

Instead, imagine just emailing them a diagnostic app that gathers data and packages it into an Excel spreadsheet they can send you. You can review their server before your sales call even starts, and you can give them real value right from your first interaction.

When you’re working on a client’s SQL Server – do you waste a lot of time running diagnostic queries, saving their outputs to a file, and then trying to shuffle that data around from place to place? Do you feel like you’re constantly having to update your diagnostic queries each time you find a new problem, and you don’t really wanna be in the writing-basic-scripts business anymore?

Instead, imagine the diagnostic app running deeper diagnostic queries, like gathering the customer’s top queries by reads, CPU, duration, executions, and more, saving those execution plans out to files that you can review later. You’ll spend less time hitting F5, and more time doing the kinds of analysis that your customers really pay you for.

When a client has an emergency and you’re not around – isn’t it frustrating when they ask you, “hey, SQL Server was slow yesterday around 6pm – do you know why?” You ask them questions, but…of course they didn’t gather any diagnostic data, or if they did, it’s a useless Profiler trace with no insight. You can’t give them answers, and everybody’s frustrated.

Instead, imagine the customer running this diagnostic app whenever they have problems, and emailing you the spreadsheet saying, “Hey, when you get time, can you analyze this and see why our SQL Server was slow at this moment?” The data would include running queries, wait stats, memory metrics, and more.

When you hit a wall and you need help – do you email your network and kinda fumble around, not sure how to pass the data around? Everybody seems to use slightly different ways of gathering and viewing data, and it’s really hard to interpret someone else’s data.

Imagine being able to pass a standard spreadsheet around to your network – after redacting client-sensitive stuff – and asking, “Hey, you’ve seen this kind of data before – does anything about these metrics look weird to you? What would you do next in my shoes?” And imagine being able to just book a spot on my calendar to get that second opinion.

This app will make you more money.

This was the very first app I had Richie build when he started with us, and it’s been a core part of our business for years. It shaved hours off every consulting engagement we had and helped us do a better job every time. I can’t imagine consulting without it, and I’m excited to share it with you to see what you think.

To get it, sign up for a free trial of the Consultant Toolkit, and the download link will be on your receipt and in your My Account page. Extract it to your local drive, like c:\temp. You don’t have to run this on the SQL Server itself – you can run it from any desktop or laptop with at least .NET 4.5.2. Then, to gather diagnostic data on one of your servers, go to a command prompt and run:
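
(A quick note on the examples below: AppName.exe is a stand-in for the real executable name – more on that in a minute – and while the --datasource and --deepdive switches show up elsewhere in this post, the SQL authentication switch names shown here are illustrative, so check the readme for the exact spelling.)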

Windows authentication:
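AppName.exe --datasource YourServerName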

SQL Server authentication:
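AppName.exe --datasource YourServerName --username YourLogin --password YourPassword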

If you use tricky characters in your server name, user name, or password – like quotation marks or spaces – you’ll need to surround that value with double quotes. For example, if my password is @#!, I’d pass it as “@#!”.
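
So a full command with a quoted password might look like this (same caveat on switch names as above):

AppName.exe --datasource YourServerName --username YourLogin --password "@#!"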

It’ll take about 5-15 minutes to run a bunch of DMV queries, and when it’s done, check out the Output folder. There’s a zip file with your server’s data.

By default, it runs a relatively limited set of queries. When you’re working hands-on with a client’s server yourself, and you want even more, add --deepdive (that’s two dashes, not one) as a parameter, and it’ll run more. If you go into the Resources folder, you’ll see the queries it runs: things like sp_Blitz, sp_BlitzCache, sp_BlitzIndex, and other utility queries we’ve written over the years.
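
So a deep-dive run against a server looks something like this:

AppName.exe --datasource YourServerName --deepdive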

“Wait, what’s with this AppName.exe stuff?”

The real app name is shown in the readme file, but I’m purposely not mentioning it here, dear reader, and I never will. (You’ll see it in screenshots here from time to time, but never typed out.)

See, if you’re a consultant, you’re going to be giving this app to your clients to run for diagnostic purposes. I don’t want your clients Googling the app’s name and finding me. This is your tool, the thing that’s gonna give you an edge over the amateurs who are still copy/pasting data out of SSMS.

Now, back to what the app does.

You get in-depth diagnostic data that makes you look like a SQL Server surgeon.

You can download a sample zip file from my lab. Here’s what the output folder looks like after a run:

The output includes a spreadsheet plus query plans

The top line is a log file with any error messages from the app.

The second line is a spreadsheet chock full of diagnostic data organized in tabs.

The remainder are the most resource-intensive query plans as reported by sp_BlitzCache. (The query metrics are in the spreadsheet.)

However, note that in the output, we’re NOT showing things like sp_Blitz’s URL column – because it links back to BrentOzar.com. When you hand this app to a client, trust me, they’re going to run it and look at the spreadsheet contents. They’re going to be amazed at the amount of great diagnostics you’re getting. However, I don’t want them seeing links back to me – again, this is your app. (Besides, as a trained SQL Server professional, you already know where to find more information about each of those warnings anyway. You’re here, reading my stuff. You’ve already found me. Carry on.)

These queries support SQL Server 2008 and newer. On older database compatibility levels, some of the queries will throw errors as the app runs, but you’ll still get a lot of useful data.

It’s by no means a monitoring app – if you’ve got $1,000 per server to spend, I’d highly recommend a full-blown monitoring application. Those will get you cool features like email alerting, historical playback, and a database repository. This is a much more lightweight, one-off utility designed to solve the 3 specific use cases at the top of this post – especially in situations where clients just don’t have the budget or approval to put in monitoring tools.

This app is our new Consultant Toolkit.

We’re offering a free trial during our early access period, and the download link will be on your receipt and in your My Account page.

The toolkit includes documentation on how to best use it as part of your pre-sales process, build a health check, and give your clients instructions for running it during an emergency when you’re not around.

Get it, play around with it, and let us know what changes & improvements you might need in order to use it during your daily consulting work by emailing us at Help@BrentOzar.com.


56 Comments.

  • alen teplitsky
    January 8, 2019 8:38 am

    Looks like pssdiag from Microsoft

    used it last year and it took me a month to track down what was running a specific query that needed new indexes

  • Registration for webcast doesn’t _appear_ to work. I.e. nothing happens when you click “Register”

  • André Kamman
    January 8, 2019 11:02 am

The parameter -deepdive should be --deepdive or it will think -deepdive is part of the datasource param and won’t connect

  • Just a heads up… upon running, right at the beginning of the script it’s missing info. My second line after executing reads:
    Debug – Changing the log level from to info for console
There’s missing info in there… Not critical for the app, but just housekeeping stuff. As I go through the data I’ll let you know if I see any other items. Everything else looks great so far Brent!

  • Henrik Staun Poulsen
    January 9, 2019 1:33 am

    Great tool, even for employees. I hope it is not going to be too expensive.

Are there any plans to sell this to European-based consultants/consulting companies? I remember reading that you had stopped selling things to Europeans because of GDPR.

    • That’s a great question! Yes, but when we stopped selling to the EU, I told myself I wouldn’t revisit it until one year after enforcement started. In May, I’ll do another pass through our 3rd party partners and see if they’ve made enough progress, and what changes I would need to make to our business to make it a delightful experience for EU folks. We’ll start sales again to the EU at some point, it just won’t likely be in the next 6 months.

  • Daniel Thompson
    January 9, 2019 6:33 am

    Hey Brent, as usual a great tool to add to my tool-belt! I’m pleased to report a lot of the queries I run are contained in the output & I learnt them from your Perf. Tuning in-person class in Philly! 🙂

Are you planning on selling this simply as an app where consultants / DBAs will analyse the results themselves? Or as per your readme file, grab the zip file, we send that to you, for you guys to analyse?

    As usual I’m in the UK so I will be following your progress regarding selling again to the UK! We miss you Brent & Co! 🙂

    • Daniel – thanks, glad you’re getting mileage out of that! Yes, this will just be an app where folks analyze their own results. I wouldn’t want you to have to pay for my time until you actually need it. That readme file is actually for YOUR customers to send the data to YOU. My goal is for your customers to see this as a value you’re providing, coming from you.

      You *will* be able to buy time from me to analyze a file though – I’ll have a mechanism set up so that you can send me the (client-cleaned-up) data, have me look at the metrics, and then do, say, a 1-hour call to talk through it and give you a second opinion. But my goal is to set up a network of consultants where people can get advice from each other on how to improve performance on a given server.

      • Daniel Thompson
        January 9, 2019 7:15 am

        Thanks Brent! I love your goal of setting up a network of consultants. I may have to leave my current company and become a consultant. 😉 Cheers D 🙂

  • Sayed Hosny Elshemy
    January 9, 2019 1:34 pm

    Very good

  • James Horsley
    January 10, 2019 8:41 am

    This seems to be a great tool … but one thing it can’t handle is disks mounted to mount points – it gets the disk wrong (so space not shown correctly and it is bleating about databases on the C: drive that are really on mount points on the C: drive)

    It would be great if this could be fixed – because monitoring space use on disk mounted via mount points is a pain – so a tool that reported correctly would be a big plus.

    • James – thanks! This definitely isn’t an OS monitoring tool though. If you want sysadmin-level monitoring for things like free drive space, there are a lot of other tools that are a better fit.

      • James Horsley
        January 11, 2019 7:55 am

I agree re monitoring, and can see that the way it is collecting the disk info it would be hard for it to know – it is hard enough to see from Windows itself – and even things like Resource Monitor in Windows get confused with mount points. With VMs, lots of separate disks for groups of databases are a useful tool to target them to appropriate-speed disks/LUNs, so old archive DBs can drop off to slower storage – without any change to the SQL setup itself.

One other thing – for what I can only assume was some sin in an earlier life, I support a client who uses Microsoft Dynamics GP for accounts. Not sure if you have ever stumbled across GP databases, but they “derive” from an old BTrieve file-based (on DOS) system – so all tables have 8-character names (because of the old 8.3 DOS filenames), and almost all the tables are heaps. GP advice is not to go and add the obvious clustering to the PKs – but in my experience, when you hit something performing slowly in GP, ignoring that advice is actually usually the sensible thing. Anyway – it means at the moment that when your checks hit the index checks it dies with timeouts, probably because it is finding so many to report on. Is it just a matter of changing the timeout values in querymanifest.json to give it a bit longer to do the index check?

        • James – yeah, that’s something you’re best off using a traditional monitoring app for.

About the timeouts – sure, you can edit the timeout, but what I’d rather do is figure out how to get sp_BlitzIndex to respond faster. It may be a matter of identifying which query runs slowly, then filing an issue in the First Responder Kit repo – http://firstresponderkit.org – so I can tune that particular query, or add a bypass if there are more than, say, 10,000 heaps. For example, right now we have bypasses if we discover more than 100 partitions.

  • Kevin G. Boles
    January 11, 2019 7:27 am

    1) MAJOR KUDOs for releasing this!!

    2) Can you name the output folder to include the server\instance name in addition to the date/time stamp?

    3) Please put my name in the hat as a consultant to be added to your resource pool for interpreting results.

    4) Is there a way to disable the TopQueryByMostRecentCompilation?

    5) Is there a generic way to disable unwanted checks? Hmm, probably editing the querymanifest.json?

    6) What information is TopQueryByPlanQuantity meant to provide?

    7) Why do the various plan sequence numbers have gaps and value curiosities? Is this meaningful?

    8) Is there a better way to provide feedback/suggestions than this thread?

    9) Note that all of the above was just from running default against local SQL Server. I haven’t actually opened anything yet, so I may stumble across answers when I do. 🙂

    • 1) Thanks!

      2) It does when you connect with a server/instance name – sounds like you ran it with no parameters.

      3) That’ll be driven by the customer list, basically. (You’ll need to be a current subscriber.)

      4) No.

5) Yeah, editing querymanifest.json, but of course that’ll cause problems the next time you get an updated version. I’m aiming to bring out new versions every 1-3 months to coincide with new First Responder Kit versions. I totally see the value in adding a second querymanifest.json file so people can add their own custom queries, but I don’t think we’d add the ability to skip existing queries. The problem is that if you need to pass the spreadsheet around to someone else to get a second opinion, they’re going to need the data.

      6) The top queries by the number of plans they have in the cache: https://www.brentozar.com/archive/2018/03/why-multiple-plans-for-one-query-are-bad/

      7) Not sure what you mean there, may need to elaborate.

      8) Yes, when we go live with the paid version, there will be a support forum inside the site.

      9) OK, cool!

  • Kevin G. Boles
    January 11, 2019 8:53 am

2) I used this to call the app:

Appname.exe --datasource indiciumlaptop6\sql2016

    and directory name was:

    20190111_090723

    I note the ZIP file DID have the server/instance name. But that isn’t what consultants will be using. They will be going directly to the folder and working from there. Well, I work directly on my client’s systems the vast majority of the time anyway.

    7) sampling of file names, starting with first in sequence:

    TopQueryBy_CPU_1__20190111_090827.sqlplan
    TopQueryBy_CPU_4__20190111_090827.sqlplan
    TopQueryBy_CPU_8__20190111_090827.sqlplan

    TopQueryBy_LogicalReads_3__20190111_090836.sqlplan
    TopQueryBy_LogicalReads_6__20190111_090836.sqlplan
    TopQueryBy_LogicalReads_7__20190111_090836.sqlplan

    TopQueryBy_MostRecentCompilation_28__20190111_090934.sqlplan
    TopQueryBy_MostRecentCompilation_29__20190111_090934.sqlplan
    TopQueryBy_MostRecentCompilation_30__20190111_090934.sqlplan

    TopQueryBy_PlanQuantity_3_20190111_090750_0x0841162AB41CB393.sqlplan
    TopQueryBy_PlanQuantity_4_20190111_090750_0x03B260D186489B0A.sqlplan
    TopQueryBy_PlanQuantity_7_20190111_090750_0x01A153C62097A86C.sqlplan

    • 1) OK, cool. The current behavior is the way it’ll ship. I hear ya, but I don’t see us changing that one – if you’re working directly on the client’s system, then you know the server/instance name. 😀

      7) Oh sure, those line up with the spreadsheet – TopQueryBy_LogicalReads_3 is the #3 query by logical reads. You can see that in the LogicalReads tab of the spreadsheet, and you can see which ones didn’t get plans retrieved. (There are a lot of reasons why SQL Server doesn’t store some plans in the cache.)

  • Kevin G. Boles
    January 11, 2019 10:00 am

    1) Not when you run it against N different instances from the one VM they give you for your work machine. 🙂

    7) Ahh, no plans retrieved! Makes perfect sense (now). Thanks.

1) Sure, then the easy way is to just expand the zip files after they’re created. They should be created with folder names that match the zip file (with most of the unzip tools I’ve used).

      I totally hear where you’re coming from, we’re just not changing that one for the release. (I wanna focus on high-value stuff.)

  • Kevin G. Boles
    January 11, 2019 10:13 am

    Definitely down with high-value fixes, and clearly this is small potatoes. 🙂

  • Darron Chapman
    January 14, 2019 7:34 am

    I would hate to fall in love with a tool only to find out my boss is unwilling to spring for it. Can you give me some idea about pricing?

    • Darron – great question, and I haven’t decided yet, still running pricing experiments. It’s targeted at consultants where the faster they can help solve problems, the faster they can deliver value. (Going by your email address, I don’t think this is a good fit for you – you’re probably better off with traditional monitoring products.)

  • Hi Brent, the consultant tool download link is broken.

  • Yvette Leijten
    January 15, 2019 11:40 am

Hi Brent, the tool stops working after 9 days instead of the 90 days… (-:
It is a great tool!!

Hi, here’s my error:
Unhandled Exception: System.Exception: This assembly has been built with SmartAssembly …….. which has expired.

  • Yes it works 🙂
    Thanks a lot Brent

  • This is very cool thanks Brent!

  • Looks really useful, thanks!

    Ran this on my dev box and the SQL Server CPU Utilisation averaged around 25% whilst it was running. Just sharing in case anyone else (like me) was sweating over whether or not to run this on a Production box.

Dual Core with HT (Intel i7-7500U CPU @ 2.70GHz). Not exactly a beast but largely does the job.

    I’m going on-site in a few days to do some consultancy and was debating whether it’s safe to run this on my customer’s production server during peak time, or whether I’m best to schedule it to run out-of-hours before I arrive.

    The data captured appears to be split between data you could reliably capture out of hours (basically, the ‘averages’ from the DMVs e.g. WAIT STATS), and data best captured DURING peak/problem periods (e.g. current waits and stats). The latter is easy to get independently of the app anyway using Brent’s existing scripts.

    Of course, if it doesn’t hammer the server and cause a heap of blocking, it doesn’t really matter too much.

  • Here’s what I’ve got:
    Unhandled Exception: System.Exception: This assembly has been built with SmartAssembly {8e7255a3-e84e-465e-88f6-44c3e2a09c55}, which has expired.
    at..()
    at AppName.Program.(String[] )
Any ideas on how to make it run as it should?

    • Rob – you’ve got an older version. Go ahead and download the current one. Thanks!

      • Thank you !
Is there any dedicated web page where we can report errors? I’ve got some error messages thrown on my screen… Actually, there were just two.
        At step: Info – Calling sqlData.ExecuteQuery to execute query ‘sp_BlitzCache by Spills’
        Info – Write result [0] data to worksheet ‘Plans Spills’.
        Error – Source: System.Data; Message: Cannot find table 0.;
        The same error was reported at step: Info – Calling sqlData.ExecuteQuery to execute query ‘sp_BlitzCache by Avg Spills’

        • Rob – yep, as we go live with the paid product, we’ve got a support area. For now, this is fine.

          This error means you’re running an older build of SQL Server that doesn’t support sorting by spills. Totally harmless, not a problem. May want to patch though.

ok, so I’ve got an early build. I’ll try with the new one.

  • This is one of the best SQL tools I have used that does not require a ton of explanation/configuration to use it. The output is fantastic. Saved me hours of trying to get all of this data into a spreadsheet to talk to non-technical folks. Looking forward to hearing about the cost.

I am sure you have gotten this question a few times but I will ask anyway: do you have any idea what the cost will be (ballpark) when you get out of the beta stage?

    • Joe – thanks, glad you like it! I don’t have anything to announce yet, but will announce it on the webcast and open up registrations then. Thanks!
