Azure SQL DB is Slow: Do I Need to Buy More DTUs?


You’ve got an Azure SQL DB, and your queries are going slow. You’re wondering, “Am I hitting the performance limits? Is Microsoft throttling my queries?”

There’s an easy way to check: run sp_BlitzFirst. sp_BlitzFirst is our free performance health check stored procedure that analyzes a lot of common performance issues and then gives you a prioritized list of reasons why your server might be slow right now.
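
If you’ve never used it, the simplest call takes no parameters at all – just make sure you’ve created the procedure in your user database, because Azure SQL DB won’t let you install it in master the way a regular SQL Server does:

EXEC dbo.sp_BlitzFirst;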

Here’s what it looks like when you’ve hit your DTU limits – note that there are multiple rows, one for each 15-second period where you approached or hit your DTU limits in the last 15 minutes:

[Screenshot: sp_BlitzFirst output showing an Azure SQL DB maxing out its DTU limits]

If you click on the “ClickToSeeDetails” column, you get the details: we check the last 15 minutes of the sys.dm_db_resource_stats dynamic management view to see if you hit 90% or higher on avg_cpu_percent, avg_data_io_percent, avg_log_write_percent, avg_memory_usage_percent, max_worker_percent, or max_session_percent.
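
If you’d rather eyeball the raw numbers yourself, here’s a rough sketch of that same check – the 90% threshold just mirrors what sp_BlitzFirst looks for, so adjust to taste:

SELECT  end_time,
        avg_cpu_percent,
        avg_data_io_percent,
        avg_log_write_percent,
        avg_memory_usage_percent,
        max_worker_percent,
        max_session_percent
FROM    sys.dm_db_resource_stats
WHERE   end_time > DATEADD(MINUTE, -15, GETUTCDATE()) /* this DMV reports times in UTC */
  AND ( avg_cpu_percent >= 90
        OR avg_data_io_percent >= 90
        OR avg_log_write_percent >= 90
        OR avg_memory_usage_percent >= 90
        OR max_worker_percent >= 90
        OR max_session_percent >= 90 )
ORDER BY end_time DESC;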

Uh oh – I’m maxed out. Now what?

You could tune queries, tune indexes, lower your app’s workload, or upgrade to a higher DTU limit.

To find which queries to tune, run sp_BlitzCache @SortOrder = '___', where ___ is based on which limit you’re hitting (there’s an example right after this list):

  • CPU limits – run sp_BlitzCache @SortOrder = 'cpu'
  • Data IO – run sp_BlitzCache @SortOrder = 'reads'
  • Log write – run sp_BlitzCache @SortOrder = 'writes'
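
For example, if you’re up against the CPU limit:

EXEC dbo.sp_BlitzCache @SortOrder = 'cpu';

Swap in 'reads' or 'writes' to match the limit you’re hitting – either way, the queries at the top of the output are your tuning targets.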

If you’d rather tune indexes, run sp_BlitzIndex and focus on the missing index warnings. They’re chock full of dangerous pitfalls, but they’re still one of the easiest ways to get started. We teach you how to interpret the recommendations in our Fundamentals of Index Tuning videos.
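
A bare call is enough to get started – by default, sp_BlitzIndex analyzes whichever database you run it in:

EXEC dbo.sp_BlitzIndex;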

Lowering your app’s workload is a totally valid option, too: try caching things in your application rather than hitting the database every time, or staggering your workload so that queries are run at different times.

And of course, you could throw hardware at it by raising your DTU level. If you started on a really low level, like S1/S2/S3, and you’re constantly hitting these throttling limits, then it might be time to upgrade. To keep things in perspective, an S4 with 200 DTUs is only $300/month – if you’re any lower than that, you’re probably going to hit limits sooner rather than later.
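
If you do decide to scale up, it’s just T-SQL (or a few clicks in the portal). Here’s a sketch, with YourDatabase standing in for your own database name – note that the scale operation runs asynchronously, and connections can drop briefly when it completes:

ALTER DATABASE [YourDatabase] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S4');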

All these scripts are part of our free First Responder Kit.


25 Comments

  • Thanks for this info. I’m slowly ramping up my Azure learning as we’re looking to create a data warehouse in Azure. Would sp_BlitzFirst work on Azure DWs as well?

  • I’ve been trying to run some of these Blitz utilities for a while on an Azure SQL instance, but I just cannot get them to work. This time round, I’m getting this when running sp_BlitzFirst:

    Invalid object name 'sys.master_files'.

    Am I running this in the right place? Normally I would run these things against [master], but I cannot seem to do that on an Azure SQL instance.

    • David – no, you’ll need to run it in your user database. (Also, make sure you’re on the latest version of sp_BlitzFirst.)

      • Brent, thanks, I thought I was trying the right thing.

        I took the latest version from your GitHub repo. It looks to me like the bits which compare SERVERPROPERTY('Edition') against 'SQL Azure' are being incorrectly run on my database. Perhaps this is because of some collation issue?

        When I select SERVERPROPERTY('Edition'), I do seem to get the correct 'SQL Azure' string.

        One thought I have is that my database is hosted on a UK Azure instance.

        Once I hacked the procedure to skip/include the Azure bits, it worked in the standard mode, but when I tried Expert Mode, it came up with this:

        Msg 468, Level 16, State 9, Procedure dbo.sp_BlitzFirst, Line 3808 [Batch Start Line 3965]
        Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AS" in the equal to operation.

        • OK, cool. Go ahead and follow the support instructions in the readme about where to file issues. Thanks!

        • I fixed this error by changing "IF (SERVERPROPERTY('Edition')) = 'SQL Azure')" to
          "IF (CONVERT(varchar(max),SERVERPROPERTY('Edition')) = 'SQL Azure')"

  • Before looking at the missing index warnings, I look at the Azure Portal page for the database and go to the Performance recommendations blade. It will only show the top five recommendations, but they are often smarter than the missing index recommendations because they take the entire workload into account, put the key columns in an optimal order, and are prioritized. But I do still sanity check them against the missing index warnings, and the additional information provided by sp_BlitzIndex and/or Jason Strate’s sp_IndexAnalysis (https://github.com/StrateSQL/sqlserver_indexanalysis) can be very helpful.

  • Adam Chandley
    March 9, 2019 9:19 pm

    Perhaps this is a surprise, perhaps you already knew and didn’t say anything, perhaps this was the point of the article. MS has added a new row to the various resource limits pages for single databases and elastic pools. For vCore-based SKUs, there is now a published “Log rate limits (MBps)”.

    TL;DR? Standard SKUs get up to 20 MBps. Premium SKUs get up to 48 MBps.

    WOW.

  • Hi Brent, I’m not a DBA so forgive my uneducated question, but I’m struggling to understand the avg_memory_usage_percent metric from sys.dm_db_resource_stats. It is always 99.34% to 100% on a database that I am currently working on. CPU and I/O are usually low, with some peaks when running standard ETL loads. I cannot find anything online that explains exactly whether or not I should be concerned by this metric; I’m hoping that you will ease my concerns.

    • David – hmm, lemme step back a little. Who/what told you that that was a problem you should focus on?

      • Okay, there was monitoring configured by a DBA (who has since left) which looked at avg_memory_usage_percent. Once this went over 98%, questions were asked of me to investigate, as I was carrying out some development. I confirmed with a DBA that we did not see any performance issues, so we agreed to remove the check with no full explanation for its use, and it was left for a few months. Now that I have some free time, I revisited the issue, trying to understand the reason for the check and what exactly avg_memory_usage_percent is, as it does not appear to change dynamically – it has not dropped below 99% since it first went over it. My thinking is that it is similar to tempdb growth (probably a terrible example). Keeping it simple: some query runs, tempdb allocates the required space to fulfill that query and grows; once the query is finished, the tempdb space is cleared, but the space allocated on the drive which stores the tempdb files remains the same until a user physically shrinks tempdb. So at some stage avg_memory_usage_percent peaked at 99-100% due to some queries running, and it will now never come below this figure because there is no way to refresh this value?

  • The solution is a two-part fix: 1) ditch Azure for AWS, 2) ditch SQL Server for another database. Problem solved. Azure has been nothing but painful.

  • Eddie Hernandez
    June 5, 2019 9:42 am

    Hi Brent. Thank you for the post and resources. I am having an issue. I have SSMS on my laptop pointing to my Azure SQL database. I run your sp_Blitz.sql file to create your sp_Blitz stored procedure and get the following error. Any idea on how I can get past this?

    Msg 40515, Level 15, State 1, Procedure sp_Blitz, Line 16 [Batch Start Line 3]
    Reference to database and/or server name in 'master.sys.all_objects' is not supported in this version of SQL Server.

  • Here I thought I could run a view-heavy DirectQuery PBI report with 10 DTUs – that will teach me. Thanks Brent, this is great!

