Sizing A New Server? Start With Maintenance.

Clientele

Most of our customers are on slightly (ahem) older (AHEM) hardware, and they have big questions:

  • How do we know what kind of hardware we need?
  • How do we load test it?
  • How do we know what to change to make it better?

The load test question is interesting, particularly because getting a reliable user workload is so difficult and expensive.

Tara talks quite a bit about how, at her old jobs, she had entire teams dedicated to generating workload on Dev and QA boxes, along with pricey software to help them.

One thing you can do to immediately kick the tires is run maintenance tasks to see how your new hardware responds, then compare it to your current hardware.

I mean, you are taking backups, running DBCC CHECKDB, and probably rebuilding indexes like your job depends on it, right?

If you’ve decided you want X times improvement in performance, all of these things need to perform X times faster, too.

Backups

Back when we wrote white papers for Google about doing similar trending, we used scripts.

Scripts.

You know, like it’s 2005 or something.

Since then, we’ve written sp_BlitzBackups to give you all that information and more in one spot. This’ll give you database-level information about backup size, speed, duration, and frequency, as well as some warnings if you’re doing anything weird.

That’s a much better way to track the information down, because we built in processes to persist that information, too.
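
If you want to see it in action, the call is simple. A minimal sketch, assuming you’ve installed the First Responder Kit in master (the @HoursBack parameter controls how much backup history it reads):

EXEC master.dbo.sp_BlitzBackups;

-- Or look back over a full week of backup history:
EXEC master.dbo.sp_BlitzBackups @HoursBack = 168;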

If you’re taking VSS snaps, this is slightly less helpful.

I suppose it’s not terribly helpful if you don’t take backups at all, either.

So, you know, do that.

DBCC CHECKDB and Index Maintenance

Timing these is another way to judge hardware performance, though getting the timing is a little more difficult.

  • If you’re using Ola Hallengren’s scripts, that information will likely be present in the CommandLog table in the master database (there’s a sample query right after this list).
  • If you’re using MinionWare, check the MaintLog* tables in the Minion schema.
  • If you’re using maintenance plans, you’ll likely have to go spelunking into the toilet bowl hell of msdb.dbo.sysssispackages and the DTS schema.
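
For the Ola route, pulling raw timings out is a one-table query. A minimal sketch, assuming the scripts log to master.dbo.CommandLog (the default):

SELECT DatabaseName,
       CommandType,
       StartTime,
       DATEDIFF(SECOND, StartTime, EndTime) AS DurationSeconds
FROM master.dbo.CommandLog
WHERE CommandType = 'DBCC_CHECKDB'
   OR CommandType LIKE '%INDEX%'
ORDER BY StartTime DESC;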

I mean, look, there’s a reason people write their own tools.

Maintenance plans are like those old videos of monkeys in tuxedos pretending to be in fancy restaurants.

At first it’s funny because there are monkeys in people clothes, but then you realize those monkeys don’t know French and the food is fake.

What’s your deal?

Hardware factors that will affect maintenance speed:

  • Disks (c’mon SSD)
  • Path to disks if you’re on a SAN (1 Gb iSCSI is not your friend)
  • CPU speeds (can’t hurt the hertz)
  • Memory (ain’t it always?)

You have a treasure trove of data on your current server to give you an idea of how things are running today.
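
For example, sys.dm_io_virtual_file_stats tracks cumulative read and write stalls per database file since the last restart, which makes a handy baseline to hold the new hardware against. A quick sketch:

SELECT DB_NAME(vfs.database_id) AS DatabaseName,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY avg_read_ms DESC;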

Now all you have to do is restore a production database or two over on your new server, and see how things run there.

How you choose to do that is up to you, but I’d love it if you used sp_DatabaseRestore. It’s free, and a lot of people have put a lot of work into it.
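
A minimal sketch of what that looks like, assuming your backups come from Ola’s scripts in the usual per-database FULL and LOG folder layout; the database name and paths here are made up, so swap in your own:

EXEC dbo.sp_DatabaseRestore
    @Database = 'YourDatabase',                                           -- hypothetical name
    @BackupPathFull = '\\FileShare\Backups\OldServer\YourDatabase\FULL\', -- hypothetical path
    @BackupPathLog = '\\FileShare\Backups\OldServer\YourDatabase\LOG\',   -- hypothetical path
    @ContinueLogs = 0,
    @RunRecovery = 1;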

If the speed of your maintenance tasks on the new server isn’t blowing you away, chances are your queries won’t be much better off from a hardware perspective.

Remember that queries operate under many of the same hardware limitations as maintenance tasks.

If you strip out optimizer choices and just run SELECT * or SELECT COUNT(*) against your tables without a WHERE clause (or if your plans have large scans in them anyway), those sequential reads from disk into memory, or straight from memory, are on par with what backups, CHECKDB, and index rebuilds do.
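
If you want to eyeball that yourself, time a full scan on both servers. A rough sketch, with a made-up table name:

SET STATISTICS TIME, IO ON;

-- Test servers only: flush the buffer pool so reads come from disk
DBCC DROPCLEANBUFFERS;

-- No WHERE clause, so this reads the whole table
SELECT COUNT(*)
FROM dbo.YourBigTable; -- hypothetical table

SET STATISTICS TIME, IO OFF;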

If you need some starter queries for Ola’s table, these work pretty well:
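
The first one pulls index maintenance durations out of CommandLog and pairs them with index sizes; run it from inside the database you care about, since it reads that database’s sys.indexes. The second estimates DBCC CHECKDB throughput by matching CommandLog durations against backup file sizes in msdb. (If your servers’ collations don’t match, see Henrik’s collate tweak in the comments.)

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
DECLARE @startDate DATETIME;
SET @startDate = GETDATE();

WITH im AS
( SELECT DatabaseName,
         Command,
         MAX(DATEDIFF(SECOND, StartTime, EndTime)) AS Index_Seconds,
         IndexName
  FROM master.dbo.CommandLog
  WHERE CommandType LIKE '%INDEX%'
        AND DatabaseName NOT IN ('master', 'msdb', 'model', 'tempdb')
        AND StartTime BETWEEN DATEADD(YEAR, -1, @startDate) AND @startDate
  GROUP BY DatabaseName, IndexName, Command )
SELECT t.name,
       i.name AS IndexName,
       (SUM(a.used_pages) * 8) / 1024. AS [Index MB],
       im.Index_Seconds AS WorldRecord,
       im.Command
FROM sys.indexes AS i
JOIN sys.partitions AS p ON p.object_id = i.object_id AND p.index_id = i.index_id
JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
JOIN sys.tables AS t ON t.object_id = i.object_id
JOIN im ON im.IndexName = i.name
WHERE t.is_ms_shipped = 0
GROUP BY t.name, i.name, im.Command, im.Index_Seconds
ORDER BY t.name, i.name;
GO

-- DBCC CHECKDB
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
DECLARE @startDate DATETIME;
SET @startDate = GETDATE();

WITH cl AS
( SELECT DatabaseName,
         CommandType,
         StartTime,
         EndTime,
         DATEDIFF(SECOND, StartTime, EndTime) AS DBCC_Seconds
  FROM master.dbo.CommandLog
  WHERE CommandType = 'DBCC_CHECKDB'
        AND DatabaseName NOT IN ('master', 'msdb', 'model', 'tempdb') )
SELECT DISTINCT BS.database_name AS DatabaseName,
       CONVERT(NUMERIC(10, 1), BF.file_size / 1048576.0) AS SizeMB,
       cl.DBCC_Seconds,
       CAST(AVG((BF.file_size / NULLIF(cl.DBCC_Seconds, 0)) / 1048576.0) AS INT) AS [Avg MB/Sec]
FROM msdb.dbo.backupset AS BS
INNER JOIN msdb.dbo.backupfile AS BF ON BS.backup_set_id = BF.backup_set_id
INNER JOIN cl ON cl.DatabaseName = BS.database_name
WHERE BF.file_type = 'D'
      AND BS.type = 'D'
      AND BS.backup_start_date BETWEEN DATEADD(YEAR, -1, @startDate) AND @startDate
GROUP BY BS.database_name,
         CONVERT(NUMERIC(10, 1), BF.file_size / 1048576.0),
         cl.DBCC_Seconds;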

Thanks for reading!

 


7 Comments.

  • I am getting a weird result when running the last query

    Reply
  • Thanks Erik, this is really helpful and insightful, especially since we are planning a move to GCP from our own servers in a colo. But, assuming costs are acceptable, with GCP we can just add resources to get acceptable response times in the new implementation.

    Reply
  • Henrik Staun Poulsen
    January 2, 2018 8:47 am

    May I recommend a “collate database_default” clause on the DatabaseName joins?
    Just for those of us who cannot fix the old server collation?

    Reply
    • Henrik — Sure, can you shoot me a gist or pastebin with the change you made? Just don’t wanna go back and forth in the comments.

      Thanks!

      Reply
      • Henrik Staun Poulsen
        January 3, 2018 1:40 am

        hi Erik,

        Well, I could send you an email, if I knew your email address. But it is just the two lines that join to the CTEs:
        JOIN im ON im.IndexName = i.name collate database_default
        and
        INNER JOIN cl ON cl.DatabaseName = BS.database_name collate database_default

        so that the code becomes

        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        DECLARE @startDate DATETIME;
        SET @startDate = GETDATE();

        WITH im AS
        ( SELECT DatabaseName,
                 Command,
                 MAX(DATEDIFF(SECOND, StartTime, EndTime)) AS Index_Seconds,
                 IndexName
          FROM master.dbo.CommandLog
          WHERE CommandType LIKE '%INDEX%'
                AND DatabaseName NOT IN ('master', 'msdb', 'model', 'tempdb')
                AND StartTime BETWEEN DATEADD(YEAR, -1, @startDate) AND @startDate
          GROUP BY DatabaseName, IndexName, Command )
        SELECT t.name,
               i.name AS IndexName,
               (SUM(a.used_pages) * 8) / 1024. AS [Index MB],
               im.Index_Seconds AS WorldRecord,
               im.Command
        FROM sys.indexes AS i
        JOIN sys.partitions AS p ON p.object_id = i.object_id AND p.index_id = i.index_id
        JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
        JOIN sys.tables AS t ON t.object_id = i.object_id
        JOIN im ON im.IndexName = i.name collate database_default
        WHERE t.is_ms_shipped = 0
        GROUP BY t.name, i.name, im.Command, im.Index_Seconds
        ORDER BY t.name, i.name;
        GO

        -- DBCC CHECKDB
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        DECLARE @startDate DATETIME;
        SET @startDate = GETDATE();

        WITH cl AS
        ( SELECT DatabaseName,
                 CommandType,
                 StartTime,
                 EndTime,
                 DATEDIFF(SECOND, StartTime, EndTime) AS DBCC_Seconds
          FROM master.dbo.CommandLog
          WHERE CommandType = 'DBCC_CHECKDB'
                AND DatabaseName NOT IN ('master', 'msdb', 'model', 'tempdb') )
        SELECT DISTINCT BS.database_name AS DatabaseName,
               CONVERT(NUMERIC(10, 1), BF.file_size / 1048576.0) AS SizeMB,
               cl.DBCC_Seconds,
               CAST(AVG((BF.file_size / NULLIF(cl.DBCC_Seconds, 0)) / 1048576.0) AS INT) AS [Avg MB/Sec]
        FROM msdb.dbo.backupset AS BS
        INNER JOIN msdb.dbo.backupfile AS BF ON BS.backup_set_id = BF.backup_set_id
        INNER JOIN cl ON cl.DatabaseName = BS.database_name collate database_default
        WHERE BF.file_type = 'D'
              AND BS.type = 'D'
              AND BS.backup_start_date BETWEEN DATEADD(YEAR, -1, @startDate) AND @startDate
        GROUP BY BS.database_name,
                 CONVERT(NUMERIC(10, 1), BF.file_size / 1048576.0),
                 cl.DBCC_Seconds;

        Reply
