
Black Friday Sale: Free $299 for the Last Holdouts

All day long, you’ve been watching and waiting, thinking you could resist a killer deal.


Coupon LastHoldout299 gets you $299 off any of our $299 training videos, including:

Move fast – it’s only good for the first 5 buyers.

And with that, we’re done for the day! See you next year.

Black Friday Sale: 40% Off Training Videos

So you didn’t get in on the midnight doorbuster sale for 50-90% off our training videos, and you’re jealous of all the cool kids who got up early.


Coupon FunTrainingAtHome will take 40% off any of our online training videos including:

This discount is in place for the rest of the day, with no limit on buyers. Go check ‘em out!

Black Friday Sale: 30% Off In-Person Classes

So you didn’t get in on the midnight doorbuster sale for 50-90% off our in-person classes, and now you’re kicking yourself because your boss gave you permission to attend.


Coupon EasyPainRelief will take 30% off any of our 4-5 day in-person training classes. Now, instead of the first number in their price being a 3, it’s a more palatable 2. Probably even more palatable than that cranberry salad you’re warming up.

This discount is in place for the rest of the day, with no limit on buyers.

Good luck, and hope to see you in Chicago, Denver, or Portland!

Black Friday Sale: Learn to Read Execution Plans for $99

You’re comfortable writing queries, but some of them are slow – and you need more ways to tune than just adding indexes. You’ve heard that you should read execution plans, but you don’t know where to start.


In this 5+ hour video class taught by Microsoft MVP Jeremiah Peschka, you’ll:

  • Learn how to read SQL Server execution plans
  • Understand how different query syntax changes plans
  • Explore the plan cache to find problem queries
  • Learn solutions to common query problems

You even get sample code to download, plus queries to check your own system and find the worst execution plans that need help.

You can watch it for 18 months on your desktop, laptop, iPad, or iPhone.

The first five people who use coupon code LearnToRead99 get the course for just $99. Go!

(Oh, and we may have activated a few more of those Doorbuster50 codes for 50% off, too.)

Black Friday Sale: Free $29 Coupon – And a Test

SQL Server’s Maximum Degree of Parallelism (MAXDOP) controls the number of processors that are used for the execution of a query in a parallel plan.
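(If you’ve never touched the setting, it lives in sp_configure. The snippet below is just a sketch: the value 8 is an arbitrary example, 0 means “use all available processors,” and it assumes ‘show advanced options’ is already enabled on the instance.)

/* Sketch only: set instance-wide MAXDOP to 8 (0 = use all available processors) */
exec sp_configure 'max degree of parallelism', 8;
GO
RECONFIGURE
GO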

Pop quiz: what’s our favorite knowledge base article for recommendations and guidelines for MAXDOP?

That KB# is your coupon code good for $29 off our training classes or videos – even the $29 videos! Good luck – it’s only good for the first 10 buyers.

How Would You Change AlwaysOn Availability Groups?

SQL Server 2012 introduced AlwaysOn Availability Groups, a way to achieve high availability, disaster recovery, and scale-out reads. SQL 2014 brought some improvements around higher uptime and more scale-out, and all signs point to continued improvements in the next version of SQL Server, too. (I love it when Microsoft brings out features like this and continues to invest in them over time.)

A lot of the emails I get start with, “I’d like you to help me implement AlwaysOn AGs,” but it’s funny – most of the projects don’t end up actually deploying AGs. There’s a few barriers to adoption, and even when you’ve built an Availability Group, management can be a little tricky. Don’t get me wrong – I love the feature – but it comes with some surprises.

Rather than me prejudicing you, I’ll just put it out there as a question:

How would you change AlwaysOn Availability Groups?

Leave your answer in the comments. (And yes, Microsoft is watching.) Bonus points if you link to your Connect request.

SQL Server’s Cost Threshold for Parallelism

“Should a query get to use more than one CPU core?” That’s an important question for your SQL Server. If you’re not sure what parallelism is, get started by exploring the mysteries of CXPACKET with Brent. He’ll introduce you to the setting, ‘Cost Threshold for Parallelism’.

Let’s test Cost Threshold for Parallelism

I generate an estimated execution plan for the following query. I’m running against a copy of the StackOverflow database that doesn’t have many indexes.

SELECT COUNT(*)
FROM dbo.Posts
WHERE PostTypeId=2

I get a parallel plan with an estimated cost of 272.29. (I can tell it’s parallel because of the yellow circle with double arrows on three of the operators.)

execution plan parallel

I decide I’m going to test out ‘Cost Threshold for Parallelism’ and make this plan go serial. This is a server-wide configuration, but I’m on my test instance, so no worries.

exec sp_configure 'cost threshold for parallelism', 275;
GO
RECONFIGURE
GO
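
To confirm the change took effect, you can peek at sys.configurations (just a quick sanity check, nothing fancy):

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'cost threshold for parallelism';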

I run my query again and look at my actual plan this time…

Actual plan - parallelism

Hey wait, that’s still parallel! It has the same estimated cost, and that cost is below where I set the cost threshold for parallelism. This seems broken.

At this point, I might get confused and think SQL Server was reusing a cached plan. But changing configuration options like cost threshold for parallelism triggers recompiles, so that shouldn’t be it.

What ‘Cost’ is SQL Server Using?

The secret is in the serial plan. I need to look at the estimated cost for this query — it’s the same as our original, but I’ve added a MAXDOP 1 hint to force a serial plan:

SELECT COUNT(*)
FROM dbo.Posts
WHERE PostTypeId=2 OPTION (MAXDOP 1)
GO

Estimated cost - serial plan

The estimated cost for the serial version of the plan is 287.963, which is over the 275 threshold I set! The serial cost is what SQL Server compares against Cost Threshold for Parallelism when deciding whether a query gets to go parallel, and it’s above the bar I set. I can prove it by raising my cost threshold to just above this level:

exec sp_configure 'cost threshold for parallelism', 288;
GO
RECONFIGURE
GO

And now when I run my query (without a MAXDOP hint forcing it), I get a serial plan.

serial plan because of cost threshold

Cost Threshold Doesn’t Just Apply to the Plan You See

Behind the scenes, the optimizer is considering many possible plans. The Cost Threshold for Parallelism setting comes into play during that process and uses costs that you don’t see in the final plan.

This can be tricky to remember. Special thanks to Jeremiah, who’s explained this to me in the past (more than once!). This is covered in SQL Server Books Online, but oddly enough, most of us don’t seem to find it there.

When Does Index Fragmentation Matter?

A while back, we posted about how SQL Server index fragmentation doesn’t matter. That blog post generated a lot of heated discussion and has served to make people aware that fragmentation may not be the source of their woes. Sometimes, though, there are data access patterns where fragmentation really does matter.

Perpetual Fragmentation

Some data access patterns lead to tables and indexes that will always be fragmented, no matter what you do. If you have frequent data changes in the middle of a table or index, you can see heavy fragmentation.

What’s this pattern look like? Check it out!

CREATE TABLE process_list
    (
      transaction_id INT IDENTITY(1, 1)
                         PRIMARY KEY ,
      status_flag BIT ,
      last_update DATETIME2 ,
      transaction_type INT ,
      transaction_desc VARCHAR(25)
    );
GO

CREATE INDEX IX_giant_index_process_list
ON dbo.process_list (transaction_id, status_flag);
GO

TRUNCATE TABLE process_list ;
GO

/* Random string generation code taken from:

http://social.technet.microsoft.com/wiki/contents/articles/21196.t-sql-random-string.aspx

*/

/* insert another 1000 rows */
DECLARE @string_max_length TINYINT = 25;

INSERT  INTO process_list (status_flag, last_update, transaction_type, transaction_desc)
        SELECT  0 ,
                GETUTCDATE() ,
                v.number % 4 + 1 ,
                x.txt
        FROM    master.dbo.spt_values v
                JOIN ( SELECT TOP 1
                                LEFT(REPLACE(CAST (NEWID() AS NVARCHAR(MAX)),
                                             '-', ''),
                                     ABS(CHECKSUM(NEWID()))
                                     % @string_max_length + 1) AS txt
                       FROM     SYS.OBJECTS A
                                CROSS JOIN SYS.OBJECTS B
                     ) AS x ON 1 = 1
        WHERE   v.type = 'P'
                AND v.number < 1001;

/* Look at table/index size.
   The table gains 9 pages.
   The index only gains 1 page from its previous state
 */
SELECT  OBJECT_SCHEMA_NAME(o.object_id) AS schema_name,
        o.name AS table_name,
        i.name AS index_name,
        ddps.reserved_page_count AS reserved_pages
FROM    sys.objects AS o
        JOIN sys.dm_db_partition_stats AS ddps ON ddps.object_id = o.object_id
        JOIN sys.indexes AS i ON i.object_id = ddps.object_id AND i.index_id = ddps.index_id
WHERE   o.name = 'process_list' ;

-- Change the status of roughly 70% of transactions to TRUE
UPDATE process_list
SET     status_flag = 1
WHERE   transaction_id % 10 < 7 ;

/* Get rid of processed rows */
DELETE FROM process_list
WHERE status_flag = 1 ;

/* Look at table/index size.
   The reserved page counts don't shrink after the delete...
 */
SELECT  OBJECT_SCHEMA_NAME(o.object_id) AS schema_name,
        o.name AS table_name,
        i.name AS index_name,
        ddps.reserved_page_count AS reserved_pages
FROM    sys.objects AS o
        JOIN sys.dm_db_partition_stats AS ddps ON ddps.object_id = o.object_id
        JOIN sys.indexes AS i ON i.object_id = ddps.object_id AND i.index_id = ddps.index_id
WHERE   o.name = 'process_list' ;
GO 10

/* And now we rebuild the table
   Before the rebuild:
     index - 25 pages
     table - 57 pages

   If you just run ALTER TABLE process_list REBUILD:
     index - 25 pages
     table - 25 pages

   After the rebuild with the index:
     index -  7 pages
     table - 25 pages
 */
ALTER INDEX ALL ON dbo.process_list REBUILD ;

SELECT  OBJECT_SCHEMA_NAME(o.object_id) AS schema_name,
        o.name AS table_name,
        i.name AS index_name,
        ddps.reserved_page_count AS reserved_pages
FROM    sys.objects AS o
        JOIN sys.dm_db_partition_stats AS ddps ON ddps.object_id = o.object_id
        JOIN sys.indexes AS i ON i.object_id = ddps.object_id AND i.index_id = ddps.index_id
WHERE   o.name = 'process_list' ;
GO

TL;DR – The code in this example inserts a large number of rows at the end of the table. 70% of rows are marked as processed and are then deleted. This causes a lot of free space or, as you call it, fragmentation.

The only way to get rid of this fragmentation is to rebuild both the non-clustered index and the clustered index. For this example, the defragmentation reduces the size of the index by more than 3x (from 25 pages to 7 pages) and the table consumes just over 50% less space.

For workloads with a lot of random deletes (or updates that change the size of a row in a way that causes data movement), index defragmentation is necessary. Without it, these databases will continue to grow in size and exhibit the usual problems that we associate with fragmentation.

What Doesn’t Cause Perpetual Fragmentation

At this point you might be saying “Ah ha! You’ve proved that I should always defragment my indexes! Constantly! ZOMGWTFBBQ!!!11!11!shiftone!!!!!”

You’re wrong.

For some workloads, you can still avoid index fragmentation – if you’re adding data to the tail end of the table and deleting data from the beginning of the table, you won’t need to defragment your indexes. Ghost record cleanup should take care of deleted rows in this case.

For most workloads, your best bet is still to analyze SQL Server wait statistics and understand the basic problem before you start defragmenting indexes on a regular basis.
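
If you do decide fragmentation is the likely culprit, measure it before you rebuild. Here’s a minimal sketch against the demo table above using sys.dm_db_index_physical_stats (the DMV is standard; only the table name is specific to this example):

SELECT  OBJECT_NAME(ips.object_id) AS table_name,
        i.name AS index_name,
        ips.index_type_desc,
        ips.avg_fragmentation_in_percent,
        ips.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.process_list'), NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i ON i.object_id = ips.object_id
                                 AND i.index_id = ips.index_id;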

Acknowledgements

The original idea for this blog post comes from Hemant K Chitale’s Index Growing Larger Than The Table. Even though SQL Server handles this situation differently from Oracle, the underlying pattern is still interesting for database practitioners.

The random length string code was adapted from T-SQL: Random String.

Why Core-Based Licensing Matters for Performance Tuning

If you don’t license all of your SQL Server’s CPU cores, you need to pay particular attention to your server hardware.

Say you’re using an HP DL580 with four CPU sockets, each of which has a Xeon processor plugged in with 10 cores. That’s a total of 40 real cores, or 80 logical processors if you’ve enabled hyperthreading.

In SQL Server’s log at startup, if you’re using an upgraded core-based license capped out at 20 cores, you’ll see a message like this:

SQL Server detected 4 sockets with 10 cores per socket and 20 logical processors per socket, 80 total logical processors; using 40 logical processors based on SQL Server licensing. This is an informational message; no user action is required.

To understand the performance implications, it helps to look at a diagram of the server’s system board:

HP DL580 System Board

The 4-3-2-1 across the top are the four Xeon processors, each of which has 10 cores (20 with hyperthreading.)

The 8-7-6-5-4-3-2-1 across the bottom are the memory banks. Memory is not shared across all CPUs – each CPU has its own local banks of memory, a critical concept in NUMA – non-uniform memory access. To learn more about NUMA, check out my MCM video on the SQLOS.

Here’s what that concept means in practice for our server – say we’ve got 1TB of total memory:

System board color coded

Each processor gets its own local memory.

And when SQL Server says, “Hey, you’re only licensed for half of these cores, so I’m only going to turn on half of them,” it will only use the memory in half of the NUMA nodes. In this case, we’re licensed for 40 logical processors, so we have to dig further into DMVs and logs to figure out which NUMA nodes got turned on, and whether or not our memory is even available to us. See, if memory is hooked up to a NUMA node that isn’t enabled, that memory won’t be accessible to SQL Server.
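
To see what actually came online, a couple of quick DMV checks help (these are standard DMVs; the exact column list varies a little by version, so treat this as a sketch):

/* Which NUMA nodes are online, and how many schedulers does each have? */
SELECT node_id, node_state_desc, online_scheduler_count, memory_node_id
FROM sys.dm_os_nodes;

/* How much memory is committed on each memory node? */
SELECT memory_node_id, virtual_address_space_committed_kb
FROM sys.dm_os_memory_nodes;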

Similar problems exist when you run Standard Edition on a server with more than 4 CPU sockets. Sounds crazy – most of us don’t use servers that large – but this situation pops up a lot when VMs are misconfigured with 8 virtual CPUs, each of which is presented to the guest as its own CPU socket.

How to Fix It

One option is to license all of your sockets and cores. (Hey, listen, you’re the one who wanted a solution.) Don’t buy hardware with more cores than you actually need, because CPUs with more cores typically have slower clock speeds (often by a third or a half) than CPUs with fewer cores.

Another option is to use affinity masking to tell SQL Server, “These are the specific cores I want you to use,” and manually load balance the work across all NUMA nodes.
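
As a rough sketch, process affinity can be set at the server level; the node and CPU numbers below are purely illustrative and depend entirely on your hardware and licensing:

/* Example only: bind SQL Server to NUMA nodes 0 and 2 */
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY NUMANODE = 0, 2;

/* Or pin SQL Server to a specific range of CPUs */
-- ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 19;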

Using sp_BlitzCache™ Advanced Features

sp_BlitzCache™ is a fantastic tool for finding the worst queries in your SQL Server. Did you know that it has a number of advanced options available to you? These options can help you dig into your worst queries, slice and dice the data, and get back exactly what you need, without wading through hundreds of results.

Only/Ignore Query Hashes

Version 2.3 of sp_BlitzCache™ introduced two parameters that let you focus on, or ignore, individual query hashes. Reader Brian Baker had the idea of filtering out a single stored procedure by supplying all of its query hashes. There’s one problem – there’s no easy way to filter an entire procedure (well, not without making sp_BlitzCache™ really complex).

Here’s how we’d go about answering Brian’s question:

Step 1: sp_BlitzCache™ in expert mode.

EXEC dbo.sp_BlitzCache @results = 'expert' ;
Keep scrolling right.


Find the stored procedure that you want in the plan cache and then scroll all the way to the right until you get just past the Query Plan column. There you’ll find the Plan Handle, which identifies the stored procedure’s cached plan. Copy it to your clipboard and then paste it into the following script:

/* Step 1: Run sp_BlitzCache @results = 'expert' */
EXEC dbo.sp_BlitzCache @results = 'expert' ;

/* Step 2: Grab the plan_handle of the plan you want to examine. */
DECLARE @plan_handle AS varbinary(64) ;

SET @plan_handle = 0xEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE ;

WITH XMLNAMESPACES('http://schemas.microsoft.com/sqlserver/2004/07/showplan' AS p)
SELECT query_hash
FROM   ( SELECT n.value('@QueryHash', 'nvarchar(max)') AS query_hash
         FROM   sys.dm_exec_procedure_stats AS deps
                CROSS APPLY sys.dm_exec_query_plan(deps.plan_handle) AS deqp
                OUTER APPLY deqp.query_plan.nodes('//p:StmtSimple') AS q(n)
         WHERE  deps.plan_handle = @plan_handle
       ) AS x
WHERE  query_hash IS NOT NULL ;

Alright, now that we have our query hash(es), we can plug them into sp_BlitzCache™:

EXEC sp_BlitzCache @only_query_hashes = '0x6A26E5C5E3825653' ;

Or, if you feel like ignoring that particular set of statements:

EXEC sp_BlitzCache @ignore_query_hashes = '0x6A26E5C5E3825653' ;

And, there you have it – how to zoom in on a single procedure, or completely ignore it.

Only Look at Queries or Procedures

This one is much simpler: to narrow down the scope of sp_BlitzCache™, you can use the @query_filter parameter. It accepts one of three values: procedures, statements, or all. The default is all, and by default sp_BlitzCache™ will look at stored procedures, statements, and triggers. By specifying either “procedures” or “statements”, you can force sp_BlitzCache™ to narrow down its scope.

This can be particularly helpful if you have encapsulated your business logic into stored procedures and you want to know where to focus your tuning, rather than worrying about individual statements.
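
For example, to report only on stored procedures:

EXEC dbo.sp_BlitzCache @query_filter = 'procedures' ;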

Rapidly Re-Running sp_BlitzCache™

Have you ever run sp_BlitzCache™ and noticed that it takes a while on production systems? And then you realized, after you wandered off and got coffee, that you forgot to use the @export_to_excel parameter?

I’ve done that, too.

sp_BlitzCache™ now ships with a @reanalyze feature. By default, sp_BlitzCache™ will analyze your plan cache fresh every time the query is executed. By using @reanalyze = 1, you’re telling sp_BlitzCache™ to skip collecting data and immediately report on the information that it already has. This is great when you want to export to Excel, send the report to your co-workers, and then immediately get to work on the results. It also lets you re-sort the already collected data in different ways.
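
For example, after that first slow run, something like this re-reports the same data and exports it without hitting the plan cache again:

EXEC dbo.sp_BlitzCache @reanalyze = 1, @export_to_excel = 1 ;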

There You Have It!

Three fancy features in sp_BlitzCache™ that you can use to make your life easier and more fun.

What are you waiting for? Download it right now!
