Let’s say you’ve got a query, and the point of that query is to take your largest customer/user/whatever and compare their activity to smaller whatevers.
You may also stumble into this issue if your query happens to capture populations like that by accident.
For our purposes, let’s look at a query that compares Jon Skeet to the 1000 users with the lowest activity. To make this easier, I’m not going to go through with the comparison; I’m just going to set up the aggregations we’d need to do that.
I’m going to do this on the smaller 10GB version of the Stack Overflow database, running the query on my beefy desktop, in case you’re wondering. I need a few indexes first:
CREATE INDEX ix_users ON dbo.Users(Id, Reputation, CreationDate DESC) INCLUDE (DisplayName);
CREATE INDEX ix_comments ON dbo.Comments(UserId) INCLUDE (Score);
CREATE INDEX ix_posts ON dbo.Posts(OwnerUserId) INCLUDE (Score);
CREATE INDEX ix_badges ON dbo.Badges (UserId);
MAXDOP is set to four, and CTFP is set to 50. Now for the query:
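If you want to match that setup, the server-level settings look something like this (a sketch; these are server-wide settings, so adjust for your own hardware rather than copying them blindly):

```sql
/* Match the test rig: MAXDOP 4, cost threshold for parallelism 50. */
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max degree of parallelism', 4;
EXEC sys.sp_configure N'cost threshold for parallelism', 50;
RECONFIGURE;
```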
WITH hi_lo AS
(
    SELECT x.Id, x.DisplayName, x.sort_order
    FROM
    (
        SELECT TOP 1 u.Id, u.DisplayName, 1 AS sort_order
        FROM dbo.Users AS u
        ORDER BY u.Reputation DESC
    ) AS x

    UNION ALL

    SELECT y.Id, y.DisplayName, y.sort_order
    FROM
    (
        SELECT TOP 1000 u.Id, u.DisplayName, 2 AS sort_order
        FROM dbo.Users AS u
        WHERE EXISTS (SELECT 1/0 FROM dbo.Posts AS p WHERE u.Id = p.OwnerUserId AND p.Score > 0)
        AND u.Reputation > 1
        ORDER BY u.Reputation ASC, u.CreationDate DESC
    ) AS y
)
SELECT u.DisplayName,
       SUM(CONVERT(BIGINT, p.Score)) AS PostScore,
       SUM(CONVERT(BIGINT, c.Score)) AS CommentScore,
       COUNT_BIG(b.Id) AS BadgeCount
FROM hi_lo AS u
JOIN dbo.Posts AS p
    ON p.OwnerUserId = u.Id
JOIN dbo.Comments AS c
    ON c.UserId = u.Id
JOIN dbo.Badges AS b
    ON b.UserId = u.Id
GROUP BY u.DisplayName, u.sort_order
ORDER BY u.sort_order;
This query hits a dead end really quickly. If I sample wait stats with sp_BlitzFirst, they tell an interesting story:
EXEC dbo.sp_BlitzFirst @Seconds = 30, @ExpertMode = 1;
At the query level (yes, that’s 26 minutes in), we’ve been waiting on CXCONSUMER nearly the whole time.
The wait stats generated for a 30-second window are no more appealing. Nine total waits, each lasting 136 SECONDS on average.
In this sample there are absolutely no waits whatsoever on CXPACKET.
They are nonexistent.
If you were hoping to find out that they were way, crazy, out of control (call a priest, ditch your bar tab, we’re driving on the train tracks), you’ll be awfully disappointed.
There just aren’t any.
There’s only one core in use for nearly the entire duration, aside from some blips.
Here’s the plan for it, so far. This is only an estimated plan.
Adding a MAXDOP 1 hint to that query reduces query time to about 55 seconds.
It’s also possible to get faster queries by supplying join hints, though not faster than limiting DOP to 1.
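For reference, those hints tack onto the end of the statement. Something like this (hypothetical variations on the query above, not tuned recommendations):

```sql
/* Appended to the query above, this forces a serial plan: */
ORDER BY u.sort_order
OPTION (MAXDOP 1);

/* Join hints are another lever, though they didn't beat MAXDOP 1 here: */
ORDER BY u.sort_order
OPTION (HASH JOIN);
```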
The wait stats for it are pretty boring. Some SOS_SCHEDULER_YIELD, some MEMORY_ALLOCATION_EXT.
Stuff you’d expect, for amounts of time you’d expect (lots of very short waits).
This isn’t a call to set MAXDOP to 1, or tell you that parallelism is bad.
Most of the time, I feel the opposite way. I think it’s a wonderful thing.
However, not every plan benefits from parallelism. Parallel plans can suffer from skewed row distribution, exchange spills, and certain spooling operators.
Today, it’s hard to track stuff like this down without capturing the actual plan or specifically monitoring for it. This information isn’t available in cached plans.
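One way to catch skew while a query is still running is sys.dm_exec_query_profiles, which breaks row counts down per operator and per thread (the session being watched needs profiling enabled, e.g. via SET STATISTICS XML ON, or lightweight profiling on newer builds). A sketch, with a made-up session id:

```sql
SELECT qp.session_id,
       qp.node_id,
       qp.physical_operator_name,
       qp.thread_id,
       qp.row_count
FROM sys.dm_exec_query_profiles AS qp
WHERE qp.session_id = 55 /* the spid you're watching; yours will differ */
ORDER BY qp.node_id, qp.row_count DESC;
```

If one thread’s row_count dwarfs its siblings under the same node, that’s your skew.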
This also isn’t something that’s getting corrected automatically by any of SQL Server’s cool new robots. This requires a person to do all the legwork.
One other way is to use sp_BlitzFirst/sp_BlitzWho to look at wait stats. If you see queries running that are spending long periods of time waiting on CXCONSUMER, you just might have a thread skew issue.
If you blindly follow random internet advice to ignore this wait, you might be missing a pretty big performance problem.
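If you’d rather query the DMVs directly, sys.dm_exec_session_wait_stats (SQL Server 2016 and up) shows cumulative waits per session while queries are running; something like:

```sql
SELECT ws.session_id,
       ws.wait_type,
       ws.waiting_tasks_count,
       ws.wait_time_ms
FROM sys.dm_exec_session_wait_stats AS ws
WHERE ws.wait_type IN (N'CXCONSUMER', N'CXPACKET')
ORDER BY ws.wait_time_ms DESC;
```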
This query is now about 17 hours into running. Through the magic of live query plans, I can see that it’s stuck in one particular rut:
I got paranoid about missing cumulative wait stats, so I started logging sp_WhoIsActive to a table.
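If you want to set up that kind of logging yourself, sp_WhoIsActive can generate its own destination table. A sketch, with dbo.WhoIsActiveLog as a made-up table name:

```sql
/* Have sp_WhoIsActive hand back the CREATE TABLE it wants, then run it: */
DECLARE @schema NVARCHAR(MAX);
EXEC dbo.sp_WhoIsActive @get_plans = 1, @return_schema = 1, @schema = @schema OUTPUT;
SET @schema = REPLACE(@schema, '<table_name>', 'dbo.WhoIsActiveLog');
EXEC (@schema);

/* Then call this on a schedule (Agent job, loop, whatever): */
EXEC dbo.sp_WhoIsActive @get_plans = 1, @destination_table = 'dbo.WhoIsActiveLog';
```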
Here’s what that looks like, for the most recent rows.
Right now it’s July 20th. This post is scheduled to go live on the 24th.
Will it finish before then?!
Updating More
This query ran for over 45 hours.
Somewhat laughably, the wait stats for this query show up like this:
<Wait WaitType="PAGEIOLATCH_SH" WaitTimeMs="1" WaitCount="15" />
<Wait WaitType="RESERVED_MEMORY_ALLOCATION_EXT" WaitTimeMs="1" WaitCount="2919" />
<Wait WaitType="CXPACKET" WaitTimeMs="304" WaitCount="44" />
<Wait WaitType="SOS_SCHEDULER_YIELD" WaitTimeMs="33575" WaitCount="41044718" />
The CPU time for it looks like this:
<QueryTimeStats CpuTime="164336421" ElapsedTime="164337994" />
And the last few minutes of CXCONSUMER waits look like this:
I missed the last 30 seconds or so of the query running, which is why the CXCONSUMER waits here don’t quite line up with the total query CPU time, but they’re very close. Why doesn’t that wait show up in the query plan? I have no idea.
What really gummed things up was the final Nested Loops Join to Posts.
That’s a 13-digit number of rows for a database that doesn’t even have 50 million total rows in it.
Bottom Line: Don’t go ignoring those CXCONSUMER waits just yet.
Thanks for reading!
Brent says: we were writing new parallelism demos for our PASS Summit pre-con and our Mastering Server Tuning class to show EXECSYNC, and we kept coming across wait stats results that just didn’t line up with what Microsoft has reported about “harmless” waits. This is going to be a really fun set of demos to share in class.