
Bad Idea Jeans Week: Dynamically Generating 999 Indexes on a Table

Let’s say, just theoretically, you wanted to demo a query that takes SQL Server a long time to compile.

And in order to make SQL Server’s job tougher, you wanted to create the maximum number of indexes possible on a table.

You might do something like this:

SELECT TOP 999 'CREATE INDEX [IX_' + c1.COLUMN_NAME + '_' + c2.COLUMN_NAME + '_' + c3.COLUMN_NAME + ']'
    + ' ON [' + c1.TABLE_SCHEMA + '].[' + c1.TABLE_NAME + '] ' 
    + '([' + c1.COLUMN_NAME + '], [' + c2.COLUMN_NAME + '], [' + c3.COLUMN_NAME + ']);' 
FROM INFORMATION_SCHEMA.COLUMNS c1
INNER JOIN INFORMATION_SCHEMA.COLUMNS c2 ON c1.TABLE_SCHEMA = c2.TABLE_SCHEMA AND c1.TABLE_NAME = c2.TABLE_NAME 
    AND c1.COLUMN_NAME <> c2.COLUMN_NAME 
    AND (c2.CHARACTER_MAXIMUM_LENGTH <> -1 OR c2.CHARACTER_MAXIMUM_LENGTH IS NULL)
INNER JOIN INFORMATION_SCHEMA.COLUMNS c3 ON c1.TABLE_SCHEMA = c3.TABLE_SCHEMA AND c1.TABLE_NAME = c3.TABLE_NAME 
    AND c3.COLUMN_NAME NOT IN (c1.COLUMN_NAME, c2.COLUMN_NAME) 
    AND (c3.CHARACTER_MAXIMUM_LENGTH <> -1 OR c3.CHARACTER_MAXIMUM_LENGTH IS NULL)
WHERE c1.TABLE_SCHEMA = 'dbo' AND c1.TABLE_NAME = 'Users'
AND (c1.CHARACTER_MAXIMUM_LENGTH <> -1 OR c1.CHARACTER_MAXIMUM_LENGTH IS NULL)
ORDER BY c3.COLUMN_NAME, c2.COLUMN_NAME

Which would generate output like this:

[Screenshot: Look, you’re the one who comes here to “learn stuff”]

The CHARACTER_MAXIMUM_LENGTH <> -1 OR IS NULL checks are there because I don’t want to try to index NVARCHAR(MAX) fields (they report a CHARACTER_MAXIMUM_LENGTH of -1), but I do want to index NVARCHAR(50) fields – and the IS NULL part keeps non-character fields, which report NULL, in the mix.

The ORDER BY is because I wanted to get a wide variety of leading fields. If I just ordered by c1.COLUMN_NAME, I wouldn’t get all of the first fields covered in the first 999 indexes. (And I’m not even using a large table.)

I could have dumped all these into strings and executed them, but depending on the speed of the system, the demos I’m trying to do, the fields in the indexes, etc., I may not want them all created right away. Sometimes I experiment with just 25-50 indexes before moving on to, uh, 999.
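If you do want to create them all in one shot, here’s a sketch of that dump-into-a-string approach – the same query as above, concatenated into one batch. (The SELECT @sql = @sql + … trick doesn’t guarantee concatenation order, but for a demo, who cares?)

DECLARE @sql NVARCHAR(MAX) = N'';

SELECT TOP 999 @sql = @sql
    + 'CREATE INDEX [IX_' + c1.COLUMN_NAME + '_' + c2.COLUMN_NAME + '_' + c3.COLUMN_NAME + ']'
    + ' ON [' + c1.TABLE_SCHEMA + '].[' + c1.TABLE_NAME + '] '
    + '([' + c1.COLUMN_NAME + '], [' + c2.COLUMN_NAME + '], [' + c3.COLUMN_NAME + ']);' + NCHAR(10)
FROM INFORMATION_SCHEMA.COLUMNS c1
INNER JOIN INFORMATION_SCHEMA.COLUMNS c2 ON c1.TABLE_SCHEMA = c2.TABLE_SCHEMA AND c1.TABLE_NAME = c2.TABLE_NAME
    AND c1.COLUMN_NAME <> c2.COLUMN_NAME
    AND (c2.CHARACTER_MAXIMUM_LENGTH <> -1 OR c2.CHARACTER_MAXIMUM_LENGTH IS NULL)
INNER JOIN INFORMATION_SCHEMA.COLUMNS c3 ON c1.TABLE_SCHEMA = c3.TABLE_SCHEMA AND c1.TABLE_NAME = c3.TABLE_NAME
    AND c3.COLUMN_NAME NOT IN (c1.COLUMN_NAME, c2.COLUMN_NAME)
    AND (c3.CHARACTER_MAXIMUM_LENGTH <> -1 OR c3.CHARACTER_MAXIMUM_LENGTH IS NULL)
WHERE c1.TABLE_SCHEMA = 'dbo' AND c1.TABLE_NAME = 'Users'
AND (c1.CHARACTER_MAXIMUM_LENGTH <> -1 OR c1.CHARACTER_MAXIMUM_LENGTH IS NULL)
ORDER BY c3.COLUMN_NAME, c2.COLUMN_NAME;

EXEC sys.sp_executesql @sql;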

Bad Idea Jeans Week: Dynamically Generating Long Queries

As part of an experiment, I needed to build a really long query. (Don’t ask.)

From another recent experiment, I know that SQL Server won’t let a query return more than 65,535 columns. I set about writing a one-line query that would return exactly 65,535 columns. I’m a big fan of writing the simplest reproduction scripts possible – I don’t want them to rely on tables if they don’t have to – so we’ll start with a CTE like this:

[Screenshot: My favorite Thanksgiving side dish]
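The screenshot is gone, but the CTE looked something like this sketch – the literal value is my stand-in:

WITH Stuffing AS (SELECT 'Stuff' AS Stuff)
SELECT Stuff
FROM Stuffing;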

That’s a simple CTE which gives me a fake table called Stuffing, with one field named Stuff.

I want to select that Stuffing field 65,535 times, but ideally, I’d like to return that field with a different name each time. (I might use this to artificially insert 65,535 columns later.) I’ll need a trusty numbers table (like Method #7 from this StackOverflow answer), which will give me 65,535 rows that I can use to build dynamic SQL:

[Screenshot: Including 8, 6, 7, 5, 3, 0, and 9]
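Sketching from memory rather than the screenshot: any numbers-table build works here, and this one (not necessarily the exact Method #7 script) drops 65,535 rows into dbo.NumbersTest:

SELECT TOP 65535 ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Number
INTO dbo.NumbersTest
FROM sys.all_objects o1
CROSS JOIN sys.all_objects o2;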

Now we’ll use those to build dynamic SQL:

[Screenshot: Using SQL to Build SQL]
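Reconstructed as a sketch, the dynamic SQL was along these lines – one select-list element per number, each with a unique alias (you’ll trim the final comma after pasting):

SELECT 'Stuffing.Stuff AS Stuff' + CAST(Number AS VARCHAR(10)) + ','
FROM dbo.NumbersTest
ORDER BY Number;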

And I can just copy/paste that stuff into my SELECT statement and run it:

[Screenshot: Hey, Mister DJ, I thought you said we had a deal]

Alright, so we’ve learned that I can’t return more than 65,535 columns, AND I can only use 4,096 elements in my SELECT list. I can think of several workarounds there – 65,535 / 4,096 is about 16, which means I could create a few tables or CTEs and do SELECT *’s from them, thereby returning all 65,535 columns with fewer than 4,096 things written in any one SELECT list.
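Here’s a tiny, untested sketch of that workaround idea, scaled down to two CTEs and four columns (whether the expanded * still trips the 4,096-element limit is exactly the kind of thing this week exists to find out):

WITH Stuffing1 AS (SELECT 'Stuff' AS Stuff1, 'Stuff' AS Stuff2),
     Stuffing2 AS (SELECT 'Stuff' AS Stuff3, 'Stuff' AS Stuff4)
SELECT *
FROM Stuffing1
CROSS JOIN Stuffing2;  -- four columns returned, but only one element in the written SELECT list

But for now, we’ll just start with 4,096 things in my SELECT: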

[Screenshot: Lots of Stuffing]

Presto! We’ve got stuffing. As long as we’re here, it’s interesting to see how SQL Server clips off queries in execution plans. If we hover the mouse over the query in a plan, we can see the query – or at least, the beginning of it:

[Screenshot: Hovercraft]

Right-click on the plan and click Show Execution Plan XML, and you can see exactly where SSMS clips it:

[Screenshot: Cuts out at n156 (of 4096)]

This is why I say you should never comment your code. You want to make sure you can see your entire query in its glorious glory. If you absolutely have to comment your code, avoid vowels at all costs, and type like a 14-year-old texter.

Bad Idea Jeans Week: Building a Fork Bomb in SQL Server

Somewhat different than a sex bomb, a fork bomb is a denial-of-service attack that just starts a process that replicates itself, thereby starting more and more processes until the service goes down. Wikipedia’s fork bomb page lists examples on most operating systems (including Windows).

I’ve always found fork bombs funny because of their elegant simplicity, so I figured, why not build one in SQL Server?

In order to do it, I needed a way to spawn a self-replicating asynchronous process, so I built:

  1. A stored procedure
  2. That creates an Agent job
  3. That runs the stored procedure

So it just infinitely runs itself, spawning more and more Agent jobs as it goes. It’s just seven lines:

CREATE PROC ##ForkBomb AS
BEGIN
DECLARE @GUID UNIQUEIDENTIFIER = NEWID();
EXEC msdb.dbo.sp_add_job @job_name = @GUID;
EXEC msdb.dbo.sp_add_jobstep @job_name = @GUID, @step_id = 1, @step_name = 'Uno', @command = 'WHILE 1 = 1 EXEC ##ForkBomb;', @database_name = 'msdb';
EXEC msdb.dbo.sp_add_jobserver @job_name = @GUID;
EXEC msdb.dbo.sp_start_job @job_name = @GUID;
END

Run that stored proc just once – WHICH YOU SHOULD NEVER DO, but I know how you people roll and you’re going to go try this in a VM, just like I did – and the result is rather spectacular: within seconds, SQL Server creates thousands of Agent jobs, and they’re fighting for CPU resources:

[Screenshot: Beggin for threads]

Seriously, you shouldn’t try this on a VM you ever want to use again, but if you want to try this trick repeatedly, this Stack answer on deleting Agent jobs will come in handy. To recover, stop both SQL Server and Agent, then start SQL Server without starting Agent.
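Here’s a hypothetical cleanup sketch – not the linked Stack answer, just the same idea: with Agent stopped, delete every job whose name parses as a GUID. (TRY_CAST needs SQL Server 2012 or newer.)

DECLARE @job_id UNIQUEIDENTIFIER;
WHILE 1 = 1
BEGIN
    -- the fork bomb names its jobs with NEWID() values, so GUID-named jobs are its spawn
    SELECT TOP 1 @job_id = job_id
    FROM msdb.dbo.sysjobs
    WHERE TRY_CAST(name AS UNIQUEIDENTIFIER) IS NOT NULL;
    IF @@ROWCOUNT = 0 BREAK;
    EXEC msdb.dbo.sp_delete_job @job_id = @job_id;
END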

I’m using a global temporary stored procedure here to limit the damage – it disappears when SQL Server restarts – but if you really wanted to see some explosions, you could:

  • Create the stored proc in a user database, or as a permanent object in TempDB
  • Add a line to call msdb.dbo.sp_add_schedule to run this job on startup
  • Add additional lines in here to run more queries, such as this little gem that randomly creates GUID-named tables in each user database and inflates them
-- sp_MSforeachdb swaps ? for each database name; three cross joins of sys.messages make each new table enormous
DECLARE @StringToExec NVARCHAR(4000);
SET @StringToExec = 'USE [?]; SELECT m1.text, m2.text AS text2, m3.text AS text3 INTO dbo.[' + CAST(@GUID AS VARCHAR(50)) + '] FROM sys.messages m1 CROSS JOIN sys.messages m2 CROSS JOIN sys.messages m3;'
EXEC sp_MSforeachdb @StringToExec;

If you ever needed a reason as to why you shouldn’t allow untrusted people to be sysadmin or manage SQL Agent jobs, this is probably a pretty good post to bookmark. You can do a hell of a lot of damage with less than ten lines of T-SQL.

Wanna watch me run this live, plus a few other stunts? Check out the recording of the Watch SQL Server Break & Explode webcast from Dell DBA Days.

Bad Idea Jeans Week: Prevent the Use of the Database Engine Tuning Advisor

Every now and then, we put on our bad idea jeans in the company chat room and come up with something really ill-advised. We should probably bury these in the back yard, but…what fun would that be? Bad ideas are the most fun when they’re shared.

One day, after seeing enough horrific but well-meaning indexes created in production by the Database Engine Tuning Advisor, we got to brainstorming. Could we set something up that would stop the DETA on a production box, but let it keep running in a development environment? Meaning, not break the contents of a database, but somehow disable DETA at the server level?

Sí, se puede – and the first step is to understand that the DETA creates these tables in msdb when it first fires up:

[Screenshot: I see what you did there]

So all we have to do is set up a database trigger inside msdb on the production server. Trigger adapted from an idea from Richie:

USE msdb;
GO

CREATE TRIGGER NO_DTA_BAD_DTA   
ON DATABASE   
FOR CREATE_TABLE   
AS   

   -- For CREATE_TABLE events, ObjectName is the new table's name
   DECLARE @tablename NVARCHAR(4000);
   SELECT @tablename = EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(4000)');

   IF (CHARINDEX('DTA_', @tablename) > 0)
   BEGIN
     PRINT 'DTA tables are not permitted to exist.';
     ROLLBACK;
   END

GO

This database trigger in msdb prevents the creation of any tables with DTA_ in the name. Now, whenever you try to point the DETA at the production server:

[Screenshot: Gosh, I can’t imagine what the problem might be]

In the immortal words of a timeless philosopher, no no, no no no.

(People who liked this post also liked how I proposed to stop changes to tables.)

Query Tuning Week: How to Run sp_BlitzCache on a Single Query

I’m teaching our SQL Server Performance Tuning class out in Portland this week, so we’re running all performance posts this week. You can learn more about my upcoming classes here.

The most popular way of using sp_BlitzCache® is to just run it – by default, it shows you the top 10 most CPU-intensive queries that have run on your server recently. Plus, it shows you warnings about each of the queries – if they’re missing indexes, experiencing parameter sniffing issues, running long, running frequently, doing implicit conversion, you name it.

But sometimes you just want to see the warnings for a query you’re working on – and the query isn’t in your top 10 most resource-intensive queries.

First, get the query hash. Run the query with the actual execution plan turned on, or just get the estimated plan in SSMS by hitting control-L. Right-click on any of the SELECT statements in the plan (or the one you’re most interested in) and click Properties.

Check out the QueryHash in the Properties pane:

[Screenshot: Right-click on a select, click Properties, and check out the QueryHash]

Now you can run:

sp_BlitzCache @only_query_hashes = '0xC842136488344341';

And you get:

[Screenshot: sp_BlitzCache filtered by query hashes]

You get all of the executions of this query – which might be in different connection contexts, too, indicating you might be catching some parameter sniffing issues.

Want to see all of the other queries in that same stored procedure?

sp_BlitzCache @only_query_hashes = '0xC842136488344341', @results = 'expert';

Scroll across to the SQL Handle column, copy that data out, and paste it into sp_BlitzCache:

sp_BlitzCache @sql_handle = '0x03000600F4320D6D78EC70000BA6000001000000000000000000000000000000000000000000000000000000';

And you get a detailed analysis of that stored procedure and its queries:

[Screenshot: sp_BlitzCache filtered for one SQL handle]

Go download it, put it to work finding your ugliest queries, and discover what sneaky anti-patterns hide in the code. Yours, I mean, not ours. Ours has plenty too.

Query Tuning Week: What’s the Difference Between Locking and Blocking and Deadlocking?

I’m teaching our SQL Server Performance Tuning class out in Portland this week, so we’re running all performance posts this week. You can learn more about my upcoming classes here.

If Erik takes out a lock on a row,
and no one else is around to hear it,
that’s just locking.

There’s nothing wrong with locking by itself. It’s perfectly fine. If you’re the only person running a query, and you have a lot of work to do, you probably want to take out the largest locks possible (the entire table) in order to get your work done quickly.

You don’t want to monitor for locking – it just happens, and that’s okay. Until…


[Screenshot: Solves all your blocking problems]

If Erik takes out a lock on a row,
and shortly thereafter Tara wants that same row,
that’s blocking.

Blocking is bad, but note that you won’t see blocking until you see concurrency. This is one of the many reasons why that application runs so well in development, but then falls over in production – there’s not enough concurrency to cause blocking.

Blocking will solve itself whenever Erik’s query releases his locks. Tara’s query will wait forever for Erik’s query to finish. SQL Server won’t kill or time out Tara’s query automatically because there’s no reason to. SQL Server, sweet naive fella that it is, believes Erik’s query will eventually finish.
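Want to see it for yourself? Here’s a minimal two-session sketch – I’m assuming the Stack Overflow Users table from the earlier demos, but any table will do:

-- Session 1 (Erik): take a row lock and sit on it
BEGIN TRAN;
UPDATE dbo.Users SET Reputation = Reputation + 1 WHERE Id = 1;
-- no COMMIT yet

-- Session 2 (Tara): blocked until session 1 commits or rolls back
UPDATE dbo.Users SET Reputation = Reputation + 1 WHERE Id = 1;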

(Note that I’m not specifying the difference between readers and writers here – by default in the boxed product of SQL Server, readers can block writers, and writers can block readers.)

Users hate blocking because it shows up as very slow query performance.


If Erik takes out a lock on a row,
and Tara takes out a lock on a different row,
and then Erik wants to add a lock on the row Tara already locked,
and then Tara wants to add a lock on Erik’s locked row,
that’s a deadlock.

Both queries want something that the other person has, and neither query can finish on its own. No query will automatically relinquish its lock – both of them are going to wait forever for the other query to finish because neither one is really aware of why it can’t get the lock it wants.

(I’m keeping this one simple by referring to rows of the same table, but this can happen with lots of different objects scattered through different databases.)
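Here’s that same sketch extended into a deadlock – again, the table and Ids are stand-ins, and you run each session in its own window, alternating statements:

-- Session 1 (Erik):
BEGIN TRAN;
UPDATE dbo.Users SET Reputation = Reputation + 1 WHERE Id = 1;

-- Session 2 (Tara):
BEGIN TRAN;
UPDATE dbo.Users SET Reputation = Reputation + 1 WHERE Id = 2;

-- Session 1 again, now blocked by Tara:
UPDATE dbo.Users SET Reputation = Reputation + 1 WHERE Id = 2;

-- Session 2 again: each session now waits on the other,
-- and one of them gets killed as the deadlock victim (error 1205)
UPDATE dbo.Users SET Reputation = Reputation + 1 WHERE Id = 1;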

SQL Server fixes this one for you by waking up every few seconds, looking around for deadlocks, and then killing the query that it believes will be the cheapest to roll back.

Users only notice deadlocks if they hit a user-facing application that doesn’t automatically retry deadlock victims. More often than not, deadlocks happen silently to service applications, and nobody even knows they’re happening – except the DBA. The developers just assume that their queries fail sometimes for no apparent reason, and it’s probably their own code, so they keep quiet.


How to Fix Locking, Blocking, and Deadlocks

Here’s the short version:

  1. Have enough nonclustered indexes to support your queries, but no more than that – typically, I recommend that you aim for 5 or fewer nonclustered indexes, with 5 or fewer fields each
  2. Keep your transactions as short and sweet as possible – don’t do a BEGIN TRAN, and then lump in a whole bunch of SELECT queries that grab configuration values from various tables
  3. Use the right isolation level – by default, SQL Server uses pessimistic concurrency (read committed), but there are better optimistic concurrency options like snapshot isolation and RCSI
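For example, here’s a minimal sketch of option 3 – the database name is a stand-in, and WITH ROLLBACK IMMEDIATE kicks out in-flight transactions, so don’t run it casually in production:

ALTER DATABASE StackOverflow SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;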

Want the long version? We’ve got a whole ton of blocking resources to help you identify if you’ve got locked waits, which tables have the worst blocking problems, and find the lead blockers in blocking firestorms.

Query Tuning Week: How to Start Troubleshooting Parameter Sniffing Issues

I’m teaching our SQL Server Performance Tuning class out in Portland this week, so we’re running all performance posts this week. You can learn more about my upcoming classes here.

When a query is sometimes fast and sometimes slow, for the same input parameters, and you swear nothing else in the environment is changing, that’s often a case of parameter sniffing.

After SQL Server starts up and you run a stored procedure, SQL Server builds an execution plan for that proc based on the first set of parameters that are passed in.

Say we’ve got a stored procedure that runs a report based on country sales data:

  1. EXEC rpt_Sales @Country = 'China'    – SQL Server builds an execution plan optimized for big countries with big amounts of sales, and it runs in about 750 milliseconds.
  2. EXEC rpt_Sales @Country = 'Monaco'   – it reuses China’s cached execution plan for big data. It’s not great for small countries, but who cares – it’s a small amount of data, so it still runs in 500 milliseconds.

Now we restart the SQL Server, and someone queries for Monaco first:

  1. EXEC rpt_Sales @Country = 'Monaco'   – SQL Server builds a plan optimized for tiny countries with tiny amounts of data, and it runs in just 50 milliseconds – way better than when Monaco reused China’s plan. Yay!
  2. EXEC rpt_Sales @Country = 'China'   – it reuses Monaco’s cached plan for tiny data. It takes 30 seconds, and suddenly our server starts falling over because several people are running queries at a time.

SQL Server sniffs the first set of parameters that get used. Even though nothing in our environment has changed, suddenly the SQL Server runs much more slowly, and query times for China are horrific.

How to Fix Parameter Sniffing Temporarily

Parameter sniffing fixes are based on your career progression with databases, and they go like this:

1. Reboot the server! – Junior folks panic and freak out, and just restart the server. Sure enough, that erases all cached execution plans. As soon as the box comes back up, they run rpt_Sales for China because that’s the one that was having problems. Because it’s called first, it gets a great plan for big data – and the junior admin believes they’ve fixed the problem.

2. Restart the SQL Server instance – Eventually, as these folks’ careers progress, they realize they can’t go rebooting Windows all the time, so they try this instead. It has the same effect.

3. Run DBCC FREEPROCCACHE – This command erases all execution plans from cache, but doesn’t clear out many of SQL Server’s other caches and statistics. It’s a little bit more lightweight than restarting the instance, and can be done online.

4. Rebuild indexes – Doing this has an interesting side effect: when SQL Server reads an entire index to rebuild it, it gives you double the bang for the buck, and also updates your index’s statistics at the same time. This fixes the parameter sniffing issue because when SQL Server sees updated stats on an object used by an incoming query, it’ll build a new execution plan for that query.

5. Update statistics – As folks learn the above juicy tidbit, they realize they could get by with just updating stats. That’s a much lighter-weight operation than rebuilding indexes, so they switch over to just updating stats. However, I didn’t say it was lightweight.

6. Run sp_recompile for one table or proc – This system stored procedure accepts a table or a stored procedure name as a parameter, and marks all related execution plans as needing a recompile the next time they run. This doesn’t change the stats – but if you run it for China first this time, you may just get a more accurate query plan using the same stats.

7. Run DBCC FREEPROCCACHE for a single query – This one’s my favorite! Run sp_BlitzCache @results = 'expert' and scroll all the way to the right hand side. You’ll see plan handles and sql handles for your most resource-intensive queries. Find the one experiencing the parameter sniffing issue, and save its execution plan to disk first. (You’ll want this later.) Then pass its sql or plan handle into DBCC FREEPROCCACHE. Presto, just one plan gets evicted from the cache. It’s like a targeted missile strike rather than a big nuclear bomb.
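As a sketch of those last two options – the proc name is our example from earlier, and the plan handle is a made-up placeholder, so substitute the one sp_BlitzCache shows you:

-- Option 6: mark one proc's plans for recompilation (stats stay untouched)
EXEC sp_recompile N'dbo.rpt_Sales';

-- Option 7: evict a single plan from the cache (hypothetical plan handle)
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);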

How to Prepare to Fix Parameter Sniffing Permanently

Assuming you saved the execution plan like we just discussed, you’ve got a huge head start! That plan includes:

  • A set of calling parameters for the stored proc
  • An example of what the execution plan looks like when it’s slow

Now you need:

  • At least one other set of calling parameters that produce a different execution plan (like our Monaco vs China example)
  • An example of what the execution plan looks like when it’s speedy for each set of parameters

To get that info, check out How to Start Troubleshooting a Slow Stored Procedure.

Once you have all that, you’re ready to build a single version of the stored procedure that is consistently fast across all calling parameters. Start digging into Erland Sommarskog’s epic post, Slow in the Application, Fast in SSMS. Even though that title may not describe your exact problem, trust me, it’s relevant.

Query Tuning Week: How to Start Troubleshooting a Slow Stored Procedure

I’m teaching our SQL Server Performance Tuning class out in Portland this week, so we’re running all performance posts this week. You can learn more about my upcoming classes here.

When you need to find out why a stored procedure is running slow, here’s the information to start gathering:

Check to see if the plan is in the cache. Run sp_BlitzCache® and use several different @sort_order parameters – try cpu, reads, duration, executions. If you find it in your top 10 plans, you can view the execution plan (it’s in the far right columns of sp_BlitzCache®’s results), right-click on a select statement in the plan, and click Properties. You can see the optimized parameters in the details. Save this plan – it might be a good example of the query’s plan, or a bad one.

Collect a set of parameters that work. Sometimes the reporting end user will know what parameters to use to run the stored procedure, but that’s not always the case. You can also check the comments at the start of the stored proc – great developers include a sample set for calling or testing. Once you get past those easy methods, you may have to resort to tougher stuff:

  • If it runs frequently, run sp_BlitzFirst @ExpertMode = 1 while it’s running, and save the execution plan to a file. Look in the XML for its compiled parameter values.
  • Run a trace or use Extended Events to capture the start of execution. This can be pretty dicey and have a performance impact.
  • Worst case, you may have to reverse-engineer the code in order to find working params.

Find out if those parameters are fast, slow, or vary. Ideally, you want to collect a set of calling parameters that are slow, another set that are fast, and possibly a set that vary from time to time.

Find out if the stored proc does any writes. Look for inserts, updates, and deletes in the query. There’s nothing wrong with tuning those – you just have to do it in development, obviously. Not only that, but you also have to watch out for varying performance.

If it does writes, does it do the same amount of writes every time? For example, if it’s consistently updating the same rows every time it runs, it’ll probably take the same amount of time. However, if it’s adding more rows, or deleting rows, it may not – subsequent runs with the same parameters might not have work to do, or might have a lot of work to do. You may have to restore the database from backup each time you test it.

Does the stored proc run differently in development? Is it slow in production, and fast in development? Or is it consistently slow in both locations? If you measure the query with SET STATISTICS IO, TIME ON in both servers, does it produce the same number of reads and CPU time in both environments? If not, what are the differences between those environments? You may have to change the dev environment’s configuration in order to reproduce the issue, or you may have to resort to tuning in production.
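For the measuring part, a quick sketch – the proc and parameter are placeholders:

SET STATISTICS IO, TIME ON;
EXEC dbo.rpt_Sales @Country = N'China';
SET STATISTICS IO, TIME OFF;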

Erik says: Relying on SQL Server to cache, and keep cached, relevant information about stored procedure calls can feel futile. I’d much rather be logging this stuff in a way that I have better control of. Whether it’s logged in the application, or the application logs to a table, it’s a much more reliable way to track stored procedure calls, parameters passed in, and how long they ran for. You can even track which users are running queries, and bring people running horrible ones in for training. This can take place in a classroom or a dark alley.

Dell DBA Days: What kind of hardware can you buy for one core’s worth of Enterprise Edition?

[Photo: Dell R730]

Microsoft charges $14,256 for a 2-core pack of SQL Server 2016 Enterprise Edition.

So how much hardware can we get for $7,128?

I started with one of the more common 2-socket server models, a Dell PowerEdge R730, and then optioned it with:

  • Chassis – up to 8 2.5″ drives (because 2.5″ drives aren’t just the future, they should be your default today)
  • Two Intel Xeon E5-2637 v4 – quad core, 3.5GHz – I splurged $995.41 to get the second processor
  • Four 32GB RDIMMs – that’s 128GB memory
  • PERC H730P RAID controller – a $605.05 step up, and I felt a little guilty about this, but I’m assuming that you’re going to add more drives over time, and I want the best storage throughput I can get from the vendor
  • Two 200GB SSD boot drives – SSDs for boot drives sounds crazy, but c’mon, they’re only $174.31 apiece, which is less than the magnetic drives, and gives you enough space to house Windows Updates
  • Two 480GB SSD MLC drives – at $376.22 each – as a mirrored pair for data
  • Redundant power supplies – $181.71, but a common sense safety measure for database servers
  • Total: $7,196.71 (plus tax, shipping, etc)

That’s a lot of hardware horsepower for the cost of just one core of Enterprise Edition.

Wanna go wild and crazy and buy two cores worth?

Take that same starter box, and throw in some power-ups:

  • Upgraded redundant power supplies – $60.58 extra to handle the…
  • 512GB RAM – gonna need another 12 memory sticks for $5,806.92, which sounds like a lot, but now we’re caching pretty large databases in RAM, avoiding writes to TempDB, and giving queries plenty of room for workspace
  • That brings us to $11,128.55 from Dell, leaving us $3,127.45 change left over, which I’ll use for…
  • Two 800GB Intel P3700 PCI Express SSDs – each just $1,277.02, a crazy deal for their rockin’ performance, and we’ll mirror them together for even faster reads (and leave the two 480GB SSDs for TempDB or logs or whatever, hey, who cares, hardware is cheap)
  • Total: $13,682.59

And take the remaining change and buy yourself some hardware training and a nice dinner.

SQL Server Enterprise Edition makes great hardware look cheap.

I’m not saying Enterprise Edition is overpriced – I think it’s completely fair given its amazeballs capabilities. The problem I have is with people who pour all their money into licensing and proceed to install it on something with fewer cores, less memory, and less solid state space than a Microsoft Surface.

When you’re struggling with performance management on a 4-core Enterprise Edition VM, wheezing with 16GB RAM, storing data on spinning rusty frisbees, spending weeks of developer time trying to rewrite legacy code, take a break and run some numbers:

  • How much do your developers and DBAs cost per day?
  • How many days have they spent working on the performance issue?
  • How much have you paid in Software Assurance over the last couple of years?
  • How much would your hardware cost today to replace? (Not what you originally paid for it.)

Bad Idea Jeans: Dynamically Generating Ugly Queries and Task Manager Graffiti

Say you’re at Dell DBA Days, and you want to build a really ugly query, fast. You want to generate a StackOverflow database query that will take a long time to compile, and maybe demand a huge amount of memory to run, but not actually take any time to execute.

You might start by building a numbers table with Method #7 from this StackOverflow answer, and then:

-- Row 1 is the anchor; every NumbersTest row after that adds one more self-join on Posts
SELECT 1 AS Ordered, 'SELECT * FROM dbo.Posts p1'
UNION
SELECT nt.Number AS Ordered, 'INNER JOIN dbo.Posts p' + CAST(nt.Number AS VARCHAR(10)) +
  ' ON p' + CAST(nt.Number AS VARCHAR(10)) + '.Id = p' + CAST(nt.Number - 1 AS VARCHAR(10)) + '.ParentId'
FROM dbo.NumbersTest nt
WHERE nt.Number > 1
ORDER BY 1  -- keep the join lines in numeric order for copy/paste

Which gives you a 10,000-join query, but I’m only showing the first 10 lines here:

[Screenshot: Big dynamic query]

So then you can copy/paste the contents of that second column into a new window. You could also assemble a string, but I like having it available for copy/paste because it, uh, doesn’t exactly work the first time.

For example, when I run it with 10,000 joins:

Msg 8631, Level 17, State 1, Line 1
Internal error: Server stack limit has been reached. Please look for potentially deep nesting in your query, and try to simplify it.

When I drop it down to a much more realistic 5,000 joins:

Msg 4074, Level 16, State 1, Line 1
Client drivers do not accept result sets that have more than 65,535 columns.

Ah! Okay, that’s fair. (That’s also two error messages I’ve never seen before. Probably a good thing.) Alright, let’s take out the SELECT * and replace it with SELECT p1.* and see what happens:

Msg 8621, Level 17, State 1, Line 1
The query processor ran out of stack space during query optimization. Please simplify the query.

Come on, SQL Server. Work with me. Cowboy up. Alright, let’s try 2,000 joins, and….

[Screenshot: Fun trivia – SQL Server uses exactly one core to compile a query]

Fifteen minutes later, SQL Server is still trying to compile an execution plan for the two-thousand-join query – and I know for a fact that the query won’t produce a single row.

So as long as I’m waiting for it to compile, might as well have a little fun with Task Manager. On a 72-core box, this statement:

ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 1,4 TO 5,7,10,12,14,16,19,22,24,26,
28 TO 29,31,34,64,66,68 TO 69,71,74,76,78,80,83,86,88 TO 89,92 TO 93,95 TO 96,98 TO 99
GO

When combined with a really ugly CPU-burning query, you get:

[Screenshot: Graffiti in Task Manager]

Over an hour later, it was still trying to build a query plan for the 2,000 join query, so I figured hey, let’s put affinity masking back the right way, then throw the query into SQLQueryStress and run 72 executions of it:

[Screenshot: FIRE IN THE HOLE]

Which gives me a slightly different Task Manager:

[Screenshot: High score!]

It’s surprisingly hard to achieve 100% CPU usage across all 72 cores, but be patient, and you can get it. SQL Server is in bad shape, too – if you run sp_BlitzFirst to check the running queries:

[Screenshot: The elusive RESOURCE_SEMAPHORE_QUERY_COMPILE]

Most queries are waiting on RESOURCE_SEMAPHORE_QUERY_COMPILE – which means they can’t get enough memory in order to start compiling.

I canceled that – fun, but not all that useful – and over 90 minutes later, I was still waiting for the 2,000 join query to compile.

Wanna watch these kinds of shenanigans live? Sign up for today’s Dell DBA Days webcasts.

Update: after 12 hours and 43 minutes, the compilation gave up:

[Screenshot: SQL Server puts in half-days]
