Query Exercise Answer: Generating Big TempDB Spills
In last week’s Query Exercise, I challenged you to play some code golf to generate big spills with tiny T-SQL.
Today, I’m going to walk you through my thought process – the initial attempts I tried and failed with, and the discoveries I made along the way, because I think it makes for fun storytelling.
In the past, when I’ve seen people trying to generate a lot of data quickly, they’ve cross-joined SQL Server’s built-in system tables like sys.all_objects or sys.all_columns. The more times you cross-join them with each other, the more rows you create with a Cartesian join. However, that takes up a lot of space to type, and we’re playing code golf here.
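For reference, that approach looks something like this — sys.all_columns typically holds around ten thousand rows, so each additional cross join multiplies the row count (exact counts vary by version and database):

```sql
/* A Cartesian product: ~10,000 rows cross-joined twice
   yields on the order of a trillion rows */
SELECT a.name
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b
CROSS JOIN sys.all_columns AS c;
```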
Attempt #1: Sorting Big Text Data
SQL Server 2022 brought big guns to this fight with the GENERATE_SERIES function, which accepts nice big numbers as input parameters. You can simply put in a starting number and ending number, and generate that many rows. Just one call to this can produce more rows than cross-joining two system tables in the same amount of typed characters.
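If you haven’t used it, the syntax is just a start, a stop, and an optional step:

```sql
/* A million rows with very little typing */
SELECT value FROM GENERATE_SERIES(1, 1000000);

/* The optional third parameter controls the step */
SELECT value FROM GENERATE_SERIES(10, 2, -2);
```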
I figured I’d generate a large number of rows, each with a big chunk of text attached to it:
Transact-SQL

SELECT value, REPLICATE('x', 9223372036854775808) AS x
FROM GENERATE_SERIES(-9223372036854775808, 9223372036854775808);
For each row returned from GENERATE_SERIES, I’m also calling the REPLICATE command to build what the cool kids call “unstructured data”, a big long nasty string. Whatever you do, don’t actually execute that query, but the estimate is amusing:

That gets me a lot of data, but no memory is required to generate those results. Heaven forbid you execute that thing, because whatever has to absorb the firehose of data is going to need a hell of a lot of memory. But our goal here is to generate workspace spills, and the easiest way to do that is to add an ORDER BY:
Transact-SQL

SELECT value, REPLICATE('x', 9223372036854775808) AS x
FROM GENERATE_SERIES(-9223372036854775808, 9223372036854775808)
ORDER BY x;
That doesn’t work, because SQL Server seems to understand that the replicated strings are all the same, so there’s no sort in the plan. Same if I try to just sort by the values, either ascending or descending. But slap just a little math in there:
Transact-SQL

SELECT value, REPLICATE('x', 9223372036854775808) AS x
FROM GENERATE_SERIES(-9223372036854775808, 9223372036854775808)
ORDER BY value - value;
The plan gets a nasty little sort:

The desired memory grant on the sort is fairly high, at 1,125,899,906,843,128KB:

That’s 1,125 petabytes, or 1.1 exabytes of memory desired.
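If you want to check that unit conversion yourself — the grant is reported in kilobytes, and treating a petabyte as 10^12 KB and an exabyte as 10^15 KB:

```sql
/* The sort's desired memory grant, converted from KB */
SELECT 1125899906843128.0 / 1E12 AS Petabytes,  /* ~1,125 */
       1125899906843128.0 / 1E15 AS Exabytes;   /* ~1.1   */
```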
Now normally, if I’m trying to get SQL Server to spill to tempdb, I might try to get it to underestimate the number of rows coming into the sort, thereby lowballing the memory grant. But who cares? None of you have a SQL Server with 1.1 exabytes worth of memory, nor 1.1 exabytes worth of TempDB space, so I feel fairly confident that when you hit execute on that query, you’re gonna have a bad time.
Again, that’s the estimated plan because I can’t hit execute on that, because I’d have to deal with a metric poopton of results. I can’t just select TOP 1, either, because I’ll get a lowball memory grant, and I’m going for a high score here. In fact, the memory grant difference between these two versions of the query is one of the most comical differences I’ve ever seen:
- SELECT TOP 1: 24KB memory desired
- SELECT TOP 100: 24KB memory desired
- SELECT TOP 101: 1.1 exabytes desired
Given the size of the data I’m replicating, I’m not even sure SSMS would react okay to getting the top 101, so we’ll dump the results into a temp table:
Transact-SQL

SELECT TOP 101 value, REPLICATE('x', 9223372036854775808) AS x
INTO #t
FROM GENERATE_SERIES(-9223372036854775808, 9223372036854775808)
ORDER BY value - value;
The TOP 101 makes the resulting execution plan quite wide:
And the desired memory grant is still at 1.1 exabytes:
Finally, we have a query I can execute without bringing SSMS down with it, hahaha. I mean, I can execute it, but I can’t get the actual plans because it’ll fill up TempDB with astonishing speed.
If I put all of that on a single line, that’s 159 characters including spaces. I could make the query shorter by removing the TOP 101 and the INTO #t parts, but I think the fun part of a query like this is being able to knock a SQL Server over without taking your workstation down with it.
I hit execute, giddily excited at the thought of trashing my lab server, only to be met with the Sad Trombone of Disappointment:
Transact-SQL

Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type int.
The statement has been terminated.
What the… I’m not using any integers here, only bigints. I started troubleshooting to see where SQL Server was implicitly converting my bigints into integers, and along the way, I stumbled upon something delightfully terrible.
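(Spoiler, courtesy of a commenter down below: REPLICATE’s length argument apparently gets treated as an int unless the string expression is a MAX type, so any length past 2,147,483,647 throws Msg 8115. Per that comment’s report, casting the input works around the error:)

```sql
/* Throws Msg 8115, arithmetic overflow converting to int: */
SELECT REPLICATE('x', 2147483648);

/* Per the comment below, no overflow error once the input is a MAX type
   (though actually materializing a string that big is its own problem): */
SELECT REPLICATE(CAST('x' AS VARCHAR(MAX)), 2147483648);
```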
Attempt #2: Exploiting GENERATE_SERIES
As part of my troubleshooting, I wanted to make sure GENERATE_SERIES really did work with bigints, so I started by getting the smallest numbers possible:
Transact-SQL

SELECT TOP 101 value
FROM GENERATE_SERIES(-9223372036854775807, 9223372036854775808);
And that returned data instantly. But when I tried to get the biggest numbers, what I should have done was pass in the max bigint as my starting point and tell it to step by -1. That’s not what I did – I just thought I could take a shortcut:
Transact-SQL

SELECT TOP 101 value
FROM GENERATE_SERIES(-9223372036854775807, 9223372036854775808)
ORDER BY 1 DESC;
I hit execute on that, waited a few seconds, waited a few more seconds, and realized I’d stumbled upon something wonderful. SQL Server doesn’t have any logic to reverse-engineer your ORDER BY, and push that sorting down into GENERATE_SERIES. No, you asked for a metric poopton of data, so SQL Server will gladly build all that data and sort it for you!
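(For comparison, the non-shortcut version I described — max bigint as the start, stepping by -1 — comes back instantly, because the rows are generated in the order you asked for and no sort is needed:)

```sql
/* Biggest values first, no Sort operator in the plan */
SELECT TOP 101 value
FROM GENERATE_SERIES(9223372036854775807, -9223372036854775807, -1);
```

The shortcut version with the ORDER BY, though, has to materialize and sort everything first.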
And here’s the funniest part: the desired memory grant is still 1.1 exabytes, even though I’m not generating or sorting any text:
Since we’re not generating big text data, I can just plain hit execute on that query without worrying about overwhelming SSMS with the result sets (if they ever get produced). As the query runs, it gradually fills up TempDB, as shown by sp_BlitzWho @ExpertMode = 1, which shows things like tempdb allocations and memory grant usage:
The live query plan (also shown by sp_BlitzWho) dutifully shows the rows marching on through the plan:
And after a couple of minutes, we run TempDB out of space:
Transact-SQL

Msg 1105, Level 17, State 2, Line 14
Could not allocate space for object 'dbo.SORT temporary run storage: 140737559658496'
in database 'tempdb' because the 'PRIMARY' filegroup is full due to lack of storage
space or database files reaching the maximum allowed size. Note that UNLIMITED files
are still limited to 16TB. Create the necessary space by dropping objects in the
filegroup, adding additional files to the filegroup, or setting autogrowth on for
existing files in the filegroup.
Nice! So that’s a 100-character query that runs the server out of TempDB space, maxing out possible spills:
Transact-SQL

SELECT TOP 101 value FROM GENERATE_SERIES(-9223372036854775807, 9223372036854775808) ORDER BY 1 DESC
In the spirit of code golf, we can do a little tuning to reduce characters:
- Remove the TOP 101 since the query won’t finish anyway, and I don’t have to worry about overwhelming SSMS’s memory
- Change the select to select *
- Change the bigint numbers to the min/max integer sizes – but then our desired memory grant is “just” 300GB
That brings us down to 70 characters for an estimated 300GB spill:
Transact-SQL

SELECT * FROM GENERATE_SERIES(-2147483648, 2147483647) ORDER BY 1 DESC
Although I guess as long as I’m at 70 characters, why use small numbers? Let’s change them all to 9s to get the same string length:
Transact-SQL

SELECT * FROM GENERATE_SERIES(-2147483648, 2147483647) ORDER BY 1 DESC
SELECT * FROM GENERATE_SERIES(-9999999999, 9999999999) ORDER BY 1 DESC
(Note that this technique ONLY works if you pass in -9999999999 as a starting number, or some other bigint. If you just try to GENERATE_SERIES(0,9999999999) it will fail, saying both the starting and ending numbers need to be the same datatype.)
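The fix, if you ever hit that error, is just to make both parameters the same datatype:

```sql
/* Fails: 0 is an int literal, but 9999999999 only fits in a bigint */
SELECT * FROM GENERATE_SERIES(0, 9999999999);

/* Compiles: cast the start so both parameters are bigints
   (don't actually run it anywhere you care about) */
SELECT * FROM GENERATE_SERIES(CAST(0 AS BIGINT), 9999999999);
```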
And we get a 1.5TB desired memory grant, which is pretty respectable:
So I end up with a few final entries:
Transact-SQL

/* 59 characters, 150GB grant/spill on 2025: */
SELECT * FROM GENERATE_SERIES(0,2147483647) ORDER BY 1 DESC

/* 69 characters, 1.5TB grant/spill: */
SELECT * FROM GENERATE_SERIES(-9999999999,9999999999) ORDER BY 1 DESC

/* 87 characters, 1.1 exabyte grant/spill: */
SELECT * FROM GENERATE_SERIES(-9223372036854775807,9223372036854775808) ORDER BY 1 DESC
So in less than 100 characters, you can drain a SQL Server out of drive space for TempDB. Of course, keep in mind that the estimated memory grant may not be the amount of the actual spill – but I’ll leave it to the reader to provision a petabyte SSD and then let me know what the actual spill amount was.
I thought that was pretty good, but I watched y’all’s solutions come in, and I learned stuff!
Reece Goding’s Solution
Reece Goding (Github) and Andy Mallon (Blog – LinkedIn) worked on a similar theme of using GENERATE_SERIES, but they leveraged a sneaky trick: using scientific notation to shorten the long integer text. To understand Reece’s 72-character solution, you have to know that this works first:
Transact-SQL

DECLARE @H BIGINT=9E18;
SELECT CAST(@H AS BIGINT);
Which produces a result of 9000000000000000000. That saves you some typing:
Transact-SQL

DECLARE @H BIGINT=9E18
SELECT*FROM GENERATE_SERIES(-@H,@H)ORDER BY 1DESC

/* Or on the same line, which is the same number of characters: */
DECLARE @H BIGINT=9E18;SELECT*FROM GENERATE_SERIES(-@H,@H)ORDER BY 1DESC
This is better than my solution in TWO ways. First, both Reece and Andy Mallon figured out that you could take out the spaces between SELECT*FROM and 1DESC, which blows my mind and saves “strokes” in code golf.
Second, the use of local variables surprises SQL Server: the query optimizer doesn’t realize that @H is going to be such a big number, so it lowballs the amount of memory required to run the query.
One downside is that it means the estimated plan for Reece’s query doesn’t show the true awfulness of the spill that would result. To approximate the spill, I’m going to revise his solution to natively use the @H values:
Transact-SQL

SELECT*FROM GENERATE_SERIES(-9000000000000000000, 9000000000000000000)ORDER BY 1DESC;
That would get a 1.1 exabyte memory grant, just like my 87-character solution – but Reece’s solution is just 72 characters! Excellent work. Andy Mallon used a similar approach, but without the variables.
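(Side note, not part of anyone’s entry: if you ever want the optimizer to see a local variable’s actual value, OPTION (RECOMPILE) will do it, at the cost of a few extra characters – and it should bring the giant estimate right back:)

```sql
DECLARE @H BIGINT = 9E18;
SELECT * FROM GENERATE_SERIES(-@H, @H)
ORDER BY 1 DESC
OPTION (RECOMPILE);  /* the optimizer now sniffs @H's actual value */
```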
Tuyen Nguyen’s Solution
Tuyen (Github) wrote, “I use the cardinality estimation limitation from using local variables so SQL Server misestimates the rows coming out from the function and going into the sort, guaranteeing a spill to TempDB on any server. A giant string sort key then magnifies the spill. The bigger @N is, the larger the spill, but also much slower the query is.”
I love this because it’s the spirit of what I thought for my first attempt, but she did a much more elegant job than I did, especially with the local variable. Top notch. Tuyen’s query:
Transact-SQL

DECLARE @N int = 100000
SELECT *
FROM GENERATE_SERIES(1, @N)
ORDER BY REPLICATE(NEWID(), 1000000000)
In the spirit of code golf, let’s switch the numbers to 9s, to get a worse query with no additional characters – on the order by, I need to use the max int size:
Transact-SQL

DECLARE @N int = 999999
SELECT *
FROM GENERATE_SERIES(1, @N)
ORDER BY REPLICATE(NEWID(), 2147483647)
Because this solution relies on mis-estimations, I have to actually execute the query to see the memory grants, so I’m going to modify it to only return TOP 101 to avoid causing problems with SSMS. The actual query plan:

Heh heh heh, there you go: 3,995,134 pages is a 30GB spill. Now, that may not sound like much, but you can use that exact technique and simply amp up the numbers in the query, and get yourself a much bigger spill.
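The back-of-napkin math, since SQL Server pages are 8KB apiece:

```sql
/* Pages times 8KB per page, divided by KB per GB */
SELECT 3995134 * 8 / 1048576.0 AS SpillGB;  /* ~30.5 GB */
```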
How I Adapted Mine
Ripping off Mallon’s idea of ripping out spaces, here’s what I ended up with for my own answer:
Transact-SQL

/* 55 characters, 150GB grant/spill on 2025: */
SELECT*FROM GENERATE_SERIES(0,2147483647)ORDER BY 1DESC

/* 65 characters, 1.5TB grant/spill: */
SELECT*FROM GENERATE_SERIES(-9999999999,9999999999)ORDER BY 1DESC

/* 83 characters, 1.1 exabyte grant/spill: */
SELECT*FROM GENERATE_SERIES(-9223372036854775807,9223372036854775808)ORDER BY 1DESC
So in less than 100 characters, you can drain any SQL Server out of drive space for TempDB. Reece’s solution beats me though at 72 characters for that same giant spill.
If you liked this exercise, check out the ideas from other readers, and seriously, use SQL Server 2025’s Resource Governor feature to prevent users from doing something similar.
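A minimal Resource Governor sketch along those lines — the workload group name is made up, the percentages are something you’d tune for your environment, and check the documentation for which settings your build actually enforces versus merely reports:

```sql
/* Hypothetical name; run this in master, then route logins
   into the group with a classifier function. */
CREATE WORKLOAD GROUP CodeGolfers
WITH (
    REQUEST_MAX_MEMORY_GRANT_PERCENT = 10,  /* cap per-query memory grants */
    REQUEST_MAX_CPU_TIME_SEC = 60           /* flag runaway queries */
);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

Keep in mind that capping memory grants by itself just makes queries spill sooner; it’s the CPU-time limit that reins in queries like the ones in this post.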
Hi! I’m Brent Ozar.
I make Microsoft SQL Server go faster. I love teaching, travel, cars, and laughing. I’m based out of Las Vegas. He/him. I teach SQL Server training classes, or if you haven’t got time for the pain, I’m available for consulting too.
10 Comments
You can shorten the local variable name to @.
Saves another 3 characters from Reece Goding’s solution:
DECLARE @ BIGINT=9E18;SELECT*FROM GENERATE_SERIES(-@,@)ORDER BY 1DESC
HAHAHA, nicely done!
How did you discover that? I can’t believe it’s legal!
Curiosity. After seeing a # in the middle of a column name, I checked the docs and tried several things to see if the docs were correct. In this case they were!
First character: @ # _ and letters (as defined by Unicode)
Subsequent characters: @ # _ $ letters and numbers (also as defined by Unicode)
So yes, this is legal as well 🙂
declare @ int select @ # into #
This is kind of tangential, but you had some concern about the GENERATE_SERIES function working with BIGINT. It looks like you have the stop value set to one higher than BIGINT’s max value (and eventually the start value is one higher than the minimum, but that hardly matters). Is that important in some way for exploiting the function, and it just isn’t mentioned? The documentation just says “the series stops once the last generated step value exceeds the stop value.” That makes me kind of curious what the column type would be on return, and what the rules are for deciding that, when you just key in a literal that huge.
I don’t have access to a SQL Server at the moment, but wouldn’t REPLICATE blow up when you tell it to repeat something more than MAX(BIGINT) times?
Dude, dbfiddle.
Whenever you’ve got access to one again, feel free to try your experiments. It’s not really fair to ask others to do your work for you. Hope that’s fair. Cheers!
Sorry, it wasn’t my intention to ask you to do anything. I just thought one of those two things might explain the error you said you started digging into midway through the article, and then didn’t mention again. Maybe I should have just outright asked if you found the issue.
I couldn’t find a free, no-sign-up SQL Server 2022 sandbox that worked correctly when I was killing time this morning, but I found one now. It returns that “Arithmetic overflow error converting expression to data type int” error whenever the length argument for REPLICATE exceeds the max for signed 32-bit, unless you cast the string expression to a MAX type. Seems like a weird way to complain about an implied length restriction.
Not going to play around with GENERATE_SERIES now. This sandbox doesn’t have the right compatibility level for a start, and I don’t know if I’d want to play around on somebody else’s toys, when a typo could kill their server.
Dude, Docker.