Who’s Hiring in the Database Community? December 2021 Edition

Who's Hiring

Is your company hiring for a database position as of December 2021? Do you wanna work with the kinds of people who read this blog? Let’s set up some rapid networking here. If your company is hiring, leave a comment.

The rules:

  • Your comment must include the job title, and either a link to the full job description, or the text of it. It doesn’t have to be a SQL Server DBA job, but it does have to be related to databases. (We get a pretty broad readership here – it can be any database.)
  • An email address to send resumes, or a link to the application process – if I were you, I’d include an email address, because you may want to know which applicants are readers here; they’re often more qualified than the applicants you regularly get.
  • Please state the location and include REMOTE and/or VISA when that sort of candidate is welcome. When remote work is not an option, include ONSITE.
  • Please only post if you personally are part of the hiring company—no recruiting firms or job boards. Only one post per company. If it isn’t a household name, please explain what your company does.
  • Commenters: please don’t reply to job posts to complain about something. It’s off topic here.
  • Readers: please only email if you are personally interested in the job.

If your comment isn’t relevant or smells fishy, I’ll delete it. If you have questions about why your comment got deleted, or how to maximize the effectiveness of your comment, contact me.

I close the comments here after 1 week, and publish another post each month in the Who’s Hiring category so y’all can get the latest opportunities.

If you want to be notified as people leave new job comments here, there’s a “subscribe without commenting” option in the comment form.

Are You Underpaid? Let’s Find Out: The Data Professional Salary Survey is Open.

1 Comment

We’re data people, you and I. We make better decisions when we work off data instead of feelings.

It’s time for our annual salary survey to find out what data professionals make. You fill out the survey, we open source the whole thing, and you can analyze the data to spot trends and do a better job of negotiating your own salary:

We pay Richie in query bucks

Take the Data Professional Salary Survey now.

The anonymous survey closes Sunday, Jan 2. The results are completely open source, and shared with the community for your analysis. (You can analyze ’em now mid-flight, but I’d wait until the final results come in. I’ll combine them into a single spreadsheet with the past results.)

Thanks for your help in giving everybody in the community a better chance to talk honestly with their managers about salary.

Today is the last day to buy a Live Class Season Pass, EVER.

Company News

Today is literally it. This is the end of an era.

One of the most popular things I sell is my Live Class Season Pass, which gives you two cool things:

  • The ability to drop in on any live online class that I teach for a year straight, plus
  • The ability to stream the recordings, too

For one low price, students can attend classes whenever works for them, as often as they like, continuously raising their skills. Plus, when an emergency pops up at work, they can watch the recordings after hours to prep for the next day’s class. When they’re getting ready to tackle a problem at work, they can jump back to a specific module, watch the recordings, step through the demos, and even try their hand at the lab assignment to see if they really got it.

I love the Live Class Season Pass, but starting in 2023, I’m going to be teaching far fewer live online classes. I can’t sell an unlimited pass when there’s only a limited number of live classes you can attend. That’s why today is literally the last day you can buy a pass and attend a huge number of classes over the next 12 months. In 2023, I might teach an online class or two, but I’m aiming to scale way back, and folks will have to pay for each class they want to attend.

Today, the Live Class Season Pass is $1,595.

If you want an even bigger/better deal, the Level 3 Bundle is just $100 more, and it adds SQL ConstantCare and the Consultant Toolkit.

But move quick. When the calendar flips over to December, the Black Friday deals shut off, and the Live Class Season Pass is gone forever. See you in class!

Update 2021/12/01 – the Black Friday Sale is over.

My Black Friday sale ends tomorrow.

Company News

The countdown timer is on: you’ve got less than 48 hours to grab one of my massively discounted bundles and get the SQL Server training you’ve always wanted at a price your manager can live with:

Update 2021/12/01 – the Black Friday Sale is over.

Sales are open until December 1. When the calendar flips over to December, sales are over, and you’ll never be able to buy a Live Class Season Pass again.

For prices this low, the only way to buy is online with a credit card or Paypal. If you wanna play around with invoices, purchase orders, wires, checks, Bitcoin, or trading in your old Toyota, you can contact us for an invoice, but it’ll be at regular price, not the sales price.

For groups of 10 or more, email us at Help@BrentOzar.com and we can make the checkout process easier. In your email, include the bundle you want to buy and the list of emails you want to enroll. For smaller groups, you can just walk through the checkout process online once per student.

Taxes are handled automatically during checkout. If you’re buying for a business that needs tax info or exemptions, fill out the business’s info during the checkout process. An email is sent to you during checkout – that’s the only invoice & receipt that we generate. If your accountants want something else on the invoice & receipt, your best bet is to print that email to PDF and then add the information you need.

Let’s do this! See you in class.

Update 2021/12/01 – the Black Friday Sale is over.

[Video] Office Hours: Black Friday Edition, Ask Me Anything About SQL Server


Yesterday, I finished setting up my new Apple MacBook Pro with the new M1 Max processor and did my first live stream with it on my Twitch channel to see how the hardware held up. I was tickled pink – it worked great!

Here’s what we covered:

  • 00:00 Introductions
  • 03:48 TheGreatFode: Hi Brent, have you ever seen cases where too low an HBA queue depth setting was a problem, and increasing it improved I/O performance?
  • 09:33 NewPadawan: Hey Brent, is splitting tempdb still needed on SSD? On the company I work at, we have about 40 databases with each db is about 20-40GB, on 1 SSD. We only use 1 tempdb (on another SSD) so far, on sqlserver 2008 r2.
  • 10:39 BusyIsGood: Hi Brent, I’m trying to use sp_BlitzLock to get recent deadlocks but the procedure has started taking over 30 minutes to produce records. What could be causing this delay?
  • 12:15 LookingAtProcsAllDay: Hi Brent I’ve seen many different upsert patterns (some of them considered anti) in TSQL. Everything from: If (not) exists / merge / @@rowcount / updlock / serializable / list goes on… What is your “go to UPSERT pattern” in TSQL and why?
  • 15:20 sol: Howdy, sir, tasty be your beverage! Earlier this century twas like “MaxDOP at 50? – Brent wrote this… – Who’s Brent?!” and now it’s more like “Why..? – Thus Brent Hast Written – Ooh, so wise!”. By giving more and more to the community, are you afraid of becoming “the default”?
  • 18:04 i_use_lowercase_for_select: Hi Brent, My friend who is also my customer has a 100GB database and a terrible application which uses ‘select * from [table]’ for every query. Performance is terrible and now he wants to use ephemeral disks for data and logfiles. Can you help me expess how bad of an idea that is
  • 20:25 Lostindb: Hi Brent, I used to put mdf and ldf on separate physical disk location. But it is still needed when we use SSD? Or is it safe to put mdf and mdf in same SSD? Thanks!
  • 21:26 RelationalDBA: Hey Brent, love your office hours videos. If you were a Production DBA, what would your approach be to monitor AlwaysON AG? My friend’s office decided to have HA on the DBs and he wants to be prepared.
  • 24:22 Accidental DBA: Hi Brent, sometimes when we run sp_blitzfirst with no arguments, it runs for several minutes (15 minutes). Is this something to be concerned about? Looking at messages tab, it says “Beginning investigatory queries”. SQL2014
  • 25:28 Stuart: Hi, on SQL2016 I have a query using outer-join to empty columnstore index table to benefit from batch mode; on 2019 (compat150) if I remove the outer join to columnstore table it gets a very different plan and no batch mode so seems this is still useful on 2019 – any thoughts?
  • 27:50 Stuart: Hi Brent, I recently created a high-value missing index on a single (computed) column, according to sp_BlitzIndex. When I check the index usage stats DMV days later though it shows it’s not been used at all, any thoughts on why that might be?
  • 28:54 Ivan: Is it necessary to rebuild index which had page count more then 500 above & fragmentation more then 50% daily or weekly once should be fine ?
  • 31:09 Wrap-up

Wanna get alerted whenever I go live? Subscribe to my Twitch channel. See you there!

[Video] Office Hours Speed Round: 17 Answers in 11 Minutes

1 Comment

Not all of your questions need detailed answers. In this speed round, I race through 17 answers in 11 minutes:

Here’s what we covered:

  1. 00:11 UnfortunatelyPostgresIsCheaper: I know you’ve answered that, as far as u know, there isn’t a ‘Brent Ozar’ of Postgres, but as you spend a lot of time with it u must know a quite a bit so what were/are your learning sources?
  2. 00:46 dadepretto: Hi Brent, first thanks for your amazing work! Regarding SQL 2022 Parameter Sensive Plans filling up query store, Bob Ward replied to a comment on the YT Microsoft Mechanics video “we have logic here not to cause too many plans to be cached”. Any thoughts? Is there any hope?
  3. 01:14 Jacob H: Hi Brent, what are your thoughts on the new Azure SQL Database Ledger feature? Do you think there is any future for “blockchains in the database”?
  4. 01:45 suku2022 dba: hi Brent, Performance tuning is something will come based on experience. so wonder is there any site where we get queries or kind of mock scenarios and we can tune then or make them run faster..
  5. 02:40 LookingAtProcsAllDay: Hi Brent, waiting on mgr app for Bundle 2 What could explain a slow exponential deterioration in CPU and Duration of certain procs, that is reset every time my friend does a failover? Visible for past year on 2017, not enough data points to see if problem in 2019 with 3x more mem
  6. 03:20 Shy: Hi Brent, For DR Env – When to use Log Shipping verses Availability Groups? What is the point that I have to switch from Log Shipping to Availability Groups with respect to RPO and RTO?
  7. 03:59 Richard: If I duplicate a linked-server connection, except for the friendly name (and with two sys64 connectors), have I doubled the capacity?
  8. 04:15 HybridDBA: Could you mention any equivalent community contributor in Oracle like Brent currently is for SQL Server? Brent, your community mentorship for SQL Server is amazing.
  9. 04:48 Fyodor: Hi Brent, what is your opinion on crypto currencies ? I know this is not related to SQL server but I don’t see where I can ask you anything non-related to sql server.
  10. 05:56 FullTextIndex: Full-text indexing: what is your opinion and experience. My friend has put a full-text index on a XEvent session capturing all SQL statements on an Instance
  11. 06:38 CarlosDBA: just renewed the annual subscription to your classes (3rd year and still loving it!!). Any new courses you are working on (or planning to post in the next year) that we can look forward to?
  12. 07:29 Yawnder: Hi sir. Do you think frameworks like EF should integrate with partition (partition functions, scheme, etc., etc.) or should that be left to the DBAs? Would it be too risky to give that kind of power / responsibility to less “DB-Aware” Devs?
  13. 08:08 Trushit: Is it a good idea to connect tools like Power BI to SQL server via Direct Query mode? Direct Query mode queries data from sql server every time someone interacts with report. I am concerned that there would be too many query hits to live database.
  14. 08:48 Kumaran: What would be the best method/ practices for a query that spans 2 versions of SQL using linked server?
  15. 09:24 PerformanceIsMyLife: If I have an SSIS package that is essentially an ELT and then logic to process – is there a performance difference vs if I did the same process scripts after initial data import via stored procedure instead of part of the SSIS package?
  16. 09:47 Tancredi: Hi Brent, do you have any thoughts on using a pre-existing production SQL Server (that admittedly sees low usage) for DPM 2019? My friend’s boss is pushing for it to save money on licensing.
  17. 10:11 KondzioSSJ4: Hi Brent I have many-to-many relations between the 2 tables Should I create a connection table with separated auto-increment ID or maybe put PK for both columns? or should I do something else to get the best performance? (connection table have unique values)

[Video] Office Hours on the Balcony Part 2


Picking up right where we left off yesterday, we continue down through the not-so-highly-upvoted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Introductions
  • 00:22 Andrew from Canada: To Null or not to Null? That is the question. On tables with 100M to 1B rows and with several columns that contain data 20% of the time, should we still allow nulls or set them to an empty/zero value instead? If it’s just retuned but not queried, does that matter?
  • 02:02 FARAN: Hi Brent, is separating MDF and LDF on different physical drive still needed when we use SSD?
  • 04:34 Maciej: Hi Brent, Recently Amazon made available Babelfish for Aurora PostgreSQL. Could it seriously threaten Microsoft business model and force them to ship more improvements to SQL Server engine (e.g. like Parameter-sensitive plan optimization)?
  • 06:59 That’s All Folks – already answered
  • 07:17 Michael Devor: Brent. My friend found a plan w/missing index recommendation. For testing he created it but the plan does not show it used. Could the engine be using some part of the index without reporting it? Could there be a reason to keep the index? (nothing else would seem to need it)
  • 09:10 MancDBA: Hi Brent, what are your thoughts on DBA Job postings that ask for a “Expert” on various topics eg “Expert at AlwaysOn” or “Expert at TSQL”. I have 10 years experience, should I be an “Expert” by now?
  • 11:16 Faruk Ami: Hi Brent, i want to create report for top consuming cpu queries on our sqls but some of them are running only Entity FW queries 🙁 how would you approach this? tried using query store but we are running in multi tenancy (schema level) which makes it a bit hard to investigate
  • 12:20 Maciej: Hi Brent, What was the highest value for missing index (returned by sp_BlitzIndex) that you have ever seen?
  • 13:13 DrDrakken: Hi Brent! Learned a lot from your fundamentals training and started my own experiments on cardinality estimation. What I don’t understand (and my friend neither): Event simple queries with a sort request way to much memory even if returned row estimates are accurate (sql2019)
  • 14:45 Steve E: I know it’s maybe too early to say but do you think SQL Server 2022 will finally fix parameter sniffing for good?
  • 16:30 MikeO: I have heard you mention the cost of Sql Server, at what point should processing be done within the application instead of SQL Server ?
  • 18:30 FrustratedAnalyst: Hi Brent, many thanks for hosting these office hours sessions, I get a lot out of them! As an analyst who wants to move over to data engineering or administration, how would I impress potential employers when I don’t have much working experience in those areas?
  • 20:19 Mike: Brent, can you change the default SQL Server port 1433 years after the server has been configured? What steps or links can I read before doing this task? Need to do this for IRS Audit purposes.
  • 21:33 Jim: Hi Brent. Thanks for doing these! We have a vendor app with a ~3 TB DB, mostly one table (docs in DB…grrr). The DB is stored on a Netapp LUN (flash storage). Would you have any recommendations here, as to whether it would be good to add a database file?
  • 23:53 Todd Chittenden: You always state that there are enough poorly configured SQL Server to keep us all employed. I’m having a hard time convincing bosses, clients, etc. that SQL performance tuning is needed. How do I get this off the ground? Any suggestions?
  • 26:22 SayItAintSo: Do I understand from your black Friday sale “The final time I’m selling it!” that you might be retiring??? (: Asking for many friends.
  • 28:41 Oleg: Hi Brent, you often mentioned you wish developers do this or that in your trainings, what would be your ultimate checklist or wish list that you would like all database developers to keep in mind when developing new application?
  • 30:48 Torben Iisager: Hi. During rollback and database recovery we ask for the status, but the percent count fluctuating up and down, we don’t know why (and don’t know the look for doc on this issue), the recovery/rollback completes despite that, but I would like to know why the number is fluctuating.
  • 32:03 LookingAtProcsAllDay: Hi Brent, love your office hours. A friend just upgraded to 2019 at the same time as the hardware. He is seeing reduction in CPU and Duration of some of the most used procs. But query store metric on memory grant has gone up 8x. New hardware has 3x the memory. Is this OK
  • 35:04 DataWriter: My friend’s table has 25 indexes on it. dm_db_index_usage_Stats shows many are unused/barely used: 12 indexes are .002% of the total index usage, but 53% of the total user_updates. His manager isn’t a SQL guy. Is there a way to quantify the performance gain from removing them?
  • 36:33 Wrap-up

NOLOCK Is Bad And You Probably Shouldn’t Use It.

Eyes up here, kid
I’m waiting for the YOLOck option.

When you put NOLOCK in your query, SQL Server can:

  • Read rows twice
  • Skip rows altogether
  • Show you changes that never actually got committed
  • Let your query fail with an error
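
For context, NOLOCK is a table hint, equivalent to the READ UNCOMMITTED isolation level. Here’s a minimal sketch – the table name is hypothetical:

```sql
-- NOLOCK reads pages without taking shared locks, so concurrent page
-- splits and updates can cause double-reads, skipped rows, dirty reads,
-- or error 601 ("could not continue scan with NOLOCK due to data movement").
SELECT COUNT(*)
FROM dbo.Orders WITH (NOLOCK);

-- The hint applies per table; this session-level setting is equivalent
-- for every table in the query:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.Orders;
```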

This is not a bug. This is by design. To understand what happens, read these blog posts in order:

  1. Using NOLOCK? Here’s how you’ll get the wrong query results.
  2. “But NOLOCK is okay when my data isn’t changing, right?”
  3. “But surely NOLOCK is okay if no one’s changing data, right?”

After reading those, you’re going to ask, “So what am I supposed to do if I need to avoid blocking problems?” There are 3 solutions:

  1. Have enough indexes that your queries go fast, but not so many that your deletes/updates/inserts take forever. I cover this in the locking module of my Mastering Index Tuning class.
  2. Keep your transactions short and sweet, and use batching to avoid lock escalation. I cover this in the deadlocking module and the batching module of my Mastering Query Tuning class.
  3. Use read committed snapshot isolation (RCSI) or snapshot isolation (SI) so that readers and writers can coexist. I cover this in the isolation levels module of my Mastering Server Tuning class.
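
As a sketch of option 3: RCSI is a database-level switch, so readers don’t have to change a single query. The database name is hypothetical, and flipping it on needs exclusive access (or the ROLLBACK IMMEDIATE shown here), so plan it for a maintenance window:

```sql
-- Read Committed Snapshot Isolation: readers get the last committed
-- version of each row from the version store instead of blocking
-- behind writers.
ALTER DATABASE YourAppDB
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;  -- kills in-flight transactions to flip the switch

-- Snapshot Isolation is the opt-in flavor: it only applies to sessions
-- that explicitly SET TRANSACTION ISOLATION LEVEL SNAPSHOT.
ALTER DATABASE YourAppDB SET ALLOW_SNAPSHOT_ISOLATION ON;
```

Either way, version store activity lands in TempDB, so keep an eye on its growth after enabling these.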

I know: NOLOCK sounds so much easier because you can just slap it into a query and it feels like it’s going faster. Just remember the demos in those first 3 blog posts up above: sooner or later, your users are going to see inaccurate query results.

If it’s so bad, why is NOLOCK an option?

Because in some applications, accuracy really doesn’t matter. I know it sounds like a joke, but I’m being serious.

For example, one of my clients is a high-volume online shopping site. Their admins want to see that data is moving through the shopping cart process – that users are adding new shopping carts in the database. They use monitoring queries to check the last 10 carts created and checked out. They don’t really care whether the results are transactionally accurate – they just wanna see that new orders are moving through the system. NOLOCK works fine there.

Another example is the open source First Responder Kit scripts like sp_BlitzIndex. I don’t need transactional accuracy when I’m checking the usage statistics on an index – we’re just getting rough numbers. If the same index happened to be shown twice, you would simply go, “Well that’s odd,” and you’d rerun it to check.

But if your application involves money, health care, or anything else vitally important, you better think twice before using NOLOCK.

Free Live Class This Week: Fundamentals of TempDB

Company News

You’ve been working with Microsoft SQL Server for a couple of years, and you know a little bit about TempDB. You’ve heard that temp tables and table variables have different performance characteristics than regular user database tables, and you’ve heard that you’re supposed to have multiple data files.

You’re wondering what exactly goes on behind the scenes.

This class is for curious folks who want to learn:

  • What uses TempDB, and how that affects performance: not just temp tables and table variables, but also triggers, cursors, sorting indexes, workspace spills, the version store, and more
  • How to host TempDB depending on how you use it: understanding what goes in the log file and the data files, why the number of data files matters, and whether your particular workload should be on SAN, local storage, or ephemeral storage, and why
  • How to monitor it: after you understand what uses it and how to host it, you need to keep tabs on how its performance changes over time using DMVs, Perfmon counters, and the First Responder Kit scripts
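
As a taste of that last bullet, one common starting point is the file space usage DMV – a rough sketch, not a substitute for the class demos:

```sql
-- Rough breakdown of what's consuming TempDB right now:
-- user objects (temp tables), internal objects (sorts, spills, cursors),
-- and the version store (RCSI/SI row versions). Pages are 8KB each.
SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
    SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
    SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;
```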

This course is 90% demos: the only slides are the introductions at the start of the day, illustrations to support a few topics, and the recap at the end of the day. The rest of the time, we’ll be working in SQL Server Management Studio. Roll up your sleeves and join me!

Join me live, for free, on Tuesday 8:00-16:00 UTC in Europe-friendly times, and again Wednesday in Americas-friendly times, 9AM-5PM Eastern. Register here.

[Video] PASS Data Community Summit Keynote: 5 Ways the Cloud Impacts Your Career

Succeeding as a data professional is all about placing the right bets at the right times. The more you know about what’s happening around you, the better bets you can place.

I’ve been working with databases for over two decades. In this session, I’ll invite you into my home to have a frank, honest discussion about where our industry has been recently, and where it’s heading next. We’ll look at how that will impact the available jobs in our industry, and how it impacts where you focus your spare training time. We’ll even look at job ads and resumes in an effort to help you stand out.

In one hour, I can’t make you a pro at everything – but I can help you understand where you need to be a pro, and how to make your resume reflect that. Join me for this recording of my PASS Data Community Summit keynote:

If you liked this session, there are plenty more, all free at PassDataCommunitySummit.com.

Big thanks to Redgate for organizing this year’s Pass Data Community Summit, and I’m honored to have been involved! I’ve already got my travel booked for the in-person event in Seattle in 2022, too. See you there!

[Video] Office Hours on the Balcony Part 1

1 Comment

I hang out on my balcony in San Diego with a gin & tonic and a bunch of your highly upvoted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Introductions
  • 01:33 FishCakesForTea: I have designed a filtered index for a specific query but SQL Server doesn’t use it unless I move the columns in the WHERE clause (in the index and the query) into the INCLUDEd columns as well. Why do I have to do that? These columns are not in the SELECT statement or a join.
  • 04:25 HeavenKnowsIAmMiserableNow: Sometimes I tune a query and the logical reads are lower but the duration is consistently higher even after a few executions. What are the likely causes and is using reads as a performance yardstick misguided?
  • 06:13 NotMySelf: Hi Brent, what are your thoughts on the new multiple plans per query feature in SQL Server 2022?
  • 09:06 Alexandre Jobin: How can you know if your SQL Server is over provisionned (or under provisionned)? We are planning to go in the cloud and we are questionning the server specs.
  • 11:02 bobthebuilder: Hi Brent, My friend has a table of 18M rows with no primary key or clustered index. But there are 6 non-clustered indexes, few on null and nvarchar columns. Is it advisable to change one of the index to a clustered index for better performance even though having duplicates values
  • 13:13 DrDrakken: Hi Brent! My friend’s DB has tables with a lot of columns. For reasons this is not to be changed in an easy way. Is the rule of max 5 indexes per table still valid if you have about 50 columns where 4 of them are foreign keys and therefore already having an index?
  • 15:36 UseLessFellow: I am excited to join in the class. but at the same time one of your suggestions about getting a certification is useless because it doesn’t provide any job opportunity, if that is the case after your course done what are the opportunities you will provide? and pls provide a path
  • 18:29 Peter Sawatzki: Hi Brent, would you put tempdb files on an unmirrored storage volumes or unmirrored ssd volumes ?
  • 19:26 Mordac: Hi Brent, do you have any best practices for optimizing linked server query performance (i.e. putting servers on same switch, etc)?
  • 21:14 i’m_a_null_set: Hi Brent. Love your content and classes. Asking for a friend: Is there a hierarchy for which pages SQL server will flush out first when it needs memory for queries? Are some pages more expendable than others, such as indexes that won’t get used by the query?
  • 23:26 The SQL Guy: Hey Brent. Let’s say you have a query that INNER JOINS the orders table twice, using different fields, so different indexes are used. When you look at the execution plan, there is a troublesome Key Lookup. How do I know which index I should modify to get rid of the Key Lookup?
  • 25:50 Thunderous roars
  • 26:29 Doug: On average, after becoming a DBA, how long does a person experience “imposter syndrome?”
  • 29:07 Stranger in the night: Hi Brent! Long time listener, first time caller. Who does the drawings of you that we see when you post?
  • 30:29 Gio: Who is the “Brent Ozar” of PostgreSQL?
  • 31:47 Wrap-up

SQL Server 2022 Slogan Contest Winners

SQL Server 2022

Last week, I challenged y’all to come up with slogans for SQL Server 2022.

I have to confess that when I came up with the idea for that blog post, I thought y’all would take it seriously. I thought you’d come up with ideas that were meant to show off Microsoft’s hard work.

I should have known better.

Being readers of this blog, many of you share my oddball sense of humor, and you came up with totally goofy stuff. I laughed out loud a lot. Here are my 3 favorite entries, each of which won a Level 3 Bundle with a Live Class Season Pass, Recorded Class Season Pass, Consultant Toolkit, and SQL ConstantCare:

“SQL 2022 – When You’re *Almost* Ready for the Cloud” – Will

“SQL Server 2022 – Your data can rest Azure’d.” Will S.

“SQL Server 2022 – We added some new things but left all the broken things you’ve come to love” – Bo J. Anderson

Honorable Mentions:

I know I said I was only gonna pick 3 winners, but there were so many good ones that I had to take it up a notch. These 10 folks also won a Level 2 Bundle with a Recorded Class Season Pass, Consultant Toolkit, and SQL ConstantCare:

“SQL Server 2022: Something Something Blockchain” – Ryan A.

“SQL Server 2022: Now even a better way to store excel spreadsheets” – Leon Spangenberg

“Yo dawg, I heard you like SQL, so I made a sequel to your SQL so you can Error 1205 : Transaction (Process ID) was deadlocked on resources with another process and has been chosen as the deadlock victim. Rerun the transaction.” – Edgar Evans Cayce III

“SQL Server 2022: because we know you didn’t like 2020 and 2021 too.” – Gustav Swanepoel

“Leave the gun. Take SQL Server 2022.” – Federico Minca

“SQL Server 2022: Pay Purview” – Andre Speek

“SQL Server 2022: We number a year ahead, but we’re really a decade behind” – Michael Paul

“SQL 2022 – Paved with Good Intentions ™” – Caroline

“SQL Server 2022: Now with Cloud Entanglement” – Kevin McDonnell

And a bonus winner, Mattia Nocerino actually drew a new logo:

SQL Server 2022

Free Live Class This Week: Fundamentals of Columnstore

Company News
1 Comment

Your report queries are too slow.

You’ve tried throwing some hardware at it: your production SQL Server has 12 CPU cores or more, 128GB RAM, and SQL Server 2016 or newer. It’s still not enough to handle your growing data. It’s already up over 250GB, and they’re not letting you purge old data.

Will columnstore indexes help?

In one day, you’ll learn:

  • How columnstore data is stored, and how that impacts your architecture choices
  • How to do a hands-on self-assessment of your workloads, data, and server to figure out whether columnstore indexes make sense for you
  • Why partitioning makes so much sense for columnstore indexes
  • How to do a proof-of-concept implementation with initial creation, querying, ongoing loads, and index maintenance
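
To set expectations, that proof-of-concept bullet boils down to statements like these – the table and index names are hypothetical:

```sql
-- Clustered columnstore: compresses the whole table into columnstore
-- format, typically best for large fact tables with scan-heavy reports.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;

-- After large ongoing loads, reorganize to compress open delta rowgroups
-- without rebuilding the whole index:
ALTER INDEX CCI_FactSales ON dbo.FactSales
    REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);
```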

Join me live, for free, on Tuesday 8:00-16:00 UTC in Europe-friendly times, and again Wednesday in Americas-friendly times, 9AM-5PM Eastern. Register here.

Is Microsoft SQL Server 2022 a Big Release?

SQL Server 2022

Normally when y’all post questions here on Pollgab, I let ’em pile up and I eventually do an Office Hours webcast where I talk through my answers. I check in on the list every couple of days just out of curiosity, though, and one of ’em jumped right out at me.

The question, posed by Brent’s Tasty Beverage (nicely done) was:

My friends feel announcement from MS regarding SQL22 were only relatively small changes (since we didn’t see too much of multiple plans technically or demo), nothing groundbreaking or revolutionary. What are your thoughts?

There are a few questions here.

Should a Microsoft SQL Server release be groundbreaking or revolutionary? No. And I mean, no isn’t even strong enough of a word – absolutely, positively hell no. Microsoft SQL Server is one of the world’s most popular relational databases, trusted by companies from banks to hospitals to manufacturers to corner shops. This isn’t some tiny project used by a few hobbyists. Whatever Microsoft ships in SQL Server is going to be installed by thousands of companies and relied on for decades. If you want something groundbreaking or revolutionary, check out the poop that gets flung at the wall – er, the new services that get created in the cloud. When Microsoft creates something brand new in the cloud, that’s where they can experiment with crazy groundbreaking stuff like CosmosDB. If 14 people use it and get burned, life goes on. (I’m being humorous here – at Microsoft’s scale, anytime they sign their name to anything, even an open source project, they’re signing a check for a ton of support workload and technical debt.)

Are SQL Server 2022’s changes relatively small? Oh my gosh, no. Even when I look at just the four big-picture changes that I listed in the What’s New in SQL Server 2022 post:

  • Failover from SQL Server 2022 to Azure SQL DB Managed Instances and back, plus related, the ability to restore versionless databases from Azure SQL DB Managed Instances down to on-premises SQL Server 2022
  • Azure Synapse Link to avoid complex ETL jobs between SQL Server and Azure Synapse
  • SQL Server Ledger – blockchain immutable histories of tables
  • Parameter-sensitive plan optimization that caches multiple plans per stored procedure

Those are way, way bigger than they look at first glance, and I’m really excited about each of them. I don’t think Microsoft will do a perfect job of shipping each of them by any means, nor am I saying I’ll agree with how much (or little) work they put into each feature, but all four of those are bigger, harder work than they look, and they have the potential to be really big. It’s going to be up to us – the real-world users – how well they’re adopted, and in the following thousands of words – there’s going to be a lot, because I just opened this bottle of wine – I’m going to talk about which ones I think will catch on.

Did Microsoft demo many of the changes? From a very high level, yes, and I’m perfectly fine with that because this isn’t SQL Server 2021 – it doesn’t ship tomorrow. I’m not saying the code isn’t fully baked – by now, it mostly is – but the decisions about marketing the features are just beginning: I expect the team to still be frantically deciding which parts they want to show, and which parts they want to sweep under the rug. When features are freshly unveiled, the team hasn’t had time to build solid marketing demos yet – they’re still focused on building the features and fixing the bugs. (And sometimes, early-announced features don’t even make the final release build – remember Always On Availability Groups for Kubernetes?)

I love, LOVE this question from Brent’s Tasty Beverage – and I’m laughing now as I write this because I’m under the influence of a Tasty Beverage – Famous Gate by Domaine Carneros, one of my favorite wineries – because the question is thought-provoking. Let’s talk through all the thoughts it provokes.

I think Microsoft made good choices,
and they’ll make more sense long term.

When I judge a release, I think about a few criteria:

  • Is Microsoft trying to tackle real problems?
  • Do I think the overall strategy of the solution is good? (Forget small implementation details for now – just are we on the right track?)
  • Do I think a lot of users are actually going to adopt the solution?

For example, if I think back a decade ago to when Microsoft first released Always On Availability Groups, I thought they were trying to tackle the right problems, I thought the solution was good, and I thought a lot of people could actually adopt it. On the flip side, when 2008 R2’s data-tier applications came out, I told you there was nothing to see here, move along.

So with those criteria in mind, let’s think about the four SQL Server 2022 features I mentioned.

Failover back/forth from SQL Server 2022 and Azure SQL DB Managed Instances

Is Microsoft trying to tackle a real problem? Absolutely. For decades, people have asked, “How do I restore a database to an earlier version of SQL Server?” And for decades, we simply haven’t had an answer. This posed a real problem when Microsoft started hosting databases in the cloud under the brand name of Azure SQL DB because they kept saying, “Azure SQL DB is the next version of SQL Server.” When the cloud is running vNext, that also means you can’t restore the databases back down to your local machine.

Do I think the overall strategy of the solution is good? Yes, and it wasn’t as easy as it looks. Microsoft had to do a spectacular amount of work to plan out a long-term solution: they had to standardize on a file format that would work for multiple versions in a row, not just today but for the coming decades. My mind boggles at the meetings and consensus this must have required. This isn’t five people coming to an agreement in a GitHub issue – this is dozens or hundreds or thousands of stakeholders around the world agreeing on a balance between the agility of changing the database file format and the stability of a future-compatible file format. SQL Server 2022’s announcements by no means make it easier to restore a SQL Server 2022 database down to 2019, nor should they. It’s hard enough just to get the cloud and the on-premises databases to agree on a file format, and I’m completely in awe that they managed to pull this off. It’s hard.

Do I think a lot of users are going to adopt it? Well, here’s where we hit a problem. The solution relies on not just an Availability Group, but a Distributed Availability Group (DAG). The demos look point-and-click easy, but:

  • They skip over a lot of stuff (like provisioning the Azure SQL DB Managed Instance and getting connectivity working between your on-premises SQL Server and the Managed Instance)
  • The demos look like one database per Availability Group and Distributed Availability Group, which is going to be really painful management at scale
  • When things go wrong, troubleshooting a DAG is awful. The DAG documentation page looks impressively long at first until you realize that’s pretty much all there is, and heaven forbid something go wrong with your DAG. These things are troubleshooting nightmares. Throw in an on-premises AG mixed with the DAG up in Azure, and you’ve got the devil’s cookbook.

But despite these gotchas, I’m relatively happy with the big picture direction. Building v1 of anything is tough. This isn’t v1 of Availability Groups, but it’s v1 of something very ambitious. I like it. It also has the possibility of increasing Azure SQL DB Managed Instance adoption as a new easy default option for disaster recovery. (In reality, it’s gonna be anything but easy if they don’t start putting a hell of a lot more work into manageability though.)

Azure Synapse Link

Is Microsoft trying to tackle a real problem? Yep. The pain: ask any enterprise report writer, and they’ll tell you that they wish they had more real-time access to data stored everywhere. The instant they have to start dealing with extract-transform-load processes, bam, there goes weeks or months of lead time. Add a new column to a source system? Forget it. It’ll take forever to get into enterprise-level reports. Microsoft is recognizing a very serious pain.

Do I like their solution? I don’t play in this playground – moving data around between databases – so I’m not qualified to judge it. However, the documentation for Azure Synapse Link for Azure Cosmos DB includes this gem:

Synapse Link is not recommended if you are looking for traditional data warehouse requirements such as high concurrency, workload management, and persistence of aggregates across multiple data sources.

Well, that doesn’t sound good.

But as a DBA, if someone told me they wanted to enable this feature, I think I’d be okay with it. The alternatives are things like Change Data Capture, Change Tracking, or rolling your own ETL solution, and frankly, all of those are more work than Azure Synapse Link sounds like. (And all of those solutions have performance overhead.) I see warning signs, but not for the database itself, so … okay with me!
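For comparison’s sake, here’s roughly what one of those do-it-yourself alternatives looks like: enabling Change Data Capture is just a couple of system procedure calls (the table name here is made up), but then you still have to build and babysit the ETL jobs that read the change tables.

```sql
-- Run in the database you want to track (requires sysadmin):
EXEC sys.sp_cdc_enable_db;

-- Then enable capture per table (requires db_owner).
-- dbo.Invoices is a hypothetical table name.
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Invoices',
    @role_name     = NULL;  -- NULL = don't gate access behind a database role
```

And that’s the easy part – the downstream plumbing that moves those changes into the warehouse is all on you.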

Do I think a lot of users are going to adopt it? I have no idea, but given that limitation above, sure seems like it would backfire hard as it grows. But like I said, I don’t play in this playground.

SQL Server Ledger

Is Microsoft trying to tackle a real problem? Yes. I know, blockchain is a meme by now, but there are legitimate business needs for a bulletproof history of a row. Take airlines: when there’s a crash and a particular part is suspect, they need to know the complete history of that particular part, and they need to know that the history hasn’t been fudged by a supplier or a mechanic.

Do I like their solution? Yes. There’s a slight challenge because any solution here is going to involve keeping ledgers of the transactions involved for a long period of time. Think about that for a second. (And while you’re thinking, make a note of what you think about, and keep that log around for forever, ha ha ho ho.) In our industry, we’ve already got a ton of problems with people trying to shrink databases. Imagine if the transaction log never went away – and think about the space requirements you’d be dealing with. This is going to present a space problem for the coming decades. To understand the implications of how it would work, check out Azure SQL Database ledger.
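To give you a feel for it, based on the documented Azure SQL Database ledger syntax (details may shift before 2022 ships, and the table here is hypothetical), an append-only ledger table is just a table option:

```sql
-- An append-only ledger table: inserts only, no updates or deletes,
-- and every row is covered by the cryptographically verifiable ledger.
CREATE TABLE dbo.PartHistory (
    PartId       BIGINT        NOT NULL,
    SerialNumber NVARCHAR(50)  NOT NULL,
    EventType    NVARCHAR(20)  NOT NULL,
    EventDate    DATETIME2     NOT NULL
)
WITH (LEDGER = ON (APPEND_ONLY = ON));
```

Easy to turn on – which is exactly why you should think about the storage bill before you do.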

Do I think it’ll get widespread adoption? For the people who need it, yes. For the people who like this kind of thing, this is the kind of thing they like. Due to the size requirements, this is never gonna be a switch that just gets flipped on everywhere, but where you need it, you need it. I also think this is the kind of feature that can actually increase license sales: if it works to the point where it’s legally defensible in court, then it’s an edge that less expensive databases don’t have.

Parameter-sensitive plan optimization

Is Microsoft trying to tackle a real problem? Yes, I know dang well that this is a huge problem out in the real world because my Fundamentals of Parameter Sniffing and Mastering Parameter Sniffing classes are amongst my biggest sellers. (There’s a reason I didn’t teach my FPS class for free this month.)

Do I like their solution? I think it’s a great down payment. It’s only focusing on equality searches (like @Country = ‘US’ versus @Country = ‘Iceland’), not range searches like @StartDate and @EndDate. It’s also only looking at outliers that are immediately obvious by examining the statistics histogram, which still only contains 201 buckets – so it doesn’t catch problems with not enough buckets, outdated statistics, joins, etc. But you know what? That’s totally okay! It’s a hell of a down payment, and I think it’s gonna help way, way more than it hurts. You can learn more about it by watching Pedro Lopes’ demo-filled session on it and the other query plan improvements.
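If you’ve never been bitten by this, here’s a hypothetical repro of the exact shape of problem it targets – the schema and names are made up, but the pattern will look familiar:

```sql
-- A classic equality-search sniffing victim (hypothetical schema):
-- if 'US' matches millions of rows and 'IS' matches a few thousand,
-- whichever value compiles first dictates the single cached plan today.
CREATE OR ALTER PROCEDURE dbo.GetUsersByCountry
    @Country NVARCHAR(10)
AS
BEGIN
    SELECT Id, DisplayName, Reputation
    FROM dbo.Users
    WHERE Country = @Country;  -- equality predicate: 2022 can cache plan variants
END;
```

Today, one plan serves both the big country and the small one, and one of them suffers. With parameter-sensitive plan optimization, SQL Server can keep multiple plans for that one statement.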

Do I think it’ll get widespread adoption? Yes, because it kicks in automatically at compat level 160 (2022) and it doesn’t require Query Store to be turned on. It doesn’t need to fix every parameter sniffing problem – even if it just fixes 1 in 10 parameter sniffing problems, that’s forward progress, and I’ll totally take it.
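Assuming the shipping syntax matches current versions, opting in should be as simple as bumping the database’s compatibility level:

```sql
-- Compat level 160 = SQL Server 2022
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 160;
```

That low barrier to entry is exactly why I expect broad adoption.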

This is the one feature that by itself won’t directly increase licensing or Azure revenue. (Indirectly, it might: it proves that Microsoft can continue to iterate over the engine to make queries faster, and that might make someone more likely to host their new application in Microsoft databases. I wouldn’t count on that to happen quickly, though.) I’m happy with the mix of down payment features and features that will make folks like us happy.

I like the direction of all of these.

And these are only some of the highlights – there are more features, and I’m sure we’ll hear more about ’em over time.

Are they groundbreaking or revolutionary? No, I’d say they’re more evolutionary – they represent logical next steps, even if they’re big next steps, for a big, stable relational database platform.

And that’s totally okay.

SQL Server isn’t supposed to implement dramatically different behavior by default, because default behavior is risky. SQL Server is massively popular, and when you’ve got a database platform this big, you need to make conservative bets rather than betting the farm on something risky. I, for one, am glad that Microsoft continues to make the right conservative bets.

Updated First Responder Kit and Consultant Toolkit for November 2021

First Responder Kit Updates

Just like you, the First Responder Kit has been pretty much stable for the last couple of months. There have been a few small changes, but nothing too big. There are a few pending pull requests that folks are working on around an open source setup checklist, better support for Ola Hallengren’s scripts, and a new stored procedure to get cluster information. If you want to influence how the production versions of those shape up, head over to the pending pull requests, look at the related issues, and give your feedback.

How I Use the First Responder Kit
Wanna watch me use it? Take the class.

To get the new version:

Consultant Toolkit Changes

This month’s update does change the spreadsheet. I don’t take that lightly – I know many of y’all have heavily customized versions of the spreadsheet – so the changes are really minimal:

  • In the “Uptime” tab, there are a few automatically generated notes that can show up starting in row 26. They’re designed to be notes that you can copy/paste directly into an email if a prospective client sends you data for a server because, in fact, that’s exactly what I’m doing! For example, one possible message is, “This server was restarted in the last 24 hours. Gather data again when the server has been up for at least one business day because most of the performance and reliability data is erased when the server is restarted.”
  • In the “Databases” tab, there are two new columns for Azure Edition and Azure Service Objective, both taken from sys.database_service_objectives. Just be aware that in Azure SQL DB, cross-database queries aren’t possible with three-part names, so we only get the info on the current database.
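If you want to spot-check that data yourself in Azure SQL DB, the underlying query is roughly this:

```sql
-- Azure SQL DB only: shows the edition and service objective
-- for the database you're currently connected to.
SELECT d.name, dso.edition, dso.service_objective
FROM sys.database_service_objectives AS dso
JOIN sys.databases AS d
    ON d.database_id = dso.database_id;
```

And because of the three-part-name limitation, you have to run it from inside each database you care about.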

sp_Blitz Changes

sp_BlitzCache Changes

  • Fix: when called by sp_BlitzFirst with @OutputType = ‘none’, don’t output a result set. (#2998, thanks Andreas Jordan.)

sp_BlitzIndex Changes

  • Bug: Mode 4 wasn’t scripting out the definitions of unique constraints. (#3006, thanks Erik Darling.)

sp_BlitzWho Changes

  • Fix: new @GetLiveQueryPlan parameter, defaults to 0. On some builds of SQL Server 2019, we’ve seen access violations parsing the XML in live query plans. That’s a bug in SQL Server, and we tried to file an issue for it on Connect, and then we tried to file an issue on Feedback.Azure.com, and then we gave up. If you want the live query plan and you aren’t worried about SQL Server access violations, turn this switch on. (#3011, thanks Greg Dodds.)
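If you’re willing to risk the access violations on those 2019 builds, opting back in looks like this:

```sql
-- Default is 0 (off); turn it on to fetch live query plans.
EXEC dbo.sp_BlitzWho @GetLiveQueryPlan = 1;
```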

For Support

When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. Be patient: it’s staffed by volunteers with day jobs. If it’s your first time in the community Slack, get started here.

When you find a bug or want something changed, read the contributing.md file.

When you have a question about what the scripts found, first make sure you read the “More Details” URL for any warning you find. We put a lot of work into documentation, and we wouldn’t want someone to yell at you to go read the fine manual. After that, when you’ve still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes me!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you’re working with.

Contest: SQL Server 2022 Needs a Slogan.

SQL Server

Now that Microsoft has announced that SQL Server 2022 is coming, it’s time for us to talk about a shameful truth:

SQL Server doesn’t sell itself.

Remember when everybody was talking about how MongoDB was Web Scale™? Or when Oracle was Unbreakable™? Even if you were making fun of it, the point was that you were talking about it, and that’s a buzz that SQL Server just doesn’t have.

I believe that you, dear reader, have the answer. I bet that you can come up with a snappy slogan. (The best I could do was “No one out-SQLs the Server,” and let’s be honest, that’s not really good.)

This week, while you’re learning about SQL Server 2022’s cool stuff in all the free training sessions, think about how we can market this thing and sell the heck out of some licenses.

Leave your best slogan in the comments before Saturday morning. Saturday morning, I’ll close the comments down, pick my favorite entries, and publish ’em in a post on Monday. My 3 favorite entries will win a Level 3 Bundle with a Live Class Season Pass, Recorded Class Season Pass, Consultant Toolkit, and SQL ConstantCare.

I’ve got a full cup of coffee here. Make me spit it on my monitor, people.

There is a Ton of Free SQL Server Training Coming Up.

Conferences and Classes

If you don’t get smarter soon, it’s your own damn fault.

Oh sure, I’m running a Black Friday sale, but even if you’re as broke as SSMS’s live query plans, you can still afford this free stuff I’m doing:

Weds-Thurs: Quest Empower 2021 – free conference, and I’ve got a session with Pinal Dave on Wednesday where we’ll cover tips & tricks for SQL Server performance.

Weds-Fri: PASS Data Community Summit – every one of these in the past has been expensive, and this year thanks to Red Gate, it’s totally free. I’m doing the community keynote on Friday, talking about 5 ways the cloud impacts your career. Microsoft has several SQL Server 2022 sessions scheduled already, too – after you’ve registered, open the session catalog and search for 2022.

Nov 16-17: Fundamentals of Columnstore – your report queries are too slow, and regular indexes aren’t enough. You’re on SQL 2016 or newer, and your data’s up over 250GB. Find out if columnstore indexes can help in this one-day class – I’m doing it on Nov 16th in Europe-friendly times, 8:00-16:00 UTC, and again Nov 17th in Americas-friendly times, 9AM-5PM Eastern.

Nov 23-24: Fundamentals of TempDB – you want to learn how to troubleshoot temp table plan reuse, the version store, latch contention, and where to put the files. Find out in this one-day class – I’m doing it on Nov 23rd in Europe-friendly times, 8:00-16:00 UTC, and again Nov 24th in Americas-friendly times, 9AM-5PM Eastern.

Dec 9: How to Size Storage for SQL Server – In this one-hour session, I will show you how to measure your existing workloads, predict how multiple databases will interact when they’re consolidated, and use all this to gauge what kind of storage you’ll need in your next server.

[Video] Office Hours Manhattan Edition: Ask Me Anything About SQL Server


I sat down on the edge of Brooklyn, overlooking the Manhattan skyline, to take your highest-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Introductions
  • 00:42 Dylan: Do you have any experience with sys.dm_db_tuning_recommendations? To be clear, I am not referring to Azure SQL or Automatic Index Tuning, but rather Query Store Forcing/Unforcing plans automatically. Even if not set to automatic, have you seen clients manually utilize this data?
  • 02:27 Borninthematrix: Hi Brent, we frequently get deadlock on thread errors as opposed to on a specific resource. I can use sp-blitzlock to pull out the query plan, but do you have any tips on how we can pin point the specific part of the query plan that encountered the deadlock on thread error?
  • 03:26 Stranger: Is data warehouse design a DBA skill? Or is it a BI expert skill?
  • 04:31 Stuart: I need to delete millions of rows using a criteria with a join, so I can’t use the CTE/view technique. What do I do?
  • 05:49 DrDrakken: Hi Brent! My friend has a query that consists of several subqueries and takes some minutes to execute. There is multiple usage of getdate() in the where clauses. Does this mean that not all where clauses will use the exact same time? And would it be better do use a variable?
  • 06:38 Accidental DBA: Hi Brent, what is the setting we need to enable in SQL 2019 to see the actual query plan in sp_blitzwho?
  • 07:22 DBA Champion: Hi Brent! Is there a way to see, who upvoted or downvoted my posts on dba.stackexchange.com ? Recent backup of dba.stackexchange.com, is it downloadable somewhere ?
  • 09:12 camaro322hp: Hi Brent. My friend is loading MSSQL tables from Oracle tables with over 100 million records. What’s a good way to determine which records have changed since the last pull? The source data does not have a last update date field and any records can change at any time.
  • 10:04 BorisDB: Hey Brent, what’s the best way to migrate just the SQL schema?
  • 10:50 Stuart: Hi Bent! Totally off topic this but myself and some coworkers were having a wager as to what floor your apartment is on, based on the view from your camera during your recent session intermissions. Most popular is 28 – are we close?!
  • 11:07 Ron: Hello Brent. I see you have a DBATools database. What do YOU keep in there?
  • 12:04 That’s all folks: Hi Brent, our system engineers are considering setting up a single big storage bay with tiering enabled to host all of our VMs (including sql servers). What’s your thought on that given we have ~50 vms and ~100 DBs and high concurrency (reads) 1 week per month “only”?
  • 13:19 Accidental DBA: Hi Brent, do you have any thoughts on using SQL CLR to perform cache invalidation using SQL triggers?
  • 14:10 Wrap-up

[Video] Office Hours Speed Round: Quick Answers

1 Comment

Think fast! Not all of the questions at https://pollgab.com/room/brento have long-winded answers. In this 5-minute session, I crank through a bunch of straightforward questions:

  • 00:00 Introductions
  • 00:32 Mike: Hi Brent, the delete job is blocking for around 20 minutes our update or inserts calls from api which is expected to be completed with few seconds. How can we avoid it?
  • 01:01 Filip Holub: Hi Brant, Do you have any information about release new SQL Server version? Or your opinion?
  • 01:33 Mike: We have an ETL job which loads the data to a temp table table1_x, once the load is completed table_x is renamed to actual prod table table1. As the prod table is connected to multiple jobs it is unable to acquire lock which is needed to rename causing blocking. How do I avoid it?
  • 02:14 Jose: How do you debug Stored Procedures with Azure SQL Databases?
  • 02:38 Gustav Mulder: Hi Brent, have you ever worked with Apache Kafka to move data from a production db to another SQL instance running BI tools?
  • 03:00 JacksonvilleJohn: Hi Brent, what are some good use cases for the new last-writer-wins replication feature in SQL Server 2019? Can we now have tables synch across different geographic locations? Would that be better than an AAG for a DR site?
  • 03:48 Faruk Ami: Hi Brent, is there support for First Responder Kit stored procedures in Azure sql?
  • 04:25 Vengeful Flamingo: Hi Brent! What’s your experience with delayed durability and how would you approach deciding to enable it? It helped address WRITELOGs & ETL perf issues, even though the log is on flash & wasn’t overwhelmed.

Who’s Hiring in the Database Community? November 2021 Edition

Who's Hiring

Is your company hiring for a database position as of November 2021? Do you wanna work with the kinds of people who read this blog? Let’s set up some rapid networking here. If your company is hiring, leave a comment.

The rules:

  • Your comment must include the job title, and either a link to the full job description, or the text of it. It doesn’t have to be a SQL Server DBA job, but it does have to be related to databases. (We get a pretty broad readership here – it can be any database.)
  • An email address to send resumes, or a link to the application process – if I were you, I’d put an email address because you may want to know that applicants are readers here, because they might be more qualified than the applicants you regularly get.
  • Please state the location and include REMOTE and/or VISA when that sort of candidate is welcome. When remote work is not an option, include ONSITE.
  • Please only post if you personally are part of the hiring company—no recruiting firms or job boards. Only one post per company. If it isn’t a household name, please explain what your company does.
  • Commenters: please don’t reply to job posts to complain about something. It’s off topic here.
  • Readers: please only email if you are personally interested in the job.

If your comment isn’t relevant or smells fishy, I’ll delete it. If you have questions about why your comment got deleted, or how to maximize the effectiveness of your comment, contact me.

I close the comments here after 1 week, and publish another post each month in the Who’s Hiring category so y’all can get the latest opportunities.

If you want to be notified as people leave new job comments here, there’s a “subscribe without commenting” option where you leave a comment.