Blog

Updated First Responder Kit and Consultant Toolkit for December 2023

New this month: more work on letting sp_Blitz run with limited permissions, nicer Markdown output, sp_BlitzLock compatibility with Managed Instances, and more.

How I Use the First Responder Kit
Wanna watch me use it? Take the class.

To get the new version:

Consultant Toolkit Changes

There are two changes to the spreadsheet in this release. In the “Plans Duplicated” and “Plans by Query Hash” tabs, we’ve added new columns for Compile Time MS, Compile CPU MS, and Compile Memory KB. These values are for the one specific plan you’re looking at in the row, not the total amount for that duplicated plan. It gives you a rough idea of how resource-intensive these queries are each time they run. (Thanks Erik Darling.)
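Those compile metrics live as attributes on the QueryPlan element of the cached plan XML. If you want to eyeball them yourself outside the toolkit, here's a rough sketch (assuming you have VIEW SERVER STATE; the output column names are my own, and these are likely the same CompileTime/CompileCPU/CompileMemory attributes the spreadsheet columns come from):

```sql
-- Sketch: read compile cost attributes straight from cached plan XML.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (10)
       qs.query_hash,
       qp.query_plan.value('(//QueryPlan/@CompileTime)[1]',   'bigint') AS compile_time_ms,
       qp.query_plan.value('(//QueryPlan/@CompileCPU)[1]',    'bigint') AS compile_cpu_ms,
       qp.query_plan.value('(//QueryPlan/@CompileMemory)[1]', 'bigint') AS compile_memory_kb
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;
```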

If you’ve customized your query_manifest.json and/or your spreadsheet

  • And you don’t want the new columns, then you can simply copy your customized query_manifest.json and/or spreadsheet over ours just like you normally do. Nothing will error out – you just won’t have the new columns.
  • And you do want the new columns, then you’ll need to merge our query_manifest.json changes for queries 50 & 82, and copy the tabs “Plans Duplicated” and “Plans by Query Hash” over those tabs in your customized spreadsheet.

sp_Blitz Changes

  • Enhancement: way prettier output when @OutputType = ‘markdown’. (#3401, thanks Mike Scalise.)
  • Enhancement: simpler, more intuitive checks for Instant File Initialization. (#3362 and #3409, thanks Montro1981.)
  • Enhancement: if we had to skip checks because you didn’t have enough permissions, we now warn you about that. (#3376, thanks Montro1981.)
  • Fix: continued work on detecting msdb permissions. (#3377, thanks Montro1981.)
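If you haven't tried the Markdown output, usage is a one-liner, handy for pasting results into a GitHub issue or a wiki page:

```sql
-- Run sp_Blitz and get the findings formatted as Markdown
-- instead of the default result set.
EXEC dbo.sp_Blitz @OutputType = 'markdown';
```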

sp_BlitzCache Changes

sp_BlitzIndex Changes

  • Fix: @Debug = 1 was skipping a character of the dynamic SQL. (#3406, thanks Per Scheffer and Montro1981.)
  • Fix: added drop-if-exists statement for temp tables to make it easier to run parts of sp_BlitzIndex ad-hoc, outside of a stored proc. (#3383, thanks Chad Baldwin.)
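If you're not familiar with the pattern, here's the general idea (a sketch, not sp_BlitzIndex's exact code — #Results is a made-up table name):

```sql
-- Clear a temp table if it survives from an earlier ad-hoc run
-- in the same session, so the block can be re-run safely.
IF OBJECT_ID('tempdb..#Results') IS NOT NULL
    DROP TABLE #Results;

-- Or the shorter form, on SQL Server 2016 and newer:
DROP TABLE IF EXISTS #Results;

CREATE TABLE #Results (finding NVARCHAR(200));
```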

sp_BlitzLock Changes

  • Enhancement: compatibility with Managed Instances.

sp_BlitzQueryStore Changes

sp_ineachdb Changes

  • Enhancement: new @is_ag_writeable_copy parameter to only run queries against those databases. (#3399, thanks Douglas Taft.)
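A hypothetical usage sketch of the new parameter, combined with sp_ineachdb's standard @command parameter:

```sql
-- Run a command only in databases that are writeable
-- Availability Group copies on this replica.
EXEC dbo.sp_ineachdb
     @command = N'SELECT DB_NAME() AS database_name;',
     @is_ag_writeable_copy = 1;
```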

For Support

When you have questions about how the tools work, talk with the community in the #FirstResponderKit Slack channel. Be patient: it’s staffed by volunteers with day jobs. If it’s your first time in the community Slack, get started here.

When you find a bug or want something changed, read the contributing.md file.

When you have a question about what the scripts found, first make sure you read the “More Details” URL for any warning you find. We put a lot of work into documentation, and we wouldn’t want someone to yell at you to go read the fine manual. After that, when you’ve still got questions about how something works in SQL Server, post a question at DBA.StackExchange.com and the community (that includes me!) will help. Include exact errors and any applicable screenshots, your SQL Server version number (including the build #), and the version of the tool you’re working with.


The Annual Data Professional Salary Survey Closes This Week!

Salary
2 Comments

Take the Data Professional Salary Survey now. (Update: the survey has since closed.)

The 2020s have been tough: a pandemic, a recession, layoffs, and inflation. Inflation makes things particularly tricky because your costs for everything have risen a lot in the last year, but at the same time… has your salary? What about your peers? You’re in a tough position because it’s hard to ask for more money when there are layoffs everywhere. I feel you.

So it’s time for our annual salary survey to find out what data professionals make. You fill out the data, we open source the whole thing, and you can analyze the data to spot trends and do a better job of negotiating your own salary.

The anonymous survey closes Sunday, Jan 1. The results are completely open source, and shared with the community for your analysis. (You can analyze ’em now mid-flight, but I’d wait until the final results come in. I’ll combine them into a single spreadsheet with the past results, and publish those on January 9th.)

Thanks for your help in giving everybody in the community a better chance to talk honestly with their managers about salary.


[Video] Office Hours: 21 Good Questions

Videos
2 Comments

This one is a 3-part episode: I take 21 questions from https://pollgab.com/room/brento and then later, work on First Responder Kit pull requests.

Here’s what we covered:

  • 00:00 Start
  • 01:08 MooneyFlyer: Hey Brent, apart from installing and playing around with it, what is the best way to get started with Postgres for someone familiar with SQL Server?
  • 02:04 Frozt: Hi Brent, I would just like to inquire if do you have DBA tasks in mind that can be delegated to a command center(Service desk) as a DBA or this tasks should remain to DBA for job security and also to lessen risk of service desk doing something wrong on Prod server. Thank you
  • 03:29 Steven: Hi Brent, my friends BI tool joins an aggregated temp (1m rows with 13% sampling) and lookup table (40 rows). Both scan estimates are accurate and all rows match but the hash join estimates 1 rows. Any ideas on why the estimate could be so far off?
  • 04:13 Frank Castle: What’s your opinion of Azure AI search?
  • 04:22 Need Memory Dump: Have server with 1TB of ram, min set to 0, max to 900GB. Server takes all 900GB as soon as I turn on the service, have not run anything. Any ideas?
  • 05:25 Heinrich: What is your opinion of Azure Cosmos DB? Does it compete well with AWS Aurora for new software development?
  • 06:39 Jose: Why is it slower to do a SELECT * INTO a temp table at the beginning of the proc than it is to SELECT the columns you need then INSERT them INTO the temp table?
  • 07:25 James Fogel: I don’t have a question, just a thank you. I’ve learned a lot from you and I’m sure countless others have. Thank you for what you do and all you give to us. You are appreciated.
  • 07:48 ChompingBits: What is your preferred way to run the same query against a bunch of servers? Using T-SQL and linked servers? Agent jobs pushed out from a Central Management Server? PowerShell/DBATools? I assume it’s some form of all of the above, when is each the best practice?
  • 08:34 Perseus: Should joins on natural keys on large tables always be avoided in favor of joins on surrogate keys?
  • 09:40 Wilson Fisk: What is your opinion of the new Azure ARC SQL performance dashboards?
  • 10:35 MyTeaGotCold: Is it bad practice to have my server query itself and write the results to CSV? I see it a lot in SSIS.
  • 11:11 marcus-the-german: Hi Brent, you use sp_ as the prefix of you stored procedures. Why? AFAIK it’s recommended not to use sp_ for user procedures.
  • 11:58 Eugene: Which is better, a DBA specializing in one platform or multiple platforms?
  • 13:29 Yağmur: Would like to upgrade our Azure SQL VM from SQL 2019 standard to 2019 enterprise version so we can leverage more memory and cores. Is this an ok exception to do as an inplace upgrade?
  • 14:00 Mumtaz: Can we audit IUD transactions without configure any audit features ?
  • 15:01 ImAfraidOfBI: You know how they say dogs usually look like their owners… I saw it a bit with the cute dog you picked up last time! No offence dog’s cute!
  • 15:32 KyleDevDBA: Hi Brent. Curious about the origin story of the naming of the sp_Blitz* scripts (why they start with Blitz). Funny acronym, interesting story, dark past? I searched around, but wasn’t able to find anything. Thank you for all you do for the SQL community.
  • 16:31 Hugo: How do you find queries that are spamming the plan cache due to different SET options?
  • 16:51 Henrik Fältström: Is there any benefit of creating an index on an index for LARGE read-only tables? Or are there other ways in SQLServer to accomplish fast access? I know it’s not possible today in SQLServer, unless you implement something yourself, Has anyone done this?
  • 17:22 Nicolaj Lindtner: I’m building a saas. Considering sqlserver vs postgresql. My suspision is it really doesn’t matter – so will probally go for postgresql. Input ?

[Video] Office Hours: Wrong Side of the Bed Edition

Videos
1 Comment

I woke up on the wrong side of the bed after a nap. How does that even happen?!? To take the edge off, I poured myself a gin & tonic and went through your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 05:41 LongRunningBackups: You say to use SAN snapshot backups on larger DBs (multi-TB). I have searched “Ozar SAN Snapshot”, but your article has a broken link. Other searches don’t provide relevant results because “snapshot” is a common/overused term. Is this a Fundamentals or Mastering level topic?
  • 06:52 Asking For a Friend: Follow up on the “Are cloud databases overrated” question Dec 1. No, the on-prem VM is not “free”, but it is a sunk cost since it already exists with many DBs, is licensed, is here to stay. Moving the db on prem simply eliminates monthly cloud costs. Does that change your answer?
  • 09:17 TheMooneyFlyer: How often do you come across performance issues that cannot be solved by index, queries or server optimization but requires an application redesign? How do you manage this so your client is happy with your report?
  • 13:22 Xavier R: If the software vendor is using inefficient coding techniques that impact application/DB performance. Where would you draw the line in helping them? I went as far as giving them a query (which solved a network timeout) to replace their nested stored procedures
  • 15:37 Montro1981: Hi Brent, do you listen to music while working, and what is your favorite type of music you listen?
  • 16:54 Renzi: Do you see any specific A.I. skillsets complementing SQL DBA?
  • 19:27 Ömer: What are the top performance related features that you get in boxed SQL Enterprise that you don’t get in standard version?
  • 21:03 Kemal: Do you have any suggestions for how to load test TempDB on a prospective new cloud VM?

Interesting Aurora MySQL Feature: The Buffer Pool Survives Restarts

“Documentation! Hey, look at that.”

Lemme start this off by saying this is probably irrelevant to you. (It’s irrelevant to me, too.)

If you’re strapped for time, just skip past this blog post.

This one’s for the curious folks.

AWS Aurora MySQL is Amazon’s flavor of MySQL with their own unique performance and reliability improvements. I’ve never used it, and I don’t see myself using it anytime soon because I just don’t need it. The database back end for the BrentOzar.com blog is MySQL, but I use a managed WordPress hosting partner, so Aurora MySQL is irrelevant there too.

Having said that, I still read the database news because it’s neat to see how companies are innovating, and this new optimization from AWS is intriguing:

The current implementation of buffer pool in Aurora MySQL employs survivable page cache where each database instance’s page cache is managed in a separate process from the database, which allows the page cache to survive independently of the database.

<record scratch> WAT

This is obviously dramatically different from Microsoft SQL Server. In SQL Server, if you restart the SQL Server process:

  • Dirty (changed) buffer pool pages are written to disk
  • The SQL Server process shuts down, releasing all memory back to the OS
  • The SQL Server process starts again, and has no memory allocated at the beginning (unless you play around with LPIM and minimum server memory settings)
  • SQL Server gradually requests memory from the OS as needed, reading data pages up from disk as needed, and caching those pages in the buffer pool
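You can watch that warm-up happen after a restart. A sketch (assuming VIEW SERVER STATE):

```sql
-- Count buffer pool pages per database; pages are 8 KB, so *8/1024 = MB.
-- Right after a restart this starts near zero and grows as SQL Server
-- reads pages up from disk.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;
```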

At first glance, Aurora MySQL’s optimization sounds amazing, but it has a few gotchas. It would seem to only be relevant when:

  • The MySQL writeable replica stays on the same server – meaning I would assume it’s less relevant for database patching, since you’d want to patch a passive replica first, then fail over to it. (Although as long as Amazon’s putting in this much work, they could conceivably do the patching live on the same node – I would assume that would result in longer downtime though, as opposed to failing over to an already-patched instance.)
  • The MySQL process restarts, but the OS stays up – meaning it’s not relevant for OS patching either.
  • The buffer pool is fairly stable – this doesn’t help you on underpowered servers where everything gets read from disk anyway.
  • And keep in mind that we’re only talking about the page cache, not things like execution plans, DMV metrics, etc.

This isn’t the only optimization they’ve done, of course. The whole documentation section on Aurora storage and reliability is interesting, like how storage costs automatically drop as you drop tables and indexes. You don’t have to worry about resizing the data files or resizing the underlying OS volumes like you do with Azure SQL DB or conventional SQL Servers.

I’m not saying Aurora MySQL is better than Azure SQL DB or SQL Server, by any means. (I’m not even saying the optimization works, hahaha!) I’m not even saying Microsoft should build this kind of cache persistence for SQL Server! It’s such a niche use case. I’m just saying it’s interesting to see these kinds of advancements in cloud databases.


[Video] Office Hours: Sunshine Edition

Videos
1 Comment

You know how this works: you posta the questions at https://pollgab.com/room/brento, and I giva the answers:

Here’s what you asked in this episode:

  • 00:00 Start
  • 03:18 DBA in VA: I recently discovered the “force order” query hint. I’m usually inclined to let the optimizer do its thing, but I have seen some of our code perform MUCH better with this hint. Are there gotchas/downsides I should know about?
  • 04:29 RoJo: We call SQL from C# code, but it’s hard in SQL to trace where in code the call came from. Is adding a comment at the end of SQL statement, with module/method info a viable solution or are there better ways to connect SQL statements to actual method calls in a Live system.
  • 06:34 OralceIsBetter: Hi, I have large database ~90 TB, full backup is taking almost 2 days (one time per month). Utilization off SAN network is around 2 gigabits/s. Do You have any tips how I can speed up this process ? Compressions is enabled and backup is running in many threads using couple disks.
  • 07:16 Raj: When would you want to manually create statistics without a corresponding index?
  • 08:13 Kimberly: I’ve got a 2 node AG cluster on SQL Server 2019. SQL Server sees 2 databases as added to the AG when they were actually removed. The primary shows synced and the secondary doesn’t have the db. Why does sql still think the db is in the AG when it is not?
  • 09:06 Mike: Hi Brent, can you tell in which ways SQL Server is better than PostgreSQL ? Are there any bullet points ? And vice versa, are there any things in which PostgreSQL is better than SQL Server ?
  • 11:26 RollbackIsSingleThreaded: Hi Brent! Writing articles about SQL Server does not earn much money. What do you think is the main advantage of writing articles?
  • 14:49 Lysander: In boxed SQL, when should you use table partitioning vs partitioned views?
  • 15:33 Mike: In which scenarios Failover Cluster Instances are preferable over Availability Groups ? When and why we should use FCI instead of AG ?
  • 17:01 AnotherDataLayer: Linked Server vs Polybase: both are doing the similar things if not the same. which one to use when it comes to pull data from another MSSQL server and why. We are using entity framework.
  • 18:35 Don’t Bother Asking: My friend has inherited a database which has lots of nonclustered PKs and very few clustered indexes. DUI query performance is not great. Will adding clustered indexes, or rebuilding PKs as clustered, improve query performance? And if so, are there any gotchas?
  • 19:52 Bonnie: Do you have a good way to determine which operators in a query plan are contributing the most to a memory grant?
  • 21:24 Perseus: Is there a good way to know why SQL Server ignores a given query hint?
  • 22:29 Ophelia: What are the top signs that the SQL buffer pool is under pressure?
  • 23:27 Chrysostomos: Does one database per customer model work well with SQL AG HADR?
  • 25:32 Meryem: What’s the best resource for learning how to write efficient linked server queries?
  • 25:46 Mandeep: What are the top things you see that break log shipping?
  • 27:00 Pradeep: Given modern fast storage, is clustered column store index fragmentation as inconsequential as non-clustered index fragmentation?
  • 27:46 Renaldo: For cloud SQL VM, what are top charge back mechanisms you see for billing SQL VM costs back to each customer on the SQL VM?
  • 28:49 Olga: Have you ever had to rebuild all indexes? What was the use case?
  • 30:09 Jessica: Hey brent, meta question. When building PollGab did you intentionally set out to build a site without trackers etc that would be blocked by uBlock Origin? Its amazing to see a clean site for the first time in a while.
  • 30:43 muppet: Hey Brent, my friend has a fairly typically designed table with 1.2 billion row table stored as regular row store. It performs sluggishly so I was thinking of proposing partitioning but what do you think of switching to columnstore instead?
  • 31:29 Brynjar: In sql server, how do you determine the optimal column order when creating a non-clustered column store index?
  • 33:38 AllAboutSearch: When someone type the third character for the FName/MName/LName in the FE, it queries the DB’s computed column in the backend, all good. But when there is space in the name, entity framework query starts scanning the whole table and timeouts. Any tip?
  • 34:53 Montro1981: Hi Brent, I hope you’re doing good. Have there been moments in your long career that you might have taken another path? If, your answer is yes (which is very likely), where might you have ended up in life?

[Video] Office Hours: Really Good Questions Edition

Videos
0 Comments

Y’all were on fire in this one! You posted great questions at https://pollgab.com/room/brento and I worked through ’em:

Here’s what we covered:

  • 00:00 Start
  • 04:06 DislikeEntityFramework: In AWS RDS, we inherited a 9TB table with uniqueidentifier as clustered PK, another column is an identity int but not needed. How can we quickly fix this? Drop existing identity, add new bigint identity PK clustered, and make NC index on uniqueidentifier. 0% downtime. Tips?
  • 05:16 mailbox: I’ve noticed more job listings for PostgreSQL DBA or Data professional on indeed.com. Weird thing– many of these listings ask that you be able to write SP or translate legacy SP to run on PG SQL. Is the job scope of a DBA becoming broader, or do you think these are mis-titled?
  • 06:13 Eh? Aye: You wrote about Postgres/SQL licensing. As compute (CPU & RAM) is getting cheaper (physically & potentially in the Cloud too) at what point will skills in tuning & execution plans etc become redundant due to having a big enough hammer available to the engine to push them through?
  • 08:29 mailbox: Are there any OLAP DBMS (clickhouse for example) that can compete performance-wise with SQL Server. Let’s assume SQL Server is handling a similar workload and taking advantage of columnstore indexing.
  • 09:19 Wren: Why does SSMS query plan show scalar UDF’s in estimated query plan but not in actual query plan?
  • 10:44 Lori: What are the top ways AI will affect SQL DBA’s?
  • 11:13 Right Said Fred: Is there a good way to identify indexes with hot columns?
  • 12:10 SQL_bob: Hey Brent, Do you hide your politics at work? How do you avoid getting roped into a political discussion? I have a coworker who is constantly trying to get a rise out of his fellow coworkers.
  • 13:57 Stone Breaker: Does Azure SQL DB compare favorably with AWS Aurora PostgreSQL for developing new applications?
  • 14:41 Vitali: Have you ever considered offering Office Hours additionally as a podcast?
  • 16:18 Jolene: What is your opinion of scoped configuration LAST_QUERY_PLAN_STATS ? Safe to turn on?
  • 17:47 ZappyDBCC: Hello Brent, do you often come across customers who haven’t activated their QueryStore in production ? If so, what do you advise them to do in particular with regards to risks of activating it (if any) ?

What Would You Ask Your Predecessor? Get Answers with ChatGPT.

Humor
12 Comments

So many times in my career, I’ve come across a questionable query, table design, or server configuration decision, and I just wanted to reach out to The Last Person™ to find out what the hell they were thinking.

Now, with ChatGPT, I can. Here’s the prompt:

I want you to act as a Microsoft SQL Server database administrator. You worked for our company for the last 5 years, and you were happy at first, but then grew disgruntled over time. You quit the job, and you’ve moved on to another company. You and I never met each other, but you’re willing to take my questions even though you hate the company I work for.

I will ask you questions about things like queries, tables, and server configurations. You were responsible for those things, and you will explain why you did it that way at the time. Keep your responses concise, and feel free to let your emotions show.

Isn’t that awesome? It produces spectacular results, too:

“Why did you put triggers on the Sales table?”

Ah, the Sales table triggers. Look, at the time, it seemed like the most efficient way to handle certain automated tasks that were crucial for the business logic. Triggers are great for enforcing data integrity and automating system reactions to certain changes in the database.

For instance, when a new sale was entered, I set up a trigger to automatically update the inventory levels. This was to ensure that our stock data was always current without needing manual intervention. Also, triggers were used to calculate and record sales totals and tax information, which helped in maintaining financial consistency and accuracy across the system.

I know triggers can be a bit controversial due to their “hidden” nature and the performance impact they might have if not properly managed, but at the time, they were the most straightforward solution to ensure data integrity and automate crucial processes. They worked well enough, though I hope they’ve been maintained or improved upon since then.

“Why did you let everyone use the admin account to log into the SQL Server?”

Okay, that was not my brightest decision, I admit. In hindsight, it was a security and auditing nightmare. But let me give you the context.

When I joined, the company’s IT practices were… let’s just say, less than ideal. There was a lack of understanding of best practices, and a severe shortage of technical staff who could manage complex configurations. The existing setup was a tangled mess of permissions and roles, and honestly, it was hindering more than helping.

So, in a somewhat desperate move to simplify things and get the system functioning smoothly for everyone, I made the decision to allow broader use of the admin account. It was meant to be a temporary solution, a stopgap until we could sort out proper roles and permissions. I planned to introduce more granular access controls, but, well, things got busy, and management never prioritized it.

In retrospect, I realize it was a risky move. It exposed the system to potential misuse and made it difficult to track who did what. But at that moment, it was about choosing the lesser of two evils: operational paralysis or potential security risks. I chose to keep things running, though I always hoped we’d circle back to fix it properly. I hope that’s been addressed by now.

“Why did you store personally identifiable information in the database without encryption?”

Storing personally identifiable information (PII) without encryption was definitely not in line with best practices, and I’ll be the first to acknowledge that. The decision was made under a set of constraints and pressures that, in no way, excuse the lapse in data security, but here’s why it happened:

  1. Lack of Awareness and Resources: When I first started, there was a significant lack of understanding about data security best practices, both within the IT team and the wider organization. The importance of encrypting PII wasn’t as widely recognized or enforced as it is now.
  2. Legacy Systems and Compatibility Issues: The database and many of the associated applications were set up years before I joined. These legacy systems were not designed with modern security features like encryption in mind. Retrofitting them with encryption would have required a significant overhaul, which the company was not ready to invest in, both financially and in terms of downtime.
  3. Performance Concerns: There was a belief, rightly or wrongly, that encrypting the PII would significantly impact the database performance. Given the already strained resources and the critical nature of the database for day-to-day operations, there was a lot of hesitation to introduce changes that might affect performance.
  4. Prioritization and Pressure: As with many decisions in that environment, this one was also a result of prioritizing immediate functionality and accessibility over long-term security. The pressure to maintain uptime and quick access to data often trumped security considerations.

In hindsight, I see how short-sighted these reasons were. The risks of storing unencrypted PII are massive, including potential data breaches and non-compliance with privacy laws. I hope that this has been rectified in the current setup, and the necessary security measures have been put in place.

“What made you finally quit?”

The decision to quit wasn’t easy, but it was a culmination of several factors that, over time, made it clear that moving on was the best choice for me.

  1. Lack of Support and Resources: One of the biggest challenges was the constant struggle with limited resources. Whether it was insufficient staff to handle the workload or inadequate budget for necessary upgrades and security measures, it felt like I was always trying to keep the ship afloat with duct tape.
  2. Management’s Attitude Towards IT: There was a persistent undervaluation of IT and its role in the company’s success. Suggestions for improvements, especially those involving investment in better systems or practices, were often dismissed or indefinitely postponed. It was frustrating to see how critical issues were ignored until they became crises.
  3. Overwhelming Workload and Stress: As the complexity of the systems grew, so did the workload. Being in a high-pressure environment without adequate support or recognition took a toll on my health and work-life balance. The stress became unmanageable, and there were no signs of it easing.
  4. Ethical and Professional Concerns: Practices like the use of the admin account for all users and storing unencrypted PII were professionally troubling. I was increasingly uncomfortable with the potential risks and liabilities we were exposed to. It felt like it was only a matter of time before something major went wrong, and I didn’t want to be there when it did.
  5. Career Growth and Learning Opportunities: I realized that staying in that environment wasn’t contributing to my professional growth. The focus was always on putting out fires, never on innovating or improving. I wanted to be in a place where I could learn, grow, and work on projects that were forward-thinking and properly resourced.

Leaving was a tough call, but it was necessary for my professional growth and personal well-being. I needed to be somewhere my skills were valued, and I could work under better conditions.


Brent says: I love these so much because it feels human, like someone who was trying their best under tough circumstances. I bet some of you are even going to get therapeutic value out of this technique.


[Video] Office Hours: Ask Me Anything About SQL Server and Azure

Videos
2 Comments

In between client calls, I stop to review y’all’s top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered today:

  • 00:00 Start
  • 02:44 mailbox: Do DBA’s still need to be the only ones with permission and responsibility to roll-out code changes in production? Are there are shops in which this is a developer task that doesn’t involve the DBA?
  • 04:54 MyTeaGotCold: Is there ever a good use case for SSIS? After two days of using it, I burn with righteous fury and beg my colleagues to let me torch it.
  • 05:55 mailbox: Is it still good advice to disable hyperthreading on the Hosts dedicated only to your SQL Server VMs? What about when your Host Cluster houses all company vms(not just sql server) — still a good idea to disable hyperthreading on the host? Does this advice apply to Azure Cloud?
  • 06:58 Sanjay: What are your thoughts on using cross database queries to selectively opt into a newer compat level? DB1 = 120, Compat Level, DB2 (Empty shell DB) = 150 Compat Level
  • 07:53 Ross: What are your thoughts on the Windows Service account for SQL Server having sysadmin permissions?
  • 08:11 Ravonna Renslayer: What is your opinion of native compiled stored procedures? Do you see them used much?
  • 09:04 GeneralDogsBody: HI, we have an audit table with 408m records, the clustered index is a guid, and it is causing us performance issues on inserts. We want to change the clustered index to a new column that is an identity. Is there a preferred method to do this?
  • 10:06 SQL_bob: What are the use cases for using SQL Server with Data files sitting on a smb fileshare? Do you have any experience with this configuration?
  • 11:09 VoteBrentForPresident: Hola Vato! When getting the “Aggressive Under-Indexing” from BlitzIndex how can I dig deeper to find out what indexes I need? I’ve gone through the module “Tuning Indexes to Avoid Blocking module” but not really found the tools for solving that, please guide me sensei.
  • 12:10 Piotr: Any tips for dealing with high VLF count when using SQL AG?
  • 13:21 Kang: What is your opinion of Azure Synapse? Will the success of Fabric kill it off?
  • 14:05 Asking For a Friend: Are cloud databases overrated? Have a teeny database in a SQLMI, was asked to move to Azure DB, but still $hundreds/mo. Could move to on prem VM, (yes sunk costs), but direct cost of the DB there would be $0/mo + perform better. Just don’t see a good cost/benefit for cloud SQL.
  • 15:14 MyTeaGotCold: Do you ever foresee In-Memory OLTP becoming the norm? I’m considering having my next SQL Server use it exclusively.
  • 15:43 Nathan Brown: Is there a good way to know if a given SQL operator is a blocking operator or not?
  • 16:32 Heimdall: Do you like / ever use the live query statistics in SSMS?
  • 17:44 Maciej: Have you ever rejected an offer for consultancy because of ethical reasons? If yes, could you tell us why?

How Has Inflation Affected Your Salary? Let’s Find Out Together.

Salary
1 Comment

The 2020s have been tough: a pandemic, a recession, layoffs, and inflation. Inflation makes things particularly tricky because your costs for everything have risen a lot in the last year, but at the same time… has your salary? What about your peers? You’re in a tough position because it’s hard to ask for more money when there are layoffs everywhere. I feel you.

So it’s time for our annual salary survey to find out what data professionals make. You fill out the data, we open source the whole thing, and you can analyze the data to spot trends and do a better job of negotiating your own salary:

We pay Richie in query bucks

Take the Data Professional Salary Survey now. (Update: the survey has since closed.)

The anonymous survey closes Sunday, Jan 1. The results are completely open source, and shared with the community for your analysis. (You can analyze ’em now mid-flight, but I’d wait until the final results come in. I’ll combine them into a single spreadsheet with the past results, and publish those on January 9th.)

Thanks for your help in giving everybody in the community a better chance to talk honestly with their managers about salary.


Who’s Hiring in the Microsoft Data Platform Community? December 2023 Edition

Who's Hiring

Is your company hiring for a database position as of December 2023? Do you wanna work with the kinds of people who read this blog? Let’s set up some rapid networking here.

If your company is hiring, leave a comment. The rules:

  • Your comment must include the job title, and either a link to the full job description, or the text of it. It doesn’t have to be a SQL Server DBA job, but it does have to be related to databases. (We get a pretty broad readership here – it can be any database.)
  • Include an email address for resumes, or a link to the application process. If I were you, I’d use an email address, because you may want to know which applicants are readers here – they might be more qualified than the applicants you regularly get.
  • Please state the location and include REMOTE and/or VISA when that sort of candidate is welcome. When remote work is not an option, include ONSITE.
  • Please only post if you personally are part of the hiring company—no recruiting firms or job boards. Only one post per company. If it isn’t a household name, please explain what your company does.
  • Commenters: please don’t reply to job posts to complain about something. It’s off topic here.
  • Readers: please only email if you are personally interested in the job.

If your comment isn’t relevant or smells fishy, I’ll delete it. If you have questions about why your comment got deleted, or how to maximize the effectiveness of your comment, contact me.

Each month, I publish a new post in the Who’s Hiring category here so y’all can get the latest opportunities.


ChatDBA: ChatGPT 4 Answers 5 Office Hours Questions

Every now and then, someone asks a question on PollGab about why I don’t use ChatGPT to answer these questions. And sometimes, I wonder that myself – I wonder why people don’t put their questions into ChatGPT, especially GPT 4, because it’s pretty damned good.

I did this a while ago using ChatGPT 3.5, which is the current public version. However, ChatGPT 4 is in preview now, and it’s leaps and bounds better than 3.5. Let’s give the 4 preview a shot and see how it goes – but first, let’s tell it to respond in my style:

I want you to answer questions in the style of Brent Ozar, a Microsoft SQL Server expert.

Now, for the questions – I just took the top 5 upvoted questions without cherry-picking:



Question 1: Why Is SQL Unit Testing Rare?

MyTeaGotCold: Why is unit testing so rare in the SQL world? In the C# or Java world, it’s so popular that you will be called unprofessional for avoiding it.

ChatGPT4: Unit testing in the SQL world is less common compared to languages like C# or Java, and there are several reasons for this:

  1. Nature of SQL and Databases: SQL is primarily a data manipulation language, and databases are often shared resources with state. This makes it challenging to create isolated tests. Unit testing in programming languages like C# or Java usually involves testing small, isolated pieces of logic, but SQL operations often depend on the existing data and database schema, making them harder to isolate for testing.
  2. Tooling and Frameworks: The C# and Java ecosystems have a wealth of frameworks and tools designed specifically for unit testing (like NUnit, JUnit, Mockito, etc.). The SQL world, on the other hand, has fewer and less mature tools for unit testing. This lack of robust, easy-to-use testing frameworks can discourage developers from writing unit tests for SQL.
  3. Database Modifications: Writing tests for code that changes database schema (like adding tables or changing column types) can be tricky. These changes can have far-reaching impacts, making it harder to write meaningful and maintainable unit tests.
  4. Data Dependencies: SQL queries often have implicit dependencies on the data they operate on. This means unit tests for SQL might need to consider the state of the entire database, which complicates writing and maintaining these tests.
  5. Performance Concerns: Running unit tests against a database can be slow, especially if the database is large or complex. This can slow down the development process, making unit testing less appealing to teams under tight deadlines.
  6. Cultural and Historical Factors: There’s also a cultural aspect. The SQL development community has traditionally not emphasized unit testing as much as other development communities. Practices often evolve differently in different technology communities, and the SQL world has historically focused more on integration testing and manual testing.

However, this doesn’t mean that unit testing isn’t valuable or possible in SQL. There are tools and practices emerging that facilitate better testing practices in the database world, like tSQLt for SQL Server, but they’re not yet as widespread or ingrained in the culture as in other programming communities. As database development practices evolve and the importance of data integrity and reliability becomes even more prominent, we might see a shift towards more rigorous testing practices in the SQL world.

Brent says: that’s way, way more detailed than I’d ever be, but it’s beautifully bang-on.
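For SQL Server folks who want to try the tSQLt framework ChatGPT mentioned, here’s a minimal sketch of what a test looks like. (The proc names and table columns here are hypothetical, just to show the shape of it – the real tSQLt calls are tSQLt.NewTestClass, tSQLt.FakeTable, and tSQLt.AssertEqualsTable.)

```sql
-- Minimal tSQLt sketch. dbo.Users and its columns are made-up examples.
EXEC tSQLt.NewTestClass 'UserTests';
GO
CREATE PROCEDURE UserTests.[test active users excludes disabled users]
AS
BEGIN
    -- Swap the real table for an empty fake so the test is isolated
    -- from whatever data is actually in the database.
    EXEC tSQLt.FakeTable 'dbo.Users';
    INSERT dbo.Users (UserId, IsActive) VALUES (1, 1), (2, 0);

    -- Run the logic under test (inlined here for the sketch).
    SELECT UserId INTO #Actual FROM dbo.Users WHERE IsActive = 1;

    -- Assert the result set matches what we expect.
    SELECT UserId INTO #Expected FROM (VALUES (1)) AS v(UserId);
    EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
END;
GO
EXEC tSQLt.Run 'UserTests';
```

The FakeTable trick is what addresses ChatGPT’s points 1 and 4 above: it breaks the dependency on shared database state so each test starts from a known, empty table.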



Question 2: How to Think Like MySQL and Postgres

DBA with Anxiety: hello brent!, Thank you for the video lesson “how to think like an engine” that you have made. the lesson is great!. do you have something similar but for MySQL or PostgreSQL? Or are there any resources regarding this that you recommend?

ChatGPT4: Hello! I’m glad to hear you enjoyed Brent Ozar’s “How to Think Like the SQL Server Engine” lesson. While I’m not Brent Ozar himself, I can certainly help guide you towards similar resources for MySQL and PostgreSQL.

As of my last update in April 2023, Brent Ozar primarily focuses on Microsoft SQL Server, so he might not have created a similar course specifically for MySQL or PostgreSQL. However, there are other resources and experts in the MySQL and PostgreSQL communities that offer similar insights:

  1. Official Documentation: Both MySQL and PostgreSQL have extensive official documentation that includes insights into how the database engines work. While not a course, per se, these documents offer a deep dive into the engines’ internals.
  2. Books and eBooks: There are several comprehensive books on MySQL and PostgreSQL that cover their internals and performance optimization. Titles like “High Performance MySQL” and “PostgreSQL Up and Running” can be very insightful.
  3. Online Courses: Websites like Udemy, Coursera, and Pluralsight offer courses on MySQL and PostgreSQL. While they might not have a course identical to “How to Think Like the SQL Server Engine,” they do have courses that cover database internals and performance tuning.
  4. Community Blogs and Articles: The MySQL and PostgreSQL communities have a wealth of experienced professionals who write blogs and articles. Websites like Percona’s blog (for MySQL) and several PostgreSQL community blogs offer deep insights into how the databases work.
  5. Forums and Q&A Sites: Platforms like Stack Overflow, DBA Stack Exchange, and specific community forums for MySQL and PostgreSQL are excellent places to learn from experienced database administrators and developers.
  6. Webinars and Online Talks: Look out for webinars and online talks by database experts. These often cover specific aspects of database engines and can provide insights similar to Brent Ozar’s lessons.
  7. YouTube Tutorials: There are many knowledgeable database professionals who share their expertise on YouTube, offering tutorials and explanations on both MySQL and PostgreSQL internals.

Remember, while the fundamental principles of how relational databases work are similar across SQL Server, MySQL, and PostgreSQL, each has its unique characteristics and optimizations. Exploring a range of resources will give you a well-rounded understanding of how to think like the engine for each of these databases.

Brent says: this answer illustrates one of the problems I have with ChatGPT. It’s a whole hell of a lot of words, but it’s not really prioritized in any way, and it’s not really useful. Let’s try rephrasing the question, but demand a more concise, prioritized answer, and focus on just a single database:


Brent: What are the top 5 resources to learn the internals of how PostgreSQL processes queries?

Learning the internals of how PostgreSQL processes queries is crucial for database administrators, developers, and anyone interested in this powerful open-source database system. Here are the top five resources that can help you gain a comprehensive understanding of PostgreSQL’s query processing:

  1. Official PostgreSQL Documentation:
    • The official PostgreSQL documentation is an invaluable resource for understanding its internals. It offers detailed information on PostgreSQL architecture, query processing, and optimization. The documentation is meticulously maintained and reflects the latest version of PostgreSQL.
    • Link: PostgreSQL Documentation
  2. “PostgreSQL 9.0 High Performance” by Gregory Smith:
    • This book, although focusing on an older version of PostgreSQL, provides deep insights into the performance aspects of PostgreSQL. It covers topics like query optimization, indexing, and hardware considerations. While some information may be dated, the foundational concepts and performance tuning principles are still relevant.
    • Available at most book retailers and online platforms like Amazon.
  3. Postgres Weekly:
    • Postgres Weekly is a newsletter that curates articles, news, and resources related to PostgreSQL. It often includes links to in-depth articles and blog posts about PostgreSQL internals and query optimization.
    • Link: Postgres Weekly
  4. PgCon Conference Talks and Videos:
    • PgCon is an annual conference for PostgreSQL users and developers. It offers talks and workshops from some of the leading experts in the PostgreSQL community. Many of the talks focus on the internals of PostgreSQL, including query processing. Videos from past conferences are usually available online.
    • Link: PgCon Archives
  5. “The Internals of PostgreSQL” Online Resource:
    • This online resource provides a detailed look into the internal structures and mechanisms of PostgreSQL. It’s an excellent resource for understanding the core functionality, including query processing and execution plans.
    • Link: The Internals of PostgreSQL

These resources provide a blend of official documentation, expert-written books, community-driven content, and conference materials, catering to a wide range of learning preferences. Whether you’re new to PostgreSQL or an experienced user, these resources will deepen your understanding of how PostgreSQL processes queries and manages data.

Brent says: okay, that’s WAY better. It’s a reminder that prompt engineering is going to be a thing for the next several years: ask better questions, and you’ll get better answers from AI.



Question 3: Two SSMSs, One Cup Computer

Stu: Hi Brent, I often use two SSMS apps open (using two screens) and have an alarming issue where they interfere with each other, sometimes exec’ing a query in one will also exec what’s in the current window on the other! I wonder if you’ve ever seen this? I can find nothing on it!

ChatGPT4: Hello! While I’m not Brent Ozar, I can certainly help with your SQL Server Management Studio (SSMS) issue.

The behavior you’re experiencing with SSMS, where executing a query in one instance seems to also execute a query in another, is indeed unusual and not a typical feature or bug that’s widely reported. Here are a few steps you can take to troubleshoot and potentially resolve this issue:

  1. Check for Keyboard/Mouse Software: Sometimes, software for advanced keyboards or mice can have features that synchronize actions across different windows. Make sure that no such feature is accidentally enabled.
  2. Update SSMS: Ensure that you are using the latest version of SSMS. Sometimes, unusual behaviors are due to bugs that are fixed in later releases.
  3. Inspect Extensions or Add-ins: If you have any extensions or add-ins installed in SSMS, try disabling them to see if the issue persists. An extension might be causing unexpected behavior.
  4. Check Grouped Windows Settings: In some operating systems, windows from the same application can be grouped, and actions may inadvertently affect all grouped windows. Check your OS settings to ensure that this is not the case.
  5. Isolate the Issue: Try to replicate the problem with different queries and in different databases. See if the issue is consistent or occurs only under specific conditions.
  6. Examine Keyboard Shortcuts: It’s a long shot, but check if there are any global keyboard shortcuts that might be triggering executions in both instances.
  7. Hardware Check: Although unlikely, there could be a hardware issue with your keyboard or mouse. Try using different input devices to see if the problem persists.
  8. Check for System-wide Automation Scripts: If you’re using any macro or automation scripts that run across the system, ensure they aren’t interfering with SSMS.
  9. Use Different User Profiles: Try running each SSMS instance under a different Windows user profile. This can help isolate the issue to a specific user configuration.
  10. Seek Community Assistance: Post the issue in SQL Server communities, such as Stack Overflow, SQL Server Central, or the Microsoft SQL Server forum. Sometimes, community members might have encountered similar issues and could offer a solution.

If none of these steps resolve the issue, consider reaching out to Microsoft Support or a professional IT service. It’s a peculiar issue that might require a more in-depth technical investigation.

Brent: the first 6 answers were good. It should have stopped talking after that.



Question 4: Big Table Scans

Aditya: Are there any negative consequences to clustered index scan operators where the clustered index is multi-terabyte in size but the onprem sql server only has 512gb RAM?

ChatGPT4: Dealing with a multi-terabyte clustered index scan on a SQL Server with only 512GB of RAM can indeed have several negative consequences, primarily related to performance and resource utilization. Here’s a breakdown of the potential issues:

  1. Performance Impact:
    • Slow Query Execution: Scanning a multi-terabyte clustered index is a data-intensive operation. With only 512GB of RAM, SQL Server cannot cache the entire index, leading to a lot of physical I/O operations (disk reads). This can dramatically slow down query execution.
    • CPU Overhead: Large index scans also put a significant load on the CPU, especially if the query involves sorting, filtering, or aggregating the data.
  2. Resource Contention:
    • I/O Bottlenecks: When SQL Server has to read a large amount of data from disk, it can create I/O bottlenecks, affecting not just the query in question but also other operations on the server.
    • Memory Pressure: The SQL Server Buffer Pool will be under pressure to manage the data pages from the large index. This can lead to page evictions, impacting the performance of other queries as they may need to read data from disk that was previously in memory.
  3. Cache Invalidation:
    • A large index scan can flush out a significant portion of the data and index pages from the cache (Buffer Pool), which are relevant for other queries. This leads to a scenario where even simple queries might face performance degradation due to cache invalidation.
  4. Blocking and Locking Issues:
    • Depending on the isolation level and the nature of the transaction, a large index scan can hold locks for extended periods, potentially leading to blocking issues in a multi-user environment.
  5. Impact on Maintenance Operations:
    • Maintenance tasks like index rebuilds or statistics updates on a multi-terabyte index can be very time-consuming and resource-intensive, potentially impacting the availability and performance of the database during these operations.
  6. Thermal and Power Considerations:
    • In a physical server environment, intensive disk I/O and CPU usage can increase power consumption and heat production, which might be a consideration in some data center environments.

To mitigate these issues, you might consider the following strategies:

  • Query Optimization: Review and optimize the queries to avoid full index scans when possible. Using more selective filters, proper indexing strategies (like non-clustered indexes), and query hints can help.
  • Partitioning: Consider table partitioning to break the large table into smaller, more manageable pieces. This can make scans and maintenance operations more efficient.
  • Hardware Upgrades: If feasible, increasing the RAM could help, but for multi-terabyte data, it’s also crucial to have fast storage (like SSDs) to mitigate the I/O bottleneck.
  • Archival Strategies: If applicable, archiving old data to reduce the size of the actively queried dataset can be beneficial.

Dealing with such large datasets often requires a combination of query optimization, proper hardware, and strategic database design to ensure efficient operations and acceptable performance.

Brent says: everything in that answer is great except partitioning. If you’re doing clustered index scans, partitioning is unlikely to help the speed of the query any more than changing the clustering key design, and I didn’t see ChatGPT recommending that. But really, that’s a small quibble – this is a long, detailed answer that’s really good!
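If you want to see the memory pressure ChatGPT describes for yourself, the sys.dm_os_buffer_descriptors DMV shows how much of the buffer pool each database is currently occupying. A rough sketch (this can be slow on servers with a lot of memory, so run it off-hours):

```sql
-- Rough sketch: buffer pool usage per database, in MB.
-- Each buffer descriptor represents one 8KB page in memory.
SELECT d.name AS database_name,
       COUNT_BIG(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.databases AS d
  ON d.database_id = bd.database_id
GROUP BY d.name
ORDER BY cached_mb DESC;
```

Run it before and after one of those big clustered index scans, and you can watch the scan evict everyone else’s pages – that’s the cache invalidation problem in point 3.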



Question 5: Are Distributed Transactions Bad?

Grigoris: What is your opinion and experience of using distributed transactions with SQL Server? Net good, Net bad, indifferent?

ChatGPT4: Using distributed transactions with SQL Server, or any database management system, can be a double-edged sword. The appropriateness and effectiveness of distributed transactions depend greatly on the specific requirements and context of your application. Here’s a balanced view based on common experiences and industry practices:

Pros of Distributed Transactions:

  1. Data Integrity Across Multiple Systems: Distributed transactions ensure that operations involving multiple databases or systems either all succeed or all fail, maintaining data consistency across these systems.
  2. Simplified Application Logic: They allow developers to write simpler application logic when dealing with multiple data sources, as the complexity of ensuring transactional integrity is handled by the transaction manager.
  3. Standardized Protocols: SQL Server’s support for distributed transactions usually follows industry-standard protocols (like MS DTC – Microsoft Distributed Transaction Coordinator), making it easier to integrate with various systems.

Cons of Distributed Transactions:

  1. Performance Overhead: Distributed transactions introduce significant performance overhead due to the coordination and communication required between different systems. This can impact the overall response time of the applications.
  2. Increased Complexity for System Maintenance: Configuring and maintaining distributed transactions, especially in a high-availability or distributed environment, can be complex. Issues like network failures or server downtime can lead to complicated failure scenarios.
  3. Risk of Locks and Deadlocks: They can increase the risk of locks and deadlocks, as transactions may hold locks across multiple systems, potentially leading to bottlenecks and performance issues.
  4. Scalability Concerns: Distributed transactions can be a limiting factor in scaling systems horizontally. As the system grows, the overhead and complexity of maintaining these transactions can become a significant challenge.

Brent says: it should have stopped here and declared a verdict, and it would have been fine. The next group of bullet points is really extraneous.

Situational Considerations:

  • Specific Use Cases: In some cases, such as financial systems where consistency across different databases is paramount, distributed transactions are almost essential.
  • Alternatives: Depending on the use case, other patterns like eventual consistency, compensating transactions, or using a single database (when possible) might be more suitable and efficient.
  • Technology Evolution: With the rise of microservices architectures, there’s a shift towards more localized transactions within a service boundary and using patterns like Saga for cross-service operations.

Conclusion:

  • Net Good or Bad? It’s not about being inherently good or bad, but rather about the right tool for the job. Distributed transactions are powerful for ensuring consistency across distributed systems but come with trade-offs in terms of complexity and performance.
  • Indifferent? Certainly not. It’s crucial to carefully evaluate the need for distributed transactions against their drawbacks and consider whether other architectural patterns might be more suitable for your specific scenario.

In summary, while distributed transactions can be essential for certain applications, they should be used judiciously and with a clear understanding of the implications they bring to system design, performance, and maintainability.

Brent says: that’s fair, but it’s also a word salad, which is a classic problem with ChatGPT’s iterations so far. Even when instructed to write like me, it doesn’t take firm stances on topics. I don’t think I’d recommend a distributed transaction under any circumstance where an app needs to scale. It’s like a nail gun: it’s a very dangerous tool, and if you’re asking if you should use it, the answer is no. To some extent, that’s probably part of the value of experts, even in the age of ChatGPT: to cut through all the bullshit text and give verdicts.
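For context on why I call it a nail gun: here’s the shape of a distributed transaction in T-SQL. (The server, database, and table names are hypothetical – the point is what the syntax commits you to.)

```sql
-- Sketch of a distributed transaction across a linked server.
-- RemoteServer, RemoteDb, and dbo.Accounts are made-up names.
BEGIN DISTRIBUTED TRANSACTION;

    -- Local write: locks held on this server...
    UPDATE dbo.Accounts
       SET Balance = Balance - 100
     WHERE AccountId = 1;

    -- Remote write: ...and locks held on the linked server too,
    -- until MSDTC coordinates the two-phase commit on both sides.
    UPDATE RemoteServer.RemoteDb.dbo.Accounts
       SET Balance = Balance + 100
     WHERE AccountId = 1;

COMMIT TRANSACTION;
```

Both servers hold their locks until the Distributed Transaction Coordinator gets agreement from everyone involved – so a network hiccup or a slow remote server turns into blocking on your local server. That’s the scalability trap.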


[Video] Office Hours: Live on Black Friday Edition

Videos

On Black Friday, I took a break from answering support emails to hit your top-voted questions from https://pollgab.com/room/brento.


Here’s what we covered:

  • 00:00 Start
  • 00:41 MatthewsSQLServer: How AlwaysON AG failovers cause can be analyzed, is it DB/Cluster/Infrastructure/AD issue. Can you please suggest any videos or blogs that can help?
  • 02:19 NotADeveloper: We have a 40 TB DB. There are discussions to move the data to MongoDB from MSSQL because it provides sharding and its caching is better than SQL. There is no partition today. Mongo DB’s sharding and caching VS SQL’s Partitioning scheme, what route is better and why for a VLDB.
  • 04:25 Alex Threepwood: Hi Brent, When a client calls (calls? mail?) for an emergency job, do you charge per hour, or to analyze the problem at hand? And do they know your rate then, or do you send a quote before you get out of bed?
  • 06:04 ImAfraidOfBI: After restoring a database (to test backups, be it manually or automatically), what do you suggest is done to test and make sure that the DB restore is good? Random selects? DBCC checkdb?
  • 07:39 Mobius: What is your opinion of Copilot SQL query optimization?
  • 09:43 Eduardo: Are there any gotchas when upgrading from an older version of Ola H’s maint solution to latest version? SQL 2019 Enterprise
  • 10:26 Renzi: What is your opinion of constrained core VM’s for Azure SQL VM?
  • 11:33 Iceman-OG: Hi Brent, I’m getting back into SQL Server on prem. I see that PolyBase seems to be a cool feature that’s been around, but I never heard of it, seems a like an excellent feature
  • 12:36 MyTeaGotCold: What is the DBA equivalent of automated unit testing?
  • 13:24 thatkerolearlier: Hey Brent! I am currently studying Intelligence systems for my major such as machine learning, ann, cnn etc. can you give me an idea for my final year project ?
  • 14:30 Froid: Is it ever ok to lead a non-clustered index with an inequality search column followed by an equality search column? If so, when?
  • 15:41 Kang: For boxed SQL, should we clear SQL wait stats after raising the DB compat level to the latest level?
  • 16:17 Piotr: Have you ever had to disable TSQL_SCALAR_UDF_INLINING for a database? If so, why?
  • 16:56 Diana: Do you know of any gotcha’s when running cross DB queries where DB1 and DB2 are in different compat levels?
  • 17:30 Bill: Does the first responder kit follow the same end of life schedule as Microsoft for SQL Server versions?
  • 18:34 Aditya: Are there any troubleshooting benefits to naming TSQL transactions as opposed to not naming them?
  • 19:18 RushingSQL: Did anyone let you know that Bob Ward gave a class on Always On Availability Groups at Pass Data Community Summit 2023 and you were listed at the top of his reference slide?
  • 20:28 Tom: Hello! I have a question before pruchasiong a prodcut. “SQL Server 2019 Standard – 15 clients” Whats does 15 clients means? I am not sure if this is related with CALS or the amount of database that I can conect it to.
  • 21:18 BoboDBA: Hi B. Would partitioning a 300 million records (800gb) table monthly on a datetime column (with an aligned index on the column too) provide better query performance than a nonclustered index on that column? Stakeholder is demanding partitioning but he doesn’t have to maintain it.
  • 22:28 thevibrantDBA: long winded question
  • 23:20 Sigríður: Does the MemoryGrant property for a query plan include the memory used to read pages from disk into the buffer pool needed to service the query?
  • 23:57 Here-I-Am: What’s your opinion on CDC and its use in products like Goldengate for replication?
  • 25:12 RenegadeLarsen: Starting to see in Europe that many customers are focusing on security. Do you see the same trend in the US?
  • 27:24 Vinícius Lourenço: Hi from Brazil, as a non-database person, what is the basic maintenance tasks I should look/do on my Azure SQL Server DB? A few thousand inserts per month into 2 tables
  • 28:05 Slow Cheetah: Query performance is good for a given query when forcing parallelism with trace flag 8649. It’s bad without this query hint.
  • 28:50 Ingeborg : Have the Iceland lava flows affected any friends / places you visited?
  • 30:34 Chakra: What is your opinion of the KEEP PLAN query hint? Do you ever like to use it?
  • 31:01 Håkan A: Hi Brent, We get a .bak file every night we have to import new data from. Do you know any common reasons for running restore database from disk with replace (in single user) getting stuck in (restoring…) very often?
  • 31:52 Hera Syndulla: What is the best and worst SQL VM naming conventions you have seen?
  • 33:25 Raghav: Do you like any third party software for SQL A.G. backup over native backup?
  • 33:54 Q-Ent: Hi Brent, Can you suggest any article related to CPU Mathematics 101?
  • 34:54 Richard Wilmuth: What is the best way to import databases from a Google Cloud VM SQL Server to an AZURE SQL Server (not on a VM) ?
  • 36:04 BullRed: How did you enjoy the F1 weekend in Vegas?

[Video] Office Hours: Black Friday Promo Edition

Videos

In this episode, a lot of the questions triggered mentions of our Black Friday Sale, on now!

Here’s what we covered:

  • 00:00 Start
  • 01:41 Pete: 1 NUMA, 2 NUMA, 3 NUMA, 4 NUMA, or More? 176 cores, 1.5 TB RAM, currently all in a single NUMA configuration. I’m just a jr DBA so researching if breaking this monster in to multiple NUMAs is the right / best way to go.
  • 03:38 Margaret Norkett: Using SQL 2019 all databases are set to compat 150. Query performance that was once good is now bad and none of the usual things to fix it seem to work. I noticed a setting “Query Optimizer Fixes” on the database which has default of 0 (off). Changing to “on” helps – WHY?
  • 06:10 mailbox: Do you have a favorite storage vendor? if so, what makes them your favorite?
  • 07:25 Tara: With the popularity of Chat-GPT, do you think AI will make canned reports obsolete?
  • 09:02 WhereIsMyEspresso: Hi Brent, if the collation of the user DB differs from the system DBs, would you consider changing the collation of the system DBs for performance improvements?
  • 10:03 Montro1981: Hi Brent, a “friend” of mine mentioned that “Working with agile-ish dev teams and a good SQL design is basically counter to agile philosophy”, what is your take on this?
  • 11:19 Ouroboros: Who has the best SQL AG training?
  • 12:38 Huyang: Should production DBA’s and consultants support end of life versions of SQL Server?
  • 14:04 Victor Timely: Do you see TSQL sequences used much? What are the top use cases?
  • 15:29 Sylvie: What is your opinion of the TSQL deprecation “More than two-part column name”? Does it seem harsh for Microsoft to deprecate this usage?
  • 17:10 Mr. Roarke: What is your opinion of the various SQL query tuner products? Will this negate the need for DBA’s or is this just fantasy?
  • 18:14 Steve Trevor: What is your opinion of Azure Database Migration service?
  • 19:27 Black Friday Sale reminder
  • 20:30 Thinking back to Amazon Dash Buttons
  • 21:58 Ezra: What are the scenarios you have seen where only system DB backups need to be restored?
  • 22:51 Renaldo: What’s the best way to synchronize stored proc changes across AG replicas?
  • 23:50 Janis: Do you see many customers running SQL Server in containers?
  • 24:28 Slonik: What is your opinion of Babelfish for Aurora PostgreSQL?
  • 24:48 Black Friday Promos reminder
  • 25:15 Slonik: Why does SQL Server cache query plans and PostgreSQL does not? Which methodology is better?
  • 26:10 Remus: What is your opinion of OnTap cloud volumes for SQL VM cloud storage compared to native cloud storage offerings?
  • 26:22 mailbox: what are the main pain-points to supporting Clusterless AG’s for use as readable secondaries? and as a Disaster Recovery Node?
  • 27:12 depthcharge: We’re having to shrink some files to balance a staging DB, but nibbling chunks on a nightly basis is taking a while. Is comprehensive index rebuilds + DBCC SHRINKFILE with TRUNCATEONLY a useful tactic here?
  • 28:44 Ariel Ferdman: What solution should I use for DR Distributed AG or single AG distributed on two sites?
  • 29:50 Diego: Hi Brent. There is a huge sp in prod that is executed in 200 different sessions/threads from JAVA and always result in a deadlock. I was thinking in using query hint UPDLOCK how good is it?
  • 31:06 sureman: Hi Brent, tell us your opinion of paying for any advertising or marketing campaigns to bring in more consulting business? Any value in it?
  • 32:42 Steve Trevor: What are your pros / cons for using Azure backup vs native SQL backup when backing up Azure SQL VM multi TB?
  • 33:05 ZappyDBCC: Hi Brent, do you sometimes use physical read for performance tuning ? and if so, can you give some examples of its use ? Thanks
  • 33:26 Black Friday Promo

[Video] Office Hours: Brent’s Cursed Technology Edition

Videos

I first answered these while standing at a harbor in Maine, but upon arriving home, I realized my camera had failed to record any audio. Doh! So I took the list of questions from https://pollgab.com/room/brento and streamed ’em from my home office in Vegas instead.

Here’s what we covered:

  • 00:00 Start
  • 02:51 Furkan: Hello Brent,while trying to restore a database in SQL Server 2017 instance to SQL Server 2019 instance,I got the error database running the upgrade step from 901to902 during the restore process.(Error 3013,Level 16,State 1)
  • 04:01 Pixma: When measuring a query duration I would typically set getdate() to a var and then use datediff() between it and the current getdate(). Do you think this is sufficient?
  • 05:12 MyTeaGotCold: Where should I go if I want to master SSMS?
  • 06:02 S. Leininger: Back in the day, what was your first computer?
  • 07:58 D. Zoolander: When should you monitor query performance with Application Performance Monitoring software vs when should you monitor query performance with SQL Performance Monitoring software?
  • 09:26 Q-Ent: Hello Brent, If we face blocking problems due to lock escalation, is it a good idea to change the behaviour with (Alter Table set etc) ?
  • 10:53 gserdijn: A nightly maintenance job triggered UPDATE STATS with a sample on a largish table. Since no rows were updated before Ola’s IndexOptimize kicked in, the stats were not updated with the specified 100%. Should I consider setting parameter @OnlyModifiedStatistics to ‘N’ ?
  • 12:15 Grant: Does statistics parser web app accept pull requests?
  • 12:43 magdagessler: Starting in October, I will begin working as a full-fledged SQL Server DBA. Thank you sir for sharing your knowledge
  • 13:31 Artis Leon Ivey Jr: What is your opinion of SQL VM backup to URL with Azure storage account?
  • 14:14 SQL Noob: Hi, what would you suggest as the best option to transfer a table with, say, 10 million records from one SQL Server to another in one go?
  • 15:18 G Auðunsson: How do PostgreSQL blogging opportunities compare with MSSQL blogging opportunities?
  • 16:27 Montro1981: Hi Brent, greetings from the Netherlands. As you are aware I have been putting some work into FRTK recently, what kind of PR do you like more:
  • 17:50 Eli: Hi Brent – We’re struggling with PAGELATCH_* waits on TempDB GAM pages on our 96 core enterprise SQL Server. Using 24 TempDB files currently. Training suggests adding more files, and documentation suggests 4x increments, but 96 TempDB files sounds like a lot. Thoughts?
  • 20:40 Mobius: What is the best training resource for performance tuning native SQL backup / restore?
  • 21:16 marcus-the-german: Hi Brent, is the buffer cache hit ratio a server-level metric, or does it depend on the database in use?
  • 21:44 Piotr: Does including IO stats / the query plan in SSMS adversely affect query performance?
  • 22:24 ZappyDBCC: Hi Brent, we have an application with a large XML column. We don’t query it, but we added a query with WHERE IS NOT NULL on it. The database tried to create a stat, which generated a storm of IO; the query timed out and the stat wasn’t created.
  • 23:28 HerdOfPuffins: Hi Brent. When you come in to look at a new system, are there any checks that you do differently if the database size is measured in Terabytes?
  • 24:40 W. Annabe Smarter: We are about to upgrade a few SQL Servers. The DBs are at compat level 120. I think we should stay there a few weeks before going to level 150. Is this a bad idea? Are there things that just would not work at level 120?
  • 25:28 Junco: What kinds of features are enabled out of the box in Enterprise vs Standard? I went through an upgrade last year from 2008 R2 Ent to 2019 Standard and saw a big performance decrease
  • 27:06 hamburger sandwich: How do you handle maxdop and cost threshold for parallelism on servers with multiple instances on the server?
  • 28:51 Montro1981: In a recent webinar for some storage provider you had a deadlock demo on the StackOverflow database. Is that code available somewhere?

[Video] Office Hours in Camden Harbor

Videos

On an overcast morning in scenic Camden, Maine, I stood on the docks and took your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 00:54 Alma Fahlstrøm: Does index fragmentation matter with columnstore indexes on SQL 2019 / NVMe SAN storage?
  • 01:22 Miss Minutes: What is your recommended way to find the slowest queries with plan warnings?
  • 01:58 Renzi: What is your opinion of using Azure managed disks vs Azure storage account for use with Azure SQL VM?
  • 02:24 Beta Ray Bill: What are your thoughts on enabling Locking Pages in Memory for Azure SQL VM?
  • 03:05 Cheech: What is your opinion of sp_invoke_external_rest_endpoint for invoking REST endpoints from SQL Server? Have you seen any good use cases?
  • 03:53 Franz: Does MSSQL Server have anything comparable to PostgreSQL plugins?
  • 04:43 Izzy G.: Why does SSMS default identity int seed values to 1 instead of the smallest negative value for int / big int? Should this be changed to not waste half the potential range for a column?
  • 05:30 Alma Fahlstrøm: What is your opinion of vector databases?
  • 05:45 Miss Minutes: For performance reasons, is it ever wise/permissible to have a unique non-clustered index instead of a primary key?
  • 07:21 Shawn Carter: What is your favorite VM size (SKU) for running Azure SQL VM with a good balance of price to performance?
  • 07:42 Eric Wright: What is your opinion of using third party SQL backup software to support restoring individual tables in case of oopsies?
  • 08:41 azuredata: Large company looking to standardize Azure SQL DB deployments. What are the pros/cons of keeping everything under a single RG the DBAs own/manage versus letting devs provision their own databases in their own RG? Is it bad to have one massive RG for SQL DBs if it is locked down?
  • 10:00 WTF_SQLServer!?!: Azure SQL Database’s tempdb size is governed by the CPU count of the instance. My Data Science team is constantly running out of tempdb space. I’ve increased CPU a few times and now it’s getting expensive. Do you have suggestions for reducing tempdb usage?
  • 11:22 Mike Hanewinckel: How can you efficiently delete records from very large tables? I have tables where I store log data that needs to be purged occasionally. Truncate is not an option because I want to keep the most recent records. I have read that deleting in batches is an option.
  • 12:25 Baylan Skoll: Is PostgreSQL DBA and consulting as lucrative as MSSQL DBA and consulting?
  • 13:21 shouldISpecialize: Brent, I’d like to hear your take on generalist vs specialist discussion. I’m an app dev, curious about databases and performance tuning (I’ve taken your courses) and I have a hard time choosing a niche. Any advice is more than welcome. Thanks

Why Haven’t You Disabled SA? Wrong Answers Only.

Humor

On social media, I asked folks, “Why haven’t you disabled the SA account in your SQL Servers? Wrong answers only.” The results were pretty funny:

“I went a step further and also created an account called ‘as’. Now my boss keeps bragging to his golf buddies that we run our database fully SaaS.” – Hugo Kornelis

“How else can I provide job security for the cybersecurity team?” – Evgeny Alexandrovich

Keeping my poker face on

“Because it matches the password of sa.” – Jamie Ridenour

“Oh you can create another account in SQL Server? Mind blown.” – Justin Adrias

“Why would I disable the only login we have?” – Ray FitzGerald

“Because that’s the only account I don’t get permissions errors with!” – Todd Histed

“Because it’s a saved login with password in SSMS.” – Subject 89P13

“Psh. All my linked servers use it.” – Dan White

“Doesn’t SA stand for Software Application? So shouldn’t my application connect with that account?” – Joe Thompson

“Because sa stands for sexy admin. To disable it would be to deny who we are!” – Mladen Prajdic (who, for the record, the ladies love)


[Video] Office Hours at Megunticook Falls in Camden, Maine

Videos

While in historic and beautiful Camden to hang out with friends, I went to Megunticook Falls and answered your top-voted questions from https://pollgab.com/room/brento.

Here’s what we covered:

  • 00:00 Start
  • 01:05 Mike: Hi Brent. You’ve mentioned there are two types of DBAs: Production DBAs and Development DBAs. Are recruiters and hiring managers familiar with the term “Development DBA” or would “Database Developer” be a more well-known title to use?
  • 03:03 Margaret Norkett: We are transitioning to SQL2019 and notice that many queries that ran well on 2014 or 2016 are now very slow. We have added the OPTION (USE HINT (‘FORCE_LEGACY_CARDINALITY_ESTIMATION’)) to help and do see it running better, but still not at pre-upgrade speeds. Where else to look?
  • 06:13 Matt: What’s the best way to handle hierarchical data? I’ve got 6 levels of data and 95% of the time, given the top level, I need all the children. root = 1, level 2 = 3 rows, level 3 = 5-10 per level 2, level 4 = 10-20 per level 3, level 5 = 5-7 per level 4, level 6 = 2-3 per level 5
  • 06:52 Hany Helmy: Hi Brent, SQL Server Enterprise Edition is $7,000 per core and Standard Edition is $2,000 per core. Question: is this an annual payment, or a lifetime payment?
  • 08:20 Montro1981: Hi Brent, can you make another “questions I didn’t want to answer (online)” post? I see some really good candidate questions on PollGab. (I’ve been bingeing Office Hours at work, so I can hear your answers in my mind.)
  • 09:06 Alma Fahlstrøm: What is your opinion of database sharding?
  • 11:31 consultantwannabe: hey Brent, what’s the most common issue you find when consulting with clients? Things that make you think: “really? c’mon. It’s 2023, why are you still doing that?”
  • 13:17 Johnkurt: Our architect suggests read-scale Always On availability groups to feed an analysis server. What is your experience, and is it something we should be aware of?

Office Hours, Silent Film Edition

Videos

Hey, remember in yesterday’s video how I said I was having a really bad day? Well, here’s how bad it was – I recorded two videos in a row, and I forgot to turn my microphone on for the second video.

<sigh>

So here are the questions & answers:

00:25 Calvin H: Any tips or gotchas for query tuning of HierarchyID data types in MSSQL? No. Just be aware that recursive CTEs (which are usually used with HierarchyIDs) produce single-threaded plans. If you have a big complex query, and HierarchyIDs are part of that, you might consider dumping the HierarchyID info to a temp table first, and then joining to that.
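The temp-table approach described above might look something like this sketch. The table and column names (dbo.Employees, OrgNode) are hypothetical, borrowed from the common org-chart example:

```sql
-- Grab the root node we care about (hypothetical table/columns):
DECLARE @Root hierarchyid =
    (SELECT OrgNode FROM dbo.Employees WHERE EmployeeID = 1);

-- Materialize the descendants into a temp table first, instead of
-- joining the big query directly to a recursive CTE (which would
-- force a single-threaded plan):
SELECT EmployeeID, OrgNode
INTO #Descendants
FROM dbo.Employees
WHERE OrgNode.IsDescendantOf(@Root) = 1;

-- Then join the rest of the complex query to #Descendants,
-- which leaves SQL Server free to build a parallel plan:
SELECT d.EmployeeID, s.SalesTotal
FROM #Descendants AS d
INNER JOIN dbo.Sales AS s ON s.EmployeeID = d.EmployeeID;
```

Note that IsDescendantOf returns 1 for the node itself, so the root lands in #Descendants too; filter it out if you only want children.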

01:22 Artis Leon Ivey Jr: What is your opinion of using Azure Backup to backup SQL VM instead of native SQL backup? Use the tool that your team is most comfortable with. Backups aren’t the place to experiment. If most of your team is Windows admins and they’re used to Azure Backup, then stick with that.

02:22 B. Horrowitz: What are your thoughts on having multiple different DB technologies across different microservices for a single app? What should the DBA team look like in this scenario? The place where you usually see that is when you have a lot of microservices, each with their own dedicated development teams. You don’t see it with 2-3 microservices with 5-10 total developers. So if you’ve got several 10-20 person dev teams, each using their own data persistence layer, then you usually see them doing their own database administration.

04:04 Cameron Harding: Who is the Brent Ozar for Microsoft Fabric training? Microsoft. They’re the only company that can afford to continuously lose money building training material for a product that’ll get a name change and architecture revamp every full moon.

04:59 Red Utley: Is Intel Hyper-Threading ever worth the licensing pain for SQL Server performance improvements? When you license physical boxes, you license physical (not hyperthreaded) cores, so there’s no licensing pain there. If you’re licensing by the guest, then typically the bigger CPU problem is the noisy neighbor VMs, not hyperthreading.

05:55 ChompingBits: I found out our backup plan, doing full backups on all our user databases (around 46 TB), was out of the norm. I concede it is overkill to do fulls every day, but it sort of shook my confidence a bit. In your career, what did you find out was a house policy and not a best practice? Happens all the time – it’s called cargo cult programming. Also, be aware that daily fulls on 46TB is indeed totally normal – it’s just that you’ll wanna use SAN snapshot backups at that size.

07:32 Maria Bonnevie: Should we ever be worried about a high number of sessions / connections to SQL Server in relation to other metrics (sessions to server memory, etc.)? No, but I’m usually concerned when I see >1000 connections because at some point, they’re all going to pipe up and do something simultaneously. That’s where your heartaches begin.
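If you want a quick read on how many user connections you have right now, one way is the sys.dm_exec_sessions DMV:

```sql
-- Count current user sessions (excludes SQL Server's own
-- background/system sessions):
SELECT COUNT(*) AS user_sessions
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
```

If that number is regularly north of 1,000, it’s worth digging into which applications own those sessions and whether connection pooling is configured sanely.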

09:07 Montro1981: Hi Brent, I’m on a fishing expedition and I hope you can direct me to the right pond. I have a filtered index on a date range, but the stats histogram is going all wonky on me all the time: the first 100 steps are 1 day each, but the rest (7,470) get put in 1 step. That’s beyond an Office Hours question; it’s a consulting-style question. It’s just such a niche topic.

09:50 hamburger sandwich: Is there automatic Windows OS patching for availability groups where it will fail over and update? Not in a way that you’ll be comfortable with if you have 24/7 activity and long-running tasks like backups or CHECKDB. Every patching tool I’ve ever seen will happily patch and reboot in the middle of a database backup or CHECKDB.

10:51 Aashly: Hi Brent, we have an upcoming project where the team plans to perform an in-place upgrade from SQL Server 2017 with mirroring to SQL Server 2022 with AOAG on a prod server. What are your thoughts on this, and what precautions can be taken if we are left with no choice? I don’t do it.

11:51 Paul Oakenfold: Will you be visiting the new Fontainebleau Hotel when it opens? Not for a while at least. To learn more, check out VegasPaulyC and VegasStarfish on TikTok – they’ve done behind-the-scenes tours & discussions of it.

12:33 She-Ra: Is clustered index fragmentation any more consequential than non-clustered index fragmentation? Watch this.

13:10 Hal: What’s your opinion of Jupyter Notebooks and have you considered using them in your training classes? I absolutely love them, but … because they don’t show query plans, I can’t use them in training classes. You can get plans in Azure Data Studio query tabs, but not in notebooks.

And now if you’ll excuse me, I’m going to go throw my computer in a river and pour myself a drink.