Got questions about the Microsoft data platform? Post ’em at https://pollgab.com/room/brento and upvote the ones you’d like to see me cover. Today’s episode finds me in my home office in Vegas:
- 00:00 Intros
- 02:10 GeorgeDAX: Hello Brent, I’m dealing with a huge sales table which contains around 500 columns with 50M rows. The business is asking to add even more columns to the table. Should I break up the table or keep adding columns to the same one? Performance-wise, what’s the best approach?
- 04:47 Brynjar: Do you have a suggested max column limit when designing new tables that require fast queries?
- 05:58 Manuel Labour: What is the largest stored proc you ever had to optimize? How long did you spend on it?
- 07:24 Yavin: When analyzing SQL memory use, what are healthy / unhealthy utilization ratios for MEMORYCLERK_SQLBUFFERPOOL, CACHESTORE_SQLCP, MEMORYCLERK_SQLGENERAL?
- 08:21 Eduardo: Is it safe to run sp_blitzindex on a busy OLTP server (3000+ batches per second) during peak business hours or should this be done after hours?
- 10:22 3rdNormalForm: Heaps & vendor packages: I have a 43 million row table called transactionxlog with no keys and no CX. sp_blitzcache shows 7 queries against this table with costs of 883. I took the recommended composite index and made it into a clustered index. What is the [UNIQUIFIER]?
- 12:05 DB_Architect_Chick: Hi Brent. I’m designing a new database for 50k inserts/selects (minimal updates) per hour. I am doing my best to make mature design decisions, but with no actual data to test against, do you have any suggestions for establishing an effective initial index strategy?
- 15:22 Eduardo: What is your opinion of Intel QAT backup compression in SQL 2022?
- 17:34 Vilhelms: Is there a good way to know when a query caused an auto stats update and how long the auto stats update took?
- 19:39 Pete S: What are your opinions/thoughts on the “SQL Graph Database” feature that was introduced in SQL Server 2017? Have you worked with any clients that use it in production, or is this another “copy-cat” feature to keep up with the database Joneses?
- 20:57 RaduDBA: Mr. Brent, with all the cloud solutions and cloud databases that are taking over the data landscape, like data warehouses, lakehouses, etc., is there still room for SQL Server and traditional on-prem databases? I work as a DBA and do a lot of performance tuning. Is my career doomed?
- 23:43 cyrpl: Brent, can you suggest a stored procedure source control tool? We have GitLab for application source control, but we specifically want to track all changes to SQL Server stored procedures.
- 25:56 YouGottaDoWhatYouGottaDo: Hi Brent! Have you ever encountered a very toxic work environment (dictator-boss, widespread discontent etc.) that created problems for you during your consulting work (even 1-day consultancy)? How did you handle this?
5 Comments
Sorry I missed this OH, and backups don’t appear to have been a topic, but wondering if there is any insight into why DB restores are unbelievably painful within Azure SQL. We seem to be averaging between 20-80 minutes with < 50% success on a less-than-200 MB database, and results are still mixed regardless of whether we do point-in-time or full restores.
So I think you’re saying you have a question, and you’re wondering where to post it for me to talk through it.
I wonder if there are instructions in the post.
It’s a long post, but I bet you can figure it out. Start at the very beginning. It’s a very good place to start.
Touché, I deserved that and have reposted.
@cyrpl – I use SQL Server Data Tools in Visual Studio (https://docs.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt?view=sql-server-ver16). Redgate’s SQL Source Control is also supposed to be good, although I am not familiar with it.
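If a full tooling rollout is overkill, one lightweight approach (a sketch, not an official method from either tool above) is to script the procedure definitions out of the catalog views yourself and commit the results to git. The query below pulls every stored procedure’s definition from sys.sql_modules; how you write each definition to its own file is left to whatever scripting you already use:

```sql
-- Hedged sketch: list every stored procedure and its T-SQL definition
-- so each one can be saved to a file and tracked in source control.
-- Run this per database; sys.procedures and sys.sql_modules are
-- standard SQL Server catalog views.
SELECT
    s.name       AS schema_name,
    p.name       AS procedure_name,
    m.definition AS procedure_definition
FROM sys.procedures AS p
JOIN sys.schemas     AS s ON s.schema_id = p.schema_id
JOIN sys.sql_modules AS m ON m.object_id = p.object_id
ORDER BY s.name, p.name;
```

The trade-off versus SSDT or Redgate is that this only captures state when you run it, so it works best scheduled as a nightly job that commits any diffs.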
@Dan, what makes you say db restores are so painful in Azure SQL DB?