On a cross-country trip, I stopped in Moab, Utah and answered your top-voted questions from https://pollgab.com/room/brento.
Here’s what we covered:
- 00:00 Introductions
- 01:20 YourbiggestFan: Do you have any plans to write an Azure-first version of the First Responder Kit (sp_BlitzFirst and sp_BlitzCache) which can get the metrics from Query Store and thus overcome the issue of DMV stats resets on Azure? On PaaS platforms we need a reliable method to analyse perf.
- 03:06 T.C.: What are the most underutilized features of SQL monitoring software?
- 04:24 Drew Furgiuele: What’s the most common “complaint” you hear from DBAs these days? Is it a lack of funding for good hardware/higher cloud performance tiers? Training budgets? Developers?
- 07:04 Zee: Hi Brent, how can an ORM developer (Hibernate, Spring) get better at sending queries to databases? And how can a DBA convince ORM developers that a 15k-character query with 29k output every 30k times is not a very good thing and could be done better in a stored procedure?
- 10:01 Yousef: What are the top unforgivable DBA mistakes?
- 11:25 Ryan: Where do you see the DBA profession going in the next 5-10-15 years?
- 15:35 Hany Helmy: Hello Brent, my friend is searching for your article on "Your first day as a DBA in a new company" and couldn't find it. Can you help him?
- 16:05 Dave Dustin: Of all the database project CI/CD pipelines you’ve seen in your career, do you have any recommendations or tips?
- 17:16 Does Basically Anything: I know your standard recommendation for storing BlitzFirst outputs is every 15 minutes, retaining for 7 days. What would be the maximum you'd recommend if a friend were interested in retaining more, or collecting all the data at more frequent intervals? (Using ALL output tables)
- 17:52 Harika: Do you have an opinion on when the SQL log backup job should be temporarily stopped on a busy SQL 2019 OLTP server (i.e. index maintenance, full backups, diff backups, etc.)?
- 18:52 Gülnaz: What post update sanity checks do you like to perform after applying a SQL cumulative update?
- 19:58 Midwest DBA: Have you ever worked with a very large company? Target, Walmart, GMC, etc. Are their DBA teams full of highly skilled DBAs, or are they much like everywhere else, i.e. one expert and a few regular folks?
- 22:11 Pessel: Who is the community’s “Brent Ozar” for all things SQL Security related?
- 22:51 NullPointer: My friend wants to know the best way to get some type of high availability (willing to have 30m-1hr of downtime). They have 1 database per client (sitting at 50 DBs but will grow) and use Standard Edition with no dedicated DBAs (just devs). Suggestions (Always On, log shipping, etc.)?
- 23:20 Alec Roques: Upon checking a table with sp_BlitzIndex, do you ever drop indexes with a low amount of reads to a high amount of writes (for example, Reads: 80 (56 seek 24 scan) Writes: 670,011)? How do you make that determination? Or will you wait until you see locking and blocking problems?
- 24:05 Preben: Are you open to visiting Belgium?
- 25:06 Latka: How do you determine the optimal time interval for backing up the transaction log (30, 15, 10, 5 minutes, etc.) if transaction log size/growth is the primary concern?
- 26:55 MancDBA: Non-SQL question – you have talked previously about some of your awesome cars (Helmut, the Ferrari, etc.). What do you use as a daily driver? Cheers!
15 Comments
Maybe you could make some intro courses for using other people's monitoring software 😀 I know they change all the time and whatnot, but at least cover the basic things like you said in this video – go look at the biggest wait stats for that timeframe or whatever.
That’s a tricky one. Would you pay $89 for an intro class to one specific monitoring app? If so, which one?
You could do the class for free, and get the vendor of the monitoring tool to sponsor it.
Why would they pay me to do it? Why wouldn’t they have their own staff do it for free?
LOL.. Like their own staff would have the following you have.
Oh, I might have misunderstood – I think HappyDBA is saying, “we have this tool, and we need to know how to use it.”
I think you’re saying, “We don’t have the tool, and we want to sit in a Brent webcast on how to use it, and then get convinced to buy it.”
Yeah, I don’t really want to get into the selling-other-peoples-tools business. That doesn’t really make sense for my direction for my career – but I can see why you’d want to have me demo other companies’ tools for free. 😉
Well, for me, I know how to use mine but I’ve heard a number of times now that people have monitoring they don’t know how to use (not just from you). After listening to this office hours, it just seemed like a totally helpful thing to make a training on if you have to tell clients how to do it frequently anyways. Just a simple intro on how to use your monitoring app to find why your server fell over last night. Since we all know everyone is more likely to attend your trainings than read the documentation 😀
As a response to your question though, if I didn’t know how to use mine already, yes I would pay $89 to learn it from you instead! I might anyways just because you usually point out some of the less commonly known things, so I would figure I would still learn something from you regardless of what I think I already know 😀 RedGate btw.
Also, I learned from poking around and reading the documentation over a long period of time. A training from you would have been a serious time saver! So there’s that too.
TBH, I can't see how such a training course would be any less valuable than any of the other courses Brent offers.
Right, but that’s not the question – it’s who specifically would pay $89 for one of those courses, and on which monitoring tool? HappyDBA’s already said “if I didn’t know how to use mine already” – so that’s where it gets tricky.
I gave them something in writing once, they fired me.
I think the issue with such large queries from an ORM is that, in my experience, ORM queries aren't reusable and require recompiling each time they're executed. I'm not sure how often that's happening for him, and his question looks like it has something missing. But if "30k" indicates how often this is happening per second, that would be quite an unnecessary load on the CPU.
In the past, I've told devs: "Why should you use a sproc? Well, if a query has performance issues and it's in a sproc, then I have to fix it. But if it's an ORM query, then you have to fix it." This has returned mixed results. 🙂
Why wouldn’t an ORM’s queries be reusable? (Which specific ORM are you talking about, and what issue?)
I've seen Entity Framework create very complex SQL statements, with hard-coded parameters, that generate a new execution plan each time they're called, because one (unusually many) of the parameters has changed. I have also seen this on another ORM, but I can't remember what it was called.
I've also seen them generate illogical SQL statements, like joining to a table multiple times simply because they needed to return multiple columns from that table, and using a ridiculous number of nested select statements. Fact is, ORMs can generate bad SQL, and while devs who feel they must use them also generate bad SQL, at least when they put that SQL in a sproc, it gives me a chance of fixing it.
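The hard-coded-parameter problem above can be sketched in a few lines: when an ORM bakes literal values into the generated SQL text, every distinct value produces a distinct statement (and, on SQL Server, a distinct plan-cache entry to compile), whereas a parameterized query keeps one reusable statement text. A minimal illustration in Python using sqlite3 purely as a stand-in (the ORMs discussed target SQL Server; the table and values here are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (Id INTEGER PRIMARY KEY, Name TEXT)")
conn.executemany("INSERT INTO Users (Id, Name) VALUES (?, ?)",
                 [(1, "Ada"), (2, "Grace"), (3, "Edsger")])

# Anti-pattern: the literal value is embedded in the SQL text, so each
# lookup is a different statement - on SQL Server, a separate compile
# and a separate plan-cache entry.
literal_statements = {f"SELECT Name FROM Users WHERE Id = {i}" for i in (1, 2, 3)}

# Parameterized: one statement text regardless of the value, so the
# engine can reuse a single cached plan.
param_sql = "SELECT Name FROM Users WHERE Id = ?"
param_statements = {param_sql for _ in (1, 2, 3)}
rows = [conn.execute(param_sql, (i,)).fetchone()[0] for i in (1, 2, 3)]

print(len(literal_statements))  # 3 distinct statement texts
print(len(param_statements))    # 1 reusable statement text
print(rows)
```

The same reuse behavior is what stored procedures (or properly parameterized ORM queries) buy you on SQL Server: one plan compiled once, instead of one per literal value.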