Normally, after I do a round of Office Hours going through the questions posted at https://pollgab.com/room/brento, I answer the highly upvoted ones and then clear the queue. If y’all didn’t upvote questions, then I don’t answer ’em.
However, today I’m trying something different: I’ll post the non-upvoted questions here for y’all to see if there’s something you want to answer in the comments. I’ll number ’em so y’all can refer to ’em easier in the comments.
1. GI Joe DBA: Thoughts on contract-to-hire jobs? I don’t like them; I want to be hired FT immediately. I think agencies use them to squeeze $$ from the placement, but there’s no guarantee I will be hired, and it’s a hassle switching benefits. I’ll only consider it if the pay is higher than average.
2. lockDown: Hi Brent, we work on a multi-tenancy SQL DB and we need to split the data into 2 DBs. Since there are multiple apps writing to the DB, the devs are asking for a schema lock on the (2k) schemas that will be moved. I feel like stopping writes should be done in the app. Am I right?
3. Jarkko: Should there be any concerns when a query runs in 2 seconds but consumes 16 seconds worth of CPU time (SQL 2019 Enterprise, 64 cores)?
4. RonS: Hello Brent, moving to SQL Server 2019 Standard with HA. 7 databases that need to talk with each other. Standard Edition requires each DB to have its own IP address. In the current environment, all the DBs live on one server and can see each other (db.schema.table). Are linked servers the right direction?
5. CPL: Hi Brent. What is the process that actually updates indexes? E.g. a table with 3 NC indexes where all three indexes contain Col1. If one of these indexes is used in a query where we then update Col1, how are the other 2 indexes subsequently updated? Is it still via the transaction log?
6. Wenenu: What are the top SQL index naming schemes you see in the field? How did the various schemes originate?
7. Anatoli: CONTAINSTABLE and FREETEXTTABLE seem to hide the logical reads when STATISTICS IO is enabled. Is there an alternate way to see the logical reads incurred by these TVFs? Using SQL 2019 in 2014 compat mode.
8. Sebastian: How would you change SQL Sentry Plan Explorer to make it more usable / friendly?
9. DBA_preparing_for_jobsearch: Could you suggest a good strategy for improving left anti semi joins?
10. Least Significant Who In Whoville: Hola mi amigo. I am the sole DBA at a recently acquired adult beverage manufacturer and our new controlling company. Do you have any suggestions on how to break the news that their ad hoc architecture is one mistake away from disaster, when their shade-tree DBAs know more than me?
11. Itching to Emigrate: Is it true that Americans in Europe working remotely for American companies fall under GDPR? Can you point us to resources for this question? Everything I see only looks at the consumer side of GDPR.
12. reps_for_bulk_deletes: For the “Full” recovery model, how much bigger than a row of data are the log entries associated with the deletion of that row? I’m trying to come up with a heuristic for when to approach alternate deletion methods by looking at the remaining space in the log and on the log disk.
13. BlitzFan: Hi Brent, at our shop we make all kinds of backups: SQL backups, Avamar backups of SQL backup shares, and of entire database servers. But we never test those Avamar backups, so no disaster recovery tests and no disaster recovery plan. What would you say to management?
29 Comments
3) If a query runs in 2 seconds but consumes 16 seconds worth of CPU time, then it’s probably gone parallel. To confirm this, run the query manually in SQL Server Management Studio with the Include Actual Execution Plan option turned on to see the execution plan. This will show you whether parallelism has taken place. If you’re happy that it takes 2 seconds but uses a bit more CPU, then don’t be concerned.
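If you want to see the CPU-versus-elapsed split for yourself, here’s a rough sketch using SET STATISTICS TIME and the plan cache DMVs; the placeholder query is just an example, not the asker’s query:

```sql
-- Show CPU time vs. elapsed time for whatever you run next in this session.
SET STATISTICS TIME ON;

-- Placeholder query; substitute the 2-second query in question.
SELECT COUNT(*) FROM sys.objects AS o1 CROSS JOIN sys.objects AS o2;

SET STATISTICS TIME OFF;

-- Cached queries whose CPU time exceeds elapsed time have likely gone parallel.
-- The DMV reports times in microseconds; converted to milliseconds below.
SELECT TOP (20)
       qs.total_worker_time  / 1000 AS total_cpu_ms,
       qs.total_elapsed_time / 1000 AS total_elapsed_ms,
       qs.execution_count,
       SUBSTRING(st.text, 1, 200)   AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.total_worker_time > qs.total_elapsed_time
ORDER BY qs.total_worker_time DESC;
```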
1. Yes. You’d be amazed at how much money those agencies make, not just 10% off the top but sometimes ridiculous amounts like 300%. But hey if you need the job…
3. Not necessarily. It depends on the plan and also if you still have enough CPU headroom for the rest of your queries. “It’s only a problem if it’s a problem.”
10. I think we’ve all been here. To quote a certain synthetic, “I can’t tell you your odds but you have my sympathies.”
12. Whenever you’re trying to do something that complicated… I’d probably reconsider. With that said, you can use fn_dump_dblog to see for yourself (though they say it’s not good to do on production).
13. “Can we get a spare server to test backups on? Without it, this is the impact… and you will have to accept that risk.” If the answer is no, start a logbook in your documentation system named “critical decisions”, and keep a record of the request and date. Ask again once a year I guess.
1. I don’t like these thoughts either.
2. Yes, you are.
3. My concern is “where are my 14 seconds, cpu?”
4. Yeah, left oriented linked servers usually perform worse.
5. That’s a mystery for me too
6. I named my indexes after my exes, so now I’m not afraid to drop ’em
7. I had no idea these functions existed
8. Did you try booze? Or money?
9. Anti semi-ism is wrong
10. What’s an adult beverage? Kefir? Kids these days …
11. http://www.usa.gov
12. You really have a lot of free time. Like, A LOT
13. Well, they should allow you to do all kinds of restores: SQL restores, Avamar restores of SQL restore shares and of entire database servers
Very entertaining 😀
I’ve tried to channel my inner Brent with a mix of snark and helpful comments.
3. That’s called “parallelism” and is generally a good thing.
5. Check the actual execution plan, and you’ll see.
6. Be consistent, avoid Hungarian notation, let the system generate them and take it as an inspiration
7. Query store or Extended Events
8. Enable toggle to skip the UDF plan collections, improve the History panel
9. Switch the order around and use the right anti semi join.
10. Use facts and logic
12. What problem are you trying to solve? The recovery model isn’t chosen for transaction log space usage, but for the RPO.
13. https://www.sqlskills.com/blogs/paul/importance-of-having-a-good-disaster-recovery-plan/
I think that Brent, instead of crying, said “well, I know that people who have actually paid attention to my comments can answer these easily, so I don’t have to repeat myself for the 10,000th time” or something like that. Especially for questions like #13, but really most of these questions.
I’ll challenge that answer to #3… I think Brent would first ask “Is this a problem that someone is complaining about?” If yes, then you might look to see whether parallelism is actually hurting you, helping you, or is neutral. If your parallelism is not an issue, then you get into whether you are doing redundant things in the query, or have stupid functions, etc.
11. The GDPR concerns the protection of residents* (citation needed, I am not a lawyer) of the EU, so I guess your company would effectively need to be GDPR compliant. I would bet that everything your company does with your employment-related personal information is perfectly legal as long as they can reasonably say that it is essential for them in order to fulfil their obligations like contracts, laws & regulations, etc.
Strictly speaking, they would perhaps need to do some legal/documentation work, like creating/obtaining data processing/subprocessing agreements (DPAs) and having vendors that deal with employee information sign those. Think outsourced salary admin, insurance companies, IT services, anyone who deals with personal information.
I would personally not worry at all about the GDPR, but rather the potential taxation implications for you as a US citizen working abroad. For you personally, that feels much more relevant and tangible. 🙂
1. Some companies love contract-to-hire. You need to figure out the scenario behind it to tell if it’s a shitty company or not–so you need to ask some questions. Here are 2 scenarios, and what I think of them.
A) Contract employees are sometimes paid out of the department’s operational budget, where full time employees (FTEs) are paid out of a different budget. They need help ASAP, but don’t have a headcount approved by HR, or it’s approved but not until next year. In this case, contract-to-hire makes sense. You get in sooner, get to start helping while the company sorts out their budget stuff. That position might never be approved, but you know they want you there. This isn’t a bad thing–the decision is just about personal risk.
B) They hire everyone as contract-to-perm. They might renew your contract, they might hire you as an FTE, or they might cut you loose after the initial few months of contract. They effectively use the contract term as a probationary period. This is a huge red flag, and I’ll never work at a company like this. It usually is an indicator that there is a culture where poor performers aren’t dealt with properly. Rather than have meaningful feedback & coaching conversations during your first few months, they prefer to just tell you “Sorry your contract wasn’t renewed” and get rid of you. But even if you are hired full time, expect there to be poor performers who aren’t getting proper performance management–that can be even more frustrating than not being brought on as an FTE.
C) I’m not actually going to keep going… you get the idea. Ask them why any given role is contract-to-hire, and think about whether the hiring company is being selfish, or nimble in solving staffing problems. Then decide if it fits your personal risk tolerance and how you feel about that kind of employer/employee relationship. Outside recruiting firms might influence the employer’s decision-making process, but it’s the company’s actions (not the recruiter’s) that you really need to be critical of.
^ for the win.
1. The way I see it, the company is suspicious of new hires and wants to make it easy to drop them if they don’t work out. That tells me that either (A) they are not confident in their hiring process or (B) they have a fairly toxic culture; either is a red flag to me.
3. This probably just means it’s going parallel, I would not be concerned based on this number by itself.
6. IX_number is very common. Actually, IX_anything is very common. It’s common for the same reason so many people make string inputs N'' (nvarchar): one person did it, the next person saw it and published a script using it, and before you know it everyone is doing it without fully understanding why. IMO there’s no need for a standard generally; it’s a good idea to put at least the first key column in the name, and your company name if there’s a need to separate from vendor objects, but how you format that doesn’t matter. (A quick sketch of what that looks like follows this list.)
8. Viewing plans in SSMS 18 is good enough; I don’t feel like I need anything else, at least for my job.
10. Ask questions, something like “I see that you’re doing this process this way, which appears to contradict [insert expert article or first-party source]; can you help me understand why this decision was made, or am I missing something important?” If those DBAs know more than you, they may have built things the way they did because they know some cool trick behind the scenes that you can learn.
12. DBA time is a lot more expensive than disk space; just make sure your logs are big enough and getting backed up frequently enough. I don’t see how working out this process you’re proposing could possibly be worth the effort.
13. This is the sort of thing that can get you promoted. Assemble all the information: what your process looks like now and what the drawbacks are. Then make a proposal for a real disaster recovery plan and offer to lead that effort.
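To make the naming idea in #6 concrete, here’s a rough sketch; the table, columns, and the “ACME” company prefix are all made up:

```sql
-- Hypothetical examples of the common IX_TableName_FirstKeyColumn style.
CREATE NONCLUSTERED INDEX IX_Users_LastName
    ON dbo.Users (LastName);

-- Some shops add a company prefix (a fictional "ACME" here) to keep
-- their own indexes visually separate from the vendor's.
CREATE NONCLUSTERED INDEX ACME_IX_Users_LastName_DisplayName
    ON dbo.Users (LastName, DisplayName)
    INCLUDE (Reputation);
```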
3. Watch the Green Lantern movie (yes, the one with Ryan Reynolds), the main baddie is Parallax. This is like that, but Parallelism is not a villain and the CGI is better.
6. Initials_Field_Includes (if used): for example, CU_Users_Votes or CU_Users. So: who created it, its main indexing feature, and whether it has an include.
10. Present your observations, make them prove your observations wrong. But do so nicely, and bring snacks. But if they can’t argue out of it: Change it!
13. If it is your job on the line, test those backups yourself. If you aren’t allowed to do so and it is still your job on the line, find a new one ASAP.
I just rolled my eyes at the fact that most of these are Google-level questions. It’s like folks that post on Reddit about a leaking water heater asking what to do first.
A big chunk of Brent’s audience are early-career and/or non-DBAs, and don’t necessarily know how to tell the difference between good & bad advice on the internet. I’ve read a lot of bad advice in Google search results; it’s easy for me to tell the difference because I’ve used SQL Server for 20 years. But I assure you that in 2003, I didn’t even know the right search terms for most of my questions.
Office Hours is about learning new stuff. Please don’t shame people for bringing Brent some “easy” questions. What’s easy for you isn’t easy for everyone. I really wish I had Brent’s office hours when I was just starting out & struggling because there wasn’t enough intro-level content.
Well said and totally agreed!
1. Seeing is believing. Same goes for doing.
4. Linked servers right direction? H*ll, NO!
5. 1. https://www.sqlskills.com/blogs/paul/inside-the-storage-engine-anatomy-of-a-page/ 2. https://www.sqlskills.com/blogs/paul/inside-the-storage-engine-anatomy-of-a-record/ 3. https://www.youtube.com/watch?v=fxfxTD5EmSw. And yes, it’s still via the transaction log, just like (almost) everything else.
7. Trace or Extended Events.
9. Left anti semi joins? Not a Microsoft thing, baby. ANSI SQL, yes, but not MS. EXCEPT is what you are looking for (a sketch follows this list).
10. any proof of that statement?
12. https://rusanu.com/2014/03/10/how-to-read-and-interpret-the-sql-server-log/. To answer your question, you need to delete in batches: https://sqlperformance.com/2013/03/io-subsystem/chunk-deletes (a minimal sketch of that approach also follows this list).
13. They need to be aware of the consequences, and they need to acknowledge that in writing. Everybody talks about RPO and RTO, but nobody talks (anymore) about CYA. There used to be quite a good series of books on that back in the day, from Syngress, I believe.
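On #9 above: for illustration, both of these patterns typically show up in SQL Server execution plans as a left anti semi join. The Users/Votes tables and columns are hypothetical:

```sql
-- Users with no matching row in Votes; table and column names are hypothetical.
-- EXCEPT compares the selected columns and removes duplicates:
SELECT u.Id
FROM dbo.Users AS u
EXCEPT
SELECT v.UserId
FROM dbo.Votes AS v;

-- NOT EXISTS does the same anti-semi-join work without the implicit DISTINCT:
SELECT u.Id, u.DisplayName
FROM dbo.Users AS u
WHERE NOT EXISTS
      (SELECT 1 FROM dbo.Votes AS v WHERE v.UserId = u.Id);
```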
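And on #12: a minimal sketch of the batched-delete approach from that second link. The table name, the 5,000-row batch size, and the cutoff date are all hypothetical:

```sql
-- Delete in batches so each transaction (and its log usage) stays small.
-- dbo.EventLog, the batch size, and the cutoff date are hypothetical.
DECLARE @RowsDeleted int = 1;

WHILE @RowsDeleted > 0
BEGIN
    DELETE TOP (5000)
    FROM dbo.EventLog
    WHERE CreatedDate < '20200101';

    SET @RowsDeleted = @@ROWCOUNT;

    -- In FULL recovery, frequent log backups between batches let the log reuse space.
END;
```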
4) Just go buy Enterprise Edition. Your time figuring out how to synchronize failovers of 7 BAGs so your linked servers don’t kill your IO performance (network I/O is a killer) is worth more than the licensing fees. And you will get better utilization out of your RAM on the existing servers. Unless of course your boss treats your time as if it is free. Then go get another job and let the next guy tell him the same thing.
4) You don’t have to create listeners for BAGs. You can still use the mirroring type of connection string to each database, and when a BAG fails over, the connection string will still work — our devs didn’t want to go the listener route. A big problem with BAGs (plural, as in multiple databases each in its own BAG) is that one can fail over but leave the others behind, and cross-database queries will fail. IMHO, if you can budget for the Enterprise licenses, do so (the ability to use more than 128 GB of RAM may be helpful too). Keeping on top of database/server synchronizing can be a headache. Going down the linked server rabbit hole is something I would avoid, and I would also use that as a justification for getting the Enterprise licenses.
13) Old proverb — If you fail to plan, plan to fail.
I am going to assume the SQL backups are only available online for a certain time (i.e. routinely deleted). How long are the backups available online? How often are those backups tested? Ask what the RPO is for the business, and whether they believe it can be met if you are not regularly testing your backups and confirming that you can meet it. You can search and find stories of people not testing their backups, and when it came time to actually do a recovery, the backups were corrupt. Ask management what the cost to the business would be if your most recent backup was bad. What if all your backups from that system were bad? These are reasons to present to management for testing backups and making sure you have the necessary equipment to test those backups. If your job depends on being able to do that recovery and the company won’t give you the tools you need to protect business continuity, I would look elsewhere while you can.
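If it helps, here’s a bare-minimum sketch of what “testing a backup” can look like in T-SQL. The paths, database name, and logical file names are hypothetical, and a real DR test is a full restore plus application-level checks, not just this:

```sql
-- Bare-minimum checks; file paths, database name, and logical file names are hypothetical.

-- 1) Verify the backup file is readable and its checksums are intact
--    (assumes the backup was taken WITH CHECKSUM):
RESTORE VERIFYONLY
FROM DISK = N'\\backupshare\sql\SalesDB_FULL.bak'
WITH CHECKSUM;

-- 2) Actually restore it on a test server under a different name:
RESTORE DATABASE SalesDB_RestoreTest
FROM DISK = N'\\backupshare\sql\SalesDB_FULL.bak'
WITH MOVE N'SalesDB'     TO N'D:\Data\SalesDB_RestoreTest.mdf',
     MOVE N'SalesDB_log' TO N'L:\Log\SalesDB_RestoreTest.ldf',
     CHECKSUM, RECOVERY;

-- 3) Check the restored copy for corruption:
DBCC CHECKDB (SalesDB_RestoreTest) WITH NO_INFOMSGS;
```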
1. I like to think it’s a smart way to keep only good people. It’s hard to hire only really solid people.
2. I would think the app could only do a full stop (reads and writes). Can you not just put the whole DB into read-only easily using the rollback immediate technique? (A sketch of that follows this list.)
3. Depends if 2 secs is acceptable and whether you have overall CPU pressure; otherwise, parallelism seems to be a nice help to you.
6. Brent seems to hate answering questions like this. Most of my clients go with “IX_TableName_1stColname”. Some will do “CIX…” for the clustered. Some will suffix it with “…INCL#” for the number of included columns. My guess would be that many naming conventions originated in the earliest programming languages that used prefixes to describe variable data types. I love prefixes for most object names, but not databases, tables or columns. My religion says column names should always have a suffix to describe the datat type (aka class word).
9. I wish I had been present when the 2st developer called an outer join a “left join”.
10. I think all architectures are 1 mistake away from disaster. Be kind. Nicely ask them if they’ve ever run sp_Blitz, and/or have a conversation that starts with “Have you ever thought about [x], because in my experience [y]…”
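Regarding #2 above, here’s a sketch of the rollback-immediate technique; the database name is hypothetical, and note that it rolls back any in-flight transactions:

```sql
-- TenantDB is a hypothetical database name.
-- Kick out in-flight transactions and make the whole database read-only:
ALTER DATABASE TenantDB SET READ_ONLY WITH ROLLBACK IMMEDIATE;

-- ...move the schemas, then open it back up for writes:
ALTER DATABASE TenantDB SET READ_WRITE;
```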
Correcting typos: 6 – datat = data, 9 – 2st = 1st
I actually pronounced 2st as Twist
2. The more I read this, the more I keep coming back to one question: “How is the data for each tenant delineated today?” Regardless of the answer, set up a testing environment to find dead ends and pain points. There’s a lot of potential for disaster, so make sure your whole team understands the plan and the downtime, and make sure you have backups and a recovery plan. My experience has been some sort of scheduled downtime during which the application, or the affected portion of the application, is taken offline. Then work is performed and the application is verified as operational before being made available to end users.
13. Personally, I wouldn’t run to management straight away, as they prefer people who make less work for them, not more… though maybe you did this already. I would see what you can do first to understand the pain points in restoring, by asking the backup admins to do a file recovery of a few .bak files. Talk with them about the requirements for restoring the image of a small server into an isolated test environment, and then actually do so if the resources are available. Start with a small scope and work bigger.
My first total disaster recovery plan included a credit card, a list of hardware specifications, and vendor phone numbers. I also had a copy of the installation media necessary to run restores and the order in which restores would need to occur. I then presented the estimated RPO and RTO to management, and at that point it was up to management to decide whether it was adequate.
If your backup admins are not willing to work on this, even at a low priority, then I would go to management and explain what you want to do (in Brent’s words, “the problem you are trying to solve”)… but if possible, come up with a plan with your backup admins.
1. CTH does suck, and I’ve done it before with an employer where EVERYONE had to pass an interview with the CEO/Co-Founder, who had one of the most unique methods of interviewing I’ve ever encountered. We were coached by internal recruiting on how to pass the interviews, as there were typical questions he asked, so you had to have your answers rehearsed and able to come out naturally without stuttering or getting too emotional. In that respect it was good for CTH, to be paid while going through all this. They lose candidates who take other jobs while going through the long interview training process. I know others who like CTH because sometimes they decide that they really don’t like the employer after all and can bail without guilt after a few months while making a contract rate.
10. Update your resume and look for another job. You do not want to be on that Titanic when it goes down. Otherwise you know who is going to get blamed and have to do the massive Hurricane Katrina cleanup? You. Run don’t walk.
13. You need to test your backup and recovery strategy. You need to have a tested recovery plan. Period. Game day isn’t the time to figure it out while the company is losing X dollars per hour you’re down. Even worse is if it turns out you just lost ALL of your data because the backups aren’t recoverable. I’d ask management how much money and how many customers they’re willing to lose for every hour they’re down during a disaster or outage, or worse, if they lose all their data because the backups are corrupt. Management listens to $$$ talk, so if you can put it into financial terms, that’ll make them sit up and listen! And if they don’t care about the money or business loss, then find somewhere else to work, since that company isn’t going to be around in the long run.
3. This simply means that the query is using multiple CPU cores to do the work.
The way I would approach this is to find what my server’s top waits are (using sp_blitz). If the top waits are CXPACKET or SOS_SCHEDULER_YIELD, then high CPU usage could be a problem that I would look to solve. I would start by checking that my server’s MAXDOP and CTFP are set to sane values. Then I would drill into the top CPU-consuming queries (using sp_blitzcache) and try to optimize those. (A rough sketch of those checks follows the links below.)
Refer these for more info:
https://www.brentozar.com/archive/2017/03/why-cost-threshold-for-parallelism-shouldnt-be-set-to-5/
https://www.brentozar.com/blitz/configuring-parallelism/
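Here’s a rough sketch of those checks, assuming the First Responder Kit procedures are installed; sp_BlitzFirst is what I’d personally reach for on waits, and the specific parameters shown are my assumptions rather than the commenter’s:

```sql
-- Current MAXDOP and Cost Threshold for Parallelism settings:
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN (N'max degree of parallelism',
               N'cost threshold for parallelism');

-- Top waits since startup (First Responder Kit):
EXEC dbo.sp_BlitzFirst @SinceStartup = 1;

-- Top CPU-consuming queries in the plan cache:
EXEC dbo.sp_BlitzCache @SortOrder = 'cpu';
```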
13. I’d maybe start by asking their RTO (recovery time objective), i.e. what’s the maximum time they are willing to tolerate things not working in the event of a disaster. Then you can start to talk about how you have no way of knowing if you can hit that number because you never test your restore process (this is also a good time to mention that you don’t actually have any proof the restore process even works without some testing). Also be prepared with at least some information about what you need to test the recovery process (how many servers, who’s going to be involved, that kind of thing), otherwise management can’t make any kind of informed decision on the cost/benefit of running these tests.
If they say no, that’s fine, you’ve raised the risk and they’ve accepted it. Just make sure you have a record of your summary of the risks involved, and the management decision to take those risks.
5. Yes, whenever that column changes via insert/update/delete, all indexes with that column are updated. You can see it in the execution plan, and when you add loads and loads of indexes to a table you will see transactions slowing down. (A tiny repro sketch follows at the end of this comment.)
8. (Improving SQL Sentry Plan Explorer):
– Make the plan movable as in SSMS. It’s always annoying if you switch over and can’t simply click onto free space and move it around.
– I hate seeing “Expr1234” in the plan and having to manually go to the Expressions tab to find out what it could be. It would be nice if I could at least hover over it with the mouse (I know, it’s difficult in a tooltip) and see the formula behind it.
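Back on #5: a tiny hypothetical repro (all object names made up) to see the index maintenance in the actual execution plan:

```sql
-- Hypothetical repro: three nonclustered indexes that all contain Col1.
CREATE TABLE dbo.Demo
(
    Id   int IDENTITY PRIMARY KEY,
    Col1 int,
    Col2 int,
    Col3 int
);
CREATE INDEX IX_Demo_Col1      ON dbo.Demo (Col1);
CREATE INDEX IX_Demo_Col2_Col1 ON dbo.Demo (Col2, Col1);
CREATE INDEX IX_Demo_Col3      ON dbo.Demo (Col3) INCLUDE (Col1);

-- With the actual execution plan turned on, this update shows maintenance of
-- every index containing Col1, not just the one used to locate the row.
UPDATE dbo.Demo
SET Col1 = Col1 + 1
WHERE Id = 42;
```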