Playing The Odds
Assuming that life carries on as usual for a couple more weeks, I’ll have been a consultant here for three years.
In that time, I’ve had some really interesting conversations with people, mostly about why things are the way they are on their servers.
My favorite response is “isn’t that the default?”, because it’s never the default.
I think that’s called an unforced error.
Here are some of my favorites:
- Nonclustered Primary Keys
- Fill factor < 100
- MAXDOP 1
- Fifteen minute log backups
- Single column indexes
- Indexes including every column
- Database shrink tasks at the end of maintenance plans
- Read uncommitted
- Priority Boost
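If you want to eyeball a few of the instance-level ones on your own servers, `sys.configurations` covers them. A quick sketch, not a full audit — the values noted in the comments are the out-of-the-box defaults, not recommendations:

```sql
-- Check a handful of the instance-level settings above against
-- their shipped defaults.
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN (N'priority boost',               -- default: 0 (off)
               N'lightweight pooling',          -- default: 0 (off)
               N'max degree of parallelism',    -- default: 0
               N'backup compression default');  -- default: 0 (off)
```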
Thanks for reading!
Brent says: Ooo, yeah, my favorite: “SQL Server has built-in emails that alert me when there’s a problem, right?”
Autoshrink & Autoclose
Once I found “Automatically create statistics” off, they said it was the default ¯\_(ツ)_/¯

1mb data file growth… oh wait.
10% file growth… oh wait.
What’s wrong with 15-minute log backups? Not a default, but what I use on one server only.
There’s nothing WRONG with it – just like there’s nothing WRONG with a single column index, necessarily. It’s just not the DEFAULT. That’s what this post is about.
I will say though, that when we’re with clients, and I ask, “Why not 10 minutes? Or 5 minutes? Do you want to lose less data?” Then management usually suddenly turns to the DBA and says, “Wait – yeah, why would you pick 15 when 5 or 10 minutes would let us lose less data? Why 15?”
So… what’s an optimal selection for this?
@Wing it’s entirely dependent on the system. For example, if you have a CRM system you probably want more frequent t-log backups than some ETL/BI system. IMO you cannot have a one-rule-fits-all backup solution.
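If you do tighten the schedule, the backup itself is the easy part. A minimal sketch, assuming FULL recovery, a hypothetical database name `SalesDB`, and a hypothetical share `\\backups\sql` — run it from an Agent job on whatever cadence matches your RPO (every 5 minutes means losing at most roughly 5 minutes of data):

```sql
-- Hypothetical database and share names; schedule this step
-- in a SQL Server Agent job at your chosen frequency.
BACKUP LOG SalesDB
TO DISK = N'\\backups\sql\SalesDB_log.trn'
WITH COMPRESSION, CHECKSUM;
```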
We use CommVault. The discussion with our backup engineer when I said, “Oh crap no. We’re using Ola Hallengren scripts and a file share. This once-a-day backup crap is not going to cut it,” was… interesting.
Pitch this idea to a central backup solution like, let’s say, Commvault – they would cry haha
(Really replying to Luke Salter, but it seems I can’t)
Yeah, I’m forced to have our server team back up my databases with Commvault, and it has a hard enough time getting my 2x weekly fulls done before the differentials the next day…
I can’t even imagine it keeping up with 15 minute log backups…
DBA back to Management – Because that’s what you told my predecessor you wanted.
Manager – You mean my predecessor told your predecessor…. 😉
More seriously, keep reviewing your infrastructure & settings; it’s never set in stone.
Full backups truncate the transaction log, don’t they?
Curtis – ooo, that’s a good one!
Maximum server memory
database settings I love to stumble across in Production:
– < 5 MB Autogrowth settings.
– All the databases' files in the C:\Program Files\… folder path. Log files and backup files included!
– Databases left in Simple recovery.
– Last backup? We back up the server/VM weekly or daily or… isn't that good enough?!?
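The tiny-autogrowth one is at least a quick fix. A sketch with hypothetical names (`AppDB` and its logical file name `AppDB_log`) — a fixed-size growth increment in MB avoids both the old 1 MB default crawl and runaway percentage growth on large files:

```sql
-- Hypothetical database and logical file names; pick an increment
-- sized for your workload, not a percentage.
ALTER DATABASE AppDB
MODIFY FILE (NAME = AppDB_log, FILEGROWTH = 256MB);
```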
And then there are the wonderful vendor-specific database designs that I constantly see. And die a little inside each time:
– We don't need to define Primary Keys in the database. The application knows what the keys are!
– Foreign Keys? See above!!
– Clustered indexes? What's that? We LOVE big HEAPS of data!!
– Naming schemes? No way! It's so much more fun when those undefined foreign key fields change names with each and every table throughout the system! Like a Where's Waldo or a Choose Your Own Adventure anytime you try to write that custom report!
– And we may be using five or six varchar fields to make that implied Primary Key. Or not. You'll never be certain!
I swear, just the clustered index thing? Yeah, the Blitz scripts let me turn around a business-critical system at a previous job that was crashing weekly. To the point of rebooting the database each time and logging tickets with the vendor. Simply by creating about 3 indexes, per Blitz's excellent recommendations. Those clustered indexes ENDED the problem. But the vendor didn't have sense enough to do any of that. "It's your data, not ours!" yeah. But it's YOUR schema, vendor. Yours. Sigh.
“Isn’t Instant File Initialization a default for SQL Server?”
“Isn’t backup compression a default?”
lightweight pooling = 1
Oh man, I had one client argue about backup time – they insisted it had to be noon; “that’s what Microsoft recommends” – and then wondered why the system was slow then.
“Our last DBA said to save on space we should delete records past X days, and if we need to query them we just restore the last backup and apply the logs prior to the deletion… That’s what everyone does, right? By the way, what are logs? And why does the database continue to grow?”
This was on a ~300GB mission critical database…
I didn’t work there long…
I love this. It’s so awful.
4 KB disk block size… Hey… why did the restore take so much time?
Clustered index (or any index) on a GUID identity column aka NEWID() with a fill factor of 100. This was on very busy session state database for a web app. Index maintenance nightly and app was responsive first thing in the morning, but was at a crawl by EOD. Banana splits good. Page splits not so good.
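One common mitigation for that pattern, sketched here with hypothetical table and column names: `NEWSEQUENTIALID()` generates roughly ascending GUIDs, so new rows land at the end of the clustered index instead of splitting random pages all day.

```sql
-- Hypothetical session-state table; NEWSEQUENTIALID() only works
-- as a column DEFAULT, not as an inline expression.
CREATE TABLE dbo.Sessions
(
    SessionId uniqueidentifier NOT NULL
        CONSTRAINT DF_Sessions_SessionId DEFAULT NEWSEQUENTIALID(),
    Payload   varchar(8000) NULL,
    CONSTRAINT PK_Sessions PRIMARY KEY CLUSTERED (SessionId)
);
```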
Windows Fibers Lightweight Pooling!! Because this is lightweight and brings fiber for my SQL Server diet!!!! LOL.
… all new features are enabled by default, especially Compressed backups
Max server memory makes my list. Following that is the default for putting all database files on the same disk.
Collation of SQL Server Reporting Services databases (ReportServer and ReportServerTempdb) is Latin1_General_CI_AS.
Collation of Integration Services database SSISDB is Latin1_General_CI_AS.
I feel fine totally admitting this…. I built an agent job that I have running in all environments every week that turns auto shrink OFF…..
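A sketch of what such a job step might run — generate the statements from `sys.databases` rather than hard-coding names, and review the output before executing it:

```sql
-- Find every database with auto-shrink on and emit the fix.
SELECT N'ALTER DATABASE ' + QUOTENAME(name) + N' SET AUTO_SHRINK OFF;'
FROM sys.databases
WHERE is_auto_shrink_on = 1;
```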
REORGANIZE and “Best Practice Index Maintenance”, both of which are “defaults” in the minds of most and should never be. ‘Nuff said.