How to Suck at Database Administration

This isn’t about having a sucky job, this is about you sucking at your job. There’s a big difference. You could be sucking at your job because you hate your job and just don’t care. I get that. In that case though, you know better but choose not to do better.

I’ve encountered some people who have Senior DBA as their job title but don’t really have senior-level experience. I’ve seen some very questionable things in my career.

Signs you aren’t a Senior DBA

  1. Not checking the databases for corruption
  2. Shrinking databases on a schedule or using auto-shrink
  3. Not knowing what wait stats are or how to use them
  4. Relying on the Database Tuning Advisor to determine which indexes are needed
  5. Using Activity Monitor – I might get some flak on that one, but it’s true
  6. Thinking that 1 FULL backup and 1 LOG backup per day is sufficient for an RPO goal of anything less than 24 hours
  7. Thinking you should set MAXDOP to 1 to avoid CXPACKET waits
  8. Updating statistics after the respective index was rebuilt
  9. Not monitoring your systems – you don’t need expensive monitoring tools to monitor your systems!
  10. Manually doing a task over and over again when it can be automated
  11. Ignoring job failures and warnings/errors in the Error Log
  12. Not checking for high-value missing indexes periodically
  13. Using maintenance plans – I’m okay with them for backups or DBCC CHECKDB (though I still dislike them), but not for index maintenance
  14. Not patching your servers
  15. Not trying to improve your skills – don’t rely on your company to send you to training
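For item 3, here’s a minimal sketch of where to start with wait stats. The DMV and columns are real; the short exclusion list is illustrative only – production scripts (sp_BlitzFirst, for example) filter out dozens of benign wait types.

```sql
-- Cumulative waits since the last service restart (or manual clear).
-- The NOT IN list is a tiny sample of benign waits; real scripts exclude many more.
SELECT TOP (10)
       wait_type,
       wait_time_ms / 1000.0        AS wait_time_s,
       signal_wait_time_ms / 1000.0 AS signal_wait_s,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'XE_TIMER_EVENT',
                        N'DIRTY_PAGE_POLL')
ORDER BY wait_time_ms DESC;
```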

What would you add to this list? What are the questionable things you’ve seen a Senior DBA doing or not doing?

How do you stack up?

I’ve done many of these things, but I was either a Junior DBA or an Intermediate DBA.

How many of these are you doing? The higher your number is, the less of a Senior DBA you are. It’s all fixable! Spend some time figuring out why you shouldn’t be doing these.


157 Comments.

  • Not being able to interpret a basic execution plan, and understand the output of Set Statistics IO On

    • Well remembered!

    • I love stats IO. I saw my boss use it a few years ago and went “what is that sorcery you just did?!” Have been using it ever since.

      • I recently took a 29K page read down to 3 – by fixing an Implicit Conversion warning. They think I’m a genius 🙂

        • I feel like implicit conversions are a huge thing right now. It comes up with most of my clients. I discovered the issue about 10 years ago (an old blog post of mine: http://weblogs.sqlteam.com/tarad/archive/2007/11/16/data-types-of-parameters-and-execution-plans.aspx). With some of the clients, it is the #1 root cause of their issues. How is it that companies with DBA teams aren’t finding/fixing these? Take away that “Senior” in your title!

          • They don’t look at the warnings in the Execution Plan icons or the XML 🙂 I spent a week chasing one recently (see above)

          • In my company, at least, we are only empowered to tell development that an issue exists and maybe suggest solutions. We are not able to actually fix anything, and *whatever* development wants, they usually get.

          • My home grown database maintenance tooling is set to check for Implicit Conversions weekly and record them to a table on each instance. They then roll up to a central inventory system where we can use the data to try to guide our developers. We can also notify vendors, but are much more limited on options there.

            The implicit conversion check queries dm_exec_cached_plans, dm_exec_query_plan, dm_exec_query_plan.query_plan.nodes and shreds the XML to get the statement, schemas, tables, columns, data types, and convert FROM and TO types.
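            A rough sketch of that kind of check – the exist() test below only flags cached plans that carry a conversion warning; pulling out the statement, schemas, tables, columns, and FROM/TO types takes considerably more XQuery than shown:

            ```sql
            -- Flag cached plans with a seek-preventing implicit conversion warning.
            -- Shredding full plan XML can be heavy; run it off-hours on busy servers.
            WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
            SELECT TOP (20)
                   cp.usecounts,
                   qp.query_plan
            FROM sys.dm_exec_cached_plans AS cp
            CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
            WHERE qp.query_plan.exist(
                    '//Warnings/PlanAffectingConvert[@ConvertIssue="Seek Plan"]') = 1;
            ```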

          • Ziemowit Jankowski
            February 26, 2018 8:29 am

            Unfortunately, the word “senior” is used to describe the number of years somebody has been paid for doing a job. Sometimes it has nothing to do with the skills of the individual.

            In my work I had an outsourced/offshored DBA that did a beginner’s mistake when re-partitioning a quite sizeable table, with the result of bringing work for a few thousand people to a screeching halt. When I asked the service provider to supply me with a “senior DBA”, I got the reply: “but he IS senior, he has been working with this for the past 10 years”.

            I could supply the DBA with the proper procedure after having executed the RTFM procedure for about 5-10 minutes, and I do not even call myself a DBA, as that I am certainly not. (For those wondering: RTFM = “Read The F—ing Manual”.)

        • My God, yes… been there and done that, and it’s typically one of the first things I look for when someone says “this is running really slowly” 🙂

        • I had one of these recently where the join condition between the tables was:
          REPLACE(REPLACE(… , …) , …) = string-1 + string-2
          … and the customer shrugged and said that it worked and that it shouldn’t change and that I should look for another reason as to why this script “…which has run for 7 months is now failing….” because “…nothing has changed…” – ignoring the fact that the database grows by 1 TB a year and the stats & index jobs have been turned off and are being done manually between breaks in processing.
          I’m not allowed to touch it because every other team are the “experts” and “…we can only do what the vendor says we can…” (even though they are being paid a LOT of money for bad advice).

          • I’m willing to bet good money you work for some sort of financial services company or a bank. Hitting that same issue here. 🙂

          • Ziemowit Jankowski
            February 26, 2018 8:33 am

            Unfortunately, when using various business suites, you are as DBA limited by the vendor’s “do”s and “don’t”s. Some time ago I had a similar issue with a customer using SAP: I identified the problem with slow performance and needed to apply a quick fix. Got stopped.

            The proper way was to report the issues to SAP together with the proposed fix, wait, and finally act upon the instruction where my fix was included. Legal quirks… 🙂

        • Randy in Marin
          February 14, 2018 6:20 pm

          We changed N’B’ to ‘B’ in a WHERE clause of a query in a stored procedure used to poll for something to process. The result was that the procedure took advantage of a filtered index and took 0 logical IO most of the time. (This case was rare.) Prior to the change, the procedure took more logical IO than everything else combined on a very busy server. Billions a day in logical IO dropped to almost 0 by removing 1 character from the query. (I noticed this in a trace looking for the worst of the worst in CPU or logical IO. I used another tool that aggregated the values by query. I wish it was as easy to find a bad cursor.)

          • And that there is an implicit conversion issue. N’B’ is nvarchar/nchar. ‘B’ is varchar/char. It’s critical that you match the data type of the column to the data type of the value/variable. If the column’s data type is lower in precedence, it has to be upconverted which is why it has to scan.
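            A hypothetical repro of that situation (table, column, and index names are made up; whether the mismatched literal causes a full scan or a residual range seek depends on the column’s collation):

            ```sql
            -- varchar column with a supporting index
            CREATE TABLE dbo.Queue
            (
                Id     int IDENTITY PRIMARY KEY,
                Status varchar(1) NOT NULL
            );
            CREATE INDEX IX_Queue_Status ON dbo.Queue (Status);

            -- nvarchar literal: the varchar column is upconverted (nvarchar wins
            -- precedence), so the index can't be seeked cleanly
            SELECT Id FROM dbo.Queue WHERE Status = N'B';

            -- literal matches the column's type: straightforward index seek
            SELECT Id FROM dbo.Queue WHERE Status = 'B';
            ```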

    • Yes, this happens sometimes.

  • Good post. These are technical skills; I suggest you talk about some personal/professional skills too in the future. A lot of technically senior DBAs aren’t senior professionals.

    • Good idea! The problem with the IT industry is that a lot of people don’t have good people skills. I got docked on my annual reviews for not having a softer/nicer way to say “no” to people. It came from 3rd party reviews of people that I worked with. And it didn’t just show up on one review, it was at least twice. I never did learn that skill, or maybe a better way of saying it is that I didn’t care. 😉 The answer is still “no” even if I spin it in a way that makes people feel good about the answer.

      • This is super duper easy.

        I just tell the requester that the decision lies with my boss, then explain to my boss why it is a stupid request. My boss tells them no, then I get to commiserate with the requester on how nothin’ ever gets done around here.

      • My boss says it like this… You need to learn how to tell people to “Go to hell” in such a way they look forward to the trip. 🙂

      • I’m a people person, dammit! Why can’t you people understand that?!?

  • Thinking that an AG cluster is a next-next-finish solution.

    • Indeed! And that’s how you get downtime in a 2-node AG that is set for node majority when you reboot either node.

  • * Using sa all over the place
    * Assigning system administrator rights to all service accounts
    * Allowing everyone and their mother to access the production server and deploy to that server as well
    * Not setting up a decent development/test server for developers
    * Not giving devs showplan permissions

    • Along the same lines of your points, I’d add RDP’ing to the server to use SSMS. And it gets worse when the user is part of the LPIM (Lock Pages in Memory) policy. OUCH. This one I actually saw fairly recently with a client.

      • I do RDP into the server to use SSMS. 😀
        But only because my laptop is not in the customer’s domain and I need Windows authentication to log into Analysis Services.

        • That’s what Citrix is for. You can publish SSMS to the farm and close that gap.

        • I have used the windows credential manager to get around windows authentication issues in the past. My use was slightly different, I had an admin account that I wasn’t allowed to login to my desktop with, but I was required to use for sql access. Storing the credentials for the remote server, using : for the server name, allowed me to have ssms connect to the sql instance using my admin account, while logged into my PC with my non admin account.

      • Well, sometimes this is the only practical way to manage an instance located in an isolated DMZ, without even being able to RDP. I have some VMs in that case where I have to use the VMware console to connect and then run local SSMS, because the Security guys won’t open the right ports. We have to deal with a lot of other things to do our job.

    • Nice additions to the list. I would also add not testing your DR plan on a regular basis, or ever…

      • So true! My last 3 jobs did those tests 1-2 times per year and ran production out of the DR site for 1-6 weeks, depending on what maintenance they wanted to do at the primary site. Each test we did made us even more efficient for the next time.

  • Installing a new SQL Server by just clicking “next, next, next” with all defaults…

    • How about not having a standard SQL deployment script?

    • *Groan*…. that is what I am contending with now… in a site with hundreds of instances all done the same (BAD!) way.

  • I’m starting to believe the bar for senior DBA is a lot lower than I’ve been led to believe. That, or I need to get a new job. I don’t do any of that stuff besides the patching, which I fight for, but it’s out of my hands.

    • It depends on the company of course. At a company with over 10,000 employees, it took me several years to hit Senior DBA even though I had the skills and knowledge of Senior for a while. I’ve noticed that it seems to be easier to get at smaller shops based on the Senior DBAs I’ve encountered the last 2 years of being a consultant.

      • 1k employees, 6 DBAs. I’ve been a DBA for 3 years, most tenured on the team, and had 3 DBAs hired in as “senior” solely because the company wouldn’t sponsor an H-1B for a non-senior title. Thus, there is no budget to promote me. Them’s the breaks.

        ^not my supposition, It’s what I’ve been told.

    • Trav — Senior is a weird thing. Some technologies have been around longer than others.

      Someone who’s a Senior DBA with AGs has only been using them since 2012 — ditto someone using ColumnStore.

      Hardly anyone wants a Senior DBA with Log Shipping or Mirroring, but they’ve been around for a long enough time for someone to have that level of expertise with them.

      Lots of people who have senior level skills with query and index tuning are going to feel demoted if they’ve been using older versions for a while (even prior to 2014, now).

      Outside of SQL Server, imagine looking for a Senior Docker admin or Senior Cloud Developer for [some new cloud functionality].

      Senior levels of expertise just don’t exist yet, or they become outdated before they do.

      • Great points.

        Another sign of a junior DBA is making lists of things you think all DBAs are supposed to know!
        What a DBA knows is very dependent on what their job entails. And the DBA job means many different things.
        In a book ‘SQL SERVER MVP Deep Dives’ I enjoyed the chapter ‘What does it mean to be a DBA?’ which explored this topic.
        I am told now that I have to be innovative but all suggestions are shot down. LOL!
        I might be able to meet that expectation by gaining knowledge on DynamoDB…without knowing if we will ever use it.
        Lots of time is spent learning things that seem interesting and that you then never get a chance to use.

  • I feel validated.

    • Ha ha ha! Me too. I would most likely *never* fit into one of the buckets as-is. But I sure can figure out how to resolve issues. I am starting to think “Senior” means “remember to automate” and “kept the (please excuse the language – artistic license for effect) little pricks from breaking things by running a Cartesian join”. 🙂 There hasn’t been one single day where I think there isn’t something more to learn. Not one.

      For fairness’ sake, I have also done many dumb things. So while I gripe about others I do remember I am one.

      And, read these lists and blogs! So much to learn and know.

  • Hi,
    I would add:
    Installing SQL Server via the GUI,
    executing Googled scripts which they hardly understand,
    not being able to troubleshoot AlwaysOn AGs,
    writing T-SQL code which is not CS (case-sensitivity) aware,
    not being able to write PowerShell scripts,
    taking backups/doing restores via the GUI.
    BR
    Gerald

    • I disagree with a several of these, Gerald…

      {Most} GUIs are fine to use as long as you have an understanding of what’s behind the scenes and you know what the GUI isn’t showing you… Installing SQL Server via the GUI is totally fine in most cases, unless you are deploying many identical instances (in which case, of course, scripting it is better…)

      I love PowerShell… I might be the only person in my group (of not just DBAs) who really relies on it and uses it… I use it a lot, too… That said, I couldn’t write anything terribly complex w/out consulting other scripts and documentation CONSTANTLY… The key is that YOU CAN BUILD SOMETHING USEFUL AND RELIABLE with it, not that you can write it like it’s VB/C#/Java.

      I’m on the fence about your backup and restore point. If you have a backup maintenance plan or a 3rd party utility, you wouldn’t have to take any with the GUI… But I feel totally OK with performing ad hoc backups using it. I feel OK with restores, too… Why do you feel that it is a sign of a non-senior DBA?

      • Hi,
        I’m only talking about ad hoc backups, because restoring databases with moving files via the GUI takes ages vs. restoring with a custom T-SQL script.

        Installations should be scripted when you want standardized installations.
        br
        gerald

    • I totally use the GUI for installations. The only time I’ve done it via command line is when I was doing a POC for Windows Server Core.

      • For sure! What do they think we are… Oracle on Red Hat DBAs???

      • I found it useful to script when I had about 60 developers come in from IBM to work on a PeopleSoft implementation and many of them needed SSMS. Definitely was a good use case!

      • What I did for my recent round of installs (well, OK, last year,) was to create a command line install to put all my files in the locations I wanted (system databases go here, user databases there, user log files over there, etc) but not to the point where I could use the /silent switch.

        So I still went through the GUI, but the majority of the fields were already filled in, making it harder to put something in the wrong place. Now, all my installs are set up the same, with the data files in the same locations, and the only differences being what services are installed (this one needed SSRS, that one didn’t but did need SSIS, etc.)

        Even better, I can keep it for my next batch of installs, when we migrate to new servers down the road…

        • I have standardized the drive layout. This makes some of the stuff easier. I used to have thousands of lines of PowerShell to manage everything. It was too much to maintain… I had to add metadata for each new version. Now I’m looking at putting all the consistent stuff in the DefaultSetup.ini file and writing a short command-line setup cmd for each instance that is required. The cmd batch updates the path values to include the instance name where needed. I tried the sysprep approach, but it’s a joke. I really think that there should be an MS smart command-line installer of some sort that can standardize the install layout. Perhaps specify the drives to use for what purpose and let the install handle the rest, including the pathname.
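          For anyone curious, the pre-filled-GUI trick described above works via SQL Server setup’s configuration file. The paths and values here are examples, not the commenter’s actual layout:

          ```ini
          ; ConfigurationFile.ini -- partial example
          [OPTIONS]
          ACTION="Install"
          FEATURES=SQLENGINE
          INSTANCENAME="MSSQLSERVER"
          SQLUSERDBDIR="D:\Data"
          SQLUSERDBLOGDIR="E:\Logs"
          SQLTEMPDBDIR="F:\TempDB"

          ; Launch the GUI with these values pre-filled (no /Q or /QS switch):
          ;   setup.exe /ConfigurationFile=ConfigurationFile.ini
          ```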

  • Until and unless we get professional licensing in our industry, job titles are pretty meaningless.

  • I would add knowing the difference between when to reindex and when not to on a busy production system, and whether it really has the effect you are hoping for.

  • I see a lot of index rebuilds being attempted just “because”. It would be good for a senior DBA to consider if a particular index should be rebuilt, being mindful of data access patterns and HW details rather than just reacting to what looks like a bad value for fragmentation percent. Things will vary based on environment and specifics, but so many times we see no real benefit and that the attempted maintenance causes more problems than it solves for a variety of reasons. Of course, there are some valid reasons for such maintenance processes.

    Another one is “fixing” an issue like high CPU/slow performance from a bad query plan by simply recompiling it. If there are data distribution or parameter sniffing issues, then a deeper investigation might need to be considered. Code refactoring, data model adjustments, etc. Otherwise, you can expect the same problem to return.

    • When you have a case where it is important to carefully consider whether to rebuild an index, Ola Hallengren’s scripts are very useful for maintenance and flexible as well.

  • I do not suck at my job because I read these blog posts regularly but I must know if that extra “n” in your blog title was intentional? 😉

    • The odd thing is that the post doesn’t have the typo. I noticed the typo 2 weeks ago when I wrote it. I fixed the title and link. I don’t see it anywhere on the page now, but I do see the typo in the emails I’m getting every time someone comments. I’m guessing you’re reading it from the RSS feed and somehow WordPress maintained the typo even after I fixed it 2 weeks ago. Not sure where to fix that. lol

      • I see it on the blog post and the e-mail. I got to the blog post via the “Today’s Post…” e-mail. Weird.

        • It turns out that Brent fixed it this morning. He said the link was correct but not the title. I know for sure I fixed both 2 weeks ago. So something got messed up 2 weeks ago. Oh well, it’s fixed for good now. Until next time. haha

      • LOL! Technology!

  • From when I was a Jr. DBA:
    1. Forgot to remove the BUILTIN\Administrators account from sysadmin.
    2. Kept mdf, ldf, and backups on one single volume.
    3. No PS scripts or jobs to maintain a copy of DB backups to SAN or any cloud storage.
    4. No email configuration.
    5. Assigned 100% of memory as Max memory.
    6. Asked silly questions on StackOverflow.
    7. Ran SQL Profiler at any time.
    8. Took backups in a busy production hour to refresh the Dev Server.

    And so many stupid things I did when I started as a DBA.

  • The Horror!! I’ll add a few that bother me to see:
    * not taking backups or taking backups but never testing them
    * putting all the SQL Server data files (mdf, ndf, ldf, tempdb, etc.) on the same disk
    * leaving tempdb at its default configuration

  • I do zero of these things so I’m satisfied.

    Though – I’m no wait stats guru and haven’t found many situations where going down that rabbit hole really helped anything. And in large AG environments the error log fills with poorly documented garbage, and you have to pick your battles about what you’ll do about it – if anything.

  • Michael L John
    February 14, 2018 1:05 pm

    Not providing the development teams meaningful data on index usage, blocking, procedure performance, and deadlocks.
    Recommending NOLOCK.

  • Sorry, but unless you are just a senior at a hosting site, you need to understand the data and the affected applications and users. Nothing is worse than a DBA who sits on his throne without understanding this; most of this would be bad attitude or laziness. You can Google a lot of SQL answers, but there is no Google for these items.

    • It depends on what type of DBA we are talking about though. A Production DBA does not need to know those things as that’s the Development DBA’s job. Now some positions have to do both, but there’s definitely a distinct line in larger corporations that have both prod and dev DBAs.

  • The following two are quite high on the list for me:
    1. Documentation – schemas, relationships, security, object dependencies, etc must be fully and properly documented
    2. Sound change control – Anything that needs to be deployed must update the documentation and should be preceded by questions like: Why is the change required? What is the change? Who is requesting and who owns the change?

    If a DBA also places these points on a pedestal, they are good!!

    • I’m with you on documentation. I’ve had more than a few arguments with people who think it is unnecessary, even though I point out that if I cannot find the info in the middle of the night when I am called out for a system that I don’t know much about (easily done when there are hundreds) – that person is the first one I am going to wake at 3am.

    • And how would you do that when you are responsible for > 7000 databases on hundreds of instances with a team of 4 DBAs? I’m just curious *g*

      • Yeah and that’s not possible for production DBAs. He’s referring to a development DBA. Some people do both jobs, but not when you are supporting hundreds of servers like you and I have. One of my jobs had almost 1000 servers. I knew only a handful of those applications, and only 1 of them did I know well.

  • Tara: BRILLIANT post!
    I see far too many people who claim to be “senior” although it seems to be conferred on them because they’ve been in the job for 10 years, more so than anything else.
    My additions to the list:
    – Failure to share information to the team.
    – Failure to mentor juniors with correct process instead pushing the way the company currently does it as “best practice”.
    – Failure to pull up DBAs and Junior-DBAs for doing things in a less-than-competent way (including not documenting code; not updating database/application information in the repository; etc.)

    Being a “senior” involves far more than just having the knowledge – it’s how it is used and applied.

  • Not being honest and fair and also being too judgmental and arrogant.
    Exercising people skills is just as important if not more so than technical skills these days.
    Not being easy to work with or work for.
    Managing perceptions is very important.

  • “1 FULL backup and 1 LOG backup per day”: This sounds quite odd to me. The one log backup you take will give you the ability to do a point-in-time restore only if you are lucky and your oops-moment is before the backup was taken, and you are lucky enough to have no disaster after it is taken. Let’s assume for one moment that two backups a day fulfill the RPO requirements – then I would rather suggest doing 1 full and 1 diff backup, leveraging the simple recovery model, instead of taking the log backup.

    • Just depends.
      Brent recommended in an article taking a log backup every minute, which I thought was a little crazed with good intentions. I took them every 5 minutes for one app after reading that article, though. But sometimes log backups really will not work for you, so a differential will have to do. Case in point:
      Noticed the disk had filled up.
      Hmmmm, the transaction log got really large.
      So I cleared that up, put it back in full, and ran a backup, and suddenly the transaction log was once again huge and filling the disk.
      What happened? A trace revealed what looked to be an update loop of some kind someone had put in.
      The developer wanted to be absolutely certain the database was running, so he created a loop where he updated the database constantly. This activity caused the transaction log to immediately fill the disk.
      Suggested he at least put it on a timer and update less frequently. Got no reply. Tried to discuss. Political poison.
      So no choice but to use differentials.
      He was promoted eventually up to VP and is still developing. What can you do?

      • I’d still be doing LOG backups very frequently if the business says RPO is a small number. Differential is not sufficient for a very low RPO goal. So to achieve it, I’d be requesting sufficient storage. A big transaction log isn’t a problem and neither are the log backups. You just need adequate storage. If the business wants a different RPO goal where a differential is okay, then yeah I’d go with diffs. But it’s the business that decides on RPO/RTO goals, and then DBAs must implement the backup jobs and features to achieve that.
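        In practice that means a log backup statement like the one below on a tight Agent schedule – the database name and path are placeholders, and the file name would normally include a timestamp:

        ```sql
        -- Frequent log backups (scheduled via an Agent job) are what actually
        -- deliver a low RPO; CHECKSUM catches damaged pages at backup time.
        BACKUP LOG SalesDB
        TO DISK = N'X:\Backups\SalesDB_log.trn'
        WITH COMPRESSION, CHECKSUM;
        ```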

        • Within a minute the disk was full. That loop was reading/updating so fast it just wasn’t feasible to do full with log backups. And I knew I wouldn’t be supported politically pushing the issue.
          At the time it may have been local disk as well. (showing my age here) SQL Server 2000.
          Think he was also updating a varchar(7000) field.
          As many DBA’s like to say, ‘it depends!’.
          Aside from such an outlier of a case, I agree.

    • It was definitely very odd when I saw it. 2 of the people were “Senior” DBAs. I was flabbergasted. And their RPO wasn’t even 24 hours. It was much lower than that, though I don’t specifically recall the number.

  • “Updating statistics after the respective index was rebuilt”: Well I have to admit that I made that mistake once…however I am no senior DBA…just one question: Do you consider this to be harmful in any way? In my view it is just a waste of time and resources but less dangerous than the other things you have mentioned.

    • It’s just a waste of time. Some people just don’t understand that you get updated stats with an index rebuild. I’d rather spend those cycles doing other work. So that one was more about the Senior DBA not understanding update stats and index rebuilds.

  • Enjoyed this post, especially the point about “Setting MAXDOP to 1 to fix CXPACKET waits”… I’d put in a caveat though. We had an issue with a stored proc timing out (after the system had grown to an unsustainable level). Having done the analysis and hunting to locate the problem (a really poorly written delete statement which was using parallelism and for some reason timing out)… I rewrote the procedure and fixed the issue, making it insanely more efficient… and here’s the problem. This was a third party product. So, I submitted the fix to the provider, who promptly said “sorry, this only affects you”… so, the only option I had was to set MAXDOP to 1, because we could not afford to run on an unsupported system.
    I think a big part of being a senior DBA is investigating all avenues, and being able to offer different tiered solutions. Saying well, we can do this (gold level), this (silver level) or this (bronze level) and THESE are the risks and reasons for each. If we cannot use the gold solution for a specific reason, then we’ll have to go for one of the lesser solutions, but raise awareness that this is far from the optimal way.

    • Depending on the version of SQL Server, you could have instead used a plan guide so that you didn’t have to change the server setting. The server setting affects everything. Plan guides give you the flexibility to change settings and add hints as needed at the query level. You can set maxdop to 1 at the query level even if it is a vendor system. That’s the main reason for plan guides.
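      A sketch of such a plan guide – the guide, table, and statement names are hypothetical, and @stmt has to match the batch text exactly as the application submits it:

      ```sql
      -- Pin MAXDOP 1 for one ad hoc statement from a vendor app,
      -- without touching the server-wide MAXDOP setting.
      EXEC sp_create_plan_guide
          @name            = N'Guide_VendorDelete_Maxdop1',
          @stmt            = N'DELETE FROM dbo.StagingRows WHERE ProcessedFlag = 1;',
          @type            = N'SQL',
          @module_or_batch = NULL,   -- NULL: @stmt is the whole batch
          @params          = NULL,
          @hints           = N'OPTION (MAXDOP 1)';
      ```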

  • This is a great post, I have learned something today regarding Implicit Conversions, thank you all for being great!
    I’m now going to start checking our Stored Procs to see if this is an issue.

    • It’ll become obvious if you review the execution plans. sp_BlitzCache can help with that. Use cpu for @SortOrder and look for the Implicit Conversion warning in the Warnings column.
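      That check looks like this – sp_BlitzCache is part of the free First Responder Kit and has to be installed on the instance first:

      ```sql
      -- Sort the plan cache by CPU, then scan the Warnings column
      -- for "Implicit Conversion".
      EXEC sp_BlitzCache @SortOrder = 'cpu';
      ```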

      • Implicit conversions became a huge issue a while back when a major site upgrade used more dynamic SQL instead of procs, and for the life of the developers, they weren’t sure why everything was so slow. 15 minutes of examining a profiler trace (that is fine, because it was during a maintenance window, heh heh) and yup, the queries coming in were N’…’ and of course those columns were regular varchar. I may have almost gotten a developer or two fired (inadvertently I assure you), when I pointed out how much faster those queries were if they just made sure they updated their C# to use varchar. We rolled back the changes, and about a month or two later, re-deployed with that and some other application changes, and voila…much better performance.

  • Really good points. I’m happy to see that I’m not doing most of the points you called out, BUT I don’t call myself a “Senior DBA” 🙂 Two of them I’d really like to dig into: “Thinking you should set MAXDOP to 1 to avoid CXPACKET waits” and “Using Activity Monitor – I might get some flak on that one, but it’s true” – do you have some resource tips to read?

    • MAXDOP=1 prevents parallelism. Parallelism is a good thing for queries with a high cost. See what can be done with the queries. Index tuning and refactoring queries is a MUCH better solution than setting MAXDOP to 1. Be sure you set Cost Threshold for Parallelism to 50 or something like that instead of the default value of 5! Also, for current activity, many of us very experienced DBAs are using sp_WhoIsActive. Check out whoisactive.com to get your hands on it. I haven’t used Activity Monitor in many, many years.
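      Both knobs are server-level sp_configure options; 50 is a common starting point for Cost Threshold, not a universal answer:

      ```sql
      -- Raise Cost Threshold for Parallelism from the default of 5,
      -- so only genuinely expensive queries go parallel.
      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      EXEC sp_configure 'cost threshold for parallelism', 50;
      RECONFIGURE;
      ```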

  • – Multiple tempdb files, correct sizing, and figuring out the number needed
    – Trace flags that are best practices or high risk
    – Multiple backup files and why you might want them for large databases
    – Extended events and their advantages
    – Not knowing the answer to the question, “Who are Brent Ozar and Tara Kizer?” ;>)

  • Not restoring backups to a test server. Your job isn’t done when the backup completes. You gotta make sure it works and you actually know how to recover quickly, without hesitation, when it’s needed.
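    A minimal sketch of that kind of restore test – the database, paths, and logical file names are placeholders (check yours with RESTORE FILELISTONLY):

    ```sql
    -- VERIFYONLY checks the backup media, not that the database is usable;
    -- a real restore plus CHECKDB is the only real test.
    RESTORE VERIFYONLY FROM DISK = N'X:\Backups\SalesDB.bak';

    RESTORE DATABASE SalesDB_Test
    FROM DISK = N'X:\Backups\SalesDB.bak'
    WITH MOVE N'SalesDB'     TO N'D:\Data\SalesDB_Test.mdf',
         MOVE N'SalesDB_log' TO N'E:\Logs\SalesDB_Test.ldf',
         RECOVERY;

    DBCC CHECKDB (SalesDB_Test) WITH NO_INFOMSGS;
    ```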

  • If you think a hash match is some kind of college Meetup.

  • Thinking that license tracking for SQL Servers depends only upon the activation key used in an install.
    Not knowing the difference between a service SID and a service account.
    Not knowing what permissions the service SID must be granted.
    Not knowing what domain policy can do to local permissions, including SIDs.
    Not knowing why an SPN is needed and how to detect whether NTLM is used for remote connections.
    Not knowing how NTLM affects group access to SQL Server (e.g., SSRS) in a 1-way trust.
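
    One way to check that auth-scheme point, a sketch against the standard connections DMV:

    ```sql
    -- Shows whether sessions connected with Kerberos or fell back to NTLM.
    -- NTLM on remote connections often points at a missing or
    -- misregistered SPN.
    SELECT c.session_id,
           c.auth_scheme,          -- 'KERBEROS', 'NTLM', or 'SQL'
           c.client_net_address
    FROM sys.dm_exec_connections AS c
    WHERE c.net_transport <> 'Shared memory';  -- local connections always look fine
    ```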


  • You think the auto close feature is useful. Ever.

    I was unfortunate to work under a senior that insisted our databases have auto close turned on and every night a maintenance plan would shrink them. I kid you not.

  • Not having all servers in a consistent state for ease of supportability: the same drive letters for data, logs, and system DBs; the same jobs; the same naming conventions, settings, etc. That way, once you show a new DBA one server, all the other servers should be easy to pick up and support. This should be the case for any system really, not just SQL Servers.

  • Updating statistics after the respective index was rebuilt – you will actually need to do this if you want a FULLSCAN though, right?
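
    For context on that question: ALTER INDEX … REBUILD already updates that index’s statistics with the equivalent of a full scan, so a follow-up UPDATE STATISTICS on the same index is redundant. Where FULLSCAN still matters is for column statistics, which a rebuild does not touch. A sketch with hypothetical object names:

    ```sql
    -- The rebuild refreshes IX_Orders_Date's statistics with a
    -- fullscan equivalent; no separate UPDATE STATISTICS is needed
    -- for this index.
    ALTER INDEX IX_Orders_Date ON dbo.Orders REBUILD;

    -- Column (non-index) statistics are NOT refreshed by the rebuild;
    -- update those separately if you want them at FULLSCAN.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN, COLUMNS;
    ```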

  • Long ago my team was trying to push not backing up Model database. I didn’t go along with it and they weren’t happy with me. It’s 2MB and a system database. And in some cases it has specific settings for that instance. Why the heck wouldn’t you back it up?!!
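
    For what it’s worth, backing it up is a one-liner (backup path hypothetical):

    ```sql
    -- model is tiny, and it seeds every new database (including tempdb
    -- at startup), so there's no good excuse to skip it.
    BACKUP DATABASE model
    TO DISK = N'X:\Backups\model.bak'
    WITH COMPRESSION, CHECKSUM;
    ```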

    • That is awesome. How long ago did the senior dba leave your company? I think he works where I am now.

      • LOL! Well there were three of them.
        It was quite a while ago. Maybe 2005 or so time frame.
        They have all moved quite a bit since then I am sure. Hopefully they changed their mind about backing up model database. SMH!

  • I took over from a “Senior” DBA at a company a few years ago. Disk space was tight. He had a SQL job on every server, in every environment that shrunk every database file to 0 free space every night. No re-indexing, no update stats, no maintenance except for a backup.
    Within 2 days of taking over, I removed that and put in some real maintenance.
    Magically, the disk space issues went away, and even more magically, the users started saying how fast the system was.
    That “Senior” DBA has moved on, and is still shrinking databases every night at his new company.

    • That’s a gripe of mine too. Some people like to run tight on space which inevitably causes issues.
      I like to say, our job is to make sure the database is running at all times. Not save space.
      Now you try not to be wasteful. Maybe even use some table compression or other mechanisms, or clear out garbage (error tables no one is looking at that are sometimes the bulk of the space in a database) to save some space.
      But the priority is keeping the database running at all times.
      And of course shrinking is something to be avoided if possible:
      https://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/

  • If we are going to gripe about ‘Senior DBAs’ I am going to add one.
    We got in a new Senior DBA whom they were making our team lead. The new, smarter, more ‘blessed’ one.
    He was a bit of a jerk, and then he went out unexpectedly on 6 months of family leave.
    Then he came back for a while and then went to another company.
    Later another ‘Senior DBA’ (another jerk) who had left went to work for another company (he kept in touch with some people). As he was finishing his interview, he asked if he could meet the current DBA. They said, “Strange thing – he came to work for a while and then went out on family leave for 6 months.”
    Turns out that’s his MO. He regularly changes companies, puts in some time, and then goes out for 6 months. He claimed that gave him enough money to survive on and he got to be home with his kids!
    Meanwhile those of us in the trenches take care of things and clean up other people’s messes that are supposedly Senior or better somehow.

  • Hi. My name is Paul, and I’ve been using Activity Monitor for several years now. [hangs head in shame]

    I use Activity Monitor primarily for pretty graphs for high-level screenshots. I find it useful to show our customers and our VP’s a visual representation of Bad Stuff happening. I also like the view of I/O response times.

    For actual research, I have my own scripts that show the Important Stuff.

    9. Not monitoring your systems – you don’t need expensive monitoring tools to monitor your systems!

    As an ISV developer working on performance support cases, I feel this pain every day. Sing it loudly!

    • Paul – well, I’d just be a little careful there. Someday, the customer’s going to have a DBA (or, uh, a consultant) who points out that Activity Monitor hides wait types and camouflages the real bottleneck. I wouldn’t want you to have your pants pulled down live on the call when the customer uses a better tool like sp_BlitzFirst or sp_WhoIsActive, and shows you that an entirely different query is the problem.

  • You are not a senior DBA if you think your job is to stop developers doing their job instead of assisting developers in doing their job.

    • Found the developer!

    • Some seem to forget that it’s important for the company projects to succeed.
      I like to say our job isn’t to just say no. It’s to provide options and help get things done.
      It requires no brains to just say no. I feel it’s a cop out.
      Example: My app needs ‘sa’ to access the tables and perform updates.
      Response: Actually, we can satisfy that need by granting permissions to do just that – you don’t need ‘sa’. Let’s check it out by applying these permissions and verifying the functionality of your application.
      And yes I know sometimes getting this cooperation from the application resources is much more difficult than it should be.

  • Allan S. Hansen
    February 19, 2018 1:06 am

    How to suck as a DBA?
    Because you’re actually a developer who simply learned to read execution plans and optimize indexes, and everybody now thinks you know database internals by heart, so you get dumped into every database-related problem 😀

    • Really, I know a few DBAs. They don’t seem to be that knowledgeable about the internals. I would love for them to do all of the database tuning so I wouldn’t have to.

  • I have seen this problem in the past and it seems to be more and more prevalent: interviewing candidates for a senior-level DBA position and finding most of the candidates are still junior level. The tell when interviewing has always been that if I see “proficient in using SQL Server query tuning advisor,” then this is not a senior person. You either know how to read execution plans or you have limited knowledge of tuning queries.

  • and have all database connections use the SA login

  • This comment thread proves that there is no definitive list of must have skills or required tasks. For instance, the performance tuning tasks are all well and good, but, as at least one person noted, the DBA is not always in control of how and what gets fixed. Plus, I lead a massive team of 2 DBAs supporting hundreds of database servers, many of which are legacy systems the business refuses to invest in. So, unless something grinds to a halt or is part of a new project, there’s no way to get code updated.
    Then there is the issue of breadth of knowledge. My company uses almost every SQL Server component there is. We still have a SQL 2000 instance! We are actively developing against SQL 2016 and Azure SQL. Our BI teams work with SSIS, SSAS, even MDS! Add Power BI to the mix now. We have a very important system running merge replication. We are trying to get the company to use more HADR options like SQL’s AG.
    So, honestly, although we closely monitor our most mission critical systems, you’ll have to excuse the occasionally missed “SSPI handshake failed” error. Especially since our security team’s constant PEN tests are throwing hundreds of false positives every day.
    That said, the original list is a good one. There are some skills and there is certain knowledge that is required of anyone calling themselves a senior DBA.

    • alen teplitsky
      February 26, 2018 8:41 am

      PowerShell is your best friend. When I moved jobs to somewhere the servers actually support PowerShell, the first thing I did was write a few functions to automate basic log checking. Make a hash table of your servers and a few simple functions to check logs and filter out annoying non-critical events.

      • garrett myName
        February 26, 2018 8:44 am

        I’d recommend using DbaTools, that is some amazing stuff.

      • We’ve actually filtered out the IP of the security PEN test server, but we still get lots of false positives.

  • 0. putting “senior” in your e-mail signature, Linked in, or resume.

  • 1. Be accountable and responsible to your team and at your job
    2. Be able to articulate and implement data strategy in your company
    3. Practice restoring system databases and set that DAC option to 1
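
    Point 3 can be sketched as follows (the “DAC option” being the remote admin connections setting):

    ```sql
    -- Allow the Dedicated Admin Connection from a remote machine,
    -- not just locally -- vital when the server itself is wedged.
    EXEC sp_configure 'remote admin connections', 1;
    RECONFIGURE;

    -- Then connect with sqlcmd -A, or the ADMIN: prefix in SSMS.
    ```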

  • Sometimes “senior” just means you get the discount.

    • Best comment yet.
      Truth is, many managers have little to no knowledge about what you need to know as a DBA, let alone a Senior DBA. So the actual criteria are obviously not really knowledge-based in the vast majority of cases, and many times are more political or just expediency-based. For example: “we are saving that slot so we can have the bargaining power to hire someone good as a new DBA”.

    • Tony – I am totally framing that for the wall. Nicely said.

  • Great post Tara.

  • Theodore Locke
    March 2, 2018 9:39 am

    I started in my current position back in August. I am their first DBA. I walked into a mess that is beyond scope: we are still running SQL 2000, the majority of our databases are on SQL 2008, the DEV databases are on the same servers as PROD, and there are no scheduled patch outages (some servers have no SQL patches whatsoever). The IT manager, who has been with the company for 20 years, set up the backups (DBs are SIMPLE and VSS once a day) years before I started. No matter what I say, the proof I show, or the issues that I bring to their attention, they refuse to change the backup methodologies. They shrug their shoulders at all of the issues I have pointed out and say, “Hey, they work.” The company has the money to adjust, but the staff just doesn’t think it’s a priority.
    I have documented to her and my boss (he is both of ours) the issues with the current backups and server setups, and after getting rebuffed I even told both of them that I cannot support the current setups. They just shrug their shoulders, but when there is a problem they can’t fix, they come running to me to solve it. What more can I do so I don’t fall within the category of DBAs Who Suck, especially when the reason I suck is that the lack of support and the internal politics are so ingrained that no one is willing to do what is right?
    I come from a job where I had a monthly restore process set up, only a few DBs were on 2008R2 (they are currently being migrated to 2016), we had regular patches, I had three levels of SQL environments for development purposes, and I designed a pretty rigorous DR plan using AG and Log Shipping (yes, I am that guy from your Log Shipping blog) between two sites. I almost feel like Wyatt Earp coming into Tombstone for the first time, but I think he had an easier time cleaning up the place… 🙂

    • You’re such a newbie! To them that is. Twenty years versus six months. It’s a new job and you need to prove yourself to them. It doesn’t matter that you have tons more experience than them. It’s just the way it is. It also isn’t clear what is stopping you from making some incremental evolutionary improvements. Some of which can be presented as solutions to current problems or providing capabilities they would really value such as point in time restore versus losing data. A DR plan versus losing everything perhaps. They must have something important they want to protect!
      Anyway, you took the job for a reason. Change can be hard, difficult and frustrating. But in this case mostly for you!
      A possible book for you: DBA Survivor: Become a Rock Star DBA
      Hey it even has some input from someone named Brent Ozar (page 7).

      • So true, Ron! I was a rock star 2 jobs ago, well at the 3rd job ago too. Then I left and joined the company at my last job. They had a well-established SQL Server DBA team. Though I had more years of experience, I was no longer a rock star as I didn’t have the business knowledge that they all did. I was there 18 months before Brent called and said there was an opening at BOU. I still hadn’t reached rock star status at my last job. It takes a while to become that person as you really need the business knowledge, which is hard when you’re mostly doing production DBA tasks. You’ve got to dip into the development DBA side to really get that knowledge.

        • Yes it can be hard to go from being respected and trusted to being….not! Even a transfer within the same company or just a new boss, you still have to prove yourself all over again! It’s a distinction that is a little tricky at times and for me one that has been a bit discouraging. And you have to prove yourself while also getting to know your new boss as well!
          I’ll bet that Brent is a real slave driver! Come on, you can tell us! He’ll never know! 😉 (j/k!)

          • He’s totally being a big meanie this week: making me blog. He knows blogging isn’t my thing! I’ve got 3 coming out next week as a result. 1 more in the works. I’m going to need worker’s compensation after this!

          • LESS COMMENTING, MORE BLOGGING

  • Michael John
    March 2, 2018 10:15 am

    Sounds like updating your resume is the next step.

    • Always have your LinkedIn profile and resume updated. You never know what’s coming next. Even when I feel 100% secure where I’m at, I mostly keep it up to date.

  • Richard Polichetti
    March 7, 2018 2:28 pm

    Straight and to the point! I like it!

  • Not understanding the environment the server is being built in: understanding VMs vs. hardware and SANs vs. local disks.
    I have seen servers with a disk for every file, and the opposite – every file on one disk.
    I even found 2-core SQL Servers in production, while paying licensing for 4 cores, with complaints that the CPU was struggling to run the Report Server load and SQL.

    • I think we need people who are more generalist. What I mean is that we need people who know hardware and software, who know the OS and the DB. Too many specialists and too many recipe-book cooks.

      • Peter – interesting, so in the age of the cloud, you think it will be more important to know hardware and operating systems? Interesting.

        • We’re required to know how our cloud is built and how to troubleshoot issues within our cloud infrastructure, in addition to DBA and development work.

      • I usually use the term “jack of all trades” for a generalist. A lot of our clients have a generalist, and they aren’t sure what to do to fix performance issues.

        I think a generalist makes sense for small shops where you can’t afford to hire a DBA, a sysadmin, etc. They can hire consultants when they run into problems that they can’t solve. Bigger shops need some specialists though.

        • Peter John
          May 3, 2018 1:32 pm

          Consultants aren’t what they used to be. Most are just temp programmers. Very few are experts, and they aren’t easy to filter from the rest.

      • I think it’s great to be able to see all aspects of a problem.
        Such as: the SQL VM doesn’t even ping well, it is performing so poorly. You give it more memory and the problem gets worse. Why? Someone put a hard limit of 4GB on the memory, so the rest is ballooning out and causing havoc with the OS and SQL Server. It’s nice to be able to see and understand all the parts.
        Today things are moving towards the cloud, and that means that in most cases you will probably not have access to the underlying hardware, if ever. I am not sure about the case of a dedicated machine, but probably not even then. Many services today are what is called ‘serverless’ as well. The cloud greatly changes things.
        Going forward, time may be better spent learning about cloud offerings and architecture.

  • Came to work feeling inadequate in my job role. This definitely confirmed my feelings. Thank you for the lift I needed to make it through the day.

  • David Dwyer
    May 30, 2018 5:54 am

    Daily recovery model switch and shrink… also running weekly FULL backups with daily transaction log backups.

    They wanted to know why the transaction log backup failed…
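
    That failure is predictable: switching to SIMPLE breaks the log chain, and the next log backup fails until a new full (or differential) backup re-establishes it. A sketch of the failure mode, with hypothetical names and paths:

    ```sql
    ALTER DATABASE Sales SET RECOVERY SIMPLE;   -- breaks the log chain
    ALTER DATABASE Sales SET RECOVERY FULL;     -- back to FULL, but...

    -- ...this now fails with msg 4214: "BACKUP LOG cannot be performed
    -- because there is no current database backup."
    BACKUP LOG Sales TO DISK = N'X:\Backups\Sales.trn';

    -- A full backup restarts the chain; log backups work again after it.
    BACKUP DATABASE Sales TO DISK = N'X:\Backups\Sales_full.bak';
    ```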

  • Samantha Lattieri
    January 7, 2019 1:32 pm

    Hoarding organizational knowledge because you believe it provides you with “job security”.

  • John Langston
    May 2, 2019 11:50 am

    Late to this party. SQL Database is a horse of a different color, but some things are still relevant. Having set up failover groups or geo-replication in case one’s primary data center tanks is very important. I’m still trying to find out from MS what events in one database will cause the entire group to fail over, so we fail over manually. Query plans are still important (Query Store is nice in this regard). Auto-tuning is not perfect (letting SQL Database determine if an index needs to be added/removed); I have had to intervene and add an index. It’s certainly worth knowing the SKU differences and the impact of one level versus another on your databases and application. At the end of the day, organizational culture can have a big impact on what one can and cannot do.
