Questions About Automation & Patch Management

Opinion poll time!  I got asked a few questions, and I’m curious about what the rest of you think:

Question 1: How do you feel about Run Book Automation for databases?

Have you used it?  Would you want to?  Why or why not?  If you’ve never done it and never would, please respond too – don’t just be scared off by terms you haven’t heard before.  (I hadn’t heard of it before I came to work for Quest.)

Question 2: Do you have multiple copies of the same database?

Do you have several copies of the same database that you need to keep in sync, whether it’s schema or data?

Question 3: If yes, how do you feel about automated patch and configuration management of databases?

Do you have enough databases that you would consider building a “gold” standard, deploying it, watching for changes, catching out-of-band scenarios, syncing them back to the gold standard, etc.?  Why or why not?

Let me know what you think in the comments.  I’ll hold back my own answers for a couple of days.


12 Comments

  • Ed Leighton-Dick
    June 16, 2009 6:27 pm

    1. We have run books, but we do not automate any solutions. I could see a potential use for automating solutions, but only in circumstances when I knew for certain that the results would be predictable and controllable with no unwanted side effects.

    2. Yes.

    3. If by automated patch and configuration management, you’re referring to the schema and data within the database, sure, assuming that the databases are identical except for the changes you’re syncing. I wouldn’t automate patches or config changes at a server level, though; there are too many variables (various versions of Windows, various hardware configs, etc.) and too much at stake if something were to go wrong.

    Reply
  • Gotta get a survey engine like Paul and Kimberly L. have over on their site 😛

    Anyway:

1.) I have heard of and am in favor of run books. I had to hit the wiki link for the RBA part, though. Looks interesting. I could see a definite need for investigating the functionality, especially in this kind of budget cycle where I am just not going to get additional DBAs. Not sure I fully understood the concept, but I also would like to see workflow optimization for monthly/quarterly processing involving a lot of handoffs (don’t think that is what RBA is for – more like a K2, Oracle Workflow Builder, etc.). Too many manual notifications and steps clog up processes that otherwise would be faster (and let us sleep longer).

    2.) You betcha (like it or not)

3.) Something has to give. I like the “Gold Standard” approach for heavy dev environments… I like having a gold standard to a certain point, restoring it, and applying change scripts as part of a build (monthly, weekly, nightly, etc., depending on the dev goals of the team and whether it’s a dev use or prod use) in an automated method from source control. When it comes to a warehouse-type environment where (don’t get me started by asking why) multiple copies are shared with multiple teams (I said don’t ask…), then I think the prod version is your prod gold standard and you just push that out from there.

About watching for changes and syncing them back, I can see the need for that, but it depends on how the copies are being used. In a prod WH, one copy should represent the truth and other teams just get pushed “the truth” (because the teams don’t play nicely together in one server… didn’t I ask you not to ask me why?). For development teams, I could see the gold standard being built/deployed on an integration server, with each developer potentially working on their own disconnected environment. In that case a check-in would be the signal that the change is ready to be merged. Then the gold standard is either just updated (if you are just having that integration server be the gold standard), or at some point the gold standard backup (if you are simply backing up a DB to play the role of “gold standard”) is just overwritten for every major or minor version number as a cumulative rollup (or something).
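The restore-and-apply build described above – stand up a fresh copy from the gold standard, then run the checked-in change scripts in version order – can be sketched in a few lines. This is a hypothetical illustration only: Python’s sqlite3 stands in for a real SQL Server restore, and the schema and change script are invented for the example:

```python
import sqlite3

def build_from_gold(gold_schema, change_scripts):
    """Stand up a fresh copy from the gold-standard schema, then apply
    the checked-in change scripts in order (oldest first)."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(gold_schema)   # the "restore the gold standard" step
    for script in change_scripts:     # e.g. pulled from source control, sorted by version
        conn.executescript(script)
    conn.commit()
    return conn

# Invented example schema and change script:
gold = "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);"
changes = ["ALTER TABLE customers ADD COLUMN email TEXT;"]
db = build_from_gold(gold, changes)
cols = [row[1] for row in db.execute("PRAGMA table_info(customers)")]
print(cols)  # ['id', 'name', 'email']
```

The key design point is that the scripts come from source control in a fixed order, so every environment built this way ends up with an identical schema.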

Anyway, not sure if I helped or just added confusion. Tired, going to bed.

    Reply
    • Ed Leighton-Dick
      June 17, 2009 10:00 am

      @Mike: “When it comes to a warehouse type environment where (don’t get me started by asking why) multiple copies are shared with multiple teams (I said don’t ask…)”

      I don’t have to ask… we do the same thing. 🙂

      Reply
  • Yeah, I do have a survey plugin, but I wanted text for this one just because there’s so much extra to say.

    For both of you guys – are you using any tools to make it easier like version control, a schema comparison product, or PowerShell scripts? What’s stopping you from using one of those?

    Reply
    • Ed Leighton-Dick
      June 17, 2009 10:04 am

      @Brent: “For both of you guys – are you using any tools to make it easier like version control, a schema comparison product, or PowerShell scripts? What’s stopping you from using one of those?”

      We aren’t using version control yet, but it’s on the radar. Where it’s practical, we use SQL Compare, but some of our databases have so much obsolete cruft in some copies of them that it’s not practical to use automated tools. In those cases, I’m doing any schema changes with handwritten scripts for now. We also still use quite a few DTS packages, especially for our warehouse, so any changes to those are also manual.

      As far as what’s stopping us from using the tools? Time, plain and simple.

      Reply
  • 1) RBA sounds great. I’d love to be able to automate away as much repetitive blah as I possibly can. Not because it’s dull, but because dull things are the places where we all make mistakes because, frankly, we’re bored and thinking about other things like dinner and pies and whatever else is important.

    2) Yes. I need to keep production in sync with development (schema) and development in sync with production (data). Sounds like fun, eh?

3) I’d like to see automation of scripts to move from one version of a database to another. Of course, this would require something more than a simple version control system – some kind of database property listing the version, perhaps ;). The more that I can automate and review for correctness, the happier I am. I don’t want to write T-SQL to move between versions and somehow make a mistake – I want to review T-SQL to move between versions and make a fix here or there.
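That “database property listing the version” idea can be sketched with a version table inside the database plus an ordered set of migration scripts. A hypothetical illustration – sqlite3 stands in for the real DBMS, and the table and migration scripts are invented:

```python
import sqlite3

# Invented migrations: version -> script that brings the schema up to that version
MIGRATIONS = {
    1: "CREATE TABLE orders (id INTEGER PRIMARY KEY);",
    2: "ALTER TABLE orders ADD COLUMN placed_at TEXT;",
}

def current_version(conn):
    """Read the version 'property' straight out of the database itself."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    return row[0] or 0

def upgrade(conn, target):
    """Apply each pending migration in order, recording the version as we go."""
    for v in range(current_version(conn) + 1, target + 1):
        conn.executescript(MIGRATIONS[v])
        conn.execute("INSERT INTO schema_version VALUES (?)", (v,))
    conn.commit()

conn = sqlite3.connect(":memory:")
upgrade(conn, 2)
print(current_version(conn))  # 2
```

Because each migration is a plain script, it can sit in source control and be reviewed by a human before it ever runs – automation of the boring part, review of the risky part.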

    Reply
  • Well me and my wordiness appreciate this style of voting 😉

    To answer your question:

In my current gig there is no great process to keep things in sync, and it isn’t happening like I would want. I would probably end up using a schema comparison product (I have Red Gate’s tools and Visual Studio TS for DB Pro) with source control. I have done it with these tools in the past; I’ve also done it by just requiring developers to check everything in (as they should anyway) and deploying programmatically through source control. To be honest, every approach I have tried has been informal and almost hacked together.

    Which leads to the next question: “What’s stopping you from using one of those?”

The biggest frustration with the release process/change process is time. In most environments I have been in, the DBA role has not been an understood/respected role at the organization prior to my coming on board (probably less respected after ;-0 ). So I find myself chasing my tail trying to get things set up right, supporting prod, and getting stuck in design and architecture meetings (not complaining – planning is KEY), but I have never spent considerable time researching and figuring out the best approach.

I’m actually excited about your survey, and I hope more people answer – and that this turns into some insight from you, and maybe a product from Quest someday 😉

    Reply
  • Joel Coehoorn
    June 17, 2009 10:49 am

    > “I’ll hold back my own answers for a couple of days.”

But in stating that, you just answered one of the questions 😉 You admit you have answers to all three, and that implies that your answer to #2 is “Yes.” Of course, I’m sure you’ll put a lot more detail behind it later, but I kind of thought that was funny.

    Reply
  • Brent,
Not sure if what we do is called Run Book Automation, but I’m guessing it is close. We can take a SQL Server from crate/rack to live in under 3 hours: configured for the domain, surface area configs, SAN, standard sql jobs (backups, cycling logs, index mgmt, …). It is working for standard SQL installs, but we need to get one set up for our Analysis Svcs setups. It took some time to set up, but it’s well worth it. For example, I was able to crank out 12 new servers this morning during our maintenance window.

Yes, we have multiple copies of the same image/core db. In one environment, due to security and confidentiality of client HR types of data, we have to do this so that client data isn’t commingled. So each project has its own data; we start with a common schema and default data, but from there it can be customized by the implementation teams.

The change management is a bugger with this one. When we release updates or bug fixes, it is a major headache because we may have as many as 400 databases to test and update. What I would love to see is some sort of VSS within the DBMS itself. We have tried VSS, whereby the developers script the objects, but it just doesn’t work when you have a large number of people developing db objects.

We have one DB that is just for the developers to code against and test with, but we have an image database that is read only, and the Sr DBA can apply scripts/changes to it after all the kinks have been worked out. That, in essence, is our gold standard (or build version, as we call it).

    As for change management, I prefer automated not automatic. It’s that whole trust but verify thing.

    Reply
  • Alright, time to add my own answers:

    Question 1: How do you feel about Run Book Automation for databases?

    In theory, I love Run Book Automation. In practice, the time it takes to implement a successful solution like this (with today’s real-world tools) outweighs the time savings of automation. One of the things I love about database administration is it’s never the same day twice – and that makes for tough automation.

    I’ve had this discussion with a few people around PowerShell. There’s a growing community of PowerShell scripts, and some people think we should be able to automate ourselves out of a job. However, when I look at the staggeringly large community of T-SQL scripts, and I look at how we’re all still working, that doesn’t add up for me. Just because the scripts and tools are out there doesn’t mean our jobs are over. It takes time to save time, and sometimes it takes even more time than you save. The ROI isn’t there for me in Run Book Automation until you get to the enterprise where you’ve got thousands of SQL Server instances. Even there, you’ll need full-time staff to manage the automation solution, and you’d better be *really* certain it’s going to work before you automagically change a SQL Server’s configuration.

    Like Paul Randal says, the answer is always It Depends, and it’s hard to code all those rules into a tool. Heck, we’ve got simple backup and recovery tools that aren’t even able to handle every permutation of error messages and configuration problems. Doing Run Book Automation scares the pants off me. (Literally. I’m in bed with no pants on. It’s 10pm on a Sunday, though, cut me some slack.)

    Question 2: Do you have multiple copies of the same database?

    As Joel suspected, my answer is yes, heh. My needs, though, only revolved around homegrown applications. We had several in-house-built apps that we needed to sync between production, QA/test, and development environments. We did not do this syncing (either data or schema) with third party applications – we left that up to the third party apps to manage. Whenever we touched a third party app’s database, we voided the warranty, as they say.

    Question 3: If yes, how do you feel about automated patch and configuration management of databases?

    In our homegrown apps, we had two separate problems: we needed to migrate data from prod to QA to dev, and we needed to migrate schemas from dev to QA to prod. Whenever we did either of these, it was managed in combination with the developers. We never migrated data or schemas in a vacuum: we did it hand in hand with code changes by the development team.

    Even when we had a gold standard schema, we couldn’t handle variations in an automated fashion. If a schema strayed from the gold standard, it’s because somebody changed it – whether it was an added index, an added field, additional child tables, whatever. These changes are made by humans, and we wanted a human to get involved to decide what to do next. We never wanted to lose someone’s schema changes, and we never, ever, ever wanted to lose someone’s data.
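That review-first stance – detect drift against the gold standard, report it, and let a human decide what happens next – can be sketched like this. A hypothetical illustration only: schemas are modeled as plain dicts of table name to column set rather than a real catalog query, and the table names are invented:

```python
def schema_drift(gold, actual):
    """Compare a live schema against the gold standard and report the
    differences for a person to review -- nothing is changed automatically."""
    drift = []
    for table, cols in actual.items():
        expected = gold.get(table)
        if expected is None:
            drift.append(f"extra table: {table}")
            continue
        for col in cols - expected:
            drift.append(f"extra column: {table}.{col}")
        for col in expected - cols:
            drift.append(f"missing column: {table}.{col}")
    for table in gold.keys() - actual.keys():
        drift.append(f"missing table: {table}")
    return sorted(drift)

# Invented schemas, modeled as table -> set-of-columns:
gold = {"customers": {"id", "name"}}
live = {"customers": {"id", "name", "fax"}, "scratch": {"x"}}
print(schema_drift(gold, live))  # ['extra column: customers.fax', 'extra table: scratch']
```

The output is a report, not a sync: an added index or child table shows up as drift, and a human decides whether to fold it into the gold standard or roll it back.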

    I love automated patch and config management for Windows servers because developers don’t put time and effort into Windows configuration settings. People just accidentally (or purposely) break configurations, and we want to get them back to a known standard ASAP to prevent hack attacks. Databases aren’t as easy to change, so when changes are made, we need to be more careful about automagically undoing them.

    Reply
