Who’s Really Doing Continuous Integration of SQL Server Databases?

One of our training class attendees asked a great question: who’s really doing this stuff? We talked about it in class, and I figured it’d make a great blog post question too.

So if you’re doing CI (automatic deployment of database changes), answer in the comments with:

  1. How many database developers do you have?
  2. How many person-hours did it take to get CI working?
  3. What tools do you use?
  4. What benefits have you achieved by using CI?
  5. (New) Is the application deployed in only one database, or across many nearly-identical copies of the same database? (Could be internal or external.)

(Update 12/24 – I added question #5 because there’s a bit of a separate topic there. If you manage an application that gets deployed to customers – either hosted internally or externally – like if you call yourself an ISV, or sell access to these databases & applications to multiple clients – mention that in your answer.)


49 Comments

  • So we’re just talking continuous integration, not continuous delivery – correct?

  • Ok you hit a nerve (although I imagine you knew that already), so apologies if a soap box comes out:
    We have had automated SQL Server Database Continuous Integration and Continuous Deployment in Dev and Test since 2011. Personally I’ve been an avid DataDude user since it came out in 2006 (within VS 2005 UI).

    1.) 5 database developers working closely with 2 DBAs (separate reporting structure)

    2.) It took myself and one app developer about six weeks – but that’s because we did SSIS, SSRS and SSAS as well.

    3.) TFS, SSDT, SQL Server – Since 2011 we have been using a custom TFS build. The way it works is: on check-in it deploys a brand new database from scratch, then runs all the database unit tests (SSDT ones, not tSQLt); the tests create the data and destroy the data after each test. Data is managed through SPs within a Unit Test schema (SSDT partial project). Assuming the tests pass, it does a deployment to a database with data already in it. We can also target multiple versions of the same database.
    Recently we’ve been working on overhauling it so that on git check-in we’ll spin up a new SQL Server instance via Chef, deploying code via Jenkins and testing.

    4.) Countless – the majority of our work is database-heavy with significant logic within the db – so having procs be unit tested on check-in, or creating new unit tests for bugs, has been invaluable. That said, the tools are still so raw that keeping it going is an ongoing investment – for example Service Broker introduced new annoyances to ensuring it worked. And outside of MS technologies, where SSDT doesn’t exist…such a pain.

    I’ve always wanted to do an actual presentation on all of this, but…life. However, I have to say that it is frustrating, as someone who has been doing this for years, to still have to make the argument of Why and then still have to get the duct tape out to get it done. And in the non-MS world it’s even worse – the options seem to be –
    a. manage DB outside of development
    b. just tables/data store – LINQ and ORM everything
    c. each change is a separate script so to get the current version you always have to run through all the operations in order, no idea of current state in source

    • What about the CI/CD system we have running with a combo of Go, Chef, and PowerShell? That took a few months to get up and running, maybe? Benefits from it were huge – we just finished v1 of an application and I think I had to write a total of three deployment scripts for the entire thing, each for destructive changes not handled by our incremental publishes. This also integrates SSDT unit testing.

    • We’re using CI for databases. I’m a huge advocate for it and evangelize for its adoption at every opportunity. We’re also moving towards automated deployment, but we’re not quite ready for continuous deployment.

      How many database developers you have?
      [7 using CI on my team alone. There are others at my company.]

      How many person-hours did it take to get CI working?
      [CI only? Almost no time. If you already have TFS, it’s a trivial set up. If you don’t have TFS or TeamCity set up, that’s where the time comes into play.]

      What tools do you use?
      [Visual Studio and TFS]

      What benefits have you achieved by using CI?
      [We combined TDD with CI. We have dramatically reduced the number of defects found by QA and have achieved significant time savings by having automated unit test suites that allow us to validate changes extremely quickly. It has also forced us into much better source control practices. Since we are dependent upon CI for our development processes there is no more ‘developer forgot to check in the code’ happening.]

    • Hi Aaron,
      as you describe that you have a lot of logic within the database (so do we), how do you version control those database objects? Are you using migration scripts for CI?

      We tried the Redgate tool belt, but with some database tables with necessary static data we always ran into memory exceptions within SSMS – also, Git integration within TFS (in 2017) is still not possible (we would have to change from a TFS server to a Git server to make this work).
      The loading processes when committing or comparing take forever with SQL Source Control (due to ~4500 database objects).

      I’m highly curious how you/your company is achieving a seamless CI with your database.

      • Timon,

        At the time I wrote this we were heavily invested in SSDT with TFS. SSDT is declarative and the release/build script is done at deploy time or compare time. We automated Development and Test deployments, however when it was time for Production we would create the script manually and review/test it manually. We also found memory issues with SSDT Compare and found that RedGate was better at it. Actually wrote some PowerShell to automate the RedGate compare command line since we had multiple “client” databases with the same schema.

        We eventually moved away from TFS source control and builds to GitHub Enterprise and Jenkins, still using SSDT. We also incorporated Chef to build the SQL Servers themselves.

        Hope that helps,

        –Aaron

  • If you’re not doing this then you should be. It’s overdue for databases to come in from the cold adhoc world of manual scripts and “fiddling” to make things work.
    Yes, there is pain setting it up – especially if you have lots of three- or four-part names, replication, CDC etc. Yes, you have to hand-crank almost all of the BI stack, SQL Agent jobs, server settings etc.
    But it is worth it.
    The ability to create an environment on demand is priceless. Reliable releases are by far the biggest win from a business/visibility stance. But there are also the days/months I’ve seen wasted with DBAs restoring databases and sync’ing data back to dev/test. I’ve even seen places with DBAs dedicated to non-prod environments for this stuff!
    SSDT and TFS/VSO work a treat. TeamCity works a treat.

  • How many database developers you have? about 100, 300 developers total
    How many person-hours did it take to get CI working? about 80
    What tools do you use? homegrown. It’s actually a quite simple process using powershell.
    What benefits have you achieved by using CI? For ISVs that support multiple “versions” of databases and software this is absolutely critical to ensuring changes work with each of the different versions. For table DDL this is EXTREMELY critical. The biggest benefits are customer satisfaction (fewer bugs, quicker time to market) and less developer rework.

    We apply our database scripts in toto for every build, meaning the scripts are idempotent. We execute them against each release of our database found in the wild. We grab lots of metrics, such as timings for each build. At the end of each “build” we run a database diff utility to ensure that each upgraded version matches the “net new” version of the database schema-wise. We also run our tSQLt unit tests at this time, as well as data compares for lookup/model/system data.
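    A sketch of what “idempotent” means here, with hypothetical object names: every script guards its own DDL, so running the full set against a database at any prior version converges on the same schema.

    ```sql
    -- Re-runnable: safe against a database at any prior version.
    IF OBJECT_ID('dbo.Customer', 'U') IS NULL
    BEGIN
        CREATE TABLE dbo.Customer
        (
            CustomerId INT IDENTITY(1,1) PRIMARY KEY,
            Name       NVARCHAR(200) NOT NULL
        );
    END;

    -- A column added in a later release uses the same guard pattern,
    -- so old and new databases end up schematically identical.
    IF COL_LENGTH('dbo.Customer', 'Email') IS NULL
    BEGIN
        ALTER TABLE dbo.Customer ADD Email NVARCHAR(320) NULL;
    END;
    ```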

    Frankly, not all of my customers need CI. If you support just a handful of dbs where there is only one or a handful of supported releases (like one internal website) then you are probably OK not doing CI. If you are an ISV, ie you support many customers each with potentially different versions of your software, different release branches, etc, then I don’t know how you can possibly manage WITHOUT CI.

    At the ISV I consulted for we put together a lot of metrics around reductions in rework, bugs, etc. I could share those stats, they are eye-opening…so much so that people think the metrics are bullshit. The real metric, the only one that matters, is that within 2 release cycles, just 2, our customers were raving about how much more stable the software was and how it no longer took entire weekends to do upgrades. Internally our developers loved it because they could immediately spot and fix bugs when the build broke, vs waiting for QA to catch it, potentially weeks later.

    For those that question why CI matters (or tSQLt unit tests, etc.) and its benefits…just think about it this way: it is proven in the Java/C#/etc. world that best practices like CI and unit tests make for better code. Why would you think that databases and SQL would be the exception? Certainly db CI is MUCH more difficult to do, but it can be done. Your customers will thank you.

    • “Frankly, not all of my customers need CI. If you support just a handful of dbs where there is only one or a handful of supported releases (like one internal website) then you are probably OK not doing CI.”

      I found this not to be the case. I was using a limited set of CI tools when I was a one-man dev team (business analyst, architect, dev, test, etc.). The CI tools made it possible to keep up with the workload by taking away most of the pain of deployment: even just using the Generate Script option of the Publish feature in SSDT saved me boatloads of time, and I was a rebel in the organization for using SSDT for database development at all.

    • Which metrics did you gather surrounding process improvements through CI? Did you already have repeatable processes documented before diving into PowerShell, or did process and scripts evolve simultaneously?

      I’m the only DBA for the organization (with a team of report developers and end users of multiple other production databases as internal clients), so having a way to measure improvements in my own processes beyond database performance gains would help me see where I’m hobbling my own best efforts!

      I agree about the difficulty of manually maintaining multiple flavors and variations of servers, that’s one of the issues here: almost every SQL Server instance is a different version and/or edition, and a few are the only hosts for specific features.

  • How many database developers you have?

    We have 3 people that will check in DB code.

    How many person-hours did it take to get CI working?

    I’m not totally sure since it was set up before I started, but I would guess probably 40.

    What tools do you use?

    We have TeamCity, and PowerShell scripts that perform check-ins. Once the code is checked in, it’s automatically deployed to a special copy of the database that the CI site runs on. Basically, if it’s checked into the trunk, it’s being tested. We also added hooks to TeamCity to run our tSQLt unit tests with every build.

    What benefits have you achieved by using CI?

    Peace of mind, mainly. We know that all of our integration tests (running hundreds of web service calls and other things) pass with the new code. We also know when we break something pretty much right away, so for complicated code/schema changes it’s extremely valuable. We’ve had scenarios where what we thought was a small change (adding a column to a seldom-used table) broke things in a bad way because of some older inline SQL in a web service that used SELECT * or didn’t qualify column names with aliases.
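    A minimal tSQLt test of the kind hooked into a build might look like this (the procedure and table names are hypothetical). tSQLt.FakeTable swaps the real table for an empty stand-in, so each test creates and destroys its own data:

    ```sql
    EXEC tSQLt.NewTestClass 'testOrders';
    GO
    CREATE PROCEDURE testOrders.[test GetOrderTotal sums line amounts]
    AS
    BEGIN
        -- Replace the real table with an empty fake, then seed known rows.
        EXEC tSQLt.FakeTable 'dbo.OrderLine';
        INSERT INTO dbo.OrderLine (OrderId, Amount) VALUES (1, 10.00), (1, 5.50);

        DECLARE @actual MONEY;
        EXEC dbo.GetOrderTotal @OrderId = 1, @Total = @actual OUTPUT;

        EXEC tSQLt.AssertEquals @Expected = 15.50, @Actual = @actual;
    END;
    GO
    -- The build server runs the whole class on every check-in:
    EXEC tSQLt.Run 'testOrders';
    ```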

  • At a previous company…

    1. How many database developers you have?

    There were 5 full stack developers who daily did T-SQL development, one dedicated database developer, and one report writer.

    2. How many person-hours did it take to get CI working?

    This is a difficult question to answer, because when it was “working” is hard to define. Was it working when we began using it for production releases, or not until we reduced issues to a certain level? Also, are person-hours that were required because of the CI’s maturing stage counted, or just those spent directly working on the CI process?

    What I can say is that it took about 18 months to go from the infant stage to a mature process where we would perform automated production releases during our peak weeks of the year with reasonable levels of confidence. During those 18 months there were 3-8 people directly involved (though not dedicated) in developing the process or dealing with problems related to it / tweaking. We are definitely talking above 1,000 person-hours.

    3. What tools do you use?

    We used TFS, Microsoft Release Manager, PowerShell, SSIS, and SSDT.

    4. What benefits have you achieved by using CI?

    Above all else, reduction in human error and a repeatable process. We also reduced deployment times significantly. We could push a bug fix with light-touch testing (automated only) through three non-production environments in about an hour, then push to production with downtime equaling the minimum time it took to execute the SQL commands. For example, a stored procedure change would result in single-digit seconds of downtime, but a table refactor would take longer depending upon the data size.

    The organization takes in the majority of their revenue in just two weeks of the year. Our confidence level with successful deployments was high enough to push production releases during those two weeks.

    • “The organization takes in the majority of their revenue in just two weeks of the year. Our confidence level with successful deployments was high enough to push production releases during those two weeks.”

      !!!!

      Case closed.

  • Tools used: Red Gate CI – years now of product development and a “mature” solution. It integrates with the other components of our toolchain, many of which were mentioned by other posters. I do not work for MSFT or Red Gate, but I am glad I do not have to (as in a previous life) work with Oracle, or MySQL / MariaDB glue-it-all-together-yourself-if-you’re-a-real-man options 😉 For those interested: red-gate then /products/dlm/dlm-automation-suite/sql-ci

    Gr8 and timely post Brent! Thank you! Really good and helpful to hear where others are in their SQL Server CI and “rationalization” of database development journeys.

  • Did this at my previous job.

    #1. 20-25
    #2. About a month to get this working. We started with brand new databases for new projects.
    #3. TFS for source control. Custom code wrapper around RedGate SQL Compare API to deploy code from code repository.
    #4. Source control was the source of truth. Easy deployment. Faster development. Fewer deployment errors.

    Even though schema/code deployment was handled by RedGate, we still had to use scripts for data changes. This was done as part of the post-deployment process.

  • We are using a homegrown CI. We have ~10 developers, who funnel their db requests through me; I wrap them in a script, make sure it is re-runnable, and insert a record in our audit table. They are checked into TFS, and a nightly process runs an executable (written in C#) that pulls down any new scripts and applies them to the target environments. It emails us the results (success / failure + reason).

    We use the same scripts to deploy out to clients and have the same classes / internals in an executable to apply the scripts.

    As part of our product build we query the audit table and compile into it the db changes it needs to run.

    Benefits : minimal human intervention, way improved accuracy [some releases may have 500 scripts; ask someone to apply all that in order and stop when an error occurs – good luck] and guaranteed audit trail. Also guaranteed that all scripts were applied upon product startup.
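    The re-runnable-plus-audit-trail wrapper described above can be sketched like this (the audit table and script name are hypothetical): each wrapped script records itself on success and skips silently if it has already been applied, which is what makes applying 500 scripts in order safe.

    ```sql
    IF NOT EXISTS (SELECT 1 FROM dbo.ScriptAudit
                   WHERE ScriptName = '2015-12_AddInvoiceStatus.sql')
    BEGIN
        -- The actual change, only ever applied once per target database.
        ALTER TABLE dbo.Invoice ADD Status TINYINT NOT NULL DEFAULT 0;

        -- Record the run so re-execution (and the product build) can see it.
        INSERT INTO dbo.ScriptAudit (ScriptName, AppliedAtUtc)
        VALUES ('2015-12_AddInvoiceStatus.sql', SYSUTCDATETIME());
    END;
    ```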

    We have approx. 30-40 hours involved in the process that has evolved over a couple years.

    Worth its weight in gold at this point.

  • We use DbUp + Octopus Deploy.

    We have a step that backs up the database before the migration step runs.

    Things have been going well with our new setup for over half a year.

    Database migration steps are well tested in dev/qa staging environments before production is ever touched.

    So, with careful scripts that are well exercised before production, DB CI is obtainable!

    • Oops forgot to answer the questions….

      How many database developers you have? 3

      How many person-hours did it take to get CI working? Whew…. that’s a tough one. A lot of time in our case went into evaluating tools, setting up tools for the entire environment (not just database)… since I can’t answer, I won’t try…. but I’ll say it wasn’t a trivial process.

      What tools do you use? Answered this part above.

      What benefits have you achieved by using CI? Deployments are fast and easy. Get stuff to end-users quicker.

  • I’ve done this a few times before.

    “How many database developers you have?”

    5-50, depends on the project. In some cases those were not dedicated SQL devs, but engineers that can also do SQL (topic for a different discussion)

    “How many person-hours did it take to get CI working?”
    I don’t have a rough estimate, however, it gets better the more projects you implement. Learning curve stuff…

    “What tools do you use?”
    SSDT, Jenkins/AntHill and a tool I wrote that blocks major table re-creation (explained here on the upper part of the post – http://sqlstripes.blogspot.com/2014/12/introducing-labs-our-app-playground.html)

    “What benefits have you achieved by using CI?”
    Basically, the same ones you get with any type of CI. Also, consider this: if your organization’s software runs in a CI process, you should be part of it rather than taking the manual, delayed deployment approach (well, at least in the lower environments, although I have done CI that goes all the way to production).

  • 1. There are approximately 11 database developers here.
    2. About 80 hours to get CI working.
    3. TFS, PowerShell, and SQL Agent are the only tools. SSDT has been considered, and we might go that way in the near future.
    4. Deployment of a moderately complex data transformation project was executed flawlessly because we had in effect rehearsed the deployment in test environments for months during every phase of development. The success of this deployment was very important to “the business” because “the business” is a 24/7 services organization that requires continuous access to constantly changing data. In spite of the pressure to succeed, the deployment was actually a little bit boring because it had been tested so thoroughly! Credit goes to the project manager who was able to negotiate enough calendar days and server resources for us to do it the right way (no, that wasn’t me).

  • 1. How many database developers you have?

    70+ full-stack devs, nearly all of whom do some degree of database development. We have no dedicated database-only developers.

    2. How many person-hours did it take to get CI working?

    Less than a week to get the initial capability in place, but we’ve made many improvements to it over the years — primarily better tooling for automated database tests and more flexible deployment/publish options. We’ve been doing it in some form since the first SSDT release.

    3. What tools do you use?

    SSDT, some SQLite (largely replaced by LocalDB), acres of custom MSBuild scripts, Jenkins.

    4. What benefits have you achieved by using CI?

    Automated database-integrated “unit” tests, ease of maintaining database scripts under version control (with all the benefits that that entails), locked down deployments (only the build servers have schema modification rights), single-click deployment of the entire product… I can’t imagine doing without it now.

  • 1. A dozen or so database people.
    2. Much was already in place because of the app devs, maybe 10 days or so of effort but constantly evolving.
    3. SSDT, tsqlt, PowerShell, Jenkins, git, nuget, artifactory.
    4. Warm feeling of smugness when answering polls such as this.

  • I’m not using SQL Server anymore, I’m using a Hadoop distro with Cronacle for job orchestration (aka workflow), but yes, we’re doing CI. It was the first thing I put in place when I joined the team. Tooling for Hadoop (and Cronacle come to that) is light years behind SQL Server yet we still saw massive value in CI so have rolled our own CI and deployment mechanism for Hadoop and Cronacle.

    1 how many DB developers – 9
    2 countless, and we’re still evolving it. The investment pays off in spades though.
    3 a homegrown CI framework built in PowerShell and upon psake (google it) and nuget. We run our CI builds on TeamCity
    4 numbered builds, enabling us to track deployed code all the way back to CI and then further back to source control. CI is a prerequisite to automated testing. Find problems as early as possible. Countless more benefits, but mostly it’s brought a sea change in our culture to one focused on continuous improvement.

  • How many database developers you have? We have ~50 onshore and nearshore resources that will check in database changes. We have had CI on dev servers for 4 years, give or take. Recently we have made the leap to all environments. We currently have database changes across ~6 projects being CI’d to ~10 environments across ~75 SQL instances.

    How many person-hours did it take to get CI working? approx 45-145 depending on the version. The original version of our CI was strictly publishing database projects to target servers via Team City, we developed a more complicated but ultimately better solution after we made efforts to tenant our application.

    What tools do you use? Visual Studio Database Projects, TeamCity, Powershell, Nuget, OctopusDeploy

    What benefits have you achieved by using CI? Being able to guarantee that databases are in a similar state with little to no manual intervention. Automatic testing/validation of database changes. Simplify management and the release process. Code is always bundled with the correct version of the database

  • 1. How many database developers you have?
    Yours truly… but like Stack Overflow, most of our developers are very skilled

    2. How many person-hours did it take to get CI working?
    A team of 10 full time for a couple years. Plus full buy-in from the entire company. But this is automating all software, not just db changes.

    3. What tools do you use?
    Jenkins, VSphere, Redgate SQL Compare, tons of other custom built solutions

    4. What benefits have you achieved by using CI?
    Features released at least monthly instead of yearly. Every DB is now a known state rather than a special flower per client. No more planned maintenance windows. No downtime.

    High-volume OLTP systems are really hard to change online. We’ve achieved amazing benefits and it would be wonderful to brag about it soon.

  • I’m not working in a dev team, but I work with many of them.

    I’m a pre-sales engineer at Redgate, and over the last 5 years I’ve supported hundreds of teams across all business sectors on every continent except Antarctica to roll out CI processes. Sometimes it’s large teams, sometimes it’s a single developer.

    I’ve worked with Redgate tools, SSDT, DbUp and Flyway. All are pretty good.

    Examples:
    Farm Credit Services of America (US financial services)
    Skyscanner (price comparison website: various case study docs available if you Google it)
    Wehkamp (2nd largest online retailer in Netherlands)
    AlbumPrinter (and the DBA started blogging about it: http://www.devopsdba.com)
    All examples have dozens of developers at least.

    Also, I blog about common practices and POCs I put together here: workingwithdevs.com

  • 1. How many database developers you have?
    Just me, but like Michael J Stewart and Stack Overflow, most of our developers are very skilled

    2. How many person-hours did it take to get CI working?
    It took me a month or two, but I’m not sure I’m happy with the results just yet.

    3. What tools do you use?
    TeamCity and a lot of custom powershell scripts using SMO

    4. What benefits have you achieved by using CI?
    Other than smugness when answering similar questions, we now know the state of every database – no more diffs required.

  • How many database developers you have?
    3 developers full stop. All involved in the db, no specific “dba” role.

    How many person-hours did it take to get CI working?
    Created it in my lunch hours 3 years ago to prove a point, but I’d say 2 weeks to set up CI on the existing TFS, and another 3 months to set up the auto-deployment web services back end, WIX Installers and database bits.

    What tools do you use?
    TFS, TeamCity, WiX, SSDT.

    What benefits have you achieved by using CI?
    For something that I set up as a proof of concept in my own time at lunch, it has gone on to become fundamental to the way the company works (dev, support and installation teams anyway, plus sales).

    Assuming that the definition of CD is that every check-in is pushed to customers, I should probably say that we could use CD but we’re not – our customers would not want that. We use CD internally so that all our staff can get used to the new features before customers ring up and want help with it! Also external customers have different profiles depending on support levels so some may only get updated every 3 months, and some not at all if they’re out of support altogether. Some large organisations also have IT departments that want to be in the loop / only update yearly (! 🙁 )

    Support staff used to manually remote in to install updates; now this is done with the flick of a switch, and support staff can watch the various logs for potential issues, of which luckily there seem to be relatively few. The CD infrastructure means we can keep all our supported clients at the same version (other than the large Govt. institutions mentioned), which we wouldn’t have had the manpower to even think of doing manually.
    Initially, support were never 100% sure what version a client was on until they remoted in. Now they know beforehand and can push a given version to a given client if needed, to fix an issue that may only apply to them.
    You could say I’m a fan.

    • Thanks Andy. I’m trying not to comment – I wanna let the audience paint a picture before I chime in – but I’m realizing I should have added another question: is it an internal-only application, or an externally distributed one? Gonna add that to the post now.

  • How many database developers you have?
    The ones that work on the project that uses CI… between 6 and 12

    How many person-hours did it take to get CI working?
    Don’t remember, it was done in phases… from “click here to do a build”, later on it was automated… but it was not that complicated since our regular builds are done from SVN anyway

    What tools do you use?
    SVN, tSQLt, MStest, powershell, Jenkins

    What benefits have you achieved by using CI?
    We know right away when something broke, before it hits test/staging.
    We have Dev, dev-int, 2 test servers, 1 staging server…per DB instance
    CI is used on Dev; the other ones are built on a schedule set by the build team. So if something is broken we will know immediately (we get an email from Jenkins with failures, and unit test failures as well). We then fix the issues before the scheduled builds happen.

    (New) Is the application deployed in only one database, or across many nearly-identical copies of the same database? (Could be internal or external.)
    Several databases on 1 server…but dev, test, staging, dev-int and production look identical

  • Nathan Oglesby
    December 25, 2015 4:07 pm

    1) It took two DB devs to create the CI. It manages over 300 prod databases and an unknown number of UAT, DEV and local SQL DBs.
    2) The CI is always getting tested and improved with every release. Maybe a few months but it gets improved often and is continuously deployed.
    3) We use a custom in-house script manager built with Visual FoxPro. Source code is within TFS and Vault.
    4) Immensely improved our releases. Internal deployments don’t need to be scheduled with any infrastructure staff, and external ones just need to be scheduled for ASP code changes. Each app maintains all the scripts needed for that version. The app can point to a new server and set it up from scratch, or it uses checksums on its internal data to compare against the target DB’s ScriptData tables. Dependencies are found when a script is executed and marked as forced runs. Some scripts have dynamic code, so there is a comment section to force other scripts to run. We also maintain a dependency map to ensure objects are created in the correct order. It works fantastically and we are getting ready to port it over to .NET and include it within the existing apps. Lastly, we only build once and move apps from DEV to UAT to PROD with that single build.
    5) This is internal only development and deployment.

  • This was back in 2006, so it’s quite a bit old. Here is what we set up for our startup HubCast, which sadly just closed down (bad marketing – the technology got a Gartner tech award back around 2008).

    .NET, SVN, CruiseControl for CI, ERwin for database design. The database changes still had to be manually worked out and set up – ERwin (at least back then) did not handle changes perfectly.

    Worked well

  • • How many database developers you have?
    No dedicated database developers, but 15 .NET developers who also do database development at various degrees.

    • How many person-hours did it take to get CI working?
    To set up a basic POC took me a day, so 8 or less. We already had SVN, TeamCity and Octopus Deploy. To set up CI for all of our databases took a little bit longer, but about 1 month for the first 25 databases, including source controlling them.

    • What tools do you use?
    Redgate’s DLM Automation Pack and SQL Source Control, SVN, GitHub, TeamCity and Octopus Deploy.

    • What benefits have you achieved by using CI?
    To name a few: repeatable, reliable and more frequent deployments, thus faster time to market. More visibility. Shorter feedback loop. More collaboration between co-workers, less ‘throwing over the wall’. A reduction of waste. For example:

    – No more handcrafted change scripts.
    – No more coordination hell to deploy them.
    – Faster and easier reviews.

    • Is the application deployed in only one database, or across many nearly-identical copies of the same database?
    Both. It’s possible to have one source control repository for one database used for different clients with build configurations for every client. Those build configurations can be configured to create client specific builds which can be deployed to only their respective databases.

  • Merrill Aldrich
    December 28, 2015 1:32 pm

    I just completed the first larger, “full blown” warehousing project with a client where we were able to do this end-to-end. Overall it was FANTASTIC, though with some gotchas to beware of.

    1. How many database developers do you have?

    Six

    2.How many person-hours did it take to get CI working?

    The TFS environment existed already for other teams; I’d say 80 dedicated hours spread over 12 months to really get the team trained and everything running smoothly. They essentially had to learn TFS and SSDT from zero to CI while doing the project because many had not used source control at all before, so there was additional time just in the course of normal work.

    3.What tools do you use?

    SSDT on TFS, supplemented by PowerShell scripts
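    For readers wondering what such a pipeline step can look like, here is a minimal sketch (not Merrill's actual scripts; the server, database, and dacpac names are made up) of publishing an SSDT-built .dacpac with SqlPackage. DRY_RUN=1 (the default) just prints each command instead of running it:

```shell
#!/bin/sh
# Hypothetical sketch of an SSDT publish step (not the commenter's actual
# scripts); server, database, and dacpac names are made up.
# DRY_RUN=1 (the default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# Publish the dacpac produced by the SSDT build, refusing any change
# that SqlPackage detects as potentially losing data.
run SqlPackage /Action:Publish \
    /SourceFile:MyWarehouse.dacpac \
    /TargetServerName:ci-sql01 \
    /TargetDatabaseName:MyWarehouse_CI \
    /p:BlockOnPossibleDataLoss=true
```

    In a real pipeline the surrounding PowerShell would typically also handle credentials and per-environment publish profiles.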

    4.What benefits have you achieved by using CI?

    Quality of the deployed databases was the biggest benefit. Second was well-integrated code and database schema among the team members, with no ambiguity about “the” source code. Third the ability to reproduce the environment in a few minutes, with high accuracy.

    5.(New) Is the application deployed in only one database, or across many nearly-identical copies of the same database? (Could be internal or external.)

    One production instance plus supporting environments for DR, dev/test and local developer workstation instances.

  • We’re not quite to CI yet, however we have been treating databases as code and automating SQL database deployments for over 10 years.

    A great tool (that I’m not associated with, just a satisfied customer) that hasn’t been mentioned is DB Ghost. It excels at scripting database objects, building scripts into databases and especially making a target database identical to a source (schema and/or data). There is no need for the target to be at any particular schema revision, the tool works out differences on the fly. There are hooks for pre and post update processing, but we haven’t had to use them often.

    It’s been around longer, even though it’s not as glossy or expensive as Redgate. It’s addictive to be able to consistently update and deploy databases without worrying whether the update script ran correctly.

  • 1.How many database developers you have?
    We have a few people with significant SQL knowledge, but generally, the entire team (~12 people) are “database developers,” because our database is emergent from our code.

    2.How many person-hours did it take to get CI working?
    Ongoing, but 1,000+ easily so far.

    3.What tools do you use?
    Our core is EF Migrations, but with a LOT on top. We have custom tooling written in C#, PowerShell wrappers, and we use MS Release Management for the actual deployment orchestration.

    4.What benefits have you achieved by using CI?
    The database is *always* automatically deployed – locally, in test, and in production. It gets tested along with everything else. Our upgrade process is identical on a dev workstation and in production, and this means we’re testing that process dozens or hundreds of times a week.

    5.(New) Is the application deployed in only one database, or across many nearly-identical copies of the same database? (Could be internal or external.)
    Depends on the product, but we have some that have dozens or hundreds of databases. Indeed, one of our applications is deployed to SQL Azure and we’ve built our CI process to handle deployment to *thousands* of databases during a rolling upgrade.
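    As a rough illustration of what a rolling upgrade across many near-identical databases can look like (this is a hypothetical sketch, NOT Galen's actual tooling; the tenant names, server name, and the dotnet-ef invocation are all assumptions):

```shell
#!/bin/sh
# Hypothetical sketch of a rolling upgrade over many near-identical tenant
# databases (not the commenter's actual tooling; tenant names, the server,
# and the dotnet-ef call are assumptions). DRY_RUN=1 only prints commands.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# Apply pending EF migrations to each tenant database in turn, stopping
# the rollout at the first failure so bad changes don't spread.
for db in Tenant001 Tenant002 Tenant003; do
  run dotnet ef database update \
      --connection "Server=tcp:myserver.database.windows.net;Database=$db" || break
done
```

    A real rollout across thousands of databases would also want batching, parallelism, and per-tenant success tracking so a failed upgrade can be resumed.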

    You can download a semi-up-to-date copy of our tooling here: https://github.com/GalenHealthcare/Galen.Ef.Deployer

    We’ve added a TON of stuff to this over the last few months, and hopefully we’ll get that stuff into the repo soon.

    In closing – I’d never do SQL-based applications without CI again, but it was a LOT of work and continues to be a big part of our overall development effort.

  • Tobias De Marco
    December 30, 2015 4:11 am

    We are developing in-house applications. There are mainly four applications using three databases, which are located on the same server. The biggest database contains about 800 tables and is about 600 GB in size. The databases are partly integrated and access each other through views in a few cases.

    How many database developers you have?
    -> There are 25 full stack developers.

    How many person-hours did it take to get CI working?
    -> The initial setup took about 80 hours. But I’d guess it took at least two additional months in total to find solutions for all the challenges that came up during production use.

    What tools do you use?
    -> After evaluating a lot of tools (e.g. Redgate and SQL Server Data Tools) we decided to use Liquibase (open source) because it offers a change-set-based approach, which gives you full control over the changes and statements that are executed.

    What benefits have you achieved by using CI?
    -> A controlled, documented, reproducible, and traceable process for applying database changes, and (in combination with source control) an easy way to create a database matching any application version
    -> Easy database setup
    -> Early identification of errors, which makes fixing and releasing much easier and more predictable

    Is the application deployed in only one database, or across many nearly-identical copies of the same database?
    -> We have a few different instances: a central development (integration) database, at least three different QA databases, local developer databases, and demo databases.
    We have the luxury of restoring a copy of our production databases daily and running the current development scripts against them to ensure that everything (especially data changes) still works with the current live environment.
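    A nightly job like the one described might be sketched roughly as follows (hypothetical server, path, and database names; the sqlcmd restore and the Liquibase CLI invocation are assumptions about the setup):

```shell
#!/bin/sh
# Hypothetical sketch of the nightly job described above: restore a copy of
# production, then run the current Liquibase changelog against it. Server,
# backup path, and database names are made up. DRY_RUN=1 only prints commands.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# Overwrite the QA copy with last night's production backup...
run sqlcmd -S qa-sql01 -Q "RESTORE DATABASE AppDb FROM DISK = N'/backups/AppDb.bak' WITH REPLACE"

# ...then verify the in-flight change sets still apply cleanly to real data.
run liquibase --url="jdbc:sqlserver://qa-sql01;databaseName=AppDb" \
    --changeLogFile=changelog.xml update
```

    The payoff of this pattern is that a migration that works on an empty dev database but breaks on real production data fails here, not on release night.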

  • I’m an accounts manager at DBmaestro where I’ve helped dozens of organizations of all sizes adopt end to end continuous integration for the database.

    It generally takes a few hours to get CI working.

    In the continuous integration process, code is verified using automated build functionality on each check-in. Frequent integration combined with ongoing verification and rapid error detection means less back-tracking to figure out where things went wrong, shortening the time between integrations and offering substantial cost-savings in development.

    Continuous integration is achievable for your SQL or Oracle database, bringing the database up to speed with native/application code (C#, C++, Java, etc.) integration by allowing you to leverage the same proven best practices for the database.

    We are developing and supporting a set of SaaS solutions (ASP.NET MVC with a SQL Server 2014 backend). All of the applications together mostly use 4 OLTP databases and 2 OLAP cubes. OLTP data (1 db): 1.4 TB; OLAP (total): 5+ TB.
    OLTP data is completely mirrored in DEV and QA environments.

    1) We have 12 full-stack developers doing database schema changes daily and 2 dedicated database developers.
    2) It took us around 30 hours to do the initial setup, another 10 to customize everything, and we spend around 2 hours weekly maintaining and improving things.
    3) We use Atlassian Bitbucket Server (Git) as our SCM system; RedGate SQL Source Control for committing the code; Atlassian Fisheye/Crucible for code reviews; Atlassian JIRA for workflow and issue tracking; Atlassian Bamboo as a build and deployment server; and RedGate SQL CI with the Bamboo plugin to run database builds/deployments.
    It’s worth mentioning that we are using a shared DEV database and we’re NOT using Git feature branches for database code (we do for web).
    Instead, we:
    1) Use custom RedGate commit scripts to push to Git immediately after commit.
    2) Enforce referencing an active JIRA issue key via a hook in Stash.
    3) In the custom RedGate commit script, automatically pull and overwrite the local repo if the push fails (to keep the local repo in sync with origin).

    When an issue is transitioned through the workflow to the testing stage, JIRA triggers a Bamboo build/deployment and passes the issue key as a parameter. Using a custom PowerShell script we call the Bitbucket Server REST API and make sure that the RedGate CI sync step updates only those objects in the QA database that were changed as part of the issue being passed through the workflow.

    Bamboo also builds the schema on every commit to ensure that a new change does not move the database schema into an inconsistent state (a deleted column still returned by one of the SProcs, for example). If the change is breaking, Bamboo updates the JIRA issue (custom PowerShell script) and notifies the commit author in HipChat.

    4) We basically have CD to the QA environment. It improved the process of testing dramatically. We use the same mechanism to build a release package; however, we deploy it manually according to the release schedule. There are 2 problems in the whole process that we’re working on (either by adding rules and procedures or by adding customizations to the technical part):
    – Currently RedGate SC supports Git, but does not support immediate push on commit and auto-rollback on push failure, which forces us to use a custom commit script. It means there are no migration scripts tied to commits. We are writing a custom solution to work around it.
    – Code changes have to be non-breaking, to make sure that when something fails QA, the code that was updated in the QA database and left there will not break anything else. We enforce this during the code review step.

    • Hi Phil,

      I don’t want to take up tonnes of this thread since I’ve already had my say, but I think I might be able to help with some of the issues you are having.

      I’m nervous to give out email addresses publicly but if you’d like to have a conversation you can reach me on twitter and we can take it from there: @_AlexYates_. Alternatively I’m pretty easy to find on LinkedIn if you search by name and company.

      Regards,
      Alex Yates – Pre-sales engineer at Redgate

    • Corey J. Gilson
      April 19, 2016 12:09 pm

      Have you developed any tricks for making large table changes? Obviously, if you rename a column and then run a change script from SQL Compare, it’d take the table (and your SaaS service/your company!) offline.

      I’m wondering about things like using parallel release methodologies like feature flags that control how the application interacts with different versions of the DB, “canary” releases to make sure a major performance flaw wasn’t missed in testing, etc.

      In terms of code/release management, how are these scripts – the ones that implement the special handling for keeping the data available during release – handled differently than quicker changes, like a stored procedure or view definition (not that those can’t be problematic, as well)?

      • Disclaimer: I work at Redgate.

        If you use SQL Source Control with SQL Compare, then you can use Migration Scripts to handle cases that involve data or that you need direct control over. These Migration Scripts will be source controlled alongside your schema, reference data, and other code and will be used during the CI/CD processes so you have complete control over what runs in the deployment.
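        For the rename-a-column case raised above, the idea is that the Migration Script does an in-place sp_rename (which preserves the data and keeps the table online) instead of letting a compare tool drop and recreate the column. A hedged sketch, with hypothetical server and object names:

```shell
#!/bin/sh
# Hypothetical migration-script step: rename a column in place so data is
# preserved. Server, database, and column names are made up; DRY_RUN=1
# (the default) prints the command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# sp_rename on a column is a metadata-only change: no data moves and the
# table stays available, unlike a generated drop-and-recreate script.
run sqlcmd -S prod-sql01 -d AppDb \
    -Q "EXEC sp_rename 'dbo.Customers.Surname', 'LastName', 'COLUMN';"
```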

        Please contact Redgate if you’d like any more help.

  • Charles Randall
    March 26, 2016 3:01 am

    I was doing full-bore continuous integration of SQL Server databases in 2007 using Redgate tools and Pulse. This was a SaaS company with about a half-dozen web application products and developer teams ranging in size from 5-20 (all of whom had database change authorization). There were both multi-tenant single-server configurations and single-tenant servers; the deployment differences are fairly trivial once the whole process is automated.

  • I found this conversation b/c I’m trying to implement SQL CI in my company, building internal applications. I’m really, really depressed about how complicated all of this is. Many people on this thread mention all the custom scripts they have to use. I talked to a RedGate support guy and he was showing me how “easy” their solution is. Holy crap, it was like launching a rocket just to get some changes into the DB. I’m surprised someone hasn’t yet cracked this nut simply. It’s crazy.

    • Yeah, database CI is totally different from the code side. Code you just replace; it’s simple. Databases hold stuff that has to remain – and still work. I’ve seen several approaches, and they all require a lot of hassle. But I believe it is still better than not doing CI.
