Automated Database Changes

We’ve talked elsewhere about automating application code changes.  That tech is stable and easy to understand (in principle, if not always in the details).  App code changes on web servers or PaaS sites are intuitive: commit code, build code, package the build, deploy the package.  Even if you have to do it on a hundred servers, tools like Octopus Deploy are happy to do the grunt work for you.  Do it right and you can get very close to zero downtime, along with simpler, less painful deployments.  Application code updates are largely a solved problem, with mature, well-advertised systems to manage the process.

But a large part of why those changes are easy is that every site is independent of every other site.  If I have five web servers and need to update them with the latest and greatest, I can take any one of them down, update it, and put it back up before I pull the next one down.  If one of the web servers mysteriously vanishes or explodes in a digital mushroom cloud, it’s okay: the other servers never knew it was there and will keep happily serving up the application.  The nature of the environment gives us some flexibility in how we roll out changes.

Not so the database.  All of those web servers are talking to the same database, and any change there necessarily affects all of them at the same time.  A good, quality mistake here torpedoes the entire application and may not be recoverable without unacceptable levels of effort and pain.  The centralized nature of that subsystem makes it the riskiest part of any application update.  On top of that, once an organization reaches a certain size, it will inevitably have database administrators watching every change to their databases with keen eyes and sharp claws.  DBAs famously (and correctly) stand guard over their databases and database servers to make sure that every change is well understood, necessary, and as safe as it can be.  I’ve never met a DBA who was comfortable with developers, or even sysadmins, making database changes.

With all of that working against you, it’s easy to wonder why you should bother automating database changes.  Write them up, submit them to the database team, wait for approval, and let a DBA make the change when it’s time to perform the actual deployment.  It’s a fairly common deployment path and it’s worked for years; why change it?

For a few reasons:

  • Automating database changes means checking them into source control, with all the tracking, auditing, and history source control provides.  If you’re fighting a new bug, you can walk straight back to the last database change and see exactly what ran and what the result was.  You can go back through every change from the beginning to right now and see if any of them sheds light on why your app is storing usernames as animated .gifs (or whatever else it’s doing that you don’t like).  A sketch of what such a checked-in change might look like follows this list.
  • It also gives your database team an audit trail they might not otherwise have.  They can see exactly what changes ran, when, and in what order.  That gives them some insight into performance problems or one-off weirdness that might slip through testing.
  • Once your database changes are plugged into your deployment pipeline, they become eligible for automated testing in lower environments.  Deploy app, test app, deploy database change, test app again, and scream if something goes wrong at any of those steps (there’s a rough sketch of that sequence after this list).  That saves your database team a ton of work and spotlights problems they might not be looking for; they’re certainly checking that your SQL script doesn’t inflate execution times or beat up their disks, but they might not be testing your changes against the calls your site is actually making.
  • Once your process is in place and your tests are approved and signed off, you can drastically increase your deployment cadence.  When you’re confident that testing will catch anything fatal before it reaches production, you can release as often as you care to without the stress that usually goes along with database changes.
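
To make the first two points concrete, here’s a minimal sketch of what a single checked-in change might look like, assuming a SQL Server database and the SqlServer PowerShell module.  The file name, the dbo.Users column, and the dbo.SchemaChangeLog journal table are all illustrative assumptions, and most migration tools will handle this bookkeeping for you; the point is that the change is a plain script in source control that guards against re-running and records that it ran.

    # 0042_add-last-login-column.ps1 -- a hypothetical checked-in change: one script, one commit.
    # Assumes the SqlServer PowerShell module (for Invoke-Sqlcmd) and an existing
    # dbo.SchemaChangeLog journal table; both are illustrative choices, not requirements.
    param(
        [string]$ServerInstance = 'localhost',
        [string]$Database       = 'AppDb'
    )

    $sql = "
    IF NOT EXISTS (SELECT 1 FROM dbo.SchemaChangeLog
                   WHERE ScriptName = '0042_add-last-login-column')
    BEGIN
        -- The actual change: small, focused, easy to review.
        ALTER TABLE dbo.Users ADD LastLoginUtc DATETIME2 NULL;

        -- Leave an audit trail in the database to go with the one in source control.
        INSERT INTO dbo.SchemaChangeLog (ScriptName, AppliedUtc)
        VALUES ('0042_add-last-login-column', SYSUTCDATETIME());
    END
    "

    Invoke-Sqlcmd -ServerInstance $ServerInstance -Database $Database -Query $sql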

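And here’s a rough outline of the test-environment sequence from the third point, assuming a pipeline that can run a PowerShell step.  The Deploy-* and Invoke-SmokeTests functions are stand-ins for whatever your tooling actually runs; the shape is what matters: every step gets a chance to fail loudly before anything moves toward production.

    # A rough outline of the test-environment stage: deploy, test, deploy database change, test again.
    $ErrorActionPreference = 'Stop'    # any failure stops the stage -- the "scream" part

    # Placeholders: swap these bodies for whatever your pipeline actually runs.
    function Deploy-Application     { param($Environment) Write-Host "Deploying app to $Environment" }
    function Deploy-DatabaseChanges { param($Environment) Write-Host "Deploying database changes to $Environment" }
    function Invoke-SmokeTests      { param($Environment) Write-Host "Running smoke tests in $Environment" }

    Deploy-Application     -Environment 'Test'
    Invoke-SmokeTests      -Environment 'Test'
    Deploy-DatabaseChanges -Environment 'Test'
    Invoke-SmokeTests      -Environment 'Test'    # same tests again, now against the changed schema
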
With all that in mind, the next obvious question is how to get started.  We’ve talked a little bit about the hurdles and the reasons to jump, but we haven’t addressed the how yet.  Fortunately, the “how” is comparatively simple; the challenge is more one of process than of technology. At a glance:

  1. If you have one, loop your database team in from the very start.  There are datacenter walls lined with the heads of developers and operators who attempted end-runs around the data guardians.  You likely won’t meet with much resistance – this process is good for DBAs too – but they have to be involved from the word go.  They know the database structure and its strengths and weaknesses better than anyone else, and they’re in the best position to review changes before they’re implemented.
  2. Check all database work into source control.  Whether it’s a PowerShell script adding rows to a table or a SQL script adding an entirely new table, it has to go into source control.  Otherwise we lose our audit trail, and we lose our ability to package up the changes for later deployment.  “I just need to make this one change real quick to fix this bug; it’s low impact and I can do it faster by hand than by deploying it” is the bane of this system, and depressingly common in some cultures.  Try to remind the “just one change real quick” crowd that it isn’t about speed – it’s about coming back a year later and trying to remember what you did to fix this issue because something similar has popped up.
  3. Use your favorite deployment tools to actually make the changes on live servers, starting in your test environment.  Octopus has a number of Redgate steps defined and available – Redgate’s great for this.  There are lots of other options, at varying price points and levels of feature richness.  A deep product comparison is out of scope here, but it’s a widely discussed topic and all the information you could ever need is readily available.  Don’t deploy by hand; if your tools fail to deploy your database change, that’s a flag, and it should be investigated instead of worked around.  A bare-bones sketch of what such a deployment step does follows this list.
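
For step 3, the job underneath whichever tool you pick is simple enough to sketch: take the change scripts from source control and apply them, in order, to the target environment.  This is a bare-bones illustration with a made-up folder layout, not a real product’s behavior; tools like Redgate’s add the ordering, journaling, drift detection, and review support you’ll actually want.

    # deploy-database-changes.ps1 -- a bare-bones "deploy database changes" step (illustrative only).
    # Runs every checked-in change script, in name order, against the target database.
    # Each change script is expected to guard against re-running, like the earlier sketch.
    param(
        [string]$ChangeFolder   = '.\db\changes',
        [string]$ServerInstance = 'localhost',
        [string]$Database       = 'AppDb'
    )

    $ErrorActionPreference = 'Stop'    # a failed change should fail the deployment, not get worked around

    Get-ChildItem -Path $ChangeFolder -Filter '*.ps1' | Sort-Object Name | ForEach-Object {
        Write-Host "Applying $($_.Name) to $Database on $ServerInstance"
        & $_.FullName -ServerInstance $ServerInstance -Database $Database
    }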

That’s the broad overview.  At step 3, your deployment process should be able to treat database changes as just one more step in the deployment.  It may be a long step, if you’re doing a lot to the database, but once you have a good automated database change process you shouldn’t need to do a lot to the database all at once.  This process is much better suited to small, incremental changes performed frequently than it is to sweeping changes performed intermittently.
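
As a hypothetical illustration of “small and incremental”: instead of renaming a column in one sweeping script, you might ship it in stages, each one backward compatible, checked in on its own, and deployed on its own.  The object names below are made up.

    # Stage 1 (one deployment): add the new column alongside the old; the running app is unaffected.
    $stage1 = "
    IF COL_LENGTH('dbo.Users', 'EmailAddress') IS NULL
        ALTER TABLE dbo.Users ADD EmailAddress NVARCHAR(256) NULL;
    "

    # Stage 2 (a later deployment): backfill the new column from the old one.
    $stage2 = "
    UPDATE dbo.Users SET EmailAddress = Email WHERE EmailAddress IS NULL;
    "

    # Stage 3 (later still, once the application reads and writes EmailAddress): drop the old column.
    $stage3 = "
    IF COL_LENGTH('dbo.Users', 'Email') IS NOT NULL
        ALTER TABLE dbo.Users DROP COLUMN Email;
    "

    # Each stage is its own checked-in script and its own small deployment, applied through the
    # same pipeline as everything else, e.g.:
    #   Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'AppDb' -Query $stage1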

Summing Up

Automating your database change process ultimately makes life easier and safer for everyone involved.  Even if your manual process has worked fine for years, smart money says someone is paying for it in stress, late nights, or troubleshooting time – all things you can reduce or eliminate with automation.  The road can be a little bumpy and it can take some time for the “no manual changes” rule to really sink in, but it’s worth it when it does.

Getting started is pretty simple: get some good tools working, put some process changes in place to make those tools useful, work out the kinks in a test environment, and you’re underway.  Review it every so often to make sure the process is working for everyone concerned, and change it if you need to.  (I realize I’m underselling the possible complexity of “get some good tools working.”  I’m presuming here that application code is already deploying under automation, and if that’s true, extending the pipeline to handle databases isn’t tough.  If it’s not true, then this process necessarily suffers from the same setup complexity as any other CI/CD pipeline.)
