Fully Automated Deployment – UAT
22 Nov 2018 | Tony Fauss
Every once in a while, some tech comes along that really makes you feel like you’re living in the future. GPS was one of those things. It’s not quite the Enterprise computer announcing your route and ETA in a friendly voice, but it’s close enough to feel like magic if you’re old enough to remember the Dark Times before it became consumer-accessible. Automation, as cool as it is, doesn’t usually feel like that. It mostly feels like getting a tedious chore off your plate so you can find some other tedious chore to automate.
Fully automated deployment really does, though. When it’s in place and running smoothly, you just push your code back to your repo and poof, it appears in your test environment. If you’ve been struggling with the various stages of coding, from committing to building to packaging to scheduling downtime to deployment, the first successful pass of a fully automated pipeline feels like watching Armstrong walk on the moon. There are, of course, concrete and valid practical reasons for full automation – it never would have gotten this far if it were just a cool toy. Some of the more common reasons for taking deployment automation all the way:
- Mitigates human error. Sure, it’s still possible to deploy code that doesn’t work quite right, or works but causes some unanticipated consequences elsewhere. What the automation mitigates is human error in the deployment process – it drops the chances of copying the wrong file to the wrong folder, or mistyping a site/server option on which the application relies, or missing a step and thereby not starting the web server back up after the deploy is finished.
- Speeds up deployments. Since we’re talking about test environments here, this means it’s faster and easier to make changes and test them in a live environment. Developers won’t have to go through a painful manual process to see if changing one line of code was really all it took to fix the problem.
- Ease of rollback. If it turns out that what you’ve deployed just doesn’t work, you can re-deploy the last known-good release and you’re right back in the same state you were in before the failure.
- Tests your tests for you along the way. Automated testing has to be validated like anything else, and if your pipeline is handling that, you can prove out your test cases and results at speed.
Full automation also delivers a significant reduction in downtime and some reduction in cost (even if that cost is just developer time), but those benefits aren’t generally as mission-critical in test environments as they are in production rigs.
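As a flavor of what the rollback benefit looks like in practice, here’s what re-deploying a known-good release might look like with a deployment engine like Octopus. Everything here – project name, version, server URL, and API key – is a placeholder, and exact flag names can vary between Octopus CLI versions:

```shell
# Hypothetical rollback via the Octopus CLI ("octo"); all values below are
# placeholders. Re-deploying the last release that worked restores the
# environment to its pre-failure state.
octo deploy-release \
  --project "MyWebApp" \
  --version "1.4.2" \
  --deployTo "UAT" \
  --server "https://octopus.example.com" \
  --apiKey "API-XXXXXXXX"
```

The point is less the specific tool and more that rollback becomes a one-liner instead of a scramble.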
So. We’ve got some code repository, a build server, and a couple of web servers. How do we get from there to deploy-on-demand? There are some moving parts involved, and the configuration is demanding if not difficult. It’s intentionally difficult to misconfigure any of these pieces in a really destructive fashion, but very, very easy to misconfigure any one of them in a way that just doesn’t work. You’ve already got some of those pieces in play if you’re doing manual deployments; we just need to add a few more.
- A code repo. Git’s extremely popular for a reason, whether self-hosted or served through GitHub or Bitbucket. Azure DevOps carries its own repo with the obvious ease-of-integration benefits for build/release management. The specifics aren’t terribly important, as long as it’s compatible with the next moving part.
- A CI instance. This is where the real magic is; it’s the bit that triggers a build when the repo notifies it of a push. There are lots of these available, each with its upsides and downsides. The nuts’n’bolts details are a bit out of scope here, but one thing to look for that doesn’t necessarily show up in comparisons and reviews is compatibility. Azure DevOps is obviously compatible with its own repo, for example. If you’re using a standard repo, most CI engines should be fine talking to it. If you’re using something a bit more esoteric, you’ll have to make sure that your CI engine can speak that language. You can lose a lot of long nights scratching your head and wondering why this doesn’t work if it’s a basic (if poorly documented) incompatibility between the systems.
- A deployment engine. Again, there are many available and we’re not going into a point-by-point comparison. It has to be compatible with the CI engine in the same way that the CI engine has to be able to read the repo, and it’s best to plan that out long before anyone gets around to installing or subscribing to something. Octopus Deploy, for example, integrates pretty seamlessly with Azure DevOps on the release side and Azure app services/VMs on the deploy side, so it’s an obvious choice for a Microsoft pipeline.
- Deployment targets – obviously, you have to have somewhere for the code to run. You doubtless already have web servers and database servers running (in some capacity, either hardware or VMs or PaaS), but it might provide a little peace of mind to set your new pipeline up in parallel with your existing one. When construction stops and the smoke clears, you’ll feel good about ditching your previous pipeline, but until then you’ll need targets for the parallel pipeline.
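To make the CI engine’s role concrete, here’s a minimal sketch in Python of the decision it makes when a repo webhook fires. The event shape loosely mirrors common Git-host push payloads, but the field names here are illustrative, not any specific vendor’s API:

```python
# Sketch of the "build on push" decision inside a CI engine. The event dict
# loosely imitates a Git-host webhook payload; field names are illustrative.

WATCHED_BRANCHES = {"develop", "release"}

def should_trigger_build(event: dict) -> bool:
    """Return True if this push event should kick off a build."""
    if event.get("type") != "push":
        return False  # ignore tags, PR comments, and other event types
    branch = event.get("ref", "").removeprefix("refs/heads/")
    return branch in WATCHED_BRANCHES

# A push to develop triggers a build; a push to a feature branch doesn't.
print(should_trigger_build({"type": "push", "ref": "refs/heads/develop"}))    # True
print(should_trigger_build({"type": "push", "ref": "refs/heads/feature/x"}))  # False
```

Real CI engines layer authentication, queuing, and build agents on top, but the core trigger really is this simple: a notification comes in, a filter decides, a build starts.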
If we’re also looking to wipe out downtime, there are some extra considerations (multiple deployment targets, traffic management, etc) but that’s rarely strictly necessary for full auto deployment into a test environment. As long as you’ve got all those pieces up there, you can get your automation working fine.
The actual nuts’n’bolts configuration of the CI engine and deployment engine is a big topic, enough to warrant several articles of its own. The particulars vary from product to product, but they’ll all have some verbiage for “build on commit” – that’s almost self-defining for Continuous Integration. Azure DevOps has its process very well documented and easy to read at https://docs.microsoft.com/en-us/azure/devops/pipelines/get-started-designer?view=vsts&tabs=new-nav and similar CI products all have similar documents (although the quality and ease of use are all over the map).
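For a taste of what “build on commit” looks like in practice, here’s a minimal Azure Pipelines YAML sketch. The branch name, solution file, and build steps are placeholders for illustration, not a drop-in config:

```yaml
# azure-pipelines.yml – minimal build-on-commit sketch; names are placeholders
trigger:
  branches:
    include:
      - develop            # any push to develop kicks off a build

pool:
  vmImage: 'windows-latest'

steps:
  - script: dotnet build MyApp.sln --configuration Release
    displayName: 'Build the solution'

  - task: PublishBuildArtifacts@1   # hand the build output to the release side
    inputs:
      pathToPublish: '$(Build.ArtifactStagingDirectory)'
      artifactName: 'drop'
```

The `trigger` block is the whole trick: with it in place, a plain `git push` is all it takes to set the rest of the pipeline in motion.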
Configuration of the deployment engine is just as picky and in much the same state. Octopus’s extension into Azure DevOps makes this easier than most, but there remain a lot of buttons to push and fields to fill out to get it in tune with AzDO. Documentation is available at https://octopus.com/docs/api-and-integration/tfs-azure-devops/using-octopus-extension and it’s reasonably easy to follow. Most deployment engines are pretty well documented and don’t assume excessive background of their audience.
We’re omitting a good bit, things that aren’t strictly necessary for the success of automating builds. It’s a great place to wire in automated testing, and a good place to add things like log collection/aggregation and monitoring agents that you want to be sure are working cleanly before you promote them into production. All of those things warrant their own topics, and none of them are particularly difficult to integrate after the fact. All we’re trying to do here is get the basics automated from commit to audience-accessible. Once you have a solid handle on the process, you’ll start seeing all kinds of places you can plug into it and make your life easier.
Fully automating your test environment deployments saves time, hassle, and money (if only indirectly). It does have a few moving parts and the configuration can be a bit tricky, but it’s well-documented and we’re always here to help if it feels like a bit too big of a bite to swallow all at once. Hopefully, we’ve provided enough that you can start comparing products to find things that have the right feel for your specific environment; a quick Google search on “continuous integration” and “continuous deployment” will give you a world of information on the nuts and bolts.