Container App? If you’re running a .NET/Azure/Azure DevOps shop and this hasn’t happened to you yet, it’s coming:
“We want to move $app into a container and run it on the Container App Microsoft’s so excited about.”
I was lucky. I got that request as a test case, to see how quick’n’easy it was to do, maybe take some notes on the tough bits to help ease other people through. I was also asked to involve Octopus Deploy, which I admit is not a perfect fit for this use case, but I can see how it’s life-saving in multi-tenant rigs.
What follows is what I learned from pulling that off. Some of it’ll be painfully obvious to Docker wizards, and some of it will probably make Octopus wizards scratch their heads and wonder why I made some of the decisions I did.
I’m certain there’s someone out there that did this in an afternoon and can’t imagine why it took me so long. This isn’t really for them; this is for the folks that have been running Azure app/web services and git repos without a care in the world until this request hit their desks.
Our App Service Process
Before you stand any chance of successfully migrating a deployment process, you have to understand your starting point and your destination. Everything else is engineering work to get from that first thing to that second thing. Broadly speaking, then, this was our build/release process:
- Build the application.
- Use ARM to deploy/update relevant infrastructure (in our case, a web app and a database).
- Deploy the application you built in step one to the infrastructure you built in step two.
- Fetch enough information from the database server to build a connection string for the application.
- Adjust your config files with the connection string you just cat’d together.
Pretty typical. There are lots of variations on the theme but that’s fairly common in the broad strokes. You have some latitude in order of operations – nothing stops you from pushing infrastructure before the app build, or from pulling/injecting connection string values before you deploy. What you can’t do is cleanly translate it for a Container App.
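For orientation, that classic flow fits into a single Azure DevOps stage on a Windows agent, something like the sketch below. These are the stock AzDo tasks, but every service connection, path, and resource name here is a placeholder rather than anything from our actual pipeline.

```yaml
# Rough sketch of the original App Service flow, all in one stage on a Windows agent.
# Service connection, subscription, resource group, template path, and app name are placeholders.
pool:
  vmImage: 'windows-latest'

steps:
  # 1. Build the application
  - task: DotNetCoreCLI@2
    inputs:
      command: 'publish'
      projects: '**/*.csproj'
      arguments: '--output $(Build.ArtifactStagingDirectory)/app'
      zipAfterPublish: false

  # 2. Deploy/update the infrastructure (web app + database) via ARM
  - task: AzureResourceManagerTemplateDeployment@3
    inputs:
      azureResourceManagerConnection: 'my-service-connection'
      subscriptionId: '$(SubscriptionId)'
      resourceGroupName: 'my-resource-group'
      location: 'eastus'
      csmFile: 'infra/webapp-and-db.json'

  # 3. Deploy the app (steps 4-5, fetching and injecting the connection string, are elided here)
  - task: AzureWebApp@1
    inputs:
      azureSubscription: 'my-service-connection'
      appName: 'my-web-app'
      package: '$(Build.ArtifactStagingDirectory)/app'
```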
Container App Eccentricities
The first thing you’ll note is that there’s no space or concept for an empty Container App. You can’t deploy it blank and then retrofit a container into it. The portal wizard simply doesn’t allow it, and the ARM engine errors if the container image is omitted. That by itself means we have to change our deployment process – we’ll have to build the container first, which means building the application first, and then containerizing it, and then pushing to whatever registry we’ll ultimately peek into from the Container App.
Linux Container App on Windows Server
If you’re used to building your applications on Windows, you’re about to hit a second hurdle. Azure Container Apps will only run containers based on a Linux/x64 image. ARM checks for this and fails deployment (with an error message that’s clear, simple, accurate, and suggestive of how to resolve it) if it doesn’t like what that check turns up.
Building Linux containers on a Windows server is a big, big topic. It’s out of scope here, but a simple search for “LCOW” (Linux Containers on Windows) will produce a lot of information. A lot.
There are adherents to a lot of philosophies, strategies, workarounds, etc. For our purposes here it doesn’t matter. You can’t get a hosted Windows agent to build a Linux container (or at least, you can’t do it out of the box or with minimal effort – I confess to not trying to push WSL Docker support into the hosted agent during the build), and you have to have a Linux agent if you’re going to run in a Container App.
The Windows Workaround
The obvious answer is “sure, then build on a Linux agent and this problem goes away,” and that’s almost true, but not helpful if your build process has Windows dependencies. Not all apps or builds will, but there are plenty of places that have always developed on Windows, for Windows, and “just stop doing that” isn’t realistic. There had to be some way to build on Windows and use that build in Linux, all else being equal. Happily, there is – it lacks a certain elegance, but it does work. The process looks like this:
- Build the application on a Windows agent.
- Package it up with your favorite package manager (I used NuGet, but that was an arbitrary decision, driven by trying to stay self-contained in AzDo, where the Artifacts feed was just lying around waiting for me) and place it somewhere accessible to your pipeline.
- Set up a second AzDo stage for containerizing, and make sure that stage uses a Linux image. You can set the “pool:” YAML separately per-stage or per-job in an AzDo pipeline (see the sketch below), and it’s not even difficult.
- Download the packages from step two and extract them. (I am aware that it is technically possible to teach Docker to build a container straight from a NuGet package. There’s no consensus on the right way to do it, and the approaches are all pretty involved, so I skipped it.)
I had hoped, when I set out, that AzDo would persist storage between agents and spare me the package/upload/download/extract process but it doesn’t. The files you build on your Windows agent disappear when the agent does and your second (Linux) agent will never know they were there at all. It’s unsurprising, if disappointing, but the workaround isn’t disastrously difficult either.
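Here’s roughly what that two-stage split looks like in pipeline YAML. In my pipeline the hand-off was a NuGet package in an Azure Artifacts feed; to keep the sketch short I’ve used the built-in publish/download pipeline-artifact steps instead, which solve the same problem. Pool images, paths, and names are placeholders.

```yaml
# Two-stage sketch: build on Windows, containerize on Linux.
# The per-stage pool and the explicit publish/download are the important parts,
# because nothing on the first agent's disk survives into the second stage.
stages:
  - stage: Build
    pool:
      vmImage: 'windows-latest'
    jobs:
      - job: BuildApp
        steps:
          - task: DotNetCoreCLI@2
            inputs:
              command: 'publish'
              projects: '**/*.csproj'
              arguments: '--output $(Build.ArtifactStagingDirectory)/app'
              zipAfterPublish: false
          # Stand-in for the NuGet package / Artifacts-feed push
          - publish: '$(Build.ArtifactStagingDirectory)/app'
            artifact: 'app'

  - stage: Containerize
    dependsOn: Build
    pool:
      vmImage: 'ubuntu-latest'   # Linux agent, since the image has to be Linux/x64
    jobs:
      - job: BuildImage
        steps:
          # Pull the build output back down onto the Linux agent
          - download: current
            artifact: 'app'
          # docker build / docker push go here (see the later sketch)
```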
The Next Step Towards Your Container App
So now we’ve got almost all the files we need for our container and we’re close to being able to docker build/docker push it so a Container App can use it later. We’re not quite there yet, though. We still have those connection strings we have to include, but since we started with the app build, we don’t have a database yet or the values we’ll need to connect to it. There are a few ways of going about this:
- Move along like nothing’s changed and write the connection strings into the files in the container after it’s born. I rejected this out of hand – it’s contrary to the design philosophy of containers and while you can make it work, it’s not worth the time or blood to try to force containers to work like you want instead of how they want. Containers should be more or less self-reliant after they’re born.
- Adjust your connection strings in your config files before you start the container build. This was what I did and it works fine – the container goes up with the connection strings already in place and there’s never a need to log into the container or trick it into some complicated awk/sed startup script. It’s not optimal, as it turns out.
- Use Secrets in the Container App to set your connection string as an environment variable, and then teach your application to build the variable value into the config files. This is almost certainly the Microsoft-intended way of approaching this. I just didn’t find it until after I had this working.
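I never went back and rebuilt the pipeline around that third option, so treat the following as a sketch of the idea rather than a recipe I’ve run: the connection string becomes a Container App secret, the container gets an environment variable that references it, and a .NET app picks up a variable named ConnectionStrings__Something through the normal configuration providers. The az containerapp commands and flags are as I understand them from the docs, and every name and value is a placeholder.

```yaml
# Sketch only: store the connection string as a Container App secret and surface it
# as an environment variable, instead of baking it into config files at build time.
# App name, resource group, service connection, and variable names are placeholders.
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Create/update the secret on the Container App
      az containerapp secret set \
        --name my-container-app \
        --resource-group my-resource-group \
        --secrets "sql-conn=$(SqlConnectionString)"
      # Point an environment variable at that secret; .NET reads
      # ConnectionStrings__Default as ConnectionStrings:Default
      az containerapp update \
        --name my-container-app \
        --resource-group my-resource-group \
        --set-env-vars "ConnectionStrings__Default=secretref:sql-conn"
```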
Final State: Container App + Octopus Deploy
With all of the above in mind, here’s what my final state looks like:
- Build the application.
- Package it into Azure Artifacts.
- Hand off to Octopus Deploy to build the database via ARM template and update it via Invoke-Sqlcmd.
- Swap to a second stage on a Linux agent.
- Download/extract the packages from step 2 into some folder at or below the Dockerfile’s directory (that’s your Docker build context, and it’s super-important if you intend to COPY that folder in the Dockerfile – see the sketch after this list).
- Fetch the database server, database name, user name, and password to construct a connection string.
- Inject that connection string into the relevant config files.
- Build the container, using COPY in the Dockerfile to get the built app files and updated config files into the container.
- Push the container into a registry.
- Hand control back to Octopus to build the Container App, using the container we just built/pushed as containerImage in the ARM.
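The Linux-stage steps (5 through 9 in the list above) came out looking something like this. The sed line is a crude stand-in for whatever config transform you prefer, and the registry service connection, repository, paths, and the SqlConnectionString variable are all placeholders:

```yaml
# Containerize stage, Linux agent. Assumes the Build stage published an 'app' artifact
# and the repo has a Dockerfile that COPYs the app/ folder from the build context.
steps:
  # 5. Pull the packaged build output down where the Docker build context can see it
  - download: current
    artifact: 'app'      # lands in $(Pipeline.Workspace)/app

  # 6-7. Inject the connection string (however you fetched it) into the shipped config
  - script: |
      sed -i 's|#{SqlConnectionString}|$(SqlConnectionString)|g' $(Pipeline.Workspace)/app/appsettings.json
    displayName: 'Inject connection string'

  # 8-9. Build the Linux image and push it to the registry the Container App will pull from
  - task: Docker@2
    inputs:
      command: 'buildAndPush'
      containerRegistry: 'my-acr-service-connection'
      repository: 'my-app'
      dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
      buildContext: '$(Pipeline.Workspace)'   # must contain the folder the Dockerfile COPYs
      tags: '$(Build.BuildId)'
```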
I’m making this sound easier than it was. There are lots of idiosyncrasies – Azure doesn’t offer an Export Template for Container Apps yet, so you end up fishing through Resource Explorer to divine what fields an ARM template needs to deploy one; Container Apps depend on Managed App Environments, which in turn rely on a Log Analytics workspace, and woe be unto you if your template doesn’t account for this; and there are lots of little things in AzDo/Octo syntax to work through, things like that.
What it probably should have looked like instead:
- Build the application.
- Package it up into Azure Artifacts.
- Hand off to Octopus for database deployment/update.
- Hand back to AzDo on a Linux agent.
- Download the packages from Step Two.
- Build/push the container
- Hand back to Octopus to fetch database variables.
- Deploy the Container App using the container we pushed in step 6 as the containerImage and the variables we grabbed in step 7 as secrets in the ARM template.
That’d be cleaner and faster when the smoke cleared, at no extra cost in development time. I was already going to be debugging ARM and the secrets section of Container App ARM is surprisingly straightforward.
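I never built that alternate flow, and in my world this last step belongs to Octopus anyway, but if you kept it in AzDo the handoff could look roughly like the sketch below: pass the image you just pushed and the connection string into the Container App ARM template as parameters, and let the template map them into the container image property and the secrets section. The parameter names and paths are placeholders, not anything Microsoft publishes.

```yaml
# Sketch: deploy the Container App ARM template, feeding in the freshly pushed
# image and the connection string. The template itself would wire sqlConnectionString
# into configuration.secrets and expose it to the container via a secretRef env entry.
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    azureResourceManagerConnection: 'my-service-connection'
    subscriptionId: '$(SubscriptionId)'
    resourceGroupName: 'my-resource-group'
    location: 'eastus'
    csmFile: 'infra/containerapp.json'
    overrideParameters: >-
      -containerImage "myregistry.azurecr.io/my-app:$(Build.BuildId)"
      -sqlConnectionString "$(SqlConnectionString)"
```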
Some Final Thoughts
You may be wondering “why Octopus?” I did. If you’re running a single-tenant shop and running various apps through various pipelines to various destinations but all inside your walls, Octopus probably isn’t a great fit for this. AzDo is perfectly capable of deploying an ARM template and running SQL scripts. Octopus shines if you have multiple tenants. Sure, you can (eventually) teach Azure DevOps to deploy a given thing to a given tenant using a given pipeline, but I’ve done it and it’s frustrating almost to the point of madness to set it up.
I’ve never tried maintaining it after the fact, but I can’t imagine it’s any more pleasant. Octopus has a handy interface that makes it less prone to syntax hangups, and it comes out of the box with a strong concept of tenants, which AzDo wasn’t really designed around (at least, I don’t think it was, and the absence of a Tenants tab argues that way).
Is it worth it, overall?
That’s a harder question to answer. Like almost everything in commercial software, it really depends on your use case. Containers shine for self-contained apps that need a degree of portability – once a developer has a working container on a dev machine, you can feel pretty safe it’ll run just like that in a Container App or anything else that spins containers for a living (Kubernetes, which I’m pretty sure is the under-the-hood tech on a Container App anyway, Azure Container Instances, AWS EKS or ECS, take your pick).
If your code is heavily dependent on Windows, it’s not a great candidate for containers overall, and Windows-based images are specifically forbidden in Azure Container Apps. There are solutions that will run Windows containers, but it’s seductively easy to just keep adding one more thing until the container is 2GB and takes forever to launch.
Best Case for a Container App
If you’ve got a fairly lightweight app that runs well on Linux, Container Apps have a number of features that argue for containerizing it to run there. I’m already wildly over my word count and so won’t go into them here, but feature documentation is available here. They’re pretty young and the wealth of community documentation isn’t quite there yet – it’s easy to find a problem and end up convinced you’re the first person that ever saw it – but they’re gaining popularity and support will grow along with them.
And that’s about it. That’s the overview. I didn’t get into specifics (this would have been a hundred times longer if I’d gone into every issue I encountered), but you’ll probably find that most of your headaches aren’t specific to the Container App. If you’re accustomed to troubleshooting ARM, there’s a good deal of that. If Docker is new to you, it’s a big world with its own idiosyncrasies. It’s an interesting project, at the least, and hopefully this gives you some ground to stand on when a manager or developer raises the call.