Cloud Platform Engineering, DevOps, SRE, Kubernetes, KOPS

Containerisation done right: reduce infrastructure TCO and product time to market. Part 1

In the modern age of microservices, cloud-based IT infrastructure, and continuous integration and deployment, containerising your infrastructure is of particular importance. Done correctly from the outset, it will save precious DevOps resources. Here is the key to making it work for you: consistency and automation. With a consistent, automated method of defining resources, monitoring, alerting, building and testing, adding a new microservice or test environment becomes an order of magnitude easier than in the non-containerised world, where it is necessary to spin up virtual machines and do lots of interaction with cloud providers over their APIs – something that is both complex and time consuming. Let me take you on a journey and explain the steps along the way. It’s a journey from a single service to a complete IT infrastructure up and running.

Disclaimer: this is my opinionated approach to building scalable, resilient and manageable IT infrastructure. To keep the post concise I had to simplify some concepts a little; I hope the simplified version still conveys the thinking behind what I am describing.

To start…

The whole process of containerisation requires certain development/DevOps resources – simply put, it’s not free! The more microservices you are running or planning to run, the more sense it makes to go ahead with the change. If you are not into microservices yet, it’s never too late to start. Let’s start with some whys.

Why microservices?

If you decide to switch to a microservices architecture you should clearly understand why. Here are a few reasons for using a microservice-based architecture:

  • Building blocks for larger systems
  • Lower cognitive friction over monolithic systems
  • Well-defined services that can be developed by smaller teams
  • Lower time to market (TTM)
  • Services can be individually scaled (resource-wise), so you benefit more from shared infrastructure
  • In essence, it makes sense to define a complex system as a set of simple, well-defined components – microservices

Why containerisation?

Because it beautifully abstracts a service away from the additional libraries it needs to run. Let’s say you have two applications, one written in Python and one in Scala. Each requires a different set of libraries/binaries to be available at runtime. If you add more programming languages into the mix it easily becomes a mess to run all of them on development machines, QA, prod, etc. Then there is the problem of version incompatibility, and so on and so forth. Let’s say, however, that you use Docker as your container engine. Once you package and distribute those applications as Docker images, the image effectively becomes a unified interface for running all sorts of things. Life becomes simpler: docker run and there you go (almost). To summarise the ‘why’ of containerisation:

  • Unified way of abstracting applications/services into executable units
  • Containers can be run almost anywhere, including development boxes
  • Easy to reason about and bring in additional 3rd party services
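As a sketch of that unified interface, packaging a hypothetical Python service might look something like the Dockerfile below. The application file name, dependency file and port are assumptions for illustration, not a prescribed layout:

```dockerfile
# Minimal sketch of containerising a hypothetical Python service.
# app.py, requirements.txt and port 8080 are illustrative assumptions.
FROM python:3.11-slim

WORKDIR /app

# Copy and install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application source
COPY . .

EXPOSE 8080
CMD ["python", "app.py"]
```

With something like this in place, a `docker build` followed by a `docker run` is the same routine whether the service inside is Python, Scala or anything else – which is exactly the unified interface described above.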

Let’s set off on the journey

OK, with some whys answered let’s now go on a journey of containerising a software system – step by step.

The Service

Let’s start at the beginning. We have a service which we want to deploy so that it can serve its purpose. The service can be a Java, Scala, NodeJS, PHP or whatever-other-technology application. With a single service and a simple setup (dev and prod) you can manage build and deploy with little fuss, using some combination of Jenkins, Ansible, Puppet, CloudFormation, etc. The problem in this setup is the consistency of execution environments. There may be variations in libraries between your development environment, QA and production – and that’s not a good place to be. You can solve that problem with Puppet or Ansible, but that’s a bit of a headache – you will be burning precious DevOps time, though at least you can run the thing in some sort of consistent manner. It gets messier still when you have a few services to manage.

First container

We can make things a bit easier by packaging the service up into a Docker image, which for modern services is relatively straightforward – and developers are going to love it! Containerisation gives you a few benefits from the start:

  • An easier way of ensuring consistency: if you tag released containers with unique tags you will be able to run exactly the same version in dev, QA and prod. Avoid the latest tag like fire – it’s a slippery slope!
  • A consistent way of launching a service
  • Ease of running integration/system tests against a service in CI/CD systems – you will simply be able to launch the ‘thing’ on any host running the Docker service.
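One way to get those unique, immutable tags – sketched here with a hypothetical image name – is to derive the tag from the git commit the image was built from, so dev, QA and prod can all reference exactly the same artifact:

```shell
# Tag every build with the commit SHA (immutable), never rely on :latest.
# The image name "my-service" is hypothetical.
GIT_SHA=$(git rev-parse --short HEAD)
docker build -t my-service:"$GIT_SHA" .

# Any environment can now run the exact same image by its SHA tag
docker run -d -p 8080:8080 my-service:"$GIT_SHA"
```

Because the SHA tag is never reused, "what exactly is running in prod?" becomes a question with a precise answer.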

Basic infrastructure for containers

So, in order to leverage the ‘economy of scale’ of containers it makes sense to set up some basic infrastructure. You will need:

  • A Docker registry. There are plenty of options: AWS ECR, GCP Container Registry, Docker Hub, Quay, Nexus, GitLab Container Registry, and so on. These are relatively simple to set up but are a crucial backbone of container-dependent infrastructure.
  • CI/CD with tooling for building containers. There are a few approaches you can take, and they will differ based on the technology you are using. Whatever you choose, at the end of the day you need to be able to build a container from your CI/CD platform, so it is worth investing some time in it. The good thing is that packaging and publishing containers is agnostic to the programming languages and technologies you use to write your services. In a sense it will be much easier to add new services built with different programming languages, as releasing and storing deployable artifacts (Docker images) becomes unified.

OK, so at this stage you have your first containerised application, a Docker registry, and some tooling around building and releasing containers. Awesome 🙂
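Wiring the registry and CI/CD together boils down to a build-tag-push sequence, sketched below. The registry host and image name are placeholders – substitute ECR, GCR, Quay or whatever you chose above:

```shell
# Hypothetical registry host and image name for illustration.
REGISTRY=registry.example.com
GIT_SHA=$(git rev-parse --short HEAD)

# Build the image with its fully qualified registry name, then publish it
docker build -t "$REGISTRY/my-service:$GIT_SHA" .
docker push "$REGISTRY/my-service:$GIT_SHA"
```

These few lines are the language-agnostic core: whether the CI/CD job compiled Scala or bundled NodeJS beforehand, the release step looks identical.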

Adding in more services

Since you now have the container basics covered, it’s time to move on and add more services. Once the first service is complete it becomes relatively easy to add more. Ideally, you would build and release new containers in exactly the same manner as your first container. Consistency, remember that? Re-use tooling and components to save yourself time AND, very importantly, to encourage a wider audience, such as developers, to actively participate in those DevOps-flavoured activities. Ideally, once the first service is containerised, any software engineer within your group should be able – and encouraged – to containerise the next service. The tooling you created initially should make it super easy to turn a merged PR into a new Docker image pushed to the registry (provided it passes tests 😉 ). I’m a big fan of Jenkinsfiles and defining build pipelines as code. In this way Dockerised applications and pipelines-as-code are a great foundation for a well containerised infrastructure.
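To make that concrete, a minimal declarative Jenkinsfile for the test-build-push flow could look like the sketch below. The stage contents, build commands and registry address are assumptions, not a prescribed setup:

```groovy
// Minimal declarative Jenkinsfile sketch for a containerised service.
// The registry host, image name and 'make test' step are illustrative.
pipeline {
    agent any
    stages {
        stage('Test') {
            steps { sh 'make test' }
        }
        stage('Build image') {
            steps {
                // GIT_COMMIT is provided by Jenkins' git integration
                sh 'docker build -t registry.example.com/my-service:${GIT_COMMIT} .'
            }
        }
        stage('Push image') {
            // Only publish images from the main branch
            when { branch 'main' }
            steps {
                sh 'docker push registry.example.com/my-service:${GIT_COMMIT}'
            }
        }
    }
}
```

Because the Jenkinsfile lives next to the service's code, containerising the next service can start as a copy of this file – which is exactly the re-use of tooling argued for above.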

To be continued …