Understanding Containers | What is Software Containerisation?

According to a 2020 “State of the Cloud” report by Flexera, containers are now considered mainstream. Containers offer a better way to create, package, and deploy software across different environments in an easy-to-manage way.

In this blog, we’ll look at what containers are, their history, how they are different from other kinds of virtualisation technologies, and some of the main types of containerisation.

What is a Container?

A container packages an application into a standardised unit for development, distribution, and deployment.

Shipping containers are a useful analogy: the same standardised container can travel on a container ship, on a train, or on an articulated lorry. Software containers take the same standardised approach.

A software container can run anywhere: on a desktop, in a data centre, or in the cloud, on-premises or off. It contains an application and all the dependencies the application needs to run.

Containers are small, fast, and portable. They do not need to include an operating system because they leverage the operating system of a host. A container with an application in it will be smaller than a virtual machine with the same application in it.

Containers are distributed as a binary container image, which includes metadata about the container: typically its name, resource requirements, a description of what it does, its creation date, and its version.

A container ‘engine’ sits on top of an operating system to provide resources (network, storage, compute) to the containers.

Software Containerisation

In this diagram, the containers are shown in green and their dependencies in yellow. A container requires a “container engine” (shown in light blue) to provide it with the resources it needs. The container engine coordinates between the containers and the host operating system to provide compute, storage, and network services to the containers. Some vendors call the container engine a ‘container runtime’.

Containers provide a consistent environment for the application they contain. They let the developer create a predictable environment that is isolated from other applications, and they include the software dependencies the application needs, such as a specific version of a programming language runtime or other software libraries. From the developer's perspective, all of this is guaranteed to be consistent no matter where the container is deployed.

Once deployed from the container image, in most cases, the application is ready to go with minimal configuration required. This minimises configuration errors as well as deployment time.

Container images are not built for specific environments, so the same image can be used in environments of any size. For example, the same image can serve a small test environment and a larger production environment.
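As a hedged sketch of that idea, a Docker Compose file might point two differently sized environments at the same image tag (the service names and the image tag `myapp:1.4` are illustrative, not from a real project):

```yaml
# Illustrative Compose file: one image, two environments.
services:
  myapp-test:
    image: myapp:1.4          # hypothetical image tag
    environment:
      - APP_ENV=test
  myapp-prod:
    image: myapp:1.4          # the very same image, unchanged
    environment:
      - APP_ENV=production
    deploy:
      replicas: 4             # only the scale differs, not the image
```

Only the runtime configuration and the scale change between the two services; the image itself is identical.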

The history of containers

The history of containers can be traced back to 1979 with the release of Change Root (or “chroot”) in Unix v7. This allowed the root directory of a process to be changed to a new location, and marked the start of process isolation.

The first time the term container was used with this technology was in 2004 with “Solaris Containers”.

Google launched the “Process Containers” project in 2006; it was designed to limit, account for, and isolate resources for a collection of processes. It was later renamed “Control Groups” (“cgroups”) and subsequently merged into the Linux kernel.

2008 saw the first reasonably complete container implementation with LXC (Linux Containers). It used “cgroups” and Linux namespaces, and worked on a standard Linux kernel without the need for patching.

Docker was released in 2013 and container popularity exploded as it was the first solution that included an ecosystem for container management.

Kubernetes was released a year later, in 2014, adding orchestration and further management features that extended Docker's functionality.

Containers compared to Virtual Machines (VMs)

Containers and VMs are similar in their goals: to isolate an application and its dependencies into a self-contained unit that can run anywhere.

Additionally, containers and VMs remove the need for physical hardware, enabling more efficient use of computing resources, both in terms of energy consumption and cost-effectiveness.

The main difference between containers and VMs is in their architectural approach.

If you have five VMs running on a physical host with an application in each, you also have five operating systems running.

With five containers you have one operating system and five independent application environments.

Containers and Microservices

Moving to a container architecture usually goes hand in hand with a shift to a microservices architecture.

This is part of the application modernisation journey; the primary goal of a microservices architecture is to break an application down into its constituent services and decouple those services as much as possible. A microservices architecture brings several benefits:

  • Parts that need scaling can be scaled independently of the whole.
  • Faults are simpler to isolate and quicker to resolve.
  • Applications are easier to build and maintain.
  • Services are reusable.
  • It works well with DevOps, particularly continuous delivery.
  • Components can be spread across locations for resilience.
  • Vendor and technology lock-in is reduced.

The diagram above shows the traditional application architecture on the left and microservices-based application architecture on the right.

There are challenges with microservices, though. Most applications need to be recoded to take advantage of them, communication between services is more complex, and the move represents a mindset shift for the developers and operations teams that support the application.

However, this is the direction the industry has been heading, with more software vendors and online service providers going down the microservices route.


What is Docker?

Docker describes itself as an “open platform for developing, shipping, and running applications which enables you to separate your applications from your infrastructure”.

Docker is an open-source project launched in 2013. It is built using native functionality in the Linux kernel (“cgroups” and, originally, LXC); it is also available for Windows Server 2016 and later.

Docker is the most popular container engine, with a market share of 65%, according to a 2020 survey of enterprise container users.

Software Docker

Docker can be used as a full-stack solution and not just a container engine. This includes management of images, containers, networking, and storage. However, it is more commonly managed with Kubernetes.

Docker uses a client-server architecture, with a standard API between the two.

There is a public library of Docker images (Docker Hub), with verified vendor releases of software, similar to an app store. Docker images work on any Linux distribution that has Docker installed; they are not tied to a specific distribution. Container versions and updates can be managed and tracked in Docker.

When creating containers, if there are common applications or libraries, these can be created as a base image and the unique parts added on top to create your containers. This makes the container creation process simpler. For example, if all your containers require an Apache web server, this can be created in a base image that is used as a starting point for the containers you are creating.
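As a minimal sketch of this pattern (the base image tag `mycompany/httpd-base` and the file paths are hypothetical), the shared base image could be defined with a Dockerfile like:

```dockerfile
# Shared base image: official Apache httpd plus common configuration
FROM httpd:2.4
COPY shared-httpd.conf /usr/local/apache2/conf/httpd.conf
```

Each application image then starts FROM that base and layers only its unique content on top:

```dockerfile
# Application image built on the shared base
FROM mycompany/httpd-base:latest
COPY ./site/ /usr/local/apache2/htdocs/
```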

Docker Alternatives

LinuX Containers (LXC)

LXC was released before Docker; in fact, Docker was originally built on top of LXC and extended its feature set. Whilst LXC has fewer features than Docker, it is more lightweight and is a well-established, if less popular, product.

Container Runtime Interface (CRI-O)

CRI-O is Red Hat’s container engine for OCI-compliant containers.

It is designed to work with Kubernetes and does not include any unnecessary code or features to keep it as lightweight as possible.

Kubernetes created and documented a standard for the container runtime interface (CRI). Red Hat built a container engine to work with this standard that is compatible with the Open Container Initiative (OCI) image format, calling it CRI-O.

Red Hat’s motivation for creating CRI-O is to reduce the footprint of the container engine, make it more secure, and to address some architectural concerns with Docker.

However, both of the standards involved are based on Docker: the CRI defined by Kubernetes and the OCI container image format both have their roots in Docker. This was deliberate, to make the transition between the two as simple as possible.

What is Kubernetes?

Kubernetes is a portable, extensible, open-source platform for managing containerised workloads and services that facilitates both declarative configuration and automation. It is a popular product, with around 58% of organisations using it, and has a large and growing ecosystem.

It was originally an in-house product used by Google, but it was open-sourced in 2014. Google had been using it internally for about a decade prior to that point, so it represents a key technology for them. It is now the flagship product of the Cloud Native Computing Foundation (CNCF) which is backed by key industry players such as Google, AWS, Microsoft, IBM, Intel, Cisco, and Red Hat.

Kubernetes is the de facto standard for enterprise container orchestration and automation.

Usually deployed as a cluster, Kubernetes goes hand in hand with Docker or CRI-O, all of which are most likely running on top of Linux. For Red Hat, that would be CoreOS, a version of Red Hat Enterprise Linux specifically designed to be container-centric. Kubernetes creates an abstraction layer on top of a group of hosts, providing services that IT teams and developers use to deploy their containers.

Kubernetes features:

  • Control resource consumption by application or user group/team.
  • Evenly distribute container load across the host infrastructure.
  • Load balance requests across instances of the same container.
  • Monitor resource consumption and take action to stop or limit containers.
  • Move a container instance from one host to another if there are not enough free resources, or a host fails.
  • Automatically rebalance a cluster when a new host is added.
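Some of these features map directly onto a Kubernetes Deployment manifest. The following is a minimal, hedged sketch (the name `myapp` and the image tag are illustrative), showing replica distribution and per-container resource control:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                  # hypothetical application name
spec:
  replicas: 3                  # Kubernetes spreads these across the hosts
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.4       # illustrative image tag
        resources:
          requests:            # used when scheduling the container to a host
            cpu: "250m"
            memory: "128Mi"
          limits:              # the container is restricted beyond these
            cpu: "500m"
            memory: "256Mi"
```

The scheduler uses the requests to place containers on hosts with enough free resources, while the limits let it stop or throttle containers that consume too much.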


What is OpenShift?

OpenShift is the Red Hat implementation of a container engine and Kubernetes, including the underlying OS, CoreOS.

Red Hat defines OpenShift as “…an open-source container application platform based on the Kubernetes container orchestrator for enterprise application development and deployment”.

Since version 4.0 (2019) it has defaulted to using CRI-O instead of Docker.

Think of OpenShift as the full software stack for an enterprise clustered container platform. It includes all the necessary components to develop, deploy, and operate Linux containers in an enterprise environment. The way Red Hat explains the difference between OpenShift and Kubernetes is by comparing it to Linux - Kubernetes is the kernel and OpenShift is the distribution.

vSphere Integrated Containers

This is a free, open-source add-on for vSphere.

VMware defines it as: “vSphere Integrated Containers enables VMware customers to deliver a production-ready container solution to their developers and DevOps teams. By leveraging their existing SDDC (Software Defined Data Centre), customers can run container-based applications alongside existing virtual machine based workloads in production without having to build out a separate, specialized container infrastructure stack”.


Containers and DevOps

Many of the tools used with containers are also used in DevOps.

Containers and DevOps, in general, are closely related, with containers playing an important part in the DevOps methodology.

Business Benefits:

  • Faster delivery of features.
  • More stable operating environments.
  • Improved communication and collaboration.
  • More time to innovate.

Key Points

  • Container deployment goes hand in hand with DevOps.
  • The same container image used in testing can be deployed to production.
  • Any dependencies are included by the container producer in a known good and verified configuration.
  • Many of the tools used for managing and deploying containers will be familiar to anyone who practices DevOps or has used a DevOps pipeline.
  • You can achieve higher consolidation ratios with containers than virtual machines because they are smaller and more lightweight.