To understand what a software container is, one should first know why it's useful. As Solomon Hykes, co-founder of Docker, explained in 2013, the concept comes from shipping containers: boxes with a standard shape, size, and locking mechanism used to ship goods around the world. Any shipping container can be moved around by the same cranes, ships, trains, and trucks because these only interact with the box itself, regardless of its contents. This separation of concerns allows for automation, which leads to higher reliability and lowers costs.
Container technology, often referred to simply as containers, is a method of packaging software into standardized units (containers) for development, shipment, and deployment.
Docker is a set of products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels.
A virtual machine (VM) is a virtual computer system running on a host system. Multiple isolated VMs can run in parallel on one host. The physical hardware resources of the host system are allocated to the VMs by a so-called hypervisor.
Docker is not an alternative to virtual machines but a complementary technology serving different purposes. VMs enable host management via APIs and provide infrastructure elasticity. Docker, on the other hand, allows software to be assembled from small, Lego-like blocks, which enables modern architectural approaches: immutable infrastructure, microservices, and distributed software.
| Virtual Machines (VMs) | Containers |
|---|---|
| Represents hardware-level virtualization | Represents operating-system-level virtualization |
| Heavyweight | Lightweight |
| Slow provisioning | Real-time provisioning and scalability |
| Limited performance | Native performance |
| Fully isolated and hence more secure | Process-level isolation and hence less secure |
Docker is designed to run applications, and all containers running under Docker share the host OS kernel. Virtual machines, in contrast, are not based on containers but consist of the user space plus the kernel space of an operating system. With VMs, the server hardware is virtualized: each VM has its own operating system and applications and shares the hardware resources of the host.
Both VMs and Docker have advantages and disadvantages. In a VM environment, each workload requires a complete OS; in a container environment, multiple workloads share a single OS. The larger the OS footprint, the more worthwhile container environments become. In addition, containers offer other benefits such as reduced IT management effort, smaller snapshots, faster application launch, fewer and simpler security updates, and less code to transfer, migrate, and load when moving workloads.
The importance of isolation is obvious: it helps us manage resources and security as efficiently as possible, and it simplifies monitoring of the system.
There are different types of virtualization:
Docker takes advantage of several features of the Linux kernel to deliver its functionality.
Since version 0.9, Docker has included its own component (called libcontainer) to use virtualization facilities provided by the Linux kernel directly, in addition to the abstracted virtualization interfaces libvirt, LXC, and systemd-nspawn. libcontainer gives containers a consistent and predictable way to work with Linux namespaces, control groups, capabilities, AppArmor security profiles, network interfaces, and firewall rules.
runc is a command-line wrapper around the pre-existing libcontainer library. It is one implementation of the OCI runtime specification, and its scope is deliberately limited by the OCI charter: no networking, no image handling or resolution, and no storage support.
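To make this concrete: the runtime specification essentially describes a bundle, i.e. a root filesystem plus a `config.json` describing the process to run. The sketch below uses the Go types published by the runtime-spec project (`github.com/opencontainers/runtime-spec/specs-go`) to generate a minimal `config.json`; the hostname, paths, and command are placeholder values, and a real bundle would also need an unpacked root filesystem that runc could then start with `runc run`.

```go
// Sketch: building a minimal OCI runtime configuration (config.json) with the
// Go types from the runtime-spec project. Values are illustrative only.
package main

import (
	"encoding/json"
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	spec := specs.Spec{
		Version:  specs.Version, // OCI runtime-spec version string
		Hostname: "demo",
		Root: &specs.Root{
			Path:     "rootfs", // directory containing the container's root filesystem
			Readonly: true,
		},
		Process: &specs.Process{
			Terminal: true,
			Cwd:      "/",
			Args:     []string{"/bin/sh"},
			Env:      []string{"PATH=/usr/bin:/bin"},
		},
	}

	// Write config.json into the bundle directory; a runtime such as runc
	// could then start a container from this bundle.
	f, err := os.Create("config.json")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := json.NewEncoder(f).Encode(&spec); err != nil {
		panic(err)
	}
}
```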
containerd is the container runtime that Docker Engine uses to create and manage containers. Under the hood, containerd uses runc to do all the Linux work. It abstracts away calls to system- or OS-specific functionality so that containers can also run on Windows, Solaris, and other operating systems. The scope of containerd includes the following:
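In practice, containerd is driven programmatically through its Go client. The following sketch, closely modeled on the client's getting-started example, pulls an image and starts a task; the socket path, namespace, and image reference are illustrative and assume a locally running containerd (1.x) daemon.

```go
// Sketch: pulling an image and running a container via the containerd Go client.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the local containerd daemon.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd groups resources into namespaces ("example" is arbitrary here).
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull and unpack an image.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container with a fresh snapshot and an OCI spec derived from the image.
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// A task is the running instance of the container.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	statusC, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	<-statusC // block until the container's process exits
}
```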
Docker makes use of kernel namespaces to provide the isolated workspace called the container. When you run a container, Docker creates a set of namespaces for that container. These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace and its access is limited to that namespace.
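The same mechanism can be sketched outside Docker: in Go, a child process can be started in fresh namespaces by setting clone flags, which is roughly what libcontainer does when it builds a container's workspace. This is Linux-only and typically needs root privileges.

```go
// Sketch: starting a shell in new UTS, PID, and mount namespaces.
// The child gets its own hostname, its own PID 1, and its own view of mounts.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```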
Docker also makes use of kernel control groups (cgroups) for resource allocation and isolation. A cgroup limits an application to a specific set of resources. Control groups allow Docker Engine to share the available hardware resources among containers and optionally enforce limits and constraints.
Docker Engine uses the following cgroups:
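Whatever the exact set of controllers, the kernel-side mechanism is the same: each cgroup is a directory under /sys/fs/cgroup, and writing to its control files sets the limits. The following sketch imposes a 64 MiB memory limit using cgroup v2, similar in spirit to what happens when you pass a memory limit to Docker; the cgroup name is made up for illustration, and it requires root on a system with the unified cgroup hierarchy.

```go
// Sketch: enforcing a memory limit with cgroup v2 by writing control files.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

func main() {
	// Create a new cgroup and cap its memory at 64 MiB.
	cg := "/sys/fs/cgroup/demo" // hypothetical cgroup name, for illustration
	if err := os.MkdirAll(cg, 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("67108864"), 0o644); err != nil {
		panic(err)
	}

	// Move the current process into the cgroup; it (and its children)
	// are now subject to the limit.
	pid := strconv.Itoa(os.Getpid())
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), []byte(pid), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("process", pid, "limited to 64 MiB of memory")
}
```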
Following the release of Docker, a large community emerged around the idea of using containers as the standard unit of software delivery. As companies increasingly used containers to package and deploy their software, Docker's container runtime did not meet all the technical and business needs that engineering teams could have. In response, the community began developing new runtimes with different implementations and capabilities. At the same time, new tools for building container images were designed to improve on Docker's speed or ease of use. To make sure that all container runtimes could run images produced by any build tool, the community started the Open Container Initiative (OCI) to define industry standards around container image formats and runtimes.
Docker's original image format has become the OCI Image Specification, and various open-source build tools support it, including:
(End of slides)