An overview of the Kubernetes Container Runtime Interface (CRI)

July 29, 2019

A long time ago, in a GitHub repo far away, the Kubernetes development team…

Wait! Before we go in depth on the Kubernetes Container Runtime Interface, let me explain a few things first. Kubernetes includes a daemon called kubelet for managing pods. Kubernetes introduced pods, which specify the resources used by a group of application containers. Docker made these application containers popular just five years ago, and now they are even more popular thanks to the immense ecosystem surrounding Kubernetes. At run time, a pod’s containers are instances of container images, packaged and distributed through container image registries.

The following architecture diagram shows where kubelet and Docker fit in the overall design:

[Architecture diagram: kubelet and Docker in the overall Kubernetes design]

Arguably the most important and most prominent controller in Kubernetes, kubelet runs on each worker node of a Kubernetes cluster. Acting as the primary node agent, kubelet is the primary implementer of the pod and node application programming interfaces (APIs) that drive the container execution layer. Without these APIs, Kubernetes would mostly be a CRUD-oriented REST application framework backed by a key-value store.

Kubelet processes pod specs, which define the configuration of a pod and its application containers. A Kubernetes pod can host multiple application containers and storage volumes, and pods are the fundamental execution primitive of Kubernetes. Pods facilitate packaging a single application per container and decouple deployment-time concerns from build-time concerns. By default, kubelet runs the containers of a pod as isolated application containers, as opposed to processes or traditional operating-system packages. After kubelet gets the configuration of a pod through its pod spec, it ensures that the specified containers for the pod are up and running.
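To make this concrete, here is a minimal sketch of a pod spec built with the Kubernetes Go client types and rendered as the YAML manifest you would hand to the API server. The pod name, image, and command are placeholders for illustration:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A minimal pod spec: one container running a one-shot command.
	// The name, image, and command are illustrative placeholders.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "hello"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "hello", Image: "busybox", Command: []string{"echo", "Hello, CRI!"}},
			},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}

	// Render the object as YAML, the form pod specs usually take on disk.
	out, err := yaml.Marshal(&pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```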

To create a pod, kubelet needs a container runtime environment. For a long time, Kubernetes used Docker as its default container runtime. However, along the path to each release, it became clear that the Docker interface would continue to progress and change, occasionally breaking Kubernetes.

In addition, other container runtime environments came along, each vying to be the one Kubernetes uses. After trying to support multiple versions of kubelet for different container runtime environments, and trying to keep up with the Docker interface changes, it became clear that the specific needs of a Kubernetes container runtime environment had to be set in stone. Now any container runtime environment under kubelet must implement a specified interface, which allows for separation in the kubelet codebase and quells the need to support n different versions of kubelet. This situation begat the Kubernetes Container Runtime Interface (CRI). The current version of CRI is v1alpha2.
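Under the hood, CRI is a pair of gRPC services, RuntimeService and ImageService, that kubelet calls over a Unix socket. Here is a minimal sketch in Go that dials a CRI runtime and issues the simplest RuntimeService call, Version. The socket path assumes containerd's default location, and the k8s.io/cri-api import path is where the generated CRI bindings live; both would differ for other setups:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	// Dial the runtime's CRI socket over a Unix domain connection.
	// This path assumes containerd's default installation.
	conn, err := grpc.Dial(
		"/run/containerd/containerd.sock",
		grpc.WithInsecure(),
		grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", addr)
		}),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Version is the simplest RuntimeService call: it reports the runtime's
	// name, its version, and the CRI API version it speaks.
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```

Swapping in another runtime's socket path is all it takes to point this same client at a different CRI integration.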

To implement a CRI integration with Kubernetes for running containers, a container runtime environment must be compliant with the Open Container Initiative (OCI). OCI includes a set of specifications that container runtime engines must implement and a seed container runtime engine called runc. Most container runtime environments use runc, and it also serves as a compatibility reference for non-runc container runtime engines. The OCI Runtime Specification defines “the configuration, execution environment, and lifecycle of a container.” The OCI Image Format Specification defines “an OCI Image, consisting of a manifest, an image index (optional), a set of filesystem layers, and a configuration.” Additionally, a CRI container runtime environment should successfully run all CRI tools validation tests and Kubernetes end-to-end tests on Kubernetes test-infra.
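For a rough feel of what the OCI Runtime Specification covers, here is a sketch that uses the official runtime-spec Go types to emit a minimal config.json of the kind runc reads from a bundle directory. Real bundles carry much more (mounts, namespaces, capabilities); the fields shown are illustrative only:

```go
package main

import (
	"encoding/json"
	"fmt"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// A pared-down OCI runtime config. runc reads this as config.json
	// from a bundle directory alongside the rootfs.
	spec := specs.Spec{
		Version:  specs.Version, // the OCI Runtime Spec version these types target
		Hostname: "oci-demo",
		Root:     &specs.Root{Path: "rootfs", Readonly: true},
		Process: &specs.Process{
			Cwd:  "/",
			Args: []string{"/bin/sh"},
		},
	}

	out, err := json.MarshalIndent(&spec, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```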

For more information, check out this brief discussion around OCI image support in the ecosystem.

To go a bit deeper, look at an architecture diagram for a container runtime environment called containerd:

[Architecture diagram: containerd and its CRI plugin]

Since containerd v1.1, CRI support has been built into containerd as a plugin that is enabled by default but optional. The CRI plugin interacts with containerd through direct function calls against containerd’s client interface, and this plugin-based architecture has proved both stable and efficient. The plugin handles all CRI service requests from kubelet and manages the pod lifecycle through operating system services, CNI, and containerd client APIs that in turn use containerd services (more plugins). These containerd services pull container images, create runtime images for the containers (snapshots), and use container runtime environments, such as runc, to subsequently create, start, stop, and monitor the containers.
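To get a feel for that client interface, here is a minimal sketch using containerd's Go client to pull and unpack an image, which is roughly what the CRI plugin does when kubelet requests an image. The socket path and the namespace name are assumptions based on a default installation:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to containerd; this socket path assumes a default install.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// containerd scopes all resources to namespaces; the CRI plugin uses
	// "k8s.io", but any namespace works for experimentation.
	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull the image and unpack it into a snapshot, the step that precedes
	// creating, starting, and monitoring containers with a runtime like runc.
	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s", image.Name())
}
```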

Other CRI integrations include CRI-O and dockershim, which is currently built into kubelet. Dockershim uses Docker, which in turn uses containerd.

Figuring out which CRI integration you should use is beyond the scope of this blog post, and there are many opinions on the topic. Phil Estes of IBM recently presented “Let’s Try Every CRI Runtime Available for Kubernetes” at KubeCon Barcelona, which gives some perspective.

In follow-up blog posts, I will compare and contrast some of the more popular CRI integrations and the runtime environments they can be configured to use, both virtual-machine-based and runc-based. I will also dig into the CRI APIs themselves and a few commonly used CRI and OCI tools.

For now, I leave you with a few links to reference for configuring CRI integrations and for configuring pod specs to select the runtimes that these CRI integrations use.
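On that last point, here is a hedged sketch of how a pod spec can select a runtime through runtimeClassName, again using the Kubernetes Go types. The name "gvisor" is a hypothetical placeholder; it must match a RuntimeClass object in your cluster whose handler the CRI integration maps to a concrete runtime:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// runtimeClassName names a RuntimeClass object; the CRI integration maps
	// that class's handler to a concrete runtime. "gvisor" is hypothetical.
	handler := "gvisor"
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "sandboxed"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &handler,
			Containers: []corev1.Container{
				{Name: "app", Image: "nginx"},
			},
		},
	}

	out, err := yaml.Marshal(&pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```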

In closing, I’d like to give a special shout-out to all the maintainers of kubelet, CRI runtimes, and OCI; there are seriously too many to list.

May the CRI be with you.

