Kubernetes, or K8s, is an open-source container orchestration platform that was originally developed by Google. It is designed to automate the operation of Linux containers – whether they run on local systems, in data centers, or in the cloud.

Since its initial release in 2014, Kubernetes has seen increasingly wide adoption in enterprises. Its great strength, and the reason for its popularity, is its ability to manage large numbers of containers. In Kubernetes, orchestration involves automatically monitoring, rolling out, fixing, copying, and migrating containers.

Despite its popularity, even experts struggle to understand Kubernetes completely. While it has a clear hierarchy, it also consists of many objects and individual functions that interact with each other, which makes it a complex and multi-layered technology. In this article, we will give a simplified overview of the architecture and the main K8s objects. Using Kubernetes always starts with creating a cluster.

What is a Kubernetes cluster?

Using Kubernetes generally means running one or more clusters. Typically, a cluster consists of a control plane, which controls all objects in the cluster, and one or more worker nodes that run the containerized applications. To make the Kubernetes environment redundant, a cluster is therefore often set up with three control plane nodes and three worker nodes.

For orchestrating the worker nodes, the control plane in a cluster includes the following components:

  • API server: The API server acts as the front end of the Kubernetes control plane. Through it, you can query and define the desired state of the cluster, i.e. which applications K8s should run and how.
  • Scheduler: The scheduler places pods on nodes based on their defined resource requirements and the free capacity of the nodes.
  • Controller manager: The controller manager continuously compares the actual state of the cluster with the given specifications and reconciles any differences.
  • etcd: etcd is a consistent and highly available key-value store that holds all critical cluster data, such as the Kubernetes configuration, state, and metadata.
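
On many clusters, you can see these components running as pods in the kube-system namespace. A quick check (assuming a kubeadm-style setup; managed clusters such as GKE or EKS usually hide the control plane from you):

    kubectl get pods -n kube-system
    # Typical entries (names vary by cluster):
    # kube-apiserver-master-1
    # kube-controller-manager-master-1
    # kube-scheduler-master-1
    # etcd-master-1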

The worker nodes run two components, the kubelet and kube-proxy, which are responsible for running the containers and for the communication between the nodes.

A cluster thus consists of different nodes. But what do the nodes do exactly?

What is a Kubernetes node?

Nodes are the central building blocks of a cluster. A distinction is made between control plane nodes and worker nodes. While the control plane node (formerly called the master node) is responsible for administering the cluster, worker nodes provide the resources and services required to run the workloads.

Practice tip: The role of a node can be recognized by the labels "control-plane" or "master" in the output of kubectl get nodes.
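
For illustration (node names, ages, and versions will of course differ in your cluster):

    kubectl get nodes
    # NAME       STATUS   ROLES           AGE   VERSION
    # master-1   Ready    control-plane   30d   v1.26.3
    # worker-1   Ready    <none>          30d   v1.26.3

Note that worker nodes often show <none> as their role unless they have been labeled explicitly.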

By hosting the pods, worker nodes enable the execution of the actual workloads. In doing so, the nodes act as a pool of CPU and RAM resources on which Kubernetes subsequently runs the containerized workloads. 

While you usually still have to manage the underlying physical or virtual machines yourself, Kubernetes takes the decision off your hands as to which node an application ends up running on and whether that node has enough free resources at all.

In practice, if you deploy an application in a cluster, Kubernetes distributes the work among the nodes. In doing so, it can seamlessly move workloads between the nodes of the cluster.

You can use clusters to separate the applications and resources of different teams in Kubernetes. In traditional IT infrastructures, you would often implement this using VMs, separated networks, or environments. In Kubernetes, however, there is a proven and efficient tool for dividing clusters into isolated areas: namespaces.

What are namespaces in Kubernetes?

Namespaces allow you to divide Kubernetes clusters into isolated, logical areas. Each cluster supports any number of namespaces, but the namespaces themselves cannot be nested. 

They can be used, for example, to organize the resources of the Kubernetes infrastructure. Thus, a namespace can be thought of as a kind of virtual sub-cluster.
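
Creating a namespace is a one-liner, or a minimal manifest if you prefer the declarative route (the name "team-a" is just an example):

    # Imperative:
    kubectl create namespace team-a

    # Declarative, as a manifest:
    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-a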

Almost every Kubernetes resource lives either in the default namespace or in a manually created namespace. Nodes and persistent volumes are exceptions, however: both are so-called low-level resources that exist outside of namespaces and are therefore always visible to every namespace in the cluster.

Practice tip: All objects that are not bound to a namespace can be listed with kubectl api-resources --namespaced=false

Why is it advisable to use namespaces?

As already mentioned, namespaces allow different teams to work in their own virtual cluster, for example in the context of projects, without affecting the environments of the other teams. At the same time, namespaces can be used to share the resources of a cluster between several teams and users via resource quotas.
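
A minimal sketch of such a quota, assuming a namespace called team-a and purely illustrative limits:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        requests.cpu: "4"       # total CPU all pods may request
        requests.memory: 8Gi    # total memory all pods may request
        limits.cpu: "8"
        limits.memory: 16Gi
        pods: "20"              # maximum number of pods in the namespace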

Namespaces can also be used to limit user and workload access to specific namespaces. They provide a simple way to separate application development, testing and operations – and thus map the entire lifecycle on the same cluster.

Does communication take place between namespaces?

Namespaces are separated from each other, but they can easily communicate with each other. Kubernetes' DNS service can locate any service by name, using an extended form of DNS addressing (svc.cluster.local). Adding the namespace name to the service name allows access to services in any namespace in the cluster.

Practice tip: A query for service.namespace is identical to service.namespace.svc.cluster.local

Optionally, network policies can also be used to control access between namespaces, for instance to allow or deny all traffic from other namespaces.
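
A minimal sketch of such a policy, assuming a namespace called team-a: it denies all ingress traffic from other namespaces by only allowing traffic from pods in the same namespace. Note that network policies only take effect if the cluster's network plugin supports them.

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: deny-from-other-namespaces
      namespace: team-a
    spec:
      podSelector: {}          # applies to all pods in team-a
      ingress:
        - from:
            - podSelector: {}  # allows traffic only from pods in the same namespace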

So far, you've learned what clusters and nodes are and how you can use namespaces to divide your clusters into logical areas. We've also mentioned that worker nodes host pods, without explaining what pods actually are. We will catch up on that in the next sections.

What is a Kubernetes pod?

A pod is an environment that is deployed on a node and runs one or more containers. While it is possible to run multiple containers in a pod, that is usually the exception.

Pods can be thought of as standalone computers that run a single task. If you change a pod's configuration, Kubernetes automatically implements the change by creating new pods or removing existing ones.

Should a node fail, Kubernetes automatically deploys the affected pods to another node. If Kubernetes detects the failure of a pod via a health check, it also restarts it. Pods are rarely created directly in Kubernetes itself. Usually, they are created via higher-level mechanisms such as deployments, DaemonSets, and jobs. If pods depend on a system state, they are often created via StatefulSets.
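
For illustration, a bare pod manifest (image and names are examples; as mentioned, you would rarely create a pod this way in practice):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # example image and tag
          ports:
            - containerPort: 80

A pod created directly from such a manifest would show an empty "Controlled By" field, as described in the following practice tip.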

Practice tip: The "Controlled By" field in the output of kubectl describe pod [podname] can be used to identify how a pod was created. If the field is empty, the pod was started without a higher-level mechanism.

What are deployments in Kubernetes?

Deployments are one of the first mechanisms for creating workloads in Kubernetes. In the description of a deployment, you specify which images the application should use, how many pods it requires, and how Kubernetes should update them. This simplifies the steps involved in performing updates and allows you to automate and repeat the process.

Specifically, deployments allow you to provision pods and update them, roll back to previous deployment versions, and scale, pause, or resume deployments. Once you create a deployment, the Kubernetes control plane takes over the scheduling of the application instances it describes.
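
A minimal deployment sketch (name, image, and replica count are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                 # desired number of identical pods
      selector:
        matchLabels:
          app: web
      template:                   # pod template from which the deployment creates pods
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25
              ports:
                - containerPort: 80

Changing the image tag in this file and re-applying it is all it takes to trigger a rolling update.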

Deployments are primarily suited for stateless applications. This is for historical reasons, as the original focus of Kubernetes was on these very stateless applications. Over time, however, it has become clear that Kubernetes must also support stateful applications. As a result, so-called StatefulSets were created.

What are StatefulSets?

Kubernetes uses StatefulSets to control the execution of stateful applications that require persistent identities and hostnames. In practice, this means that a StatefulSet manages the provisioning and scaling of the pods it contains, while also guaranteeing the ordering and uniqueness of those pods.

For example, with a StatefulSet for a MySQL database scaled to three replicas in a Kubernetes cluster, you can specify that a front-end application accesses all three pods for read operations but addresses only the first pod for write operations, with the data subsequently being synchronized to the other two pods.

StatefulSets assign an immutable identity to each pod, starting at 0 (0 -> n). A new pod is created by cloning the data from the previous pod. Deleting or terminating takes place in reverse order (n -> 0). So if you reduce the number of replicas from four to three, Kubernetes terminates or deletes the pod with number 3 first.

StatefulSets are particularly suited for applications that require at least one of the following:

  • stable, unique network identifiers
  • stable, persistent storage
  • ordered, graceful provisioning and scaling
  • ordered, automated rolling updates

A created StatefulSet ensures that the desired number of pods are running and available. It automatically replaces pods that fail or are removed from their node, based on the specifications configured in the set.

For reasons of data safety, deleting a StatefulSet does not automatically delete the data on the associated volumes.
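
A condensed sketch of a StatefulSet along the lines of the MySQL example above (image, storage size, and names are assumptions; a real MySQL setup needs considerably more configuration):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
    spec:
      serviceName: mysql           # headless service providing the stable network identities
      replicas: 3                  # creates pods mysql-0, mysql-1, mysql-2
      selector:
        matchLabels:
          app: mysql
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
            - name: mysql
              image: mysql:8.0
              ports:
                - containerPort: 3306
              volumeMounts:
                - name: data
                  mountPath: /var/lib/mysql
      volumeClaimTemplates:        # one persistent volume per pod; survives pod deletion
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi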

In addition to deployments and StatefulSets, Kubernetes provides another mechanism for launching pods in the form of DaemonSets.

What are DaemonSets?

With DaemonSets, you launch a specific pod on each worker node. This makes them suitable for applications that you would like to run on every individual worker node, for example a logging daemon that collects logs from all containers and sends them to a central log server.
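
A sketch of such a logging DaemonSet (the fluentd image and mount paths are illustrative):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: log-collector
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          name: log-collector
      template:
        metadata:
          labels:
            name: log-collector
        spec:
          containers:
            - name: fluentd
              image: fluentd:v1.16        # example logging agent
              volumeMounts:
                - name: varlog
                  mountPath: /var/log
          volumes:
            - name: varlog
              hostPath:
                path: /var/log            # read node-level logs from the host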

We'll look at how the various objects and functions in Kubernetes communicate with each other in the next step. A distinction is made between internal and external communication. For the communication of the objects within a cluster, Kubernetes uses so-called services.

What are services in Kubernetes?

Services in Kubernetes are a logical abstraction for a group of pods in a cluster that perform the same function. A service assigns a name and a unique IP address (clusterIP) to this pod group so that it is reachable by other pods. It also sets access policies and handles automatic load balancing of all incoming requests.

Because a service handles discovery and routing between dependent pods, such as front-end and back-end components, the application is not affected if a pod dies and is replaced.

To connect pods into a group, services use labels and selectors.
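
A minimal service sketch that groups pods by the label app: web, such as those from the deployment example above (names and ports are examples):

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web          # groups all pods carrying this label
      ports:
        - port: 80        # port under which the service is reachable in the cluster
          targetPort: 80  # port the containers listen on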

In addition to internal communication, however, applications must be able to communicate externally. This is done via Ingress.

What is a Kubernetes Ingress?

Kubernetes Ingress is an API object that allows external users to access services in a Kubernetes cluster via HTTP/HTTPS. Ingress also allows you to create rules for routing traffic without having to create load balancers or expose every service on a node.
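
A minimal ingress sketch that routes HTTP traffic for one hostname to the service above (the hostname is an example, and an ingress controller such as ingress-nginx must be installed in the cluster for this to take effect):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
    spec:
      rules:
        - host: web.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web        # the service from the previous section
                    port:
                      number: 80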

You've now learned about the main Kubernetes objects. What you're missing now is the knowledge of how to roll out an application on Kubernetes in the first place.

How do you roll out applications in Kubernetes?

To roll out an application in Kubernetes, you must create and apply a manifest. A manifest is a JSON or YAML file in which you specify which objects – deployments, services, pods, etc. – you want to create, how they should run in the cluster, which container image to use, and how many replicas Kubernetes should create. You then roll the manifest out to the cluster via the command line (kubectl) or via a CI/CD pipeline.
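
Assuming the objects are described in a file called manifest.yaml, the rollout itself is a single command:

    kubectl apply -f manifest.yaml    # create or update the objects in the cluster
    kubectl get deployments,services  # verify what was created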

Because of its ease of use, most Kubernetes newcomers write their manifests in YAML. However, a manifest has the disadvantage of being very static and not very flexible. For this reason, Kubernetes users usually switch to Helm charts at some point. Helm is a package manager for Kubernetes and particularly facilitates the deployment of highly repeatable applications. A Helm chart is a package of one or more manifests reworked into templates with variables. The variables can be conveniently specified via another YAML file that is passed to Helm. Unfortunately, this usually results in the templates themselves becoming almost unreadable.
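
A tiny sketch of how such a template and its values file relate (chart, file, and value names are examples):

    # templates/deployment.yaml (excerpt)
    spec:
      replicas: {{ .Values.replicaCount }}

    # values.yaml
    replicaCount: 3

    # Install the chart together with the values:
    helm install my-release ./my-chart -f values.yaml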

Another method for rolling out applications is the so-called Kubernetes operator. This is a controller that automates the process further by encoding knowledge about the lifecycle of an application. Kubernetes can manage and scale stateless applications, such as web applications or API services, out of the box without knowing how they work; operators bring the same level of automation to more complex, typically stateful applications. In this way, an operator reduces the manual tasks required to manage Kubernetes workloads. However, programming an operator requires the appropriate expertise.

FAQ

What is the difference between deployments and StatefulSets?

To deploy applications, one usually uses either deployments or StatefulSets. The difference between deployments and StatefulSets can be explained historically: In the early days, K8s was limited to stateless applications. Over time, however, it became apparent that stateful applications could not be mapped with deployments. As a result, StatefulSets were introduced.

What are stateful and stateless applications?

Stateful applications are applications that store and track data. Databases such as MySQL, Oracle, and PostgreSQL are examples of such applications. In contrast, stateless applications do not store data themselves, but process the data received with each request anew, for example by passing it to a stateful application.

How can I create a Kubernetes deployment?

Deployments can be created and managed in YAML or JSON using kubectl. The command-line tool communicates with the Kubernetes API to interact with the cluster. Once the pods of a deployment have been created, a Kubernetes deployment controller continuously monitors these instances. If a node running an instance fails, the controller replaces the instance with a new one on another node in the cluster. In this way, Kubernetes provides a self-healing process designed to avoid system failures.
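
The typical commands for this, assuming a deployment named web defined in deployment.yaml:

    kubectl apply -f deployment.yaml        # create or update the deployment
    kubectl rollout status deployment/web   # follow the rollout
    kubectl rollout history deployment/web  # list previous revisions
    kubectl rollout undo deployment/web     # roll back to the previous revision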

What is the Horizontal Pod Autoscaler?

As the name suggests, the Horizontal Pod Autoscaler (HPA) automatically scales the number of pods of a workload, for example a deployment. It uses the average CPU utilization of the assigned target, or custom metrics provided by the application, for autoscaling. In this way, the HPA ensures that additional pods are launched during performance spikes and the application remains responsive. This makes Kubernetes particularly effective during peak workloads and provides protection against system failures.

Important: You must specify the threshold at which the HPA should start new pods or delete existing ones when you create the HPA. For CPU-based scaling, the pods also need to define CPU resource requests.
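
A minimal HPA sketch targeting a deployment named web (thresholds and limits are illustrative):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 3
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 80   # scale out above 80% average CPU utilization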

How do I monitor Kubernetes?

There are several ways to monitor Kubernetes. The status of applications can easily be checked using health endpoints. If a problem occurs, Kubernetes offers an on-board tool in kubectl for querying container logs via the API for troubleshooting.
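
For example (the pod name is a placeholder):

    kubectl logs my-pod       # print the container logs of a pod
    kubectl logs -f my-pod    # follow the logs live
    kubectl get events --sort-by=.metadata.creationTimestamp   # recent cluster events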

Those who need deeper insights into resource consumption have to implement additional tools. The Kubernetes Dashboard, for example, offers a simple way to visualize metrics. Those who need a holistic picture of all aspects of the Kubernetes environment, along with automatic detection of problematic states and intelligent alerting, are well advised to use Kubernetes monitoring software such as Checkmk.

