Kubernetes is the de-facto standard for container orchestration and currently one of the most hyped technologies around. Managers, IT consultants and new colleagues all want it. But why? What makes people adopt this technology in the first place? Is it just hype? If so, how long will it last? And what should you know now if you are already using Kubernetes or plan to use it in the future? If you are not familiar with Kubernetes at all, please read our Kubernetes primer [LINK to Part 0].

To help you understand where using Kubernetes makes sense and the benefits it brings, I'll share three examples where I've seen Kubernetes in production:

  • No more trouble with the IT platform – enabling standardized software roll-out
  • Auto-scaling – dealing cleverly with load peaks without exploding costs
  • Going multi-cloud at low management cost

Standardized software deployment for multinational automotive company

The multi-billion dollar company in question uses Kubernetes as the platform for the production software at all of its plants. The internal applications for managing the production lines are developed at the worldwide R&D center, packaged into containers, pushed to local container registries at each production facility, and then rolled out via Kubernetes into the plants.

Kubernetes thus enables the company to deploy new software easily. Because Kubernetes provides the development team with a unified platform across all plants, they no longer have to account for the idiosyncrasies of the different IT architectures at the various sites during development. In the past, rolling out new applications was always a problem, because the computing, network and storage infrastructure differed from plant to plant and the automation tooling was not standardized either. Even the smallest differences resulted in individual roll-outs and patches – and more work for the development team.
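A roll-out like this boils down to applying the same manifest at every plant. The following sketch shows what such a Deployment could look like; the registry hostname, image name and application name are hypothetical placeholders, not the company's actual setup:

```yaml
# Minimal Deployment sketch: the same manifest can be applied at every plant,
# with only the image reference pointing to the plant-local registry.
# Registry hostname and image name are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: line-control
spec:
  replicas: 2
  selector:
    matchLabels:
      app: line-control
  template:
    metadata:
      labels:
        app: line-control
    spec:
      containers:
      - name: line-control
        image: registry.plant-01.example.com/manufacturing/line-control:1.4.2
        ports:
        - containerPort: 8080
```

Because the manifest only describes the desired state, the underlying compute, network and storage differences between plants stay hidden behind the Kubernetes API.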

Because Kubernetes provides an abstraction layer not only for compute but also for networking and storage, the company gained a lot of speed and efficiency in rolling out software to its plants. On the other hand, one should not forget the considerable effort required to operate Kubernetes. The bottom line, however, is that the sheer size of the company makes the use of Kubernetes worthwhile.

Scaling your e-commerce web application (or horse race betting platform) to user demand

Whereas the first use case was new to me, this was the one I had been reading about for ages. The customer had been running a monolithic application, and every time a sale took place, the e-commerce web application ran into performance problems. As a result, the company lost potential revenue because the website stopped working properly.

To solve this problem, the company migrated to a microservices architecture on Kubernetes with the goal of benefiting from Kubernetes' native scaling tools. One of these is automatic node scaling: the cluster autoscaler adds worker nodes when the cluster does not have enough compute power or memory. Another is horizontal pod autoscaling, which creates new pods for the customer-facing front-end application once the average CPU utilization of its pods reaches 80%. This way, during peak times like Black Friday, the company can serve content to all users – and at least the IT department is no longer held responsible for lost sales.
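The 80% CPU threshold described above maps directly onto a HorizontalPodAutoscaler resource. Here is a sketch of what that could look like; the Deployment name "shop-frontend" and the replica limits are hypothetical:

```yaml
# HorizontalPodAutoscaler sketch matching the 80% average CPU target.
# Deployment name and min/max replica counts are hypothetical placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shop-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shop-frontend
  minReplicas: 3
  maxReplicas: 30
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

When the new pods no longer fit on the existing worker nodes, the cluster autoscaler kicks in and provisions additional nodes – the two mechanisms complement each other.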

The IT team also no longer has to worry about unexpectedly high traffic, such as when a sale takes place without their knowledge, because Kubernetes scales without human interaction. As long as traffic is within the normal range, which is most of the time, the platform can run on so-called reserved instances, for which the company has committed to its cloud provider for a term of one year or more. All peaks, on the other hand, are absorbed by on-demand instances, which is very cost-efficient.

I heard exactly the same story from a horse race betting platform: its issue was that most traffic came on weekends and almost none at night. In order to scale, the provider had to spin up another big machine, which took a long time to ramp up and down and created a lot of overhead.

Switching to a microservice architecture and Kubernetes ultimately resulted in big savings in IT infrastructure costs, even though the cost of the consultants the betting provider needed for Kubernetes increased tremendously.

The multi-cloud solution

Back then, when I was still working as a management consultant, cloud was still a new, hot topic. The main concern was often: How can I remain vendor-independent? Our answer was to pursue a multi-cloud strategy, although we had no idea how to implement one at all.

Those were the days when Pivotal was also hyped with its Cloud Foundry solution. In the meantime, we're wiser and know that the solution never really took off: Pivotal was acquired by VMware in 2019 and then incorporated into VMware Tanzu, and there's not much happening in the Cloud Foundry Git repositories anymore. With VMware Tanzu, the love child of Kubernetes and vSphere is also on the market – as if the complexity of either tool on its own wasn't already quite enough …

Kubernetes is the answer to such a vendor-independent multi-cloud strategy. With Kubernetes, you can deploy applications on your own infrastructure, shift them almost seamlessly to either AWS, Azure, or Google – and vice versa. Of course, the devil is in the details and there are differences in how Kubernetes works on different platforms. This is where tools like Rancher or OpenShift come into play, making Kubernetes much easier to manage.

To be honest, though, I have only seen this at really large enterprises where risk mitigation is a major topic. And compared with the cost of building your own tooling for a multi-cloud environment, the overall cost of Kubernetes and multi-cloud is probably still a bargain.

Kubernetes also offers several other benefits that are not unique to a particular use case, but make life easier overall. These include Kubernetes' self-healing capabilities.

No matter what type of infrastructure you have, at some point your computing infrastructure will fail or experience downtime. Whether Kubernetes runs on bare metal, on self-managed VMs or is managed by a cloud provider, nothing is safe from a kernel panic, a hardware failure or just the blue screen of death.

When a node becomes unavailable, Kubernetes simply reschedules its pods on another healthy node. This means applications are quickly available again without you having to lift a finger. At least in theory – we'll go into what Kubernetes' self-healing capabilities and auto-scaling look like in reality in the next part. [Kubernetes - what could possibly go wrong?]
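Self-healing relies on you telling Kubernetes what "healthy" means. A sketch of the usual ingredients – multiple replicas plus a liveness probe – could look like this; the application name, image and probe path are hypothetical:

```yaml
# Self-healing sketch: with several replicas, pods from a lost node are
# rescheduled elsewhere, and containers failing the liveness probe are
# restarted automatically. Name, image and probe path are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/shop/web:2.0.1
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```

Without a probe, Kubernetes only notices crashed processes and lost nodes; a hung application that still accepts connections would keep running unnoticed.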

Kubernetes is great - but not suitable for everyone

So as you can see, Kubernetes actually has some great use cases. If you recognize yourself and your company in any of the examples presented, you should give it a try. Keep in mind, however, that it's not the solution to all problems – and this is where I see many companies going astray.

The aphorism “If all you have is a hammer, every problem looks like a nail” can easily be applied to Kubernetes (and the very closely related concept of microservices) in the modern IT world. Just because everyone is talking about it and recommending the technology doesn't mean you should use it for everything. Why? When making such a decision, always consider the size of your business and the effort involved with a tool as complex as Kubernetes.

Do you really need auto-scaling?

When deciding for or against Kubernetes, ask yourself whether your application really needs automatic scaling, for example because you see massive workload peaks within short periods. Or is a larger VM the cheaper solution in your case? Always take into account that a true Kubernetes cluster requires multiple worker nodes for redundancy. If you run it on-premises, you will also need multiple control plane nodes (ideally three).

Do you need to reinvent a well-oiled machine?

Do you run your IT infrastructure yourself and have an excellent virtualization setup with a high level of automation? Then you can probably roll out software on your infrastructure without any major problems anyway – and you don't have to reinvent such a well-oiled machine.

Is your application already highly available?

Are you sure you really need 99.999% availability for your application? Or does your application already meet the criteria for high availability anyway?

Mind the (Knowledge) Gap

What most companies seem to forget about is the [link to Kubernetes knowledge gap]. If you don't have Kubernetes expertise in-house, don't expect to find Kubernetes experts in the market looking for a job. If you do, you're already too late and will likely have to rely on external Kubernetes consultants (who are also a scarce resource).

Kubernetes heaven instead of hell

That is a lot to consider, but this is where the fun with Kubernetes begins. Let's say you've chosen Kubernetes on your way to a bright future with microservices. To make sure you end up in Kubernetes heaven and not in Kubernetes hell, we'll devote the next part to the most common issues that arise with Kubernetes.
