Getting started with Kubernetes: what you need to know


The use of containers is becoming more and more common. According to Gartner, by 2024, 15% of all applications will run in containers and 75% of all organizations will use containers in production. This is due to the rising popularity of cloud-native platforms that enable the use of containers. In addition, Gartner expects 95% of new digital workloads to land on these platforms by 2025. Kubernetes is one of the most used platforms and has become the de facto standard for deploying and managing containerized workloads. But if you are going to use Kubernetes, there are a number of points to consider. In this article we take a closer look at them, so that you are better prepared when adopting Kubernetes.

Why Kubernetes?

It starts with the use of container technology. An important feature of containers is portability. By packaging an application’s code in a container, it can run anywhere, because everything the application needs is included in the container. However, the container runtime required to run it depends on a server. If the runtime fails due to a server failure, the application is no longer available. This is where Kubernetes comes into play. Kubernetes is an open source cluster system for automated deployment, scaling and management of containerized applications. If an application is packaged in containers, you make an ‘agreement’ with Kubernetes to run those containers in the cluster. With Kubernetes, a deal is a deal: if one machine in the cluster fails, Kubernetes will reschedule the container onto another machine, and with a cluster autoscaler a replacement machine can even be added to the cluster automatically.
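That ‘agreement’ takes the form of desired state declared in a manifest. As a minimal sketch (the names and image are illustrative, not from a real setup), a Deployment asks Kubernetes to keep three replicas of a web container running; if a node fails, the scheduler recreates the missing copies elsewhere:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # the 'agreement': always keep 3 copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # example image
          ports:
            - containerPort: 80
```

Kubernetes continuously compares this desired state with the actual state of the cluster and corrects any drift, which is what makes the self-healing behaviour described above possible.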

However, Kubernetes has even more to offer. No matter where you use Kubernetes, you always speak the same Kubernetes ‘language’, or a universal infrastructure API. Whether you use Kubernetes in a Public Cloud such as Azure or in your own data center, you just need to communicate with the Kubernetes API. A few years ago, setting up a Kubernetes cluster was a challenge, but today there are several tools available that simplify this process. In addition, the range of managed Kubernetes clusters in the cloud has increased significantly.

Even though it seems that everyone is talking about Kubernetes these days and IT suppliers are willing to do everything they can to help you use it, the question is whether it is also a good fit for your company, and what adopting it entails.

Should you start using Kubernetes?

Kubernetes may not always be the right choice. If you haven’t done so before, you’ll need to containerize the application first. But the decision depends mainly on the size and complexity of the application. If you only have a simple application with a few containers, setting up a Kubernetes cluster is probably unnecessary; alternatives such as Google Cloud Run or Azure Functions will serve you better. As the number of services and containers grows, Kubernetes becomes interesting.

Deploying applications to Kubernetes is more complicated

Rolling out applications to Kubernetes requires not only the necessary technology, but also knowledge and skills. Everything comes together in Kubernetes: network, compute and storage. To use these, you will need to write Kubernetes manifests that create the necessary Kubernetes objects (of which there are now more than 50). Preferably, these manifests should be delivered automatically via a pipeline. There are no exact figures, and it depends on the use case, but on average it takes at least a month to build a first setup. In addition, resolving configuration errors can sometimes take days. The learning curve is steep, and before you are really familiar with Kubernetes, you will have to work with it daily for at least a year.
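To make “network, compute and storage come together” concrete, here is a sketch of two other common objects an application typically needs alongside its Deployment (assuming a hypothetical app labelled `web`): a Service for in-cluster networking and a PersistentVolumeClaim for storage.

```yaml
# Networking: expose the app inside the cluster under a stable name
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # routes traffic to pods with this label
  ports:
    - port: 80
      targetPort: 80
---
# Storage: request 1 GiB of persistent storage for the app
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Every extra service your application consists of multiplies the number of such manifests, which is why delivering them through an automated pipeline quickly becomes essential.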


Kubernetes requires many more additional tools

You can query and modify the objects via the Kubernetes API. Kubernetes itself does not have built-in tools that you would expect in a modern hosting infrastructure. Deploying a containerized application to a new Kubernetes cluster and configuring access to the container from the internet is like connecting a server with an application directly to the internet. There are no security measures. There is no network encryption, no Web Application Firewall (WAF), no monitoring, no intrusion detection, and no network and security policies. To implement all the necessary management and security tools, you have to rely on an ecosystem of more than 2000 open source tools. You have to select, implement, configure, install and manage all tools yourself, and all tools have their own configuration properties and do not work together by default.
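As an example of what “no network and security policies” means in practice: traffic between pods is unrestricted until you add a policy yourself. A common starting point (a sketch, not a complete security setup; the namespace is illustrative, and the policy only takes effect if the cluster’s network plugin supports NetworkPolicy) is a default-deny rule:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-app        # illustrative namespace
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
    - Ingress              # deny all incoming traffic by default...
    - Egress               # ...and all outgoing traffic
```

From here you would add explicit allow rules per service, which illustrates the broader point: each of these safeguards must be selected, written and maintained by you.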

Kubernetes is therefore not a complete solution for containerized applications, but a platform to build a platform with. Every day there is a new tool to try and there are 1001 ways to build the platform. This makes Kubernetes an interesting and challenging project for infrastructure and platform engineers. In addition, we see an explosive growth of consultancy companies around Kubernetes, because the popularity is growing so fast that there is a major shortage of knowledge in the market. A lot of time can be invested in the development of a Kubernetes platform.

Kubernetes costs are significant

It is clear that using Kubernetes does not come cheap. On average, companies spend six months with several engineers building a first version of a platform. And once that first version is completed, you’re not done. When a new Kubernetes version is released, you have to investigate whether all the tools you use are compatible, and adjust them where necessary. Depending on the level of security required, a Kubernetes platform quickly uses 10 to 20 additional open source tools. A platform is never finished, so there is a constant need to keep improving it by adding features that improve the developer experience. All this has to be done in a market that lacks knowledge.

If you do nothing, you run a higher risk of security incidents

There is an increasing trend of configuration errors leading to security incidents, according to several reports. This includes containers that allow root access, giving an attacker admin privileges the moment they get in. Or workloads without limits, which allow a container to claim unlimited resources in the cluster, as well as containers with known vulnerabilities, such as the Log4j (Log4Shell) case. How do you prevent such vulnerabilities in your containers? This trend is worrying and at the same time a result of the rapid adoption of Kubernetes without all possible measures being taken to prevent these kinds of problems.
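Two of the misconfigurations mentioned above can be addressed directly in the pod spec. A sketch (the names, image and values are illustrative, and the image itself must actually support running as non-root): refuse to run as root and set resource limits so a single container cannot starve the cluster.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
spec:
  containers:
    - name: app
      image: nginx:1.25              # example image
      securityContext:
        runAsNonRoot: true           # refuse to start if the image runs as root
        allowPrivilegeEscalation: false
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 500m                  # cap CPU...
          memory: 256Mi              # ...and memory per container
```

In practice you would enforce such settings cluster-wide with an admission policy rather than rely on every team writing them by hand, which again requires extra tooling.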

The risk of lock-in when using Public Cloud

We often get the question whether one can simply use services from cloud hyperscalers such as GCP, Azure and AWS. Let’s take AKS, the managed Kubernetes service in Azure, as an example. Setting up an AKS cluster is easy these days and can be done in a few minutes. But as described earlier, Kubernetes alone is not enough. A Kubernetes cluster uses cloud resources (virtual machines), and Azure offers all kinds of services to extend the platform. These additional services are not only expensive, but can only be used in combination with Kubernetes in Azure. If you later want to use them in another cloud, or switch to a private cloud for compliance/GDPR reasons, you have to start over, which defeats the original idea behind Kubernetes: a universal infrastructure API.

In short, Kubernetes is very suitable for companies that want to host containerized applications. If an application consists of only a few containers, Kubernetes may be unnecessary; it is specifically designed for hosting complex architectures with many frontend and backend services. When implementing Kubernetes, it is important to consider the impact, the time investment and the associated risks for the organization.

This article was written by Sander Rodenhuis, CTO of Red Kubes.

[Photo credits – bilalulker © Adobe Stock]
