Manage Kubernetes Clusters with OKD

Containers are the talk of the town due to their many benefits. They are just as well suited for quickly trying out an application as they are for use as components in CI/CD pipelines or for running highly available production services in clusters. There are as many approaches to creating and running container setups as there are areas of application – and we do not restrict your options. In the following, we provide a brief introduction to the world of containers and then show you how to get started running an OKD cluster with us. OKD is a comprehensive Kubernetes distribution, developed as an open source project, that also forms the basis of Red Hat OpenShift.

Container orchestration: what is it all about?

Many of our customers already use containers in one form or another. A currently widespread approach to isolating containers is the use of Linux namespaces in combination with cgroups; this became mainstream around 2013 with the breakthrough of Docker and the ensuing Open Container Initiative (OCI) standardization process. Containers separate applications from each other in a particularly resource-efficient way: no hardware is virtualized and, unlike with full virtualization, there is no need to run multiple parallel instances of the operating system.

While a handful of containers on a few nodes can easily be managed by hand, container orchestrators such as Kubernetes ("K8s") become indispensable as soon as questions of scaling and container lifecycle management arise. Kubernetes ensures, among other things, that the desired number of instances of each container is running and independently selects suitable nodes for them. When new container versions are deployed, it replaces old instances according to defined deployment strategies, e.g. so that the service as a whole remains continuously available. And if certain containers require persistent storage, Kubernetes can automatically provision it on the right node via the Container Storage Interface (CSI).
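To make this concrete, here is a minimal, hypothetical Deployment manifest (all names and the image are placeholders) expressing exactly these guarantees: a desired replica count and a rolling-update strategy that keeps the service reachable during a rollout.

```yaml
# Hypothetical example: Kubernetes keeps three instances running and,
# during an update, replaces them one at a time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 3                # desired number of instances
  selector:
    matchLabels:
      app: demo-web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one instance down at a time
      maxSurge: 1            # at most one extra instance during rollout
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Persistent storage would be requested analogously via a PersistentVolumeClaim, which Kubernetes binds to a CSI-provisioned volume on the node where the pod is scheduled.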

Kubernetes exists in various "flavors"

With us, customers can choose how to install and run Kubernetes. If you would like to work directly with "upstream" Kubernetes, you can deploy your K8s cluster with Kubespray, for example: this tool is based on Ansible and can be used together with our Ansible collection, making it straightforward to set up the cloud infrastructure required for the cluster via our API.
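As a rough sketch of how Kubespray is driven: you describe your nodes in an ordinary Ansible inventory and then run the `cluster.yml` playbook against it. The hostnames and IPs below are placeholders, and the exact group names may vary between Kubespray releases.

```ini
# Sketch of a Kubespray inventory (placeholder addresses)
[kube_control_plane]
master-1 ansible_host=203.0.113.10

[etcd]
master-1

[kube_node]
worker-1 ansible_host=203.0.113.20
worker-2 ansible_host=203.0.113.21

[k8s_cluster:children]
kube_control_plane
kube_node
```

The cluster itself is then deployed with something like `ansible-playbook -i inventory.ini cluster.yml -b` from the Kubespray repository.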

Rancher, a higher-level administration tool, goes one step further. Rancher itself runs in containers and provides a graphical web frontend and APIs that let you set up and manage complete Kubernetes clusters in just a few steps. Rancher automatically prepares the cloud resources required for a cluster; the node driver comes pre-installed in current Rancher releases.

OKD is another powerful tool that we will look at in more detail here. This open source project also forms the basis of OpenShift, which Red Hat sells as a complete package of software, services and support. Along the lines of the Linux kernel and the Linux distributions built around it, OKD can be seen as a Kubernetes distribution. In addition to pure container management, OKD integrates several tools that, for example, monitor the cluster, collect logs and route network traffic to the correct containers. Installing OKD with us benefits from various features that we have already reported on, e.g. private networks with managed DHCP. For the master and worker nodes, OKD uses Fedora CoreOS, the successor to the original CoreOS Container Linux, which is one of the available options when installing new servers with us (just like Flatcar Container Linux, by the way, a popular CoreOS fork).

Create your own OKD cluster step by step

If you are interested in OKD, we have published detailed instructions on GitHub for creating your own OKD cluster. The tutorial relies on Ansible, a popular DevOps tool, to automate the individual steps. In addition, our "how to" guide uses "ocp4-helpernode", developed in the OpenShift community, to make the procedure even more straightforward, in particular for provisioning HAProxy and DNS. The process essentially consists of the following four steps:
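For orientation, the helper-node playbook is driven by a small variables file describing the cluster layout. The following is a hypothetical sketch of such a file – the exact schema differs between ocp4-helpernode versions, and all names and addresses are placeholders:

```yaml
# Sketch of a vars file for ocp4-helpernode (placeholder values)
helper:
  name: "helper"
  ipaddr: "192.168.7.77"
dns:
  domain: "example.com"       # base DNS domain
  clusterid: "okd4"           # cluster name, e.g. okd4.example.com
  forwarder1: "9.9.9.9"       # upstream resolver
masters:
  - name: "master-1"
    ipaddr: "192.168.7.21"
workers:
  - name: "worker-1"
    ipaddr: "192.168.7.31"
```

From these values the playbook derives the DNS zone, the DHCP/HAProxy entries and the file layout on the Apache server.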

Step 1: The required tools are installed on your personal device, e.g. your own laptop or an alternative cloud server, and certain basic variables are defined.

Step 2: The helper node is installed and the services that perform a range of key functions in the cluster are configured on it. API connections, for example, will run via the HAProxy, which also makes it possible to reach your applications on the worker nodes from the Internet at a later stage. The DNS server, in turn, enables resolution of cluster-internal domains and IP addresses, while an Apache HTTP server serves static files. Ignition configs in particular are stored on the Apache server: similar to "cloud-init", Ignition applies individual settings to newly created servers on their very first boot.
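The HAProxy role can be pictured with a fragment like the one below. The addresses are placeholders; in an OKD cluster the Kubernetes API conventionally listens on port 6443, while application traffic reaches the ingress routers on the workers via ports 80/443 (further frontends, e.g. for the machine config server, are omitted here for brevity).

```
# Sketch of the helper node's HAProxy configuration (placeholder IPs)
frontend api
    bind *:6443
    mode tcp
    default_backend api-backend

backend api-backend
    mode tcp
    balance roundrobin
    server master-1 192.168.7.21:6443 check
    server master-2 192.168.7.22:6443 check
    server master-3 192.168.7.23:6443 check

frontend ingress-https
    bind *:443
    mode tcp
    default_backend ingress-https-backend

backend ingress-https-backend
    mode tcp
    server worker-1 192.168.7.31:443 check
    server worker-2 192.168.7.32:443 check
```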

OKD demo cluster network diagram

Step 3: This is where the master and worker nodes are created. As with the helper node, starting these virtual servers is automated using our Ansible collection. The new nodes fetch their prepared Ignition configs – and thus their specific configuration settings – from the Apache server set up in the previous step, which means they can complete their setup independently.
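A node's initial Ignition config can be as small as a pointer: it merely merges the full, role-specific config fetched from the helper node over HTTP. The address, port and path below are placeholders; the `merge` directive shown is part of the Ignition v3 spec.

```json
{
  "ignition": {
    "version": "3.2.0",
    "config": {
      "merge": [
        { "source": "http://192.168.7.77:8080/ignition/worker.ign" }
      ]
    }
  }
}
```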

Step 4: In the final step, the relevant certificate signing requests (CSRs) need to be approved so that the new nodes are accepted into the cluster. This completes the installation of the OKD cluster.
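With the OpenShift/OKD client `oc`, approving the pending CSRs amounts to something like the following (run from a machine with cluster-admin access; each node typically submits two CSRs, so the command may need to be repeated):

```
# Approve all currently pending certificate signing requests
oc get csr -o name | xargs oc adm certificate approve

# Watch the new nodes become Ready
oc get nodes -w
```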

The popularity of container setups continues unabated, not least because of their numerous advantages, such as the clean separation of individual services combined with efficient use of resources. Depending on your specific requirements and preferences, a wide range of container-management solutions are available. Whether OKD, Rancher or a particularly streamlined approach: with us you will find the components you need to make it work.

For your favorite tools,
Your team
