How to build a Kubernetes Cluster?


In this guide I’m going to show you how to build a single-host Kubernetes cluster and share the common blockers that people run into. I will list the minimum requirements for this project, but first, a brief introduction to Kubernetes.

Kubernetes

Kubernetes, often abbreviated to K8s, is a portable, open-source platform for managing containerized workloads and services, in other words, for container orchestration. It grew out of Google’s internal container-management systems, which the company kept in-house until 2014, when Kubernetes was released as open source. Google had realised that the number of services and applications it was building was growing too quickly to maintain manually, so it created technology to automate the management, configuration and coordination of all of its products. After several iterations and names, it became the Kubernetes we know today.

Requirements

Kubernetes is a technology that keeps evolving, so I suggest you use the same versions described in this tutorial to avoid hiccups. I personally decided to build the cluster on a Raspberry Pi, but if you don’t have one - considering the current semiconductor crisis - an old laptop would do. I highly recommend not using your personal laptop, as the following changes may impact your network. Just make sure that you meet the requirements below:

  • Kubernetes v1.24.1
  • Red Hat Enterprise Linux 7.x+, CentOS 7.x+, Ubuntu 16.04+, or Debian 9.x+
  • x86-64, arm64, ppc64le, or s390x processor
  • 2 CPUs
  • 2 GB of RAM, although 4 GB is recommended
  • 10 GB of free disk space

If you are setting up the cluster on a Raspberry Pi like me but are not sure how much memory your machine has, you can run the following command to find out:

grep MemTotal /proc/meminfo
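The requirements also mention CPU count and free disk space; you can check those in the same way with standard Linux commands:

nproc        # number of CPU cores available
df -h /      # free space on the root filesystem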

Install Ubuntu on Raspberry Pi (skip if not using RPi)

We need to install an OS on the RPi, and to do that we have to prepare the boot image. I have decided to use Ubuntu Server, as I have used it in the past, but you are free to choose any compatible OS you want.

  • Go to https://www.raspberrypi.com/software/ and install the Raspberry Pi Imager.
  • Select the OS and the drive (SD card or USB stick) you are installing the OS onto, then let the Imager write the image.
  • Insert the SD card into your RPi and wait for the OS to finish installing.
  • At the login prompt, use ubuntu as both username and password. You can change the password afterwards, so don’t worry.
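If you want to change the password again later, the standard passwd command does the job:

passwd       # change the password for the currently logged-in user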

If your RPi is not responding after you insert the SD card and you can only see the red light, it means that the OS image is not booting on the machine. There are many reasons why this could happen; a likely one is that you wrote the OS image to the drive by hand, for example with the `dd` command. If you use the RPi Imager you shouldn’t have this problem.

At this point you can decide to run your commands directly on your RPi or SSH into it and run them from another device.

The SSH connection is easy; the only thing you need is the private IP address of your device, and getting that information is relatively simple. Just install net-tools if you haven’t already and use the ifconfig command.

sudo apt install net-tools
ifconfig -a

If you are connected with an Ethernet cable, check the eth0 interface; next to the inet value you should see the private IP address. Do not share your private IP address with anyone: this information is not exposed outside your network, and for a good reason.
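With the private IP address in hand you can connect from another device on the same network. The address below is only a placeholder, use the inet value you just found:

ssh ubuntu@192.168.1.50      # hypothetical address, replace with your RPi's inet value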

Docker Installation

Now let’s set up Docker. First things first, we are going to install it:

sudo apt install -y docker.io
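Before moving on, it’s worth confirming that the Docker daemon is enabled and working; these are standard systemd and Docker commands:

sudo systemctl enable --now docker     # start Docker now and on every boot
sudo docker run hello-world            # quick smoke test of the installation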

Docker Configuration

Now we need to make sure that the Linux kernel will isolate our containers from one another by enabling control groups (cgroups). We are going to accomplish that by changing /etc/docker/daemon.json as follows:

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
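After changing daemon.json, restart Docker and check that it picked up the systemd cgroup driver:

sudo systemctl restart docker
sudo docker info | grep -i 'cgroup driver'      # should report: systemd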

Now we are going to enable limit support at boot by adding the following parameters to the kernel command line. The location of this file differs per OS:

  • Raspberry Pi 4 (Ubuntu): /boot/firmware/cmdline.txt
  • Ubuntu 20.04 on other hardware: the kernel command line is set via GRUB_CMDLINE_LINUX in /etc/default/grub (run sudo update-grub after editing)

On the Raspberry Pi, run the following command to append the parameters to cmdline.txt:

sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1 swapaccount=1/' <location-kernel-config-above>

What does swapaccount=1 do?

Earlier I mentioned why we need to enable control groups (cgroups). We also need to enable swap accounting, which the kernel disables by default on many distributions. When the kernel detects that there is not enough memory to perform important tasks, it starts killing processes to free up memory. Docker itself would obviously be at risk of being terminated if this happened, resulting in your containers being killed. To avoid that, Docker adjusts the OOM priority on its daemon so that it is less likely to be killed than other processes on the system.
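These kernel parameters only take effect after a reboot. Once the machine is back up, you can confirm they were applied by reading the live kernel command line:

sudo reboot
cat /proc/cmdline      # should now contain cgroup_enable=memory and swapaccount=1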

Configure iptables

Iptables needs to be configured to see bridged network traffic:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
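For these settings to work, the br_netfilter module has to be loaded, and the new sysctl values need to be applied; both steps come from the official Kubernetes installation docs:

sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf     # load the module on every boot
sudo sysctl --system                                          # apply the settings without rebooting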

Install Kubernetes

Let’s follow the same instructions found in the Kubernetes documentation:

1. Update the apt package index and install packages needed to use the Kubernetes apt repository:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

2. Download the Google Cloud public signing key:

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

3. Add the Kubernetes apt repository:

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

4. Update apt package index, install kubelet, kubeadm and kubectl, and pin their version:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

The last command puts these packages on hold, disabling automatic updates: upgrading Kubernetes should always be a deliberate, manually reviewed step.
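If you want to pin the exact version used in this guide rather than whatever is latest, apt can install a specific package revision. The 1.24.1-00 suffix below is an assumption based on how these packages are usually versioned; run apt-cache madison kubeadm to see what is actually available:

sudo apt-get install -y kubelet=1.24.1-00 kubeadm=1.24.1-00 kubectl=1.24.1-00
kubeadm version      # confirm the installed version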

Create the cluster

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --v=5

Then follow the instructions that kubeadm prints at the end to configure kubectl for your user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
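At this point kubectl should be able to reach the API server; the node will report NotReady until the CNI is installed in the next step:

kubectl get nodes
kubectl cluster-info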

Install Calico

1. Install the Tigera Calico operator and other resource definitions:

kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml

2. Install Calico by creating the necessary custom resource.

kubectl create -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml

Before creating this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to change the default IP pool CIDR to match your pod network CIDR.

3. Confirm that all of the pods are running with the following command.

watch kubectl get pods -n calico-system

Wait until each pod has the STATUS of Running.

The Tigera operator installs resources in the calico-system namespace. Other install methods may use the kube-system namespace instead.
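If you are not sure which namespace was used, you can simply list the pods in every namespace:

kubectl get pods --all-namespaces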

4. Remove the taints on the master so that you can schedule pods on it.

kubectl taint nodes --all node-role.kubernetes.io/master-

It should return the following:

node/<your-hostname> untainted
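Note that from Kubernetes 1.24 the control plane node may also (or instead) carry the node-role.kubernetes.io/control-plane taint, so if the command above complains that the taint was not found, remove that one as well:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-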

5. Confirm that you now have a node in your cluster with the following command.

kubectl get nodes -o wide

It should return something like the following:

NAME              STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME

<your-hostname>   Ready    master   52m   v1.12.2   10.128.0.28   <none>        Ubuntu 18.04.1 LTS   4.15.0-1023-gcp   docker://18.6.1

Congratulations, you have just created your first cluster!

Restart from scratch or reset

If for some reason you want to start from scratch and delete all of the configuration above, you can run the following commands. Removing these files will give you a clean state again.

Remove configurations

sudo rm -r /etc/cni
sudo rm -r /etc/cni/net.d
sudo rm -r $HOME/.kube/
sudo rm -r /etc/kubernetes/kubelet.conf
sudo rm -r /etc/kubernetes/pki/ca.crt

Stop the kubelet

sudo systemctl stop kubelet

Reset cluster

sudo kubeadm reset
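kubeadm reset does not clean up iptables or IPVS rules; its own output tells you to do that manually if needed, for example:

sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X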

Debugging - Master node is not ready

Your master node’s status is NotReady, and there are several possible reasons why. The best way to find out what’s wrong is to describe the node:

kubectl describe node <node-name>

In the Conditions section you will find:

  • NetworkUnavailable
  • MemoryPressure
  • DiskPressure
  • PIDPressure
  • Ready

Those are the main reasons why your node might not be working properly. If the Ready condition is set to False, check the reason, and if you see:

KubeletNotReady …NetworkPluginNotReady cni plugin not initialised…

This means that the CNI hasn’t been (properly) configured. Install Calico, and if you have already done so, make sure that the calico-kube-controllers pod is Running and not Pending.
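A quick way to see what the kubelet itself is complaining about, and whether Calico’s pods ever started, is:

journalctl -u kubelet -f                # follow the kubelet logs (Ctrl+C to stop)
kubectl get pods -n calico-system       # calico-node and calico-kube-controllers should be Running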

Debugging - Calico kube controllers is pending

Most likely your node has a taint that prevents the controller from being scheduled. Identify the taint and remove it from the node:

kubectl describe node <node-name> | grep Taints

Remove the taint by appending “-” to its name, as in the following command:

kubectl taint nodes <node-name> <taint-name>-
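For example, if the describe output showed the taint node.kubernetes.io/unreachable:NoSchedule on a node called rpi-master (both names are hypothetical here), the command would be:

kubectl taint nodes rpi-master node.kubernetes.io/unreachable:NoSchedule-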
