How to Create LoadBalancer Services on a Bare-Metal Cluster


Kubernetes on bare metal means your cluster runs directly on physical servers rather than on virtual machines. This approach isn’t right for everyone, but it offers a cheap alternative and can improve performance, since there is no virtualization layer in between. In cloud-based Kubernetes clusters, cloud vendors expose Services through load balancers, but those are unavailable in a bare-metal environment.

Don’t panic! You can still expose your Services with a software load balancer that offers the same user experience as cloud-based load balancers. The one we are going to use today is OpenELB, an open-source load-balancer implementation for bare-metal Kubernetes clusters.

Install OpenELB

You only need to install OpenELB once, and it will take care of almost everything else. The only requirement is Kubernetes 1.15 or later.

I’m going to show you how to install OpenELB using kubectl, but you can also do it with Helm 3 if you’d like. You can find links to the documentation at the bottom of this guide.

  1. Log in to your Kubernetes cluster and run:
kubectl apply -f https://raw.githubusercontent.com/openelb/openelb/master/deploy/openelb.yaml
  2. Check the status of openelb-manager and wait until it changes to “Running”:
watch kubectl get po -n openelb-system

If you wish to delete OpenELB you just need to run:

kubectl delete -f https://raw.githubusercontent.com/openelb/openelb/master/deploy/openelb.yaml

To verify the deletion, list your namespaces; if openelb-system is no longer there, OpenELB has been deleted successfully:

kubectl get ns

At this point you need to choose which mode OpenELB will run in. If you don’t know where to start, you’ll probably want Layer 2, as it requires no configuration. If you’re still not sure which one to choose, below is a list of pros and cons for each mode.

Layer 2 Mode

Pros

  • No setup required, which makes it perfect for beginners

Cons

  • When a failover occurs among the worker nodes, the Services in the cluster are interrupted for a short period of time.
  • Bandwidth bottlenecks: all traffic for a Service is always sent to a single node first and then forwarded to the other nodes by kube-proxy, so bandwidth is limited to that of one node.
  • Requires your infrastructure to allow anonymous ARP/NDP packets

BGP Mode

BGP (Border Gateway Protocol) is the gateway protocol that allows the internet to exchange routing information between Autonomous Systems (AS) via peering.

Pros

  • High availability
  • No failover interruptions
  • No bandwidth bottlenecks

Cons

  • Your router must support BGP (and ECMP, if you want to load-balance across multiple routes)

VIP Mode

Pros

  • All the pros of BGP mode
  • Doesn’t require your infrastructure to allow anonymous ARP/NDP packets

Cons

  • VIP mode is in beta and has not been fully tested; you may encounter unknown issues.

Still don’t know which mode to use?

If you don’t care about high availability and just want a private cluster to learn and experiment with Kubernetes, your choice is Layer 2.

If you want to manage your own cluster for your applications and services, use the BGP mode.

If your router doesn’t support BGP, but you still want high availability for your services and you’re not afraid of potential issues, then your last choice is VIP mode.

That said, even if you are running a cluster as a personal project or to learn how to use Kubernetes, my suggestion is to skip the VIP and Layer 2 modes and use BGP. If your router doesn’t support it, don’t worry: you can install a virtual router using BIRD.

Install BIRD

If you don’t have a router, or yours doesn’t support BGP, you can use BIRD. It lets you create a virtual router that speaks BGP.

  1. Run the following commands to install BIRD:
sudo add-apt-repository ppa:cz.nic-labs/bird
sudo apt-get update 
sudo apt-get install bird
sudo systemctl enable bird
  2. Create the configuration file:
sudo vim /etc/bird/bird.conf
  3. Paste the following configuration:
# Router ID: the IP address of the host running BIRD
router id 192.168.0.5;

# Sync BIRD's routes with the kernel routing table
protocol kernel {
   scan time 60;
   import none;
   export all;       # push BGP-learned routes into the kernel
   merge paths on;   # merge equal routes so ECMP works
}

# Periodically scan network interfaces
protocol device {
   scan time 60;
}

# BGP session with the OpenELB speaker in the cluster
protocol bgp neighbor1 {
   local as 50001;                           # BIRD's AS (the peerAs in the BgpPeer object)
   neighbor 192.168.0.2 port 17900 as 50000; # cluster node IP, port, and AS from the BgpConf object
   source address 192.168.0.5;               # IP of the host running BIRD
   import all;
   export all;
   enable route refresh off;
   add paths on;
}
  4. Restart BIRD:
sudo systemctl restart bird 
  5. Check the status of BIRD:
sudo systemctl status bird

Next, we are going to create the three objects that make BGP mode possible:

  • The BgpConf
  • The BgpPeer
  • The Eip

For each object, the steps are the same:

  • create the YAML file
  • paste the configuration
  • apply the object

Create a BgpConf Object

  1. Create the YAML file:
sudo vim bgp-conf.yaml
  2. Add the configuration:
apiVersion: network.kubesphere.io/v1alpha2
kind: BgpConf
metadata:
 name: default
spec:
 as: 50000             # local AS of the cluster; must differ from the peer AS
 listenPort: 17900
 routerId: 192.168.0.2 # IP address of a cluster node
  3. Create the BgpConf object:
kubectl apply -f bgp-conf.yaml

Create a BgpPeer Object

  1. Create the YAML file:
sudo vim bgp-peer.yaml
  2. Add the configuration:
apiVersion: network.kubesphere.io/v1alpha2
kind: BgpPeer
metadata:
 name: bgp-peer
spec:
 conf:
   peerAs: 50001                # AS of the BIRD virtual router
   neighborAddress: 192.168.0.5 # IP address of the host running BIRD
  3. Create the BgpPeer object:
kubectl apply -f bgp-peer.yaml

Create an Eip Object

  1. Create the YAML file:
sudo vim bgp-eip.yaml
  2. Add the configuration:
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
 name: bgp-eip
spec:
 address: 172.22.0.2-172.22.0.10 # pool of addresses OpenELB can assign to LoadBalancer Services
  3. Create the Eip object:
kubectl apply -f bgp-eip.yaml

Now you can create your Deployment and Service by filling in the configurations below and applying them.

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
 name: <name of the deployment>
spec:
 replicas: 2
 selector:
   matchLabels:
     app: <name of the app>
 template:
   metadata:
     labels:
       app: <name of the app>
   spec:
     containers:
       - image: <username/image>
         name: <image>
         ports:
           - containerPort: 8080

Service

kind: Service
apiVersion: v1
metadata:
 name: <service>
 annotations:
   lb.kubesphere.io/v1alpha1: openelb            # hand this Service to OpenELB
   protocol.openelb.kubesphere.io/v1alpha1: bgp  # use BGP mode
   eip.openelb.kubesphere.io/v1alpha2: bgp-eip   # take the address from the bgp-eip pool
spec:
 selector:
   app: <yourapp>
 type: LoadBalancer
 ports:
   - name: http
     port: 80
     targetPort: 8080
 externalTrafficPolicy: Cluster

Check the LoadBalancer

Once you’ve done that, you can check that your LoadBalancer is working correctly by looking at its EXTERNAL-IP:

kubectl get svc
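
If the EXTERNAL-IP column shows an address from your Eip pool, everything is wired up. As a sketch, here is one way to pull that column out of the output; the service name and addresses below are made-up placeholders for illustration:

```shell
# Hypothetical `kubectl get svc` output, captured in a variable for
# illustration only; your Service name and addresses will differ.
svc_output='NAME        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
my-service  LoadBalancer   10.96.120.11   172.22.0.2    80:30080/TCP   1m'

# The EXTERNAL-IP is the fourth column of the LoadBalancer row
external_ip=$(echo "$svc_output" | awk '$2 == "LoadBalancer" {print $4}')
echo "$external_ip"
```

In a real cluster you can get the same address directly with kubectl get svc <service> -o jsonpath='{.status.loadBalancer.ingress[0].ip}'.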

Conclusion

That’s how you set up a LoadBalancer for your bare-metal cluster. Obviously you wouldn’t use the virtual router in a production environment, but it’s useful when you want to run your own private Kubernetes cluster.

Links to the documentation:
