High Availability Raspberry Pi K8s Cluster

Introduction

For tech enthusiasts, a home-lab setup can be incredibly useful. A great option is deploying a Kubernetes (K8s) cluster on Raspberry Pi devices, which is an affordable and efficient solution. Here is how I set up a highly available (HA) K8s cluster for myself.

Prerequisites

  • A domain name.
  • Three or more Raspberry Pi devices.
  • Ubuntu 24.04 OS flashed on each Pi.
  • Private home network.

Step-by-step Guide

For this guide, I assume you have the domain harrytang.com and three Pi devices with the private IPs of 192.168.68.51, 192.168.68.52 and 192.168.68.53.

  1. Configure static IP addresses:

    Edit the /etc/netplan/50-cloud-init.yaml file with the following settings:

    # /etc/netplan/50-cloud-init.yaml
    network:
      ethernets:
        eth0:
          dhcp4: no
          addresses:
            - 192.168.68.51/22 # 22 is the CIDR notation for 255.255.252.0
          routes:
            - to: default
              via: 192.168.68.1
          nameservers:
            addresses:
              - 1.1.1.1
              - 1.0.0.1
              - 2606:4700:4700::1111
              - 2606:4700:4700::1001
          optional: true
      version: 2
    

    Then, apply it and do the same for the other two devices:

    sudo netplan try
    sudo netplan apply
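
    You can verify the new address and default route before moving on:

    ip -br addr show eth0
    ip route show default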
    
  2. Install required packages (do this for all Pi devices):

    sudo apt-get update && sudo apt-get upgrade -y
    sudo apt-get install -y net-tools iputils-ping ufw vim socat
    
  3. Enable kernel modules and IP forwarding (do this for all Pi devices):

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    overlay
    EOF
    sudo modprobe br_netfilter
    sudo modprobe overlay
    
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward=1
    net.ipv6.conf.all.forwarding=1
    EOF
    sudo sysctl --system    
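
    Verify that the modules are loaded and forwarding is enabled:

    lsmod | grep -E 'br_netfilter|overlay'
    sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding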
    
  4. Configure the firewall (do this for all Pi devices):

    sudo ufw allow ssh
    sudo ufw default allow routed # Allow routed traffic
    sudo ufw allow from 192.168.0.0/16 # Allow traffic from private network
    sudo ufw allow from fe80::/10 # Allow traffic from IPv6 link-local addresses
    sudo ufw enable
    sudo ufw status verbose
    
  5. Install Kubernetes components and CRI-O (do this for all Pi devices):

    kubernetes_version=v1.31
    crio_version="$kubernetes_version"
    
    # Add Kubernetes and CRI-O apt repositories
    curl -fsSL "https://pkgs.k8s.io/core:/stable:/$kubernetes_version/deb/Release.key" \
        | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/$kubernetes_version/deb/ /" \
        | sudo tee /etc/apt/sources.list.d/kubernetes.list
    
    curl -fsSL "https://pkgs.k8s.io/addons:/cri-o:/stable:/$crio_version/deb/Release.key" \
        | sudo gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
    echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/stable:/$crio_version/deb/ /" \
        | sudo tee /etc/apt/sources.list.d/cri-o.list
    
    # Install Kubernetes and CRI-O packages
    sudo apt-get update
    k8s_version=$(apt-cache madison kubeadm | awk '{print $3}' | head -1)
    crio_version=$(apt-cache madison cri-o | awk '{print $3}' | head -1)
    sudo apt-get install -y kubeadm="$k8s_version" kubelet="$k8s_version" kubectl="$k8s_version" cri-o="$crio_version"
    sudo apt-mark hold kubeadm kubelet kubectl cri-o
    sudo systemctl enable --now crio
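
    Confirm the installed versions and that the CRI-O runtime is up:

    kubeadm version
    kubectl version --client
    systemctl status crio --no-pager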
    
  6. Generate the Encryption Configuration file for encryption at rest:

    Add more security to your K8s cluster by enabling encryption at rest: follow the official Kubernetes guide on encrypting confidential data at rest to generate the /etc/kubernetes/enc/enc.yaml file; a minimal sketch is shown below.
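
    For reference, here is a minimal sketch of such a file, following the Kubernetes EncryptionConfiguration format. Generate your own 32-byte key (for example with head -c 32 /dev/urandom | base64); the value below is only a placeholder:

    # /etc/kubernetes/enc/enc.yaml
    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
      - resources:
          - secrets # encrypt Secret objects at rest
        providers:
          - aescbc:
              keys:
                - name: key1
                  secret: <base64-encoded-32-byte-key> # replace this
          - identity: {} # fallback so existing unencrypted data stays readable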

  7. Configure domain for K8s API endpoint:

    Go to your domain's DNS panel and add an A record for k8s.harrytang.com pointing to the first Pi device, e.g. 192.168.68.51:

    k8s.harrytang.com.      300     IN      A       192.168.68.51
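
    You can check that the record resolves before continuing:

    dig +short k8s.harrytang.com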
    
  8. From the first Pi, initialize the Kubernetes control plane:

    Create the init-config.yaml file:

    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: 'k8s.harrytang.com:6443'
    etcd:
      local:
        extraArgs:
          listen-metrics-urls: http://0.0.0.0:2381
    controllerManager:
      extraArgs:
        bind-address: '0.0.0.0'
    scheduler:
      extraArgs:
        bind-address: '0.0.0.0'
    networking:
      podSubnet: 10.1.0.0/16,fd01::/64
      serviceSubnet: 10.96.0.0/16,fd98::/108
    apiServer:
      extraArgs:
        encryption-provider-config: /etc/kubernetes/enc/enc.yaml
      certSANs:
        - 'k8s.harrytang.com'
      extraVolumes:
        - name: enc
          hostPath: /etc/kubernetes/enc
          mountPath: /etc/kubernetes/enc
          pathType: DirectoryOrCreate
          readOnly: true
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.68.51
      bindPort: 6443
    nodeRegistration:
      kubeletExtraArgs:
        node-ip: 192.168.68.51
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    serverTLSBootstrap: true
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    metricsBindAddress: '0.0.0.0:10249'    
    

    Then, run:

    sudo kubeadm config images pull
    sudo kubeadm init phase preflight
    sudo kubeadm init --config=init-config.yaml --upload-certs
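
    kubeadm prints the join commands and the certificate key at the end of init; save them for step 9. To run kubectl as a regular user, copy the admin kubeconfig as the init output suggests:

    mkdir -p "$HOME/.kube"
    sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
    sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"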
    

    Next, deploy Calico networking for the cluster:

    kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/tigera-operator.yaml
    

    Create the calico.yaml:

    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default
    spec:
      # Configures Calico networking.
      calicoNetwork:
        # Note: The ipPools section cannot be modified post-install.
        ipPools:
          - blockSize: 26
            cidr: 10.1.0.0/16
            encapsulation: VXLAN # VXLAN works reliably on typical home networks
            natOutgoing: Enabled
            nodeSelector: all()
          - blockSize: 122
            cidr: fd01::/64
            encapsulation: None
            natOutgoing: Enabled
            nodeSelector: all()
    
    ---
    # This section configures the Calico API server.
    # For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
    apiVersion: operator.tigera.io/v1
    kind: APIServer
    metadata:
      name: default
    spec: {}    
    

    Then, apply it:

    kubectl create -f calico.yaml
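
    The Tigera operator reports rollout progress through TigeraStatus resources, so you can watch until everything shows AVAILABLE:

    watch kubectl get tigerastatus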
    

    Finally, reboot the Pi so Calico can configure the network properly:

    sudo reboot
    
    # Check that all pods are up and running
    kubectl get pods -A
    NAMESPACE         NAME                                       READY   STATUS    RESTARTS   AGE
    calico-system     calico-kube-controllers-598654fc5f-2tj82   1/1     Running   2          4m39s
    calico-system     calico-node-zmq2r                          1/1     Running   1          4m39s
    calico-system     calico-typha-7d99f69485-8qjr2              1/1     Running   1          4m40s
    calico-system     csi-node-driver-xfngl                      2/2     Running   2          4m39s
    kube-system       coredns-7c65d6cfc9-756vf                   1/1     Running   1          15m
    kube-system       coredns-7c65d6cfc9-p8rc2                   1/1     Running   1          15m
    kube-system       etcd-pi1                                   1/1     Running   1          15m
    kube-system       kube-apiserver-pi1                         1/1     Running   1          15m
    kube-system       kube-controller-manager-pi1                1/1     Running   1          15m
    kube-system       kube-proxy-6dnbm                           1/1     Running   1          15m
    kube-system       kube-scheduler-pi1                         1/1     Running   1          15m
    tigera-operator   tigera-operator-89c775547-xbf9j            1/1     Running   1          5m45s    
    
  9. Join the other two Pi devices as control-plane nodes to form a high-availability cluster:

    Create the cp-config.yaml file:

    apiVersion: kubeadm.k8s.io/v1beta4
    kind: JoinConfiguration
    caCertPath: /etc/kubernetes/pki/ca.crt
    controlPlane:
      certificateKey: q1w2e3r4t5y6u7i8o9p0 # replace this
      localAPIEndpoint:
        advertiseAddress: 192.168.68.52 #  192.168.68.53 for the 3rd device
        bindPort: 6443
    discovery:
      bootstrapToken:
        apiServerEndpoint: k8s.harrytang.com:6443
        caCertHashes:
          - sha256:0p9o8i7u6y5t4r3e2w1q # replace this
        token: werwerw.qweertytyudfgsdf # replace this
      tlsBootstrapToken: werwerw.qweertytyudfgsdf # replace this
    nodeRegistration:
      kubeletExtraArgs:
        - name: node-ip
          value: 192.168.68.52 #  192.168.68.53 for the 3rd device
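
    If you no longer have the values printed by kubeadm init, you can regenerate them on the first Pi with standard kubeadm commands:

    # prints a fresh bootstrap token together with the CA cert hash
    sudo kubeadm token create --print-join-command
    # re-uploads the control-plane certificates and prints a new certificate key
    sudo kubeadm init phase upload-certs --upload-certs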
    

    And run the join command:

    sudo kubeadm join --config=cp-config.yaml
    
  10. Update the DNS for the K8s API endpoint by adding A records for the remaining two IP addresses:

    k8s.harrytang.com.      300     IN      A       192.168.68.51
    k8s.harrytang.com.      300     IN      A       192.168.68.52
    k8s.harrytang.com.      300     IN      A       192.168.68.53
    
  11. Verify the nodes, then remove the control-plane taint so they can also accept regular pods:

    kubectl get nodes
    NAME   STATUS     ROLES           AGE     VERSION
    pi1    Ready      control-plane   4h21m   v1.31.2
    pi2    Ready      control-plane   3h45m   v1.31.2
    pi3    Ready      control-plane   3h51m   v1.31.2    
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-
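
    Because the init config sets serverTLSBootstrap: true, each kubelet's serving certificate must be approved by hand; check for pending CSRs after every node has joined:

    kubectl get csr
    kubectl certificate approve <csr-name> # approve each pending kubelet-serving CSR

    As a final smoke test, you can spread a few replicas across the cluster, for example:

    kubectl create deployment hello --image=nginx --replicas=3
    kubectl get pods -o wide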
    

Conclusion

Well done on setting up a highly available Kubernetes (K8s) cluster for your home lab! You're now ready to deploy any applications you want on it.
