Kubernetes kube-proxy-free setup with Containerd 2.0 and Cilium

Introduction
Kube-proxy-free Kubernetes setups are gaining popularity because Cilium's eBPF-based service handling can replace iptables-based kube-proxy with better performance. In this article, I'll show you how I bootstrapped a Kubernetes cluster using kubeadm, Containerd 2.0, and Cilium networking, completely eliminating kube-proxy.
Prerequisites
At the time of writing, I used the following versions. Newer versions may also work.
- 4 Ubuntu 24.04 servers (ARM64 in my case; the download URLs below point to ARM64 binaries).
- Containerd v2.0.4.
- K8s v1.33.0.
- Cilium v1.17.3.
Step-by-step

- Upgrade the OS

sudo apt-get update && sudo apt upgrade -y
sudo apt-get install -y net-tools iputils-ping vim socat nfs-common cryptsetup apt-transport-https ca-certificates curl gpg
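
If your servers have swap enabled, kubeadm's preflight checks will complain later; a minimal way to disable it, assuming a standard /etc/fstab layout, is:

# Turn swap off now and comment out any swap entries so it stays off after reboot
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab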

- Clean up the iptables rules

sudo iptables -F
sudo iptables -X
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables-save | sudo tee /etc/iptables/rules.v4 > /dev/null
sudo iptables -L
sudo nft list ruleset
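
Persisting the rules to /etc/iptables/rules.v4 assumes the iptables-persistent package, which owns that directory and restores the file at boot, is present; if it isn't, install it first:

sudo apt-get install -y iptables-persistent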

- Load the required kernel modules and enable IP forwarding

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
overlay
dm_crypt
EOF
sudo modprobe br_netfilter
sudo modprobe overlay
sudo modprobe dm_crypt
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
EOF
sudo sysctl --system
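
To double-check that the modules are loaded and forwarding is enabled, these read-only commands should list all three modules and show both sysctls set to 1:

lsmod | grep -E 'br_netfilter|overlay|dm_crypt'
sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding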

- Create the crictl.yaml configuration

Create the crictl.yaml file so that crictl (the CRI CLI) talks to containerd's socket:

sudo vi /etc/crictl.yaml

Add the following content:

runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false

- Install containerd

wget https://github.com/containerd/containerd/releases/download/v2.0.4/containerd-2.0.4-linux-arm64.tar.gz
sudo tar Cxzvf /usr/local containerd-2.0.4-linux-arm64.tar.gz
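
These are the ARM64 binaries; on x86_64 hosts, swap arm64 for amd64 in the file names. A small sketch that picks the artifact automatically, assuming dpkg is available (as it is on Ubuntu):

# Map the machine architecture (amd64 or arm64) to the release artifact name
ARCH=$(dpkg --print-architecture)
wget https://github.com/containerd/containerd/releases/download/v2.0.4/containerd-2.0.4-linux-${ARCH}.tar.gz
sudo tar Cxzvf /usr/local containerd-2.0.4-linux-${ARCH}.tar.gz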

- Install runc

wget https://github.com/opencontainers/runc/releases/download/v1.2.6/runc.arm64
sudo install -m 755 runc.arm64 /usr/local/sbin/runc

- Install the CNI plugins

wget https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-arm64-v1.6.2.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-arm64-v1.6.2.tgz
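
As a quick sanity check before wiring everything into systemd, the three runtime components should all respond from their install locations:

containerd --version
runc --version
ls /opt/cni/bin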

- Enable the containerd service

wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
sudo mv containerd.service /etc/systemd/system/
sudo systemctl daemon-reexec
sudo systemctl enable --now containerd

- Configure containerd to use the systemd cgroup driver

Generate the default config file:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

Modify the config.toml file as follows (with containerd 2.x the generated config is version 3, so the runc options live under the io.containerd.cri.v1.runtime plugin):

[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
  [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
    SystemdCgroup = true

Restart containerd:

sudo systemctl restart containerd
sudo systemctl status containerd
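
To confirm the option was picked up, you can grep the merged configuration that containerd actually loads:

# Should print SystemdCgroup = true from the runc options section
sudo containerd config dump | grep SystemdCgroup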

- Install kubeadm, kubectl & kubelet

K8S_VERSION=v1.33
curl -fsSL https://pkgs.k8s.io/core:/stable:/$K8S_VERSION/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/$K8S_VERSION/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl enable --now kubelet
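
A quick check that the right versions landed and are held back from automatic upgrades:

kubeadm version -o short
kubectl version --client
kubelet --version
apt-mark showhold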

- Create the K8s cluster

Create the init-config.yaml file with the following content (modify it according to your needs):

apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: qwerty.q1w2e3r4t5y6u7i8
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
localAPIEndpoint:
  advertiseAddress: 10.4.0.47 # Control Plane IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  kubeletExtraArgs:
  - name: node-ip
    value: 10.4.0.47 # Control Plane IP
  name: cp1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs:
  - api.k8s.maxspell.net # Control Plane FQDN
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: api.k8s.maxspell.net:6443 # Control Plane FQDN
controllerManager:
  extraArgs:
  - name: bind-address
    value: 0.0.0.0
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
    - name: listen-metrics-urls
      value: http://0.0.0.0:2381
imageRepository: registry.k8s.io
kubernetesVersion: v1.33.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/16
  serviceSubnet: 172.31.0.0/16
proxy: {}
scheduler:
  extraArgs:
  - name: bind-address
    value: 0.0.0.0
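
Before running init, recent kubeadm releases can validate the file against the v1beta4 schema, which catches indentation and field-name mistakes early:

kubeadm config validate --config init-config.yaml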

Then, initialize the control-plane node:

sudo kubeadm init --config=init-config.yaml --upload-certs --skip-phases=addon/kube-proxy
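
Once init completes, it prints the commands for giving your regular user kubectl access; they look like this:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config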

- Join the worker nodes

Repeat all the steps above (everything up to, but not including, the kubeadm init) on each worker node, then run the join command printed at the end of the kubeadm init output.
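
The join command has the following shape; the token and CA hash below are placeholders, so copy the real values from your init output. If the token has already expired, generate a fresh join command on the control plane:

# Placeholders - use the exact command printed by kubeadm init
sudo kubeadm join api.k8s.maxspell.net:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# Run on the control plane to print a new, valid join command
kubeadm token create --print-join-command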

- Install Cilium

From the control-plane node, install the Helm CLI:

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Create the cilium.yaml values file for the Cilium installation (modify it according to your needs):

k8sServiceHost: api.k8s.maxspell.net
k8sServicePort: 6443
kubeProxyReplacement: true
ingressController:
  enabled: true
  default: true
  loadbalancerMode: shared
  enforceHttps: false
  enableProxyProtocol: false
  service:
    type: NodePort
    insecureNodePort: 32080
    secureNodePort: 32443
gatewayAPI:
  enabled: true
  hostNetwork:
    enabled: true
  enableProxyProtocol: false
nodePort:
  enabled: true
hostPort:
  enabled: true
externalIPs:
  enabled: true
ipam:
  operator:
    clusterPoolIPv4PodCIDRList:
    - 172.16.0.0/16
socketLB:
  enabled: true
  hostNamespaceOnly: false
bpf:
  masquerade: true
hubble:
  relay:
    enabled: true
  ui:
    enabled: true

Then, add the Cilium Helm repository and install Cilium:

helm repo add cilium https://helm.cilium.io/
helm repo update
helm upgrade --install cilium cilium/cilium --version 1.17.3 \
  --namespace kube-system \
  -f cilium.yaml
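
Cilium needs a minute or two to roll out; waiting on the agent DaemonSet and the operator Deployment avoids racing the verification step:

kubectl -n kube-system rollout status ds/cilium
kubectl -n kube-system rollout status deploy/cilium-operator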

- Verify the installation

Check if all the Cilium pods are running:

kubectl -n kube-system get pods
NAME                              READY   STATUS    RESTARTS   AGE
cilium-64ds2                      1/1     Running   0          3d11h
cilium-envoy-2g6vt                1/1     Running   0          3d11h
cilium-envoy-5k64m                1/1     Running   0          3d11h
cilium-envoy-chfvq                1/1     Running   0          3d11h
cilium-envoy-zr67x                1/1     Running   0          3d11h
cilium-g2pj6                      1/1     Running   0          3d11h
cilium-jk6kw                      1/1     Running   0          3d11h
cilium-operator-8d75b5c48-fwb4s   1/1     Running   0          3d11h
cilium-operator-8d75b5c48-jw8kh   1/1     Running   0          3d11h
cilium-pkxlz                      1/1     Running   0          3d11h
hubble-relay-589fbddbf-m2srs      1/1     Running   0          3d11h
hubble-ui-76d4965bb6-fphjc        2/2     Running   0          3d11h

Check if all nodes are ready:

kubectl get nodes
NAME    STATUS   ROLES           AGE   VERSION
cp1     Ready    control-plane   9m    v1.33.0
node1   Ready    <none>          5m    v1.33.0
node2   Ready    <none>          4m    v1.33.0
node3   Ready    <none>          3m    v1.33.0
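
To confirm the cluster really runs without kube-proxy, the kube-proxy DaemonSet should simply not exist, and the Cilium agent should report that it has taken over service handling. The second command assumes the agent image ships the cilium-dbg CLI, which is the case for recent Cilium releases:

# Expected to fail with NotFound, since the kube-proxy addon phase was skipped
kubectl -n kube-system get ds kube-proxy

# Should report KubeProxyReplacement: True
kubectl -n kube-system exec ds/cilium -- cilium-dbg status | grep KubeProxyReplacement
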
Conclusion
With this setup, you have a production-ready, kube-proxy-free Kubernetes cluster. Feel free to explore Cilium's advanced features, such as Hubble.