Deploying a Multi-Node RKE2 Cluster with Cilium, Rancher, and ArgoCD on Hetzner
How to deploy a production-ready RKE2 Kubernetes cluster with Cilium, Rancher, and ArgoCD on Hetzner

Deploying a production-grade Kubernetes cluster doesn’t have to be complicated.
In this guide, we deploy:
- RKE2 (Rancher Kubernetes Engine 2)
- Cilium CNI
- Hetzner Cloud Controller Manager
- Rancher with Let’s Encrypt
- ArgoCD GitOps
- Multi-node support (Control Plane + Workers)
Although the examples use Hetzner Cloud, this guide works on any provider with private networking (VPC).
Architecture Overview
We deploy:
- 1× RKE2 Server (Control Plane)
- N× RKE2 Agent Nodes (Workers)
- Cilium CNI
- Ingress (NGINX from RKE2)
- Cert-Manager + Rancher
- ArgoCD
Example internal network:
| Node | IP |
|---|---|
| master node | 10.0.0.3 |
| worker nodes | 10.0.0.X |
Base OS Preparation (All Nodes)
```bash
apt update
apt install nfs-common -y
apt upgrade -y
apt autoremove -y
```
Enable kernel modules for Cilium
```bash
sudo modprobe vxlan
sudo modprobe cls_bpf
sudo modprobe ip6_udp_tunnel
sudo modprobe udp_tunnel
```
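Note that modprobe only loads the modules for the current boot. If you want them loaded automatically after a reboot, a small sketch using systemd's modules-load.d mechanism (the file name cilium.conf is arbitrary):

```bash
# Persist the Cilium-related kernel modules across reboots
cat << EOF | sudo tee /etc/modules-load.d/cilium.conf
vxlan
cls_bpf
ip6_udp_tunnel
udp_tunnel
EOF
```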
⚠️ Warning: DNS Fix (Hetzner-specific but safe anywhere)
Hetzner injects extra resolvers that can break cluster networking, so disable systemd-resolved and set the resolvers explicitly. At the time of writing, Hetzner's recursive resolvers are 185.12.64.1 and 185.12.64.2.
Make sure to check them yourself:
cat /etc/resolv.conf
```bash
sudo systemctl stop systemd-resolved
sudo systemctl disable systemd-resolved
sudo unlink /etc/resolv.conf
sudo tee /etc/resolv.conf << EOF
nameserver 185.12.64.1
nameserver 185.12.64.2
search .
EOF
sudo chattr +i /etc/resolv.conf
```
Install RKE2 Server (Control Plane)
NOTE: Ensure that your server and agent nodes run the same RKE2 version!
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
Create config:
```bash
mkdir -p /etc/rancher/rke2/
cat << EOF > /etc/rancher/rke2/config.yaml
write-kubeconfig-mode: "0644"
node-name: "k8s-master"
node-ip: "10.0.0.3"
cloud-provider-name: external
cluster-cidr: 10.244.0.0/16
kube-proxy-arg: "metrics-bind-address=0.0.0.0"
disable:
- cloud-controller
- rke2-canal
tls-san:
- 10.0.0.3
- k8s-master
cni: cilium
token: password
EOF
```
Start RKE2:
systemctl enable --now rke2-server.service
Symlink kubectl
ln -s $(find /var/lib/rancher/rke2/data/ -name kubectl) /usr/local/bin/kubectl
Persist environment variables
echo "export KUBECONFIG=/etc/rancher/rke2/rke2.yaml PATH=$PATH:/usr/local/bin/:/var/lib/rancher/rke2/bin/" >> ~/.bashrcsource ~/.bashrc
Verify node
kubectl get node
Install Hetzner Cloud Controller Manager (Optional)
This step is optional, but the Hetzner Cloud Controller Manager is well worth having: it integrates the cluster with Hetzner Cloud (node metadata, private-network routes, LoadBalancers) and pairs nicely with the Hetzner CSI driver if you later want to provision Hetzner Volumes directly from Kubernetes. More details on how to create a Hetzner API token here: https://docs.hetzner.com/cloud/api/getting-started/generating-api-token/
Also make sure you have created a private network called vnet and that every VPS is attached to it; if you haven't yet, see the sketch below.
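A minimal sketch using the hcloud CLI, assuming it is installed and authenticated, and that a 10.0.0.0/16 network with a 10.0.0.0/24 subnet in the eu-central zone matches your setup (adjust names, ranges, zone, and server names as needed):

```bash
# Create the private network and a subnet for the nodes
hcloud network create --name vnet --ip-range 10.0.0.0/16
hcloud network add-subnet vnet --type cloud --network-zone eu-central --ip-range 10.0.0.0/24

# Attach each server to the network (repeat per node; server name is an example)
hcloud server attach-to-network k8s-master --network vnet --ip 10.0.0.3
```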
Create secret:
```bash
kubectl -n kube-system create secret generic hcloud \
  --from-literal=token=<your-hetzner-api-token> \
  --from-literal=network=vnet
```
Install CCM:
kubectl apply -f https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/latest/download/ccm-networks.yaml
Note: On a single-node cluster, one Cilium operator pod will remain Pending. This is expected; it will become Ready once you add more nodes.
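You can confirm that this is the only thing left pending (deployment name assumed to be cilium-operator, as deployed by the rke2-cilium chart):

```bash
# On a single node this typically shows 1/2 replicas ready, which is fine
kubectl -n kube-system get deploy cilium-operator
kubectl -n kube-system get pods -o wide | grep cilium
```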
Fix Ingress-NGINX Validation (RKE2-specific)
NOTE: This is a very important step. If you plan on using rke2-ingress-nginx like I do, you must apply this patch so that cert-manager can solve HTTP-01 challenges.
kubectl patch configmap rke2-ingress-nginx-controller -n kube-system --type merge -p '{"data":{"strict-validate-path-type":"false"}}'
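One caveat: RKE2 manages this chart, so a direct ConfigMap patch may be overwritten when the packaged chart upgrades. If you want the setting to persist, a HelmChartConfig is an alternative; a sketch, assuming the packaged chart is named rke2-ingress-nginx in kube-system:

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      config:
        strict-validate-path-type: "false"
```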
Install Helm, Cert-Manager & Rancher
Install Helm:
curl -#L https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Add repos:
```bash
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest --force-update
helm repo add jetstack https://charts.jetstack.io --force-update
```
Install Cert-Manager
```bash
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true
```
Install Rancher (Let's Encrypt enabled)
Make sure to change hostname and email
```bash
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --create-namespace \
  --set hostname=rancher.example.com \
  --set bootstrapPassword=password \
  --set replicas=1 \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=youremail@example.com \
  --set ingress.ingressClassName=nginx
```
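Before moving on, it's worth checking that Rancher rolled out and its Let's Encrypt certificate was issued (this assumes DNS for rancher.example.com already points at your node's public IP):

```bash
# Wait for the Rancher deployment to become available
kubectl -n cattle-system rollout status deploy/rancher

# The ingress and its cert-manager Certificate should show up here
kubectl -n cattle-system get ingress,certificate
```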
Install ArgoCD (with HTTPS)
Add repo:
```bash
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
```
Create a ClusterIssuer:
```bash
kubectl create -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-http
spec:
  acme:
    email: youremail@example.com
    privateKeySecretRef:
      name: letsencrypt-http-issuer-secret
    profile: tlsserver
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          ingressClassName: nginx
          serviceType: ClusterIP
EOF
```
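You can check that the issuer registered successfully with the ACME server:

```bash
# READY should be True once registration with Let's Encrypt succeeds
kubectl get clusterissuer letsencrypt-http -o wide
```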
Install:
```bash
helm upgrade argocd argo/argo-cd \
  --namespace argocd \
  --create-namespace \
  --install \
  --set configs.params.server\.insecure=true \
  --set server.ingress.enabled=true \
  --set server.ingress.ingressClassName=nginx \
  --set server.ingress.hostname=argocd.example.com \
  --set server.ingress.annotations."cert-manager\.io/cluster-issuer"=letsencrypt-http \
  --set server.ingress.annotations."nginx\.ingress\.kubernetes\.io/ssl-passthrough"=true \
  --set server.ingress.annotations."nginx\.ingress\.kubernetes\.io/backend-protocol"=HTTPS \
  --set server.ingress.tls=true
```
Check pods:
kubectl get pods -n argocd
Get initial ArgoCD password:
kubectl exec -n argocd $(kubectl get pod -n argocd -l app.kubernetes.io/name=argocd-server -o name) -- argocd admin initial-password
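From here, applications are managed declaratively. As an illustration only (the repository URL, path, and target namespace below are placeholders, not part of this setup), a minimal ArgoCD Application looks like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-gitops-repo.git
    targetRevision: main
    path: manifests/demo-app
  destination:
    server: https://kubernetes.default.svc
    namespace: demo-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```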
Optional: DockerHub Pull Secret
```bash
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email> \
  -n example-namespace
```
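To actually use the secret, reference it from your workloads (or attach it to the namespace's default ServiceAccount). A minimal sketch with a hypothetical Deployment pulling a private image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: private-app
  namespace: example-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: private-app
  template:
    metadata:
      labels:
        app: private-app
    spec:
      # Tell the kubelet to use the DockerHub credentials created above
      imagePullSecrets:
        - name: regcred
      containers:
        - name: app
          image: <your-username>/<your-private-image>:latest
```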
Install RKE2 Agent Nodes (Workers)
Prepare OS (same steps as master server), then:
```bash
export RANCHER1_IP=10.0.0.3
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent sh -
```
Create agent config:
Change node-ip to the private IP of the agent VPS:
```bash
mkdir -p /etc/rancher/rke2/
cat << EOF >> /etc/rancher/rke2/config.yaml
server: https://$RANCHER1_IP:9345
token: password
node-ip: 10.0.0.6
kube-apiserver-arg:
- kubelet-preferred-address-types=InternalIP
EOF
```
Start agent:
systemctl enable --now rke2-agent.service
Verify nodes
kubectl get nodes
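By default, agent nodes show <none> in the ROLES column. If you like, you can label them; this is purely cosmetic, and the node name below is an example:

```bash
# Give the worker a role label so it shows up nicely in kubectl get nodes
kubectl label node k8s-worker-1 node-role.kubernetes.io/worker=true
```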
🎉 All Done!
You now have a fully operational production-ready Kubernetes cluster powered by:
- 💠 RKE2
- 🧠 Cilium
- ☁️ Hetzner CCM
- 🐮 Rancher
- 🔁 ArgoCD
This setup works across any cloud provider with private networking.
Remember to replace all placeholder tokens, passwords, and IPs with your own values.
Enjoy your new cluster! 🚀