Key Takeaways
- ✓ As of Kubernetes v1.31, kubeadm remains the recommended tool for self-hosted production deployments
- ✓ Minimum configuration: 3 Ubuntu VMs with 2 CPU and 4 GB RAM per node; estimated duration: 45-60 minutes
This complete guide walks you through installing a multi-node Kubernetes cluster with kubeadm. The procedure covers Ubuntu 22.04/24.04 and uses containerd as the container runtime. According to the official Kubernetes v1.31 documentation, kubeadm remains the recommended tool for self-hosted production deployments.
TL;DR: This guide details the complete installation of a 3-node Kubernetes cluster (1 control plane + 2 workers) with kubeadm, containerd, and Calico CNI. Estimated time: 45-60 minutes. Prerequisites: 3 Ubuntu VMs with 2 CPU and 4GB RAM minimum.
Multi-node Kubernetes cluster configuration is at the core of the LFS458 Kubernetes Administration training.
What are the prerequisites for installing a Kubernetes cluster with kubeadm?
Before starting, verify that your infrastructure meets minimum requirements.
Hardware configuration
| Component | Control Plane | Worker Node |
|---|---|---|
| CPU | 2 cores minimum | 1 core minimum |
| RAM | 4 GB minimum | 2 GB minimum |
| Disk | 50 GB SSD | 30 GB SSD |
| Network | Connectivity between nodes | Connectivity between nodes |
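You can quickly check a node against these minimums with standard tools:

```shell
# Check a node against the minimum requirements above
nproc        # number of CPU cores
free -h      # total and available RAM
df -h /      # disk space on the root filesystem
```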
Network configuration
# Verify connectivity between nodes
ping -c 3 <control-plane-ip>
ping -c 3 <worker1-ip>
ping -c 3 <worker2-ip>
The following ports must be open:
Control Plane:
- 6443: API server
- 2379-2380: etcd
- 10250: kubelet
- 10259: scheduler
- 10257: controller-manager
Workers:
- 10250: kubelet
- 30000-32767: NodePort Services
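Before adjusting firewall rules, you can probe a port from another node; `nc` (netcat) is a simple way, assuming it is installed. The ufw commands below assume ufw is the active firewall on your hosts:

```shell
# From a worker, check that the API server port is reachable on the control plane
# (replace <control-plane-ip> with your address)
nc -zv <control-plane-ip> 6443

# If ufw is active, open the control plane ports:
sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10257/tcp
sudo ufw allow 10259/tcp
```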
Key takeaway: Minimum requirements are for testing. In production, double these values and use fast SSD disks for etcd.
How to prepare nodes for installing a Kubernetes cluster with kubeadm?
This section covers preparing each node. Execute these commands on ALL nodes (control plane and workers).
Disable swap
By default, the kubelet refuses to start while swap is enabled, so disable it:
# Disable swap immediately
sudo swapoff -a
# Disable swap on reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
# Verify
free -h
Configure kernel modules
# Load required modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Verify loading
lsmod | grep br_netfilter
lsmod | grep overlay
Configure sysctl parameters
# Configure forwarding and iptables
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply parameters
sudo sysctl --system
# Verify
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.ipv4.ip_forward
Configure hostname
Each node must have a unique hostname:
# On control plane
sudo hostnamectl set-hostname control-plane
# On worker 1
sudo hostnamectl set-hostname worker-1
# On worker 2
sudo hostnamectl set-hostname worker-2
Add entries to /etc/hosts on all nodes:
cat <<EOF | sudo tee -a /etc/hosts
192.168.1.10 control-plane
192.168.1.11 worker-1
192.168.1.12 worker-2
EOF
How to install containerd as runtime?
containerd is the recommended container runtime for Kubernetes. Execute on ALL nodes.
containerd installation
# Install dependencies
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
# Add Docker repository (contains containerd)
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install containerd
sudo apt-get update
sudo apt-get install -y containerd.io
Configure containerd
# Generate default configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# Enable SystemdCgroup
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
# Verify status
sudo systemctl status containerd
Key takeaway: The SystemdCgroup = true option is critical. Without it, pods encounter stability issues with cgroups v2.
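To confirm that your nodes actually run cgroups v2 (the default on recent Ubuntu releases), you can inspect the cgroup filesystem type:

```shell
# Prints "cgroup2fs" on a cgroups v2 system, "tmpfs" on cgroups v1
stat -fc %T /sys/fs/cgroup
```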
How to install kubeadm, kubelet and kubectl?
Kubernetes tools are installed from the official repository. Execute on ALL nodes.
Add Kubernetes repository
# Install dependencies
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# Add GPG key
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Install packages
# Install kubeadm, kubelet, kubectl
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# Prevent automatic updates
sudo apt-mark hold kubelet kubeadm kubectl
# Verify versions
kubeadm version
kubectl version --client
kubelet --version
How to initialize the control plane?
Control plane initialization is done only on the control-plane node.
Execute kubeadm init
# Initialize cluster
sudo kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--kubernetes-version=v1.31.0 \
--control-plane-endpoint=control-plane:6443
# Keep the displayed kubeadm join command!
The --pod-network-cidr value must not overlap your node network and must match the configuration of the CNI you will install: 10.244.0.0/16 is Flannel's default, and recent Calico releases detect the cluster's pod CIDR automatically when their IP pool is left unset.
Configure kubectl
# Configure kubectl access for current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Verify connection
kubectl cluster-info
kubectl get nodes
The control-plane node appears with NotReady status: this is expected, as no CNI is installed yet.
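Before installing the CNI, you can confirm the pod CIDR that kubeadm recorded; it must match what your CNI expects:

```shell
# Show the pod subnet stored in the kubeadm cluster configuration
kubectl -n kube-system get configmap kubeadm-config -o yaml | grep podSubnet
```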
Install CNI (Calico)
# Install Calico
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
# Verify installation
kubectl get pods -n kube-system | grep calico
kubectl get nodes
After a few minutes, the control plane transitions to Ready status.
Key takeaway: CNI is essential for pod communication. Without CNI, no application pod can start correctly.
How to join workers to the cluster?
Adding workers uses the command generated during init. Execute on each worker node.
Retrieve join command
If you lost the command, regenerate it from control plane:
# On control plane
kubeadm token create --print-join-command
Join workers
# On worker-1 and worker-2
sudo kubeadm join control-plane:6443 \
--token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
Verify cluster
From control plane:
kubectl get nodes -o wide
Expected output:
NAME            STATUS   ROLES           AGE   VERSION   INTERNAL-IP    OS-IMAGE
control-plane   Ready    control-plane   15m   v1.31.0   192.168.1.10   Ubuntu 22.04
worker-1        Ready    <none>          5m    v1.31.0   192.168.1.11   Ubuntu 22.04
worker-2        Ready    <none>          4m    v1.31.0   192.168.1.12   Ubuntu 22.04
How to validate cluster installation?
Verify that all components work correctly.
Verify system pods
kubectl get pods -n kube-system
All pods must be in Running status:
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-xxx 1/1 Running 0 5m
calico-node-xxx 1/1 Running 0 5m
coredns-xxx 1/1 Running 0 10m
etcd-control-plane 1/1 Running 0 10m
kube-apiserver-control-plane 1/1 Running 0 10m
kube-controller-manager-control-plane 1/1 Running 0 10m
kube-proxy-xxx 1/1 Running 0 10m
kube-scheduler-control-plane 1/1 Running 0 10m
Deploy a test application
# Create deployment
kubectl create deployment nginx --image=nginx:1.27
# Expose service
kubectl expose deployment nginx --port=80 --type=NodePort
# Verify
kubectl get pods -o wide
kubectl get svc nginx
Test connectivity
# Get NodePort
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
# Test from outside
curl http://worker-1:$NODE_PORT
curl http://worker-2:$NODE_PORT
Key takeaway: An end-to-end test (deployment + service + external access) validates that your cluster is operational.
How to resolve common problems?
Here are frequent errors and their solutions.
Error: Node NotReady
# Diagnosis
kubectl describe node <node-name>
journalctl -u kubelet -f
# Common causes:
# - CNI not installed
# - kubelet crashed (restart it: sudo systemctl restart kubelet)
# - Network problem between nodes
Error: Pods in Pending
# Check events
kubectl describe pod <pod-name>
# Common causes:
# - Not enough resources
# - Taints on nodes
# - Unsatisfied affinity
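For resource and taint issues, the following commands help pinpoint the cause (adjust node names to your cluster):

```shell
# Show taints and allocatable resources per node
kubectl describe nodes | grep -A 3 Taints
kubectl describe nodes | grep -A 6 Allocatable

# Lab environments only: allow scheduling on the control plane
# by removing the default control-plane taint
kubectl taint nodes control-plane node-role.kubernetes.io/control-plane:NoSchedule-
```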
Error: kubeadm init fails
# Reset and retry
sudo kubeadm reset
sudo rm -rf /etc/cni/net.d
sudo iptables -F && sudo iptables -t nat -F
For more solutions, consult the page Resolve the 10 most common cluster problems.
What are the best practices for a production cluster?
This guide covers a basic installation. For production, add:
High availability
- 3 control plane nodes minimum
- Load balancer in front of API server
- etcd in HA mode (stacked or external)
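With a load balancer in front of the API server, the init command changes only slightly. A sketch, assuming the LB answers on lb.example.com:6443 (hypothetical name):

```shell
# Initialize the first control-plane node behind a load balancer;
# --upload-certs lets additional control-plane nodes fetch shared certificates
sudo kubeadm init \
  --control-plane-endpoint "lb.example.com:6443" \
  --upload-certs \
  --pod-network-cidr=10.244.0.0/16

# Additional control-plane nodes then join with the --control-plane flag
# (the exact join command, including --certificate-key, is printed by init)
```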
Security
- Network Policies to segment traffic
- Pod Security Admission to restrict pods
- Finely configured RBAC
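Calico enforces standard Kubernetes NetworkPolicies. As an illustration (a minimal sketch, not a full policy set), a default-deny ingress policy for the default namespace:

```shell
# Deny all ingress traffic in the "default" namespace;
# traffic must then be explicitly allowed by other policies
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
```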
Monitoring
- Prometheus for metrics
- Grafana for visualization
- Alertmanager for alerts
Consult our regional pages for in-person training:
- Kubernetes Administration Training in Toulouse
- Kubernetes Administration Training in Luxembourg
- Kubernetes Administration Training in Brussels
Backup
# Daily etcd backup at 02:00 (root crontab entry)
0 2 * * * /usr/local/bin/etcd-backup.sh /backups/etcd-$(date +\%Y\%m\%d).db
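The referenced script is not shown in this guide; a minimal sketch of what /usr/local/bin/etcd-backup.sh could look like for a stacked etcd, using etcdctl with kubeadm's default certificate paths (adjust to your setup):

```shell
#!/usr/bin/env bash
# Sketch: snapshot etcd on a kubeadm control-plane node (stacked etcd).
# Certificate paths below are kubeadm defaults; adjust if yours differ.
set -euo pipefail

BACKUP_FILE="${1:?usage: etcd-backup.sh <output-file>}"

ETCDCTL_API=3 etcdctl snapshot save "$BACKUP_FILE" \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot (etcdutl replaces "etcdctl snapshot status" in etcd >= 3.5)
etcdutl snapshot status "$BACKUP_FILE"
```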
Take action: deepen your skills
You've installed your first multi-node cluster. Here are the next steps.
Continue your learning
- Kubernetes cluster administration: all guides and resources
- kubectl cheatsheet: essential commands (quick reference)
Prepare for CKA certification
The LFS458 Kubernetes Administration training covers these skills and more:
- Advanced installation with kubeadm
- etcd management and high availability
- Networking and troubleshooting
Discover the LFS458 Kubernetes Administration training and check upcoming dates on the training calendar.
Recommended paths based on your profile:
- Beginner: Kubernetes fundamentals (1 day)
- Administrator: LFS458 Kubernetes Administration (4 days, CKA preparation)
- Developer: LFD459 Kubernetes for developers (3 days, CKAD preparation)
- Security: LFS460 Kubernetes Security (4 days, CKS preparation)
For personalized advice or an in-company session, contact our advisors.