Quickstart · 6 min read

Deploy Your First Kubernetes Cluster in 30 Minutes with kubeadm

SFEIR Institute

Key Takeaways

  • kubeadm deploys a Kubernetes cluster in 30 minutes with 6 main commands
  • Prerequisites: 2 Linux machines, 2 CPUs and 2 GB RAM minimum per node

Want to deploy your first Kubernetes cluster with kubeadm quickly and efficiently? This quickstart guides you step by step, from machine preparation to your first deployed application.

With this tutorial, kubeadm cluster deployment becomes accessible to any backend developer or system administrator.

TL;DR: kubeadm is the official tool for bootstrapping a Kubernetes cluster in under 30 minutes. Prerequisites: 2 Linux machines (2 CPU, 2 GB RAM minimum), root access, and network connection. You'll execute 6 main commands to get a working cluster.
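
At a glance, the six main commands map onto the six steps detailed below (abbreviated here; the full flags and context are given in each step):

```shell
# Overview only -- run each command in the step where it is explained
sudo apt-get install -y containerd                  # Step 1: container runtime
sudo apt-get install -y kubelet kubeadm kubectl     # Step 2: Kubernetes tooling
sudo kubeadm init --pod-network-cidr=10.244.0.0/16  # Step 3: control plane
kubectl apply -f kube-flannel.yml                   # Step 4: CNI network plugin
sudo kubeadm join <CONTROL_PLANE_IP>:6443 ...       # Step 5: on each worker
kubectl create deployment nginx --image=nginx:1.27  # Step 6: first application
```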

This skill is at the heart of the LFS458 Kubernetes Administration training.

Why kubeadm for Your First Cluster?

kubeadm is the installation tool recommended by the Kubernetes community. It automates control plane configuration while following security best practices. According to the CNCF Annual Survey 2025, 82% of container users run Kubernetes in production.

kubeadm automatically manages:

  • TLS certificate generation
  • etcd configuration
  • Control plane component deployment
  • Admin kubeconfig file creation

To understand these components, check the Kubernetes control plane architecture.
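
As an example of that automation, once the control plane is initialized (Step 3) you can list the TLS certificates kubeadm generated and their expiry dates:

```shell
# On the control plane, after kubeadm init:
# shows each generated certificate, its expiry date, and the signing CA
sudo kubeadm certs check-expiration
```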

Remember: kubeadm is the standard for manual Kubernetes installations. It also prepares you for the CKA certification where this skill is evaluated.

Hardware and Software Prerequisites

Minimum Required Configuration

Role            CPU       RAM     Disk     OS
Control plane   2 cores   2 GB    20 GB    Ubuntu 22.04+ / Debian 12+
Worker node     2 cores   2 GB    20 GB    Ubuntu 22.04+ / Debian 12+

Preliminary Checks

Run these commands on all machines:

# Check system configuration
cat /etc/os-release
nproc
free -h
hostname

Disable swap (required for Kubernetes):

# Turn off swap immediately...
sudo swapoff -a
# ...and comment out swap entries in fstab so it stays off after reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab

Configure required kernel modules:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system
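
You can confirm the modules are loaded and the sysctl values took effect before moving on; each sysctl should report a value of 1:

```shell
# Modules should appear in the loaded-module list
lsmod | grep -E 'overlay|br_netfilter'

# Each of these should print "= 1"
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```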

For a detailed guide on multi-node installations, see the complete kubeadm installation guide.

Step 1: Install containerd

containerd is the container runtime recommended for Kubernetes since version 1.24, when built-in Docker Engine support (dockershim) was removed.

# Install containerd
sudo apt-get update
sudo apt-get install -y containerd

# Create default configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# Enable systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd

Verify installation:

sudo systemctl status containerd
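
You can also confirm that the systemd cgroup driver edit from the sed command above actually took effect:

```shell
# Should print: SystemdCgroup = true
grep SystemdCgroup /etc/containerd/config.toml
```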

Step 2: Install kubeadm, kubelet, and kubectl

These three tools form the foundation of any Kubernetes cluster. kubelet is the agent that runs on each node.

# Add Kubernetes repository (version 1.32)
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# The keyrings directory may not exist on some releases
sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install packages
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Remember: The apt-mark hold command prevents automatic updates that could break your cluster.

Step 3: Initialize the Control Plane

On the machine designated as control plane, run kubeadm init:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<CONTROL_PLANE_IP>

Replace <CONTROL_PLANE_IP> with the control plane machine's IP address. The --pod-network-cidr parameter defines the pod network range; 10.244.0.0/16 matches Flannel's default configuration.

The command generates output containing:

  1. Instructions for configuring kubectl
  2. The kubeadm join command for adding workers

Configure kubectl for the current user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify the control plane responds:

kubectl get nodes
kubectl cluster-info
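
If you want to experiment before any workers join, you can optionally remove the control-plane taint so regular pods may schedule on this node (for single-node testing only; skip this for a normal multi-node setup):

```shell
# Optional: allow workload pods on the control-plane node
# The trailing "-" removes the taint
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```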

To learn more about network configuration, see Configure a Kubernetes cluster network.

Step 4: Install the CNI Network Plugin

Kubernetes requires a CNI plugin for pod communication. Flannel is the simplest to start with.

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Wait for system pods to be ready:

kubectl get pods -n kube-system -w

All pods must show Running before continuing.
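
Instead of watching interactively, you can block until the Flannel pods report Ready. Recent Flannel releases install into the kube-flannel namespace; adjust the namespace if your version differs:

```shell
# Wait up to 2 minutes for all Flannel pods to become Ready
kubectl wait --for=condition=Ready pods --all -n kube-flannel --timeout=120s
```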

Step 5: Join Worker Nodes

On each worker machine, run the kubeadm join command generated in step 3. It looks like:

sudo kubeadm join <CONTROL_PLANE_IP>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

If you lost the token, regenerate it from the control plane:

sudo kubeadm token create --print-join-command

Verify worker addition from the control plane:

kubectl get nodes

All nodes should show Ready after a minute or two, once the CNI plugin has started on each new node.

Remember: The token expires after 24 hours by default. For production use, configure permanent tokens or use certificate authentication.
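
For lab environments you can create a token with a longer, or even unlimited, lifetime using the --ttl flag; avoid non-expiring tokens outside of throwaway test clusters:

```shell
# Token valid for 48 hours instead of the default 24
sudo kubeadm token create --ttl 48h --print-join-command

# Non-expiring token (lab use only, never in production)
sudo kubeadm token create --ttl 0 --print-join-command
```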

Step 6: Deploy Your First Application

Validate your cluster with an nginx deployment:

# Create a deployment
kubectl create deployment nginx --image=nginx:1.27 --replicas=2

# Expose the service
kubectl expose deployment nginx --port=80 --type=NodePort

# Verify the deployment
kubectl get pods -o wide
kubectl get services

Test application access:

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://<WORKER_IP>:$NODE_PORT

You should see the nginx welcome page.
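
When you are done testing, you can remove the demo resources:

```shell
# Clean up the test service and deployment
kubectl delete service nginx
kubectl delete deployment nginx
```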

Quick Troubleshooting

Symptom                  Probable Cause            Solution
NotReady on a node       CNI not installed         Check Flannel pods
Pods stuck in Pending    Insufficient resources    Add a worker or increase resources
Token expired error      Token older than 24 h     Regenerate with kubeadm token create
kubelet won't start      Swap enabled              Disable swap

For more solutions, see the Kubernetes cluster administration FAQ.

"Don't let your knowledge remain theoretical - set up a real Kubernetes environment to solidify your skills."
- TealHQ Kubernetes DevOps Guide

What's Next After This First Cluster?

Your cluster is running. According to the Spectro Cloud 2025 report, 80% of organizations run Kubernetes in production with an average of 20+ clusters. Here are the recommended next steps:

  1. Monitoring: Install Prometheus and Grafana to monitor your cluster. See the Kubernetes monitoring tools comparison 2025.
  2. High availability: Configure multiple control planes with distributed etcd.
  3. Security: Apply Network Policies and configure RBAC. The LFS460 Kubernetes Security training covers these aspects.
  4. Certification: The LFS458 Kubernetes Administration training prepares you for the CKA certification that validates these skills.

Also explore the cluster administration fundamentals and the complete Kubernetes Training guide.

For beginners wanting to understand concepts before installation, the Kubernetes fundamentals training offers a one-day introduction.

Take Action

You've deployed your first working Kubernetes cluster. As The Enterprisers Project states: "Anybody can learn Kubernetes. With abundant documentation and development tools available online, teaching yourself Kubernetes is very much within reach."

To master production cluster administration and prepare for the CKA certification:

Contact our training advisors to identify the path suited to your goals.