Key Takeaways
- ✓ 96% of organizations use or evaluate Kubernetes, compared to 24% for Docker Swarm
- ✓ Migration in 5 phases over 2 to 6 months depending on existing system complexity
- ✓ Costly errors occur during the preparation phase, not during deployment
Migrating to Kubernetes from Docker Compose, VMs or monoliths represents a strategic investment: 82% of container users run Kubernetes in production, compared to 66% in 2023. This migration is no longer optional for teams targeting scale.
TL;DR: Kubernetes migration follows a 5-phase path: audit existing systems, containerization, manifest adaptation, progressive deployment, validation. Plan for 2 to 6 months depending on complexity. The most costly errors occur during the preparation phase, not during deployment.
To master the Kubernetes administration required for a successful migration, discover the LFS458 Kubernetes Administration training.
Why migrate to Kubernetes in 2026?
Kubernetes has become the de facto standard. 96% of organizations use or are evaluating Kubernetes, while Docker Swarm plateaus at 24% adoption. This disparity is explained by fundamental capability differences.
Key takeaway: Kubernetes scales to thousands of containers. Docker Swarm suits more modest workloads (PhoenixNAP).
The Kubernetes market will represent $8.41 billion by 2031, with 21.85% annual growth. Migration skills are becoming critical for Kubernetes production best practices.
Before/After: what concretely changes?
| Aspect | Docker Compose / VMs | Kubernetes |
|---|---|---|
| Scaling | Manual, per file | Automatic via HPA/VPA |
| High availability | External configuration | Native (ReplicaSets) |
| Service Discovery | Basic or manual DNS | Integrated DNS + Services |
| Rolling Updates | Stop/restart | Native zero-downtime |
| Secrets | Environment variables | Encrypted Secret objects |
| Monitoring | External tools | Integrated metrics |
Docker Compose starts with a single command (docker-compose up). Kubernetes requires a more involved multi-step installation, but offers far richer orchestration capabilities.
For teams coming from traditional VMs, the paradigm shift is deeper: moving from an imperative model (provisioning scripts) to a declarative model (YAML manifests). See the guide Kubernetes vs Docker: understanding essential differences to explore this distinction further.
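To make the declarative model concrete, here is a minimal sketch of the same intent — "run a replicated web tier" — expressed as a manifest rather than a provisioning script. The names (`web`, `nginx:1.27`) are illustrative:

```yaml
# Declarative sketch: you state the desired end state,
# and Kubernetes continuously reconciles toward it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web          # illustrative name
spec:
  replicas: 2        # desired state, not a one-shot command
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.27   # illustrative image
```

With an imperative script, a crashed process stays crashed until someone reruns the script; with this manifest, the control plane restores the declared replica count automatically.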
What technical prerequisites before starting?
Minimum infrastructure
Configure a development cluster before any production migration:
```bash
# Option 1: Kind (Kubernetes in Docker)
kind create cluster --name migration-test

# Option 2: Minikube
minikube start --cpus=4 --memory=8192

# Verification
kubectl cluster-info
kubectl get nodes
```
Required team skills
| Role | Minimum skills | Recommended training |
|---|---|---|
| Ops/SRE | kubectl, YAML, networking | CKA (4 days) |
| Developers | Dockerfile, ConfigMaps | CKAD (3 days) |
| Security | RBAC, Network Policies | CKS (4 days) |
The CKA, CKAD, and CKS Kubernetes certifications provide a structured path for this learning.
Audit existing systems
Inventory your applications with this template:
| Application | Current type | Stateful? | Dependencies | Migration complexity |
|---|---|---|---|---|
| API Gateway | Docker Compose | No | Redis | Low |
| User Database | VM PostgreSQL | Yes | Storage | High |
| Legacy monolith | Bare VM | Yes | NFS, LDAP | Very high |
Key takeaway: Stateless applications migrate in days. Stateful applications with persistent storage require weeks of planning.
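To speed up this inventory for Docker Compose workloads, a rough heuristic is to flag services that declare volumes as stateful candidates. A sketch (the file and service names are illustrative):

```shell
# Hypothetical sample compose file for the audit
cat > compose-sample.yml <<'EOF'
services:
  api:
    image: myapp:1.0
  db:
    image: postgres:15
    volumes:
      - dbdata:/var/lib/postgresql/data
EOF

# Remember the current service name; print it when a volumes: key appears under it
awk '/^  [a-z]/ {svc=$1} /volumes:/ {print svc, "stateful"}' compose-sample.yml
# → db: stateful
```

This only catches declared volumes — applications writing to the container filesystem or mounting host paths still need a manual review.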
How to migrate from Docker Compose to Kubernetes?
Step 1: Optimize Docker images
Before migration, reduce your image sizes. Alpine images weigh ~3 MB compared to ~70 MB for Ubuntu.
```dockerfile
# BEFORE: ~800 MB
FROM node:18
COPY . /app
RUN npm install
CMD ["npm", "start"]
```

```dockerfile
# AFTER: ~25 MB with a multi-stage build
FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

FROM node:22-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
USER node
CMD ["npm", "start"]
```
Multi-stage builds reduce image size from 800 MB to 15-30 MB. The guide Optimize a Dockerfile for Kubernetes details these techniques.
Step 2: Convert docker-compose.yml to Kubernetes manifests
Use Kompose for initial conversion, then refine manually:
```bash
# Automatic conversion
kompose convert -f docker-compose.yml

# Generated files
ls *.yaml
# api-deployment.yaml
# api-service.yaml
# redis-deployment.yaml
# redis-service.yaml
```
Manual conversion example (recommended for production):
```yaml
# Original docker-compose.yml
services:
  api:
    image: myapp:1.0
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://db:5432
    depends_on:
      - db
```

```yaml
# Equivalent Kubernetes deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myapp:1.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: url
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 8080
      targetPort: 8080
  type: ClusterIP
```
Key takeaway: Never deploy without resources (limits/requests) or health probes. These elements are optional in Docker Compose but essential in Kubernetes.
Step 3: Manage ConfigMaps and Secrets
Extract environment variables into dedicated Kubernetes objects:
```bash
# Create a Secret from an .env file
kubectl create secret generic app-secrets \
  --from-env-file=.env.production

# Create a ConfigMap for non-sensitive configuration
kubectl create configmap app-config \
  --from-file=config.json
```
See the Docker and Kubernetes cheatsheet for complete commands.
How to migrate from VMs to Kubernetes?
Migration from VMs follows a different process that begins with containerization.
Phase 1: Containerize the application
Analyze your VM's system dependencies:
```bash
# List installed packages (Debian/Ubuntu)
dpkg --get-selections | grep -v deinstall

# List active services
systemctl list-units --type=service --state=running

# Identify open ports
ss -tlnp
```
Create a Dockerfile that replicates the environment:
```dockerfile
FROM ubuntu:22.04

# Replicate VM packages
RUN apt-get update && apt-get install -y \
    python3.10 \
    python3-pip \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

# Copy the application
COPY app/ /opt/app/
WORKDIR /opt/app

# Install Python dependencies
RUN pip3 install -r requirements.txt

# Non-root user is mandatory in production
RUN useradd -m appuser
USER appuser

EXPOSE 8000
CMD ["python3", "main.py"]
```
Phase 2: Manage persistent storage
VMs with local storage require PersistentVolumeClaims:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
spec:
  serviceName: database
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
        - name: postgres
          image: postgres:15
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
```
For complex architectures, the Kubernetes for Application Developers training covers stateful migration patterns.
How to decompose a monolith for Kubernetes?
Strangler Fig Pattern strategy
Migrate progressively, without a complete rewrite:
- Identify a loosely coupled module
- Extract this module as a microservice
- Route traffic via an API Gateway
- Repeat until the monolith is fully replaced
```yaml
# Example: API Gateway with Ingress (NGINX Ingress Controller)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          # New microservice
          - path: /users(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: users-service
                port:
                  number: 8080
          # Legacy monolith (fallback)
          # empty first group so $2 still resolves with the rewrite annotation
          - path: /()(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: monolith
                port:
                  number: 80
```
Key takeaway: 70% of organizations use Helm to package their deployments. Adopt Helm from the first extracted microservice.
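As an illustration of what that packaging buys you, a hypothetical values.yaml for the first extracted microservice centralizes the knobs that differ per environment (all names and values below are illustrative):

```yaml
# Hypothetical values.yaml for a users-service Helm chart
image:
  repository: users-service
  tag: "1.0.0"
replicaCount: 3
resources:
  requests:
    cpu: 100m
    memory: 128Mi
ingress:
  host: api.example.com
  path: /users
```

Each subsequent extracted service then reuses the same chart structure, with only its values file changing between dev, staging, and production.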
What rollback plan to prepare?
Document rollback procedures systematically:
Quick rollback (< 5 minutes)
```bash
# Revert to the previous Deployment version
kubectl rollout undo deployment/api

# Check revision history
kubectl rollout history deployment/api

# Revert to a specific revision
kubectl rollout undo deployment/api --to-revision=2
```
Complete rollback to the old system
| Step | Action | Owner | Estimated time |
|---|---|---|---|
| 1 | Switch DNS to the old system | Ops | 5 min |
| 2 | Restart VMs/Docker Compose | Ops | 10 min |
| 3 | Synchronize data (if necessary) | DBA | Variable |
| 4 | Validate operation | QA | 15 min |
| 5 | Communicate to users | PM | 5 min |
Keep the old system operational for 2 to 4 weeks after the migration completes. This overlap period gives you room to handle edge cases.
See the guide Docker and Kubernetes troubleshooting to diagnose common migration issues.
Post-migration validation checklist
Functional tests
```bash
# Verify all pods are Running
kubectl get pods -n production

# Test endpoints
curl -I https://api.example.com/health

# Check logs
kubectl logs -l app=api -n production --tail=100
```
Load tests
```bash
# Example with k6
k6 run --vus 100 --duration 5m load-test.js

# Verify autoscaling
kubectl get hpa -w
```
Security validation
| Control | Command | Expected result |
|---|---|---|
| RBAC | kubectl auth can-i --list | Minimal permissions |
| Network Policies | kubectl get networkpolicies | At least 1 policy per namespace |
| Secrets | kubectl get secrets | No secrets stored in cleartext |
| Pod Security | kubectl get ns --show-labels | pod-security.kubernetes.io/enforce set to baseline or restricted |

Note: PodSecurityPolicy (kubectl get psp) was removed in Kubernetes 1.25; use Pod Security Admission namespace labels instead.
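A common starting point for the Network Policies check is a default-deny policy per namespace, after which allowed traffic is whitelisted explicitly:

```yaml
# Default-deny ingress for every pod in the namespace;
# layer explicit allow-policies on top of it
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```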
The LFS460 Kubernetes Security Fundamentals training covers these post-migration security controls.
Metrics to monitor
- P99 Latency: should remain stable (±10% vs baseline)
- Error rate: <0.1% over 24h
- Resource utilization: requests aligned with actual usage
- Pod startup time: <30s for critical applications
What pitfalls to avoid during migration?
Common errors:
- Underestimating the learning curve: plan at least 3 months for skill ramp-up
- Ignoring Network Policies: by default, all pods communicate with each other
- Forgetting resource limits: a pod without limits can saturate an entire node
- Neglecting monitoring: instrument from day 1
For beginner teams, the Kubernetes fundamentals training offers a structured introduction before tackling migration.
Additional resources for your migration
Explore the Kubernetes Training hub to access all guides and best practices. The Containerization and Docker best practices section covers essential technical prerequisites.
Take action: train your team for Kubernetes migration
71% of Fortune 100 companies run Kubernetes in production. Your team must master these skills to succeed in cloud-native transformation.
SFEIR Institute offers certifying training courses adapted to each profile:
- LFS458 Kubernetes Administration (4 days): for Ops/SRE teams managing migration and cluster administration
- LFD459 Kubernetes for Developers (3 days): for developers containerizing and deploying their applications
- LFS460 Kubernetes Security (4 days): to validate post-migration security compliance
- Kubernetes Fundamentals (1 day): to discover Kubernetes before a migration project
Contact our advisors to define the training path suited to your migration project.