Migration · 8 min read

Move from Self-Hosted Kubernetes to Managed Cloud Service

SFEIR Institute

Key Takeaways

  • Managed Kubernetes reduces control plane maintenance time by 60%
  • Successful migration in 4 to 8 weeks with progressive workload approach
  • 82% of container users run Kubernetes in production (CNCF 2025)

Migrating a Kubernetes cluster from on-premise to a managed cloud service represents a major strategic decision.

According to the CNCF Annual Survey 2025, 82% of container users run Kubernetes in production. This massive adoption is accompanied by a clear trend: organizations seek to reduce the operational burden related to cluster management.

TL;DR: Migration to managed Kubernetes (EKS, GKE, AKS) reduces control plane maintenance time by 60%. This guide details the critical steps: auditing the existing cluster, mapping dependencies, migrating workloads progressively, and validating the result completely. Plan 4 to 8 weeks for a successful migration.

This skill is at the heart of the LFS458 Kubernetes Administration training.

Why Migrate to Managed Kubernetes?

Managed Kubernetes refers to a cloud service where the provider manages the control plane (API server, etcd, scheduler, controller-manager). You retain responsibility for worker nodes and workloads.

IT teams spend an average of 34 working days per year resolving Kubernetes issues according to Cloud Native Now. Migration to a managed service significantly reduces this burden.

Key takeaway: On-premise to cloud migration is not a simple "lift and shift". It's an opportunity to modernize architecture and adopt cloud-native best practices.

Before/After Comparison: Self-Hosted vs Managed

| Aspect | Self-hosted Kubernetes | Managed Kubernetes |
| --- | --- | --- |
| Control plane | You manage etcd, API server, scheduler | Managed by cloud provider |
| Updates | Manual planning, interruption risk | Automated or one-click updates |
| High availability | Manual multi-master configuration | Included by default |
| Operational cost | 2-3 FTE dedicated to infrastructure | 0.5-1 FTE |
| SLA | Depends on your team | 99.95% guaranteed (EKS, GKE, AKS) |
| IAM integration | Third-party solutions (OIDC, LDAP) | Native (IAM roles, Workload Identity) |
| Network | CNI to configure (Calico, Cilium) | Preconfigured CNI with options |

With 89% of IT leaders planning to increase their cloud budget in 2025 (nOps FinOps Statistics), the trend toward managed services is clearly confirmed.

To deepen the differences, consult our guide Managed or Self-Hosted Kubernetes: Best Practices for Making the Right Choice.

What Are the Prerequisites Before Migration?

Existing Infrastructure Audit

Inventory your current Kubernetes resources:

# List all namespaces and resources
kubectl get all --all-namespaces -o wide > inventory.txt

# Export configurations
kubectl get configmaps,secrets --all-namespaces -o yaml > configs-backup.yaml

# Document PersistentVolumes
kubectl get pv,pvc --all-namespaces -o yaml > storage-backup.yaml

Check version compatibility. Managed services generally support only the last 3 minor Kubernetes versions. If your on-premise cluster runs an obsolete version, plan an upgrade before migrating.
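The version-skew check can be sketched in a few lines of shell. The version numbers below are illustrative; in practice, read the real minor version from `kubectl version`:

```shell
# Hypothetical example: compare the on-prem cluster's minor version with the
# oldest minor version the target managed service still supports.
current_minor=27        # in practice: kubectl version -o json | jq -r '.serverVersion.minor'
oldest_supported=29     # managed services typically keep the last 3 minors

if [ "$current_minor" -lt "$oldest_supported" ]; then
  gap=$((oldest_supported - current_minor))
  echo "Upgrade needed: bridge $gap minor version(s) before migrating"
else
  echo "Version compatible with the target managed service"
fi
```

Note that Kubernetes upgrades must be done one minor version at a time, so a gap of two minors means two successive upgrade cycles before the migration can start.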

Dependency Mapping

Identify elements specific to your infrastructure:

  • Storage classes: On-premise provisioners (NFS, Ceph, vSphere) don't exist on managed clouds
  • Ingress controllers: Ingress NGINX Controller is scheduled for retirement in March 2026; plan its replacement
  • Network policies: Check compatibility with target cloud CNI
  • Custom Resource Definitions: Export and test compatibility

Key takeaway: 70% of organizations use Helm to deploy on Kubernetes (Orca Security 2025). Verify that your Helm charts are compatible with the target cluster.
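As a sketch, on-prem-specific provisioners can be flagged by grepping the StorageClass list. The sample output below stands in for a live `kubectl get storageclass` result, and the provisioner names are illustrative:

```shell
# Sample `kubectl get storageclass` output (NAME / PROVISIONER), standing in
# for a live cluster; provisioner names are illustrative.
sc_output='fast     kubernetes.io/vsphere-volume
shared   nfs.csi.k8s.io
block    rook-ceph.rbd.csi.ceph.com'

# Flag classes whose provisioner has no direct managed-cloud equivalent
flagged=$(printf '%s\n' "$sc_output" | grep -cE 'nfs|ceph|vsphere')
echo "$flagged StorageClass(es) must be re-mapped to the cloud CSI driver"
```

Each flagged class needs an equivalent StorageClass on the target cluster (backed by the provider's CSI driver) before the PersistentVolumeClaims that reference it can be restored.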

Cloud Provider Choice

Consult our EKS vs GKE vs AKS comparison to choose the platform suited to your needs. Main criteria are:

  • Data location (GDPR compliance)
  • Integration with your existing tools
  • Pricing model
  • Team expertise

To master managed cluster administration, CKA certification validates necessary skills. The exam lasts 2 hours with a passing score of 66% (Linux Foundation).

How to Plan Step-by-Step Migration?

Phase 1: Preparation (Week 1-2)

Create the target environment in your cloud provider:

# Example with GKE
gcloud container clusters create production-migrated \
  --region europe-west1 \
  --num-nodes 3 \
  --machine-type e2-standard-4 \
  --enable-autoscaling \
  --min-nodes 2 \
  --max-nodes 10

# Example with EKS
eksctl create cluster \
  --name production-migrated \
  --region eu-west-1 \
  --nodegroup-name workers \
  --node-type t3.large \
  --nodes 3 \
  --nodes-min 2 \
  --nodes-max 10

Configure authentication using native IAM mechanisms:

# GKE - Get credentials
gcloud container clusters get-credentials production-migrated --region europe-west1

# EKS - Configure kubeconfig
aws eks update-kubeconfig --name production-migrated --region eu-west-1
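During a migration you juggle two kubeconfig contexts, so it is worth guarding against deploying to the wrong cluster. A minimal sketch, assuming a hypothetical GKE context name (yours comes from `kubectl config get-contexts`):

```shell
# Guard: warn if kubectl is not pointing at the new managed cluster.
# The expected context name is hypothetical; adapt it to your project/region.
expected="gke_my-project_europe-west1_production-migrated"
current=$(kubectl config current-context 2>/dev/null || echo "none")

if [ "$current" != "$expected" ]; then
  echo "Refusing to deploy: current context is '$current', expected '$expected'"
fi
```

A check like this at the top of every migration script is cheap insurance against accidentally applying manifests to the old production cluster.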

Phase 2: Storage Migration (Week 2-3)

Storage represents the most critical element. Identify your PersistentVolumes and their migration method:

| Data Type | Migration Method | Interruption Time |
| --- | --- | --- |
| Databases | Native replication (PostgreSQL, MySQL) | ~0 with failover |
| Static files | Sync to cloud bucket (gsutil, aws s3) | 0 |
| Application volumes | Velero backup/restore | Minutes |

# Velero installation for backup/restore
velero install \
  --provider gcp \
  --bucket velero-backups \
  --secret-file ./credentials-velero

# Production namespace backup
velero backup create production-backup \
  --include-namespaces production \
  --include-resources persistentvolumeclaims,persistentvolumes

To understand differences between kubectl commands and alternatives, refer to our kubectl vs Docker CLI cheatsheet.

Phase 3: Workload Migration (Week 3-5)

Adopt a progressive approach starting with non-critical applications:

  1. Stateless applications: Redeploy via CI/CD on the new cluster
  2. Stateful applications: Use application replication
  3. Critical services: Blue-green migration with DNS switch

# Example Deployment adapted to managed cloud
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-production
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      # Using Workload Identity (GKE) or IRSA (EKS)
      serviceAccountName: api-workload-identity
      containers:
        - name: api
          image: gcr.io/project/api:v2.3.1
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          # Probes strongly recommended on managed services
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5

Phase 4: Traffic Switch (Week 5-6)

Configure progressive routing via your DNS or load balancer:

# Progressive switch with weighted routing (Route 53)
aws route53 change-resource-record-sets --hosted-zone-id Z123456 --change-batch '{
"Changes": [{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "api.example.com",
"Type": "A",
"SetIdentifier": "new-cluster",
"Weight": 10,
"AliasTarget": {
"DNSName": "k8s-lb-new.elb.amazonaws.com",
"HostedZoneId": "Z35SXDOTRQ7X7K",
"EvaluateTargetHealth": true
}
}
}]
}'

Progressively increase weight to the new cluster: 10% → 25% → 50% → 75% → 100%.
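The ramp-up above can be scripted as a simple loop. The Route 53 call is left as a comment here since zone and record IDs are deployment-specific:

```shell
# Progressive traffic ramp: the new cluster's weight rises step by step
# while the old cluster keeps the remainder (total weight = 100).
for new_weight in 10 25 50 75 100; do
  old_weight=$((100 - new_weight))
  echo "Routing ${new_weight}% to new cluster, ${old_weight}% to old cluster"
  # aws route53 change-resource-record-sets ... "Weight": ${new_weight} ...
  # Hold at each step and watch error rates and latency before continuing.
done
```

The key design point is the hold between steps: each increment only proceeds after monitoring confirms the new cluster absorbs the added traffic without regressions.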

What Rollback Plan to Prepare?

A robust rollback plan is mandatory. Document each rollback step:

Rollback Scenarios

| Problem Detected | Rollback Action | Estimated Time |
| --- | --- | --- |
| Degraded performance | DNS switch back to old cluster | 5-15 minutes |
| Application incompatibility | Velero restore on old cluster | 30-60 minutes |
| Data loss | Restore from etcd backup | 1-2 hours |
| Complete failure | DR plan activation | Variable |

# Quick rollback command - DNS switch
aws route53 change-resource-record-sets --hosted-zone-id Z123456 --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "api.example.com",
      "Type": "A",
      "SetIdentifier": "old-cluster",
      "Weight": 100
    }
  }]
}'

# Velero restoration if needed
velero restore create --from-backup production-backup

Keep the old cluster operational for a minimum of two weeks after the migration is complete.
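The scenarios above can be encoded in a small dispatch helper, so the on-call engineer does not have to re-derive the procedure under pressure. The categories and actions mirror the table; the helper itself is purely illustrative:

```shell
# Illustrative dispatch: map a detected problem category to its rollback action.
rollback_action() {
  case "$1" in
    performance)     echo "Shift DNS weight back to the old cluster" ;;
    incompatibility) echo "velero restore create --from-backup production-backup" ;;
    data-loss)       echo "Restore etcd from backup on the old cluster" ;;
    *)               echo "Activate the DR plan" ;;
  esac
}

rollback_action performance
```

Even a trivial mapping like this, checked into the runbook repository, keeps rollback decisions consistent across the team during an incident.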

How to Validate the Migration?

Technical Validation Checklist

Run these checks before decommissioning the old cluster:

# Check running pods
kubectl get pods --all-namespaces | grep -v Running | grep -v Completed

# Check service endpoints
kubectl get endpoints --all-namespaces

# Network connectivity test
kubectl run test-network --image=busybox --restart=Never --rm -it -- wget -qO- http://service.namespace.svc.cluster.local

# Check persistent volumes
kubectl get pvc --all-namespaces -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,STATUS:.status.phase'
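The pod check can be turned into a pass/fail gate for the decommissioning decision. The sample output below stands in for a live `kubectl get pods --all-namespaces --no-headers` listing:

```shell
# Sample `kubectl get pods --all-namespaces --no-headers` output, standing in
# for a live cluster during validation.
pods_output='production  api-7f9c   1/1  Running           0  3h
production  seed-job    0/1  Completed         0  1h
production  api-5d2b    0/1  CrashLoopBackOff  6  3h'

# Count pods that are neither Running nor Completed
failed=$(printf '%s\n' "$pods_output" | grep -cvE 'Running|Completed')
if [ "$failed" -gt 0 ]; then
  echo "Validation failed: $failed unhealthy pod(s)"
else
  echo "All pods healthy: old cluster can be scheduled for decommissioning"
fi
```

Wiring a gate like this into the CI/CD pipeline makes the "do not decommission yet" signal automatic rather than dependent on someone eyeballing the pod list.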

Functional Validation

  • [ ] All applications respond to health checks
  • [ ] Prometheus metrics are collected
  • [ ] Logs are centralized (CloudWatch, Stackdriver, Azure Monitor)
  • [ ] Alerts are configured and functional
  • [ ] Automatic backups are operational
  • [ ] CI/CD pipelines deploy to the new cluster
  • [ ] Load tests confirm expected performance

For an overview of available distributions, consult the Kubernetes distributions comparison table.

Security Validation

# RBAC audit
kubectl auth can-i --list --as=system:serviceaccount:production:api-sa

# Check Network Policies
kubectl get networkpolicies --all-namespaces

# Image scan with Trivy
trivy image gcr.io/project/api:v2.3.1

As a company CTO interviewed by Spectro Cloud recalls: "The VMware acquisition is influencing my decision making right now, heavily." Migration to a managed cloud service offers a sustainable alternative.

What Skills for Successful Migration?

CKA certification validates the skills needed to administer Kubernetes clusters, whether on-premise or managed. More than 104,000 people have taken the CKA exam with 49% annual growth (CNCF Training Report).

According to TechiesCamp: "The CKA exam tested practical, useful skills. It wasn't just theory - it matched real-world situations you'd actually run into when working with Kubernetes."

For infrastructure engineers preparing for certification, our guide on Kubernetes vs Docker Swarm clarifies fundamental differences between orchestrators. Additionally, the Kubernetes Comparisons and Alternatives hub centralizes all comparison resources.

Next Steps: Training and Certifications

Migration to managed Kubernetes requires solid cluster administration skills. SFEIR Institute offers official Linux Foundation trainings:

  • LFS458 Kubernetes Administration: 4 days of intensive training preparing for CKA certification. Covers installation, configuration, and production cluster management.

The system administrator Kubernetes Fundamentals training is an excellent starting point.

Certifications are valid for 2 years (Linux Foundation). Chris Aniszczyk, CNCF CTO, states that "Kubernetes is no longer experimental but foundational. Soon, it will be essential to AI as well" (CNCF State of Cloud Native 2026).

Contact our advisors to plan your training path and succeed in your migration to the managed cloud. Also consult our complete Kubernetes Training guide to explore all available options.