Key Takeaways
- ✓ Rolling Update enables zero-downtime updates with straightforward rollback
- ✓ Blue-Green and Canary minimize risk for critical deployments
- ✓ Helm and Kustomize standardize Kubernetes manifest management
Kubernetes deployment and production refers to all processes for delivering, configuring, and maintaining containerized applications on a Kubernetes cluster in a production environment.
If you manage cloud-native applications in 2026, this expertise is the essential foundation for ensuring availability, scalability, and reliability of your workloads.
According to the CNCF Annual Survey 2025, 82% of organizations use Kubernetes in production. Mastering deployment strategies allows you to drastically reduce production-related incidents.
TL;DR: Deploying on Kubernetes in production requires mastering rollout strategies (Rolling Update, Blue-Green, Canary), GitOps (ArgoCD, FluxCD), autoscaling (HPA, VPA), and CI/CD pipelines. The LFS458 Kubernetes Administration training (4 days, 28h) covers these skills for CKA.
What Is a Kubernetes Deployment and Why Is It Critical?
A Deployment is a Kubernetes object that manages the lifecycle of application Pods. It defines the desired state of your application: number of replicas, container image, allocated resources, and update strategy. Kubernetes constantly compares the current state to the desired state and makes necessary corrections.
The ReplicaSet is the underlying mechanism that maintains the number of running Pods. You generally don't manipulate ReplicaSets directly: the Deployment handles that for you.
Kelsey Hightower, creator of "Kubernetes The Hard Way", compares Kubernetes to a foundational platform on which you build your own deployment system (CNCF Blog).
Here's a minimal Deployment you can apply immediately:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-backend
  labels:
    app: api-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-backend
  template:
    metadata:
      labels:
        app: api-backend
    spec:
      containers:
        - name: api
          image: registry.example.com/api:v2.4.1
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
```
Key insight: Always define resource `requests` and `limits`. Without them, your Pods can be evicted under memory pressure or monopolize node resources.
How to Choose Your Deployment Strategy?
Kubernetes offers several update strategies. Your choice depends on your business constraints: tolerance for downtime, criticality of production testing, and infrastructure capacity.
| Strategy | Description | Downtime | Rollback | Use Case |
|---|---|---|---|---|
| Rolling Update | Progressive Pod replacement | None | Simple (`kubectl rollout undo`) | Standard stateless applications |
| Blue-Green | Two parallel environments | None | Instant | Critical applications, manual validation |
| Canary | Gradual deployment by percentage | None | Gradual | A/B testing, progressive validation |
| Recreate | Delete then recreate | Yes | Manual | Stateful applications with constraints |
Rolling Update is the default strategy. You configure its behavior via `maxSurge` and `maxUnavailable`:
```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 0
```
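To see what these percentages mean in actual pod counts: Kubernetes rounds `maxSurge` up and `maxUnavailable` down when resolving percentages against the replica count. A quick sketch of that arithmetic (an illustrative helper, not part of any Kubernetes API):

```python
import math

def rollout_bounds(replicas: int, max_surge_pct: int, max_unavailable_pct: int):
    """Return (max total Pods, min ready Pods) during a rolling update."""
    # Kubernetes rounds maxSurge up and maxUnavailable down
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return replicas + surge, replicas - unavailable

# With the manifest above (3 replicas, maxSurge 25%, maxUnavailable 0):
print(rollout_bounds(3, 25, 0))  # (4, 3): at most 4 Pods, never fewer than 3 ready
```

With `maxUnavailable: 0`, capacity never drops during the rollout, at the cost of one extra Pod's worth of cluster headroom.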
See our detailed guide on Kubernetes Rolling Update to deepen this strategy.
For critical applications requiring validation before switching, Blue-Green Deployment offers a more secure approach. You maintain two identical environments and switch traffic instantly.
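In practice, the instant switch is often implemented by repointing a Service selector from the blue Deployment to the green one. A minimal sketch (the `version` label and names are illustrative, not a prescribed convention):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-backend
spec:
  selector:
    app: api-backend
    version: green   # was "blue"; changing this one label switches all traffic
  ports:
    - port: 80
      targetPort: 8080
```

Rolling back is equally instant: set the selector back to `version: blue`, since the old Pods are still running.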
Canary Deployment allows you to test a new version on a limited percentage of users before generalization.
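A rudimentary canary can be built with plain Deployments: both carry the `app` label the Service selects, and the replica ratio determines the approximate traffic share. This is only a sketch (the image tags are hypothetical); dedicated tools like Argo Rollouts or a service mesh give far finer control:

```yaml
# Stable version: 9 replicas (~90% of traffic)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-backend-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: api-backend
      track: stable
  template:
    metadata:
      labels:
        app: api-backend   # matched by the Service selector
        track: stable
    spec:
      containers:
        - name: api
          image: registry.example.com/api:v2.4.1
---
# Canary version: 1 replica (~10% of traffic)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-backend-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-backend
      track: canary
  template:
    metadata:
      labels:
        app: api-backend
        track: canary
    spec:
      containers:
        - name: api
          image: registry.example.com/api:v2.5.0   # candidate version under test
```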
Key insight: In 2026, prefer Canary for high-traffic applications. Teams using progressive deployments significantly reduce production-related incidents.
How to Implement GitOps for Your Deployments?
GitOps is a methodology where Git becomes the single source of truth for infrastructure and applications. You declare the desired state in a Git repository, and an operator automatically synchronizes the cluster.
The two dominant tools in 2026 are ArgoCD and FluxCD. Our ArgoCD vs FluxCD comparison helps you choose.
Brendan Burns, Kubernetes co-creator, describes Kubernetes as "the assembly language for Cloud Native applications" (The New Stack). GitOps transforms deployment into an auditable and reproducible process.
Here's how you configure an ArgoCD Application:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api-backend
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/k8s-manifests
    targetRevision: main
    path: apps/api-backend/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
To deepen principles and implementation, see our guide GitOps and Kubernetes.
What Are Best Practices for a Kubernetes CI/CD Pipeline?
A CI/CD pipeline (Continuous Integration / Continuous Deployment) automates the steps between code commit and production deployment. For Kubernetes, you must integrate image building, testing, manifest updating, and cluster synchronization.
Our complete Kubernetes CI/CD Pipeline guide details each step.
According to the DORA 2024 report, "elite performer" teams deploy multiple times per day while low-performing teams deploy less than once a month. They also recover faster after incidents.
Structure your pipeline in distinct stages:
- Build: Build your image with an immutable tag (commit SHA)
- Test: Run unit, integration, and security tests
- Scan: Analyze the image for vulnerabilities (Trivy, Snyk)
- Push: Push to your private registry
- Update: Update the Kubernetes manifest with the new tag
- Sync: Trigger GitOps synchronization or apply directly
```bash
# Example CLI commands in your pipeline

# Build with immutable tag
docker build -t registry.example.com/api:${COMMIT_SHA} .

# Vulnerability scan
trivy image --severity HIGH,CRITICAL registry.example.com/api:${COMMIT_SHA}

# Push to registry
docker push registry.example.com/api:${COMMIT_SHA}

# Update manifest via Kustomize
cd k8s-manifests/apps/api-backend/production
kustomize edit set image api=registry.example.com/api:${COMMIT_SHA}
git commit -am "Deploy api:${COMMIT_SHA}"
git push
```
Key insight: Never use the `latest` tag in production. You lose traceability, and rollback becomes unreliable because the tag no longer identifies a unique image. Use a commit SHA or semantically versioned tags.
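As a concrete illustration, a pipeline can enforce this rule with a small guard before deploying. This is a hypothetical helper, not part of any standard tool:

```python
def is_immutable_tag(image_ref: str) -> bool:
    """Reject image references that are untagged or tagged 'latest'."""
    # Only look for ':' in the last path segment, so a registry port
    # (e.g. registry.example.com:5000/api) is not mistaken for a tag.
    last_segment = image_ref.rsplit("/", 1)[-1]
    tag = image_ref.rsplit(":", 1)[1] if ":" in last_segment else ""
    return bool(tag) and tag != "latest"

print(is_immutable_tag("registry.example.com/api:v2.4.1"))  # True
print(is_immutable_tag("registry.example.com/api:latest"))  # False
print(is_immutable_tag("registry.example.com:5000/api"))    # False (no tag)
```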
How to Configure Autoscaling to Handle Load?
Autoscaling automatically adjusts the number of Pods based on load. Kubernetes offers three complementary mechanisms you must understand.
HPA (Horizontal Pod Autoscaler) adjusts the number of replicas. VPA (Vertical Pod Autoscaler) adjusts CPU/memory resources of existing Pods. Cluster Autoscaler adds or removes nodes as needed.
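For comparison, a minimal VPA manifest looks like this. This is a sketch: it assumes the VPA operator is installed in the cluster (it is not part of core Kubernetes), and the names are illustrative:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: api-backend-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-backend
  updatePolicy:
    updateMode: "Off"   # recommendation-only; "Auto" lets VPA evict and resize Pods
```

Starting in recommendation-only mode (`updateMode: "Off"`) lets you review VPA's suggested requests before allowing it to restart Pods.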
Our guide Kubernetes Autoscaling: HPA, VPA, and Automatic Scaling covers all three mechanisms in detail.
Configure an HPA based on CPU:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-backend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-backend
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
```
According to Google Kubernetes Engine Best Practices 2026, you should always define a `stabilizationWindowSeconds` for scale-down to avoid oscillations.
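The HPA controller's scaling decision follows a simple documented formula: desired replicas = ceil(current replicas × current metric / target metric), clamped to the configured bounds. Here is that arithmetic as a sketch (the helper name and default bounds are illustrative):

```python
import math

def hpa_desired_replicas(current_replicas: int,
                         current_utilization: float,
                         target_utilization: float,
                         min_replicas: int = 3,
                         max_replicas: int = 20) -> int:
    """Core HPA formula: desired = ceil(current * currentMetric / targetMetric),
    then clamped to [minReplicas, maxReplicas]."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# 3 replicas averaging 140% CPU against a 70% target -> scale out to 6
print(hpa_desired_replicas(3, 140, 70))  # 6
# 6 replicas averaging 20% -> formula says 2, but minReplicas floors it at 3
print(hpa_desired_replicas(6, 20, 70))   # 3
```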
What Tools for Managing Your Kubernetes Manifests?
You have two main approaches to manage manifest complexity: Helm and Kustomize. Each meets different needs.
Helm is a package manager. You use parameterizable templates and per-environment values. It excels for third-party applications and standardized deployments.
Kustomize is a customization tool without templating. You overlay patches on base manifests. It's native to kubectl since version 1.14.
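To illustrate the overlay model, a production overlay might look like this (the paths and names are hypothetical):

```yaml
# apps/api-backend/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base                 # reuse the base manifests unchanged
patches:
  - path: replica-patch.yaml   # production-specific overrides (e.g. replicas: 3)
images:
  - name: api                  # rewrite the image reference without templating
    newName: registry.example.com/api
    newTag: v2.4.1
```

The base stays generic; each environment carries only its differences, which is exactly what the `kustomize edit set image` step in the CI/CD pipeline above modifies.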
Our Helm vs Kustomize comparison guides you in your choice.
| Criterion | Helm | Kustomize |
|---|---|---|
| Complexity | Moderate (Go templates) | Low (native YAML) |
| Reusability | Shared charts | Bases + overlays |
| Ecosystem | Artifact Hub (15,000+ charts) | Native components |
| GitOps | Via ArgoCD/Flux | Native kubectl |
| Use case | Third-party apps, standards | Internal apps, customization |
For an exhaustive view of strategies, see our deployment strategies comparison table.
What Checks Before Going to Production?
Before deploying to production, you must systematically validate several critical points. Our Production Checklist details 15 essential best practices.
Validate your resources: Each container must define `requests` and `limits`. Use `kubectl describe node` to check pressure on your nodes.
Configure probes: Liveness and readiness probes allow Kubernetes to properly manage your Pods' lifecycle.
```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
```
Prepare rollback: Verify you can revert to the previous version with `kubectl rollout undo deployment/api-backend`.
Test monitoring: Ensure your metrics and alerts are configured before deployment. See our Kubernetes Monitoring and Troubleshooting hub for best practices.
Key insight: Fast rollback is critical to limit incident impact. Prepare your runbooks and test the rollback procedure before going to production.
How to Diagnose and Resolve Deployment Issues?
Deployment errors are inevitable. You must know how to diagnose them quickly.
The most frequent causes according to our deployment error resolution guide:
- `ImagePullBackOff`: Image doesn't exist or credentials are incorrect
- `CrashLoopBackOff`: Application crashes at startup
- `Pending`: Insufficient cluster resources
- `OOMKilled`: Memory limit exceeded
Essential diagnostic commands:
```bash
# Deployment status
kubectl rollout status deployment/api-backend

# Pod events
kubectl describe pod api-backend-xyz

# Container logs
kubectl logs -f deployment/api-backend --all-containers

# Revision history
kubectl rollout history deployment/api-backend
```
For concrete experience feedback, see our article Experience Feedback: Kubernetes Production Migration.
Our Kubernetes Tutorials and Practical Guides hub also offers hands-on exercises.
Take Action: Get Trained on Kubernetes Deployment
You now master the fundamental concepts of Kubernetes production deployment. To go further and obtain recognized certification, SFEIR Institute offers official Linux Foundation trainings.
The LFS458 Kubernetes Administration training prepares you for CKA certification. You'll learn to configure, manage, and troubleshoot clusters in real conditions over 4 intensive days.
For developers, the LFD459 Kubernetes for Developers training covers in 3 days the application deployment skills needed for CKAD.
If you're a beginner, the Kubernetes Fundamentals training gives you the basics in one day.
Contact your OPCO to explore funding possibilities. Contact our advisors to build your certification path.
See our Kubernetes Training: Complete Guide to discover all available paths.
Guides and Tutorials in This Section
To deepen Kubernetes deployment and production, explore these resources:
- Set Up a CI/CD Pipeline for Kubernetes: Complete Guide: deployment automation
- GitOps and Kubernetes: Principles, Tools, and Implementation: Git as source of truth
- ArgoCD vs FluxCD: Which GitOps Tool for Kubernetes: GitOps operator comparison
- Kubernetes Rolling Update: Deploy Without Service Interruption: progressive update strategy
- kubectl Cheatsheet: Essential Deployment Commands: quick command reference
- Resolve Kubernetes Deployment Errors: Diagnostic Guide: deployment troubleshooting
- Kubernetes Production Checklist: 15 Best Practices: pre-go-live validation
- First Kubernetes Deployment in 30 Minutes: quickstart for beginners
- Experience Feedback: Kubernetes Production Migration: field lessons
- Canary Deployment on Kubernetes: Progressive Deployment Explained: controlled production testing
- Deploy with Helm Charts: Installation, Configuration, and Best Practices: Kubernetes package management
- Helm vs Kustomize: Comparison for Managing Kubernetes Deployments: choose your templating tool
- Kubernetes Deployment Strategies: Complete Comparison Table: approach overview
- Blue-Green Deployment on Kubernetes: Zero Downtime in Production: instant version switching
- Kubernetes Autoscaling: HPA, VPA, and Automatic Scaling Explained: adapt resources to load
- Kubernetes Scaling Problems: Diagnosis and Solutions: resolve autoscaling issues
- Migrate to GitOps Architecture for Kubernetes: GitOps transition
- Kubernetes Deployment FAQ: Most Frequent Questions Answered: common deployment questions
- Kubernetes Multi-Environment Management: Strategies and Best Practices: dev, staging, production
- CI/CD Tools Comparison for Kubernetes in 2026: Jenkins, GitLab CI, GitHub Actions, ArgoCD