Key Takeaways
- ✓ ArgoCD holds 60% of the GitOps market for Kubernetes (CNCF 2025)
- ✓ Main strategies: rolling update, blue-green, and canary deployment
- ✓ GitOps and CI/CD are the standards for Kubernetes deployment
This Kubernetes deployment FAQ gathers the questions production teams ask most often. From rolling update strategies to GitOps tooling, the answers below are validated by practitioners.
TL;DR
The most common questions concern choosing deployment strategies, managing rollbacks, CI/CD configuration, and securing secrets. Each answer includes configuration examples and references to best practices.
This topic is covered in the LFS458 Kubernetes Administration training.
According to the CNCF Annual Survey 2025 report, 82% of container users run Kubernetes in production. This massive adoption generates many practical questions.
Which deployment strategy should you choose?
The strategy depends on your risk tolerance and technical constraints. Kubernetes natively supports the first two strategies below (RollingUpdate and Recreate); blue-green and canary deployments require additional tooling such as Argo Rollouts or a service mesh:
| Strategy | Use Case | Risk | Rollback |
|---|---|---|---|
| RollingUpdate | Standard production | Low | Automatic |
| Recreate | Version incompatibility | Medium (downtime) | Manual |
| Blue-Green | Zero downtime required | Low | Instant |
| Canary | Progressive validation | Very low | Progressive |
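The Recreate strategy has no tuning parameters; a minimal sketch (name and replica count are illustrative) looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # illustrative name
spec:
  replicas: 4
  strategy:
    type: Recreate      # all old Pods are terminated before new ones start (brief downtime)
```

Use it only when two versions cannot coexist, for example after an incompatible schema change.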
RollingUpdate configuration:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
```
Key takeaway: `maxUnavailable: 0` guarantees that no Pod is unavailable during the update. The cost is one temporary additional Pod (`maxSurge: 1`).
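These two parameters bound the Pod count during a rollout; the envelope can be checked with simple arithmetic on the manifest's values:

```bash
# Pod-count envelope during a RollingUpdate (values from the manifest above)
replicas=4; maxSurge=1; maxUnavailable=0
echo "peak pods:         $((replicas + maxSurge))"       # one temporary extra Pod
echo "minimum available: $((replicas - maxUnavailable))" # never below the desired count
```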
The Kubernetes deployment strategies guide details each approach with comparison tables.
For Kubernetes Rolling Update, see our dedicated guide.
How to perform a rollback after a failed deployment?
Kubernetes maintains revision history for each Deployment. Rollback is a native operation.
Check history:

```bash
kubectl rollout history deployment/my-app
```

Output:

```
REVISION  CHANGE-CAUSE
1         Initial deployment
2         Update image to v2.0.0
3         Update image to v2.1.0 (current)
```

Rollback to the previous revision:

```bash
kubectl rollout undo deployment/my-app
```

Rollback to a specific revision:

```bash
kubectl rollout undo deployment/my-app --to-revision=1
```

Check status:

```bash
kubectl rollout status deployment/my-app
```
Key takeaway: By default, Kubernetes keeps 10 revisions (`revisionHistoryLimit`). Increase this value for critical applications:

```yaml
spec:
  revisionHistoryLimit: 20
```
The Kubernetes Training FAQ answers other questions about basic features.
Which CI/CD tool should you use for deploying to Kubernetes?
This question comes up frequently in the Kubernetes production FAQ. The choice depends on your existing infrastructure.
| Tool | Strengths | GitOps Integration |
|---|---|---|
| GitHub Actions | GitHub ecosystem, simple | Via ArgoCD/Flux |
| GitLab CI | All-in-one, integrated runners | Native with GitLab Agent |
| Jenkins | Flexibility, plugins | Via Kubernetes plugins |
| Tekton | Cloud-native, CRDs | Native |
| Argo Workflows | Integrated with ArgoCD | Native |
According to the CNCF End User Survey 2025, ArgoCD holds 60% of the GitOps market. Integrating CI/CD pipelines with GitOps is a strong trend.
GitHub Actions pipeline example:
```yaml
name: Deploy to Kubernetes
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          docker build -t my-registry/app:${{ github.sha }} .
          docker push my-registry/app:${{ github.sha }}
      - name: Update GitOps repo
        run: |
          # Simplified: assumes the k8s-config repo is checked out
          # and git identity/push credentials are configured
          cd k8s-config
          kustomize edit set image my-registry/app:${{ github.sha }}
          git commit -am "Deploy ${{ github.sha }}"
          git push
```
The CI/CD pipeline for Kubernetes guide provides a complete implementation.
How to manage multiple environments (dev, staging, prod)?
Use Kustomize or Helm with overlays per environment. This approach ensures consistency while allowing variations.
Recommended Kustomize structure:
```
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
├── overlays/
│   ├── dev/
│   │   ├── kustomization.yaml
│   │   └── replicas-patch.yaml
│   ├── staging/
│   │   └── kustomization.yaml
│   └── prod/
│       ├── kustomization.yaml
│       └── resources-patch.yaml
```
Production overlay:
```yaml
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
replicas:
  - name: my-app
    count: 5
patches:
  - path: resources-patch.yaml
images:
  - name: my-app
    newTag: v2.1.0-stable
```
Key takeaway: Overlays allow modifying replicas, resources, and images without duplicating base manifests.
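By symmetry, the dev overlay from the directory tree could look like this (a hypothetical sketch; the patch file only lowers the replica count):

```yaml
# overlays/dev/kustomization.yaml (hypothetical sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replicas-patch.yaml  # e.g. sets spec.replicas to 1 for dev
```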
See Kubernetes multi-environment management for advanced patterns.
How to secure secrets in Kubernetes deployments?
Never store secrets in plain text in Git. Several solutions exist for securing secrets in Kubernetes deployments:
| Solution | Encryption | Management |
|---|---|---|
| Kubernetes Secrets (base64) | No | Avoid |
| Sealed Secrets | Asymmetric | GitOps-friendly |
| External Secrets Operator | External | Vault, AWS SM |
| SOPS | Symmetric/KMS | GitOps-friendly |
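The "Avoid" verdict on plain Kubernetes Secrets follows from base64 being an encoding, not encryption: anyone who can read the manifest recovers the value (the encoded string below is an illustrative example):

```bash
# base64 is a reversible encoding, not encryption
encoded="cGFzc3dvcmQxMjM="   # as it would appear in a Secret manifest
echo "$encoded" | base64 -d  # prints the plaintext
```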
External Secrets Operator configuration:
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: my-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: my-secret
  data:
    - secretKey: password
      remoteRef:
        key: prod/my-app/credentials
        property: password
```
The operator automatically synchronizes secrets from the external provider.
Key takeaway: External Secrets Operator is the recommended solution for multi-cloud environments and existing Vault integrations.
The Kubernetes Qualiopi page explains the certified training framework.
How to monitor deployment status in production?
Combine native Kubernetes metrics with an observability stack. Essential elements:
```bash
# Deployment status
kubectl get deployments -A
kubectl rollout status deployment/my-app

# Recent events
kubectl get events --sort-by='.lastTimestamp' -n production

# Pod metrics
kubectl top pods -n production
```
Recommended Prometheus metrics:
```promql
# Unavailable replicas per deployment
sum(kube_deployment_status_replicas_unavailable) by (deployment)

# Time since last deployment
time() - max(kube_deployment_created) by (deployment)

# Container restarts over the last hour
sum(increase(kube_pod_container_status_restarts_total[1h])) by (pod)
```
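These queries can also feed alerting. A hypothetical rule on unavailable replicas, assuming kube-state-metrics and the Prometheus Operator CRDs are installed (rule name and thresholds are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: deployment-alerts        # illustrative name
spec:
  groups:
    - name: deployments
      rules:
        - alert: DeploymentReplicasUnavailable
          expr: sum(kube_deployment_status_replicas_unavailable) by (deployment) > 0
          for: 10m               # tolerate brief rollout transitions
          labels:
            severity: warning
          annotations:
            summary: "Deployment {{ $labels.deployment }} has unavailable replicas"
```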
As TealHQ advises: "Don't let your knowledge remain theoretical - set up a real Kubernetes environment to solidify your skills."
The Kubernetes deployment and production hub centralizes resources on this topic.
What is the difference between Deployment, StatefulSet, and DaemonSet?
Each resource addresses a specific need:
| Resource | Use Case | Identity | Storage |
|---|---|---|---|
| Deployment | Stateless applications | Interchangeable Pods | Ephemeral |
| StatefulSet | Databases, caches | Stable identity (pod-0, pod-1) | Dedicated PVCs |
| DaemonSet | Agents on each node | One Pod per node | Node-local |
StatefulSet example for a database:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres   # must match the selector above
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```
Key takeaway: StatefulSet guarantees that postgres-0 always restarts with the same PVC, preserving data.
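The `serviceName` field is expected to reference a headless Service, which gives each Pod its stable DNS name (`postgres-0.postgres`, etc.); a minimal sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres        # must match serviceName in the StatefulSet
spec:
  clusterIP: None       # headless: per-Pod DNS entries instead of a virtual IP
  selector:
    app: postgres
  ports:
    - port: 5432
```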
How to configure health checks for reliable deployments?
Kubernetes probes detect problems and automate corrective actions.
| Probe | Function | Action on failure |
|---|---|---|
| livenessProbe | Is the container alive? | Restart |
| readinessProbe | Can the container receive traffic? | Remove from Service |
| startupProbe | Has the container started? | Block other probes |
Recommended configuration:
```yaml
spec:
  containers:
    - name: app
      image: my-app:v1
      ports:
        - containerPort: 8080
      startupProbe:
        httpGet:
          path: /healthz
          port: 8080
        failureThreshold: 30
        periodSeconds: 10
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 0
        periodSeconds: 10
        failureThreshold: 3
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        initialDelaySeconds: 0
        periodSeconds: 5
        failureThreshold: 3
```
Key takeaway: `startupProbe` is essential for slow-starting applications. Without it, `livenessProbe` may kill the Pod before it's ready.
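The startup window tolerated by the configuration above follows from `failureThreshold × periodSeconds`:

```bash
# Maximum startup time before the container is considered failed and restarted
failureThreshold=30
periodSeconds=10
echo "startup budget: $((failureThreshold * periodSeconds)) seconds"
```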
The GitOps and Kubernetes guide explains how to integrate these configurations into a declarative workflow.
How to limit deployment impact with PodDisruptionBudget?
PodDisruptionBudget (PDB) guarantees minimum availability during maintenance operations.
```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: my-app
```
This configuration guarantees that at most one Pod is unavailable at a time during voluntary evictions (scale-down, node drain, upgrades).
For critical applications with minimum 3 replicas:
```yaml
spec:
  minAvailable: 2
```
Key takeaway: PDB protects against voluntary evictions. It does not protect against application crashes.
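The number of simultaneous voluntary disruptions a PDB permits follows directly from the spec; for the two examples above, both allow a single eviction at a time:

```bash
# Allowed simultaneous voluntary disruptions
replicas=3
maxUnavailable=1
minAvailable=2
echo "with maxUnavailable=1:           $maxUnavailable"
echo "with minAvailable=2, 3 replicas: $((replicas - minAvailable))"
```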
The CI/CD tools comparison for Kubernetes integrates these best practices.
Take action: train on Kubernetes deployments
As Hired CTO via Splunk indicates: "Demand and salaries for highly-skilled and qualified tech talent are fiercer than ever, and certifications present a clear pathway for IT professionals to further their careers."
To master Kubernetes deployments:
- The LFS458 Kubernetes Administration training covers all these topics over 4 days and prepares for CKA certification
- The LFD459 training for developers focuses on application patterns
- To get started, discover Kubernetes Fundamentals
Contact our advisors to define your training path.