
Kubernetes Rolling Update: Deploy Without Service Interruption

SFEIR Institute

Key Takeaways

  • With maxUnavailable: 0, Kubernetes keeps the full desired replica count serving traffic throughout the update
  • maxSurge and maxUnavailable control the rolling update speed
  • kubectl rollout undo instantly restores the previous version

A Kubernetes rolling update deploys a new application version progressively, without downtime. For any system administrator following Kubernetes training, mastering this strategy is essential: 82% of container users run Kubernetes in production (CNCF Annual Survey 2025).

TL;DR: Rolling update progressively replaces old pods with new ones, guaranteeing continuous availability. Configure maxSurge and maxUnavailable in your Deployment, then trigger the update with kubectl set image or kubectl apply. In case of problems, kubectl rollout undo instantly restores the previous version.

This skill is at the heart of the LFS458 Kubernetes Administration training.

Prerequisites Before Starting

Before executing your first Kubernetes rolling update strategy, verify these essential elements.

Required Environment

| Component | Minimum Version | Verification Command |
|---|---|---|
| kubectl | v1.28+ | kubectl version --client |
| Kubernetes cluster | v1.28+ | kubectl version |
| Existing Deployment | - | kubectl get deployments |

kubectl version --client
# Expected output:
# Client Version: v1.29.0
# Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3

Prior Knowledge

You should understand:

  • The relationship between Pods, ReplicaSets, and Deployments
  • Basic kubectl commands (get, describe, apply)
  • How readiness probes report pod health
Key takeaway: A rolling update applies only to Deployments. DaemonSets and StatefulSets use different mechanisms.
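To illustrate the difference: DaemonSets and StatefulSets configure their rollout behavior under spec.updateStrategy rather than spec.strategy. A minimal sketch for a hypothetical StatefulSet (the name web and the partition value are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web            # hypothetical name
spec:
  updateStrategy:      # note: updateStrategy, not strategy
    type: RollingUpdate
    rollingUpdate:
      partition: 2     # only pods with ordinal >= 2 are updated
  # serviceName, selector, and template omitted for brevity
```

StatefulSets also update pods in reverse ordinal order, one at a time, rather than managing two ReplicaSets.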

Step 1: Understand the Rolling Update Mechanism

A rolling update is Kubernetes' progressive update strategy: pods are replaced one by one. Kubernetes creates a new ReplicaSet, progressively starts pods running the new version in it, then scales down the pods of the old ReplicaSet.

How Does the Process Work?

The rolling update process follows this sequence:

  1. Creation of a new ReplicaSet with the new image
  2. Progressive scaling up of new pods
  3. Verification of readiness probes before continuing
  4. Progressive termination of old pods
  5. Deletion of the old ReplicaSet once empty

kubectl describe deployment nginx-deployment | grep -A 5 "RollingUpdateStrategy"
# Expected output:
# RollingUpdateStrategy:  25% max unavailable, 25% max surge
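The sequence above can be sketched as a toy loop, no cluster required (the counts assume 4 replicas with maxSurge: 1 and maxUnavailable: 0):

```shell
# Toy simulation of the replacement order: with maxUnavailable=0,
# a new pod must be Ready before an old one is terminated, so the
# ready total never drops below the desired 4 replicas.
old=4; new=0
while [ "$new" -lt 4 ]; do
  new=$((new + 1)); echo "surge old=$old new=$new total=$((old + new))"
  old=$((old - 1)); echo "drain old=$old new=$new total=$((old + new))"
done
```

Each iteration briefly runs 5 pods (the surge), then returns to 4 once an old pod is drained.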

This strategy ensures that a sufficient number of pods handle requests at every moment. To go deeper into Kubernetes deployment and production strategies, consult our dedicated hub.

Step 2: Configure the Deployment Strategy

Define the maxSurge and maxUnavailable parameters according to your availability needs.

Key Parameters

| Parameter | Definition | Recommended Value |
|---|---|---|
| maxSurge | Additional pods created during the update | 25% or 1 |
| maxUnavailable | Pods that may be unavailable | 0 or 25% |
| minReadySeconds | Wait time after Ready before continuing | 10-30 seconds |
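Percentage values are resolved against the replica count: maxSurge rounds up, maxUnavailable rounds down. A quick sanity check in plain shell arithmetic (the 25% / 4-replica figures match the defaults shown earlier):

```shell
# Pod-count bounds during a rolling update with replicas=4,
# maxSurge=25% (rounded up), maxUnavailable=25% (rounded down).
replicas=4
surge=$(( (replicas * 25 + 99) / 100 ))   # ceil(4 * 0.25)  -> 1 extra pod allowed
unavail=$(( replicas * 25 / 100 ))        # floor(4 * 0.25) -> 1 pod may be down
echo "at most $((replicas + surge)) pods, at least $((replicas - unavail)) ready"
```

So with 4 replicas and the 25%/25% defaults, the cluster runs between 3 and 5 pods during the update.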

Create an Optimized Deployment

Apply this configuration for a zero-interruption rolling update:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: webapp:v1.0.0
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi

kubectl apply -f deployment.yaml
# Expected output:
# deployment.apps/webapp created

Key takeaway: With maxUnavailable: 0, Kubernetes never takes a pod out of service before its replacement is Ready, so the full desired replica count stays available during the update. This is the safest configuration for production.

IT teams spend an average of 34 workdays per year resolving Kubernetes problems (Cloud Native Now). Correct configuration from the start considerably reduces this time.

Step 3: Execute a Rolling Update

Several methods can trigger a progressive update. Choose the one that suits your workflow.

Method 1: kubectl set image

The quickest method to update a container's image:

kubectl set image deployment/webapp webapp=webapp:v2.0.0 --record
# Expected output:
# deployment.apps/webapp image updated

The --record flag stores the command in the revision history, facilitating future rollbacks. Note that --record is deprecated in recent kubectl versions; setting the kubernetes.io/change-cause annotation on the Deployment achieves the same result.

Method 2: kubectl apply

For more complex modifications, modify the YAML file and apply:

kubectl apply -f deployment-v2.yaml --record
# Expected output:
# deployment.apps/webapp configured

This approach integrates perfectly into a CI/CD pipeline for Kubernetes. For reproducible deployments, prefer Helm Charts: 70% of organizations use Helm to manage their deployments (Orca Security 2025).

Method 3: kubectl patch

For targeted modifications without a complete YAML file:

kubectl patch deployment webapp -p '{"spec":{"template":{"spec":{"containers":[{"name":"webapp","image":"webapp:v2.0.0"}]}}}}'
# Expected output:
# deployment.apps/webapp patched
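The same patch can also live in a file, which is easier to review and version. A minimal sketch (the patch-image.yaml filename is an assumption):

```yaml
# patch-image.yaml -- equivalent to the inline patch above
spec:
  template:
    spec:
      containers:
        - name: webapp
          image: webapp:v2.0.0
```

Apply it with kubectl patch deployment webapp --patch-file patch-image.yaml.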

Step 4: Monitor the Deployment in Real Time

Verify rolling update progress to detect any problem quickly.

Track Rollout Status

kubectl rollout status deployment/webapp
# Expected output (in progress):
# Waiting for deployment "webapp" rollout to finish: 2 out of 4 new replicas have been updated...
#
# Expected output (completed):
# deployment "webapp" successfully rolled out

Observe ReplicaSets

kubectl get replicasets -l app=webapp
# Expected output:
# NAME                DESIRED   CURRENT   READY   AGE
# webapp-7d4f5b8c9    4         4         4       30s
# webapp-5b6c7d8e9    0         0         0       5m

The old ReplicaSet (webapp-5b6c7d8e9) retains 0 replicas but remains present to allow rollback.

Check Events

kubectl describe deployment webapp | tail -20
# Expected output:
# Events:
#   Type    Reason             Age   From                   Message
#   ----    ------             ----  ----                   -------
#   Normal  ScalingReplicaSet  2m    deployment-controller  Scaled up replica set webapp-7d4f5b8c9 to 1
#   Normal  ScalingReplicaSet  90s   deployment-controller  Scaled down replica set webapp-5b6c7d8e9 to 3

Key takeaway: Revision history is kept by default (the last 10 revisions). Configure revisionHistoryLimit according to your rollback needs.
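revisionHistoryLimit sits at the top level of the Deployment spec; a minimal fragment (the value 5 is an arbitrary example):

```yaml
spec:
  revisionHistoryLimit: 5   # keep only the 5 most recent old ReplicaSets
  replicas: 4
```

Lowering this value reduces clutter from old ReplicaSets, but also limits how far back kubectl rollout undo can go.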

For advanced monitoring strategies, consult the Kubernetes tutorials and practical guides. The Prometheus + Grafana stack is used by 75% of teams to monitor Kubernetes (Grafana Labs).

Step 5: Perform a Rollback if Necessary

A failing deployment requires a quick response. Master rollback commands.

View Revision History

kubectl rollout history deployment/webapp
# Expected output:
# deployment.apps/webapp
# REVISION  CHANGE-CAUSE
# 1         kubectl apply --filename=deployment.yaml --record=true
# 2         kubectl set image deployment/webapp webapp=webapp:v2.0.0 --record=true

Return to Previous Revision

kubectl rollout undo deployment/webapp
# Expected output:
# deployment.apps/webapp rolled back

Return to a Specific Revision

kubectl rollout undo deployment/webapp --to-revision=1
# Expected output:
# deployment.apps/webapp rolled back

Immediate verification after rollback:

kubectl get pods -l app=webapp -o jsonpath='{.items[*].spec.containers[*].image}'
# Expected output:
# webapp:v1.0.0 webapp:v1.0.0 webapp:v1.0.0 webapp:v1.0.0

Troubleshooting Common Errors

Rolling updates can fail for several reasons. Identify and resolve these problems quickly.

Pods Stuck in ImagePullBackOff

kubectl get pods -l app=webapp
# Problem detected:
# NAME                     READY   STATUS             RESTARTS   AGE
# webapp-7d4f5b8c9-abc12   0/1     ImagePullBackOff   0          2m

Solution: Verify the image name and tag, as well as registry credentials.

kubectl describe pod webapp-7d4f5b8c9-abc12 | grep -A 5 "Events"

Deployment Stuck: Readiness Probe Fails

kubectl rollout status deployment/webapp
# Problem detected:
# Waiting for deployment "webapp" rollout to finish: 1 old replicas are pending termination...

Old pods don't terminate because new ones never reach Ready state.

Solution: Check your readiness probe configuration and application logs.

kubectl logs -l app=webapp --tail=50
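If the application simply needs more time to boot, loosening the probe timings is often enough. A hedged starting point, not a definitive configuration (the exact values depend on your application's startup profile):

```yaml
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 15   # more startup headroom than the original 5s
  periodSeconds: 5
  timeoutSeconds: 2         # fail fast on a hung endpoint
  failureThreshold: 3       # 3 consecutive failures mark the pod NotReady
```

For genuinely slow-starting applications, a dedicated startupProbe is the cleaner fix.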

For a complete diagnostic guide, consult our article Resolving Kubernetes deployment errors.

Insufficient Resources

kubectl describe pod webapp-7d4f5b8c9-def34 | grep -A 3 "Conditions"
# Problem detected:
# Warning  FailedScheduling  default-scheduler  0/3 nodes are available: 3 Insufficient cpu

Solution: Adjust requests/limits or add nodes to the cluster.

Key takeaway: A maxSurge of 25% with 4 replicas creates 1 additional pod. If your nodes are already saturated, the rolling update will stall with pods stuck in Pending. Plan capacity accordingly.

Best Practices for Reliable Rolling Updates

| Practice | Reason | Configuration |
|---|---|---|
| Readiness probes | Avoids routing traffic to non-ready pods | initialDelaySeconds: 10 |
| Resource requests | Guarantees scheduling | CPU and memory defined |
| PodDisruptionBudget | Protects availability | minAvailable: 50% |
| Graceful shutdown | Terminates connections cleanly | terminationGracePeriodSeconds: 30 |
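The PodDisruptionBudget practice can be expressed as a short manifest. A minimal sketch reusing the app: webapp label from the earlier Deployment (the name webapp-pdb is an assumption):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: webapp-pdb
spec:
  minAvailable: 50%       # at least half the pods stay up
  selector:
    matchLabels:
      app: webapp
```

Note that a PDB guards against voluntary disruptions such as node drains; the rolling update itself is governed by maxUnavailable.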

These configurations are part of the skills evaluated in the CKA certification. Kubernetes total cost of ownership grows 88% year over year (Spectro Cloud): investing in best practices reduces these costs.

Take Action: Master Kubernetes Deployments

Rolling update is a fundamental skill for any Kubernetes administrator. 80% of organizations run Kubernetes in production with an average of 20+ clusters (Spectro Cloud State of Kubernetes 2025). Demand for certified professionals keeps growing: the average Kubernetes developer salary reaches $152,640/year (Ruby On Remote).

Develop your skills with SFEIR's certifying training programs.

Contact your OPCO to explore funding possibilities. Contact our advisors to define your training path.