Key Takeaways
- ✓ 91% of cloud-native organizations adopted GitOps in 2025 (CNCF)
- ✓ GitOps separates build and deploy for better traceability
- ✓ Automated testing at every stage and automatic rollbacks reduce incidents
TL;DR: A high-performing CI/CD pipeline for Kubernetes applications relies on GitOps, automated testing at every stage, build/deploy separation, and automatic rollbacks. With 91% of cloud-native organizations having adopted GitOps (CNCF GitOps Survey 2025), mastering these practices has become essential for any Kubernetes infrastructure engineer.
These skills are at the core of the LFD459 Kubernetes for Application Developers training.
Why a Kubernetes-specific CI/CD pipeline?
A CI/CD pipeline for Kubernetes applications differs fundamentally from traditional pipelines. Kubernetes imposes a declarative approach where the desired cluster state is defined in YAML manifests, versioned, and applied idempotently.
This declarative philosophy transforms Kubernetes continuous deployment into a process where application code and infrastructure converge in the same workflow. With 82% of container users running Kubernetes in production (CNCF Annual Survey 2025), standardizing your pipelines becomes a competitive advantage.
Key takeaway: A Kubernetes pipeline must treat infrastructure manifests like application code: versioned, tested, and automatically deployed.
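As a minimal illustration of this declarative model, a Deployment manifest lives in Git next to the application code (names, image, and port below are illustrative):

```yaml
# deployment.yaml -- versioned in Git like application code
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:abc1234  # immutable tag, not "latest"
          ports:
            - containerPort: 8080
```

Applying this manifest twice with `kubectl apply -f deployment.yaml` converges to the same cluster state, which is what makes the declarative approach idempotent.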
Adopt GitOps as a deployment model
Why: GitOps uses Git as the single source of truth for cluster state. Every change goes through a pull request, creating complete traceability and instant rollbacks.
How: Configure a GitOps operator (ArgoCD or Flux) that automatically syncs cluster state with your repository. ArgoCD dominates the market with 60% market share versus 11% for Flux (CNCF End User Survey 2025).
Example:
```yaml
# argocd-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/k8s-manifests
    targetRevision: main
    path: environments/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
This configuration enables automatic synchronization with self-healing: any drift between the cluster and the repository is corrected automatically. To go deeper into the architecture of deployed applications, consult the guide on microservices architecture on Kubernetes.
Separate Build and Deploy pipelines
Why: Mixing image building and deployment creates fragile dependencies. A build failure should never block a production rollback.
How: Create two distinct pipelines. The Build pipeline produces a Docker image tagged with the commit SHA. The Deploy pipeline updates Kubernetes manifests in the GitOps repository.
Build pipeline example (GitHub Actions):
```yaml
name: Build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push
        run: |
          docker build -t registry.example.com/app:${{ github.sha }} .
          docker push registry.example.com/app:${{ github.sha }}
      # The GitOps repository must be checked out separately;
      # GITOPS_TOKEN is a secret with write access to it.
      - name: Checkout GitOps repository
        uses: actions/checkout@v4
        with:
          repository: myorg/k8s-manifests
          token: ${{ secrets.GITOPS_TOKEN }}
          path: k8s-manifests
      - name: Update manifests
        run: |
          cd k8s-manifests
          git config user.name "ci-bot"
          git config user.email "ci-bot@example.com"
          kustomize edit set image app=registry.example.com/app:${{ github.sha }}
          git commit -am "Deploy ${{ github.sha }}"
          git push
```
Key takeaway: The Build pipeline never touches the cluster. Only the GitOps repository triggers deployments via the operator.
Version manifests with Helm or Kustomize
Why: Raw YAML manifests become unmanageable at scale. 70% of organizations running Kubernetes in the cloud rely on Helm (Orca Security 2025).
How: Choose Helm for complex applications requiring templating, Kustomize for simple variations between environments. Consult ConfigMaps and Secrets best practices for configuration management.
Kustomize example:
```yaml
# base/kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
```

```yaml
# overlays/production/kustomization.yaml
resources:
  - ../../base
# "patches" replaces the deprecated "patchesStrategicMerge" field
patches:
  - path: replica-count.yaml
  - path: resource-limits.yaml
```
This structure allows maintaining a common base while customizing each environment. For developers discovering these tools, the transition from Docker Compose to Kubernetes represents a key step.
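The `replica-count.yaml` patch referenced in the production overlay can be as small as this (a sketch; the Deployment name is assumed to match the base):

```yaml
# overlays/production/replica-count.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5
```

Kustomize merges this fragment over the base Deployment, so production overrides only what differs.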
Implement automated testing at every stage
Why: A pipeline without automated tests is just a glorified deployment script. Tests ensure that code and manifests work together.
How: Structure your tests in four levels: unit tests, integration tests, manifest validation (kubeval/kubeconform), and post-deployment smoke tests.
Manifest validation example:
```shell
# Validate manifest syntax
kubeconform -strict -kubernetes-version 1.29.0 manifests/
# Test security policies
kube-linter lint manifests/
# Check best practices
polaris audit --audit-path manifests/
```
Application observability and monitoring complete this approach by detecting post-deployment issues.
| Test Type | Recommended Tool | Execution Time |
|---|---|---|
| YAML validation | kubeconform | Pre-commit |
| Security policies | kube-linter, OPA | CI build |
| Integration tests | Kind, k3s | CI build |
| Smoke tests | kubectl, curl | Post-deploy |
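The post-deploy smoke tests from the table above can be a short CI step (a sketch; the deployment name and health-check URL are assumptions):

```yaml
- name: Smoke test
  run: |
    # Wait until the rollout has fully converged
    kubectl rollout status deployment/my-app -n production --timeout=120s
    # Fail the pipeline if the health endpoint does not answer 2xx
    curl --fail --max-time 10 https://my-app.example.com/healthz
```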
Automate rollbacks with health checks
Why: A failing production deployment is expensive. Automatic rollbacks limit incident impact to a few minutes.
How: Configure precise readinessProbes and livenessProbes. Use Progressive Delivery with Argo Rollouts or Flagger for automated canary deployments.
Automatic rollout example:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-app
spec:
  replicas: 5
  strategy:
    canary:
      steps:
        - setWeight: 20
        - pause: {duration: 5m}
        - setWeight: 50
        - pause: {duration: 5m}
        - setWeight: 100
      analysis:
        templates:
          - templateName: success-rate
        startingStep: 1
```
If the analysis detects an abnormal error rate, the rollback executes automatically. To diagnose failures, refer to the guide on resolving common deployment errors.
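The readiness and liveness probes mentioned above could look like this on the pod template (the paths, port, and timings are assumptions to adapt to your application):

```yaml
containers:
  - name: app
    image: registry.example.com/app:abc1234
    readinessProbe:          # gates traffic until the pod is ready
      httpGet:
        path: /healthz/ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:           # restarts the container if it hangs
      httpGet:
        path: /healthz/live
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
```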
Key takeaway: Configure business metrics (error rate, P99 latency) as rollback criteria, not just technical health checks.
Scan vulnerabilities in the pipeline
Why: Docker images often contain vulnerabilities inherited from base images. Scanning upstream blocks flaws before production.
How: Integrate Trivy, Grype, or Snyk into your CI pipeline. Define blocking severity thresholds (CRITICAL, HIGH).
Trivy example:
```yaml
- name: Scan vulnerabilities
  run: |
    trivy image --exit-code 1 --severity CRITICAL,HIGH \
      registry.example.com/app:${{ github.sha }}
```
Pipeline security extends to runtime. The LFS460 Kubernetes Security Fundamentals training deepens these concepts for engineers preparing for CKS.
Use namespaces and quotas per environment
Why: Isolation between environments prevents interference. A development namespace should never impact production.
How: Create a namespace per environment with ResourceQuotas and LimitRanges. Use NetworkPolicies to isolate traffic.
ResourceQuota example:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
```
This configuration prevents the staging environment from consuming more resources than planned. Kubernetes cluster administration covers resource management in detail.
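The NetworkPolicies mentioned above can isolate a namespace by allowing only same-namespace traffic (a minimal sketch; an empty `podSelector` matches every pod in the namespace):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: staging
spec:
  podSelector: {}        # applies to all pods in staging
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # accept traffic only from pods in this namespace
```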
Implement ephemeral environments
Why: Permanent environments accumulate technical debt. Ephemeral environments guarantee tests on a clean base.
How: Create a temporary namespace for each pull request. Destroy it automatically after merge or close.
GitHub Actions example:
```yaml
- name: Create preview environment
  if: github.event_name == 'pull_request'
  run: |
    NAMESPACE="pr-${{ github.event.number }}"
    kubectl create namespace $NAMESPACE
    helm install app-preview ./charts/app \
      --namespace $NAMESPACE \
      --set image.tag=${{ github.sha }}
```
This practice integrates perfectly into a GitOps Kubernetes pipeline where each branch has its own environment.
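The automatic teardown after merge or close can be a second workflow triggered on the pull request's `closed` event (a sketch, assuming the same `pr-<number>` naming convention):

```yaml
name: Cleanup preview
on:
  pull_request:
    types: [closed]   # fires on both merge and close
jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - name: Delete preview namespace
        run: kubectl delete namespace "pr-${{ github.event.number }}" --ignore-not-found
```

Deleting the namespace removes the Helm release and every resource it created, so no orphaned objects accumulate.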
Centralize secrets with an external manager
Why: Secrets in Git, even encrypted, create risks. An external manager (Vault, AWS Secrets Manager) centralizes secrets and rotates them automatically.
How: Use External Secrets Operator to sync external secrets to Kubernetes.
```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: db-credentials
  data:
    - secretKey: password
      remoteRef:
        key: secret/data/production/db
        property: password
```
Anti-patterns to avoid in your pipelines
Deploying directly with kubectl apply: This approach bypasses GitOps and creates drift between declared and actual state. Always prefer a GitOps operator.
Using the latest tag: Impossible to trace which version is deployed. Use immutable tags based on commit SHA.
Storing secrets in plaintext in CI variables: Even "masked" variables appear in logs. Use an external secret manager.
Ignoring Resource Requests/Limits: Without limits, a pod can consume all node resources. Always define realistic requests and limits.
Testing only in production: Post-deployment tests don't replace pre-deployment tests. Validate in staging before production.
Take Action: Build Your GitOps Pipeline
Mastering CI/CD pipelines for Kubernetes represents a key skill for any Kubernetes infrastructure engineer. With the Kubernetes market reaching $8.41 billion by 2031 (Mordor Intelligence), these skills guarantee your employability.
To deepen Kubernetes application development and prepare for CKAD certification, SFEIR Institute offers intensive training led by practitioners.
Recommended training:
- LFD459 Kubernetes for Application Developers: 3 days to master containerized application deployment and prepare for CKAD
- LFS458 Kubernetes Administration: 4 days for infrastructure engineers preparing for CKA
- Kubernetes Fundamentals: 1 day to discover container orchestration
Consult the complete Kubernetes Training guide to identify the path suited to your profile.