Key Takeaways
- ✓ 59% of organizations now develop the majority of their applications as cloud-native (CNCF Research 2025)
- ✓ 7 essential steps: cloud-native design, microservices, images, resources, secrets, health checks, observability
- ✓ Cloud-native design determines success or failure in production
Designing containerized applications for Kubernetes determines the success or failure of your production deployments. According to the CNCF Annual Survey 2025, 82% of container users run Kubernetes in production. This massive adoption implies a responsibility: design applications that actually leverage the orchestrator's capabilities rather than simply "containerizing" legacy code.
TL;DR: This guide details the 7 essential steps for designing a cloud-native application ready for Kubernetes. You'll learn how to structure your microservices, optimize Docker images, configure resources, manage secrets, implement health checks, and prepare observability. Each step includes verifiable commands and common errors to avoid.
Developers who want to master these skills follow the LFD459 Kubernetes for Application Developers training.
Prerequisites: What You Need Before Starting
Before designing your application for Kubernetes, validate these technical prerequisites:
# Check Docker
docker --version
# Expected output: Docker version 24.x or higher
# Check kubectl
kubectl version --client
# Expected output: Client Version: v1.29.x
# Check cluster access
kubectl cluster-info
# Expected output: Kubernetes control plane is running at https://...
Required skills:
- Mastery of Docker and multi-stage Dockerfiles
- Understanding of Kubernetes concepts (Pods, Deployments, Services)
- Familiarity with YAML and Kubernetes manifests
If you're a beginner, first check the Kubernetes application development section to acquire fundamentals.
Step 1: How to Apply Cloud-Native Design Principles?
Kubernetes cloud-native development relies on precise architectural principles.
The 12 Factors Adapted to Kubernetes
Kubernetes container architecture is based on the 12-Factor methodology:
| Factor | Kubernetes Application | Implementation |
|---|---|---|
| Codebase | One repo per microservice | Git + CI/CD |
| Config | Externalize configuration | ConfigMaps, Secrets |
| Backing services | Attachable services | Kubernetes Services |
| Processes | Stateless | Ephemeral Pods |
| Port binding | Self-contained | containerPort |
| Logs | Stdout/stderr | Centralized collection |
# Example: properly designed stateless application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service  # must match spec.selector.matchLabels
    spec:
      containers:
        - name: api
          image: myapp/api:v1.2.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: url
Remember: A cloud-native application never stores state locally. All persistence goes through external services (databases, caches, object storage).
Step 2: How to Structure the Application as Microservices?
According to CNCF Research 2025, 59% of organizations now develop the majority of their applications as cloud-native. This transition requires rigorous structuring.
Define Service Boundaries
A microservice is an independently deployable unit that encapsulates a single business functionality. Each service has its own database and communicates via APIs.
# Recommended project structure
tree ./kubernetes-app
# Expected output:
# kubernetes-app/
# ├── services/
# │ ├── api-gateway/
# │ │ ├── Dockerfile
# │ │ ├── k8s/
# │ │ │ ├── deployment.yaml
# │ │ │ └── service.yaml
# │ │ └── src/
# │ ├── user-service/
# │ └── order-service/
# ├── charts/
# │ └── app/
# └── skaffold.yaml
Configure Inter-Service Communication
# Service discovery via Kubernetes DNS
apiVersion: v1
kind: Service
metadata:
name: user-service
spec:
selector:
app: user-service
ports:
- port: 80
targetPort: 8080
---
# The api-gateway service can call: http://user-service.default.svc.cluster.local
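As a sketch of how a caller uses this DNS name, here is a minimal Go client. The `<service>.<namespace>.svc.cluster.local` convention is standard CoreDNS behavior; the /users endpoint and the 3-second timeout are illustrative assumptions, not part of the manifests above:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// serviceURL builds the in-cluster DNS name for a Service, following the
// <service>.<namespace>.svc.cluster.local convention resolved by CoreDNS.
func serviceURL(service, namespace string, port int) string {
	return fmt.Sprintf("http://%s.%s.svc.cluster.local:%d", service, namespace, port)
}

func main() {
	// Always set a timeout: a hung dependency must not hang the caller.
	client := &http.Client{Timeout: 3 * time.Second}
	// Hypothetical endpoint: assumes user-service exposes /users on port 80.
	resp, err := client.Get(serviceURL("user-service", "default", 80) + "/users")
	if err != nil {
		fmt.Println("user-service unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```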
For deeper configuration management, see ConfigMaps and Secrets Kubernetes: configuration best practices.
Step 3: How to Optimize Dockerfiles for Kubernetes?
Image optimization directly impacts deployment times and security. According to Orca Security 2025, 70% of organizations use Kubernetes in cloud environments, making image optimization critical.
Multi-Stage Build to Reduce Size
# Dockerfile optimized for Kubernetes
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/server
FROM gcr.io/distroless/static:nonroot
COPY --from=builder /app/server /server
USER 65532:65532
EXPOSE 8080
ENTRYPOINT ["/server"]
Verify image size:
docker build -t myapp:optimized .
docker images myapp:optimized
# Expected output:
# REPOSITORY TAG SIZE
# myapp optimized 15MB # vs 800MB+ with golang:1.22
Scan for Vulnerabilities Before Deployment
# Use Trivy to scan the image
trivy image myapp:optimized
# Expected output: vulnerabilities classified by severity
# Total: 0 (UNKNOWN: 0, LOW: 0, MEDIUM: 0, HIGH: 0, CRITICAL: 0)
Remember: Use distroless or Alpine base images. Always run as non-root user (USER 65532).
Step 4: How to Configure Kubernetes Resources Correctly?
Resource management determines your applications' stability. A Kubernetes software engineer must master requests and limits.
Define Requests and Limits
Requests guarantee minimum resources. Limits cap consumption.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  selector:
    matchLabels:
      app: api-service
  template:
    metadata:
      labels:
        app: api-service
    spec:
      containers:
        - name: api
          image: myapp/api:v1.2.0
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
Verify allocated resources:
kubectl describe pod api-service-xxxx | grep -A 6 "Limits:"
# Expected output:
# Limits:
# cpu: 500m
# memory: 256Mi
# Requests:
# cpu: 100m
# memory: 128Mi
Calculate Required Resources
| Application Size | CPU requests | Memory requests | CPU limits | Memory limits |
|---|---|---|---|---|
| Small API | 50m-100m | 64Mi-128Mi | 200m-500m | 256Mi |
| Business service | 100m-250m | 128Mi-256Mi | 500m-1000m | 512Mi |
| Intensive worker | 250m-500m | 256Mi-512Mi | 1000m-2000m | 1Gi |
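To reason about these quantities in a capacity-planning script, the binary suffixes decode as powers of 1024. This is a simplified sketch covering only the common Ki/Mi/Gi suffixes; production code should use `resource.Quantity` from k8s.io/apimachinery, which handles the full grammar:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMemory converts a simplified Kubernetes memory quantity ("128Mi",
// "1Gi", "512Ki") into bytes. A bare number is treated as bytes.
func parseMemory(q string) (int64, error) {
	suffixes := map[string]int64{
		"Ki": 1 << 10,
		"Mi": 1 << 20,
		"Gi": 1 << 30,
	}
	for suf, mult := range suffixes {
		if strings.HasSuffix(q, suf) {
			n, err := strconv.ParseInt(strings.TrimSuffix(q, suf), 10, 64)
			if err != nil {
				return 0, err
			}
			return n * mult, nil
		}
	}
	return strconv.ParseInt(q, 10, 64)
}

func main() {
	b, _ := parseMemory("256Mi")
	fmt.Println(b) // 268435456 bytes
}
```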
For complex deployments, discover Kubernetes Helm Charts: essential commands cheat sheet.
Step 5: How to Manage Configuration and Secrets?
Separating configuration from code is a fundamental principle. As Chris Aniszczyk, CNCF CTO states: "Kubernetes is no longer experimental but foundational."
Create ConfigMaps
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  API_TIMEOUT: "30s"
  FEATURE_FLAGS: |
    {
      "new_checkout": true,
      "beta_api": false
    }
kubectl apply -f configmap.yaml
kubectl get configmap app-config -o yaml
# Expected output: ConfigMap created with data
Manage Secrets Securely
# Create a secret from literal values
kubectl create secret generic db-credentials \
--from-literal=username=admin \
--from-literal=password='S3cur3P@ss!'
# Verify (values are base64 encoded)
kubectl get secret db-credentials -o jsonpath='{.data.username}' | base64 -d
# Expected output: admin
Remember: Never commit secrets to Git. Use tools like Sealed Secrets or External Secrets Operator for production environments.
Step 6: How to Implement Health Checks?
Kubernetes probes determine if your application is ready to receive traffic. Without health checks, Kubernetes cannot auto-heal your workloads.
Configure the Three Types of Probes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  template:
    spec:
      containers:
        - name: api
          image: myapp/api:v1.2.0
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          startupProbe:
            httpGet:
              path: /healthz
              port: 8080
            failureThreshold: 30
            periodSeconds: 10
| Probe | Objective | Action on Failure |
|---|---|---|
| livenessProbe | Is the application alive? | Container restart |
| readinessProbe | Is it ready for traffic? | Pod removed from Service endpoints |
| startupProbe | Is startup complete? | Container restart (liveness/readiness wait until it succeeds) |
Verify probe status:
kubectl describe pod api-service-xxxx | grep -A 3 "Liveness:"
# Expected output:
# Liveness: http-get http://:8080/healthz delay=15s timeout=1s period=10s
# Readiness: http-get http://:8080/ready delay=5s timeout=1s period=5s
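On the application side, the two HTTP endpoints can be backed by a shared readiness flag. A minimal Go sketch; the dependency check is stubbed out, where a real service would ping its database before marking itself ready:

```go
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

// ready flips to true once dependencies (database, caches) have been checked;
// until then the readiness probe must fail so no traffic reaches the Pod.
var ready atomic.Bool

// readyStatus maps the dependency state to the HTTP status served on /ready.
func readyStatus() int {
	if ready.Load() {
		return http.StatusOK
	}
	return http.StatusServiceUnavailable
}

func main() {
	// Liveness: the process is up and able to answer at all.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	// Readiness: 503 until dependency checks pass, then 200.
	http.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(readyStatus())
	})
	ready.Store(true) // in a real service, set after the dependency checks
	fmt.Println("listening on :8080")
	http.ListenAndServe(":8080", nil)
}
```

Keeping liveness and readiness separate matters: a lost database connection should fail readiness (stop traffic) without failing liveness (no pointless restart).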
For complete observability coverage, see Observability and monitoring of Kubernetes applications.
Step 7: How to Prepare Observability from Design?
With 15.6 million developers using cloud-native technologies according to CNCF and SlashData, observability is becoming an industry standard.
Implement Structured Logs
# Application configuration for JSON logs
apiVersion: v1
kind: ConfigMap
metadata:
  name: logging-config
data:
  config.yaml: |
    logging:
      format: json
      level: info
      fields:
        service: api-service
        version: v1.2.0
Example log output:
kubectl logs api-service-xxxx | head -1
# Expected output (structured JSON):
# {"timestamp":"2026-02-28T10:15:30Z","level":"info","msg":"Request processed","method":"GET","path":"/api/users","duration_ms":45}
Expose Prometheus Metrics
// Example /metrics endpoint in Go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Expose the default Prometheus registry (includes Go runtime metrics)
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
# Annotations for Prometheus scraping
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
Remember: Instrument from development. Add Kubernetes labels (app, version, environment) to your metrics for easy filtering.
Troubleshooting: Solving Common Errors
Error: CrashLoopBackOff
Symptom: Pod restarts in a loop.
kubectl get pods
# NAME READY STATUS RESTARTS
# api-service-xxxx 0/1 CrashLoopBackOff 5
Diagnosis:
kubectl logs api-service-xxxx --previous
kubectl describe pod api-service-xxxx | grep -A 5 "Last State:"
Common causes:
- livenessProbe too aggressive (insufficient initialDelaySeconds)
- Missing dependency (database not accessible)
- Configuration error (missing environment variable)
Error: ImagePullBackOff
kubectl describe pod api-service-xxxx | grep -A 3 "Events:"
# Warning Failed pull access denied for myapp/api
Solution: Verify registry credentials and reference them from the Pod spec:
kubectl create secret docker-registry regcred \
  --docker-server=ghcr.io \
  --docker-username=<user> \
  --docker-password=<token>
# Then reference the secret via spec.imagePullSecrets in the Pod template
Error: OOMKilled
Symptom: Container killed for memory overflow.
kubectl describe pod api-service-xxxx | grep OOMKilled
# Reason: OOMKilled
Solution: Increase memory limits after analysis:
kubectl top pod api-service-xxxx
# Analyze actual consumption before adjusting
For deeper troubleshooting, explore the Kubernetes Training Certifications section resources.
Summary and Next Steps
| Step | Validation | Verification Command |
|---|---|---|
| Cloud-native design | Stateless application | `kubectl exec -it pod -- ls /tmp` (empty) |
| Microservices | DNS communication | `kubectl exec -it pod -- nslookup service-name` |
| Optimized images | Size < 100 MB | `docker images` |
| Resources | Requests/limits defined | `kubectl describe pod` |
| Secrets | Externalized | `kubectl get secrets` |
| Health checks | 3 probes configured | `kubectl describe pod \| grep -i probe` |
| Observability | Metrics exposed | `curl pod-ip:8080/metrics` |
For further progression in your journey, check the LFD459 training detailed program or the complete Kubernetes Training guide.
Take Action: Develop Your Kubernetes Skills
This guide covers the fundamentals of designing applications for Kubernetes. To master these skills in real conditions with practitioner trainers:
- LFD459 Kubernetes for Application Developers: 3 days to prepare for CKAD certification and design cloud-native applications
- Kubernetes Fundamentals: 1 day to discover the Kubernetes ecosystem
- LFS458 Kubernetes Administration: 4 days to master cluster administration and prepare for CKA