
Deploy a Microservices Application on Kubernetes: Complete Tutorial

SFEIR Institute

Key Takeaways

  • Deploy a complete application (frontend, API, database) in 45 minutes
  • Microservices architectures on Kubernetes reduce deployment times by 50% through service isolation
  • Master Deployments, Services, Ingress, and ConfigMaps to structure your microservices

This tutorial walks you through step by step to deploy a microservices application on Kubernetes. You'll learn to structure your services, configure inter-pod networking, manage ConfigMaps and Secrets, and expose your application to the outside world.

According to the CNCF Annual Survey 2025, 82% of container users run Kubernetes in production. This guide transforms that statistic into concrete skills.

TL;DR: Deploy a complete microservices application (frontend, backend API, database) on Kubernetes in 45 minutes. You'll master Deployments, Services, Ingress, ConfigMaps, and inter-service debugging.

To master these skills, discover the LFD459 Kubernetes for Application Developers training.

Why This Guide for Deploying Microservices on Kubernetes?

Microservices architectures dominate cloud-native development. Yet, the transition from Docker Compose to Kubernetes confuses many developers. This guide bridges that gap.

The problem solved: you have a working application locally with Docker Compose. You need to deploy it to a production Kubernetes cluster. This tutorial shows you exactly how to proceed.

According to Mordor Intelligence, the Kubernetes market will reach $8.41 billion by 2031 (21.85% CAGR). Mastering these deployments becomes a strategic skill.

Key takeaway: This guide covers complete deployment of a 3-tier stack: React frontend, Node.js API, PostgreSQL database. Each component illustrates a different Kubernetes pattern.

Prerequisites: Environment and Required Tools

Before starting, verify your local environment. See our local Kubernetes installation guide if needed.

Required Versions and Tools

| Tool | Minimum Version | Verification Command |
|------|-----------------|----------------------|
| kubectl | 1.28+ | kubectl version --client |
| minikube / kind | 1.32+ / 0.20+ | minikube version |
| Docker | 24.0+ | docker --version |
| Helm | 3.14+ | helm version |

# Quick environment verification
kubectl cluster-info
kubectl get nodes

Start minikube with enough resources:

minikube start --cpus=4 --memory=8192 --driver=docker
minikube addons enable ingress
minikube addons enable metrics-server

Microservices Project Structure

Our sample application includes three services:

microservices-demo/
├── frontend/           # React SPA
│   ├── Dockerfile
│   └── k8s/
│       ├── deployment.yaml
│       └── service.yaml
├── api/                # Node.js Express API
│   ├── Dockerfile
│   └── k8s/
│       ├── deployment.yaml
│       ├── service.yaml
│       └── configmap.yaml
├── database/           # PostgreSQL
│   └── k8s/
│       ├── statefulset.yaml
│       ├── service.yaml
│       └── secret.yaml
└── ingress.yaml

Key takeaway: Organize your Kubernetes manifests by service, not by resource type. This structure facilitates independent deployment of each microservice.

Deploying the Database

Let's start with PostgreSQL. A database requires a StatefulSet to guarantee persistence and stable pod identity.

Create the Secret for Credentials

# database/k8s/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
  namespace: microservices
type: Opaque
stringData:
  POSTGRES_USER: appuser
  POSTGRES_PASSWORD: SecureP@ss2026!
  POSTGRES_DB: microservices_db

Apply the namespace and secret:

kubectl create namespace microservices
kubectl apply -f database/k8s/secret.yaml -n microservices
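
A note on stringData: Kubernetes base64-encodes these values into the Secret's data field at admission, which is why kubectl get secret -o yaml shows encoded values, not plaintext. A quick local sketch of that encoding (plain shell, no cluster needed):

```shell
# stringData values end up base64-encoded in the Secret's data field
encoded=$(printf '%s' 'appuser' | base64)
echo "$encoded"                      # YXBwdXNlcg==

# kubectl get secret shows the encoded form; decode to read it back
printf '%s' "$encoded" | base64 -d   # appuser
echo
```

Remember that base64 is encoding, not encryption, which is why the production checklist later recommends an external secrets manager.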

Configure Persistent Storage

See our complete guide on Kubernetes persistent volumes to explore this topic.

# database/k8s/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: microservices
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:16.2
        ports:
        - containerPort: 5432
        envFrom:
        - secretRef:
            name: postgres-secret
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
  volumeClaimTemplates:
  - metadata:
      name: postgres-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi

Headless Service for PostgreSQL

# database/k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: microservices
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432

Apply the database manifests and watch the pod come up:

kubectl apply -f database/k8s/ -n microservices
kubectl get pods -n microservices -w

Wait for the postgres-0 pod to show Running and 1/1 ready.
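
Because the StatefulSet sits behind a headless Service, each pod also gets a stable, predictable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local. A sketch of how that name is composed for our single replica (plain shell, no cluster required):

```shell
# Stable DNS identity of a StatefulSet pod behind a headless Service
statefulset="postgres"; ordinal=0
service="postgres"; namespace="microservices"

pod_dns="${statefulset}-${ordinal}.${service}.${namespace}.svc.cluster.local"
echo "$pod_dns"   # postgres-0.postgres.microservices.svc.cluster.local
```

This per-pod identity is the reason databases use StatefulSets rather than Deployments: the name survives pod restarts.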

Deploy the Node.js Backend API

The backend API illustrates using ConfigMaps for external configuration and probes for health checking.

ConfigMap for Configuration

# api/k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
  namespace: microservices
data:
  NODE_ENV: "production"
  PORT: "3000"
  DB_HOST: "postgres.microservices.svc.cluster.local"
  DB_PORT: "5432"
  LOG_LEVEL: "info"
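
Inside the pod, these keys arrive as environment variables (via envFrom, shown in the Deployment below) alongside the Secret's credentials, and the API can assemble its connection string from them. A sketch of that composition, with values copied from the manifests (the DATABASE_URL name is illustrative, not part of the manifests):

```shell
# Values the container receives via envFrom (ConfigMap + Secret)
DB_HOST="postgres.microservices.svc.cluster.local"
DB_PORT="5432"
POSTGRES_USER="appuser"
POSTGRES_DB="microservices_db"

# Compose a PostgreSQL connection string from the injected variables
DATABASE_URL="postgresql://${POSTGRES_USER}@${DB_HOST}:${DB_PORT}/${POSTGRES_DB}"
echo "$DATABASE_URL"
```

Keeping host, port, and log level in the ConfigMap means you can repoint the API at another database without rebuilding the image.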

Deployment with Probes and Resources

# api/k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: microservices
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: your-registry/microservices-api:v1.0.0
        ports:
        - containerPort: 3000
        envFrom:
        - configMapRef:
            name: api-config
        - secretRef:
            name: postgres-secret
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"

Key takeaway: Always define resource requests and limits. Without them, a pod can consume all node resources and impact other workloads.
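
A reminder on the units used above: CPU is expressed in millicores (1000m = 1 core) and memory in binary units (1Mi = 1024 × 1024 bytes). A quick conversion sketch for the API's CPU values:

```shell
# Convert the API's CPU request/limit from millicores to cores
request_m=100; limit_m=200
awk -v r="$request_m" -v l="$limit_m" \
  'BEGIN { printf "request=%.2f core, limit=%.2f core\n", r/1000, l/1000 }'
```

So each API replica asks the scheduler for a tenth of a core and is throttled at a fifth of a core.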

ClusterIP Service for API

# api/k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: microservices
spec:
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 3000

Apply the API manifests and follow the rollout:

kubectl apply -f api/k8s/ -n microservices
kubectl rollout status deployment/api -n microservices

Deploy the React Frontend

The static frontend is served by Nginx; its configuration differs slightly from the backend services.

Optimized Frontend Deployment

# frontend/k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: microservices
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: your-registry/microservices-frontend:v1.0.0
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 30
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"

# frontend/k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: microservices
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80

Configure Ingress for External Exposure

Ingress routes external traffic to your services. See our Ingress Controller guide for advanced configuration.

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices-ingress
  namespace: microservices
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
  - host: microservices.local
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: api
            port:
              number: 80
      - path: /()(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: frontend
            port:
              number: 80

The rewrite-target annotation with capture groups strips the /api prefix before forwarding, so a request to /api/health reaches the API as /health. (A bare rewrite-target: / would rewrite every matched path to /, breaking both routes.)

kubectl apply -f ingress.yaml -n microservices

# Get Ingress IP
kubectl get ingress -n microservices

# Add hosts entry (Linux/Mac)
echo "$(minikube ip) microservices.local" | sudo tee -a /etc/hosts

Verify and Debug the Deployment

According to Cloud Native Now, IT teams spend 34 working days per year resolving Kubernetes problems. These commands reduce that time.

Essential Diagnostic Commands

# Overview of all pods
kubectl get pods -n microservices -o wide

# Logs from a specific pod
kubectl logs -n microservices deployment/api --tail=100

# Real-time logs
kubectl logs -n microservices -l app=api -f

# Describe a failing pod
kubectl describe pod -n microservices <pod-name>

# Shell access for debugging
kubectl exec -it -n microservices deployment/api -- /bin/sh

Test Inter-Service Connectivity

# From an API pod, test connection to PostgreSQL
kubectl exec -it -n microservices deployment/api -- \
nc -zv postgres.microservices.svc.cluster.local 5432

# Test API endpoint
kubectl run curl-test --rm -it --image=curlimages/curl -- \
curl http://api.microservices.svc.cluster.local/health

See our Kubernetes cheatsheet for other useful commands.

Key takeaway: Kubernetes internal DNS follows the format <service>.<namespace>.svc.cluster.local. Use this format for inter-service communication.
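
From a pod inside the same namespace, the short service name alone resolves; the fully qualified form is only required across namespaces. A sketch of the three equivalent names for our API service (plain shell, no cluster required):

```shell
service="api"; namespace="microservices"

# From a pod in the microservices namespace, all three resolve to the same ClusterIP:
echo "$service"
echo "${service}.${namespace}"
echo "${service}.${namespace}.svc.cluster.local"
```

This is why the ConfigMap's DB_HOST could also have been just "postgres"; the fully qualified form simply makes the target explicit.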

Manage Updates with Rolling Updates

Kubernetes handles deployments without interruption through rolling updates.

Deployment Strategy

# Add to the Deployment spec
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
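
With replicas: 3, maxSurge: 1 and maxUnavailable: 0, a rollout never drops below 3 ready pods and briefly runs a 4th while new pods come up. A sketch of those bounds:

```shell
# Pod-count bounds during a rolling update of the API Deployment
replicas=3; max_surge=1; max_unavailable=0

echo "max pods during rollout: $((replicas + max_surge))"
echo "min ready pods during rollout: $((replicas - max_unavailable))"
```

maxUnavailable: 0 is what guarantees zero-interruption deployments; the cost is the temporary extra pod allowed by maxSurge.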

Update Image

# Update API image
kubectl set image deployment/api -n microservices \
api=your-registry/microservices-api:v1.1.0

# Follow deployment
kubectl rollout status deployment/api -n microservices

# In case of problem, rollback
kubectl rollout undo deployment/api -n microservices

# Revision history
kubectl rollout history deployment/api -n microservices

Scaling and High Availability

Horizontal scaling responds to load variations.

# Manual scaling
kubectl scale deployment/api -n microservices --replicas=5

# Automatic scaling with HPA
kubectl autoscale deployment/api -n microservices \
--min=3 --max=10 --cpu-percent=70

Complete HPA Manifest

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
  namespace: microservices
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
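
The HPA scales using desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), then clamps the result to min/max. A worked sketch with the CPU target above (illustrative numbers):

```shell
# 3 replicas averaging 140% CPU against a 70% target
current=3; usage=140; target=70
min=3; max=10

# Integer ceiling of (current * usage / target)
desired=$(( (current * usage + target - 1) / target ))
[ "$desired" -lt "$min" ] && desired=$min
[ "$desired" -gt "$max" ] && desired=$max
echo "desired replicas: $desired"   # desired replicas: 6
```

With two metrics defined, the HPA computes a desired count per metric and takes the highest.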

Simplify with Helm

For recurring deployments, Helm standardizes your configurations. See our guide Getting started with Helm.

According to Orca Security, 70% of organizations running Kubernetes in the cloud use Helm.

# Install chart from repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgres bitnami/postgresql -n microservices \
  --set auth.postgresPassword=SecureP@ss2026!

Key takeaway: Helm lets you version your deployments like code. A helm upgrade with --atomic ensures automatic rollback on failure.

Production Best Practices

Before going to production, check these essential points.

Pre-Production Checklist

| Criterion | Verified |
|-----------|----------|
| Requests/limits defined on all containers | ☐ |
| Liveness and readiness probes configured | ☐ |
| Secrets stored in an external manager | ☐ |
| Restrictive network policies applied | ☐ |
| PodDisruptionBudget configured | ☐ |
| Centralized logs (Loki, ELK) | ☐ |
| Metrics exposed (Prometheus) | ☐ |

Restrictive Network Policy

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-network-policy
  namespace: microservices
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 3000
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres
    ports:
    - protocol: TCP
      port: 5432
  # Without this rule, restricting egress also blocks DNS lookups
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

Common Errors and Solutions

| Error | Probable Cause | Solution |
|-------|----------------|----------|
| ImagePullBackOff | Image not found or missing credentials | Check the image name and create an imagePullSecret |
| CrashLoopBackOff | Application crashing at startup | Check logs: kubectl logs <pod-name> |
| Pending | Insufficient resources | Compare requests with cluster capacity |
| Connection refused | Misconfigured Service | Check the selector and ports |
"Don't let your knowledge remain theoretical - set up a real Kubernetes environment to solidify your skills."
- TealHQ Kubernetes DevOps Guide

Going Further: Additional Resources

This tutorial covers the fundamentals. To go deeper, explore our Kubernetes Tutorials and Practical Guides hub.

CKAD certification validates these deployment skills. The LFD459 training prepares for this exam in 3 days (21h) with practical labs similar to this tutorial.

Key takeaway: 71% of Fortune 100 companies run Kubernetes in production (CNCF Project Journey Report). These skills are directly valuable in the market.

Take Action: SFEIR Trainings

You've deployed your first microservices application. To master Kubernetes in depth:

LFD459 Kubernetes for Application Developers training: 3 days to prepare for CKAD certification. Practical labs, advanced deployments, debugging.

Kubernetes Fundamentals: 1 day to discover the Kubernetes ecosystem if you're a beginner.

LFS458 Kubernetes Administration training: 4 days to administer production clusters and prepare for CKA.

Consult our advisors to build your certification path.