
Configure Kubernetes Cluster Networking: CNI, Services and Ingress

SFEIR Institute

Key Takeaways

  • 68% of Kubernetes production incidents involve network issues (CNCF 2024)
  • Calico, ClusterIP/LoadBalancer Services, and NGINX Ingress in 45 minutes

Kubernetes CNI network configuration determines how your Pods communicate with each other and with the outside world. Without a working network layer, your applications fail silently. According to the CNCF Survey 2024, 68% of Kubernetes production incidents involve network problems. This guide walks you through configuring CNI, Services, and Ingress on your cluster, step by step.

TL;DR: You will install Calico as the CNI plugin, create ClusterIP and LoadBalancer Services, then configure an NGINX Ingress Controller to expose your applications. Estimated time: 45 minutes.

To master these skills in depth, discover the LFS458 Kubernetes Administration training.

Prerequisites Before Kubernetes CNI Network Configuration

Before starting, you need a running Kubernetes cluster (for example, one initialized with kubeadm) and kubectl configured to reach it.

Verify your cluster access:

kubectl cluster-info
# Expected output:
# Kubernetes control plane is running at https://192.168.1.10:6443
# CoreDNS is running at https://192.168.1.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Remember: Without a CNI plugin installed, your Pods remain in Pending state. Networking is mandatory, not optional.

Step 1: Install a CNI Plugin (Calico)

Why Choose Calico for Your Cluster?

Calico is the most widely deployed CNI plugin in production according to the Datadog State of Kubernetes 2024 report. It offers native Network Policies, BGP support, and better network performance than Flannel. Recent releases (v3.28 and later) also ship an optional eBPF data plane.

CNI Plugin   Network Policies   Performance    Complexity
Calico       Native             High (eBPF)    Medium
Flannel      No                 Medium         Low
Cilium       Native + L7        Very high      High
Weave        Partial            Medium         Low

Install Calico on Your Cluster

You will download and apply the Calico manifest. Adapt the CIDR to your kubeadm configuration:

# Download Calico v3.28 manifest
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml

# Verify deployment
kubectl get pods -n kube-system -l k8s-app=calico-node
# Expected output:
# NAME                READY   STATUS    RESTARTS   AGE
# calico-node-abc12   1/1     Running   0          2m
# calico-node-def34   1/1     Running   0          2m

If you use a different CIDR than 192.168.0.0/16, modify the CALICO_IPV4POOL_CIDR variable:

# First download the manifest
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml

# Modify the CIDR (example: 10.244.0.0/16)
sed -i 's/192.168.0.0\/16/10.244.0.0\/16/g' calico.yaml

# Apply the modified manifest
kubectl apply -f calico.yaml

Remember: The Calico CIDR must exactly match the one specified during kubeadm initialization with --pod-network-cidr.
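For reference, the kubeadm side of that agreement can also be expressed as a ClusterConfiguration file instead of a command-line flag. This is a sketch (the file name kubeadm-config.yaml is a convention, not a requirement); the podSubnet value is the one that must line up with CALICO_IPV4POOL_CIDR:

```yaml
# kubeadm-config.yaml (sketch): podSubnet plays the role of
# --pod-network-cidr and must match the Calico CALICO_IPV4POOL_CIDR
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16
```

You would pass it at cluster creation with kubeadm init --config kubeadm-config.yaml.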

To explore Kubernetes node management further, see our dedicated guide.

Step 2: Configure Kubernetes Services to Expose Your Applications

Understanding Service Types

A Kubernetes Service is an abstraction that exposes an application running on a set of Pods. You have three main types:

  • ClusterIP: accessible only from inside the cluster
  • NodePort: exposes the Service on a static port of each node
  • LoadBalancer: provisions an external load balancer (cloud providers)
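To illustrate the second type, a NodePort variant of the test Service used below might look like this (a sketch; the name nginx-nodeport and the port 30080 are arbitrary choices, the latter within the default 30000-32767 range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80          # Service port inside the cluster
    targetPort: 80    # container port on the Pods
    nodePort: 30080   # static port opened on every node
```

The application then becomes reachable at http://<any-node-ip>:30080 without a cloud load balancer.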

Create a ClusterIP Service

First deploy a test application, then expose it via a ClusterIP Service:

# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

Apply and verify:

kubectl apply -f nginx-deployment.yaml

kubectl get svc nginx-service
# Expected output:
# NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
# nginx-service   ClusterIP   10.96.142.58    <none>        80/TCP    30s

# Test connectivity from a Pod
kubectl run test-curl --rm -it --image=curlimages/curl -- curl nginx-service
# Expected output: nginx HTML page

This skill is part of the CKA certification program. The LFS458 Kubernetes Administration training prepares you for these practical scenarios.

Create a LoadBalancer Service

For cloud environments (AWS, GCP, Azure), you use a LoadBalancer Service:

# nginx-lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

kubectl apply -f nginx-lb.yaml

kubectl get svc nginx-lb -w
# Wait for EXTERNAL-IP to change from <pending> to a public IP
# NAME       TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
# nginx-lb   LoadBalancer   10.96.58.142   203.0.113.50     80:31234/TCP   2m

Remember: On an on-premise cluster, install MetalLB to get external IPs without a cloud provider.
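That MetalLB setup can be sketched as follows (assuming MetalLB is already installed in the metallb-system namespace, and that 203.0.113.240-250 is a free range on your LAN; adjust it to your network):

```yaml
# metallb-pool.yaml (sketch; the address range is an assumption)
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 203.0.113.240-203.0.113.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
```

Once applied, LoadBalancer Services receive an EXTERNAL-IP from this pool instead of staying in <pending>.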

Step 3: Configure Ingress for Advanced HTTP Routing

Install the NGINX Ingress Controller

Ingress is a resource that manages external HTTP/HTTPS access to your Services. You must first install an Ingress Controller; NGINX is the most widely deployed according to the CNCF 2024 report.

# Install NGINX Ingress Controller via Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --set controller.replicaCount=2

# Verify installation
kubectl get pods -n ingress-nginx
# Expected output:
# NAME                                        READY   STATUS    RESTARTS   AGE
# nginx-ingress-controller-5d4cf7b9b4-abc12   1/1     Running   0          1m
# nginx-ingress-controller-5d4cf7b9b4-def34   1/1     Running   0          1m

Create an Ingress Resource

Now configure an Ingress rule to route traffic to your nginx-service Service:

# nginx-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80

kubectl apply -f nginx-ingress.yaml

kubectl get ingress
# Expected output:
# NAME            CLASS   HOSTS             ADDRESS         PORTS   AGE
# nginx-ingress   nginx   app.example.com   203.0.113.100   80      1m

To test locally, add an entry to /etc/hosts:

echo "203.0.113.100 app.example.com" | sudo tee -a /etc/hosts

curl http://app.example.com
# Expected output: nginx HTML page

As a Kubernetes system administrator, you often configure multiple backends via a single Ingress. See the complete guide on Kubernetes cluster administration for advanced cases.
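A fan-out Ingress of that kind could be sketched like this (api-service and web-service are hypothetical backends, not created in this guide):

```yaml
# fanout-ingress.yaml (sketch; backend Service names are assumptions)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service   # hypothetical API backend
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service   # hypothetical web frontend
            port:
              number: 80
```

NGINX evaluates the more specific /api prefix first, so API traffic and web traffic are split across the two Services behind a single host.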

Step 4: Verify Complete Kubernetes CNI Network Configuration

Run these commands to validate your configuration:

# Verify all Pods have an IP
kubectl get pods -o wide
# Each Pod should have an IP in the configured CIDR

# Test internal DNS resolution
kubectl run dns-test --rm -it --image=busybox:1.36 -- nslookup nginx-service
# Expected output:
# Name:      nginx-service
# Address 1: 10.96.142.58 nginx-service.default.svc.cluster.local

# Check Service endpoints
kubectl get endpoints nginx-service
# Expected output:
# NAME            ENDPOINTS                                      AGE
# nginx-service   10.244.1.5:80,10.244.2.3:80,10.244.2.4:80     10m

Troubleshooting Common Network Issues

Pods Cannot Communicate Between Nodes

Verify the CNI plugin is working correctly:

# Check Calico logs
kubectl logs -n kube-system -l k8s-app=calico-node --tail=50

# Check network interfaces on a node
kubectl debug node/worker-1 -it --image=busybox -- ip addr show

See our detailed guide for diagnosing and resolving network problems.

Service Does Not Route to Pods

Verify labels match:

# Check Pod labels
kubectl get pods --show-labels

# Check Service selector
kubectl describe svc nginx-service | grep Selector

Remember: A Service without endpoints indicates a selector error. Labels must match exactly.

Ingress Returns 502 Error

This issue occurs when the Ingress Controller cannot reach the backend:

# Verify Service exists and has endpoints
kubectl get endpoints nginx-service

# Check Ingress Controller logs
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=100

A Kubernetes infrastructure engineer must master these diagnostics. CKA certification covers these troubleshooting scenarios.

Configure Network Policies to Secure Your Network

Network Policies should be considered mandatory for any production cluster. They let you define rules for incoming and outgoing traffic:

# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nginx-ingress
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          # label set automatically on every namespace since Kubernetes 1.21
          kubernetes.io/metadata.name: ingress-nginx
    ports:
    - protocol: TCP
      port: 80

kubectl apply -f network-policy.yaml

# Verify the policy is active
kubectl get networkpolicies
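Allow rules like the one above are usually paired with a default-deny policy, so that anything not explicitly allowed is blocked. A minimal sketch for the default namespace:

```yaml
# default-deny.yaml (sketch): an empty podSelector matches every Pod
# in the namespace; with no ingress rules listed, all inbound traffic is denied
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

Apply it first, then layer allow policies such as allow-nginx-ingress on top.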

To deepen network security, discover Kubernetes fundamentals and the complete Kubernetes training.

Take Action: Master Kubernetes Network Administration

You now know how to configure a complete Kubernetes network with CNI, Services, and Ingress. As a Kubernetes infrastructure engineer, these skills are essential for CKA certification and production environments.

To go further in mastering Kubernetes networking, contact our advisors to define your training path and check upcoming available sessions.