Introduction to Kubernetes and Cloud Native Technologies

Module 6

Networking in Kubernetes

In Module 5, we learned how to manage applications with deployments and expose them using services. Now, we’ll dive into networking in Kubernetes, exploring how pods communicate with each other and how applications are accessed from outside the cluster. By the end of this module, you’ll understand Kubernetes networking basics and set up a service to expose an application.

Learning Objectives:

  - Explain the Kubernetes networking model and how pods communicate with each other.
  - Describe the main service types (ClusterIP, NodePort, LoadBalancer) and when to use each.
  - Explain what an ingress is and why it is used for HTTP/HTTPS traffic.
  - Expose a deployment with a NodePort service and test it hands-on.

Kubernetes Networking Basics

Kubernetes networking is like the road system in a city (our cluster analogy from Module 2). It ensures that every apartment (pod) can talk to others and that customers (users) can reach the services they need. Kubernetes has a unique networking model that simplifies communication.

Key Networking Principles:

  - Every pod gets its own IP address; containers inside a pod share that address and talk to each other over localhost.
  - All pods can reach each other directly, even across nodes, without NAT (a "flat" network). The cluster's network (CNI) plugin implements this.
  - Pod IPs are ephemeral: when a pod is replaced, its IP changes. This is why services exist, to provide a stable address in front of changing pods.

Analogy: Think of pods as food trucks (from Module 5) moving around the city. Each has a temporary phone number (IP address). A service is like a central hotline that connects customers to the nearest truck, no matter where it’s parked.

Deep Dive into Services

We introduced services in Module 5 as a way to expose pods. Let’s explore them further. A service is a Kubernetes resource that groups pods (based on labels) and provides a stable IP address or DNS name to access them.

Types of Services:

  - ClusterIP (the default): a stable internal IP, reachable only from inside the cluster. Used for pod-to-pod communication.
  - NodePort: exposes the service on a static port (30000-32767 by default) on every node's IP, making it reachable from outside the cluster.
  - LoadBalancer: builds on NodePort and provisions an external load balancer through your cloud provider.

How Services Work:

  - A service selects pods by their labels (e.g., app: nginx).
  - Kubernetes gives the service a stable virtual IP and a DNS name (e.g., nginx-service).
  - Traffic sent to the service is load-balanced across the healthy pods matching the selector, even as individual pods come and go.

For example, if you have three Nginx pods, a service ensures traffic is distributed to them, even if one pod is replaced.
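To make this concrete, here is a minimal sketch of a ClusterIP service (the default type) for those Nginx pods. The name nginx-internal is illustrative, and it assumes the pods carry the label app: nginx:

apiVersion: v1
kind: Service
metadata:
  name: nginx-internal       # illustrative name; other pods reach it as "nginx-internal"
spec:
  selector:
    app: nginx               # groups all pods carrying this label
  ports:
  - port: 80                 # stable port on the service's internal IP
    targetPort: 80           # port the Nginx containers listen on

Because no type is specified, Kubernetes defaults to ClusterIP, so this service is reachable only from inside the cluster.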

Introduction to Ingress

An ingress is a Kubernetes resource that manages external access to services, typically for HTTP/HTTPS traffic. It’s like a city gateway that directs web visitors to the right service based on the URL.

Why Use Ingress?

  - A single entry point: one external address can route to many services, instead of exposing each service with its own NodePort or load balancer.
  - Smart routing: it supports path-based routing (e.g., /api to one service, /web to another) and host-based routing.
  - SSL/TLS termination: it can handle HTTPS certificates in one place.

How Ingress Works:

  - You define ingress rules (hosts, paths, and the services they map to) in an ingress resource.
  - An ingress controller (e.g., the NGINX Ingress Controller or a cloud provider's controller) watches these resources and configures a proxy to enforce the rules.
  - Note: an ingress resource does nothing on its own; it requires a running ingress controller in the cluster.

We’ll introduce ingress conceptually here, as setting up an ingress controller in a sandbox is complex. In a real-world scenario, you’d use a cloud provider or local setup like Minikube.
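Even though we won't deploy one here, a sketch of an ingress manifest helps make the idea concrete. This example assumes two hypothetical services, api-service and web-service, and an ingress controller already running in the cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress      # illustrative name
spec:
  rules:
  - http:
      paths:
      - path: /api           # requests under /api go to api-service
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /web           # requests under /web go to web-service
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80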

Hands-On Exercise: Expose an Application with a Service

Let’s deploy an Nginx application (like in Module 5) and expose it using a NodePort service, reinforcing how services enable networking. We’ll use a free sandbox to keep it beginner-friendly.

Step 1: Set Up Your Environment

  1. Go to Play with Kubernetes (https://labs.play-with-k8s.com/).

  2. Start a Kubernetes cluster (takes 1-2 minutes).

  3. Verify the cluster:

kubectl get nodes

What to expect: Nodes listed with status Ready.

Step 2: Create a Deployment

Create a deployment with two Nginx pods. Save this as nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                # run two identical Nginx pods
  selector:
    matchLabels:
      app: nginx             # manage pods carrying this label
  template:
    metadata:
      labels:
        app: nginx           # label applied to each pod, matched by the service below
    spec:
      containers:
      - name: nginx
        image: nginx

Apply it:

kubectl apply -f nginx-deployment.yaml

Check the deployment and pods:

kubectl get deployments
kubectl get pods

What to expect: A deployment with 2/2 replicas and two pods running.

Step 3: Expose the Deployment with a Service

Create a NodePort service. Save this as nginx-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx               # route traffic to pods with this label
  ports:
  - port: 80                 # the service's port inside the cluster
    targetPort: 80           # the container port traffic is forwarded to
    nodePort: 30080          # external port on every node (must be 30000-32767 by default)

Apply it:

kubectl apply -f nginx-service.yaml

Check the service:

kubectl get services

What to expect: nginx-service with a NodePort (e.g., 80:30080/TCP).

Step 4: Access the Application

In Play with Kubernetes, click the port 30080 link that appears (it opens http://<node-ip>:30080) to see the Nginx welcome page.

To test pod-to-pod communication (simulating internal networking):

  1. Start a temporary pod to act as a client:

kubectl run curl-pod --image=radial/busyboxplus:curl --restart=Never --rm -it -- sh

  2. Inside the pod’s shell, run:

curl nginx-service

What to expect: The Nginx welcome page HTML, showing the client pod can reach the service’s stable address.

Exit the shell with exit.
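The short name works because of cluster DNS. As a sketch, and assuming everything lives in the default namespace, these two commands inside the client pod are equivalent:

curl nginx-service
curl nginx-service.default.svc.cluster.local   # fully qualified form: <service>.<namespace>.svc.cluster.local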

Step 5: Clean Up

Delete the resources:

kubectl delete -f nginx-deployment.yaml

kubectl delete -f nginx-service.yaml

Optional: Using kubectl Commands

For a quicker test:

kubectl create deployment nginx-deployment --image=nginx --replicas=2
kubectl expose deployment nginx-deployment --type=NodePort --port=80 --target-port=80 --name=nginx-service

Note for Beginners: The command-line approach is quicker, but YAML files are the standard way to manage Kubernetes resources, so try the YAML method if you can. If you’re not ready for hands-on work, just follow along.

Optional: Local Setup with Minikube

If you want to try locally:

  1. Install Minikube (https://minikube.sigs.k8s.io/docs/start/) and kubectl.

  2. Start Minikube: minikube start.

  3. Apply the YAML files or use the kubectl create/expose commands.

  4. Access the service: minikube service nginx-service --url.

  5. Clean up: Delete resources, then minikube stop.

Troubleshooting Tips

  - Service not reachable? Check that the service’s selector exactly matches the pod labels: kubectl get pods --show-labels and kubectl describe service nginx-service (look at the Endpoints line).
  - Empty Endpoints means no pods matched the selector, or none are Ready yet.
  - NodePort rejected? The nodePort value must fall in the default range 30000-32767.
  - Pods stuck in Pending or ImagePullBackOff? Run kubectl describe pod <pod-name> to see the reason.

Quiz

  1. What ensures pods can communicate across nodes?
     A. The cluster’s network (CNI) plugin, which implements the flat pod network
     B. The kubelet restarting containers
     C. The Kubernetes scheduler

  2. What does a ClusterIP service do?
     A. Exposes the service on a static port on every node
     B. Provides a stable internal IP reachable only from inside the cluster
     C. Provisions an external cloud load balancer

  3. What is an ingress used for?
     A. Providing persistent storage to pods
     B. Managing external HTTP/HTTPS access to services, with URL-based routing
     C. Scaling deployments automatically

Answers: 1-A, 2-B, 3-B

Further Reading

  - Kubernetes documentation: Services (https://kubernetes.io/docs/concepts/services-networking/service/)
  - Kubernetes documentation: Ingress (https://kubernetes.io/docs/concepts/services-networking/ingress/)
  - Kubernetes documentation: Cluster Networking (https://kubernetes.io/docs/concepts/cluster-administration/networking/)

What’s Next?

In Module 7, we’ll explore Storage in Kubernetes, learning how to provide persistent storage for applications using volumes and persistent volume claims. You’ll set up storage for a simple app.

Proceed to Module 7