Introduction to Kubernetes and Cloud-Native Technologies

Module 5

Kubernetes Resources: Pods, Deployments, and Services

In Module 4, we deployed a simple Nginx web server to a Kubernetes cluster using kubectl. Now, we’ll explore three key Kubernetes resources—pods, deployments, and services—to understand how Kubernetes manages and exposes applications at scale. You’ll learn how to run multiple copies of an app and make it accessible within the cluster. By the end of this module, you’ll be able to create a deployment and expose it with a service.

Learning Objectives:

Explain what pods, deployments, and services are and how they work together.

Create a deployment that runs multiple replicas of an application.

Expose a deployment with a NodePort service and access it in a browser.

Verify and clean up resources with kubectl.

Revisiting Pods

As we learned in Module 2, a pod is the smallest unit in Kubernetes, like an apartment housing one or more containers. Pods are temporary—Kubernetes creates, deletes, or replaces them as needed. For example, if a pod running your web server crashes, Kubernetes can start a new one.

Key Points:

A pod is the smallest deployable unit in Kubernetes and usually wraps a single container (sometimes a few tightly coupled ones).

Pods are ephemeral: Kubernetes creates, deletes, and replaces them as needed, and each replacement gets a new IP address.

In practice you rarely create bare pods by hand; higher-level resources such as deployments manage them for you.

Analogy: Think of a pod as a single food truck serving a dish (your app). It’s great for one-off tasks, but to serve a crowd, you need a better system—like a chain of food trucks.
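
To make this concrete, you can run a single pod directly with kubectl. This is a quick experiment rather than the way we will work for the rest of the module (the pod name nginx-pod is just an example); the deployment in the hands-on exercise will create pods for us:

kubectl run nginx-pod --image=nginx    # start one pod running the nginx image
kubectl get pods                       # watch it appear with status Running
kubectl delete pod nginx-pod           # remove it when you are done

Notice that once you delete this pod, nothing brings it back. That is exactly the gap deployments fill.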

What Are Deployments?

A deployment is a Kubernetes resource that manages multiple pods to ensure your application runs reliably and at scale. It’s like a franchise manager who oversees a chain of food trucks, making sure there are enough trucks, they’re serving the right food, and broken trucks are replaced.

Why Use Deployments?

Scalability: Run multiple copies (replicas) of a pod to handle more users.

Updates: Roll out new versions of your app without downtime.

Reliability: Automatically replace failed pods.

For example, if you want three copies of your Nginx web server running, a deployment ensures all three pods are active and restarts them if they crash.
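
Once the nginx-deployment from the hands-on exercise below exists, you can see each of these benefits from the command line. This is a sketch: the deployment name matches the one we create later, and the nginx:1.25 tag is only an illustrative newer image version.

kubectl scale deployment nginx-deployment --replicas=5          # scalability: ask for more copies
kubectl set image deployment/nginx-deployment nginx=nginx:1.25  # updates: roll out a new image version
kubectl rollout status deployment/nginx-deployment              # watch the rolling update complete
kubectl delete pod <one-of-the-pods>                            # reliability: the deployment starts a replacement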

What Are Services?

A service is a Kubernetes resource that makes your application accessible, either within the cluster or to the outside world. Since pods are temporary and their IP addresses change, a service provides a stable address to reach them.

Analogy: Imagine your food trucks (pods) keep moving around the city. A service is like a phone number that customers can call to find the nearest truck, no matter where it is.

Types of Services (we’ll focus on one for now):

ClusterIP: Gives the service a stable IP address that is reachable only from inside the cluster (the default type).

NodePort: Exposes the app on a port of each node (useful for external access in sandboxes).

LoadBalancer: Exposes the app externally via a cloud provider (we’ll cover this later).

In this module, we’ll use a NodePort service to access our app, as it’s simple and works well in sandboxes.
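
Under the hood, a service keeps track of which pods currently match its selector and forwards traffic to them. After you create nginx-service in the exercise below, you can peek at that mapping with the commands here; the IP addresses in the output will differ in your cluster:

kubectl get endpoints nginx-service      # lists the pod IPs currently behind the service
kubectl describe service nginx-service   # shows the selector, cluster IP, and node port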

Hands-On Exercise: Deploy and Expose an Nginx Application

Let’s deploy an Nginx web server using a deployment and expose it with a service. We’ll use a free online sandbox to keep it beginner-friendly, with optional instructions for local setups.

Step 1: Set Up Your Environment

  1. Go to Play with Kubernetes (https://labs.play-with-k8s.com).

  2. Start a Kubernetes cluster as guided (takes 1-2 minutes).

  3. Verify the cluster:

    kubectl get nodes
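
    What to expect: one or more nodes with status Ready. The exact names, roles, and versions depend on your sandbox, but the output should look roughly like this:

    NAME    STATUS   ROLES           AGE   VERSION
    node1   Ready    control-plane   2m    v1.27.x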
    

Step 2: Create a Deployment

Create a deployment with three replicas of an Nginx pod using a YAML file. Save this as nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx

Apply the deployment:

kubectl apply -f nginx-deployment.yaml

What it does: Creates a deployment named nginx-deployment that keeps three replicas of an Nginx pod running. The selector tells the deployment which pods it manages (those labeled app: nginx), and the template describes the pods it creates.

Check the deployment:

kubectl get deployments

What to expect: See nginx-deployment with 3/3 replicas ready.

Check the pods:

kubectl get pods

What to expect: Three pods (e.g., nginx-deployment-xxx-xxx) with status Running.
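
The pod names include hash suffixes generated by Kubernetes, so yours will differ, but the output should look roughly like this:

NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-66b6c48dd5-2xk7q   1/1     Running   0          30s
nginx-deployment-66b6c48dd5-9fjlm   1/1     Running   0          30s
nginx-deployment-66b6c48dd5-qv8tz   1/1     Running   0          30s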

Step 3: Expose the Deployment with a Service

Create a service to access the Nginx pods. Save this as nginx-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080

Apply the service:

kubectl apply -f nginx-service.yaml

What it does: Creates a NodePort service named nginx-service that forwards traffic arriving on port 30080 of each node to port 80 of any pod labeled app: nginx.

Check the service:

kubectl get services
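
What to expect: nginx-service listed with type NodePort and ports 80:30080/TCP. The cluster IPs below are illustrative and will differ in your cluster:

NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP        5m
nginx-service   NodePort    10.98.123.45   <none>        80:30080/TCP   20s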

Step 4: Access the Nginx Application

In Play with Kubernetes, a link for port 30080 will appear next to the node’s IP address at the top of the page. Click it to see the Nginx welcome page (“Welcome to nginx!”). If no link appears, open http://<node-ip>:30080 directly, using the node IP shown by kubectl get nodes -o wide.
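
If you prefer staying in the terminal, you can fetch the page with curl instead; <node-ip> is a placeholder for the IP address of any node in your cluster:

curl http://<node-ip>:30080    # prints the HTML of the "Welcome to nginx!" page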

Step 5: Clean Up

Delete the deployment and service:

kubectl delete -f nginx-deployment.yaml
kubectl delete -f nginx-service.yaml

What to expect: The deployment, pods, and service are removed (confirm with kubectl get pods and kubectl get services).

Optional: Using kubectl Commands Instead of YAML

For a quicker approach (less common in production but good for learning):

kubectl create deployment nginx-deployment --image=nginx --replicas=3
kubectl expose deployment nginx-deployment --type=NodePort --port=80 --target-port=80 --name=nginx-service

Note for Beginners: YAML files are the standard way to manage Kubernetes resources, but the command-line approach is quicker for experiments. One difference: kubectl expose assigns a random node port instead of 30080, so check the service to see which port to use (see the sketch below). Try the YAML method if you can—it’s what you’ll see in real-world scenarios. If you’re not ready for hands-on, just follow along.
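
Because kubectl expose picks the node port automatically, a quick check tells you where to point your browser. The port in the comment is just an example of what an auto-assigned value might look like:

kubectl get service nginx-service
# PORT(S) will show something like 80:31542/TCP; use that second number instead of 30080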

Optional: Local Setup with Minikube

If you want to try locally:

  1. Install Minikube and kubectl.

  2. Start Minikube: minikube start.

  3. Apply the YAML files or use the kubectl create/expose commands above.

  4. Access the service: minikube service nginx-service --url (prints a URL you can open in your browser).

  5. Clean up: Delete the deployment and service, then minikube stop.
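
Putting the Minikube steps together, a full session looks roughly like this, assuming the two YAML files from Steps 2 and 3 are in your current directory (the URL printed in step 4 is an example):

minikube start
kubectl apply -f nginx-deployment.yaml
kubectl apply -f nginx-service.yaml
minikube service nginx-service --url    # prints something like http://192.168.49.2:30080
kubectl delete -f nginx-deployment.yaml -f nginx-service.yaml
minikube stop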

Troubleshooting Tips

Pods stuck in Pending or ContainerCreating: wait a minute or two; the cluster may still be pulling the nginx image.

Pods showing ImagePullBackOff: check the image name in the YAML and make sure the node has internet access.

The welcome page doesn’t load: confirm the service exists and uses nodePort 30080 (kubectl get services), and that you are browsing to a node IP, not a pod IP.

When in doubt, inspect a pod with the commands shown below.
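
These read-only commands diagnose most problems in this exercise; replace <pod-name> with a real name from kubectl get pods:

kubectl get pods                       # overall status of the pods
kubectl describe pod <pod-name>        # events explaining why a pod is not Running
kubectl logs <pod-name>                # output from the nginx container
kubectl get endpoints nginx-service    # confirms the service found the pods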

Quiz

1. What does a deployment do?

2. What is the purpose of a service?

3. What does the replicas field in a deployment specify?

Answers: 1. It keeps a specified number of identical pods (replicas) running, replaces failed pods, and rolls out updates without downtime. 2. It provides a stable address for reaching a set of pods, whose own IP addresses change as they are replaced. 3. How many copies of the pod the deployment should keep running.

Further Reading

Kubernetes documentation: Deployments (https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)

Kubernetes documentation: Services (https://kubernetes.io/docs/concepts/services-networking/service/)

Kubernetes documentation: Pods (https://kubernetes.io/docs/concepts/workloads/pods/)

What’s Next?

In Module 6, we’ll explore Networking in Kubernetes, learning how pods communicate within a cluster and how to expose applications using services and ingress. You’ll set up a simple app with proper networking.

Proceed to Module 6