This bonus module provides an overview of other important containerization and orchestration technologies beyond Kubernetes. Here, you’ll find introductions to alternative tools and platforms that are valuable in the cloud native ecosystem. Exploring these options will broaden your understanding of the container landscape and help you make informed decisions when selecting tools for different use cases.
By the end of this module, you’ll be able to:
Compare container runtimes such as Podman, containerd, and CRI-O and identify when each is a good fit.
Evaluate orchestration platforms beyond Kubernetes, including Docker Swarm, Nomad, and Apache Mesos.
Describe emerging trends such as WebAssembly, serverless containers, edge orchestration, and confidential computing.
Gain hands-on experience with Podman, Docker Swarm, Nomad, and WasmEdge through guided exercises.
While Docker dominated the early container landscape, the ecosystem has evolved to include several specialized container runtimes, each solving specific problems and serving different use cases.
Podman (Pod Manager) is a daemonless container runtime that provides a Docker-compatible command-line interface. Think of it as Docker’s security-conscious cousin who doesn’t need a background service running constantly.
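As a quick illustration of that compatibility (a sketch assuming Podman is already installed; installation steps appear in the hands-on section later in this module), most Docker commands work unchanged, and many teams simply alias docker to podman:
# Existing Docker habits carry over directly
podman pull nginx
podman images
podman run --rm nginx nginx -v
# Optional: let existing scripts keep calling "docker"
alias docker=podman
docker ps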
Key Advantages:
Familiar, Docker-compatible commands (podman run, podman build, etc.)
Best Use Cases:
Problems It Solves:
containerd is a container runtime that focuses on simplicity, robustness, and portability. It’s the engine that powers Docker and many Kubernetes installations. Think of it as the reliable mechanic who just wants to run containers efficiently without unnecessary bells and whistles.
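containerd is normally driven by a higher-level tool (Docker, Kubernetes, or the Docker-like nerdctl CLI), but it ships with a low-level ctr client that is useful for inspecting the runtime directly. A minimal sketch, assuming containerd is installed and its service is running:
# ctr requires fully-qualified image references
sudo ctr images pull docker.io/library/nginx:latest
sudo ctr images ls
# Start a container and its task in the background
sudo ctr run -d docker.io/library/nginx:latest nginx-test
sudo ctr tasks ls
# Clean up: stop the task, then remove the task and container records
sudo ctr tasks kill nginx-test
sudo ctr tasks delete nginx-test
sudo ctr containers delete nginx-test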
Key Features:
Best Use Cases:
Problems It Solves:
CRI-O, an implementation of the Kubernetes Container Runtime Interface (CRI) built on Open Container Initiative (OCI) runtimes, is designed specifically for Kubernetes. It’s like a specialized tool built precisely for one job and doing it exceptionally well.
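Because CRI-O speaks the CRI, you normally interact with it through the kubelet, but the crictl debugging tool from the cri-tools project can talk to it directly. A rough sketch, assuming CRI-O is installed and listening on its default socket (the path may differ on your distribution):
# Point crictl at CRI-O's socket and show runtime information
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info
# List pod sandboxes and containers that the kubelet has created
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a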
Key Characteristics:
Best Use Cases:
Problems It Solves:
While Kubernetes has become the dominant orchestration platform, several alternatives serve specific needs and use cases effectively.
Docker Swarm is Docker’s native orchestration solution that prioritizes simplicity over feature richness. It’s like choosing a reliable, easy-to-drive car over a feature-packed but complex vehicle.
Key Advantages:
Best Use Cases:
Problems It Solves:
Nomad by HashiCorp is a workload orchestrator that can manage not just containers but also virtual machines, Java applications, and other workload types. Think of it as a universal scheduler that doesn’t discriminate between different types of applications.
Key Features:
Best Use Cases:
Problems It Solves:
Apache Mesos abstracts CPU, memory, storage, and other resources across a cluster, allowing multiple frameworks to share resources efficiently. It’s like having a smart building manager who allocates office space based on current needs.
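To make the “multiple frameworks” idea concrete, here is a rough sketch of submitting an application to Marathon, a commonly used container framework that runs on top of Mesos. It assumes a Mesos cluster with Marathon reachable on its default port 8080; the application definition is purely illustrative:
# Ask Marathon to run two nginx instances on the Mesos cluster
curl -X POST http://localhost:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d '{
        "id": "/nginx-demo",
        "cpus": 0.5,
        "mem": 256,
        "instances": 2,
        "container": {
          "type": "DOCKER",
          "docker": { "image": "nginx" }
        }
      }'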
Key Characteristics:
Best Use Cases:
Problems It Solves:
The container and orchestration landscape continues to evolve rapidly. Here are the key trends shaping the ecosystem as of July 2025:
WebAssembly is increasingly being integrated into container runtimes, offering near-native performance with enhanced security. Projects like WasmEdge and Wasmtime are making it possible to run WebAssembly modules alongside traditional containers.
Why It Matters:
Areas to Explore:
The boundary between containers and serverless computing continues to blur with platforms like Knative and OpenFaaS leading the charge.
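As one small example of this convergence, a Knative Service runs an ordinary container image but scales it with request traffic, including down to zero. The sketch below assumes a Kubernetes cluster with Knative Serving installed and the kn CLI available; the image is Knative’s public hello-world sample:
# Deploy a scale-to-zero service from a container image
kn service create hello --image gcr.io/knative-samples/helloworld-go --env TARGET="World"
# Inspect the service and the URL Knative assigned to it
kn service list
kn service describe hello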
Key Developments:
Technologies to Watch:
Container orchestration is expanding beyond traditional datacenters to edge locations and IoT devices.
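One concrete way to experiment with this trend is K3s, a lightweight Kubernetes distribution frequently used for edge and IoT deployments. A minimal sketch, assuming a small Linux host (review any vendor script before piping it to a shell in production):
# Install a single-node K3s cluster
curl -sfL https://get.k3s.io | sh -
# K3s bundles kubectl; verify the node is ready
sudo k3s kubectl get nodes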
Emerging Patterns:
Key Projects:
Security is becoming a primary design consideration rather than an afterthought.
Security Innovations:
Notable Projects:
Let’s explore these alternative tools through practical exercises. Each exercise includes setup instructions, basic usage, and links to additional resources.
Prerequisites:
Installation:
For Ubuntu/Debian:
sudo apt update
sudo apt install podman
For CentOS/RHEL:
sudo yum install podman
For Fedora:
sudo dnf install podman
Basic Usage:
First, let’s run a simple container to verify Podman is working:
podman run hello-world
This command downloads and runs a test container, similar to Docker’s hello-world image.
Now, let’s run a web server in rootless mode (one of Podman’s key advantages):
podman run -d -p 8080:80 --name my-web-server nginx
Check the running container:
podman ps
Access the web server by opening a browser and navigating to http://localhost:8080.
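You can also verify it from the terminal, assuming curl is available:
curl -s http://localhost:8080 | head -n 5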
Working with Pods (Podman’s Special Feature):
Create a pod with multiple containers. If the my-web-server container from earlier is still running, stop and remove it first (podman stop my-web-server, then podman rm my-web-server) so that port 8080 is free:
# Create a pod
podman pod create --name my-app-pod -p 8080:80
# Add a web server to the pod
podman run -d --pod my-app-pod --name web-server nginx
# Add a database to the same pod (the postgres image requires a superuser password)
podman run -d --pod my-app-pod --name database -e POSTGRES_PASSWORD=mysecret postgres:alpine
List pods:
podman pod ls
Stop and remove the pod:
podman pod stop my-app-pod
podman pod rm my-app-pod
Advanced: Rootless Containers:
Run a container as a non-root user:
# This runs without requiring sudo
podman run -d -p 8081:80 --name rootless-nginx nginx
The container runs with your user privileges, enhancing security.
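To see the rootless behavior for yourself, compare your user ID with the user the container’s processes run as on the host (a quick check, assuming the rootless-nginx container above is still running):
# Your (non-root) user ID on the host
id -u
# "user" is the user inside the container; "huser" is the mapped user on the host
podman top rootless-nginx user huser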
Resources for Further Learning:
Prerequisites:
Setting Up a Single-Node Swarm:
Initialize Docker Swarm:
docker swarm init
This command sets up your machine as a Swarm manager node. You’ll see output with a token for joining worker nodes.
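If you need the worker join command again later, you can reprint it at any time:
docker swarm join-token worker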
Verify the swarm:
docker node ls
Deploying a Service:
Create a simple web service:
docker service create --name web-service --replicas 3 -p 8080:80 nginx
This creates a service with 3 replicas of nginx containers.
Check the service:
docker service ls
docker service ps web-service
Scaling the Service:
Scale up the service:
docker service scale web-service=5
Scale down:
docker service scale web-service=2
Creating a Stack with Docker Compose:
Create a file called docker-compose.yml:
version: '3.8'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    deploy:
      replicas: 3
  database:
    image: postgres:alpine
    environment:
      POSTGRES_PASSWORD: mypassword
    deploy:
      replicas: 1
Deploy the stack:
docker stack deploy -c docker-compose.yml my-app
List stacks and services:
docker stack ls
docker service ls
Cleanup:
Remove the stack:
docker stack rm my-app
Remove the service:
docker service rm web-service
Leave the swarm:
docker swarm leave --force
Resources for Further Learning:
Prerequisites:
Installation:
Download and install Nomad:
# Download the latest version (adjust URL for your OS)
wget https://releases.hashicorp.com/nomad/1.6.0/nomad_1.6.0_linux_amd64.zip
# Extract and install
unzip nomad_1.6.0_linux_amd64.zip
sudo mv nomad /usr/local/bin/
Verify installation:
nomad version
Starting a Development Environment:
Start Nomad in development mode:
nomad agent -dev
This starts a single-node Nomad cluster suitable for learning and development.
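In a second terminal, you can confirm the dev agent is healthy before submitting any work:
# Client view: the single dev node should be "ready"
nomad node status
# Server view: one server, acting as the leader
nomad server members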
Deploying a Job:
Create a job file called web-app.nomad:
job "web-app" {
datacenters = ["dc1"]
type = "service"
group "web" {
count = 2
task "nginx" {
driver = "docker"
config {
image = "nginx:latest"
port_map {
http = 80
}
}
resources {
cpu = 500
memory = 256
network {
mbits = 10
port "http" {
static = 8080
}
}
}
}
}
}
Submit the job:
nomad job run web-app.nomad
Check job status:
nomad job status web-app
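To dig one level deeper, you can inspect an individual allocation and stream its task logs. Replace <alloc-id> with an ID from the Allocations table in the job status output:
# Detailed view of one allocation
nomad alloc status <alloc-id>
# Follow the nginx task's stdout (use -stderr for the error stream)
nomad alloc logs -f <alloc-id> nginx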
Managing Jobs:
View all jobs:
nomad job status
Stop a job:
nomad job stop web-app
Working with Different Workload Types:
Create a job for a Java application:
job "java-app" {
datacenters = ["dc1"]
type = "batch"
group "java" {
task "hello-world" {
driver = "java"
config {
class_path = "/tmp"
class = "HelloWorld"
}
artifact {
source = "https://example.com/HelloWorld.jar"
}
resources {
cpu = 100
memory = 128
}
}
}
}
Resources for Further Learning:
Prerequisites:
Installation:
Install WasmEdge:
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash
Source the environment:
source ~/.bashrc
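Then confirm the runtime is on your PATH:
wasmedge --version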
Running a WebAssembly Application:
Create a simple WebAssembly application. First, create a file called hello.wat (WebAssembly Text format):
(module
  ;; Import the WASI call fd_write(fd, iovs_ptr, iovs_len, nwritten_ptr) -> errno
  (import "wasi_snapshot_preview1" "fd_write"
    (func $fd_write (param i32 i32 i32 i32) (result i32)))
  (memory 1)
  (export "memory" (memory 0))
  ;; The 20-byte string lives at offset 8
  (data (i32.const 8) "Hello, WebAssembly!\n")
  (func $main (export "_start")
    ;; Build a single iovec at offset 0: pointer to the string and its length
    (i32.store (i32.const 0) (i32.const 8))
    (i32.store (i32.const 4) (i32.const 20))
    (call $fd_write
      (i32.const 1)  ;; fd 1 = stdout
      (i32.const 0)  ;; address of the iovec array
      (i32.const 1)  ;; one iovec
      (i32.const 32) ;; where to store the byte count written (past the string data)
    )
    drop
  )
)
Compile and run:
# If you have wabt tools installed
wat2wasm hello.wat -o hello.wasm
# Run with WasmEdge
wasmedge hello.wasm
Running a More Complex Example:
Download and run a pre-built WebAssembly application:
# Download a simple HTTP server written in Rust and compiled to WebAssembly
wget https://github.com/WasmEdge/WasmEdge/releases/download/0.13.0/wasmedge_hyper_server.wasm
# Run the server
wasmedge wasmedge_hyper_server.wasm
Integration with Kubernetes:
Install Krustlet (Kubernetes kubelet that runs WebAssembly):
# Download Krustlet
wget https://github.com/krustlet/krustlet/releases/download/v1.0.0-alpha.1/krustlet-v1.0.0-alpha.1-linux-amd64.tar.gz
# Extract and install
tar -xzf krustlet-v1.0.0-alpha.1-linux-amd64.tar.gz
sudo mv krustlet-wasi /usr/local/bin/
Resources for Further Learning:
As the containerization landscape continues to evolve, several emerging areas warrant attention:
Confidential computing protects data in use by performing computation in a hardware-based trusted execution environment. This is becoming increasingly important for sensitive workloads.
Key Projects to Watch:
Organizations are increasingly adopting multi-cloud strategies, requiring orchestration tools that work seamlessly across different cloud providers.
Emerging Solutions:
The growing importance of AI and machine learning workloads is driving the development of specialized orchestration tools.
Notable Projects:
Environmental concerns are driving the development of more energy-efficient container platforms and orchestration tools.
Areas of Focus:
The container and orchestration ecosystem extends far beyond Docker and Kubernetes, offering specialized tools for different use cases and requirements. Understanding these alternatives helps you make informed decisions about the right tools for your specific needs.
Key takeaways from this module:
Container Runtimes like Podman, containerd, and CRI-O each solve specific problems around security, performance, and integration. Choose based on your security requirements, orchestration platform, and operational constraints.
Orchestration Platforms such as Docker Swarm, Nomad, and Mesos offer different approaches to managing containerized applications. Consider your scale, complexity requirements, and existing tool ecosystem when selecting an orchestration platform.
Emerging Trends like WebAssembly, edge computing, and confidential computing are shaping the future of containerization. Stay informed about these developments to remain current with industry evolution.
The hands-on exercises in this module provide starting points for exploring these technologies. Remember that the best way to understand these tools is through practical experience and experimentation.
As you continue your cloud-native journey, consider how these alternative tools might complement or replace components in your current technology stack. The diversity of options in the container ecosystem ensures that there’s likely a solution that fits your specific requirements and constraints.
This bonus module complements the main Introduction to Kubernetes and Cloud-Native Technologies course by providing broader context and alternative approaches to containerization and orchestration challenges.