
Docker

8 posts with the tag “Docker”

Exploring Load Balancing in Caddy Using Docker

Cover Images

Hello there! In this post, we’ll dive into the world of Caddy, a modern and powerful web server. Built using Go, Caddy offers a range of built-in features, including reverse proxy and load balancing.

By the way, in this project, I’ll use Caddy 2, the latest version, which comes with a completely rewritten architecture and enhanced features. To make our setup and testing as smooth as possible, we’ll leverage Docker. Docker allows us to create a reproducible environment with all the necessary components easily.

The main focus of this post will be on understanding and experimenting with Caddy’s load balancing algorithms. We’ll explore how Caddy intelligently distributes incoming traffic, the different strategies it offers (such as Round Robin, Least Connection, Random, and more), and how you can configure them for various use cases.

By using Docker, we’ll quickly spin up multiple backend services (which are called “workers”) and route requests through a single Caddy instance acting as our load balancer.

The Project Setup

To get started, let’s look at the project structure. Our setup is straightforward, allowing us to focus on the core concepts of load balancing.

Here’s the project structure:

Terminal window
├── caddy
│   └── Caddyfile
├── docker-compose.yml
└── src
    ├── Dockerfile
    ├── go.mod
    └── main.go

Our setup is composed of three main parts:

1. Backend Workers (src): These are simple Go applications that will handle the requests. Each worker simply returns a “Hello from [hostname]” message, allowing us to see which server is handling the request. This is the perfect way to visualize how Caddy distributes the load.

2. Caddy Load Balancer (caddy): Our Caddy instance will act as the reverse proxy, forwarding requests to the worker services. We’ll use the Caddyfile to define our load balancing rules.

3. Docker Compose (docker-compose.yml): This file orchestrates everything, defining and running our multi-container application with a single command.

Understanding the Components

Let’s break down the files in our project.

src/main.go

This is a simple Go HTTP server. It listens on port 8081 and responds to any request by printing its hostname. This is our primary tool for observing the load balancing behavior.

package main

import (
	"fmt"
	"net/http"
	"os"
)

func handler(w http.ResponseWriter, r *http.Request) {
	hostname, _ := os.Hostname()
	fmt.Fprintf(w, "Hello from %s\n", hostname)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8081", nil)
}

src/Dockerfile

This Dockerfile builds our Go application into a lightweight, self-contained Docker image.

FROM golang:1.24 AS builder
WORKDIR /go/src/app
COPY . .
RUN go mod download
RUN CGO_ENABLED=0 go build -o /go/bin/app
FROM golang:1.24-alpine
COPY --from=builder /go/bin/app /
EXPOSE 8081
ENV PORT 8081
CMD ["/app"]

docker-compose.yml

This file ties everything together. We define a service for our Caddy load balancer and three worker services.

version: "3.8"
services:
  load_balancer:
    image: caddy:2.10-alpine
    container_name: load_balancer
    ports:
      - "8082:80"
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile
  worker_1:
    build: ./src
    container_name: worker_1
    hostname: worker_1
    expose:
      - "8081"
  worker_2:
    build: ./src
    container_name: worker_2
    hostname: worker_2
    expose:
      - "8081"
  worker_3:
    build: ./src
    container_name: worker_3
    hostname: worker_3
    expose:
      - "8081"

caddy/Caddyfile

This is where the magic happens! The Caddyfile is Caddy’s configuration file. We’ll define a reverse proxy that routes to our workers. The lb_policy directive is where we’ll specify our load balancing algorithms.

:80 {
    reverse_proxy worker_1:8081 worker_2:8081 worker_3:8081 {
        lb_policy round_robin
        health_uri /
        health_interval 3s
    }
}

Experimenting with Load Balancing Algorithms

Now that our project is set up, we can start experimenting. To run the project, simply execute docker-compose up in your terminal. You can then send requests to http://127.0.0.1:8082 and observe which worker responds.

1. Round Robin (lb_policy round_robin)

This is the most common and simplest load balancing algorithm. Caddy distributes incoming requests to the backend servers in a sequential, rotating manner. It’s a fair and predictable method, assuming all servers are equally capable of handling the load.
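Under the hood, round robin is nothing more than a rotating index over the backend list. Here’s a minimal Go sketch of the idea (an illustration only, not Caddy’s actual implementation):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// roundRobin rotates through the backend list; the atomic counter keeps
// the rotation correct even when many requests arrive concurrently.
type roundRobin struct {
	backends []string
	next     uint64
}

func (rr *roundRobin) pick() string {
	n := atomic.AddUint64(&rr.next, 1) - 1
	return rr.backends[n%uint64(len(rr.backends))]
}

func main() {
	rr := &roundRobin{backends: []string{"worker_1", "worker_2", "worker_3"}}
	for i := 0; i < 4; i++ {
		fmt.Println(rr.pick()) // worker_1, worker_2, worker_3, worker_1
	}
}
```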

How to Configure:

Modify your caddy/Caddyfile to use the round_robin policy.

:80 {
    reverse_proxy worker_1:8081 worker_2:8081 worker_3:8081 {
        lb_policy round_robin
        health_uri /
        health_interval 3s
    }
}

How to Test:

After running docker-compose up -d --build, open your terminal and send a few requests using curl. You should see that Caddy distributes the traffic evenly among the three workers.

Terminal window
$ curl http://127.0.0.1:8082
# Output: Hello from worker_1
$ curl http://127.0.0.1:8082
# Output: Hello from worker_2
$ curl http://127.0.0.1:8082
# Output: Hello from worker_3
$ curl http://127.0.0.1:8082
# Output: Hello from worker_1

Now, let’s test Caddy’s fault tolerance. In a real-world scenario, a server might crash or become unresponsive. We’ll simulate this by manually stopping one of the workers.

Run the following command in your terminal to stop worker_1:

Terminal window
$ docker stop worker_1

After a few moments (the health_interval you set, e.g., 3 seconds), Caddy will perform its next health check, detect that worker_1 is unresponsive, and automatically mark it as unhealthy.

Now, send a few more requests. What do you expect to happen? With worker_1 down, Caddy should intelligently stop routing traffic to it and redirect all requests to the remaining healthy servers (worker_2 and worker_3).

Demo Round Robin

2. Weighted Round Robin (lb_policy weighted_round_robin)

This algorithm is a more advanced version of Round Robin. It allows you to assign a “weight” to each backend server, which determines its share of the requests. Servers with a higher weight will receive more traffic than those with a lower weight. This is ideal when you have servers with varying capacities, for example, a new, more powerful server and an older, less powerful one.

You can also use this policy to gradually drain traffic from an old server or ramp up traffic to a new one during deployments, making it a very useful strategy.
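Conceptually, weights just change how often each backend appears in the rotation. A naive Go sketch of a 3:1:1 schedule (illustrative only; real implementations, including smooth weighted round robin, interleave picks instead of grouping them):

```go
package main

import "fmt"

// expand builds a naive weighted schedule: each backend appears as many
// times as its weight, so a weight-3 backend fills 3 of every 5 slots.
func expand(backends []string, weights []int) []string {
	var out []string
	for i, b := range backends {
		for j := 0; j < weights[i]; j++ {
			out = append(out, b)
		}
	}
	return out
}

func main() {
	schedule := expand([]string{"worker_1", "worker_2", "worker_3"}, []int{3, 1, 1})
	fmt.Println(schedule) // worker_1 fills 3 of the 5 slots
}
```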

How to Configure:

To use this policy, you need to add the weight to each server’s address in the Caddyfile. For our example, let’s give worker_1 a higher weight of 3, while worker_2 and worker_3 each have a weight of 1. This means worker_1 should handle three out of every five requests.

:80 {
    reverse_proxy worker_1:8081 worker_2:8081 worker_3:8081 {
        lb_policy weighted_round_robin 3 1 1
        health_uri /
        health_interval 3s
    }
}

After updating the Caddyfile, make sure to reload or restart your Caddy container to apply the changes. You can do this with docker-compose up -d --build.

How to Test:

Now, let’s send a few requests to our load balancer and see how Caddy distributes the traffic according to the assigned weights. Send a few requests using curl and observe the responses.

Terminal window
$ curl http://127.0.0.1:8082
# Output: Hello from worker_1
$ curl http://127.0.0.1:8082
# Output: Hello from worker_1
$ curl http://127.0.0.1:8082
# Output: Hello from worker_2
$ curl http://127.0.0.1:8082
# Output: Hello from worker_3
$ curl http://127.0.0.1:8082
# Output: Hello from worker_1

Weighted Round Robin Demo

3. Least Connection (lb_policy least_conn)

Now, let’s dive into Least Connection. Unlike Round Robin, which is a simple, sequential algorithm, Least Connection is a dynamic and more intelligent load balancing policy. It chooses the backend server with the fewest currently active requests. This policy is excellent for situations where your requests have highly variable processing times.

For example, if one of your servers gets a handful of complex, long-running requests while the others are handling many small, quick ones, this algorithm will automatically route new traffic to the servers that are less burdened, preventing a single server from becoming a bottleneck. If there’s a tie, meaning two or more servers have the same lowest number of connections, Caddy will randomly choose one of them.
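The selection logic itself is simple: track active requests per backend and pick the minimum. A minimal Go sketch (illustrative, not Caddy’s code; Caddy breaks ties randomly, while this sketch just keeps the first minimum):

```go
package main

import "fmt"

// leastConn picks the backend with the fewest active requests.
func leastConn(backends []string, active map[string]int) string {
	best := backends[0]
	for _, b := range backends[1:] {
		if active[b] < active[best] {
			best = b
		}
	}
	return best
}

func main() {
	// worker_1 is busy with a long-running request; new traffic avoids it.
	active := map[string]int{"worker_1": 1, "worker_2": 0, "worker_3": 0}
	fmt.Println(leastConn([]string{"worker_1", "worker_2", "worker_3"}, active))
	// Prints worker_2 (the first backend with the minimum count).
}
```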

How to Configure:

Configuring this policy is simple. You just need to change the lb_policy directive in your Caddyfile.

:80 {
    reverse_proxy worker_1:8081 worker_2:8081 worker_3:8081 {
        lb_policy least_conn
        health_uri /
        health_interval 3s
    }
}

After updating your Caddyfile, make sure to restart your Caddy container with docker-compose up -d --build to apply the changes.

How to Test:

To demonstrate the Least Connection algorithm, you’ll need to modify your Go code to simulate a long-running request. This will allow you to see how Caddy intelligently routes traffic away from the busy worker.

- Update Your Go Code

Open your src/main.go file and add a new handler that will simulate a task with a significant delay. This will act as our “long-running request.”

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func handler(w http.ResponseWriter, r *http.Request) {
	hostname, _ := os.Hostname()
	fmt.Fprintf(w, "Hello from %s\n", hostname)
}

func longHandler(w http.ResponseWriter, r *http.Request) {
	hostname, _ := os.Hostname()
	time.Sleep(10 * time.Second)
	fmt.Fprintf(w, "Hello! long running request finished from %s\n", hostname)
}

func main() {
	http.HandleFunc("/", handler)
	http.HandleFunc("/long", longHandler)
	http.ListenAndServe(":8081", nil)
}

- Rebuild and Run Docker Compose

After updating your code, you must rebuild and run your containers to apply the changes.

Terminal window
$ docker-compose up -d --build

- Test the Scenario

Now, you can test the Least Connection algorithm using two separate terminals.

Terminal 1 (Long-Running Request):

Start a request to the /long endpoint. This will open a connection to one of the workers and hold it for 10 seconds. Caddy will detect that this worker has an active, ongoing connection.

Terminal window
$ curl http://127.0.0.1:8082/long

Terminal 2 (Normal Requests):

Immediately after running the command in Terminal 1, switch to Terminal 2 and send several quick requests to the root endpoint (/).

Terminal window
$ curl http://127.0.0.1:8082/
# Output: Hello from worker_2
$ curl http://127.0.0.1:8082/
# Output: Hello from worker_3
$ curl http://127.0.0.1:8082/
# Output: Hello from worker_2

You will observe that Caddy will not send requests to the worker that is currently busy with the long-running request. Instead, all new requests will be routed to the other two workers, demonstrating how the least_conn algorithm effectively balances load dynamically. Once the long-running request is complete, that worker will once again be available to handle new requests.

Least Connection

4. IP Hash (lb_policy ip_hash)

The IP Hash load balancing algorithm is different from the previous ones because it’s focused on session persistence. Instead of distributing requests based on a sequential or random order, it creates a hash from the client’s IP address and uses that hash to consistently route all requests from that same client to the same backend server.
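The core idea can be sketched in a few lines of Go: hash the client address and take it modulo the number of backends. This is only an illustration (using FNV-1a; Caddy’s actual hashing differs), but it shows why the mapping is stable:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// ipHash maps a client IP to a fixed backend index, so requests from
// the same address always reach the same server.
func ipHash(clientIP string, backends []string) string {
	h := fnv.New32a()
	h.Write([]byte(clientIP))
	return backends[h.Sum32()%uint32(len(backends))]
}

func main() {
	backends := []string{"worker_1", "worker_2", "worker_3"}
	fmt.Println(ipHash("203.0.113.7", backends))
	fmt.Println(ipHash("203.0.113.7", backends)) // same IP, same worker
}
```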

How to Configure:

Configuring the IP Hash policy is straightforward. You simply need to replace the lb_policy directive in your Caddyfile with ip_hash.

:80 {
    reverse_proxy worker_1:8081 worker_2:8081 worker_3:8081 {
        lb_policy ip_hash
        health_uri /
        health_interval 3s
    }
}

After updating your Caddyfile, make sure to restart your Caddy container with docker-compose up -d --build to apply the changes.

How to Test:

To test this algorithm, you’ll need to send requests from different “clients” (i.e., different IP addresses) and observe where they are routed. The easiest way to simulate this is by sending requests from your local machine and then using a proxy or a different network to see if the requests are routed to a different server.

IP Hash Demo

No matter how many times you run curl from the same machine, the requests will always be routed to the same worker. This is because Caddy is hashing your local IP address (127.0.0.1 or the container’s internal IP) and consistently mapping it to that specific worker.

This demonstrates how IP Hash ensures session stickiness without needing to share session data across all servers. It’s a powerful tool for maintaining a consistent user experience.

5. Random (lb_policy random)

The Random load balancing policy is the simplest and most unpredictable of all the algorithms. As its name suggests, it selects a backend server at random for each new request. There is no sequential pattern or special logic; every request has an equal chance of being routed to any of the available servers.

While it may seem less sophisticated than other algorithms, the Random policy is surprisingly effective in many scenarios. It’s fast, has a very low overhead, and can be a great choice for distributing traffic evenly across a large pool of homogenous servers. It naturally avoids the “thundering herd” problem that can sometimes occur with Round Robin on first-come-first-served requests, as it prevents all clients from hitting the same server at the same time.
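The selection logic is as small as it sounds — a Go sketch (illustrative only):

```go
package main

import (
	"fmt"
	"math/rand"
)

// randomPick chooses any backend with equal probability.
func randomPick(backends []string) string {
	return backends[rand.Intn(len(backends))]
}

func main() {
	backends := []string{"worker_1", "worker_2", "worker_3"}
	for i := 0; i < 4; i++ {
		fmt.Println(randomPick(backends))
	}
}
```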

How to Configure:

Configuring this policy is the easiest. Simply replace the lb_policy directive in your Caddyfile with random.

:80 {
    reverse_proxy worker_1:8081 worker_2:8081 worker_3:8081 {
        lb_policy random
        health_uri /
        health_interval 3s
    }
}

After updating your Caddyfile, make sure to restart your Caddy container with docker-compose up -d --build to apply the changes.

How to Test:

To test the Random policy, send a series of quick requests and observe the output. Unlike the predictable pattern of Round Robin or the consistent output of IP Hash, the responses will come from different workers in an unpredictable order.

Terminal window
$ curl http://127.0.0.1:8082
# Output: Hello from worker_3
$ curl http://127.0.0.1:8082
# Output: Hello from worker_1
$ curl http://127.0.0.1:8082
# Output: Hello from worker_2
$ curl http://127.0.0.1:8082
# Output: Hello from worker_1

Demo Random

Conclusion

Throughout this post, we’ve explored the core capabilities of Caddy as a powerful web server and a flexible load balancer. Using a simple Docker setup, we were able to quickly demonstrate five different load balancing algorithms, each with its own unique advantages:

- Round Robin: The classic, simple approach for evenly distributing traffic in a predictable sequence.

- Weighted Round Robin: A smarter version of Round Robin that allows you to prioritize traffic to more powerful servers.

- Least Connection: A dynamic algorithm that routes traffic based on real-time load, preventing a single server from becoming a bottleneck.

- IP Hash: The ideal choice for session stickiness, ensuring a consistent user experience by always routing a client to the same backend server.

- Random: A straightforward and fast algorithm for scattering traffic across servers, effective for a large pool of homogenous workers.

This hands-on experience proved that Caddy is not just a simple web server but a robust tool for building scalable and reliable applications. Caddy’s elegant syntax and powerful features make it an excellent choice for anyone looking to simplify their server configurations without sacrificing control or performance. Whether you’re a seasoned developer or just starting, Caddy offers a smooth and intuitive experience that can handle everything from a single website to a complex, distributed application.

References:

- Caddy Documentation: Reverse Proxy

- Caddy Community Wiki

Dockerizing Go API and Caddy

Cover Image

Hello! In this post, I’ll walk through how to use Caddy as a reverse proxy and Docker for containerization to deploy a simple Go API. This method offers a quick and modern way to get your Go API up and running.

Before we dive into the deployment steps, let’s briefly discuss why Docker and Caddy are an excellent combination.

- Docker is a containerization platform that packages your app and all its dependencies into an isolated unit. This guarantees that your app runs consistently everywhere, eliminating the classic it works on my machine problem.

it works on my machine meme

- Dockerile is the blueprint that defines how your Docker image is built. It specifies the base image, the steps to compile your application, and how the container should run.

- Docker Compose is a tool for defining and running multi-container Docker applications. Instead of starting each container manually, we can describe the entire stack in a single YAML file.

- Caddy is a modern reverse proxy and web server built using Go. Caddy is renowned for its ease of use, especially its automatic HTTPS feature. Its simple configuration makes it an ideal choice for serving our API.

Requirements

Before we start, you’ll need to install Go, Docker and Docker Compose on your system.

1. Go

Make sure Go is installed. You can check your version with:

$ go version
go version go1.24.0 linux/amd64

If it’s not installed, you can download it from the official site.

2. Docker & Docker Compose

Make sure Docker & Docker Compose are installed. You can check your docker version with:

Terminal window
$ docker --version
Docker version 27.5.1, build 9f9e405
Terminal window
$ docker compose version
Docker Compose version v2.3.3

Follow the official guides to install Docker and Docker Compose for your operating system if you haven’t installed them before; see the official site for instructions.

Setting Up the Project

Step 1: Create and Run a Simple Go API

  • First, create a new directory for the project and initialize a Go module:
Terminal window
$ mkdir go-api && cd go-api
$ go mod init example.com/go-api

This command creates a Go module (go.mod) named example.com/go-api, which helps manage dependencies and makes the project reproducible.

- Next, create a new file main.go and define a simple HTTP server using Chi as the router.

The server exposes two routes:

1. Root path / -> returns an ASCII banner generated with the go-figure package.

func handler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/plain")
	asciiArt := figure.NewFigure("Go API - Caddy", "", true).String()
	fmt.Fprintln(w, asciiArt)
}

2. /api/hello -> returns a simple JSON response.

func apiHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	fmt.Fprintf(w, `{"message":"Hello, World!"}`)
}

Then, in the main function, we:

- Initialize the Chi router and register both routes.

- Configure an HTTP server to listen on port 8081.

- Run the server in a goroutine and listen for shutdown signals (SIGINT, SIGTERM).

- Gracefully shut down the server with a 5-second timeout when a termination signal is received.

func main() {
	r := chi.NewRouter()

	// API Route
	r.Get("/", handler)
	r.Get("/api/hello", apiHandler)

	srv := &http.Server{
		Addr:    ":8081",
		Handler: r,
	}

	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGINT, syscall.SIGTERM)

	go func() {
		fmt.Println("Server started at :8081")
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			fmt.Printf("Could not listen on port 8081: %v\n", err)
		}
	}()

	<-stop
	fmt.Println("Shutting down server...")

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		fmt.Printf("Server shutdown failed: %v\n", err)
	}
}

Here is the full code:

package main

import (
	"context"
	"fmt"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/common-nighthawk/go-figure"
	"github.com/go-chi/chi/v5"
)

func handler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/plain")
	asciiArt := figure.NewFigure("Go API - Caddy", "", true).String()
	fmt.Fprintln(w, asciiArt)
}

func apiHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	fmt.Fprintf(w, `{"message":"Hello, World!"}`)
}

func main() {
	r := chi.NewRouter()

	// API Route
	r.Get("/", handler)
	r.Get("/api/hello", apiHandler)

	srv := &http.Server{
		Addr:    ":8081",
		Handler: r,
	}

	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGINT, syscall.SIGTERM)

	go func() {
		fmt.Println("Server started at :8081")
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			fmt.Printf("Could not listen on port 8081: %v\n", err)
		}
	}()

	<-stop
	fmt.Println("Shutting down server...")

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		fmt.Printf("Server shutdown failed: %v\n", err)
	}
}

- Then, try running main.go using go run:

$ go run main.go

- After that, you should see this message in the terminal:

Server started at :8081

- Finally, you can test the API using curl or access from the browser on http://127.0.0.1:8081:

Terminal window
$ curl 127.0.0.1:8081
(ASCII art banner reading “Go API - Caddy”, generated by go-figure)
Terminal window
$ curl 127.0.0.1:8081/api/hello
{"message":"Hello, World!"}

Step 2: The Dockerfile

We’ll use a multi-stage build to create a minimal and secure Docker image. This process compiles our Go application in one stage and then copies only the final binary to a much smaller final image. This keeps our final image size low.

Create a Dockerfile in your project directory and add the following code:

# First stage: builder
FROM golang:1.24 AS builder
WORKDIR /go/src/app
COPY . .
RUN go mod download
RUN CGO_ENABLED=0 go build -o /go/bin/app

- In the builder stage, we use the official Go image to compile our code.

- We copy the entire project into the container and run go mod download to fetch dependencies.

- Then we build the binary with CGO_ENABLED=0 to ensure it’s statically compiled and portable. The binary is placed in /go/bin/app.

# Second stage: final image
FROM golang:1.24-alpine
# Copy only the compiled binary from builder stage
COPY --from=builder /go/bin/app /
EXPOSE 8081
ENV PORT 8081
CMD ["/app"]

- In the final stage, only the compiled binary is copied over from the builder stage. This keeps the image small because source code, dependencies, and build tools are excluded.

- We expose port 8081 so Docker knows which port the app listens on.

- CMD ["/app"] runs the binary as the container’s main process.

Here’s the full code:

FROM golang:1.24 AS builder
WORKDIR /go/src/app
COPY . .
RUN go mod download
RUN CGO_ENABLED=0 go build -o /go/bin/app
FROM golang:1.24-alpine
COPY --from=builder /go/bin/app /
EXPOSE 8081
ENV PORT 8081
CMD ["/app"]

Step 3: Configure Caddy

Caddy will act as a reverse proxy that forwards incoming requests to the Go API container. The Caddyfile below:

  • Tells Caddy to listen on port 80 (HTTP).
  • Forwards requests to the Go API container, using the Docker service name go-api and port 8081.

Create a file named Caddyfile in your project directory:

Terminal window
:80 {
    reverse_proxy go-api:8081
}

💡 If you later have a domain (e.g. api.example.com), you can replace :80 with your domain:

Terminal window
api.example.com {
    reverse_proxy go-api:8081
}

Step 4: Configure Docker Compose

Create a file named docker-compose.yml in your project directory:

version: "3.8"
services:
  go-api:
    build: .
    container_name: go-api
    expose:
      - "8081" # Expose port internally (only visible to other services in this network)
    restart: unless-stopped
  caddy:
    image: caddy:2.10-alpine
    container_name: caddy
    ports:
      - "8082:80" # Map host port 8082 -> container port 80
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
    restart: unless-stopped
    depends_on:
      - go-api

Explanation

go-api service

- Built from your local Dockerfile.

- The Go API will listen on :8081, but only Caddy can access it.

caddy service

- Uses the official lightweight caddy:2.10-alpine image.

- ports: "8082:80" -> exposes port 80 from the container to port 8082 on your host machine. So when you open http://127.0.0.1:8082, requests are routed through Caddy.

- The Caddyfile is mounted so you can configure reverse proxy behavior.

- depends_on ensures Caddy waits for the Go API container to start.
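One caveat worth knowing: depends_on only waits for the go-api container to start, not for the API inside it to be ready. If you need readiness ordering, you can add a healthcheck and make Caddy wait on it. A sketch (assuming busybox wget is available in the alpine-based image):

```yaml
services:
  go-api:
    build: .
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://127.0.0.1:8081/api/hello"]
      interval: 5s
      timeout: 2s
      retries: 3
  caddy:
    image: caddy:2.10-alpine
    depends_on:
      go-api:
        condition: service_healthy
```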

Step 5: Test the Setup

Once all files are ready (main.go, Dockerfile, Caddyfile, docker-compose.yml), run the stack using docker compose up command:

Terminal window
├── Caddyfile
├── docker-compose.yml
├── Dockerfile
├── go.mod
├── go.sum
└── main.go
Terminal window
$ docker compose up -d --build

Make sure the containers are running using docker ps command:

Terminal window
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6b6b35487d77 caddy:2.10-alpine "caddy run --config …" 9 minutes ago Up 9 minutes 443/tcp, 2019/tcp, 443/udp, 0.0.0.0:8082->80/tcp, [::]:8082->80/tcp caddy
20cbb2209fe7 docker-go-api-caddy_go-api "/app" 9 minutes ago Up 9 minutes 0.0.0.0:32770->8081/tcp, [::]:32770->8081/tcp go-api

Now, test the endpoints through Caddy (reverse proxy):

Terminal window
$ curl http://127.0.0.1:8082

Expected output: the ASCII art rendered by go-figure.

Terminal window
$ curl http://127.0.0.1:8082/api/hello

Expected output:

Terminal window
{"message":"Hello, World!"}

Conclusion

In this guide, we successfully deployed a simple Go API using a containerized approach. By leveraging Docker, we created an isolated and reproducible environment for our application. We used a multi-stage build in our Dockerfile to keep the final image lightweight and secure, and Docker Compose to easily manage both our Go API and Caddy services with a single command.

Feel free to explore further by adding more features to your API or by setting up a domain with Caddy’s automatic HTTPS. Happy deployment!🚀

References:

- Docker Documentation

- Docker Compose Documentation

- Caddy Documentation

K3D: Getting Started with ArgoCD

Cover Image

K3D: Getting Started with ArgoCD

Intro

ArgoCD is a GitOps tool with a straightforward but powerful objective: to declaratively deploy applications to Kubernetes by managing application resources directly from version control systems, such as Git repositories. Every commit to the repository represents a change, which ArgoCD can apply to the Kubernetes cluster either manually or automatically. This approach ensures that deployment processes are fully controlled through version-controlled files, fostering an explicit and auditable release process.

For example, releasing a new application version involves updating the image tag in the resource files and committing the changes to the repository. ArgoCD syncs with the repository and seamlessly deploys the new version to the cluster.
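For instance, a release might amount to a one-line change in a Deployment manifest (the names below are illustrative):

```yaml
# deployment.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:v1.2.3 # bump this tag and commit; ArgoCD does the rest
```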

Since ArgoCD itself operates on Kubernetes, it is straightforward to set up and integrates seamlessly with lightweight Kubernetes distributions like K3s. In this tutorial, we will demonstrate how to configure a local Kubernetes cluster using K3D and deploy applications with ArgoCD, utilizing the argocd-example-apps repository as a practical example.

Prerequisites

Before we begin, ensure you have the following installed:

- Docker

- Kubectl

- K3D

- ArgoCD CLI

Step 1: Set Up a K3D Cluster

Create a new Kubernetes cluster using K3D:

Terminal window
$ k3d cluster create argocluster --agents 2
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-argocluster'
INFO[0000] Created image volume k3d-argocluster-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-argocluster-tools'
INFO[0001] Creating node 'k3d-argocluster-server-0'
INFO[0001] Creating node 'k3d-argocluster-agent-0'
INFO[0001] Creating node 'k3d-argocluster-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-argocluster-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.18.0.1 address
INFO[0001] Starting cluster 'argocluster'
INFO[0001] Starting servers...
INFO[0002] Starting node 'k3d-argocluster-server-0'
INFO[0009] Starting agents...
INFO[0009] Starting node 'k3d-argocluster-agent-0'
INFO[0009] Starting node 'k3d-argocluster-agent-1'
INFO[0017] Starting helpers...
INFO[0017] Starting node 'k3d-argocluster-serverlb'
INFO[0024] Injecting records for hostAliases (incl. host.k3d.internal) and for 4 network members into CoreDNS configmap...
INFO[0026] Cluster 'argocluster' created successfully!
INFO[0026] You can now use it like this:
kubectl cluster-info

Verify that your cluster is running:

Terminal window
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-argocluster-agent-0 Ready <none> 63s v1.30.4+k3s1
k3d-argocluster-agent-1 Ready <none> 62s v1.30.4+k3s1
k3d-argocluster-server-0 Ready control-plane,master 68s v1.30.4+k3s1

Step 2: Install ArgoCD

Install ArgoCD in your K3D cluster:

Terminal window
$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
...

Check the status of ArgoCD pods:

Terminal window
$ kubectl get pods -n argocd
NAME READY STATUS RESTARTS AGE
argocd-application-controller-0 1/1 Running 0 29m
argocd-applicationset-controller-684cd5f5cc-78cc8 1/1 Running 0 29m
argocd-dex-server-77c55fb54f-bw956 1/1 Running 0 29m
argocd-notifications-controller-69cd888b56-g7z4r 1/1 Running 0 29m
argocd-redis-55c76cb574-72mdh 1/1 Running 0 29m
argocd-repo-server-584d45d88f-2mdlc 1/1 Running 0 29m
argocd-server-8667f8577-prx6s 1/1 Running 0 29m

Expose the ArgoCD API server locally, then try accessing the dashboard:

Terminal window
$ kubectl port-forward svc/argocd-server -n argocd 8080:443

ArgoCD

Step 3: Configure ArgoCD

Log in to ArgoCD

Retrieve the initial admin password:

Terminal window
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d

Log in using the admin username and the password above.

Log in into ArgoCD Dashboard

Connect a Git Repository

Just clone the argocd-example-apps repository:

Terminal window
git clone https://github.com/argoproj/argocd-example-apps.git

Specify the ArgoCD server address in your CLI configuration:

Terminal window
$ argocd login localhost:8080
WARNING: server certificate had error: tls: failed to verify certificate: x509: certificate signed by unknown authority. Proceed insecurely (y/n)? y
Username: admin
Password:
'admin:login' logged in successfully
Context 'localhost:8080' updated

Create a new ArgoCD application using the repository:

Terminal window
$ argocd app create example-app \
--repo https://github.com/argoproj/argocd-example-apps.git \
--path ./guestbook \
--dest-server https://kubernetes.default.svc \
--dest-namespace default
application 'example-app' created

Sync the application:

Terminal window
$ argocd app sync example-app
TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
2025-01-10T11:31:11+07:00 Service default guestbook-ui OutOfSync Missing
2025-01-10T11:31:11+07:00 apps Deployment default guestbook-ui OutOfSync Missing
2025-01-10T11:31:11+07:00 Service default guestbook-ui Synced Healthy
2025-01-10T11:31:11+07:00 Service default guestbook-ui Synced Healthy service/guestbook-ui created
2025-01-10T11:31:11+07:00 apps Deployment default guestbook-ui OutOfSync Missing deployment.apps/guestbook-ui created
2025-01-10T11:31:11+07:00 apps Deployment default guestbook-ui Synced Progressing deployment.apps/guestbook-ui created
...

Step 4: Verify the Deployment

Check that the application is deployed successfully:

Terminal window
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
guestbook-ui-649789b49c-zwjt8 1/1 Running 0 5m36s

Access the deployed application by exposing it via a NodePort or LoadBalancer:

Terminal window
$ kubectl port-forward svc/guestbook-ui 8081:80
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80

Guestbook UI

Dashboard App List

Dashboard App

Conclusion

In this tutorial, you’ve set up a local Kubernetes cluster using K3D and deployed applications with ArgoCD. This setup provides a simple and powerful way to practice GitOps workflows locally. By leveraging tools like ArgoCD, you can ensure your deployments are consistent, auditable, and declarative. Happy GitOps-ing!

References

- GitOps on a Laptop with K3D and ArgoCD. https://www.sokube.io/en/blog/gitops-on-a-laptop-with-k3d-and-argocd-en.

- Take Argo CD for a spin with K3s and k3d. https://www.bekk.christmas/post/2020/13/take-argo-cd-for-a-spin-with-k3s-and-k3d.

- Application Deploy to Kubernetes with Argo CD and K3d. https://yashguptaa.medium.com/application-deploy-to-kubernetes-with-argo-cd-and-k3d-8e29cf4f83ee

K3D: Monitoring Your Service using Kubernetes Dashboard or Octant

Cover

K3D is a lightweight wrapper around k3s that allows you to run Kubernetes clusters inside Docker containers. While K3D is widely used for local development and testing, effective monitoring of services running on Kubernetes clusters is essential for debugging, performance tuning, and understanding resource usage.

In this blog, I will explore two popular monitoring tools for Kubernetes: Kubernetes Dashboard, the official web-based UI for Kubernetes, and Octant, a local, real-time, standalone dashboard developed by VMware. Both tools have unique strengths, and this guide will help you understand when to use one over the other.

Setting Up Kubernetes Dashboard On K3D

First you need to create a cluster using k3d cluster create:

Terminal window
$ k3d cluster create dashboard --servers 1 --agents 2
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-dashboard'
INFO[0000] Created image volume k3d-dashboard-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-dashboard-tools'
INFO[0001] Creating node 'k3d-dashboard-server-0'
INFO[0001] Creating node 'k3d-dashboard-agent-0'
INFO[0001] Creating node 'k3d-dashboard-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-dashboard-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.18.0.1 address
INFO[0001] Starting cluster 'dashboard'
INFO[0001] Starting servers...
INFO[0001] Starting node 'k3d-dashboard-server-0'
INFO[0008] Starting agents...
INFO[0008] Starting node 'k3d-dashboard-agent-0'
INFO[0008] Starting node 'k3d-dashboard-agent-1'
INFO[0015] Starting helpers...
INFO[0016] Starting node 'k3d-dashboard-serverlb'
INFO[0022] Injecting records for hostAliases (incl. host.k3d.internal) and for 4 network members into CoreDNS configmap...
INFO[0024] Cluster 'dashboard' created successfully!
INFO[0024] You can now use it like this:
kubectl cluster-info

Next, deploy Kubernetes Dashboard:

Terminal window
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Ensure the pods are running.

Terminal window
$ kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-795895d745-kcbkw 1/1 Running 0 10m
kubernetes-dashboard-56cf4b97c5-fg92n 1/1 Running 0 10m

Then, create a service account and bind a role. To access the dashboard, you need a service account with the proper permissions. Create the service account and cluster role binding using the following YAML:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard

Apply this configuration:

Terminal window
$ kubectl apply -f admin-user.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Then, retrieve the token for login using:

Terminal window
$ kubectl -n kubernetes-dashboard create token admin-user

Use kubectl proxy to access the dashboard:

Terminal window
$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Finally, open your browser and navigate to:

Terminal window
http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
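
That long path follows kubectl proxy's service-proxy URL scheme, `/api/v1/namespaces/<namespace>/services/<scheme>:<service>:<port>/proxy/`. A small sketch of how the dashboard URL is assembled (the port segment is empty because the Service's default port is used):

```python
# Build the kubectl-proxy URL for the dashboard Service shown above.
# Scheme: /api/v1/namespaces/<ns>/services/<scheme>:<name>:<port>/proxy/
ns = "kubernetes-dashboard"
svc = "kubernetes-dashboard"
base = "http://127.0.0.1:8001"
url = f"{base}/api/v1/namespaces/{ns}/services/https:{svc}:/proxy/#/login"
print(url)
```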

Login to Kubernetes Dashboard

See the services

Note: Don’t forget to copy and paste the token when prompted.

Setting Up Octant on K3D

First, you need to install Octant. You can install it using a package manager or download it directly from the official releases. For example, on macOS you can use Homebrew:

Terminal window
$ brew install octant

On Linux, download the appropriate binary and move it to your PATH:

Terminal window
$ wget https://github.com/vmware-tanzu/octant/releases/download/v0.25.1/octant_0.25.1_Linux-64bit.tar.gz
$ tar -xvzf octant_0.25.1_Linux-64bit.tar.gz && mv octant_0.25.1_Linux-64bit octant
$ rm octant_0.25.1_Linux-64bit.tar.gz

Then, to start Octant, simply run the binary:

Terminal window
$ cd octant
$ ./octant

Finally, you can see the dashboard:

Octant Dashboard

Comparison: Kubernetes Dashboard vs Octant

Feature           | Kubernetes Dashboard               | Octant
------------------|------------------------------------|---------------------------
Installation      | Requires deployment on the cluster | Local installation
Access            | Via web proxy                      | Localhost
Real-Time Updates | Partial (requires manual refresh)  | Full real-time updates
Ease of Setup     | Moderate (requires token and RBAC) | Easy (just run the binary)

Conclusion

Both Kubernetes Dashboard and Octant offer valuable features for monitoring Kubernetes clusters in K3D. If you need a quick and easy way to monitor your local cluster with minimal setup, Octant is a great choice. On the other hand, if you want an experience closer to managing a production environment, Kubernetes Dashboard is the better option.


Exploring K3S on Docker using K3D

Cover Image

In this post, I’ll show you how to start with K3D, an awesome tool for running lightweight Kubernetes clusters using K3S on Docker. I hope this post will help you quickly set up and understand K3D. Let’s dive in!

What is K3S?

Before starting with K3D we need to know about K3S. Developed by Rancher Labs, K3S is a lightweight Kubernetes distribution designed for IoT and edge environments. It is a fully conformant Kubernetes distribution but optimized to work in resource-constrained settings by reducing its resource footprint and dependencies.

Key highlights of K3S include:

- Optimized for Edge: Ideal for small clusters and resource-limited environments.

- Built-In Tools: Networking (Flannel), ServiceLB Load-Balancer controller and Ingress (Traefik) are included, minimizing setup complexity.

- Compact Design: K3S simplifies Kubernetes by bundling everything into a single binary and removing unnecessary components like legacy APIs.

Now let’s dive into K3D.

What is K3D?

K3D acts as a wrapper for K3S, making it possible to run K3S clusters inside Docker containers. It provides a convenient way to manage these clusters, offering speed, simplicity, and scalability for local Kubernetes environments.

Docker meme

Here’s why K3D is popular:

- Ease of Use: Quickly spin up and tear down clusters with simple commands.

- Resource Efficiency: Run multiple clusters on a single machine without significant overhead.

- Development Focus: Perfect for local development, CI/CD pipelines, and testing.

Let’s move on to how you can set up K3D and start using it.

Requirements

Before starting with K3D, make sure you have installed the following prerequisites based on your operating system.

- Docker

Follow the Docker installation guide for your operating system. Alternatively, you can simplify the process with these commands:

Terminal window
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ sudo usermod -aG docker $USER #add user to the docker group
$ docker version
Client: Docker Engine - Community
Version: 27.4.0
API version: 1.47
Go version: go1.22.10
Git commit: bde2b89
Built: Sat Dec 7 10:38:40 2024
OS/Arch: linux/amd64
Context: default
...

- Kubectl

The Kubernetes command-line tool, kubectl, is required to interact with your K3D cluster. Install it by following the instructions in the official Kubernetes documentation, or follow these steps:

Terminal window
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install kubectl /usr/local/bin/kubectl
$ kubectl version --client
Client Version: v1.32.0
Kustomize Version: v5.5.0

- K3D

Install K3D by referring to the official documentation or using the following command:

Terminal window
$ curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
$ k3d --version
k3d version v5.7.4
k3s version v1.30.4-k3s1 (default)

NB: I used Ubuntu 22.04 to install the requirements.

Create Your First Cluster

Basic

Terminal window
$ k3d cluster create mycluster
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-mycluster'
INFO[0000] Created image volume k3d-mycluster-images
INFO[0000] Starting new tools node...
INFO[0001] Creating node 'k3d-mycluster-server-0'
INFO[0002] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.7.4'
INFO[0005] Pulling image 'docker.io/rancher/k3s:v1.30.4-k3s1'
INFO[0006] Starting node 'k3d-mycluster-tools'
INFO[0030] Creating LoadBalancer 'k3d-mycluster-serverlb'
INFO[0033] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.7.4'
INFO[0045] Using the k3d-tools node to gather environment information
INFO[0045] HostIP: using network gateway 172.18.0.1 address
INFO[0045] Starting cluster 'mycluster'
INFO[0045] Starting servers...
INFO[0045] Starting node 'k3d-mycluster-server-0'
INFO[0053] All agents already running.
INFO[0053] Starting helpers...
INFO[0053] Starting node 'k3d-mycluster-serverlb'
INFO[0060] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0062] Cluster 'mycluster' created successfully!
INFO[0062] You can now use it like this:
kubectl cluster-info

This command will create a cluster named mycluster with one control plane node.

Once the cluster is created, check its status using kubectl:

Terminal window
$ kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:43355
CoreDNS is running at https://0.0.0.0:43355/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:43355/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

To ensure that the nodes in the cluster are active, run:

Terminal window
$ kubectl get nodes --output wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3d-mycluster-server-0 Ready control-plane,master 5m14s v1.30.4+k3s1 172.18.0.2 <none> K3s v1.30.4+k3s1 6.8.0-50-generic containerd://1.7.20-k3s1

To list all the clusters created with K3D, use the following command:

Terminal window
$ k3d cluster list
NAME SERVERS AGENTS LOADBALANCER
mycluster 1/1 0/0 true

To stop, start, or delete your cluster, use the following commands:

Terminal window
$ k3d cluster stop mycluster
INFO[0000] Stopping cluster 'mycluster'
INFO[0020] Stopped cluster 'mycluster'
Terminal window
$ k3d cluster start mycluster
INFO[0000] Using the k3d-tools node to gather environment information
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-mycluster-tools'
INFO[0001] HostIP: using network gateway 172.18.0.1 address
INFO[0001] Starting cluster 'mycluster'
INFO[0001] Starting servers...
INFO[0001] Starting node 'k3d-mycluster-server-0'
INFO[0005] All agents already running.
INFO[0005] Starting helpers...
INFO[0005] Starting node 'k3d-mycluster-serverlb'
INFO[0012] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0014] Started cluster 'mycluster'
Terminal window
$ k3d cluster delete mycluster
INFO[0000] Deleting cluster 'mycluster'
INFO[0001] Deleting cluster network 'k3d-mycluster'
INFO[0001] Deleting 1 attached volumes...
INFO[0001] Removing cluster details from default kubeconfig...
INFO[0001] Removing standalone kubeconfig file (if there is one)...
INFO[0001] Successfully deleted cluster mycluster!

If you want to start a cluster with extra server and worker nodes, then extend the creation command like this:

Terminal window
$ k3d cluster create mycluster --servers 2 --agents 4

After creating the cluster, you can verify its status using these commands:

Terminal window
$ k3d cluster list
NAME SERVERS AGENTS LOADBALANCER
mycluster 2/2 4/4 true
Terminal window
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-mycluster-agent-0 Ready <none> 51s v1.30.4+k3s1
k3d-mycluster-agent-1 Ready <none> 52s v1.30.4+k3s1
k3d-mycluster-agent-2 Ready <none> 53s v1.30.4+k3s1
k3d-mycluster-agent-3 Ready <none> 51s v1.30.4+k3s1
k3d-mycluster-server-0 Ready control-plane,etcd,master 81s v1.30.4+k3s1
k3d-mycluster-server-1 Ready control-plane,etcd,master 64s v1.30.4+k3s1

Bootstrapping Cluster

Bootstrapping a cluster with configuration files allows you to automate and customize the process of setting up a K3D cluster. By using a configuration file, you can easily specify cluster details such as node count, roles, ports, volumes, and more, making it easy to recreate or modify clusters.

Here’s an example of a basic cluster configuration file my-cluster.yaml that sets up a K3D cluster with multiple nodes:

apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: my-cluster
servers: 1
agents: 2
image: rancher/k3s:v1.30.4-k3s1
ports:
  - port: 30000-30100:30000-30100
    nodeFilters:
      - server:*
options:
  k3s:
    extraArgs:
      - arg: --disable=traefik
        nodeFilters:
          - server:*

This K3D config creates a cluster named my-cluster with 1 server, 2 agents, K3S version v1.30.4-k3s1, a host-to-server port mapping (30000-30100), and Traefik disabled on the server nodes.
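
The `ports` entry reserves host ports 30000-30100 and forwards them one-to-one to the same NodePort range on the server node. A quick sanity check of that mapping string (illustrative only):

```python
# k3d port mapping syntax: <host-range>:<nodeport-range>
mapping = "30000-30100:30000-30100"
host_range, node_range = mapping.split(":")
lo, hi = (int(p) for p in host_range.split("-"))
assert host_range == node_range  # host ports map 1:1 onto the NodePort range
print(hi - lo + 1)               # → 101 ports usable for NodePort services
```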

Terminal window
k3d cluster create --config my-cluster.yaml

The result after creation:

Terminal window
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-my-cluster-agent-0 Ready <none> 14s v1.30.4+k3s1
k3d-my-cluster-agent-1 Ready <none> 15s v1.30.4+k3s1
k3d-my-cluster-server-0 Ready control-plane,master 19s v1.30.4+k3s1
Terminal window
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9c7a53f40065 ghcr.io/k3d-io/k3d-proxy:5.7.4 "/bin/sh -c nginx-pr…" About a minute ago Up 46 seconds 80/tcp, 0.0.0.0:30000-30100->30000-30100/tcp, :::30000-30100->30000-30100/tcp, 0.0.0.0:34603->6443/tcp k3d-my-cluster-serverlb
41f544fa9f8e rancher/k3s:v1.30.4-k3s1 "/bin/k3d-entrypoint…" About a minute ago Up 55 seconds k3d-my-cluster-agent-1
48acdbaa0734 rancher/k3s:v1.30.4-k3s1 "/bin/k3d-entrypoint…" About a minute ago Up 55 seconds k3d-my-cluster-agent-0
0e2799145367 rancher/k3s:v1.30.4-k3s1 "/bin/k3d-entrypoint…" About a minute ago Up 59 seconds k3d-my-cluster-server-0

Create Simple Deployment

Once your K3D cluster is up and running, you can deploy applications onto the cluster. A deployment in Kubernetes ensures that a specified number of pod replicas are running, and it manages updates to those pods.
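
The replica-management idea can be sketched as a toy reconciliation loop: the controller keeps the observed pod count equal to the desired replica count (illustrative only, not the real controller code):

```python
# Toy sketch of Deployment reconciliation: converge the running pod list
# toward the desired replica count.
def reconcile(desired, running):
    running = list(running)
    while len(running) < desired:              # scale up: create missing pods
        running.append(f"nginx-{len(running)}")
    return running[:desired]                   # scale down: drop extras

print(reconcile(3, ["nginx-0"]))  # → ['nginx-0', 'nginx-1', 'nginx-2']
```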

Use the kubectl create deployment command to define and start a deployment. For example:

Terminal window
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

Check the deployment status using the kubectl get deployment command:

Terminal window
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/1 1 0 2m58s

Expose the deployment:

Terminal window
$ kubectl expose deployment nginx --port=80 --type=LoadBalancer
service/nginx exposed

Verify the Pod and Service:

Terminal window
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-bf5d5cf98-p6mpj 1/1 Running 0 69s
Terminal window
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 95s
nginx LoadBalancer 10.43.122.4 172.18.0.2,172.18.0.3,172.18.0.4 80:30893/TCP 66s

Try to access the service in a browser using the LoadBalancer EXTERNAL-IP:

Terminal window
http://172.18.0.2:30893

Access service

Conclusion

K3D simplifies the process of running Kubernetes clusters with K3S on Docker, making it ideal for local development and testing. By setting up essential tools like Docker, kubectl, and K3D, you can easily create and manage clusters. You can deploy applications with just a few commands, expose them, and access them locally. K3D offers a flexible and lightweight solution for Kubernetes, allowing developers to experiment and work with clusters in an efficient way.

Thank you for taking the time to read this guide. I hope it was helpful in getting you started with K3D and Kubernetes!😀

Automating Flask & PostgreSQL Deployment on KVM with Terraform & Ansible

Cover Image

😀 Intro

Hi! In this post, we will use Libvirt with Terraform to provision 2 KVM virtual machines locally, and after that we will deploy a Flask app & PostgreSQL using Ansible.


📝 Project Architecture

We will create 2 VMs using Terraform, then deploy a Flask project and its database using Ansible.

Project Architecture

🔨 Requirements

I used Ubuntu 22.04 LTS as the OS for this project. If you’re using a different OS, please make the necessary adjustments when installing the required dependencies.

The major pre-requisite for this setup is KVM hypervisor. So you need to install KVM in your system. If you use Ubuntu you can follow this step:

Terminal window
$ sudo apt -y install bridge-utils cpu-checker libvirt-clients libvirt-daemon qemu qemu-kvm

Execute the following command to make sure your processor supports virtualisation capabilities:

Terminal window
$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

Install Terraform

Terminal window
$ wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
$ sudo apt update && sudo apt install terraform -y

Verify installation:

Terminal window
$ terraform version
Terraform v1.9.8
on linux_amd64

Install Ansible

Terminal window
$ sudo apt update
$ sudo apt install software-properties-common
$ sudo add-apt-repository --yes --update ppa:ansible/ansible
$ sudo apt install ansible -y

Verify installation:

Terminal window
$ ansible --version
ansible [core 2.15.1]
...

Create KVM

We will use the libvirt provider with Terraform to deploy a KVM virtual machine.

Create main.tf, just specify the provider and version you want to use:

terraform {
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.8.1"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

Thereafter, run terraform init command to initialize the environment:

$ terraform init
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/template from the dependency lock file
- Reusing previous version of dmacvicar/libvirt from the dependency lock file
- Reusing previous version of hashicorp/null from the dependency lock file
- Using previously-installed hashicorp/template v2.2.0
- Using previously-installed dmacvicar/libvirt v0.8.1
- Using previously-installed hashicorp/null v3.2.3
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Now create variables.tf. This file defines inputs for the libvirt disk pool path, the Ubuntu 20.04 image URL used as the OS for the VMs, and a list of VM hostnames.

variable "libvirt_disk_path" {
  description = "path for libvirt pool"
  default     = "default"
}

variable "ubuntu_20_img_url" {
  description = "ubuntu 20.04 image"
  default     = "https://cloud-images.ubuntu.com/releases/focal/release/ubuntu-20.04-server-cloudimg-amd64.img"
}

variable "vm_hostnames" {
  description = "List of VM hostnames"
  default     = ["vm1", "vm2"]
}

Let’s update our main.tf:

resource "null_resource" "cache_image" {
  provisioner "local-exec" {
    command = "wget -O /tmp/ubuntu-20.04.qcow2 ${var.ubuntu_20_img_url}"
  }
}

resource "libvirt_volume" "base" {
  name       = "base.qcow2"
  source     = "/tmp/ubuntu-20.04.qcow2"
  pool       = var.libvirt_disk_path
  format     = "qcow2"
  depends_on = [null_resource.cache_image]
}

resource "libvirt_volume" "ubuntu20-qcow2" {
  count          = length(var.vm_hostnames)
  name           = "ubuntu20-${count.index}.qcow2"
  base_volume_id = libvirt_volume.base.id
  pool           = var.libvirt_disk_path
  size           = 10737418240 # 10GB
}

data "template_file" "user_data" {
  count    = length(var.vm_hostnames)
  template = file("${path.module}/config/cloud_init.yml")
}

data "template_file" "network_config" {
  count    = length(var.vm_hostnames)
  template = file("${path.module}/config/network_config.yml")
}

resource "libvirt_cloudinit_disk" "commoninit" {
  count          = length(var.vm_hostnames)
  name           = "commoninit-${count.index}.iso"
  user_data      = data.template_file.user_data[count.index].rendered
  network_config = data.template_file.network_config[count.index].rendered
  pool           = var.libvirt_disk_path
}

resource "libvirt_domain" "domain-ubuntu" {
  count  = length(var.vm_hostnames)
  name   = var.vm_hostnames[count.index]
  memory = "1024"
  vcpu   = 1

  cloudinit = libvirt_cloudinit_disk.commoninit[count.index].id

  network_interface {
    network_name   = "default"
    wait_for_lease = true
    hostname       = var.vm_hostnames[count.index]
  }

  console {
    type        = "pty"
    target_port = "0"
    target_type = "serial"
  }

  console {
    type        = "pty"
    target_type = "virtio"
    target_port = "1"
  }

  disk {
    volume_id = libvirt_volume.ubuntu20-qcow2[count.index].id
  }

  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }
}

This script provisions multiple KVM VMs using the Libvirt provider. It downloads an Ubuntu 20.04 base image, clones it for each VM, configures cloud-init for user and network settings, and deploys VMs with the specified hostnames, 1GB of memory, and SPICE graphics. The setup adapts dynamically based on the number of hostnames provided in var.vm_hostnames.
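
The way Terraform's count meta-argument fans the two hostnames out into per-VM volumes and domains can be illustrated with a small sketch (plain Python, not Terraform):

```python
# Illustration of how count.index expands var.vm_hostnames into
# per-VM resource instances in the script above.
vm_hostnames = ["vm1", "vm2"]
volumes = [f"ubuntu20-{i}.qcow2" for i in range(len(vm_hostnames))]
domains = [{"name": h, "memory": "1024", "vcpu": 1} for h in vm_hostnames]
print(volumes)                        # → ['ubuntu20-0.qcow2', 'ubuntu20-1.qcow2']
print([d["name"] for d in domains])   # → ['vm1', 'vm2']
```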

As I’ve mentioned, I’m using cloud-init, so let’s set up the network config and cloud-init files under the config directory:

Terminal window
$ mkdir config/

Then create config/cloud_init.yml. Just make sure that you configure your public SSH key for SSH access in the config:

#cloud-config
runcmd:
  - sed -i '/PermitRootLogin/d' /etc/ssh/sshd_config
  - echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
  - systemctl restart sshd

ssh_pwauth: true
disable_root: false
chpasswd:
  list: |
    root:cloudy24
  expire: false

users:
  - name: ubuntu
    gecos: ubuntu
    groups:
      - sudo
    sudo:
      - ALL=(ALL) NOPASSWD:ALL
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh_authorized_keys:
      - ssh-rsa AAAA...

And then network config, in config/network_config.yml:

version: 2
ethernets:
  ens3:
    dhcp4: true

Our project structure should look like this:

Terminal window
$ tree
.
├── config
│   ├── cloud_init.yml
│   └── network_config.yml
├── main.tf
└── variables.tf

Now run a plan, to see what will be done:

Terminal window
$ terraform plan
data.template_file.user_data[1]: Reading...
data.template_file.user_data[0]: Reading...
data.template_file.network_config[1]: Reading...
data.template_file.network_config[0]: Reading...
...
Plan: 8 to add, 0 to change, 0 to destroy

And run terraform apply to run our deployment:

$ terraform apply
...
null_resource.cache_image: Creation complete after 10m36s [id=4239391010009470471]
libvirt_volume.base: Creating...
libvirt_volume.base: Creation complete after 3s [id=/var/lib/libvirt/images/base.qcow2]
libvirt_volume.ubuntu20-qcow2[1]: Creating...
libvirt_volume.ubuntu20-qcow2[0]: Creating...
libvirt_volume.ubuntu20-qcow2[1]: Creation complete after 0s [id=/var/lib/libvirt/images/ubuntu20-1.qcow2]
libvirt_volume.ubuntu20-qcow2[0]: Creation complete after 0s [id=/var/lib/libvirt/images/ubuntu20-0.qcow2]
libvirt_domain.domain-ubuntu[1]: Creating...
...
libvirt_domain.domain-ubuntu[1]: Creation complete after 51s [id=6221f782-48b7-49a4-9eb9-fc92970f06a2]
Apply complete! Resources: 8 added, 0 changed, 0 destroyed

Verify VM creation using virsh command:

Terminal window
$ virsh list
Id Name State
----------------------
1 vm1 running
2 vm2 running

Get instances IP address:

Terminal window
$ virsh net-dhcp-leases --network default
Expiry Time MAC address Protocol IP address Hostname Client ID or DUID
-----------------------------------------------------------------------------------------------------------------------------------------------
2024-12-09 19:50:00 52:54:00:2e:0e:86 ipv4 192.168.122.19/24 vm1 ff:b5:5e:67:ff:00:02:00:00:ab:11:b0:43:6a:d8:bc:16:30:0d
2024-12-09 19:50:00 52:54:00:86:d4:ca ipv4 192.168.122.15/24 vm2 ff:b5:5e:67:ff:00:02:00:00:ab:11:39:24:8c:4a:7e:6a:dd:78

Try to access the vm using ubuntu user:

Terminal window
$ ssh ubuntu@192.168.122.15
The authenticity of host '192.168.122.15 (192.168.122.15)' can't be established.
ED25519 key fingerprint is SHA256:Y20zaCcrlOZvPTP+/qLLHc7vJIOca7QjTinsz9Bj6sk.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.122.15' (ED25519) to the list of known hosts.
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-200-generic x86_64)
...
ubuntu@ubuntu:~$

Create Ansible Playbook

Now let’s create the Ansible playbook to deploy Flask & PostgreSQL on Docker. First, create an ansible directory and an ansible.cfg file:

Terminal window
$ mkdir ansible && cd ansible
Terminal window
[defaults]
inventory = hosts
host_key_checking = True
deprecation_warnings = False
collections = ansible.posix, community.general, community.postgresql

Then create inventory file called hosts:

Terminal window
[vm1]
192.168.122.19 ansible_user=ubuntu
[vm2]
192.168.122.15 ansible_user=ubuntu

Check connectivity to the VMs using Ansible's ping module:

Terminal window
$ ansible -m ping all
192.168.122.15 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
192.168.122.19 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}

Now create playbook.yml and the roles. This playbook will install and configure Docker, Flask, and PostgreSQL:

Terminal window
---
- name: Deploy Flask
  hosts: vm1
  become: true
  remote_user: ubuntu
  roles:
    - docker
    - flask

- name: Deploy Postgresql
  hosts: vm2
  become: true
  remote_user: ubuntu
  roles:
    - docker
    - psql

Playbook to install Docker

Now create a new directory called roles/docker:

Terminal window
$ mkdir roles
$ mkdir roles/docker

Create a new directory inside docker called tasks, then create a new file main.yml there. This file will install Docker & Docker Compose:

Terminal window
$ mkdir docker/tasks
$ vim main.yml
---
- name: Run update
  ansible.builtin.apt:
    name: aptitude
    state: latest
    update_cache: true

- name: Install dependencies
  ansible.builtin.apt:
    name:
      - net-tools
      - apt-transport-https
      - ca-certificates
      - curl
      - software-properties-common
      - python3-pip
      - virtualenv
      - python3-setuptools
      - gnupg-agent
      - autoconf
      - dpkg-dev
      - file
      - g++
      - gcc
      - libc-dev
      - make
      - pkg-config
      - re2c
      - wget
    state: present
    update_cache: true

- name: Add Docker GPG apt Key
  ansible.builtin.apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add repository into sources list
  ansible.builtin.apt_repository:
    repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_lsb.codename }} stable
    state: present
    filename: docker

- name: Install Docker
  ansible.builtin.apt:
    name:
      - docker-ce
      - docker-ce-cli
    state: present
    update_cache: true

- name: Add non-root to docker group
  ansible.builtin.user:
    name: ubuntu
    groups: [docker]
    append: true

- name: Install Docker module for Python
  ansible.builtin.pip:
    name: docker

- name: Install Docker-Compose
  ansible.builtin.get_url:
    url: https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64
    dest: /usr/local/bin/docker-compose
    mode: '755'

- name: Create Docker-Compose symlink
  ansible.builtin.command:
    cmd: ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
    creates: /usr/bin/docker-compose

- name: Restart Docker
  ansible.builtin.service:
    name: docker
    state: restarted
    enabled: true

Playbook to install and configure postgresql

Then create a new directory called psql with subdirectories called vars, templates & tasks:

Terminal window
$ mkdir psql
$ mkdir psql/vars
$ mkdir psql/templates
$ mkdir psql/tasks

After that, create main.yml in vars. These variables set the database username, password, and so on:

---
db_port: 5433
db_user: admin
db_password: dbPassword
db_name: todo

Next, in templates, we will create a Jinja file called docker-compose.yml.j2. With this file we will create the PostgreSQL container:

version: '3.7'

services:
  postgres:
    image: postgres:13
    container_name: db
    restart: unless-stopped
    ports:
      - {{ db_port }}:5432
    networks:
      - flask_network
    environment:
      - POSTGRES_USER={{ db_user }}
      - POSTGRES_PASSWORD={{ db_password }}
      - POSTGRES_DB={{ db_name }}
    volumes:
      - postgres_data:/var/lib/postgresql/data

networks:
  flask_network:

volumes:
  postgres_data:
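
When Ansible's template module renders this file, it substitutes the role's vars into the `{{ ... }}` placeholders via Jinja2. A rough illustration of that substitution using plain string replacement (illustrative only, not the real Jinja2 engine):

```python
# Mimic rendering one line of the Jinja template with the psql role's vars.
role_vars = {"db_port": 5433, "db_user": "admin",
             "db_password": "dbPassword", "db_name": "todo"}

rendered = "- POSTGRES_USER={{ db_user }}"
for key, value in role_vars.items():
    rendered = rendered.replace("{{ " + key + " }}", str(value))
print(rendered)  # → - POSTGRES_USER=admin
```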

Next, create main.yml in tasks. It will copy docker-compose.yml.j2 to the VM and bring the container up using Docker Compose:

---
- name: Add Postgresql Compose
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: /home/ubuntu/docker-compose.yml
    mode: preserve

- name: Docker-compose up
  ansible.builtin.shell: docker-compose up -d --build
  args:
    chdir: /home/ubuntu

Playbook to deploy Flask App

First, create a directory called flask, then create its subdirectories:

Terminal window
$ mkdir flask
$ mkdir flask/vars
$ mkdir flask/templates
$ mkdir flask/tasks

Next, add main.yml to vars. This file refers to the PostgreSQL variables from before, with the addition of the IP address of VM2 (the database VM):

---
db_port: 5433
db_user: admin
db_password: dbPassword
db_name: todo
db_host: 192.168.122.15

Next, create config.py.j2 in templates. This file will replace the old config file from the Flask project:

DEV_DB = 'sqlite:///task.db'
pg_user = "{{ db_user }}"
pg_pass = "{{ db_password }}"
pg_db = "{{ db_name }}"
pg_port = {{ db_port }}
pg_host = "{{ db_host }}"
PROD_DB = f'postgresql://{pg_user}:{pg_pass}@{pg_host}:{pg_port}/{pg_db}'
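
For reference, with the values from roles/flask/vars/main.yml, the rendered config.py evaluates to the following connection string:

```python
# config.py after Ansible renders the template with the flask role's vars.
pg_user = "admin"
pg_pass = "dbPassword"
pg_db = "todo"
pg_port = 5433
pg_host = "192.168.122.15"
PROD_DB = f'postgresql://{pg_user}:{pg_pass}@{pg_host}:{pg_port}/{pg_db}'
print(PROD_DB)  # → postgresql://admin:dbPassword@192.168.122.15:5433/todo
```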

Next, create docker-compose.yml.j2 in templates. With this file we will create a container using Docker Compose:

version: '3.7'

services:
  flask:
    build: flask
    container_name: app
    restart: unless-stopped
    ports:
      - 5000:5000
    environment:
      - DEBUG=0
    networks:
      - flask_network

networks:
  flask_network:

Next, create main.yml in tasks. With this file we will clone the Flask project, add the compose file, replace config.py, and create a new container using Docker Compose:

---
- name: Clone Flask project
  changed_when: false
  ansible.builtin.git:
    repo: https://github.com/danielcristho/Flask_TODO.git
    dest: /home/ubuntu/Flask_TODO
    clone: true

- name: Add Flask Compose
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: /home/ubuntu/Flask_TODO/docker-compose.yml
    mode: preserve

- name: Update config.py
  ansible.builtin.template:
    src: config.py.j2
    dest: /home/ubuntu/Flask_TODO/flask/app/config.py
    mode: preserve

- name: Run docker-compose up
  ansible.builtin.shell: docker-compose up -d --build
  args:
    chdir: /home/ubuntu/Flask_TODO

Our project structure should look like this:

Terminal window
├── ansible-flask-psql
│   ├── ansible.cfg
│   ├── hosts
│   ├── playbook.yml
│   └── roles
│       ├── docker
│       │   └── tasks
│       │       └── main.yml
│       ├── flask
│       │   ├── tasks
│       │   │   └── main.yml
│       │   ├── templates
│       │   │   ├── config.py.j2
│       │   │   └── docker-compose.yml.j2
│       │   └── vars
│       │       └── main.yml
│       └── psql
│           ├── tasks
│           │   └── main.yml
│           ├── templates
│           │   └── docker-compose.yml.j2
│           └── vars
│               └── main.yml
├── libvirt-kvm
│   ├── config
│   │   ├── cloud_init.yml
│   │   └── network_config.yml
│   ├── main.tf
│   ├── variables.tf

Run Playbook and testing

Finally, let’s run ansible-playbook to deploy PostgreSQL and Flask:

Terminal window
$ ls
ansible.cfg hosts playbook.yml roles
$ ansible-playbook -i hosts playbook.yml
 _____________________
< PLAY [Deploy Flask] >
 ---------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
 ________________________
< TASK [Gathering Facts] >
 ------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
...
 ____________
< PLAY RECAP >
 ------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
192.168.122.15 : ok=13 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
192.168.122.19 : ok=15 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Once it completes, make sure there are no errors. You should then see two containers: Flask on VM1 and PostgreSQL on VM2:

Terminal window
$ docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED              STATUS              PORTS                                         NAMES
f3978427e34c   flask_todo_flask   "python run.py"          About a minute ago   Up About a minute   0.0.0.0:5000->5000/tcp, [::]:5000->5000/tcp   app
$ docker ps
fbebdff75a6e   postgres:13        "docker-entrypoint.s…"   4 minutes ago        Up 4 minutes        0.0.0.0:5433->5432/tcp, [::]:5433->5432/tcp   db

Try accessing the app in a browser at http://<vm1_ip>:5000:

Access flask app using browser

Try to add a new task and then the data will be added to the database:

pgsql content

Conclusion

Finally, we have created two VMs and deployed a Flask project with its database.

Thank you for reading this article. Feel free to leave a comment if you have any questions, suggestions, or feedback.

NB: Project Repo: danielcristho/that-i-write/terrafrom-ansible-flask

Kubernetes 102: Kubernetes & Containerization

Kubernetes Cover

Okay, let’s start again. Kubernetes and containers are two things that cannot be separated from each other.

Understanding containers is essential in the context of Kubernetes, because Kubernetes is an orchestration system for managing containers. Knowing about containers will help you understand how Kubernetes organizes and distributes containerized applications, optimizes resource usage, and ensures isolation and failure recovery.

🤔 Why Container?

VM Meme

The old way of deploying an application was to install the application on a machine (VM) and then install various libraries or dependencies using a package manager. This creates dependencies between executables (files that can be run), configuration, and other dependencies. It takes a lot of time to do this.

Container meme

A new way to overcome this problem is by deploying the applications using containers. By the way, containers are virtualization at the operating system level, not at the hardware level.

Containers support isolation, both between the containers themselves and between a container and the machine it runs on. You can’t see processes running in another container, because each container has its own file system.

There are advantages and disadvantages when using containers:

➕ Pros

- Portability, containers encapsulate all dependencies and configurations, allowing applications to run consistently across different environments.

- Efficiency, containers share the host OS kernel, making them more lightweight and resource-efficient compared to virtual machines.

- Scalability, containers can be easily scaled up or down to handle varying loads, and orchestration tools like Kubernetes automate this process.

- Isolation, containers provide process and resource isolation, enhancing security and stability by limiting the impact of failures to individual containers.

➖ Cons

- Complexity, managing containerized applications can be complex, especially at scale, requiring robust orchestration and monitoring tools.

- Security, containers share the host OS kernel, which can pose security risks if a vulnerability in the kernel is exploited.

- Compatibility, not all applications are suitable for containerization, especially those with complex dependencies or those that require direct access to hardware.

📦 Container Architecture

Container Architecture

Containers have several main components you need to know about, such as:

1. Container Runtime

A container runtime is a software component responsible for running containers. It provides the tools and services necessary to create, start, stop, and manage the lifecycle of containers. Container runtimes ensure that containers are isolated from each other and from the host system while sharing the host operating system kernel. There are various kinds of runtimes, such as Docker, containerd, CRI-O, and several other types of runtime implementations in support of the Kubernetes CRI (Container Runtime Interface).

2. Container Image

A container image is a lightweight, standalone, executable software package that includes everything needed to run a piece of software, including code, runtime, system tools, libraries, and settings. Container images are used to create containers.

3. Application Container

An application container is a running instance of an image. When the code changes, you build a new image that includes those changes, then re-create the container from the new image and run it again.

☸️ Kubernetes Architecture

Kubernetes clusters consist of worker nodes that run applications in containers. Each cluster has at least one worker node. Pods are a workload component of the application. The control plane manages the worker nodes and pods in the cluster.

Kubernetes Architecture

🎛 Control Plane Components

1. Kube-apiserver

The control plane component that exposes the Kubernetes API and serves as the cluster’s front end; it is designed to scale horizontally.

2. Etcd

Etcd is a consistent key-value store used as the backing data store for Kubernetes clusters, so you need to pay attention to how it is backed up in your cluster.

3. Kube-scheduler

The Kube scheduler is a core component of Kubernetes, responsible for assigning pods to nodes within a cluster. It ensures efficient resource utilization and adherence to various scheduling policies by filtering out nodes that don’t meet the pod’s requirements, scoring the remaining nodes based on criteria such as resource availability and affinity rules, and then binding the pod to the highest-scoring node. This process ensures that pods are placed on appropriate nodes, maintaining a balanced distribution of resources and adhering to constraints and priorities.
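The filter-then-score flow described above can be sketched in a few lines of plain Python. This is only an illustration of the idea, not the real kube-scheduler: the node data and the scoring rule here are invented for the example.

```python
# Simplified sketch of the scheduler's filter -> score -> bind flow.
# Node capacities and the scoring rule are made up for illustration.
nodes = [
    {"name": "node-a", "cpu_free": 2.0, "mem_free": 4.0},
    {"name": "node-b", "cpu_free": 0.5, "mem_free": 1.0},
    {"name": "node-c", "cpu_free": 4.0, "mem_free": 8.0},
]
pod = {"cpu": 1.0, "mem": 2.0}

# 1. Filtering: drop nodes that cannot fit the pod's resource requests.
feasible = [
    n for n in nodes
    if n["cpu_free"] >= pod["cpu"] and n["mem_free"] >= pod["mem"]
]

# 2. Scoring: rank the remaining nodes (here, the most free resources wins).
def score(node):
    return node["cpu_free"] + node["mem_free"]

# 3. Binding: the pod is bound to the highest-scoring node.
best = max(feasible, key=score)
print(best["name"])  # → node-c (the most headroom)
```

The real scheduler runs many filter and score plugins (taints, affinity rules, spreading, and so on), but the overall shape is the same.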

4. Kube-controller-manager

The Kube controller manager is a key component of Kubernetes, responsible for running various controllers that regulate the state of the cluster. It includes controllers that handle tasks such as node management, replication, and endpoint updates.

5. Cloud-controller-manager

The cloud controller manager is a component in Kubernetes that integrates the cluster with the underlying cloud services. It runs controllers specific to cloud environments that handle tasks such as node lifecycle, route management, and service load balancers.

💻 Node Components

1. Kubelet

Kubelet is a critical agent that runs on each node in a Kubernetes cluster and is responsible for managing the state of the pods on that node. It ensures that the containers described in the pod specifications are running and healthy, by interacting with the container runtime.

2. Kube-proxy

Kube-proxy is a network component in Kubernetes that runs on each node and is responsible for maintaining network rules and facilitating communication between services. It handles traffic routing to ensure that requests are properly routed to the appropriate pods, supporting both internal and external service access. Kube-proxy uses methods such as iptables, IPVS, or userspace proxying to manage and balance network traffic, ensuring reliable and efficient connectivity within the Kubernetes cluster.
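The routing idea behind kube-proxy can be illustrated with a toy Python sketch: a service name maps to a set of pod endpoints, and each incoming request is handed to the next backend. Real kube-proxy programs iptables or IPVS rules in the kernel instead, and the endpoint IPs below are invented.

```python
# Toy illustration of service -> pod endpoint routing (round robin).
# Real kube-proxy does this with iptables/IPVS rules, not Python.
import itertools

endpoints = {"my-service": ["10.0.1.4:8080", "10.0.2.7:8080", "10.0.3.2:8080"]}

# One round-robin cursor per service.
cursors = {name: itertools.cycle(pods) for name, pods in endpoints.items()}

def route(service):
    """Pick the next backend pod for a request to this service."""
    return next(cursors[service])

picks = [route("my-service") for _ in range(4)]
print(picks)  # cycles through the three pods, then wraps around
```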

3. Container runtime

A container runtime is software that manages the lifecycle of containers, including creating, starting, stopping, and deleting them. It provides the tools and services necessary to run containerized applications, ensuring that they are isolated from each other and the host system. Popular container runtimes include Docker, containerd, rktlet, and CRI-O. In Kubernetes, the container runtime interfaces with the kubelet to manage containers as part of the orchestration of the cluster.

🍨 Add-on Components

Add-ons are pods and services that implement features the cluster requires.

1. DNS

DNS, or Domain Name System, is an important Kubernetes add-on that provides service discovery and name resolution within a cluster. It allows pods to communicate with each other, and with external services, by translating human-readable service names into IP addresses.
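Conceptually, cluster DNS is just a lookup from a service name of the form <service>.<namespace>.svc.cluster.local to a cluster IP. The sketch below illustrates only that mapping; the IP is invented, and real resolution is done by the cluster DNS server (e.g. CoreDNS), not a dictionary.

```python
# Toy sketch of cluster DNS: service names resolve to (invented) cluster IPs.
records = {
    "my-service.default.svc.cluster.local": "10.96.0.10",
}

def resolve(name):
    """Return the cluster IP for a fully qualified service name, or None."""
    return records.get(name)

ip = resolve("my-service.default.svc.cluster.local")
print(ip)  # → 10.96.0.10
```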

2. Web UI

Web UI add-ons in Kubernetes are additional components that provide graphical interfaces for managing and monitoring the cluster. These tools, such as the Kubernetes Dashboard, provide an easy-to-use web-based interface that allows administrators to interact with the cluster, view resource utilization, manage deployments, and troubleshoot issues.

3. Container Resource Monitoring

Container resource monitoring add-ons are tools or services used in Kubernetes to track and analyze container resource usage, such as CPU, memory, and disk I/O. These add-ons collect time-series metrics from running containers and provide insight into their performance and resource consumption, improving resource allocation, scaling decisions, and troubleshooting.

4. Cluster-level logging

Cluster-level logging is an add-on component in Kubernetes that collects and aggregates log data from all nodes and pods in the cluster. It helps centralize logs, making it easier to monitor, analyze, and troubleshoot applications and infrastructure issues.

References:

- KUBERNETES UNTUK PEMULA. https://github.com/ngoprek-kubernetes/buku-kubernetes-pemula.

- Cluster Architecture. https://kubernetes.io/docs/concepts/architecture.

I think that’s all for this post; next time I may cover Kubernetes objects and the other components.

Install Docker using Ansible

Setup

First, you need to install Ansible. Just follow the official installation guide for your operating system.

After installation, create a new directory called ansible-docker:

Terminal window
$ mkdir ansible-docker && cd ansible-docker

Create a new file called ansible.cfg for the Ansible configuration, and define the inventory file in it:

[defaults]
inventory = hosts
host_key_checking = True
deprecation_warnings = False
collections = ansible.posix, community.general

Then create a new file called hosts, the inventory name defined in ansible.cfg:

Terminal window
[example-server]
0.0.0.0 ansible_user=root

NB: Don’t forget to change the IP Address and host name.

After setting up the Ansible configuration and inventory file, create a YAML file called playbook.yml:

---
- name: Setup Docker on Ubuntu Server 22.04
  hosts: all
  become: true
  remote_user: root
  roles:
    - config
    - docker

Then create the roles directory with two roles:

  • config: in this directory, create a sub-directory called tasks, then add a YAML file called main.yml that runs update & upgrade and installs the required dependencies:
---
- name: Update & Upgrade
  ansible.builtin.apt:
    name: aptitude
    state: present
    update_cache: true

- name: Install dependencies
  ansible.builtin.apt:
    name:
      - net-tools
      - apt-transport-https
      - ca-certificates
      - curl
      - software-properties-common
      - python3-pip
      - virtualenv
      - python3-setuptools
      - gnupg-agent
      - autoconf
      - dpkg-dev
      - file
      - g++
      - gcc
      - libc-dev
      - make
      - pkg-config
      - re2c
      - wget
    state: present
    update_cache: true
  • docker: in this directory, create two sub-directories called tasks and templates.

In the tasks directory, create a new file called main.yml. This file contains the Docker installation, Docker Compose installation, and private registry setup:

---
- name: Add Docker GPG apt Key
  ansible.builtin.apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add repository into sources list
  ansible.builtin.apt_repository:
    repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_lsb.codename }} stable
    state: present
    filename: docker

- name: Install Docker 23.0.1-1
  ansible.builtin.apt:
    name:
      - docker-ce=5:23.0.1-1~ubuntu.22.04~jammy
      - docker-ce-cli=5:23.0.1-1~ubuntu.22.04~jammy
      - containerd.io
    state: present
    update_cache: true

- name: Setup docker user
  ansible.builtin.user:
    name: docker
    groups: docker
    append: true

- name: Install Docker module for Python
  ansible.builtin.pip:
    name: docker

- name: Install Docker-Compose & set permission
  ansible.builtin.get_url:
    url: https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64
    dest: /usr/local/bin/docker-compose
    mode: '0755'

- name: Create Docker-Compose symlink
  ansible.builtin.command:
    cmd: ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
    creates: /usr/bin/docker-compose

- name: Add private registry
  ansible.builtin.template:
    src: daemon.j2
    dest: /etc/docker/daemon.json
    mode: preserve

- name: Restart Docker
  ansible.builtin.service:
    name: docker
    state: restarted
    enabled: true

In the templates directory, create a Jinja file named daemon.j2. This file contains the configuration for the private registry setting (optional):

{
  "insecure-registries": ["http://0.0.0.0:5000"]
}

NB: Fill in the IP using your remote server’s private IP.

After all that setup, your project directory should look like this:

Terminal window
$ tree
.
├── ansible.cfg
├── config
│   └── tasks
│       └── main.yml
├── docker
│   ├── tasks
│   │   └── main.yml
│   └── templates
│       └── daemon.j2
├── hosts
└── playbook.yml

Test & Run

Okay, now check your playbook.yml syntax using this command:

Terminal window
$ ansible-playbook --syntax-check playbook.yml

If there are no errors, run the playbook using this command:

Terminal window
$ ansible-playbook -i hosts playbook.yml

Wait until it finishes:

Terminal window
 ____________________________________________
< PLAY [Setup Docker on Ubuntu Server 22.04] >
 --------------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
 ________________________
< TASK [Gathering Facts] >
 ------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Conclusion

In this post, I showed how to install a specific version of Docker across one or more servers using an Ansible playbook.

Thank you for reading this post. If you have suggestions or questions, please leave them below.

NB: In this case, I simply used the root user and installed Docker on Ubuntu Server 22.04. For the full code, follow this link: ansible-docker.