
Docker

6 posts with the tag “Docker”

K3D: Getting Started with ArgoCD

Cover Image


Intro

ArgoCD is a GitOps tool with a straightforward but powerful objective: to declaratively deploy applications to Kubernetes by managing application resources directly from version control systems, such as Git repositories. Every commit to the repository represents a change, which ArgoCD can apply to the Kubernetes cluster either manually or automatically. This approach ensures that deployment processes are fully controlled through version-controlled files, fostering an explicit and auditable release process.

For example, releasing a new application version involves updating the image tag in the resource files and committing the changes to the repository. ArgoCD syncs with the repository and seamlessly deploys the new version to the cluster.
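For instance, a release can be nothing more than a one-line change to the image tag in a Deployment manifest. A minimal sketch (the guestbook-ui name and the image reference are hypothetical, just for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: guestbook-ui
  template:
    metadata:
      labels:
        app: guestbook-ui
    spec:
      containers:
        - name: guestbook-ui
          # Releasing a new version = bumping this tag and committing the change
          image: registry.example.com/guestbook-ui:1.2.3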

Since ArgoCD itself operates on Kubernetes, it is straightforward to set up and integrates seamlessly with lightweight Kubernetes distributions like K3s. In this tutorial, we will demonstrate how to configure a local Kubernetes cluster using K3D and deploy applications with ArgoCD, utilizing the argocd-example-apps repository as a practical example.

Prerequisites

Before we begin, ensure you have the following installed:

- Docker

- Kubectl

- K3D

- ArgoCD CLI

Step 1: Set Up a K3D Cluster

Create a new Kubernetes cluster using K3D:

Terminal window
$ k3d cluster create argocluster --agents 2
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-argocluster'
INFO[0000] Created image volume k3d-argocluster-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-argocluster-tools'
INFO[0001] Creating node 'k3d-argocluster-server-0'
INFO[0001] Creating node 'k3d-argocluster-agent-0'
INFO[0001] Creating node 'k3d-argocluster-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-argocluster-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.18.0.1 address
INFO[0001] Starting cluster 'argocluster'
INFO[0001] Starting servers...
INFO[0002] Starting node 'k3d-argocluster-server-0'
INFO[0009] Starting agents...
INFO[0009] Starting node 'k3d-argocluster-agent-0'
INFO[0009] Starting node 'k3d-argocluster-agent-1'
INFO[0017] Starting helpers...
INFO[0017] Starting node 'k3d-argocluster-serverlb'
INFO[0024] Injecting records for hostAliases (incl. host.k3d.internal) and for 4 network members into CoreDNS configmap...
INFO[0026] Cluster 'argocluster' created successfully!
INFO[0026] You can now use it like this:
kubectl cluster-info

Verify that your cluster is running:

Terminal window
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-argocluster-agent-0 Ready <none> 63s v1.30.4+k3s1
k3d-argocluster-agent-1 Ready <none> 62s v1.30.4+k3s1
k3d-argocluster-server-0 Ready control-plane,master 68s v1.30.4+k3s1

Step 2: Install ArgoCD

Install ArgoCD in your K3D cluster:

Terminal window
$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
...

Check the status of ArgoCD pods:

Terminal window
$ kubectl get pods -n argocd
NAME READY STATUS RESTARTS AGE
argocd-application-controller-0 1/1 Running 0 29m
argocd-applicationset-controller-684cd5f5cc-78cc8 1/1 Running 0 29m
argocd-dex-server-77c55fb54f-bw956 1/1 Running 0 29m
argocd-notifications-controller-69cd888b56-g7z4r 1/1 Running 0 29m
argocd-redis-55c76cb574-72mdh 1/1 Running 0 29m
argocd-repo-server-584d45d88f-2mdlc 1/1 Running 0 29m
argocd-server-8667f8577-prx6s 1/1 Running 0 29m

Expose the ArgoCD API server locally, then try accessing the dashboard:

Terminal window
$ kubectl port-forward svc/argocd-server -n argocd 8080:443

ArgoCD

Step 3: Configure ArgoCD

Log in to ArgoCD

Retrieve the initial admin password:

Terminal window
$ kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d

Log in using the admin username and the password above.

Log in to the ArgoCD Dashboard

Connect a Git Repository

Just clone the argocd-example-apps repository:

Terminal window
$ git clone https://github.com/argoproj/argocd-example-apps.git

Log in to the ArgoCD API server with the CLI:

Terminal window
$ argocd login localhost:8080
WARNING: server certificate had error: tls: failed to verify certificate: x509: certificate signed by unknown authority. Proceed insecurely (y/n)? y
Username: admin
Password:
'admin:login' logged in successfully
Context 'localhost:8080' updated
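Optionally, once the CLI session is established, you can rotate the generated admin password with the ArgoCD CLI (it prompts for the current and the new password):

Terminal window
$ argocd account update-password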

Create a new ArgoCD application using the repository:

Terminal window
$ argocd app create example-app \
--repo https://github.com/argoproj/argocd-example-apps.git \
--path ./guestbook \
--dest-server https://kubernetes.default.svc \
--dest-namespace default
application 'example-app' created
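The same application can also be declared as an Application resource and applied with kubectl instead of the CLI; a minimal sketch that should be roughly equivalent to the command above:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    path: guestbook
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: default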

Sync the application:

Terminal window
$ argocd app sync example-app
TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
2025-01-10T11:31:11+07:00 Service default guestbook-ui OutOfSync Missing
2025-01-10T11:31:11+07:00 apps Deployment default guestbook-ui OutOfSync Missing
2025-01-10T11:31:11+07:00 Service default guestbook-ui Synced Healthy
2025-01-10T11:31:11+07:00 Service default guestbook-ui Synced Healthy service/guestbook-ui created
2025-01-10T11:31:11+07:00 apps Deployment default guestbook-ui OutOfSync Missing deployment.apps/guestbook-ui created
2025-01-10T11:31:11+07:00 apps Deployment default guestbook-ui Synced Progressing deployment.apps/guestbook-ui created
...
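By default the application only syncs when you trigger it. If you want ArgoCD to apply every new commit automatically, you can switch the app to automated sync, optionally with pruning and self-healing:

Terminal window
$ argocd app set example-app --sync-policy automated --auto-prune --self-heal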

Step 4: Verify the Deployment

Check that the application is deployed successfully:

Terminal window
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
guestbook-ui-649789b49c-zwjt8 1/1 Running 0 5m36s

Access the deployed application by exposing it via a NodePort or LoadBalancer:

Terminal window
$ kubectl port-forward svc/guestbook-ui 8081:80
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80

Guestbook UI

Dashboard App List

Dashboard App

Conclusion

In this tutorial, you’ve set up a local Kubernetes cluster using K3D and deployed applications with ArgoCD. This setup provides a simple and powerful way to practice GitOps workflows locally. By leveraging tools like ArgoCD, you can ensure your deployments are consistent, auditable, and declarative. Happy GitOps-ing!

References

- GitOps on a Laptop with K3D and ArgoCD. https://www.sokube.io/en/blog/gitops-on-a-laptop-with-k3d-and-argocd-en.

- Take Argo CD for a spin with K3s and k3d. https://www.bekk.christmas/post/2020/13/take-argo-cd-for-a-spin-with-k3s-and-k3d.

- Application Deploy to Kubernetes with Argo CD and K3d. https://yashguptaa.medium.com/application-deploy-to-kubernetes-with-argo-cd-and-k3d-8e29cf4f83ee.

K3D: Monitoring Your Service using Kubernetes Dashboard or Octant

Cover

K3D is a lightweight wrapper around k3s that allows you to run Kubernetes clusters inside Docker containers. While K3D is widely used for local development and testing, effective monitoring of services running on Kubernetes clusters is essential for debugging, performance tuning, and understanding resource usage.

In this blog, I will explore two popular monitoring tools for Kubernetes: Kubernetes Dashboard, the official web-based UI for Kubernetes, and Octant, a local, real-time, standalone dashboard developed by VMware. Both tools have unique strengths, and this guide will help you understand when to use one over the other.

Setting Up Kubernetes Dashboard On K3D

First you need to create a cluster using k3d cluster create:

Terminal window
$ k3d cluster create dashboard --servers 1 --agents 2
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-dashboard'
INFO[0000] Created image volume k3d-dashboard-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-dashboard-tools'
INFO[0001] Creating node 'k3d-dashboard-server-0'
INFO[0001] Creating node 'k3d-dashboard-agent-0'
INFO[0001] Creating node 'k3d-dashboard-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-dashboard-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.18.0.1 address
INFO[0001] Starting cluster 'dashboard'
INFO[0001] Starting servers...
INFO[0001] Starting node 'k3d-dashboard-server-0'
INFO[0008] Starting agents...
INFO[0008] Starting node 'k3d-dashboard-agent-0'
INFO[0008] Starting node 'k3d-dashboard-agent-1'
INFO[0015] Starting helpers...
INFO[0016] Starting node 'k3d-dashboard-serverlb'
INFO[0022] Injecting records for hostAliases (incl. host.k3d.internal) and for 4 network members into CoreDNS configmap...
INFO[0024] Cluster 'dashboard' created successfully!
INFO[0024] You can now use it like this:
kubectl cluster-info

Next, deploy Kubernetes Dashboard:

Terminal window
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Ensure the pods are running.

Terminal window
$ kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-795895d745-kcbkw 1/1 Running 0 10m
kubernetes-dashboard-56cf4b97c5-fg92n 1/1 Running 0 10m

Next, create a service account and bind a role. To access the dashboard, you need a service account with the proper permissions; create one together with a cluster role binding using the following YAML:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard

Apply this configuration:

Terminal window
$ kubectl apply -f admin-user.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Then, retrieve the token for login using:

Terminal window
$ kubectl -n kubernetes-dashboard create token admin-user
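The token created this way is short-lived. On recent kubectl versions you can request a longer validity period with the --duration flag, for example:

Terminal window
$ kubectl -n kubernetes-dashboard create token admin-user --duration=24h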

Use kubectl proxy to access the dashboard:

Terminal window
$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Finally, open your browser and navigate to:

Terminal window
http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

Login to Kubernetes Dashboard

See the services

Note: Don’t forget to copy and paste the token when prompted.

Setting Up Octant on K3D

First, you need to install Octant. You can install it using a package manager or download it directly from the official releases. For example, on macOS, you can use Homebrew:

Terminal window
$ brew install octant

On Linux, just download the appropriate binary and move it to your PATH:

Terminal window
$ wget https://github.com/vmware-tanzu/octant/releases/download/v0.25.1/octant_0.25.1_Linux-64bit.tar.gz
$ tar -xvzf octant_0.25.1_Linux-64bit.tar.gz && mv octant_0.25.1_Linux-64bit octant
$ rm octant_0.25.1_Linux-64bit.tar.gz

Then, to start Octant, simply run the binary:

Terminal window
$ cd octant
$ ./octant

Finally, you can see the dashboard:

Octant Dashboard

Comparison: Kubernetes Dashboard vs Octant

| Feature | Kubernetes Dashboard | Octant |
| --- | --- | --- |
| Installation | Requires deployment on the cluster | Local installation |
| Access | Via web proxy | Localhost |
| Real-Time Updates | Partial (requires manual refresh) | Full real-time updates |
| Ease of Setup | Moderate (requires token and RBAC) | Easy (just run the binary) |

Conclusion

Both Kubernetes Dashboard and Octant offer valuable features for monitoring Kubernetes clusters in K3D. If you need a quick and easy way to monitor your local cluster with minimal setup, Octant is a great choice. On the other hand, if you want an experience closer to managing a production environment, Kubernetes Dashboard is the better option.


Exploring K3S on Docker using K3D

Cover Image

In this post, I’ll show you how to start with K3D, an awesome tool for running lightweight Kubernetes clusters using K3S on Docker. I hope this post will help you quickly set up and understand K3D. Let’s dive in!

What is K3S?

Before starting with K3D we need to know about K3S. Developed by Rancher Labs, K3S is a lightweight Kubernetes distribution designed for IoT and edge environments. It is a fully conformant Kubernetes distribution but optimized to work in resource-constrained settings by reducing its resource footprint and dependencies.

Key highlights of K3S include:

- Optimized for Edge: Ideal for small clusters and resource-limited environments.

- Built-In Tools: Networking (Flannel), ServiceLB Load-Balancer controller and Ingress (Traefik) are included, minimizing setup complexity.

- Compact Design: K3S simplifies Kubernetes by bundling everything into a single binary and removing unnecessary components like legacy APIs.

Now let’s dive into K3D.

What is K3D?

K3D acts as a wrapper for K3S, making it possible to run K3S clusters inside Docker containers. It provides a convenient way to manage these clusters, offering speed, simplicity, and scalability for local Kubernetes environments.

Docker meme

Here’s why K3D is popular:

- Ease of Use: Quickly spin up and tear down clusters with simple commands.

- Resource Efficiency: Run multiple clusters on a single machine without significant overhead.

- Development Focus: Perfect for local development, CI/CD pipelines, and testing.

Let’s move on to how you can set up K3D and start using it.

Requirements

Before starting with K3D, make sure you have installed the following prerequisites based on your operating system.

- Docker

Follow the Docker installation guide for your operating system. Alternatively, you can simplify the process with these commands:

Terminal window
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ sudo usermod -aG docker $USER #add user to the docker group
$ docker version
Client: Docker Engine - Community
Version: 27.4.0
API version: 1.47
Go version: go1.22.10
Git commit: bde2b89
Built: Sat Dec 7 10:38:40 2024
OS/Arch: linux/amd64
Context: default
...

- Kubectl

The Kubernetes command-line tool, kubectl, is required to interact with your K3D cluster. Install it by following the instructions in the official Kubernetes documentation, or you can follow these steps:

Terminal window
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install kubectl /usr/local/bin/kubectl
$ kubectl version --client
Client Version: v1.32.0
Kustomize Version: v5.5.0

- K3D

Install K3D by referring to the official documentation or using the following command:

Terminal window
$ curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
$ k3d --version
k3d version v5.7.4
k3s version v1.30.4-k3s1 (default)

NB: I use Ubuntu 22.04 to install the requirements.

Create Your First Cluster

Basic

Terminal window
$ k3d cluster create mycluster
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-mycluster'
INFO[0000] Created image volume k3d-mycluster-images
INFO[0000] Starting new tools node...
INFO[0001] Creating node 'k3d-mycluster-server-0'
INFO[0002] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.7.4'
INFO[0005] Pulling image 'docker.io/rancher/k3s:v1.30.4-k3s1'
INFO[0006] Starting node 'k3d-mycluster-tools'
INFO[0030] Creating LoadBalancer 'k3d-mycluster-serverlb'
INFO[0033] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.7.4'
INFO[0045] Using the k3d-tools node to gather environment information
INFO[0045] HostIP: using network gateway 172.18.0.1 address
INFO[0045] Starting cluster 'mycluster'
INFO[0045] Starting servers...
INFO[0045] Starting node 'k3d-mycluster-server-0'
INFO[0053] All agents already running.
INFO[0053] Starting helpers...
INFO[0053] Starting node 'k3d-mycluster-serverlb'
INFO[0060] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0062] Cluster 'mycluster' created successfully!
INFO[0062] You can now use it like this:
kubectl cluster-info

This command will create a cluster named mycluster with one control plane node.

Once the cluster is created, check its status using kubectl:

Terminal window
$ kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:43355
CoreDNS is running at https://0.0.0.0:43355/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:43355/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

To ensure that the nodes in the cluster are active, run:

Terminal window
$ kubectl get nodes --output wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3d-mycluster-server-0 Ready control-plane,master 5m14s v1.30.4+k3s1 172.18.0.2 <none> K3s v1.30.4+k3s1 6.8.0-50-generic containerd://1.7.20-k3s1

To list all the clusters created with K3D, use the following command:

Terminal window
$ k3d cluster list
NAME SERVERS AGENTS LOADBALANCER
mycluster 1/1 0/0 true

To stop, start, or delete your cluster, use the following commands:

Terminal window
$ k3d cluster stop mycluster
INFO[0000] Stopping cluster 'mycluster'
INFO[0020] Stopped cluster 'mycluster'
Terminal window
$ k3d cluster start mycluster
INFO[0000] Using the k3d-tools node to gather environment information
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-mycluster-tools'
INFO[0001] HostIP: using network gateway 172.18.0.1 address
INFO[0001] Starting cluster 'mycluster'
INFO[0001] Starting servers...
INFO[0001] Starting node 'k3d-mycluster-server-0'
INFO[0005] All agents already running.
INFO[0005] Starting helpers...
INFO[0005] Starting node 'k3d-mycluster-serverlb'
INFO[0012] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0014] Started cluster 'mycluster'
Terminal window
$ k3d cluster delete mycluster
INFO[0000] Deleting cluster 'mycluster'
INFO[0001] Deleting cluster network 'k3d-mycluster'
INFO[0001] Deleting 1 attached volumes...
INFO[0001] Removing cluster details from default kubeconfig...
INFO[0001] Removing standalone kubeconfig file (if there is one)...
INFO[0001] Successfully deleted cluster mycluster!

If you want to start a cluster with extra server and worker nodes, then extend the creation command like this:

Terminal window
$ k3d cluster create mycluster --servers 2 --agents 4

After creating the cluster, you can verify its status using these commands:

Terminal window
$ k3d cluster list
NAME SERVERS AGENTS LOADBALANCER
mycluster 2/2 4/4 true
Terminal window
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-mycluster-agent-0 Ready <none> 51s v1.30.4+k3s1
k3d-mycluster-agent-1 Ready <none> 52s v1.30.4+k3s1
k3d-mycluster-agent-2 Ready <none> 53s v1.30.4+k3s1
k3d-mycluster-agent-3 Ready <none> 51s v1.30.4+k3s1
k3d-mycluster-server-0 Ready control-plane,etcd,master 81s v1.30.4+k3s1
k3d-mycluster-server-1 Ready control-plane,etcd,master 64s v1.30.4+k3s1

Bootstrapping Cluster

Bootstrapping a cluster with configuration files allows you to automate and customize the process of setting up a K3D cluster. By using a configuration file, you can easily specify cluster details such as node count, roles, ports, volumes, and more, making it easy to recreate or modify clusters.

Here’s an example of a basic cluster configuration file my-cluster.yaml that sets up a K3D cluster with multiple nodes:

apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: my-cluster
servers: 1
agents: 2
image: rancher/k3s:v1.30.4-k3s1
ports:
  - port: 30000-30100:30000-30100
    nodeFilters:
      - server:*
options:
  k3s:
    extraArgs:
      - arg: --disable=traefik
        nodeFilters:
          - server:*

A K3D config to create a cluster named my-cluster with 1 server, 2 agents, K3S version v1.30.4-k3s1, host-to-server port mapping (30000-30100), and Traefik disabled on server nodes.

Terminal window
$ k3d cluster create --config my-cluster.yaml

The result after creation:

Terminal window
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-my-cluster-agent-0 Ready <none> 14s v1.30.4+k3s1
k3d-my-cluster-agent-1 Ready <none> 15s v1.30.4+k3s1
k3d-my-cluster-server-0 Ready control-plane,master 19s v1.30.4+k3s1
Terminal window
$ docker ps
CONTAINER ID   IMAGE                            COMMAND                  CREATED              STATUS          PORTS                                                                                                     NAMES
9c7a53f40065   ghcr.io/k3d-io/k3d-proxy:5.7.4   "/bin/sh -c nginx-pr…"   About a minute ago   Up 46 seconds   80/tcp, 0.0.0.0:30000-30100->30000-30100/tcp, :::30000-30100->30000-30100/tcp, 0.0.0.0:34603->6443/tcp   k3d-my-cluster-serverlb
41f544fa9f8e   rancher/k3s:v1.30.4-k3s1         "/bin/k3d-entrypoint…"   About a minute ago   Up 55 seconds                                                                                                             k3d-my-cluster-agent-1
48acdbaa0734   rancher/k3s:v1.30.4-k3s1         "/bin/k3d-entrypoint…"   About a minute ago   Up 55 seconds                                                                                                             k3d-my-cluster-agent-0
0e2799145367   rancher/k3s:v1.30.4-k3s1         "/bin/k3d-entrypoint…"   About a minute ago   Up 59 seconds                                                                                                             k3d-my-cluster-server-0
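Because the config maps host ports 30000-30100 to the cluster, any NodePort service pinned inside that range should be reachable directly on localhost. A quick sketch to verify this (the web deployment and port 30080 are arbitrary examples):

Terminal window
$ kubectl create deployment web --image=nginx
$ kubectl expose deployment web --type=NodePort --port=80
$ kubectl patch svc web -p '{"spec":{"ports":[{"port":80,"nodePort":30080}]}}'
$ curl -I http://localhost:30080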

Create Simple Deployment

Once your K3D cluster is up and running, you can deploy applications onto the cluster. A deployment in Kubernetes ensures that a specified number of pod replicas are running, and it manages updates to those pods.

Use the kubectl create deployment command to define and start a deployment. For example:

Terminal window
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

Check the deployment status using the kubectl get deployment command:

Terminal window
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/1 1 0 2m58s

Expose the deployment:

Terminal window
$ kubectl expose deployment nginx --port=80 --type=LoadBalancer
service/nginx exposed

Verify the Pod and Service:

Terminal window
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-bf5d5cf98-p6mpj 1/1 Running 0 69s
Terminal window
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 95s
nginx LoadBalancer 10.43.122.4 172.18.0.2,172.18.0.3,172.18.0.4 80:30893/TCP 66s

Try to access the service from a browser using the LoadBalancer EXTERNAL-IP:

Terminal window
http://172.18.0.2:30893

Access service
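If the Docker network IPs are not reachable from your host (for example on macOS or Windows), a port-forward is a simple alternative; a minimal sketch, where 8080 is just an arbitrary local port:

Terminal window
$ kubectl port-forward svc/nginx 8080:80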

Conclusion

K3D simplifies the process of running Kubernetes clusters with K3S on Docker, making it ideal for local development and testing. By setting up essential tools like Docker, kubectl, and K3D, you can easily create and manage clusters. You can deploy applications with just a few commands, expose them, and access them locally. K3D offers a flexible and lightweight solution for Kubernetes, allowing developers to experiment and work with clusters in an efficient way.

Thank you for taking the time to read this guide. I hope it was helpful in getting you started with K3D and Kubernetes!😀

Automating Flask & PostgreSQL Deployment on KVM with Terraform & Ansible

Cover Image

😀 Intro

Hi, in this post we will use Libvirt with Terraform to provision two KVM virtual machines locally, and after that we will deploy a Flask app & PostgreSQL using Ansible.


📝 Project Architecture

So we will create two VMs using Terraform, then deploy a Flask project and its database using Ansible.

Project Architecture

🔨 Requirements

I used Ubuntu 22.04 LTS as the OS for this project. If you’re using a different OS, please make the necessary adjustments when installing the required dependencies.

The major prerequisite for this setup is the KVM hypervisor, so you need to install KVM on your system. If you use Ubuntu, you can follow these steps:

Terminal window
$ sudo apt -y install bridge-utils cpu-checker libvirt-clients libvirt-daemon qemu qemu-kvm

Execute the following command to make sure your processor supports virtualisation capabilities:

Terminal window
$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

Install Terraform

Terminal window
$ wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
$ sudo apt update && sudo apt install terraform -y

Verify installation:

Terminal window
$ terraform version
Terraform v1.9.8
on linux_amd64

Install Ansible

Terminal window
$ sudo apt update
$ sudo apt install software-properties-common
$ sudo add-apt-repository --yes --update ppa:ansible/ansible
$ sudo apt install ansible -y

Verify installation:

Terminal window
$ ansible --version
ansible [core 2.15.1]
...

Create KVM

We will use the libvirt provider with Terraform to deploy KVM virtual machines.

Create main.tf and specify the provider and version you want to use:

terraform {
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.8.1"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}

Thereafter, run the terraform init command to initialize the environment:

$ terraform init
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/template from the dependency lock file
- Reusing previous version of dmacvicar/libvirt from the dependency lock file
- Reusing previous version of hashicorp/null from the dependency lock file
- Using previously-installed hashicorp/template v2.2.0
- Using previously-installed dmacvicar/libvirt v0.8.1
- Using previously-installed hashicorp/null v3.2.3
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Now create our variables.tf. This file defines inputs for the libvirt disk pool path, the Ubuntu 20.04 image URL used as the OS for the VMs, and a list of VM hostnames.

variable "libvirt_disk_path" {
description = "path for libvirt pool"
default = "default"
}
variable "ubuntu_20_img_url" {
description = "ubuntu 20.04 image"
default = "https://cloud-images.ubuntu.com/releases/focal/release/ubuntu-20.04-server-cloudimg-amd64.img"
}
variable "vm_hostnames" {
description = "List of VM hostnames"
default = ["vm1", "vm2"]
}

Let’s update our main.tf:

resource "null_resource" "cache_image" {
provisioner "local-exec" {
command = "wget -O /tmp/ubuntu-20.04.qcow2 ${var.ubuntu_20_img_url}"
}
}
resource "libvirt_volume" "base" {
name = "base.qcow2"
source = "/tmp/ubuntu-20.04.qcow2"
pool = var.libvirt_disk_path
format = "qcow2"
depends_on = [null_resource.cache_image]
}
resource "libvirt_volume" "ubuntu20-qcow2" {
count = length(var.vm_hostnames)
name = "ubuntu20-${count.index}.qcow2"
base_volume_id = libvirt_volume.base.id
pool = var.libvirt_disk_path
size = 10737418240 # 10GB
}
data "template_file" "user_data" {
count = length(var.vm_hostnames)
template = file("${path.module}/config/cloud_init.yml")
}
data "template_file" "network_config" {
count = length(var.vm_hostnames)
template = file("${path.module}/config/network_config.yml")
}
resource "libvirt_cloudinit_disk" "commoninit" {
count = length(var.vm_hostnames)
name = "commoninit-${count.index}.iso"
user_data = data.template_file.user_data[count.index].rendered
network_config = data.template_file.network_config[count.index].rendered
pool = var.libvirt_disk_path
}
resource "libvirt_domain" "domain-ubuntu" {
count = length(var.vm_hostnames)
name = var.vm_hostnames[count.index]
memory = "1024"
vcpu = 1
cloudinit = libvirt_cloudinit_disk.commoninit[count.index].id
network_interface {
network_name = "default"
wait_for_lease = true
hostname = var.vm_hostnames[count.index]
}
console {
type = "pty"
target_port = "0"
target_type = "serial"
}
console {
type = "pty"
target_type = "virtio"
target_port = "1"
}
disk {
volume_id = libvirt_volume.ubuntu20-qcow2[count.index].id
}
graphics {
type = "spice"
listen_type = "address"
autoport = true
}
}

The script provisions multiple KVM VMs using the libvirt provider. It downloads an Ubuntu 20.04 base image, clones it for each VM, configures cloud-init for user and network settings, and deploys VMs with the specified hostnames, 1GB of memory, and SPICE graphics. The setup dynamically adapts based on the number of hostnames provided in var.vm_hostnames.

As I’ve mentioned, I’m using cloud-init, so let’s set up the network config and cloud-init files under the config directory:

Terminal window
$ mkdir config/

Then create config/cloud_init.yml; just make sure you configure your public SSH key for SSH access in the config:

#cloud-config
runcmd:
  - sed -i '/PermitRootLogin/d' /etc/ssh/sshd_config
  - echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
  - systemctl restart sshd

ssh_pwauth: true
disable_root: false
chpasswd:
  list: |
    root:cloudy24
  expire: false

users:
  - name: ubuntu
    gecos: ubuntu
    groups:
      - sudo
    sudo:
      - ALL=(ALL) NOPASSWD:ALL
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh_authorized_keys:
      - ssh-rsa AAAA...

And then network config, in config/network_config.yml:

version: 2
ethernets:
  ens3:
    dhcp4: true

Our project structure should look like this:

Terminal window
$ tree
.
├── config
│   ├── cloud_init.yml
│   └── network_config.yml
├── main.tf
└── variables.tf
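Before planning, you can optionally format and validate the configuration with the standard Terraform commands:

Terminal window
$ terraform fmt
$ terraform validate
Success! The configuration is valid.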

Now run a plan to see what will be done:

Terminal window
$ terraform plan
data.template_file.user_data[1]: Reading...
data.template_file.user_data[0]: Reading...
data.template_file.network_config[1]: Reading...
data.template_file.network_config[0]: Reading...
...
Plan: 8 to add, 0 to change, 0 to destroy

Then run terraform apply to create the resources:

$ terraform apply
...
null_resource.cache_image: Creation complete after 10m36s [id=4239391010009470471]
libvirt_volume.base: Creating...
libvirt_volume.base: Creation complete after 3s [id=/var/lib/libvirt/images/base.qcow2]
libvirt_volume.ubuntu20-qcow2[1]: Creating...
libvirt_volume.ubuntu20-qcow2[0]: Creating...
libvirt_volume.ubuntu20-qcow2[1]: Creation complete after 0s [id=/var/lib/libvirt/images/ubuntu20-1.qcow2]
libvirt_volume.ubuntu20-qcow2[0]: Creation complete after 0s [id=/var/lib/libvirt/images/ubuntu20-0.qcow2]
libvirt_domain.domain-ubuntu[1]: Creating...
...
libvirt_domain.domain-ubuntu[1]: Creation complete after 51s [id=6221f782-48b7-49a4-9eb9-fc92970f06a2]
Apply complete! Resources: 8 added, 0 changed, 0 destroyed

Verify the VM creation using the virsh command:

Terminal window
$ virsh list
Id Name State
----------------------
1 vm1 running
2 vm2 running

Get the instances’ IP addresses:

Terminal window
$ virsh net-dhcp-leases --network default
Expiry Time MAC address Protocol IP address Hostname Client ID or DUID
-----------------------------------------------------------------------------------------------------------------------------------------------
2024-12-09 19:50:00 52:54:00:2e:0e:86 ipv4 192.168.122.19/24 vm1 ff:b5:5e:67:ff:00:02:00:00:ab:11:b0:43:6a:d8:bc:16:30:0d
2024-12-09 19:50:00 52:54:00:86:d4:ca ipv4 192.168.122.15/24 vm2 ff:b5:5e:67:ff:00:02:00:00:ab:11:39:24:8c:4a:7e:6a:dd:78

Try to access the VMs using the ubuntu user:

Terminal window
$ ssh ubuntu@192.168.122.15
The authenticity of host '192.168.122.15 (192.168.122.15)' can't be established.
ED25519 key fingerprint is SHA256:Y20zaCcrlOZvPTP+/qLLHc7vJIOca7QjTinsz9Bj6sk.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.122.15' (ED25519) to the list of known hosts.
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-200-generic x86_64)
...
ubuntu@ubuntu:~$

Create Ansible Playbook

Now let’s create the Ansible playbook to deploy Flask & PostgreSQL on Docker. First, create an ansible directory and an ansible.cfg file:

Terminal window
$ mkdir ansible && cd ansible
Terminal window
[defaults]
inventory = hosts
host_key_checking = True
deprecation_warnings = False
collections = ansible.posix, community.general, community.postgresql

Then create an inventory file called hosts:

Terminal window
[vm1]
192.168.122.19 ansible_user=ubuntu
[vm2]
192.168.122.15 ansible_user=ubuntu

Check our VMs using the Ansible ping module:

Terminal window
$ ansible -m ping all
192.168.122.15 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}
192.168.122.19 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"ping": "pong"
}

Now create playbook.yml and the roles; this playbook will install and configure Docker, Flask, and PostgreSQL:

Terminal window
---
- name: Deploy Flask
  hosts: vm1
  become: true
  remote_user: ubuntu
  roles:
    - docker
    - flask

- name: Deploy Postgresql
  hosts: vm2
  become: true
  remote_user: ubuntu
  roles:
    - docker
    - psql

Playbook to install Docker

Now create new directory called roles/docker:

Terminal window
$ mkdir roles
$ mkdir roles/docker

Create a new directory inside docker called tasks, then create a new file main.yml. This file will install Docker & Docker Compose:

Terminal window
$ mkdir docker/tasks
$ vim main.yml
---
- name: Run update
  ansible.builtin.apt:
    name: aptitude
    state: latest
    update_cache: true

- name: Install dependencies
  ansible.builtin.apt:
    name:
      - net-tools
      - apt-transport-https
      - ca-certificates
      - curl
      - software-properties-common
      - python3-pip
      - virtualenv
      - python3-setuptools
      - gnupg-agent
      - autoconf
      - dpkg-dev
      - file
      - g++
      - gcc
      - libc-dev
      - make
      - pkg-config
      - re2c
      - wget
    state: present
    update_cache: true

- name: Add Docker GPG apt Key
  ansible.builtin.apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add repository into sources list
  ansible.builtin.apt_repository:
    repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_lsb.codename }} stable
    state: present
    filename: docker

- name: Install Docker
  ansible.builtin.apt:
    name:
      - docker-ce
      - docker-ce-cli
    state: present
    update_cache: true

- name: Add non-root to docker group
  user:
    name: ubuntu
    groups: [docker]
    append: true

- name: Install Docker module for Python
  ansible.builtin.pip:
    name: docker

- name: Install Docker-Compose
  ansible.builtin.get_url:
    url: https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64
    dest: /usr/local/bin/docker-compose
    mode: '755'

- name: Create Docker-Compose symlink
  ansible.builtin.command:
    cmd: ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
    creates: /usr/bin/docker-compose

- name: Restart Docker
  ansible.builtin.service:
    name: docker
    state: restarted
    enabled: true

Playbook to install and configure PostgreSQL

Then create a new directory called psql and create subdirectories called vars, templates & tasks:

Terminal window
$ mkdir psql
$ mkdir psql/vars
$ mkdir psql/templates
$ mkdir psql/tasks

After that, in vars, create main.yml. These are variables used to set username, passwords, etc:

---
db_port: 5433
db_user: admin
db_password: dbPassword
db_name: todo

Next, we will create a Jinja template called docker-compose.yml.j2. With this file we will create the PostgreSQL container:

version: '3.7'

services:
  postgres:
    image: postgres:13
    container_name: db
    restart: unless-stopped
    ports:
      - {{ db_port }}:5432
    networks:
      - flask_network
    environment:
      - POSTGRES_USER={{ db_user }}
      - POSTGRES_PASSWORD={{ db_password }}
      - POSTGRES_DB={{ db_name }}
    volumes:
      - postgres_data:/var/lib/postgresql/data

networks:
  flask_network:

volumes:
  postgres_data:

Next, create main.yml in tasks. We will copy docker-compose.yml.j2 to the VM and run it using Docker Compose:

---
- name: Add Postgresql Compose
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: /home/ubuntu/docker-compose.yml
    mode: preserve

- name: Docker-compose up
  ansible.builtin.shell: docker-compose up -d --build
  args:
    chdir: /home/ubuntu
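After this role has run on VM2, you can optionally check from your workstation that PostgreSQL is reachable, assuming the postgresql-client package is installed locally and using the example credentials defined in vars:

Terminal window
$ psql -h 192.168.122.15 -p 5433 -U admin -d todo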

Playbook to deploy Flask App

First, you need to create a directory called flask, then create its subdirectories:

Terminal window
$ mkdir flask
$ mkdir flask/vars
$ mkdir flask/templates
$ mkdir flask/tasks

Next, add main.yml to vars. This file reuses the PostgreSQL variables from before, with the addition of the IP address of VM2 (the database VM):

---
db_port: 5433
db_user: admin
db_password: dbPassword
db_name: todo
db_host: 192.168.122.15

Next, create config.py.j2 in templates. This file will replace the old config file from the Flask project:

DEV_DB = 'sqlite:///task.db'
pg_user = "{{ db_user }}"
pg_pass = "{{ db_password }}"
pg_db = "{{ db_name }}"
pg_port = {{ db_port }}
pg_host = "{{ db_host }}"
PROD_DB = f'postgresql://{pg_user}:{pg_pass}@{pg_host}:{pg_port}/{pg_db}'

Next, create docker-compose.yml.j2 in templates. With this file we will create the Flask container using Docker Compose:

version: '3.7'

services:
  flask:
    build: flask
    container_name: app
    restart: unless-stopped
    ports:
      - 5000:5000
    environment:
      - DEBUG=0
    networks:
      - flask_network

networks:
  flask_network:

Next, create main.yml in tasks. With this file we will clone the Flask project, add the compose file, replace config.py, and create a new container using Docker Compose:

---
- name: Clone Flask project
  changed_when: false
  ansible.builtin.git:
    repo: https://github.com/danielcristho/Flask_TODO.git
    dest: /home/ubuntu/Flask_TODO
    clone: true

- name: Add Flask Compose
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: /home/ubuntu/Flask_TODO/docker-compose.yml
    mode: preserve

- name: Update config.py
  ansible.builtin.template:
    src: config.py.j2
    dest: /home/ubuntu/Flask_TODO/flask/app/config.py
    mode: preserve

- name: Run docker-compose up
  shell: docker-compose up -d --build
  args:
    chdir: /home/ubuntu/Flask_TODO

Our project structure should look like this:

Terminal window
├── ansible-flask-psql
│   ├── ansible.cfg
│   ├── hosts
│   ├── playbook.yml
│   └── roles
│       ├── docker
│       │   └── tasks
│       │       └── main.yml
│       ├── flask
│       │   ├── tasks
│       │   │   └── main.yml
│       │   ├── templates
│       │   │   ├── config.py.j2
│       │   │   └── docker-compose.yml.j2
│       │   └── vars
│       │       └── main.yml
│       └── psql
│           ├── tasks
│           │   └── main.yml
│           ├── templates
│           │   └── docker-compose.yml.j2
│           └── vars
│               └── main.yml
├── libvirt-kvm
│   ├── config
│   │   ├── cloud_init.yml
│   │   └── network_config.yml
│   ├── main.tf
│   ├── variables.tf

Run Playbook and testing

Finally, let’s run ansible-playbook to deploy PostgreSQL and Flask:

Terminal window
$ ls
ansible.cfg hosts playbook.yml roles
$ ansible-playbook -i hosts playbook.yml
_____________________
< PLAY [Deploy Flask] >
---------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
________________________
< TASK [Gathering Facts] >
------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| |
...
____________
< PLAY RECAP >
------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
192.168.122.15 : ok=13 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
192.168.122.19 : ok=15 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

After it completes, just make sure there are no errors. You will see that two containers have been created: Flask on VM1 and PostgreSQL on VM2:

Terminal window
$ docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED              STATUS              PORTS                                         NAMES
f3978427e34c   flask_todo_flask   "python run.py"          About a minute ago   Up About a minute   0.0.0.0:5000->5000/tcp, :::5000->5000/tcp     app
$ docker ps
fbebdff75a6e   postgres:13        "docker-entrypoint.s…"   4 minutes ago        Up 4 minutes        0.0.0.0:5433->5432/tcp, [::]:5433->5432/tcp   db

Try to access the app in a browser by visiting http://<vm1_ip>:5000:

Access flask app using browser

Try adding a new task; the data will be written to the database:

pgsql content

Conclusion

Finally, we have created two VMs and deployed the Flask project together with its database.

Thank you for reading this article. Feel free to leave a comment if you have any questions, suggestions, or feedback.

NB: Project Repo: danielcristho/that-i-write/terrafrom-ansible-flask

Kubernetes 102: Kubernetes & Containerization

Kubernetes Cover

Okay, let’s start again. Kubernetes and containers are two things that cannot be separated from each other.

Understanding containers is essential in the context of Kubernetes, because Kubernetes is an orchestration system for managing containers. Knowing about containers will help you understand how Kubernetes organizes and distributes containerized applications, optimizes resource usage, and ensures isolation and failure recovery.

🤔 Why Container?

VM Meme

The old way of deploying an application was to install it on a machine (VM) and then install the various libraries and dependencies it needs using a package manager. This creates tight coupling between executables (files that can be run), configuration, and other dependencies, and it takes a lot of time.

Container meme

A new way to overcome this problem is by deploying the applications using containers. By the way, containers are virtualization at the operating system level, not at the hardware level.

Containers provide isolation, both between containers and from the machine on which they run. You can’t see a process running in another container, because each container has its own file system.

There are advantages and disadvantages when using containers:

➕ Pros

- Portability, containers encapsulate all dependencies and configurations, allowing applications to run consistently across different environments.

- Efficiency, containers share the host OS kernel, making them more lightweight and resource-efficient compared to virtual machines.

- Scalability, containers can be easily scaled up or down to handle varying loads, and orchestration tools like Kubernetes automate this process.

- Isolation, containers provide process and resource isolation, enhancing security and stability by limiting the impact of failures to individual containers.

➖ Cons

- Complexity, managing containerized applications can be complex, especially at scale, requiring robust orchestration and monitoring tools.

- Security, containers share the host OS kernel, which can pose security risks if a vulnerability in the kernel is exploited.

- Compatibility, not all applications are suitable for containerization, especially those with complex dependencies or those that require direct access to hardware.

📦 Container Architecture

Container Architecture

Containers have several main components you need to know about, such as:

1. Container Runtime

A container runtime is a software component responsible for running containers. It provides the tools and services necessary to create, start, stop, and manage the lifecycle of containers. Container runtimes ensure that containers are isolated from each other and from the host system while sharing the host operating system kernel. There are various kinds of runtimes, such as Docker, containerd, CRI-O, and several other types of runtime implementations in support of the Kubernetes CRI (Container Runtime Interface).

2. Container Image

A container image is a lightweight, standalone, executable software package that includes everything needed to run a piece of software, including code, runtime, system tools, libraries, and settings. Container images are used to create containers.

3. Application Container

An application container is a running instance created from a container image, including any code changes. When the code changes, you build a new image and re-create the container from it.

☸️ Kubernetes Architecture

Kubernetes clusters consist of worker nodes that run applications in containers. Each cluster has at least one worker node. Pods are the smallest deployable workload units of an application. The control plane manages the worker nodes and pods in the cluster.

Kubernetes Architecture

🎛 Control Plane Components

1. Kube-apiserver

The control plane component that exposes the Kubernetes API and serves as the cluster’s front end; it is designed to scale horizontally.

2. Etcd

Etcd is a consistent key-value store used as the backing data store for Kubernetes clusters, so you need to pay attention to how it is backed up.

3. Kube-scheduler

The Kube scheduler is a core component of Kubernetes, responsible for assigning pods to nodes within a cluster. It ensures efficient resource utilization and adherence to various scheduling policies by filtering out nodes that don’t meet the pod’s requirements, scoring the remaining nodes based on criteria such as resource availability and affinity rules, and then binding the pod to the highest-scoring node. This process ensures that pods are placed on appropriate nodes, maintaining a balanced distribution of resources and adhering to constraints and priorities.
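As an illustration, a pod spec like the following (a hypothetical example) gives the scheduler both resource requests to filter on and a node affinity rule to satisfy:

apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: "250m"      # nodes without this much allocatable CPU are filtered out
          memory: "128Mi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]   # only nodes labeled disktype=ssd are considered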

4. Kube-controller-manager

The Kube controller manager is a key component of Kubernetes, responsible for running various controllers that regulate the state of the cluster. It includes controllers that handle tasks such as node management, replication, and endpoint updates.

5. Cloud-controller-manager

The cloud controller manager is a component in Kubernetes that integrates the cluster with the underlying cloud services. It runs controllers specific to cloud environments that handle tasks such as node lifecycle, route management, and service load balancers.

💻 Node Components

1. Kubelet

Kubelet is a critical agent that runs on each node in a Kubernetes cluster and is responsible for managing the state of the pods on that node. It ensures that the containers described in the pod specifications are running and healthy, by interacting with the container runtime.

2. Kube-proxy

Kube-proxy is a network component in Kubernetes that runs on each node and is responsible for maintaining network rules and facilitating communication between services. It handles traffic routing to ensure that requests are properly routed to the appropriate pods, supporting both internal and external service access. Kube-proxy uses methods such as iptables, IPVS, or userspace proxying to manage and balance network traffic, ensuring reliable and efficient connectivity within the Kubernetes cluster.

3. Container runtime

A container runtime is software that manages the lifecycle of containers, including creating, starting, stopping, and deleting them. It provides the tools and services necessary to run containerized applications, ensuring that they are isolated from each other and the host system. Popular container runtimes include Docker, containerd, rktlet, and CRI-O. In Kubernetes, the container runtime interfaces with the kubelet to manage containers as part of the orchestration of the cluster.

🍨Addons Components

Other components are pods and services that implement the functions required by the cluster.

1. DNS

DNS, or Domain Name System, is an important Kubernetes add-on that provides service discovery and name resolution within a cluster. It allows pods to communicate with each other, and with external services, by translating human-readable service names into IP addresses.

2. Web UI

Web UI add-ons in Kubernetes are additional components that provide graphical interfaces for managing and monitoring the cluster. These tools, such as the Kubernetes Dashboard, provide an easy-to-use web-based interface that allows administrators to interact with the cluster, view resource utilization, manage deployments, and troubleshoot issues.

3. Container Resource Monitoring

Container resource monitoring add-ons are tools or services used in Kubernetes to track and analyze container resource usage, such as CPU, memory, and disk I/O. These add-ons collect time-series metrics from running containers and provide insight into their performance and resource consumption, improving resource allocation, scaling decisions, and troubleshooting.

4. Cluster-level logging

Cluster-level logging is an add-on component in Kubernetes that collects and aggregates log data from all nodes and pods in the cluster. It helps centralize logs, making it easier to monitor, analyze, and troubleshoot applications and infrastructure issues.

References:

- KUBERNETES UNTUK PEMULA. https://github.com/ngoprek-kubernetes/buku-kubernetes-pemula.

- Cluster Architecture. https://kubernetes.io/docs/concepts/architecture.

I think that’s all for this post; next, maybe I will explain Kubernetes Objects and the other components.

Install Docker using Ansible

Setup

First, you need to install Ansible. Just follow the installation guide to install Ansible on your operating system.

After installation, create a new directory called ansible-docker.

Terminal window
$ mkdir ansible-docker && cd ansible-docker

Create a new file called ansible.cfg for the Ansible configuration settings, and define the inventory file in it.

[defaults]
inventory = hosts
host_key_checking = True
deprecation_warnings = False
collections = ansible.posix, community.general

Then create a new file called hosts, the inventory name defined in ansible.cfg.

Terminal window
[example-server]
0.0.0.0 ansible_user=root

NB: Don’t forget to change the IP Address and host name.

After setting up the Ansible configuration and inventory file, let’s create a YAML file called playbook.yml:

---
- name: Setup Docker on Ubuntu Server 22.04
  hosts: all
  become: true
  remote_user: root
  roles:
    - config
    - docker

Then create the roles directory:

  • Config: in this directory, create a directory called tasks. After that, create a YAML file called main.yml to run update/upgrade and install the required dependencies.
---
- name: Update&Upgrade
  ansible.builtin.apt:
    name: aptitude
    state: present
    update_cache: true

- name: Install dependencies
  ansible.builtin.apt:
    name:
      - net-tools
      - apt-transport-https
      - ca-certificates
      - curl
      - software-properties-common
      - python3-pip
      - virtualenv
      - python3-setuptools
      - gnupg-agent
      - autoconf
      - dpkg-dev
      - file
      - g++
      - gcc
      - libc-dev
      - make
      - pkg-config
      - re2c
      - wget
    state: present
    update_cache: true
  • Docker: in this directory, create two directories called tasks & templates.

In the tasks directory, create a new file called main.yml. This file contains the Docker installation, Docker Compose installation & private registry setup.

---
- name: Add Docker GPG apt Key
  ansible.builtin.apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add repository into sources list
  ansible.builtin.apt_repository:
    repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_lsb.codename }} stable
    state: present
    filename: docker

- name: Install Docker 23.0.1-1
  ansible.builtin.apt:
    name:
      - docker-ce=5:23.0.1-1~ubuntu.22.04~jammy
      - docker-ce-cli=5:23.0.1-1~ubuntu.22.04~jammy
      - containerd.io
    state: present
    update_cache: true

- name: Setup docker user
  ansible.builtin.user:
    name: docker
    groups: "docker"
    append: true

- name: Install Docker module for Python
  ansible.builtin.pip:
    name: docker

- name: Install Docker-Compose&Set Permission
  ansible.builtin.get_url:
    url: https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64
    dest: /usr/local/bin/docker-compose
    mode: '755'

- name: Create Docker-Compose symlink
  ansible.builtin.command:
    cmd: ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
    creates: /usr/bin/docker-compose

- name: Add private registry
  ansible.builtin.template:
    src: daemon.j2
    dest: /etc/docker/daemon.json
    mode: preserve

- name: Restart Docker
  ansible.builtin.service:
    name: docker
    state: restarted
    enabled: true

In templates, create a Jinja template named daemon.j2. This file contains the configuration for the private registry settings (optional).

{
  "insecure-registries" : ["http://0.0.0.0:5000"]
}

NB: Fill in the IP using your remote server’s private IP.
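After the playbook has run, you can check on the remote server that the registry setting was picked up; a quick sketch (the grep just narrows the docker info output):

Terminal window
$ docker info | grep -A 2 "Insecure Registries"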

After all the setup, your project directory should look like this:

Terminal window
$ tree
.
├── ansible.cfg
├── config
│   └── tasks
│       └── main.yml
├── docker
│   ├── tasks
│   │   └── main.yml
│   └── templates
│       └── daemon.j2
├── hosts
└── playbook.yml

Test & Run

Okay, now test your playbook.yml file using this command.

Terminal window
$ ansible-playbook --syntax-check playbook.yml

If you don’t have any errors, run the playbook using this command.

Terminal window
$ ansible-playbook -i hosts playbook.yml

Wait until it finishes.

Terminal window
____________________________________________
< PLAY [Setup Docker on Ubuntu Server 22.04] >
--------------------------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
________________________
< TASK [Gathering Facts] >
------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||

Conclusion

In this post, I showed you how to install a specific version of Docker using an Ansible playbook when you have one or more servers.

Thank you for reading this post. If you have suggestions or questions, please leave them below. Thanks!

NB: In this case, I just set the user as root, and I installed Docker on Ubuntu Server 22.04. For the full code, follow this link: ansible-docker.