ArgoCD is a GitOps tool with a straightforward but powerful objective: to declaratively deploy applications to Kubernetes by managing application resources directly from version control systems, such as Git repositories. Every commit to the repository represents a change, which ArgoCD can apply to the Kubernetes cluster either manually or automatically. This approach ensures that deployment processes are fully controlled through version-controlled files, fostering an explicit and auditable release process.
For example, releasing a new application version involves updating the image tag in the resource files and committing the changes to the repository. ArgoCD syncs with the repository and seamlessly deploys the new version to the cluster.
Since ArgoCD itself operates on Kubernetes, it is straightforward to set up and integrates seamlessly with lightweight Kubernetes distributions like K3s. In this tutorial, we will demonstrate how to configure a local Kubernetes cluster using K3D and deploy applications with ArgoCD, utilizing the argocd-example-apps repository as a practical example.
Prerequisites
Before we begin, ensure you have the following installed:
- Docker
- Kubectl
- K3D
- ArgoCD CLI
Step 1: Set Up a K3D Cluster
Create a new Kubernetes cluster using K3D:
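A minimal creation command looks like this (the cluster name argocd-cluster is just an example; pick any name you like):

```shell
k3d cluster create argocd-cluster
```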
Verify that your cluster is running:
Step 2: Install ArgoCD
Install ArgoCD in your K3D cluster:
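The standard installation from the ArgoCD documentation creates a dedicated namespace and applies the official manifests:

```shell
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```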
Check the status of ArgoCD pods:
Expose the ArgoCD API server locally, then try accessing the dashboard:
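A port-forward is the simplest way to expose the API server locally:

```shell
kubectl port-forward svc/argocd-server -n argocd 8080:443
```

The dashboard should then be reachable at https://localhost:8080 (accept the self-signed certificate warning).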
Step 3: Configure ArgoCD
Log in to ArgoCD
Retrieve the initial admin password:
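ArgoCD stores the initial admin password in a secret, which you can decode like this:

```shell
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo
```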
Log in using the admin username and the password above.
Connect a Git Repository
Clone the argocd-example-apps repository:
Specify the ArgoCD server address in your CLI configuration:
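Assuming the API server is port-forwarded on localhost:8080, log in with the CLI (--insecure skips TLS verification for the self-signed certificate):

```shell
argocd login localhost:8080 --username admin --insecure
```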
Create a new ArgoCD application using the repository:
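The guestbook app from the example repository makes a good first application:

```shell
argocd app create guestbook \
  --repo https://github.com/argoproj/argocd-example-apps.git \
  --path guestbook \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default
```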
Sync the application:
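Syncing applies the manifests from Git to the cluster:

```shell
argocd app sync guestbook
```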
Step 4: Verify the Deployment
Check that the application is deployed successfully:
Access the deployed application by exposing it via a NodePort or LoadBalancer:
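For a quick local check, a port-forward also works; the guestbook example ships a guestbook-ui service listening on port 80:

```shell
kubectl port-forward svc/guestbook-ui 8081:80
```

Then open http://localhost:8081 in your browser.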
Conclusion
In this tutorial, you’ve set up a local Kubernetes cluster using K3D and deployed applications with ArgoCD. This setup provides a simple and powerful way to practice GitOps workflows locally. By leveraging tools like ArgoCD, you can ensure your deployments are consistent, auditable, and declarative. Happy GitOps-ing!
K3D is a lightweight wrapper around k3s that allows you to run Kubernetes clusters inside Docker containers. While K3D is widely used for local development and testing, effective monitoring of services running on Kubernetes clusters is essential for debugging, performance tuning, and understanding resource usage.
In this blog, I will explore two popular monitoring tools for Kubernetes: Kubernetes Dashboard, the official web-based UI for Kubernetes, and Octant, a local, real-time, standalone dashboard developed by VMware. Both tools have unique strengths, and this guide will help you understand when to use one over the other.
Setting Up Kubernetes Dashboard On K3D
First, you need to create a cluster using k3d cluster create:
Next, deploy Kubernetes Dashboard:
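The official manifests can be applied directly (v2.7.0 is used here as an example; check the project for the current release):

```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
```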
Ensure the pods are running.
Then, create a service account and role binding. To access the dashboard, you need a service account with the proper permissions. Create a service account and cluster role binding using the following YAML:
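This is the standard admin-user setup from the Kubernetes Dashboard documentation; save it as, say, dashboard-admin.yaml (the filename is arbitrary):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
```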
Apply this configuration:
Then, retrieve the token for login using:
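On Kubernetes 1.24 and later, a short-lived token can be created on demand (older clusters stored it in an auto-generated secret instead):

```shell
kubectl -n kubernetes-dashboard create token admin-user
```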
Use kubectl proxy to access the dashboard:
Finally, open your browser and navigate to:
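The dashboard URL exposed through the proxy is:

```
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```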
Note: Don’t forget to copy and paste the token when prompted.
Setting Up Octant on K3D
First, you need to install Octant. You can install it using a package manager or download it directly from the official releases. For example, on macOS, you can use Homebrew:
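With Homebrew this is a one-liner:

```shell
brew install octant
```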
On Linux, download the appropriate binary and move it to a directory in your PATH:
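A sketch of the Linux install; v0.25.1 was the final release, but verify the exact asset name on the project's releases page:

```shell
wget https://github.com/vmware-archive/octant/releases/download/v0.25.1/octant_0.25.1_Linux-64bit.tar.gz
tar -xzf octant_0.25.1_Linux-64bit.tar.gz
sudo mv octant_0.25.1_Linux-64bit/octant /usr/local/bin/
```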
Then, to start Octant, simply run the binary. By default it serves its dashboard on http://127.0.0.1:7777 and opens your browser automatically:
Finally, you can see the dashboard:
Comparison: Kubernetes Dashboard vs Octant
| Feature | Kubernetes Dashboard | Octant |
| --- | --- | --- |
| Installation | Requires deployment on the cluster | Local installation |
| Access | Via web proxy | Localhost |
| Real-Time Updates | Partial (requires manual refresh) | Full real-time updates |
| Ease of Setup | Moderate (requires token and RBAC) | Easy (just run the binary) |
Conclusion
Both Kubernetes Dashboard and Octant offer valuable features for monitoring Kubernetes clusters in K3D. If you need a quick and easy way to monitor your local cluster with minimal setup, Octant is a great choice. On the other hand, if you want an experience closer to managing a production environment, Kubernetes Dashboard is the better option.
In this post, I’ll show you how to start with K3D, an awesome tool for running lightweight Kubernetes clusters using K3S on Docker. I hope this post will help you quickly set up and understand K3D. Let’s dive in!
What is K3S?
Before starting with K3D we need to know about K3S. Developed by Rancher Labs, K3S is a lightweight Kubernetes distribution designed for IoT and edge environments. It is a fully conformant Kubernetes distribution but optimized to work in resource-constrained settings by reducing its resource footprint and dependencies.
Key highlights of K3S include:
- Optimized for Edge: Ideal for small clusters and resource-limited environments.
- Built-In Tools: Networking (Flannel), ServiceLB Load-Balancer controller and Ingress (Traefik) are included, minimizing setup complexity.
- Compact Design: K3S simplifies Kubernetes by bundling everything into a single binary and removing unnecessary components like legacy APIs.
Now let’s dive into K3D.
What is K3D?
K3D acts as a wrapper for K3S, making it possible to run K3S clusters inside Docker containers. It provides a convenient way to manage these clusters, offering speed, simplicity, and scalability for local Kubernetes environments.
Here’s why K3D is popular:
- Ease of Use: Quickly spin up and tear down clusters with simple commands.
- Resource Efficiency: Run multiple clusters on a single machine without significant overhead.
- Development Focus: Perfect for local development, CI/CD pipelines, and testing.
Let’s move on to how you can set up K3D and start using it.
Requirements
Before starting with K3D, make sure you have installed the following prerequisites based on your operating system.
- Docker
Follow the Docker installation guide for your operating system. Alternatively, you can simplify the process with these commands:
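Docker's convenience script handles repository setup and installation; the usermod step lets you run Docker without sudo (log out and back in afterwards):

```shell
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
```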
- Kubectl
The Kubernetes command-line tool, kubectl, is required to interact with your K3D cluster. Install it by following the instructions on the official Kubernetes documentation. Or you can follow this step:
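On Linux, the official download method from the Kubernetes documentation looks like this:

```shell
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
```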
NB: I use Ubuntu 22.04 to install the requirements.
Create Your First Cluster
Basic
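In its simplest form, cluster creation is a single command:

```shell
k3d cluster create mycluster
```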
This command will create a cluster named mycluster with one control plane node.
Once the cluster is created, check its status using kubectl:
To ensure that the nodes in the cluster are active, run:
To list all the clusters created with K3D, use the following command:
To stop, start, or delete your cluster, use the following commands:
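The lifecycle commands all take the cluster name as an argument:

```shell
k3d cluster stop mycluster
k3d cluster start mycluster
k3d cluster delete mycluster
```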
If you want to start a cluster with extra server and worker nodes, then extend the creation command like this:
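The node counts below are illustrative; adjust them to your needs:

```shell
k3d cluster create multi-cluster --servers 2 --agents 2
```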
After creating the cluster, you can verify its status using these commands:
Bootstrapping Cluster
Bootstrapping a cluster with configuration files allows you to automate and customize the process of setting up a K3D cluster. By using a configuration file, you can easily specify cluster details such as node count, roles, ports, volumes, and more, making it easy to recreate or modify clusters.
Here’s an example of a basic cluster configuration file my-cluster.yaml that sets up a K3D cluster with multiple nodes:
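A sketch of such a config using k3d's v1alpha5 schema (the name, image tag, and port range follow the description below; adapt as needed):

```yaml
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: my-cluster
servers: 1
agents: 2
image: rancher/k3s:v1.30.4-k3s1
ports:
  - port: 30000-30100:30000-30100
    nodeFilters:
      - server:0
options:
  k3s:
    extraArgs:
      - arg: --disable=traefik
        nodeFilters:
          - server:*
```

Create the cluster from the file with `k3d cluster create --config my-cluster.yaml`.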
A K3D config to create a cluster named my-cluster with 1 server, 2 agents, K3S version v1.30.4-k3s1, host-to-server port mapping (30000-30100), and Traefik disabled on server nodes.
The result after creation:
Create Simple Deployment
Once your K3D cluster is up and running, you can deploy applications onto the cluster. A deployment in Kubernetes ensures that a specified number of pod replicas are running, and it manages updates to those pods.
Use the kubectl create deployment command to define and start a deployment. For example:
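Using nginx as an example image (any image works here):

```shell
kubectl create deployment nginx --image=nginx
```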
Check the deployment status using the kubectl get deployment command:
Expose the deployment:
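Assuming the deployment is named nginx, expose it as a LoadBalancer service (K3S's built-in ServiceLB assigns it an external IP):

```shell
kubectl expose deployment nginx --type=LoadBalancer --port=80
```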
Verify the Pod and Service:
Try to access using browser by using LoadBalancer EXTERNAL-IP:
Conclusion
K3D simplifies the process of running Kubernetes clusters with K3S on Docker, making it ideal for local development and testing. By setting up essential tools like Docker, kubectl, and K3D, you can easily create and manage clusters. You can deploy applications with just a few commands, expose them, and access them locally. K3D offers a flexible and lightweight solution for Kubernetes, allowing developers to experiment and work with clusters in an efficient way.
Thank you for taking the time to read this guide. I hope it was helpful in getting you started with K3D and Kubernetes!😀
We will create 2 VMs using Terraform, then deploy a Flask project and its database using Ansible.
🔨 Requirements
I used Ubuntu 22.04 LTS as the OS for this project. If you’re using a different OS, please make the necessary adjustments when installing the required dependencies.
The major prerequisite for this setup is the KVM hypervisor, so you need to install KVM on your system. If you use Ubuntu, you can follow these steps:
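On Ubuntu 22.04, the KVM stack installs from the standard repositories; adding your user to the libvirt group avoids needing sudo for every command:

```shell
sudo apt update
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virtinst
sudo usermod -aG libvirt $USER
```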
Execute the following command to make sure your processor supports virtualisation capabilities:
Install Terraform
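Terraform installs from HashiCorp's official apt repository:

```shell
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install -y terraform
```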
Verify installation:
Install Ansible
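On Ubuntu, the Ansible PPA provides a current release:

```shell
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install -y ansible
```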
Verify installation:
Create KVM
We will use the libvirt provider with Terraform to deploy KVM virtual machines.
Create main.tf, just specify the provider and version you want to use:
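A minimal provider block using the community dmacvicar/libvirt provider (the version pin is an example; use whatever recent release suits you):

```hcl
terraform {
  required_providers {
    libvirt = {
      source  = "dmacvicar/libvirt"
      version = "0.7.6"
    }
  }
}

provider "libvirt" {
  uri = "qemu:///system"
}
```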
Thereafter, run the terraform init command to initialize the environment:
Now create our variables.tf. This file defines inputs for the libvirt disk pool path, the Ubuntu 20.04 image URL used as the OS for the VMs, and a list of VM hostnames:
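A sketch of variables.tf matching that description (the pool path and hostnames are example values):

```hcl
variable "libvirt_disk_path" {
  description = "Path for the libvirt storage pool"
  default     = "/var/lib/libvirt/pools/ubuntu"
}

variable "ubuntu_20_img_url" {
  description = "Ubuntu 20.04 cloud image URL used as the VM OS"
  default     = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img"
}

variable "vm_hostnames" {
  description = "List of VM hostnames"
  type        = list(string)
  default     = ["vm1", "vm2"]
}
```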
Let’s update our main.tf:
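A sketch of the updated main.tf; it follows the structure described below (base image, per-VM clones, cloud-init, 1GB memory, SPICE graphics), though attribute details may need adjusting to your provider version:

```hcl
resource "libvirt_pool" "ubuntu" {
  name = "ubuntu"
  type = "dir"
  path = var.libvirt_disk_path
}

# Download the Ubuntu 20.04 base image once
resource "libvirt_volume" "ubuntu_base" {
  name   = "ubuntu-base.qcow2"
  pool   = libvirt_pool.ubuntu.name
  source = var.ubuntu_20_img_url
  format = "qcow2"
}

# Clone the base image for each VM
resource "libvirt_volume" "vm_disk" {
  count          = length(var.vm_hostnames)
  name           = "${var.vm_hostnames[count.index]}.qcow2"
  pool           = libvirt_pool.ubuntu.name
  base_volume_id = libvirt_volume.ubuntu_base.id
  format         = "qcow2"
}

# Cloud-init disk carrying user and network settings
resource "libvirt_cloudinit_disk" "commoninit" {
  count          = length(var.vm_hostnames)
  name           = "${var.vm_hostnames[count.index]}-init.iso"
  pool           = libvirt_pool.ubuntu.name
  user_data      = file("${path.module}/config/cloud_init.yml")
  network_config = file("${path.module}/config/network_config.yml")
}

resource "libvirt_domain" "vm" {
  count  = length(var.vm_hostnames)
  name   = var.vm_hostnames[count.index]
  memory = "1024"
  vcpu   = 1

  cloudinit = libvirt_cloudinit_disk.commoninit[count.index].id

  network_interface {
    network_name   = "default"
    wait_for_lease = true
  }

  disk {
    volume_id = libvirt_volume.vm_disk[count.index].id
  }

  console {
    type        = "pty"
    target_type = "serial"
    target_port = "0"
  }

  graphics {
    type        = "spice"
    listen_type = "address"
    autoport    = true
  }
}
```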
The script provisions multiple KVM VMs using the libvirt provider. It downloads an Ubuntu 20.04 base image, clones it for each VM, configures cloud-init for user and network settings, and deploys VMs with the specified hostnames, 1GB of memory, and SPICE graphics. The setup adapts dynamically based on the number of hostnames provided in var.vm_hostnames.
As I've mentioned, I'm using cloud-init, so let's set up the network config and cloud-init files under the config directory:
Then create our config/cloud_init.yml, just make sure that you configure your public ssh key for ssh access in the config:
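A minimal config/cloud_init.yml; the key shown is a placeholder, so substitute your own public key:

```yaml
#cloud-config
ssh_pwauth: false
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... # replace with your public SSH key
```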
And then network config, in config/network_config.yml:
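A simple DHCP-based config/network_config.yml (the interface name ens3 is typical for these cloud images but may differ):

```yaml
version: 2
ethernets:
  ens3:
    dhcp4: true
```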
Our project structure should look like this:
Now run a plan, to see what will be done:
And run terraform apply to run our deployment:
Verify VM creation using virsh command:
Get instances IP address:
Try to access the VM using the ubuntu user:
Create Ansible Playbook
Now let's create the Ansible Playbook to deploy Flask & PostgreSQL on Docker. First, you need to create an ansible directory and an ansible.cfg file:
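A minimal ansible.cfg; the settings here are a reasonable starting point for this lab setup:

```ini
[defaults]
inventory = hosts
host_key_checking = False
remote_user = ubuntu
```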
Then create an inventory file called hosts:
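The IPs below are placeholders; use the addresses your VMs actually received from the libvirt DHCP server:

```ini
[vm1]
192.168.122.10

[vm2]
192.168.122.11
```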
Check connectivity to our VMs using the Ansible ping module:
Now create playbook.yml and the roles; this playbook will install and configure Docker, Flask and PostgreSQL:
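A sketch of playbook.yml matching the layout used in this post (Flask on VM1, PostgreSQL on VM2, Docker on both):

```yaml
- hosts: vm1
  become: true
  roles:
    - docker
    - flask

- hosts: vm2
  become: true
  roles:
    - docker
    - psql
```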
Playbook to install Docker
Now create a new directory called roles/docker:
Create a new directory inside docker called tasks, then create a new file main.yml. This file will install Docker & Docker Compose:
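A sketch of roles/docker/tasks/main.yml using Docker's official apt repository; the docker-compose-plugin package provides the `docker compose` command:

```yaml
- name: Install prerequisites
  apt:
    name:
      - ca-certificates
      - curl
      - gnupg
    state: present
    update_cache: true

- name: Add Docker GPG key
  apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add Docker repository
  apt_repository:
    repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    state: present

- name: Install Docker and the Compose plugin
  apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-compose-plugin
    state: present
    update_cache: true
```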
Playbook to install and configure postgresql
Then create a new directory called psql, with subdirectories called vars, templates & tasks:
After that, in vars, create main.yml. These variables set the username, password, etc.:
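The values below are illustrative; in a real setup, keep credentials in Ansible Vault rather than plain text:

```yaml
postgres_user: flask
postgres_password: secret123   # example only; use ansible-vault in practice
postgres_db: flaskdb
postgres_port: 5432
```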
Next, we will create jinja file called docker-compose.yml.j2. With this file we will create postgresql container:
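A sketch of docker-compose.yml.j2; the Jinja placeholders pull in the vars defined for this role (the postgres:15 tag is my choice, not fixed):

```yaml
services:
  db:
    image: postgres:15
    container_name: postgres
    restart: always
    environment:
      POSTGRES_USER: "{{ postgres_user }}"
      POSTGRES_PASSWORD: "{{ postgres_password }}"
      POSTGRES_DB: "{{ postgres_db }}"
    ports:
      - "{{ postgres_port }}:5432"
    volumes:
      - pg_data:/var/lib/postgresql/data

volumes:
  pg_data:
```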
Next, create main.yml in tasks. This will copy docker-compose.yml.j2 to the VM and run it using Docker Compose:
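A sketch of the tasks file; the /opt/psql path is an arbitrary choice:

```yaml
- name: Create the app directory
  file:
    path: /opt/psql
    state: directory

- name: Render the Docker Compose file
  template:
    src: docker-compose.yml.j2
    dest: /opt/psql/docker-compose.yml

- name: Start the PostgreSQL container
  command: docker compose up -d
  args:
    chdir: /opt/psql
```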
Playbook to deploy Flask App
First, you need to create a directory called flask, then create the same subdirectories again (vars, templates & tasks):
Next, add main.yml to vars. This file refers to the PostgreSQL variables from before, with the addition of the IP address of VM2 (the database VM):
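A sketch of the Flask role's vars; the credentials must match those used for the PostgreSQL role, and the db_host IP is a placeholder for your VM2 address:

```yaml
postgres_user: flask
postgres_password: secret123
postgres_db: flaskdb
db_host: 192.168.122.11   # IP address of VM2 (the database VM)
```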
Next, create config.py.j2 in templates. This file will replace the old config file from the Flask project:
Next, create docker-compose.yml.j2 in templates. With this file we will create a container using Docker Compose:
Next, create main.yml in tasks. This file will clone the Flask project, add the compose file, replace config.py, and create a new container using Docker Compose:
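A sketch of the Flask tasks; the repository URL and destination paths are placeholders for your own project:

```yaml
- name: Clone the Flask project
  git:
    repo: https://github.com/your-user/your-flask-app.git   # placeholder; use your repo
    dest: /opt/flask-app

- name: Render the Docker Compose file
  template:
    src: docker-compose.yml.j2
    dest: /opt/flask-app/docker-compose.yml

- name: Replace the Flask config
  template:
    src: config.py.j2
    dest: /opt/flask-app/config.py

- name: Start the Flask container
  command: docker compose up -d
  args:
    chdir: /opt/flask-app
```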
Our project structure should look like this:
Run Playbook and testing
Finally, let’s run ansible-playbook to deploy PostgreSQL and Flask:
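With the inventory configured in ansible.cfg, a single command runs everything:

```shell
ansible-playbook playbook.yml
```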
After it completes, make sure there are no errors. You should then see two containers created: Flask on VM1 and PostgreSQL on VM2:
Try to access the app using a browser by navigating to http://<vm1_ip>:5000:
Try to add a new task and then the data will be added to the database:
Conclusion
Finally, we have created 2 VMs and deployed the Flask project with its database.
Thank you for reading this article. Feel free to leave a comment if you have any questions, suggestions, or feedback.
First, you need to install Ansible. Just follow the official installation guide for your operating system.
After installation, create a new directory called ansible-docker.
Create a new file called ansible.cfg for the Ansible configuration settings, and define the inventory file in it.
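A minimal ansible.cfg for this setup:

```ini
[defaults]
inventory = hosts
host_key_checking = False
```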
Then create a new file called hosts, matching the name defined in ansible.cfg.
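An example inventory; the IP and hostname are placeholders:

```ini
[servers]
server1 ansible_host=192.168.1.10 ansible_user=root
```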
NB: Don’t forget to change the IP Address and host name.
After setting up the Ansible configuration and inventory file, let's create a YAML file called playbook.yml:
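A sketch of playbook.yml tying together the two roles created below:

```yaml
- hosts: servers
  become: true
  roles:
    - config
    - docker
```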
Then create roles directory:
Config: in this directory, create a subdirectory called tasks. Then create a YAML file called main.yml to run apt update and upgrade and install the required dependencies.
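A sketch of roles/config/tasks/main.yml; the dependency list is what Docker's apt repository setup typically needs:

```yaml
- name: Update and upgrade apt packages
  apt:
    update_cache: true
    upgrade: dist

- name: Install dependencies
  apt:
    name:
      - ca-certificates
      - curl
      - gnupg
      - lsb-release
    state: present
```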
Docker: in this directory, create two subdirectories called tasks & templates.
In the tasks directory, create a new file called main.yml. This file contains the Docker installation, Docker Compose installation & private registry setup.
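A sketch of roles/docker/tasks/main.yml; the pinned version string is an example (list available versions with `apt-cache madison docker-ce` and substitute your own):

```yaml
- name: Add Docker GPG key
  apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add Docker repository
  apt_repository:
    repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
    state: present

- name: Install a specific Docker version
  apt:
    name:
      - docker-ce=5:24.0.7-1~ubuntu.22.04~jammy   # example pin; adjust to the version you need
      - docker-ce-cli=5:24.0.7-1~ubuntu.22.04~jammy
      - containerd.io
      - docker-compose-plugin
    state: present
    update_cache: true

- name: Configure the private registry (optional)
  template:
    src: daemon.j2
    dest: /etc/docker/daemon.json

- name: Restart Docker
  service:
    name: docker
    state: restarted
```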
In templates, create a Jinja template file named daemon.j2. This file contains the configuration for the private registry settings (optional).
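A minimal daemon.j2 allowing pulls from an insecure private registry; the address is a placeholder:

```json
{
  "insecure-registries": ["<your-server-private-ip>:5000"]
}
```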
NB: Fill in the IP field using your remote server's private IP.
After all the setup, your project directory should look like this:
Test & Run
Okay, now test your playbook.yml file using this command:
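Ansible ships a built-in syntax check:

```shell
ansible-playbook playbook.yml --syntax-check
```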
If you don't have any errors, run the playbook using this command:
Wait until it finishes.
Conclusion
In this post, I showed you how to install a specific version of Docker using an Ansible Playbook across one or more servers.
Thank you for reading this post. If you have suggestions or questions, please leave them below.
NB: In this case, I just set the user as root, and I installed Docker on Ubuntu Server 22.04. For the full code, follow this link: ansible-docker.