Kubernetes 104: Controllers

Let’s start again. Now I’m going to talk about Controllers in Kubernetes. In Kubernetes, a Controller is like a cluster’s brain, constantly working to ensure the system maintains its desired state. By monitoring objects such as Pods, Deployments, or DaemonSets, Controllers automate tasks and handle changes dynamically. Understanding Controllers is key to grasping how Kubernetes orchestrates and manages workloads seamlessly.

Common Kubernetes Controllers

Here are some of the most commonly used Controllers in Kubernetes:

1. Deployment

Deployments manage updates to Pods and ReplicaSets declaratively by transitioning the current state to the desired state step-by-step. They can create new ReplicaSets, adopt existing resources, or remove old Deployments.

Deployment Diagram

Common uses for Deployments include:

a. Releasing a ReplicaSet and monitoring its status.

b. Updating Pod specifications to declare a new desired state.

c. Scaling up to handle increased load.

d. Rolling back to a previous version if the current state is unstable.

e. Cleaning up unused ReplicaSets.
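To make the flow concrete, here is a minimal Deployment manifest. This is only a sketch: the name, labels, and image are placeholders you would replace with your own.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                    # example name
spec:
  replicas: 3                  # desired number of Pods
  selector:
    matchLabels:
      app: web
  template:                    # Pod template managed through an underlying ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27    # placeholder; changing this tag triggers a rolling update

Applying a change to the Pod template (for example, a new image tag) declares a new desired state, and the Deployment rolls it out step-by-step via a new ReplicaSet.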

2. ReplicaSet

ReplicaSets (RS) function as controllers in Kubernetes, responsible for maintaining a consistent number of running Pods for a specific workload. Acting as the mechanism behind the scenes, the ReplicaSet controller continuously monitors the state of the Pods and ensures the desired replica count is maintained. If a Pod crashes, is evicted, or fails for any reason, the ReplicaSet controller swiftly creates new Pods to restore the desired state, ensuring resilience and uninterrupted service.

In practical use, ReplicaSets are not typically managed directly by users. Instead, they are controlled through Deployments, which leverage the ReplicaSet controller while providing additional features such as rolling updates, rollbacks, and declarative workload management. This abstraction allows users to benefit from the reliability and scalability of ReplicaSet controllers without dealing with their complexities directly.
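For illustration, this is what a standalone ReplicaSet manifest might look like; in practice a Deployment generates one like it for you (names and image are placeholders):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3             # the controller keeps exactly this many Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # placeholder image

Delete one of the three Pods and the ReplicaSet controller immediately creates a replacement to restore the desired count.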

ReplicaSet Diagram

3. DaemonSet

A DaemonSet (DS) ensures that all (or a selected subset of) nodes in a cluster run a copy of a particular Pod. When a new node is added, the DaemonSet automatically creates a Pod on that node. Conversely, when a node is removed, the associated Pod is deleted by the garbage collector. Deleting the DaemonSet removes all Pods it created.

DaemonSet Diagram

Common uses of DaemonSet:

1. Running storage daemons across nodes, such as Glusterd or Ceph.

2. Running log collection daemons across nodes, such as Fluentd or Logstash.

3. Running node monitoring daemons, such as Prometheus Node Exporter, Flowmill, or New Relic Agent.

DaemonSet is ideal for tasks that require processes to run on every node, such as log collection, monitoring, or providing local volumes.
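As a sketch of the log-collection use case, a DaemonSet might look like this; the Fluentd image and mount paths are illustrative, not a recommended production setup:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16   # placeholder log-collector image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log              # read each node's logs from the host

Note that there is no replicas field: the number of Pods is determined by the number of matching nodes.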

4. StatefulSet

A StatefulSet is a Kubernetes controller used for managing stateful applications. Unlike Deployments, which focus on stateless workloads, StatefulSet is designed for applications that require persistent identity and stable storage. It ensures that each Pod it manages has a unique, stable network identity and maintains a strict order during creation, scaling, or deletion.

Key Features of StatefulSet:

1. Stable Network Identity: Each Pod gets a unique and persistent DNS name (e.g., pod-0, pod-1), which remains consistent even after rescheduling.

2. Ordered Pod Deployment and Scaling: Pods are created and scaled in sequential order; for example, pod-0 must be running before pod-1 is created, and deletion proceeds in the reverse order.

3. Persistent Storage: StatefulSet works closely with PersistentVolumeClaim (PVC). Each Pod gets a dedicated PersistentVolume that remains intact even after the Pod is deleted or rescheduled.
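A minimal sketch tying these features together (the headless Service name, image, and storage size are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless    # headless Service that backs the stable DNS names
  replicas: 3                 # Pods are created in order: db-0, db-1, db-2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16          # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:               # one dedicated PVC per Pod, kept across rescheduling
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi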

StatefulSet Diagram

Common Use Cases:

1. Databases like MySQL, PostgreSQL, and MongoDB, where stable storage and network identity are critical.

2. Distributed Systems like Cassandra, Kafka, or ZooKeeper, where maintaining order and state is essential.

3. Caching Systems like Redis, requiring predictable storage and replication across nodes.

5. Job

A Job is a Kubernetes controller designed to manage tasks that run to completion. Unlike Deployments or StatefulSets, which manage long-running or stateful applications, Jobs are used for short-lived workloads that need to be executed only once or a specific number of times.

Key Features of a Job:

1. Ensures Completion: A Job creates one or more Pods to perform a task and ensures that the task is completed successfully. If a Pod fails, the Job controller automatically creates a replacement until the task succeeds.

2. Parallelism: Jobs support parallel execution, allowing multiple Pods to run concurrently, controlled by the parallelism and completions parameters.

3. Retries: Jobs retry failed Pods until the task is successful or a specified backoff limit is reached.
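These parameters map directly onto the manifest. A sketch with a placeholder image and command:

apiVersion: batch/v1
kind: Job
metadata:
  name: batch-task
spec:
  completions: 5       # the task must finish successfully 5 times in total
  parallelism: 2       # at most 2 Pods run concurrently
  backoffLimit: 4      # stop retrying after 4 failed attempts
  template:
    spec:
      restartPolicy: Never     # Jobs require Never or OnFailure
      containers:
        - name: task
          image: busybox:1.36  # placeholder image
          command: ["sh", "-c", "echo processing one work item && sleep 5"]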

Common Use Cases:

- Batch Processing: Data transformation, ETL pipelines, or video encoding.

- Database Operations: Running migrations, backups, or clean-up scripts.

- One-Time Tasks: Performing diagnostics, generating reports, or initializing configurations.

Thank you for reading this post.😀

References:

- KUBERNETES UNTUK PEMULA. https://github.com/ngoprek-kubernetes/buku-kubernetes-pemula.

- Kubernetes Documentation: Jobs. https://kubernetes.io/docs/concepts/workloads/controllers/job.

- Kubernetes Controllers. https://www.uffizzi.com/kubernetes-multi-tenancy/kubernetes-controllers.

- Kubernetes: DaemonSet. https://opstree.com/blog/2021/12/07/kubernetes-daemonset.

- Kubernetes StatefulSet vs Kubernetes Deployment. https://devtron.ai/blog/deployment-vs-statefulset.

Kubernetes 103: Object

Let’s start again. Now I’m going to talk about objects in Kubernetes. In Kubernetes, an object represents a record of intent, where you declare what you want the cluster to do. The Kubernetes control plane works continuously to ensure that the current state of your system matches the desired state described by these objects.

Common Kubernetes Objects

Here are some of the most commonly used objects in Kubernetes:

1. Pod

A Pod is a group of one or more containers that share storage, networking, and a defined runtime configuration. Containers within a pod are scheduled and deployed together, operating in the same execution context.

Containers on Pod

A pod acts as a logical host for tightly connected containers, enabling seamless communication via localhost and standard IPC mechanisms like SystemV semaphores or POSIX shared memory. Containers in different pods, however, have unique IP addresses and communicate using pod IPs, ensuring clear isolation while supporting inter-pod networking.

Like containers in an application, pods are considered relatively ephemeral entities. Their lifecycle involves creation, assignment of a unique UID, scheduling to a node, and remaining there until they are stopped, restarted, or deleted. You can see the pod life cycle from the image below.

Pod Lifecycle

If a node fails, all pods on that node are marked for deletion after a specific timeout. A given pod (identified by its UID) is never rescheduled to a new node; instead, it is replaced by a new, near-identical pod with a different UID.
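As a sketch, here is a minimal Pod manifest with two containers sharing the same execution context (names and images are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web
      image: nginx:1.27     # placeholder image
      ports:
        - containerPort: 80
    - name: sidecar
      image: busybox:1.36   # placeholder image
      # talks to the web container over localhost because both share the Pod's network namespace
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]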

2. Service

A Service in Kubernetes is an abstraction that defines a logical set of Pods and the policies for accessing them, often referred to as microservices. You can see the service balancing traffic across multiple pods in the image below.

Kubernetes Service

For example, imagine a backend that provides image-processing functionality with three replicas. These replicas are identical, so the frontend doesn’t need to know which backend instance it uses. Even if the backend Pods change over time, the frontend doesn’t need to manage or track the list of current backends.

A Service decouples this complexity by providing a stable interface, allowing clients to interact with Pods without worrying about their lifecycle or specific details.

For applications running on Kubernetes, the platform provides a simple API endpoint that is continuously updated to reflect the current state of the Pods within a service.

For non-native applications, Kubernetes offers a virtual IP-based bridge that routes traffic to the backend Pods of the service. This ensures seamless integration and reliable access, regardless of changes in the underlying Pods.
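Returning to the image-processing example, a Service for the backend could be sketched like this (labels and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: image-backend
spec:
  selector:
    app: image-backend   # matches the labels on the three backend Pods
  ports:
    - port: 80           # the stable port clients connect to
      targetPort: 8080   # the port the backend containers actually listen on

As backend Pods come and go, the Service's endpoints are updated automatically while clients keep using the same name and port.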

3. Volume

A Kubernetes Volume provides a way for containers to access storage beyond their ephemeral lifecycle. Unlike a container’s local filesystem, which is destroyed when the container stops, a volume ensures data persists as long as the Pod using it exists. Kubernetes supports multiple volume types, such as emptyDir, hostPath, configMap, secret, and network-based storage like NFS or cloud provider-specific volumes.

Volumes can be shared among containers within the same Pod, enabling them to collaborate on data. However, when a Pod is deleted, the associated volume typically goes with it—unless you’re using Persistent Volumes (PV).

Persistent Volumes (PV) and Persistent Volume Claims (PVC)

A PV is a piece of storage provisioned in the cluster, either statically or dynamically. It is an abstract representation of storage resources like disks, network storage, or cloud storage, and exists independently of any specific Pod.

A PVC is a request for storage by a Pod. Users specify the required size, access modes (e.g., ReadWriteOnce, ReadOnlyMany), and storage class in the PVC. The cluster automatically binds the PVC to a suitable PV, providing the requested storage.

There are two methods to provision Persistent Volumes (PVs):

1. Static

- The cluster administrator manually creates PVs by defining them in YAML manifests.

- These PVs are available for binding with any PVC that matches their configuration.

2. Dynamic

- Kubernetes automatically provisions PVs based on the PVCs submitted by users.

- The cluster uses a StorageClass to determine the storage backend and provision the appropriate PV.

- This method simplifies administration by eliminating the need for pre-created PVs.
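A sketch of the dynamic method: the PVC below requests storage from a StorageClass (the class name standard is a placeholder that varies per cluster), and the cluster provisions and binds a matching PV automatically:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce            # mountable read-write by a single node
  storageClassName: standard   # placeholder StorageClass name
  resources:
    requests:
      storage: 5Gi

A Pod then uses it by referencing claimName: data-claim in its volumes section.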

4. Namespace

A namespace in Kubernetes is a way to divide a cluster into virtual sub-clusters, providing logical isolation between resources. It is useful for organizing resources in environments with multiple teams, projects, or stages (e.g., development, staging, production).

By default, Kubernetes provides namespaces like default, kube-system, and kube-public. Users can create custom namespaces to segregate workloads and manage resource quotas, access controls, and policies independently for each namespace.

Namespaces are ideal for multi-tenant environments or to avoid naming collisions in large clusters. However, they don’t provide hard security boundaries and are primarily a tool for organizational purposes.
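Creating a namespace takes only a few lines of YAML (the name staging is just an example):

apiVersion: v1
kind: Namespace
metadata:
  name: staging

After applying it, you can target the namespace explicitly, for example with kubectl get pods -n staging.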

References:

- KUBERNETES UNTUK PEMULA. https://github.com/ngoprek-kubernetes/buku-kubernetes-pemula.

- Kubernetes Objects Guide. https://phoenixnap.com/kb/kubernetes-objects.

K3D: Getting Started with ArgoCD

Intro

ArgoCD is a GitOps tool with a straightforward but powerful objective: to declaratively deploy applications to Kubernetes by managing application resources directly from version control systems, such as Git repositories. Every commit to the repository represents a change, which ArgoCD can apply to the Kubernetes cluster either manually or automatically. This approach ensures that deployment processes are fully controlled through version-controlled files, fostering an explicit and auditable release process.

For example, releasing a new application version involves updating the image tag in the resource files and committing the changes to the repository. ArgoCD syncs with the repository and seamlessly deploys the new version to the cluster.
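For instance, the whole release might amount to changing a single line in a manifest tracked in the repository; the image name and tag below are hypothetical:

spec:
  template:
    spec:
      containers:
        - name: web
          # bumping this tag (e.g., from 1.0.3 to 1.0.4) and committing is the entire release
          image: registry.example.com/web:1.0.4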

Since ArgoCD itself operates on Kubernetes, it is straightforward to set up and integrates seamlessly with lightweight Kubernetes distributions like K3s. In this tutorial, we will demonstrate how to configure a local Kubernetes cluster using K3D and deploy applications with ArgoCD, utilizing the argocd-example-apps repository as a practical example.

Prerequisites

Before we begin, ensure you have the following installed:

- Docker

- Kubectl

- K3D

- ArgoCD CLI

Step 1: Set Up a K3D Cluster

Create a new Kubernetes cluster using K3D:

Terminal window
$ k3d cluster create argocluster --agents 2
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-argocluster'
INFO[0000] Created image volume k3d-argocluster-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-argocluster-tools'
INFO[0001] Creating node 'k3d-argocluster-server-0'
INFO[0001] Creating node 'k3d-argocluster-agent-0'
INFO[0001] Creating node 'k3d-argocluster-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-argocluster-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.18.0.1 address
INFO[0001] Starting cluster 'argocluster'
INFO[0001] Starting servers...
INFO[0002] Starting node 'k3d-argocluster-server-0'
INFO[0009] Starting agents...
INFO[0009] Starting node 'k3d-argocluster-agent-0'
INFO[0009] Starting node 'k3d-argocluster-agent-1'
INFO[0017] Starting helpers...
INFO[0017] Starting node 'k3d-argocluster-serverlb'
INFO[0024] Injecting records for hostAliases (incl. host.k3d.internal) and for 4 network members into CoreDNS configmap...
INFO[0026] Cluster 'argocluster' created successfully!
INFO[0026] You can now use it like this:
kubectl cluster-info

Verify that your cluster is running:

Terminal window
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-argocluster-agent-0 Ready <none> 63s v1.30.4+k3s1
k3d-argocluster-agent-1 Ready <none> 62s v1.30.4+k3s1
k3d-argocluster-server-0 Ready control-plane,master 68s v1.30.4+k3s1

Step 2: Install ArgoCD

Install ArgoCD in your K3D cluster:

Terminal window
$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
...

Check the status of ArgoCD pods:

Terminal window
$ kubectl get pods -n argocd
NAME READY STATUS RESTARTS AGE
argocd-application-controller-0 1/1 Running 0 29m
argocd-applicationset-controller-684cd5f5cc-78cc8 1/1 Running 0 29m
argocd-dex-server-77c55fb54f-bw956 1/1 Running 0 29m
argocd-notifications-controller-69cd888b56-g7z4r 1/1 Running 0 29m
argocd-redis-55c76cb574-72mdh 1/1 Running 0 29m
argocd-repo-server-584d45d88f-2mdlc 1/1 Running 0 29m
argocd-server-8667f8577-prx6s 1/1 Running 0 29m

Expose the ArgoCD API server locally, then try accessing the dashboard:

Terminal window
$ kubectl port-forward svc/argocd-server -n argocd 8080:443

Step 3: Configure ArgoCD

Log in to ArgoCD

Retrieve the initial admin password:

Terminal window
$ kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d

Log in using the admin username and the password above.

Logging in to the ArgoCD Dashboard

Connect a Git Repository

Just clone the argocd-example-apps repository:

Terminal window
$ git clone https://github.com/argoproj/argocd-example-apps.git

Specify the ArgoCD server address in your CLI configuration:

Terminal window
$ argocd login localhost:8080
WARNING: server certificate had error: tls: failed to verify certificate: x509: certificate signed by unknown authority. Proceed insecurely (y/n)? y
Username: admin
Password:
'admin:login' logged in successfully
Context 'localhost:8080' updated

Create a new ArgoCD application using the repository:

Terminal window
$ argocd app create example-app \
--repo https://github.com/argoproj/argocd-example-apps.git \
--path ./guestbook \
--dest-server https://kubernetes.default.svc \
--dest-namespace default
application 'example-app' created
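Alternatively, the same application can be declared as an Application manifest and applied with kubectl, keeping the ArgoCD configuration itself under version control. This sketch is the declarative equivalent of the CLI command above:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    path: guestbook
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: default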

Sync the application:

Terminal window
$ argocd app sync example-app
TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
2025-01-10T11:31:11+07:00 Service default guestbook-ui OutOfSync Missing
2025-01-10T11:31:11+07:00 apps Deployment default guestbook-ui OutOfSync Missing
2025-01-10T11:31:11+07:00 Service default guestbook-ui Synced Healthy
2025-01-10T11:31:11+07:00 Service default guestbook-ui Synced Healthy service/guestbook-ui created
2025-01-10T11:31:11+07:00 apps Deployment default guestbook-ui OutOfSync Missing deployment.apps/guestbook-ui created
2025-01-10T11:31:11+07:00 apps Deployment default guestbook-ui Synced Progressing deployment.apps/guestbook-ui created
...

Step 4: Verify the Deployment

Check that the application is deployed successfully:

Terminal window
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
guestbook-ui-649789b49c-zwjt8 1/1 Running 0 5m36s

Access the deployed application by forwarding a local port to its Service:

Terminal window
$ kubectl port-forward svc/guestbook-ui 8081:80
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80

Guestbook UI

Dashboard App List

Dashboard App

Conclusion

In this tutorial, you’ve set up a local Kubernetes cluster using K3D and deployed applications with ArgoCD. This setup provides a simple and powerful way to practice GitOps workflows locally. By leveraging tools like ArgoCD, you can ensure your deployments are consistent, auditable, and declarative. Happy GitOps-ing!

References

- GitOps on a Laptop with K3D and ArgoCD. https://www.sokube.io/en/blog/gitops-on-a-laptop-with-k3d-and-argocd-en.

- Take Argo CD for a spin with K3s and k3d. https://www.bekk.christmas/post/2020/13/take-argo-cd-for-a-spin-with-k3s-and-k3d.

- Application Deploy to Kubernetes with Argo CD and K3d. https://yashguptaa.medium.com/application-deploy-to-kubernetes-with-argo-cd-and-k3d-8e29cf4f83ee.

K3D: Monitoring Your Service using Kubernetes Dashboard or Octant

K3D is a lightweight wrapper around k3s that allows you to run Kubernetes clusters inside Docker containers. While K3D is widely used for local development and testing, effective monitoring of services running on Kubernetes clusters is essential for debugging, performance tuning, and understanding resource usage.

In this blog, I will explore two popular monitoring tools for Kubernetes: Kubernetes Dashboard, the official web-based UI for Kubernetes, and Octant, a local, real-time, standalone dashboard developed by VMware. Both tools have unique strengths, and this guide will help you understand when to use one over the other.

Setting Up Kubernetes Dashboard On K3D

First, you need to create a cluster using k3d cluster create:

Terminal window
$ k3d cluster create dashboard --servers 1 --agents 2
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-dashboard'
INFO[0000] Created image volume k3d-dashboard-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-dashboard-tools'
INFO[0001] Creating node 'k3d-dashboard-server-0'
INFO[0001] Creating node 'k3d-dashboard-agent-0'
INFO[0001] Creating node 'k3d-dashboard-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-dashboard-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.18.0.1 address
INFO[0001] Starting cluster 'dashboard'
INFO[0001] Starting servers...
INFO[0001] Starting node 'k3d-dashboard-server-0'
INFO[0008] Starting agents...
INFO[0008] Starting node 'k3d-dashboard-agent-0'
INFO[0008] Starting node 'k3d-dashboard-agent-1'
INFO[0015] Starting helpers...
INFO[0016] Starting node 'k3d-dashboard-serverlb'
INFO[0022] Injecting records for hostAliases (incl. host.k3d.internal) and for 4 network members into CoreDNS configmap...
INFO[0024] Cluster 'dashboard' created successfully!
INFO[0024] You can now use it like this:
kubectl cluster-info

Next, deploy Kubernetes Dashboard:

Terminal window
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Ensure the pods are running.

Terminal window
$ kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-795895d745-kcbkw 1/1 Running 0 10m
kubernetes-dashboard-56cf4b97c5-fg92n 1/1 Running 0 10m

Then, create a service account and role binding. To access the dashboard, you need a service account with the proper permissions. Create the service account and cluster role binding using the following YAML:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply this configuration:

Terminal window
$ kubectl apply -f admin-user.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Then, retrieve the token for login:

Terminal window
$ kubectl -n kubernetes-dashboard create token admin-user

Use kubectl proxy to access the dashboard:

Terminal window
$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Finally, open your browser and navigate to:

Terminal window
http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

Login to Kubernetes Dashboard

See the services

Note: Don’t forget to copy and paste the token when prompted.

Setting Up Octant on K3D

First, you need to install Octant. You can install it using a package manager or download it directly from the official releases. For example, on macOS, you can use Homebrew:

Terminal window
$ brew install octant

Alternatively, on Linux, download the appropriate binary from the releases page and extract it:

Terminal window
$ wget https://github.com/vmware-tanzu/octant/releases/download/v0.25.1/octant_0.25.1_Linux-64bit.tar.gz
$ tar -xvzf octant_0.25.1_Linux-64bit.tar.gz && mv octant_0.25.1_Linux-64bit octant
$ rm octant_0.25.1_Linux-64bit.tar.gz

Then, to start Octant, simply run the binary:

Terminal window
$ cd octant
$ ./octant

Finally, you can see the dashboard:

Octant Dashboard

Comparison: Kubernetes Dashboard vs Octant

Feature | Kubernetes Dashboard | Octant
Installation | Requires deployment on the cluster | Local installation
Access | Via web proxy | Localhost
Real-Time Updates | Partial (requires manual refresh) | Full real-time updates
Ease of Setup | Moderate (requires token and RBAC) | Easy (just run the binary)

Conclusion

Both Kubernetes Dashboard and Octant offer valuable features for monitoring Kubernetes clusters in K3D. If you need a quick and easy way to monitor your local cluster with minimal setup, Octant is a great choice. On the other hand, if you want an experience closer to managing a production environment, Kubernetes Dashboard is the better option.

Exploring K3S on Docker using K3D

In this post, I’ll show you how to start with K3D, an awesome tool for running lightweight Kubernetes clusters using K3S on Docker. I hope this post will help you quickly set up and understand K3D. Let’s dive in!

What is K3S?

Before starting with K3D, we need to know about K3S. Developed by Rancher Labs, K3S is a lightweight Kubernetes distribution designed for IoT and edge environments. It is a fully conformant Kubernetes distribution but optimized to work in resource-constrained settings by reducing its resource footprint and dependencies.

Key highlights of K3S include:

- Optimized for Edge: Ideal for small clusters and resource-limited environments.

- Built-In Tools: Networking (Flannel), a ServiceLB load-balancer controller, and ingress (Traefik) are included, minimizing setup complexity.

- Compact Design: K3S simplifies Kubernetes by bundling everything into a single binary and removing unnecessary components like legacy APIs.

Now let’s dive into K3D.

What is K3D?

K3D acts as a wrapper for K3S, making it possible to run K3S clusters inside Docker containers. It provides a convenient way to manage these clusters, offering speed, simplicity, and scalability for local Kubernetes environments.

Here’s why K3D is popular:

- Ease of Use: Quickly spin up and tear down clusters with simple commands.

- Resource Efficiency: Run multiple clusters on a single machine without significant overhead.

- Development Focus: Perfect for local development, CI/CD pipelines, and testing.

Let’s move on to how you can set up K3D and start using it.

Requirements

Before starting with K3D, make sure you have installed the following prerequisites based on your operating system.

- Docker

Follow the Docker installation guide for your operating system. Alternatively, you can simplify the process with these commands:

Terminal window
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ sudo usermod -aG docker $USER #add user to the docker group
$ docker version
Client: Docker Engine - Community
Version: 27.4.0
API version: 1.47
Go version: go1.22.10
Git commit: bde2b89
Built: Sat Dec 7 10:38:40 2024
OS/Arch: linux/amd64
Context: default
...

- Kubectl

The Kubernetes command-line tool, kubectl, is required to interact with your K3D cluster. Install it by following the instructions in the official Kubernetes documentation, or use these steps:

Terminal window
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install kubectl /usr/local/bin/kubectl
$ kubectl version --client
Client Version: v1.32.0
Kustomize Version: v5.5.0

- K3D

Install K3D by referring to the official documentation or using the following command:

Terminal window
$ curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
$ k3d --version
k3d version v5.7.4
k3s version v1.30.4-k3s1 (default)

NB: I use Ubuntu 22.04 to install the requirements.

Create Your First Cluster

Basic

Terminal window
$ k3d cluster create mycluster
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-mycluster'
INFO[0000] Created image volume k3d-mycluster-images
INFO[0000] Starting new tools node...
INFO[0001] Creating node 'k3d-mycluster-server-0'
INFO[0002] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.7.4'
INFO[0005] Pulling image 'docker.io/rancher/k3s:v1.30.4-k3s1'
INFO[0006] Starting node 'k3d-mycluster-tools'
INFO[0030] Creating LoadBalancer 'k3d-mycluster-serverlb'
INFO[0033] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.7.4'
INFO[0045] Using the k3d-tools node to gather environment information
INFO[0045] HostIP: using network gateway 172.18.0.1 address
INFO[0045] Starting cluster 'mycluster'
INFO[0045] Starting servers...
INFO[0045] Starting node 'k3d-mycluster-server-0'
INFO[0053] All agents already running.
INFO[0053] Starting helpers...
INFO[0053] Starting node 'k3d-mycluster-serverlb'
INFO[0060] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0062] Cluster 'mycluster' created successfully!
INFO[0062] You can now use it like this:
kubectl cluster-info

This command will create a cluster named mycluster with one control plane node.

Once the cluster is created, check its status using kubectl:

Terminal window
$ kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:43355
CoreDNS is running at https://0.0.0.0:43355/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:43355/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

To ensure that the nodes in the cluster are active, run:

Terminal window
$ kubectl get nodes --output wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3d-mycluster-server-0 Ready control-plane,master 5m14s v1.30.4+k3s1 172.18.0.2 <none> K3s v1.30.4+k3s1 6.8.0-50-generic containerd://1.7.20-k3s1

To list all the clusters created with K3D, use the following command:

Terminal window
$ k3d cluster list
NAME SERVERS AGENTS LOADBALANCER
mycluster 1/1 0/0 true

To stop, start, or delete your cluster, use the following commands:

Terminal window
$ k3d cluster stop mycluster
INFO[0000] Stopping cluster 'mycluster'
INFO[0020] Stopped cluster 'mycluster'
Terminal window
$ k3d cluster start mycluster
INFO[0000] Using the k3d-tools node to gather environment information
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-mycluster-tools'
INFO[0001] HostIP: using network gateway 172.18.0.1 address
INFO[0001] Starting cluster 'mycluster'
INFO[0001] Starting servers...
INFO[0001] Starting node 'k3d-mycluster-server-0'
INFO[0005] All agents already running.
INFO[0005] Starting helpers...
INFO[0005] Starting node 'k3d-mycluster-serverlb'
INFO[0012] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0014] Started cluster 'mycluster'
Terminal window
$ k3d cluster delete mycluster
INFO[0000] Deleting cluster 'mycluster'
INFO[0001] Deleting cluster network 'k3d-mycluster'
INFO[0001] Deleting 1 attached volumes...
INFO[0001] Removing cluster details from default kubeconfig...
INFO[0001] Removing standalone kubeconfig file (if there is one)...
INFO[0001] Successfully deleted cluster mycluster!

If you want to start a cluster with extra server and worker nodes, then extend the creation command like this:

Terminal window
$ k3d cluster create mycluster --servers 2 --agents 4

After creating the cluster, you can verify its status using these commands:

Terminal window
$ k3d cluster list
NAME SERVERS AGENTS LOADBALANCER
mycluster 2/2 4/4 true
Terminal window
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-mycluster-agent-0 Ready <none> 51s v1.30.4+k3s1
k3d-mycluster-agent-1 Ready <none> 52s v1.30.4+k3s1
k3d-mycluster-agent-2 Ready <none> 53s v1.30.4+k3s1
k3d-mycluster-agent-3 Ready <none> 51s v1.30.4+k3s1
k3d-mycluster-server-0 Ready control-plane,etcd,master 81s v1.30.4+k3s1
k3d-mycluster-server-1 Ready control-plane,etcd,master 64s v1.30.4+k3s1

Bootstrapping Cluster

Bootstrapping a cluster with configuration files allows you to automate and customize the process of setting up a K3D cluster. By using a configuration file, you can easily specify cluster details such as node count, roles, ports, volumes, and more, making it easy to recreate or modify clusters.

Here’s an example of a basic cluster configuration file my-cluster.yaml that sets up a K3D cluster with multiple nodes:

apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: my-cluster
servers: 1
agents: 2
image: rancher/k3s:v1.30.4-k3s1
ports:
  - port: 30000-30100:30000-30100
    nodeFilters:
      - server:*
options:
  k3s:
    extraArgs:
      - arg: --disable=traefik
        nodeFilters:
          - server:*

This K3D config creates a cluster named my-cluster with 1 server, 2 agents, K3S version v1.30.4-k3s1, a host-to-server port mapping (30000-30100), and Traefik disabled on the server nodes.

Terminal window
$ k3d cluster create --config my-cluster.yaml

The result after creation:

Terminal window
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-my-cluster-agent-0 Ready <none> 14s v1.30.4+k3s1
k3d-my-cluster-agent-1 Ready <none> 15s v1.30.4+k3s1
k3d-my-cluster-server-0 Ready control-plane,master 19s v1.30.4+k3s1
Terminal window
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9c7a53f40065 ghcr.io/k3d-io/k3d-proxy:5.7.4 "/bin/sh -c nginx-pr…" About a minute ago Up 46 seconds 80/tcp, 0.0.0.0:30000-30100->30000-30100/tcp, :::30000-30100->30000-30100/tcp, 0.0.0.0:34603->6443/tcp k3d-my-cluster-serverlb
41f544fa9f8e rancher/k3s:v1.30.4-k3s1 "/bin/k3d-entrypoint…" About a minute ago Up 55 seconds k3d-my-cluster-agent-1
48acdbaa0734 rancher/k3s:v1.30.4-k3s1 "/bin/k3d-entrypoint…" About a minute ago Up 55 seconds k3d-my-cluster-agent-0
0e2799145367 rancher/k3s:v1.30.4-k3s1 "/bin/k3d-entrypoint…" About a minute ago Up 59 seconds k3d-my-cluster-server-0

Create Simple Deployment

Once your K3D cluster is up and running, you can deploy applications onto the cluster. A deployment in Kubernetes ensures that a specified number of pod replicas are running, and it manages updates to those pods.

Use the kubectl create deployment command to define and start a deployment. For example:

Terminal window
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

Check the deployment status using the kubectl get deployment command:

Terminal window
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/1 1 0 2m58s

Expose the deployment:

Terminal window
$ kubectl expose deployment nginx --port=80 --type=LoadBalancer
service/nginx exposed

Verify the Pod and Service:

Terminal window
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-bf5d5cf98-p6mpj 1/1 Running 0 69s
Terminal window
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 95s
nginx LoadBalancer 10.43.122.4 172.18.0.2,172.18.0.3,172.18.0.4 80:30893/TCP 66s

Try accessing the service from your browser using a LoadBalancer EXTERNAL-IP and the mapped port:

http://172.18.0.2:30893

Access service

Conclusion

K3D simplifies the process of running Kubernetes clusters with K3S on Docker, making it ideal for local development and testing. By setting up essential tools like Docker, kubectl, and K3D, you can easily create and manage clusters. You can deploy applications with just a few commands, expose them, and access them locally. K3D offers a flexible and lightweight solution for Kubernetes, allowing developers to experiment and work with clusters in an efficient way.

Thank you for taking the time to read this guide. I hope it was helpful in getting you started with K3D and Kubernetes!😀