Kubernetes 105: Create Kubernetes Cluster

Kubernetes Cover

Let’s start again. This time, we’ll get some hands-on practice.

In this part of the Kubernetes series, we will explore how to create a Kubernetes cluster in different environments. Whether you’re running Kubernetes locally or in the cloud, understanding how to set up a cluster is fundamental to deploying and managing containerized applications efficiently.

We will cover three different ways to create a Kubernetes cluster:

- Kind (Kubernetes in Docker) - A lightweight way to run Kubernetes clusters locally for testing and development.

- K3D (K3S in Docker) - An even lighter-weight option that runs the K3S distribution inside Docker, optimized for local development and CI/CD workflows.

- EKS (Amazon Elastic Kubernetes Service) - A managed Kubernetes service provided by AWS for running Kubernetes workloads in the cloud.

Each approach has its own use cases, advantages, and trade-offs. Let’s dive into each one and see how to set up a cluster.

Setting Up a Kubernetes Cluster with Kind

Kind Logo

Kind (Kubernetes in Docker) is one of the simplest ways to spin up a Kubernetes cluster for local development and testing. It runs Kubernetes clusters inside Docker containers and is widely used for CI/CD and development workflows.

Prerequisites

- Docker installed on your machine. (installation guide)

- KIND CLI installed. (installation guide)

- Kubectl CLI installed. (installation guide)

Create a Cluster with Kind

- Create a new Kind cluster:

Terminal window
$ kind create cluster --name kind-cluster
Creating cluster "kind-cluster" ...
✓ Ensuring node image (kindest/node:v1.31.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind-cluster"
You can now use your cluster with:
kubectl cluster-info --context kind-kind-cluster
Thanks for using kind! 😊
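
The command above creates a single-node cluster with default settings. If you want worker nodes, kind also accepts a declarative config file; here is a minimal sketch (the kind-config.yaml file name and node layout are just examples):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

Terminal window
$ kind create cluster --name kind-cluster --config kind-config.yaml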

- Check the cluster:

Terminal window
$ kubectl cluster-info --context kind-kind-cluster
Kubernetes control plane is running at https://127.0.0.1:43417
CoreDNS is running at https://127.0.0.1:43417/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

- List available nodes:

Terminal window
$ kubectl get nodes
NAME                         STATUS   ROLES           AGE   VERSION
kind-cluster-control-plane   Ready    control-plane   75s   v1.31.0

Create Simple Deployment

- Use the kubectl create deployment command to define and start a deployment:

Terminal window
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

- Check deployment status using kubectl get deployment command:

Terminal window
$ kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   0/1     1            0           29s

- Expose the deployment:

Terminal window
$ kubectl expose deployment nginx --port=80 --type=LoadBalancer
service/nginx exposed
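
Note that kind ships without a cloud load balancer, so a LoadBalancer service will typically stay in the Pending state. A simple way to reach nginx locally is port-forwarding (the local port 8080 is an arbitrary choice):

Terminal window
$ kubectl port-forward svc/nginx 8080:80
# nginx is now reachable at http://localhost:8080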

- Verify the Pod status and then try to access Nginx using your browser:

Terminal window
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-676b6c5bbc-wd87x   1/1     Running   0          12m

Access Nginx

- Delete the cluster when no longer needed

Terminal window
$ kind delete cluster --name kind-cluster
Deleting cluster "kind-cluster" ...
Deleted nodes: ["kind-cluster-control-plane"]

Setting Up a Kubernetes Cluster with K3D

K3D Logo

K3D is a tool that allows you to run lightweight Kubernetes clusters using K3S inside Docker. It is a great choice for fast, local Kubernetes development.

Prerequisites

- Docker installed on your machine. (installation guide)

- K3D CLI installed. (installation guide)

- Kubectl CLI installed. (installation guide)

Create a Cluster with K3D

- Create a new K3D cluster:

Terminal window
$ k3d cluster create my-k3d-cluster
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-my-k3d-cluster'
INFO[0000] Created image volume k3d-my-k3d-cluster-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-my-k3d-cluster-tools'
INFO[0001] Creating node 'k3d-my-k3d-cluster-server-0'
INFO[0001] Creating LoadBalancer 'k3d-my-k3d-cluster-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.20.0.1 address
INFO[0001] Starting cluster 'my-k3d-cluster'
INFO[0001] Starting servers...
INFO[0001] Starting node 'k3d-my-k3d-cluster-server-0'
INFO[0008] All agents already running.
INFO[0008] Starting helpers...
INFO[0008] Starting node 'k3d-my-k3d-cluster-serverlb'
INFO[0016] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0018] Cluster 'my-k3d-cluster' created successfully!
INFO[0018] You can now use it like this:
kubectl cluster-info

- Check the cluster status:

Terminal window
$ kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:46503
CoreDNS is running at https://0.0.0.0:46503/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:46503/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

- List available nodes:

Terminal window
$ kubectl get nodes
NAME                          STATUS   ROLES                  AGE     VERSION
k3d-my-k3d-cluster-server-0   Ready    control-plane,master   2m33s   v1.30.4+k3s1

Create Simple Deployment

- Use the kubectl create deployment command to define and start a deployment:

Terminal window
$ kubectl create deployment httpd --image=httpd

- Check deployment status using kubectl get deployment command:

Terminal window
$ kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
httpd   0/1     1            0           45s

- Verify the Pod status:

Terminal window
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
httpd-56f946b8c8-84ww8   1/1     Running   0          9m11s

- Expose the deployment:

Terminal window
$ kubectl expose deployment httpd --port=80 --type=LoadBalancer
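
K3S ships with a built-in service load balancer, but the service address lives inside the Docker network. To reach it from your host browser, one common approach is to create the cluster with a port mapped to K3D's load balancer up front (host port 8080 here is an arbitrary choice):

Terminal window
$ k3d cluster create my-k3d-cluster -p "8080:80@loadbalancer"
# the exposed service is then reachable at http://localhost:8080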

- Try to access using browser:

Access HTTPD

- Delete the cluster when no longer needed:

Terminal window
$ k3d cluster delete my-k3d-cluster
INFO[0000] Deleting cluster 'my-k3d-cluster'
INFO[0001] Deleting cluster network 'k3d-my-k3d-cluster'
INFO[0001] Deleting 1 attached volumes...
INFO[0001] Removing cluster details from default kubeconfig...
INFO[0001] Removing standalone kubeconfig file (if there is one)...
INFO[0001] Successfully deleted cluster my-k3d-cluster!

Setting Up a Kubernetes Cluster on AWS EKS

AWS EKS Logo

Amazon Elastic Kubernetes Service (EKS) is a fully managed Kubernetes service on AWS, designed for running production workloads.

Prerequisites

- AWS CLI installed and configured. (installation guide)

- EKSCTL CLI installed. (installation guide)

- Kubectl CLI installed. (installation guide)

Create a cluster on EKS

To create a Kubernetes cluster in AWS, you can use the AWS Console (dashboard) or the eksctl CLI. For this guide, we will use eksctl.

The following command provisions an EKS cluster with two t4g.small nodes in the us-east-1 region, making it ready for running Kubernetes workloads.

Terminal window
$ eksctl create cluster \
--name cluster-1 \
--region us-east-1 \
--node-type t4g.small \
--nodes 2 \
--nodegroup-name node-group-1
2025-02-01 19:52:35 [ℹ] eksctl version 0.202.0
2025-02-01 19:52:35 [ℹ] using region us-east-1
2025-02-01 19:52:37 [ℹ] setting availability zones to [us-east-1c us-east-1f]
...
2025-02-01 20:02:04 [ℹ] creating addon
2025-02-01 20:02:04 [ℹ] successfully created addon
2025-02-01 20:02:05 [ℹ] creating addon
2025-02-01 20:02:06 [ℹ] successfully created addon
2025-02-01 20:02:07 [ℹ] creating addon
2025-02-01 20:02:07 [ℹ] successfully created addon
EKS cluster "cluster-1" in "us-east-1" region is ready

- Open the AWS console and navigate to the EKS service; you can see the cluster has been created successfully.

After cluster creation

- List available nodes:

Terminal window
$ kubectl get nodes
NAME                            STATUS   ROLES    AGE   VERSION
ip-192-168-xx-yy.ec2.internal   Ready    <none>   17m   v1.30.8-eks-aeac579
ip-192-168-xx-yy.ec2.internal   Ready    <none>   17m   v1.30.8-eks-aeac579

Create Simple Deployment

- Use the kubectl create deployment command to define and start a deployment:

Terminal window
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

- Check deployment status using kubectl get deployment command:

Terminal window
$ kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           23s

- Verify the Pod status:

Terminal window
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-bf5d5cf98-9dld5   1/1     Running   0          43s

- Expose the service:

Terminal window
$ kubectl expose deployment nginx --type=LoadBalancer --port=80 --name=nginx-service
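
On EKS, exposing a LoadBalancer service makes AWS provision an Elastic Load Balancer. Its DNS name appears in the EXTERNAL-IP column once provisioning finishes, which can take a minute or two:

Terminal window
$ kubectl get service nginx-service
# open the hostname shown under EXTERNAL-IP in your browser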

- Try to access using the browser:

Access the service

- Delete the cluster when no longer needed:

Terminal window
$ eksctl delete cluster --name cluster-1 --region us-east-1
2025-02-01 20:51:59 [ℹ] deleting EKS cluster "cluster-1"
2025-02-01 20:52:02 [ℹ] will drain 0 unmanaged nodegroup(s) in cluster "cluster-1"
2025-02-01 20:52:02 [ℹ] starting parallel draining, max in-flight of 1
2025-02-01 20:52:04 [ℹ] deleted 0 Fargate profile(s)
2025-02-01 20:52:13 [✔] kubeconfig has been updated
2025-02-01 20:52:13 [ℹ] cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2025-02-01 20:52:56 [ℹ]
...
2025-02-01 21:02:00 [ℹ] waiting for CloudFormation stack "eksctl-cluster-1-nodegroup-node-group-1"
2025-02-01 21:02:01 [ℹ] will delete stack "eksctl-cluster-1-cluster"
2025-02-01 21:02:04 [✔] all cluster resources were deleted

Conclusion

Setting up a Kubernetes cluster is the first step in running containerized applications at scale. In this guide, we’ve explored three different ways to create a Kubernetes cluster and do a simple deployment: using Kind and K3D for local development and using EKS for cloud-based deployments. Each method has its own advantages depending on your use case.

Thanks for reading this post. Stay tuned!

References:

- KUBERNETES UNTUK PEMULA. https://github.com/ngoprek-kubernetes/buku-kubernetes-pemula.

- How do I install AWS EKS CLI (eksctl)? https://learn.arm.com/install-guides/eksctl.

Kubernetes 104: Controllers

Kubernetes Cover

Let’s start again. Now I’m going to talk about Controllers in Kubernetes. In Kubernetes, a Controller is like a cluster’s brain, constantly working to ensure the system maintains its desired state. By monitoring objects such as Pods, Deployments, or DaemonSets, Controllers automate tasks and handle changes dynamically. Understanding Controllers is key to grasping how Kubernetes orchestrates and manages workloads seamlessly.

Common Kubernetes Controllers

Here are some of the most commonly used Controllers in Kubernetes:

1. Deployment

Deployments manage updates to Pods and ReplicaSets declaratively by transitioning the current state to the desired state step-by-step. They can create new ReplicaSets, adopt existing resources, or remove old Deployments.

Deployment Diagram

Common uses for Deployments include:

a. Releasing a ReplicaSet and monitoring its status.

b. Updating Pod specifications to declare a new desired state.

c. Scaling up to handle increased load.

d. Rolling back to a previous version if the current state is unstable.

e. Cleaning up unused ReplicaSets.
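
To make this concrete, here is a minimal Deployment manifest sketch (the name, replica count, and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # desired number of Pods
  selector:
    matchLabels:
      app: web
  template:                  # Pod template managed via a ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.27    # bump this tag to trigger a rolling update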

2. ReplicaSet

ReplicaSets (RS) function as controllers in Kubernetes, responsible for maintaining a consistent number of running Pods for a specific workload. Acting as the mechanism behind the scenes, the ReplicaSet controller continuously monitors the state of the Pods and ensures the desired replica count is maintained. If a Pod crashes, is evicted, or fails for any reason, the ReplicaSet controller swiftly creates new Pods to restore the desired state, ensuring resilience and uninterrupted service.

In practical use, ReplicaSets are not typically managed directly by users. Instead, they are controlled through Deployments, which leverage the ReplicaSet controller while providing additional features such as rolling updates, rollbacks, and declarative workload management. This abstraction allows users to benefit from the reliability and scalability of ReplicaSet controllers without dealing with their complexities directly.

ReplicaSet Diagram

3. DaemonSet

DaemonSet (DS) ensures that all nodes (or a specific subset of nodes) in a cluster run a copy of a particular Pod. When a new node is added, the DaemonSet automatically creates a Pod on that node. Conversely, when a node is removed, the associated Pod is deleted by the garbage collector. Deleting the DaemonSet removes all Pods it created.

DaemonSet Diagram

Common uses of DaemonSet:

1. Running storage daemons across nodes, such as Glusterd or Ceph.

2. Running log collection daemons across nodes, such as Fluentd or LogStash.

3. Running node monitoring daemons, such as Prometheus Node Exporter, Flowmill, or New Relic Agent.

DaemonSet is ideal for tasks that require processes to run on every node, such as log collection, monitoring, or providing local volumes.
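
A minimal DaemonSet manifest sketch for a node-level log collector (the name and image are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: fluentd
        image: fluentd:v1.16-1   # one copy of this Pod runs on every node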

4. StatefulSet

A StatefulSet is a Kubernetes controller used for managing stateful applications. Unlike Deployments, which focus on stateless workloads, StatefulSet is designed for applications that require persistent identity and stable storage. It ensures that each Pod it manages has a unique, stable network identity and maintains a strict order during creation, scaling, or deletion.

Key Features of StatefulSet:

1. Stable Network Identity: Each Pod gets a unique and persistent DNS name (e.g., pod-0, pod-1), which remains consistent even after rescheduling.

2. Ordered Pod Deployment and Scaling: Pods are created and scaled in a sequential order. For example, pod-0 must be created before pod-1, and the same applies during deletion.

3. Persistent Storage: StatefulSet works closely with PersistentVolumeClaim (PVC). Each Pod gets a dedicated PersistentVolume that remains intact even after the Pod is deleted or rescheduled.

StatefulSet Diagram

Common Use Cases:

1. Databases like MySQL, PostgreSQL, and MongoDB, where stable storage and network identity are critical.

2. Distributed Systems like Cassandra, Kafka, or ZooKeeper, where maintaining order and state is essential.

3. Caching Systems like Redis, requiring predictable storage and replication across nodes.
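
A StatefulSet manifest sketch showing these features together (names and sizes are illustrative; the headless Service referenced by serviceName is assumed to exist):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # headless Service providing stable DNS names
  replicas: 3                # creates db-0, db-1, db-2 in order
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:16
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # each Pod gets its own dedicated PVC
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi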

5. Job

A Job is a Kubernetes controller designed to manage tasks that run to completion. Unlike Deployments or StatefulSets, which manage long-running or stateful applications, Jobs are used for short-lived workloads that need to be executed only once or a specific number of times.

Key Features of a Job:

1. Ensures Completion: A Job creates one or more Pods to perform a task and ensures that the task is completed successfully. If a Pod fails, the Job controller automatically creates a replacement until the task succeeds.

2. Parallelism: Jobs support parallel execution, allowing multiple Pods to run concurrently, controlled by the parallelism and completions parameters.

3. Retries: Jobs retry failed Pods until the task is successful or a specified backoff limit is reached.

Common Use Cases:

- Batch Processing: Data transformation, ETL pipelines, or video encoding.

- Database Operations: Running migrations, backups, or clean-up scripts.

- One-Time Tasks: Performing diagnostics, generating reports, or initializing configurations.
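
A minimal Job manifest sketch illustrating completions, parallelism, and retries (all values are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: report-generator
spec:
  completions: 3       # the task must finish successfully three times
  parallelism: 2       # run at most two Pods concurrently
  backoffLimit: 4      # retry failed Pods up to four times
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo generating report && sleep 5"]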

Thank you for reading this post. 😀

References:

- KUBERNETES UNTUK PEMULA. https://github.com/ngoprek-kubernetes/buku-kubernetes-pemula.

- Kubernetes Documentation: Jobs. https://kubernetes.io/docs/concepts/workloads/controllers/job.

- Kubernetes Controllers. https://www.uffizzi.com/kubernetes-multi-tenancy/kubernetes-controllers.

- Kubernetes: DaemonSet. https://opstree.com/blog/2021/12/07/kubernetes-daemonset.

- Kubernetes StatefulSet vs Kubernetes Deployment. https://devtron.ai/blog/deployment-vs-statefulset.

Kubernetes 103: Object

Kubernetes Cover

Let’s start again. Now I’m going to talk about objects in Kubernetes. In Kubernetes, an object represents a record of intent, where you declare what you want the cluster to do. The Kubernetes control plane works continuously to ensure that the current state of your system matches the desired state described by these objects.

Common Kubernetes Objects

Here are some of the most commonly used objects in Kubernetes:

1. Pod

A Pod is a group of one or more containers that share storage, networking, and a defined runtime configuration. Containers within a pod are scheduled and deployed together, operating in the same execution context.

Containers on Pod

A pod acts as a logical host for tightly connected containers, enabling seamless communication via localhost and standard IPC mechanisms like SystemV semaphores or POSIX shared memory. Containers in different pods, however, have unique IP addresses and communicate using pod IPs, ensuring clear isolation while supporting inter-pod networking.

Like containers in an application, pods are considered relatively ephemeral entities. Their lifecycle involves creation, assignment of a unique UID, scheduling to a node, and remaining there until they are stopped, restarted, or deleted. You can see the pod life cycle from the image below.

Pod Lifecycle

If a node fails, all pods on that node are marked for deletion after a specific timeout. An individual pod is never rescheduled to a different node; instead, it is replaced by a new pod with a different UID.
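
As a sketch of this shared execution context, here is a two-container Pod whose containers collaborate through a shared emptyDir volume (names and images are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared
    emptyDir: {}             # scratch space living as long as the Pod
  containers:
  - name: web
    image: nginx:1.27
    volumeMounts:
    - name: shared
      mountPath: /usr/share/nginx/html
  - name: writer             # sidecar writing content the web container serves
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/index.html && sleep infinity"]
    volumeMounts:
    - name: shared
      mountPath: /data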

2. Service

A Service in Kubernetes is an abstraction that defines a logical set of Pods and the policies for accessing them, often referred to as microservices. You can see the service balancing traffic across multiple pods in the image below.

Kubernetes Service

For example, imagine a backend that provides image-processing functionality with three replicas. These replicas are identical, so the frontend doesn’t need to know which backend instance it uses. Even if the backend Pods change over time, the frontend doesn’t need to manage or track the list of current backends.

A Service decouples this complexity by providing a stable interface, allowing clients to interact with Pods without worrying about their lifecycle or specific details.

For applications running on Kubernetes, the platform provides a simple API endpoint that is continuously updated to reflect the current state of the Pods within a service.

For non-native applications, Kubernetes offers a virtual IP-based bridge that routes traffic to the backend Pods of the service. This ensures seamless integration and reliable access, regardless of changes in the underlying Pods.
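
A minimal Service manifest sketch for the image-processing example above (the label and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: image-processor
spec:
  selector:
    app: image-processor   # matches the labels on the backend Pods
  ports:
  - port: 80               # port the Service exposes to clients
    targetPort: 8080       # port the backend containers listen on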

3. Volume

A Kubernetes Volume provides a way for containers to access storage beyond their ephemeral lifecycle. Unlike a container’s local filesystem, which is destroyed when the container stops, a volume ensures data persists as long as the Pod using it exists. Kubernetes supports multiple volume types, such as emptyDir, hostPath, configMap, secret, and network-based storage like NFS or cloud provider-specific volumes.

Volumes can be shared among containers within the same Pod, enabling them to collaborate on data. However, when a Pod is deleted, the associated volume typically goes with it, unless you’re using Persistent Volumes (PV).

Persistent Volumes (PV) and Persistent Volume Claims (PVC)

A PV is a piece of storage provisioned in the cluster, either statically or dynamically. It is an abstract representation of storage resources like disks, network storage, or cloud storage, and exists independently of any specific Pod.

A PVC is a request for storage by a Pod. Users specify the required size, access modes (e.g., ReadWriteOnce, ReadOnlyMany), and storage class in the PVC. The cluster automatically binds the PVC to a suitable PV, providing the requested storage.

There are two methods to provision Persistent Volumes (PV):

1. Static

- The cluster administrator manually creates PVs by defining them in YAML manifests.

- These PVs are available for binding with any PVC that matches their configuration.

2. Dynamic

- Kubernetes automatically provisions PVs based on the PVCs submitted by users.

- The cluster uses a StorageClass to determine the storage backend and provision the appropriate PV.

- This method simplifies administration by eliminating the need for pre-created PVs.
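
A PVC sketch requesting dynamically provisioned storage (the claim name, size, and storage class are illustrative and cluster-specific):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce              # mountable read-write by a single node
  storageClassName: standard   # assumption: a class with this name exists
  resources:
    requests:
      storage: 5Gi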

4. Namespace

A namespace in Kubernetes is a way to divide a cluster into virtual sub-clusters, providing logical isolation between resources. It is useful for organizing resources in environments with multiple teams, projects, or stages (e.g., development, staging, production).

By default, Kubernetes provides namespaces like default, kube-system, and kube-public. Users can create custom namespaces to segregate workloads and manage resource quotas, access controls, and policies independently for each namespace.

Namespaces are ideal for multi-tenant environments or to avoid naming collisions in large clusters. However, they don’t provide hard security boundaries and are primarily a tool for organizational purposes.
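
As a quick example (the namespace name is arbitrary):

Terminal window
$ kubectl create namespace staging
$ kubectl run web --image=nginx -n staging
$ kubectl get pods -n staging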

References:

- KUBERNETES UNTUK PEMULA. https://github.com/ngoprek-kubernetes/buku-kubernetes-pemula.

- Kubernetes Objects Guide. https://phoenixnap.com/kb/kubernetes-objects.

K3D: Getting Started with ArgoCD

Cover Image

Intro

ArgoCD is a GitOps tool with a straightforward but powerful objective: to declaratively deploy applications to Kubernetes by managing application resources directly from version control systems, such as Git repositories. Every commit to the repository represents a change, which ArgoCD can apply to the Kubernetes cluster either manually or automatically. This approach ensures that deployment processes are fully controlled through version-controlled files, fostering an explicit and auditable release process.

For example, releasing a new application version involves updating the image tag in the resource files and committing the changes to the repository. ArgoCD syncs with the repository and seamlessly deploys the new version to the cluster.

Since ArgoCD itself operates on Kubernetes, it is straightforward to set up and integrates seamlessly with lightweight Kubernetes distributions like K3s. In this tutorial, we will demonstrate how to configure a local Kubernetes cluster using K3D and deploy applications with ArgoCD, utilizing the argocd-example-apps repository as a practical example.

Prerequisites

Before we begin, ensure you have the following installed:

- Docker

- Kubectl

- K3D

- ArgoCD CLI

Step 1: Set Up a K3D Cluster

Create a new Kubernetes cluster using K3D:

Terminal window
$ k3d cluster create argocluster --agents 2
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-argocluster'
INFO[0000] Created image volume k3d-argocluster-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-argocluster-tools'
INFO[0001] Creating node 'k3d-argocluster-server-0'
INFO[0001] Creating node 'k3d-argocluster-agent-0'
INFO[0001] Creating node 'k3d-argocluster-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-argocluster-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.18.0.1 address
INFO[0001] Starting cluster 'argocluster'
INFO[0001] Starting servers...
INFO[0002] Starting node 'k3d-argocluster-server-0'
INFO[0009] Starting agents...
INFO[0009] Starting node 'k3d-argocluster-agent-0'
INFO[0009] Starting node 'k3d-argocluster-agent-1'
INFO[0017] Starting helpers...
INFO[0017] Starting node 'k3d-argocluster-serverlb'
INFO[0024] Injecting records for hostAliases (incl. host.k3d.internal) and for 4 network members into CoreDNS configmap...
INFO[0026] Cluster 'argocluster' created successfully!
INFO[0026] You can now use it like this:
kubectl cluster-info

Verify that your cluster is running:

Terminal window
$ kubectl get nodes
NAME                       STATUS   ROLES                  AGE   VERSION
k3d-argocluster-agent-0    Ready    <none>                 63s   v1.30.4+k3s1
k3d-argocluster-agent-1    Ready    <none>                 62s   v1.30.4+k3s1
k3d-argocluster-server-0   Ready    control-plane,master   68s   v1.30.4+k3s1

Step 2: Install ArgoCD

Install ArgoCD in your K3D cluster:

Terminal window
$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
...

Check the status of ArgoCD pods:

Terminal window
$ kubectl get pods -n argocd
NAME                                                READY   STATUS    RESTARTS   AGE
argocd-application-controller-0                     1/1     Running   0          29m
argocd-applicationset-controller-684cd5f5cc-78cc8   1/1     Running   0          29m
argocd-dex-server-77c55fb54f-bw956                  1/1     Running   0          29m
argocd-notifications-controller-69cd888b56-g7z4r    1/1     Running   0          29m
argocd-redis-55c76cb574-72mdh                       1/1     Running   0          29m
argocd-repo-server-584d45d88f-2mdlc                 1/1     Running   0          29m
argocd-server-8667f8577-prx6s                       1/1     Running   0          29m

Expose the ArgoCD API server locally, then try accessing the dashboard:

Terminal window
$ kubectl port-forward svc/argocd-server -n argocd 8080:443

ArgoCD

Step 3: Configure ArgoCD

Log in to ArgoCD

Retrieve the initial admin password:

Terminal window
$ kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d

Log in using the admin username and the password above.

Log in into ArgoCD Dashboard

Connect a Git Repository

Just clone the argocd-example-apps repository:

Terminal window
$ git clone https://github.com/argoproj/argocd-example-apps.git

Log in to the ArgoCD server from the CLI:

Terminal window
$ argocd login localhost:8080
WARNING: server certificate had error: tls: failed to verify certificate: x509: certificate signed by unknown authority. Proceed insecurely (y/n)? y
Username: admin
Password:
'admin:login' logged in successfully
Context 'localhost:8080' updated

Create a new ArgoCD application using the repository:

Terminal window
$ argocd app create example-app \
--repo https://github.com/argoproj/argocd-example-apps.git \
--path ./guestbook \
--dest-server https://kubernetes.default.svc \
--dest-namespace default
application 'example-app' created
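
The declarative equivalent of this command is an Application resource, which you could also keep in Git; a sketch using the same values:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    path: guestbook
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: default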

Sync the application:

Terminal window
$ argocd app sync example-app
TIMESTAMP                  GROUP  KIND        NAMESPACE  NAME          STATUS     HEALTH       HOOK  MESSAGE
2025-01-10T11:31:11+07:00         Service     default    guestbook-ui  OutOfSync  Missing
2025-01-10T11:31:11+07:00  apps   Deployment  default    guestbook-ui  OutOfSync  Missing
2025-01-10T11:31:11+07:00         Service     default    guestbook-ui  Synced     Healthy
2025-01-10T11:31:11+07:00         Service     default    guestbook-ui  Synced     Healthy            service/guestbook-ui created
2025-01-10T11:31:11+07:00  apps   Deployment  default    guestbook-ui  OutOfSync  Missing            deployment.apps/guestbook-ui created
2025-01-10T11:31:11+07:00  apps   Deployment  default    guestbook-ui  Synced     Progressing        deployment.apps/guestbook-ui created
...
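
Here the sync was triggered manually. If you prefer the automatic mode mentioned in the intro, you can enable it per application:

Terminal window
$ argocd app set example-app --sync-policy automated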

Step 4: Verify the Deployment

Check that the application is deployed successfully:

Terminal window
$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
guestbook-ui-649789b49c-zwjt8   1/1     Running   0          5m36s

Access the deployed application locally by port-forwarding the service:

Terminal window
$ kubectl port-forward svc/guestbook-ui 8081:80
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80

Guestbook UI

Dashboard App List

Dashboard App

Conclusion

In this tutorial, you’ve set up a local Kubernetes cluster using K3D and deployed applications with ArgoCD. This setup provides a simple and powerful way to practice GitOps workflows locally. By leveraging tools like ArgoCD, you can ensure your deployments are consistent, auditable, and declarative. Happy GitOps-ing!

References

- GitOps on a Laptop with K3D and ArgoCD. https://www.sokube.io/en/blog/gitops-on-a-laptop-with-k3d-and-argocd-en.

- Take Argo CD for a spin with K3s and k3d. https://www.bekk.christmas/post/2020/13/take-argo-cd-for-a-spin-with-k3s-and-k3d.

- Application Deploy to Kubernetes with Argo CD and K3d. https://yashguptaa.medium.com/application-deploy-to-kubernetes-with-argo-cd-and-k3d-8e29cf4f83ee.

K3D: Monitoring Your Service using Kubernetes Dashboard or Octant

Cover

K3D is a lightweight wrapper around k3s that allows you to run Kubernetes clusters inside Docker containers. While K3D is widely used for local development and testing, effective monitoring of services running on Kubernetes clusters is essential for debugging, performance tuning, and understanding resource usage.

In this blog, I will explore two popular monitoring tools for Kubernetes: Kubernetes Dashboard, the official web-based UI for Kubernetes, and Octant, a local, real-time, standalone dashboard developed by VMware. Both tools have unique strengths, and this guide will help you understand when to use one over the other.

Setting Up Kubernetes Dashboard On K3D

First, create a cluster using k3d cluster create:

Terminal window
$ k3d cluster create dashboard --servers 1 --agents 2
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-dashboard'
INFO[0000] Created image volume k3d-dashboard-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-dashboard-tools'
INFO[0001] Creating node 'k3d-dashboard-server-0'
INFO[0001] Creating node 'k3d-dashboard-agent-0'
INFO[0001] Creating node 'k3d-dashboard-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-dashboard-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.18.0.1 address
INFO[0001] Starting cluster 'dashboard'
INFO[0001] Starting servers...
INFO[0001] Starting node 'k3d-dashboard-server-0'
INFO[0008] Starting agents...
INFO[0008] Starting node 'k3d-dashboard-agent-0'
INFO[0008] Starting node 'k3d-dashboard-agent-1'
INFO[0015] Starting helpers...
INFO[0016] Starting node 'k3d-dashboard-serverlb'
INFO[0022] Injecting records for hostAliases (incl. host.k3d.internal) and for 4 network members into CoreDNS configmap...
INFO[0024] Cluster 'dashboard' created successfully!
INFO[0024] You can now use it like this:
kubectl cluster-info

Next, deploy Kubernetes Dashboard:

Terminal window
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Ensure the pods are running:

Terminal window
$ kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-795895d745-kcbkw   1/1     Running   0          10m
kubernetes-dashboard-56cf4b97c5-fg92n        1/1     Running   0          10m

Then, create a service account and role binding. To access the dashboard, you need a service account with the proper permissions; define one together with a ClusterRoleBinding using the following YAML (saved as admin-user.yaml):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Apply this configuration:

Terminal window
$ kubectl apply -f admin-user.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Then, retrieve the token for login:

Terminal window
$ kubectl -n kubernetes-dashboard create token admin-user
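
Tokens created this way are short-lived by default; if yours expires while you work, re-run the command, or request a longer lifetime (subject to the API server's limits):

Terminal window
$ kubectl -n kubernetes-dashboard create token admin-user --duration=24h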

Use kubectl proxy to access the dashboard:

Terminal window
$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Finally, open your browser and navigate to:

Terminal window
http://127.0.0.1:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

Login to Kubernetes Dashboard

See the services

Note: Don’t forget to copy and paste the token when prompted.

Setting Up Octant on K3D

First, install Octant. You can use a package manager or download a binary directly from the official releases. For example, on macOS you can use Homebrew:

Terminal window
$ brew install octant

On Linux, download the appropriate release archive and extract it:

Terminal window
$ wget https://github.com/vmware-tanzu/octant/releases/download/v0.25.1/octant_0.25.1_Linux-64bit.tar.gz
$ tar -xvzf octant_0.25.1_Linux-64bit.tar.gz && mv octant_0.25.1_Linux-64bit octant
$ rm octant_0.25.1_Linux-64bit.tar.gz

Then, to start Octant, simply run the binary:

Terminal window
$ cd octant
$ ./octant
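
Octant picks up your current kubeconfig, so it sees the K3D cluster automatically, and serves its UI on 127.0.0.1:7777 by default. If that port is taken, the listener address can be overridden via the OCTANT_LISTENER_ADDR environment variable (the port below is an arbitrary choice):

Terminal window
$ OCTANT_LISTENER_ADDR=127.0.0.1:8900 ./octant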

Finally, you can see the dashboard:

Octant Dashboard

Comparison: Kubernetes Dashboard vs Octant

| Feature           | Kubernetes Dashboard                | Octant                     |
| ----------------- | ----------------------------------- | -------------------------- |
| Installation      | Requires deployment on the cluster  | Local installation         |
| Access            | Via web proxy                       | Localhost                  |
| Real-Time Updates | Partial (requires manual refresh)   | Full real-time updates     |
| Ease of Setup     | Moderate (requires token and RBAC)  | Easy (just run the binary) |

Conclusion

Both Kubernetes Dashboard and Octant offer valuable features for monitoring Kubernetes clusters in K3D. If you need a quick and easy way to monitor your local cluster with minimal setup, Octant is a great choice. On the other hand, if you want an experience closer to managing a production environment, Kubernetes Dashboard is the better option.

References: