
Deploying a Simple Go API with Supervisor and Nginx

Go, Supervisor & Nginx Cover

Intro

Hi! In this post, I’ll show you how to deploy a Go API using Supervisor to manage the process and Nginx as a web server to serve it.

Before we dive into the deployment steps, let’s briefly discuss why we’re using Supervisor and Nginx.

- Supervisor is a process control system that helps manage and monitor applications running in the background. It ensures that your Go API stays up and automatically restarts it if it crashes. See the full documentation

- Nginx is a high-performance web server that can also function as a reverse proxy, making it ideal for serving our Go API to the internet. See the full documentation

🤔 Why Choose Supervisor Over Other Options?

You might wonder why we use Supervisor instead of alternatives like Systemd, PM2, or containerized solutions like Docker. Here’s a quick comparison:

| Tool | Pros | Cons |
|---|---|---|
| Supervisor | Simple setup, great for managing multiple processes, easy log management | Requires manual config |
| Systemd | Native to Linux, faster startup | More complex setup, harder to debug |
| PM2 | Built for Node.js, supports process monitoring | Not ideal for Go applications |
| Docker | Isolated environment, easy deployment, scalable | More setup overhead, requires container knowledge |

When Should You Use Supervisor?

Use Supervisor when you want a non-containerized way to manage a Go service, with features like auto-restart and log management, without dealing with systemd’s complexity or Docker’s extra overhead.

Setup and Run a Simple Go API

Requirements

Before starting, make sure you have the following installed on your system:

- Go

Terminal window
$ go version
go version go1.24.0 linux/amd64

If not installed, download it from the official site.

- Supervisor

  • Ubuntu/Debian

    Terminal window
    $ sudo apt update
    $ sudo apt install supervisor -y
  • CentOS/RHEL

    Terminal window
    $ sudo yum install supervisor -y
  • Homebrew (macOS)

    Terminal window
    $ brew install supervisor

After installation, check whether Supervisor is running (on Linux with systemd; on macOS, use brew services start supervisor instead):

Terminal window
$ sudo systemctl status supervisor

If it’s not running, start and enable it:

Terminal window
$ sudo systemctl start supervisor
$ sudo systemctl enable supervisor

- Nginx

  • Ubuntu/Debian

    Terminal window
    $ sudo apt install nginx -y
  • CentOS/RHEL

    Terminal window
    $ sudo yum install nginx -y
  • Homebrew (macOS)

    Terminal window
    $ brew install nginx

After installation, check whether Nginx is running (on Linux with systemd; on macOS, use brew services start nginx instead):

Terminal window
$ sudo systemctl status nginx

If it’s not running, start and enable it:

Terminal window
$ sudo systemctl start nginx
$ sudo systemctl enable nginx

Initialize a New Go Project

First, create a new directory for the project and initialize a Go module:

Terminal window
$ cd /var/www/
$ mkdir go-api && cd go-api
Terminal window
$ go mod init example.com/go-api

This command creates a Go module named example.com/go-api, which helps manage dependencies.

Create a Simple API

Now, create a new file main.go and add the following code:

Terminal window
$ vim main.go
package main

import (
    "fmt"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "text/plain")
    fmt.Fprintln(w, "Simple Go API")
}

func main() {
    http.HandleFunc("/", handler)
    fmt.Println("Server started at :8080")
    http.ListenAndServe(":8080", nil)
}

Compile and run the Go server:

Terminal window
$ go run main.go

If successful, you should see this message in the terminal:

Terminal window
Server started at :8080

Now test the API using curl:

Terminal window
$ curl localhost:8080
Simple Go API

Create a Simple API with ASCII Text Response (Optional)

First, install the go-figure package:

Terminal window
$ go get github.com/common-nighthawk/go-figure

Now, modify main.go to generate an ASCII text response dynamically:

package main

import (
    "fmt"
    "net/http"

    "github.com/common-nighthawk/go-figure"
)

func handler(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "text/plain")
    asciiArt := figure.NewFigure("Simple Go API", "", true).String()
    fmt.Fprintln(w, asciiArt)
}

func main() {
    http.HandleFunc("/", handler)
    fmt.Println("Server started at :8080")
    http.ListenAndServe(":8080", nil)
}
Terminal window
$ curl localhost:8080
____ _ _ ____ _ ____ ___
/ ___| (_) _ __ ___ _ __ | | ___ / ___| ___ / \ | _ \ |_ _|
\___ \ | | | '_ ` _ \ | '_ \ | | / _ \ | | _ / _ \ / _ \ | |_) | | |
___) | | | | | | | | | | |_) | | | | __/ | |_| | | (_) | / ___ \ | __/ | |
|____/ |_| |_| |_| |_| | .__/ |_| \___| \____| \___/ /_/ \_\ |_| |___|
|_|

Running the API as a Background Service with Supervisor

Create a Supervisor Configuration for the Go API

Create a new Supervisor config file:

Terminal window
$ sudo vim /etc/supervisor/conf.d/go-api.conf

Add the following configuration:

Terminal window
[program:go-api]
process_name=%(program_name)s_%(process_num)02d
directory=/var/www/go-api
command=bash -c 'cd /var/www/go-api && ./main'
autostart=true
autorestart=true
user=www-data
redirect_stderr=true
stderr_logfile=/var/log/go-api.err.log
stdout_logfile=/var/log/go-api.out.log

Explanation:

directory=/var/www/go-api → The working directory of the Go API.

command=bash -c 'cd /var/www/go-api && ./main' → Runs the API.

autostart=true → Starts automatically on system boot.

autorestart=true → Restarts if the process crashes.

user=www-data → Runs as the www-data user (adjust as needed).

redirect_stderr=true → Redirects error logs to stdout.

stdout_logfile=/var/log/go-api.out.log → Standard output log file.

stderr_logfile=/var/log/go-api.err.log → Error log file.

Now, we need to build the Go API:

Terminal window
$ go build -o main .

Ensure the directory and binary have the correct permissions:

Terminal window
$ sudo chown -R www-data:www-data /var/www/go-api
$ sudo chmod 775 /var/www/go-api/main

Apply the Supervisor Configuration

Reload Supervisor and start the service:

Terminal window
$ sudo supervisorctl reread
$ sudo supervisorctl update
$ sudo supervisorctl start go-api:*

Check the service status:

Terminal window
$ sudo supervisorctl avail
go-api:go-api_00 in use auto 999:999
Terminal window
$ sudo supervisorctl status go-api:*
go-api:go-api_00 RUNNING pid 198867, uptime 0:01:52

Check Logs and Debugging

If the API is not running, check the logs:

Terminal window
$ cat /var/log/go-api.out.log
$ cat /var/log/go-api.err.log

Or use Supervisor’s built-in log viewer:

Terminal window
$ sudo supervisorctl tail -f go-api:go-api_00
==> Press Ctrl-C to exit <==
Server started at :8080

Setting Up Nginx as a Reverse Proxy for the API

Create a new configuration file:

Terminal window
$ sudo vim /etc/nginx/sites-available/go-api
Terminal window
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
    }

    error_log /var/log/nginx/go-api_error.log;
    access_log /var/log/nginx/go-api_access.log;
}

Create a symbolic link to enable the site:

Terminal window
$ sudo ln -s /etc/nginx/sites-available/go-api /etc/nginx/sites-enabled/

Test the configuration:

Terminal window
$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If the test is successful, restart Nginx:

Terminal window
$ sudo systemctl restart nginx

Now, you can access your Go API using:

  • Localhost (if running locally)
Terminal window
curl http://localhost
____ _ _ ____ _ ____ ___
/ ___| (_) _ __ ___ _ __ | | ___ / ___| ___ / \ | _ \ |_ _|
\___ \ | | | '_ ` _ \ | '_ \ | | / _ \ | | _ / _ \ / _ \ | |_) | | |
___) | | | | | | | | | | |_) | | | | __/ | |_| | | (_) | / ___ \ | __/ | |
|____/ |_| |_| |_| |_| | .__/ |_| \___| \____| \___/ /_/ \_\ |_| |___|
|_|
  • Server’s Public IP (if running on a VPS or remote server)
Terminal window
curl http://YOUR_SERVER_IP

Note: If you want to access your Go API using a custom domain instead of an IP address, you need to purchase a domain, configure its DNS to point to your server’s IP, and update your Nginx configuration accordingly. For better security, it’s recommended to set up HTTPS using Let’s Encrypt.
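As a sketch of that last step, assuming an Ubuntu/Debian server and a hypothetical domain example.com already pointed at your server, Certbot’s Nginx plugin can obtain and install the certificate. First change server_name _; in /etc/nginx/sites-available/go-api to your domain, then:

Terminal window
$ sudo apt install certbot python3-certbot-nginx -y
# requests a certificate for the (hypothetical) domain and rewrites the Nginx config for HTTPS
$ sudo certbot --nginx -d example.com
$ sudo systemctl reload nginx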

Conclusion

In this guide, we deployed a Go API using Supervisor to manage the process (ensuring it starts at boot and restarts on crashes) and Nginx as a reverse proxy to handle incoming requests. Thank you for reading, and good luck with your deployment! 🚀

Kubernetes 105: Create Kubernetes Cluster

Kubernetes Cover

Let’s start again. Now we will do some hands-on practice.

In this part of the Kubernetes series, we will explore how to create a Kubernetes cluster in different environments. Whether you’re running Kubernetes locally or in the cloud, understanding how to set up a cluster is fundamental to deploying and managing containerized applications efficiently.

We will cover three different ways to create a Kubernetes cluster:

- Kind (Kubernetes in Docker) - A lightweight way to run Kubernetes clusters locally for testing and development.

- K3D (K3S in Docker) - A more lightweight Kubernetes distribution, optimized for local development and CI/CD workflows.

- EKS (Amazon Elastic Kubernetes Service) - A managed Kubernetes service provided by AWS for running Kubernetes workloads in the cloud.

Each approach has its own use cases, advantages, and trade-offs. Let’s dive into each one and see how to set up a cluster.

Setting Up a Kubernetes Cluster with Kind

Kind Logo

Kind (Kubernetes in Docker) is one of the simplest ways to spin up a Kubernetes cluster for local development and testing. It runs Kubernetes clusters inside Docker containers and is widely used for CI/CD and development workflows.

Prerequisites

- Docker installed on your machine. (installation guide)

- KIND CLI installed. (installation guide)

- Kubectl CLI installed. (installation guide)

Create a Cluster with Kind

- Create a new Kind cluster:

Terminal window
$ kind create cluster --name kind-cluster
Creating cluster "kind-cluster" ...
Ensuring node image (kindest/node:v1.31.0) 🖼
Preparing nodes 📦
Writing configuration 📜
Starting control-plane 🕹
Installing CNI 🔌
Installing StorageClass 💾
Set kubectl context to "kind-kind-cluster"
You can now use your cluster with:
kubectl cluster-info --context kind-kind-cluster
Thanks for using kind! 😊
  • Check the cluster:
Terminal window
$ kubectl cluster-info --context kind-kind-cluster
Kubernetes control plane is running at https://127.0.0.1:43417
CoreDNS is running at https://127.0.0.1:43417/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

- List available nodes:

Terminal window
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-cluster-control-plane Ready control-plane 75s v1.31.0
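By default, Kind creates a single control-plane node. If you want to experiment with a multi-node topology, you can describe the cluster in a config file and pass it at creation time. A minimal sketch (the file name kind-config.yaml is just an example):

# kind-config.yaml — one control plane and two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker

Terminal window
$ kind create cluster --name kind-cluster --config kind-config.yaml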

Create Simple Deployment

- Use the kubectl create deployment command to define and start a deployment:

Terminal window
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

- Check deployment status using kubectl get deployment command:

Terminal window
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/1 1 0 29s

- Expose the deployment:

Terminal window
$ kubectl expose deployment nginx --port=80 --type=LoadBalancer
service/nginx exposed

- Verify the Pod status and then try to access Nginx in your browser (if the LoadBalancer stays pending, see the note after the screenshot below):

Terminal window
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-676b6c5bbc-wd87x 1/1 Running 0 12m

Access Nginx
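Note that a plain Kind cluster has no cloud controller to hand out an external IP, so a Service of type LoadBalancer usually stays in Pending. A quick workaround to reach Nginx from your machine is to port-forward the Service (a sketch; the local port 8080 is arbitrary):

Terminal window
$ kubectl port-forward svc/nginx 8080:80

Then open http://localhost:8080 in your browser.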

- Delete the cluster when no longer needed

Terminal window
$ kind delete cluster --name kind-cluster
Deleting cluster "kind-cluster" ...
Deleted nodes: ["kind-cluster-control-plane"]

Setting Up a Kubernetes Cluster with K3D

K3D Logo

K3D is a tool that allows you to run lightweight Kubernetes clusters using K3S inside Docker. It is a great choice for fast, local Kubernetes development.

Prerequisites

- Docker installed on your machine. (installation guide)

- K3D CLI installed. (installation guide)

- Kubectl CLI installed. (installation guide)

Create a Cluster with K3D

- Create a new K3D cluster:

Terminal window
$ k3d cluster create my-k3d-cluster
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-my-k3d-cluster'
INFO[0000] Created image volume k3d-my-k3d-cluster-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-my-k3d-cluster-tools'
INFO[0001] Creating node 'k3d-my-k3d-cluster-server-0'
INFO[0001] Creating LoadBalancer 'k3d-my-k3d-cluster-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.20.0.1 address
INFO[0001] Starting cluster 'my-k3d-cluster'
INFO[0001] Starting servers...
INFO[0001] Starting node 'k3d-my-k3d-cluster-server-0'
INFO[0008] All agents already running.
INFO[0008] Starting helpers...
INFO[0008] Starting node 'k3d-my-k3d-cluster-serverlb'
INFO[0016] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
INFO[0018] Cluster 'my-k3d-cluster' created successfully!
INFO[0018] You can now use it like this:
kubectl cluster-info

- Check the cluster status:

Terminal window
$ kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:46503
CoreDNS is running at https://0.0.0.0:46503/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:46503/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

- List available nodes:

Terminal window
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-my-k3d-cluster-server-0 Ready control-plane,master 2m33s v1.30.4+k3s1

Create Simple Deployment

- Use the kubectl create deployment command to define and start a deployment:

Terminal window
$ kubectl create deployment httpd --image=httpd

- Check deployment status using kubectl get deployment command:

Terminal window
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
httpd 0/1 1 0 45s

- Verify the Pod status:

Terminal window
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
httpd-56f946b8c8-84ww8 1/1 Running 0 9m11s

- Expose the deployment:

Terminal window
$ kubectl expose deployment httpd --port=80 --type=LoadBalancer

- Try to access it in your browser (see the port-mapping note below):

Access HTTPD
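With K3D, the easiest way to reach a LoadBalancer Service from the host is to map a host port to the cluster’s built-in load balancer when the cluster is created. A sketch (the host port 8080 is arbitrary, and it requires creating the cluster with the extra flag):

Terminal window
$ k3d cluster create my-k3d-cluster -p "8080:80@loadbalancer"

Requests to http://localhost:8080 are then forwarded to port 80 inside the cluster; alternatively, kubectl port-forward svc/httpd 8080:80 works on an existing cluster.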

- Delete the cluster when no longer needed:

Terminal window
$ k3d cluster delete my-k3d-cluster
INFO[0000] Deleting cluster 'my-k3d-cluster'
INFO[0001] Deleting cluster network 'k3d-my-k3d-cluster'
INFO[0001] Deleting 1 attached volumes...
INFO[0001] Removing cluster details from default kubeconfig...
INFO[0001] Removing standalone kubeconfig file (if there is one)...
INFO[0001] Successfully deleted cluster my-k3d-cluster!

Setting Up a Kubernetes Cluster on AWS EKS

AWS EKS Logo

Amazon Elastic Kubernetes Service (EKS) is a fully managed Kubernetes service on AWS, designed for running production workloads.

Prerequisites

- AWS CLI installed and configured. (installation guide)

- EKSCTL CLI installed. (installation guide)

- Kubectl CLI installed. (installation guide)

Create a cluster on EKS

To create a Kubernetes cluster in AWS, you can use the AWS Console (dashboard) or the eksctl CLI. For this guide, we will use eksctl.

We will provision an EKS cluster with two t4g.small nodes in the us-east-1 region, making it ready to run Kubernetes workloads.

Terminal window
$ eksctl create cluster \
--name cluster-1 \
--region us-east-1 \
--node-type t4g.small \
--nodes 2 \
--nodegroup-name node-group-1
2025-02-01 19:52:35 [ℹ] eksctl version 0.202.0
2025-02-01 19:52:35 [ℹ] using region us-east-1
2025-02-01 19:52:37 [ℹ] setting availability zones to [us-east-1c us-east-1f]
...
2025-02-01 20:02:04 [ℹ] creating addon
2025-02-01 20:02:04 [ℹ] successfully created addon
2025-02-01 20:02:05 [ℹ] creating addon
2025-02-01 20:02:06 [ℹ] successfully created addon
2025-02-01 20:02:07 [ℹ] creating addon
2025-02-01 20:02:07 [ℹ] successfully created addon
"us-east-1" region is ready

- Access AWS console, navigate to the EKS service and you can see the cluster is successfully created.

After cluster creation

- List available nodes:

Terminal window
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-xx-yy.ec2.internal Ready <none> 17m v1.30.8-eks-aeac579
ip-192-168-xx-yy.ec2.internal Ready <none> 17m v1.30.8-eks-aeac57

Create Simple Deployment

- Use the kubectl create deployment command to define and start a deployment:

Terminal window
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

- Check deployment status using kubectl get deployment command:

Terminal window
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 23s

- Verify the Pod status:

Terminal window
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-bf5d5cf98-9dld5 1/1 Running 0 43s

- Expose the service:

Terminal window
$ kubectl expose deployment nginx --type=LoadBalancer --port=80 --name=nginx-service
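On EKS, a Service of type LoadBalancer provisions an actual AWS load balancer, so before opening the browser, fetch its DNS name from the EXTERNAL-IP column (it can take a minute or two to appear):

Terminal window
$ kubectl get svc nginx-service

The EXTERNAL-IP column shows an ELB hostname (something like xxxx.us-east-1.elb.amazonaws.com); open it on port 80.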

- Try to access using the browser:

Access the service

- Delete the cluster when no longer needed:

Terminal window
$ eksctl delete cluster --name cluster-1 --region us-east-1
2025-02-01 20:51:59 [ℹ] deleting EKS cluster "cluster-1"
2025-02-01 20:52:02 [ℹ] will drain 0 unmanaged nodegroup(s) in cluster "cluster-1"
2025-02-01 20:52:02 [ℹ] starting parallel draining, max in-flight of 1
2025-02-01 20:52:04 [ℹ] deleted 0 Fargate profile(s)
2025-02-01 20:52:13 [✔] kubeconfig has been updated
2025-02-01 20:52:13 [ℹ] cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2025-02-01 20:52:56 [ℹ]
...
2025-02-01 21:02:00 [ℹ] waiting for CloudFormation stack "eksctl-cluster-1-nodegroup-node-group-1"
2025-02-01 21:02:01 [ℹ] will delete stack "eksctl-cluster-1-cluster"
2025-02-01 21:02:04 [✔] all cluster resources were deleted

Conclusion

Setting up a Kubernetes cluster is the first step in running containerized applications at scale. In this guide, we’ve explored three different ways to create a Kubernetes cluster and do a simple deployment: using Kind and K3D for local development and using EKS for cloud-based deployments. Each method has its own advantages depending on your use case.

Thanks for reading this post. Stay tuned!

References:

- KUBERNETES UNTUK PEMULA. https://github.com/ngoprek-kubernetes/buku-kubernetes-pemula.

- How do I install AWS EKS CLI (eksctl)?. https://learn.arm.com/install-guides/eksctl

Kubernetes 104: Controllers

Kubernetes Cover

Let’s start again. Now I’m going to talk about Controllers in Kubernetes. In Kubernetes, a Controller is like a cluster’s brain, constantly working to ensure the system maintains its desired state. By monitoring objects such as Pods, Deployments, or DaemonSets, Controllers automate tasks and handle changes dynamically. Understanding Controllers is key to grasping how Kubernetes orchestrates and manages workloads seamlessly.

Common Kubernetes Controllers

Here are some of the most commonly used Controllers in Kubernetes:

1. Deployment

Deployments manage updates to Pods and ReplicaSets declaratively by transitioning the current state to the desired state step-by-step. They can create new ReplicaSets, adopt existing resources, or remove old Deployments.

Deployment Diagram

Common uses for Deployments include:

a. Releasing a ReplicaSet and monitoring its status.

b. Updating Pod specifications to declare a new desired state.

c. Scaling up to handle increased load.

d. Rolling back to a previous version if the current state is unstable.

e. Cleaning up unused ReplicaSets.
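As a concrete illustration, here is a minimal Deployment manifest that keeps three Nginx replicas running (a sketch, not tied to any specific cluster; apply it with kubectl apply -f deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                # desired number of Pods
  selector:
    matchLabels:
      app: nginx
  template:                  # Pod template managed by the Deployment
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80

The Deployment controller creates a ReplicaSet from this spec, and the ReplicaSet in turn creates and maintains the Pods.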

2. ReplicaSet

ReplicaSets (RS) function as controllers in Kubernetes, responsible for maintaining a consistent number of running Pods for a specific workload. Acting as the mechanism behind the scenes, the ReplicaSet controller continuously monitors the state of the Pods and ensures the desired replica count is maintained. If a Pod crashes, is evicted, or fails for any reason, the ReplicaSet controller swiftly creates new Pods to restore the desired state, ensuring resilience and uninterrupted service.

In practical use, ReplicaSets are not typically managed directly by users. Instead, they are controlled through Deployments, which leverage the ReplicaSet controller while providing additional features such as rolling updates, rollbacks, and declarative workload management. This abstraction allows users to benefit from the reliability and scalability of ReplicaSet controllers without dealing with their complexities directly.

ReplicaSet Diagram

3. DaemonSet

A DaemonSet (DS) ensures that all nodes (or a selected subset of nodes) in a cluster run a copy of a particular Pod. When a new node is added, the DaemonSet automatically creates a Pod on that node. Conversely, when a node is removed, its Pod is cleaned up by the garbage collector. Deleting the DaemonSet removes all Pods it created.

DaemonSet Diagram

Common uses of DaemonSet:

1. Running storage daemons across nodes, such as Glusterd or Ceph.

2. Running log collection daemons across nodes, such as Fluentd or LogStash.

3. Running node monitoring daemons, such as Prometheus Node Exporter, Flowmill, or New Relic Agent.

DaemonSet is ideal for tasks that require processes to run on every node, such as log collection, monitoring, or providing local volumes.
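A minimal sketch of such a DaemonSet, here running the Prometheus Node Exporter on every node (image and port are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
        - name: node-exporter
          image: prom/node-exporter
          ports:
            - containerPort: 9100   # default node-exporter metrics port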

4. StatefulSet

A StatefulSet is a Kubernetes controller used for managing stateful applications. Unlike Deployments, which focus on stateless workloads, StatefulSet is designed for applications that require persistent identity and stable storage. It ensures that each Pod it manages has a unique, stable network identity and maintains a strict order during creation, scaling, or deletion.

Key Features of StatefulSet:

1. Stable Network Identity: Each Pod gets a unique and persistent DNS name (e.g., pod-0, pod-1), which remains consistent even after rescheduling.

2. Ordered Pod Deployment and Scaling: Pods are created and scaled in a sequential order. For example, pod-0 must be created before pod-1, and the same applies during deletion.

3. Persistent Storage: StatefulSet works closely with PersistentVolumeClaim (PVC). Each Pod gets a dedicated PersistentVolume that remains intact even after the Pod is deleted or rescheduled.

StatefulSet Diagram

Common Use Cases:

1. Databases like MySQL, PostgreSQL, and MongoDB, where stable storage and network identity are critical.

2. Distributed Systems like Cassandra, Kafka, or ZooKeeper, where maintaining order and state is essential.

3. Caching Systems like Redis, requiring predictable storage and replication across nodes.
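A trimmed-down sketch showing the parts that distinguish a StatefulSet from a Deployment: the headless Service reference and the per-Pod volume claim (values are illustrative; a matching headless Service named db is assumed to exist):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                  # headless Service that gives Pods stable DNS names (db-0, db-1, ...)
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: example       # for demonstration only; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:            # each Pod gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi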

5. Job

A Job is a Kubernetes controller designed to manage tasks that run to completion. Unlike Deployments or StatefulSets, which manage long-running or stateful applications, Jobs are used for short-lived workloads that need to be executed only once or a specific number of times.

Key Features of a Job:

1. Ensures Completion: A Job creates one or more Pods to perform a task and ensures that the task is completed successfully. If a Pod fails, the Job controller automatically creates a replacement until the task succeeds.

2. Parallelism: Jobs support parallel execution, allowing multiple Pods to run concurrently, controlled by the parallelism and completions parameters.

3. Retries: Jobs retry failed Pods until the task is successful or a specified backoff limit is reached.

Common Use Cases:

- Batch Processing: Data transformation, ETL pipelines, or video encoding.

- Database Operations: Running migrations, backups, or clean-up scripts.

- One-Time Tasks: Performing diagnostics, generating reports, or initializing configurations.
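A minimal sketch of a Job that runs a single one-off task and retries failed Pods up to four times (image and command are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  completions: 1           # the task must finish successfully once
  backoffLimit: 4          # retry failed Pods up to 4 times
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: hello
          image: busybox
          command: ["sh", "-c", "echo job done"]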

Thank you for reading this post.😀

References:

- KUBERNETES UNTUK PEMULA. https://github.com/ngoprek-kubernetes/buku-kubernetes-pemula.

- Kubernetes Documentation: Jobs. https://kubernetes.io/docs/concepts/workloads/controllers/job.

- Kubernetes Controllers. https://www.uffizzi.com/kubernetes-multi-tenancy/kubernetes-controllers.

- Kubernetes: DaemonSet. https://opstree.com/blog/2021/12/07/kubernetes-daemonset.

- Kubernetes StatefulSet vs Kubernetes Deployment. https://devtron.ai/blog/deployment-vs-statefulset.

Kubernetes 103: Object

Kubernetes Cover

Let’s start again. Now I’m going to talk about objects in Kubernetes. In Kubernetes, an object represents a record of intent, where you declare what you want the cluster to do. The Kubernetes control plane works continuously to ensure that the current state of your system matches the desired state described by these objects.

Common Kubernetes Objects

Here are some of the most commonly used objects in Kubernetes:

1. Pod

A Pod is a group of one or more containers that share storage, networking, and a defined runtime configuration. Containers within a pod are scheduled and deployed together, operating in the same execution context.

Containers on Pod

A pod acts as a logical host for tightly connected containers, enabling seamless communication via localhost and standard IPC mechanisms like SystemV semaphores or POSIX shared memory. Containers in different pods, however, have unique IP addresses and communicate using pod IPs, ensuring clear isolation while supporting inter-pod networking.

Like containers in an application, pods are considered relatively ephemeral entities. Their lifecycle involves creation, assignment of a unique UID, scheduling to a node, and remaining there until they are stopped, restarted, or deleted. You can see the pod life cycle from the image below.

Pod Lifecycle

If a node fails, all pods on that node are marked for deletion after a timeout. A given pod (identified by its unique UID) is never rescheduled to another node; instead, it is replaced by a new pod with a different UID.
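For reference, a minimal Pod manifest looks like this (a sketch; in practice Pods are usually created indirectly through controllers such as Deployments):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80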

2. Service

A Service in Kubernetes is an abstraction that defines a logical set of Pods and the policies for accessing them, often referred to as microservices. You can see the service balancing traffic across multiple pods in the image below.

Kubernetes Service

For example, imagine a backend that provides image-processing functionality with three replicas. These replicas are identical, so the frontend doesn’t need to know which backend instance it uses. Even if the backend Pods change over time, the frontend doesn’t need to manage or track the list of current backends.

A Service decouples this complexity by providing a stable interface, allowing clients to interact with Pods without worrying about their lifecycle or specific details.

For applications running on Kubernetes, the platform provides a simple API endpoint that is continuously updated to reflect the current state of the Pods within a service.

For non-native applications, Kubernetes offers a virtual IP-based bridge that routes traffic to the backend Pods of the service. This ensures seamless integration and reliable access, regardless of changes in the underlying Pods.
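A sketch of a Service that selects backend Pods by label and load-balances traffic to them (names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: image-backend
spec:
  selector:
    app: image-backend     # traffic is routed to Pods carrying this label
  ports:
    - port: 80             # port the Service exposes
      targetPort: 8080     # port the containers listen on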

3. Volume

A Kubernetes Volume provides a way for containers to access storage beyond their ephemeral lifecycle. Unlike a container’s local filesystem, which is destroyed when the container stops, a volume ensures data persists as long as the Pod using it exists. Kubernetes supports multiple volume types, such as emptyDir, hostPath, configMap, secret, and network-based storage like NFS or cloud provider-specific volumes.

Volumes can be shared among containers within the same Pod, enabling them to collaborate on data. However, when a Pod is deleted, the associated volume typically goes with it—unless you’re using Persistent Volumes (PV).

Persistent Volumes (PV) and Persistent Volume Claims (PVC)

A PV is a piece of storage provisioned in the cluster, either statically or dynamically. It is an abstract representation of storage resources like disks, network storage, or cloud storage, and exists independently of any specific Pod.

A PVC is a request for storage by a Pod. Users specify the required size, access modes (e.g., ReadWriteOnce, ReadOnlyMany), and storage class in the PVC. The cluster automatically binds the PVC to a suitable PV, providing the requested storage.

There are two methods to provide Persistent Volumes (PV)

1. Static

- The cluster administrator manually creates PVs by defining them in YAML manifests.

- These PVs are available for binding with any PVC that matches their configuration.

2. Dynamic

- Kubernetes automatically provisions PVs based on the PVCs submitted by users.

- The cluster uses a StorageClass to determine the storage backend and provision the appropriate PV.

- This method simplifies administration by eliminating the need for pre-created PVs.
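As an example of the dynamic path, a PVC that asks the cluster for 1 GiB of ReadWriteOnce storage would look roughly like this (the StorageClass name standard is hypothetical; use one that exists in your cluster):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # hypothetical; must match a StorageClass in your cluster
  resources:
    requests:
      storage: 1Gi

A Pod then mounts this storage by referencing persistentVolumeClaim.claimName: data-claim in its volumes section.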

4. Namespace

A namespace in Kubernetes is a way to divide a cluster into virtual sub-clusters, providing logical isolation between resources. It is useful for organizing resources in environments with multiple teams, projects, or stages (e.g., development, staging, production).

By default, Kubernetes provides namespaces like default, kube-system, and kube-public. Users can create custom namespaces to segregate workloads and manage resource quotas, access controls, and policies independently for each namespace.

Namespaces are ideal for multi-tenant environments or to avoid naming collisions in large clusters. However, they don’t provide hard security boundaries and are primarily a tool for organizational purposes.

References:

- KUBERNETES UNTUK PEMULA. https://github.com/ngoprek-kubernetes/buku-kubernetes-pemula.

- Kubernetes Objects Guide. https://phoenixnap.com/kb/kubernetes-objects.

K3D: Getting Started with ArgoCD

Cover Image

K3D: Getting Started with ArgoCD

Intro

ArgoCD is a GitOps tool with a straightforward but powerful objective: to declaratively deploy applications to Kubernetes by managing application resources directly from version control systems, such as Git repositories. Every commit to the repository represents a change, which ArgoCD can apply to the Kubernetes cluster either manually or automatically. This approach ensures that deployment processes are fully controlled through version-controlled files, fostering an explicit and auditable release process.

For example, releasing a new application version involves updating the image tag in the resource files and committing the changes to the repository. ArgoCD syncs with the repository and seamlessly deploys the new version to the cluster.

Since ArgoCD itself operates on Kubernetes, it is straightforward to set up and integrates seamlessly with lightweight Kubernetes distributions like K3s. In this tutorial, we will demonstrate how to configure a local Kubernetes cluster using K3D and deploy applications with ArgoCD, utilizing the argocd-example-apps repository as a practical example.

Prerequisites

Before we begin, ensure you have the following installed:

- Docker

- Kubectl

- K3D

- ArgoCD CLI

Step 1: Set Up a K3D Cluster

Create a new Kubernetes cluster using K3D:

Terminal window
$ k3d cluster create argocluster --agents 2
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-argocluster'
INFO[0000] Created image volume k3d-argocluster-images
INFO[0000] Starting new tools node...
INFO[0000] Starting node 'k3d-argocluster-tools'
INFO[0001] Creating node 'k3d-argocluster-server-0'
INFO[0001] Creating node 'k3d-argocluster-agent-0'
INFO[0001] Creating node 'k3d-argocluster-agent-1'
INFO[0001] Creating LoadBalancer 'k3d-argocluster-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 172.18.0.1 address
INFO[0001] Starting cluster 'argocluster'
INFO[0001] Starting servers...
INFO[0002] Starting node 'k3d-argocluster-server-0'
INFO[0009] Starting agents...
INFO[0009] Starting node 'k3d-argocluster-agent-0'
INFO[0009] Starting node 'k3d-argocluster-agent-1'
INFO[0017] Starting helpers...
INFO[0017] Starting node 'k3d-argocluster-serverlb'
INFO[0024] Injecting records for hostAliases (incl. host.k3d.internal) and for 4 network members into CoreDNS configmap...
INFO[0026] Cluster 'argocluster' created successfully!
INFO[0026] You can now use it like this:
kubectl cluster-info

Verify that your cluster is running:

Terminal window
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-argocluster-agent-0 Ready <none> 63s v1.30.4+k3s1
k3d-argocluster-agent-1 Ready <none> 62s v1.30.4+k3s1
k3d-argocluster-server-0 Ready control-plane,master 68s v1.30.4+k3s1

Step 2: Install ArgoCD

Install ArgoCD in your K3D cluster:

Terminal window
$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
serviceaccount/argocd-dex-server created
serviceaccount/argocd-notifications-controller created
serviceaccount/argocd-redis created
serviceaccount/argocd-repo-server created
serviceaccount/argocd-server created
role.rbac.authorization.k8s.io/argocd-application-controller created
role.rbac.authorization.k8s.io/argocd-applicationset-controller created
role.rbac.authorization.k8s.io/argocd-dex-server created
role.rbac.authorization.k8s.io/argocd-notifications-controller created
role.rbac.authorization.k8s.io/argocd-redis created
role.rbac.authorization.k8s.io/argocd-server created
...

Check the status of ArgoCD pods:

Terminal window
$ kubectl get pods -n argocd
NAME READY STATUS RESTARTS AGE
argocd-application-controller-0 1/1 Running 0 29m
argocd-applicationset-controller-684cd5f5cc-78cc8 1/1 Running 0 29m
argocd-dex-server-77c55fb54f-bw956 1/1 Running 0 29m
argocd-notifications-controller-69cd888b56-g7z4r 1/1 Running 0 29m
argocd-redis-55c76cb574-72mdh 1/1 Running 0 29m
argocd-repo-server-584d45d88f-2mdlc 1/1 Running 0 29m
argocd-server-8667f8577-prx6s 1/1 Running 0 29m

Expose the ArgoCD API server locally, then try accessing the dashboard:

Terminal window
$ kubectl port-forward svc/argocd-server -n argocd 8080:443

ArgoCD

Step 3: Configure ArgoCD

Log in to ArgoCD

Retrieve the initial admin password:

Terminal window
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d

Log in using the admin username and the password above.

Log in into ArgoCD Dashboard

Connect a Git Repository

Just clone the argocd-example-apps repository:

Terminal window
git clone https://github.com/argoproj/argocd-example-apps.git

Specify the ArgoCD server address in your CLI configuration:

Terminal window
$ argocd login localhost:8080
WARNING: server certificate had error: tls: failed to verify certificate: x509: certificate signed by unknown authority. Proceed insecurely (y/n)? y
Username: admin
Password:
'admin:login' logged in successfully
Context 'localhost:8080' updated

Create a new ArgoCD application using the repository:

Terminal window
$ argocd app create example-app \
--repo https://github.com/argoproj/argocd-example-apps.git \
--path ./guestbook \
--dest-server https://kubernetes.default.svc \
--dest-namespace default
application 'example-app' created
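The same application can also be defined declaratively as an Application resource and applied with kubectl, which fits the GitOps model even more naturally. A sketch equivalent to the CLI command above:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: default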

Sync the application:

Terminal window
$ argocd app sync example-app
TIMESTAMP GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
2025-01-10T11:31:11+07:00 Service default guestbook-ui OutOfSync Missing
2025-01-10T11:31:11+07:00 apps Deployment default guestbook-ui OutOfSync Missing
2025-01-10T11:31:11+07:00 Service default guestbook-ui Synced Healthy
2025-01-10T11:31:11+07:00 Service default guestbook-ui Synced Healthy service/guestbook-ui created
2025-01-10T11:31:11+07:00 apps Deployment default guestbook-ui OutOfSync Missing deployment.apps/guestbook-ui created
2025-01-10T11:31:11+07:00 apps Deployment default guestbook-ui Synced Progressing deployment.apps/guestbook-ui created
...
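Syncing by hand works, but as mentioned in the intro, ArgoCD can also apply changes automatically. To switch the application to an automated sync policy (optionally adding --auto-prune and --self-heal):

Terminal window
$ argocd app set example-app --sync-policy automated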

Step 4: Verify the Deployment

Check that the application is deployed successfully:

Terminal window
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
guestbook-ui-649789b49c-zwjt8 1/1 Running 0 5m36s

Access the deployed application by exposing it via a NodePort or LoadBalancer:

Terminal window
$ kubectl port-forward svc/guestbook-ui 8081:80
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80

Guestbook UI

Dashboard App List

Dashboard App

Conclusion

In this tutorial, you’ve set up a local Kubernetes cluster using K3D and deployed applications with ArgoCD. This setup provides a simple and powerful way to practice GitOps workflows locally. By leveraging tools like ArgoCD, you can ensure your deployments are consistent, auditable, and declarative. Happy GitOps-ing!

References

- GitOps on a Laptop with K3D and ArgoCD. https://www.sokube.io/en/blog/gitops-on-a-laptop-with-k3d-and-argocd-en.

- Take Argo CD for a spin with K3s and k3d. https://www.bekk.christmas/post/2020/13/take-argo-cd-for-a-spin-with-k3s-and-k3d.

- Application Deploy to Kubernetes with Argo CD and K3d. https://yashguptaa.medium.com/application-deploy-to-kubernetes-with-argo-cd-and-k3d-8e29cf4f83ee