
Blog

3 posts with the tag “Blog”

Kubernetes 101: Introduction

Hi, welcome to the wonderful world of DevOps. This is a post I wrote while learning Kubernetes. So, let’s start the journey! 😎

πŸ“„ What is Kubernetes?

Since its release by Google in 2014, Kubernetes has become one of the most popular technologies in modern cloud infrastructure. It is an open-source orchestrator for containerized applications.


The name Kubernetes comes from the Greek for “helmsman” or “pilot”. Kubernetes is an open-source platform for managing application workloads, providing declarative configuration and automation. With services, support, and tools now widely available, it has a large and rapidly growing ecosystem.

Google created Kubernetes based on more than a decade of experience running workloads at scale, combined with the best ideas from the community; see Large-scale cluster management at Google with Borg for more.

By the way, Kubernetes is not a monolithic system; rather, it is a set of pluggable building blocks for constructing the platform developers want, with flexibility as a core design principle.

⚑ Kubernetes Features

There are many features of Kubernetes:

- Service discovery and load balancing: Kubernetes can expose a container using a DNS name or an IP address, and can load-balance and distribute network traffic across containers.

- Storage orchestration: Kubernetes lets you automatically mount a storage system of your choice, whether on-premises storage or a cloud service.

- Automated rollouts and rollbacks: Kubernetes changes the actual state to the user’s desired state at a controlled rate. For example, it can automatically create new containers for a deployment, remove the old containers, and adopt their resources into the new ones.

- Automatic bin packing: Kubernetes allocates the memory (RAM) and CPU that each container needs and fits containers onto nodes to make efficient use of resources.

- Self-healing: Kubernetes restarts failed containers, replaces containers, and shuts down containers that do not respond or are not ready to serve users.

- Managing secrets and configuration: Kubernetes stores and manages sensitive information, such as passwords, OAuth tokens, and SSH keys, so this information can be updated without rebuilding container images and without exposing existing secrets.

- PaaS-like features: Kubernetes provides features such as deployment, scaling, load balancing, logging, and monitoring, even though it operates at the container level rather than the hardware level.
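Several of these features flow from one declarative model: you describe the desired state, and Kubernetes reconciles the actual state toward it. As a minimal sketch (the name hello-app and the image nginx:1.25 are my own hypothetical examples, not from this post), a Deployment that keeps three replicas running and rolls out image changes at a controlled rate might look like:

```yaml
# Hypothetical example; apply with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app              # hypothetical name
spec:
  replicas: 3                  # desired state: keep three pods running (self-healing)
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello
          image: nginx:1.25    # changing this tag triggers an automated rollout
          resources:
            requests:          # hints used for automatic bin packing
              cpu: 100m
              memory: 128Mi
```

Editing replicas or image and re-applying the file is all it takes; Kubernetes performs the rollout, and can roll back if needed.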

😌 Which features are not available?


There are some things Kubernetes does not include or provide, for example:

- Kubernetes does not limit the types of applications supported. It aims to support an extremely diverse range of workloads (including stateless, stateful, and data-processing workloads); as long as an application can run in a container, it can run on Kubernetes.

- Kubernetes does not provide a CI/CD workflow mechanism: it does not deploy source code and does not build your application.

- Kubernetes does not provide application-level services such as middleware (e.g. message buses), data-processing frameworks (e.g. Spark), databases (e.g. MySQL), caches, or cluster storage systems (e.g. Ceph) as built-in services. All of these components can, however, run on Kubernetes.

- Kubernetes does not provide or require a configuration language such as Jsonnet; it provides a declarative API that can be targeted by arbitrary forms of declarative specification.

- Kubernetes does not provide or mandate comprehensive machine configuration, maintenance, management, or self-healing systems.


Thank you for reading this post.πŸ˜€

Kubernetes 102: Kubernetes & Containerization

Okay, let’s start again. Kubernetes and containers are two things that cannot be separated from each other.

Understanding containers is essential in the context of Kubernetes, because Kubernetes is an orchestration system for managing them. Knowing how containers work will help you understand how Kubernetes organizes and distributes containerized applications, optimizes resource usage, and ensures isolation and failure recovery.

πŸ€” Why Container?


The old way of deploying an application was to install it on a machine (a VM) and then install the various libraries and dependencies it needs using a package manager. This entangles the application’s executables, configuration, and dependencies with the host system, and repeating it for every machine takes a lot of time.


The new way to overcome this problem is to deploy applications in containers. Containers are virtualization at the operating-system level, not at the hardware level.

Containers are isolated from each other and from the machine they run on. A process in one container cannot see the processes in another container, because each container has its own filesystem.

There are advantages and disadvantages when using containers:

βž• Pros

- Portability, containers encapsulate all dependencies and configurations, allowing applications to run consistently across different environments.

- Efficiency, containers share the host OS kernel, making them more lightweight and resource-efficient compared to virtual machines.

- Scalability, containers can be easily scaled up or down to handle varying loads, and orchestration tools like Kubernetes automate this process.

- Isolation, containers provide process and resource isolation, enhancing security and stability by limiting the impact of failures to individual containers.

βž– Cons

- Complexity, managing containerized applications can be complex, especially at scale, requiring robust orchestration and monitoring tools.

- Security, containers share the host OS kernel, which can pose security risks if a vulnerability in the kernel is exploited.

- Compatibility, not all applications are suitable for containerization, especially those with complex dependencies or those that require direct access to hardware.

πŸ“¦ Container Architecture

Container Architecture

Containers have several main components you need to know about:

1. Container Runtime

A container runtime is a software component responsible for running containers. It provides the tools and services necessary to create, start, stop, and manage the lifecycle of containers. Container runtimes ensure that containers are isolated from each other and from the host system while sharing the host operating system kernel. There are various runtimes, such as Docker, containerd, and CRI-O, along with other implementations that support the Kubernetes CRI (Container Runtime Interface).

2. Container Image

A container image is a lightweight, standalone, executable software package that includes everything needed to run a piece of software, including code, runtime, system tools, libraries, and settings. Container images are used to create containers.

3. Application Container

An application container is a running instance of a container image. When your code changes, you build a new image that includes those changes, then re-create and re-run the container from the new image.

☸️ Kubernetes Architecture

A Kubernetes cluster consists of worker nodes that run applications in containers; each cluster has at least one worker node. The worker nodes host pods, which are the components of the application workload. The control plane manages the worker nodes and the pods in the cluster.
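To make the node/pod relationship concrete, the smallest schedulable unit is described by a Pod manifest. Here is a minimal sketch (the pod and container names, and the nginx image, are hypothetical examples of mine):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # hypothetical pod name
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.25    # the kubelet asks the container runtime to pull and run this image
      ports:
        - containerPort: 80
```

The control plane decides which worker node runs this pod; the kubelet on that node then keeps it running.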

Kubernetes Architecture

πŸŽ› Control Plane Components

1. Kube-apiserver

The control-plane component that exposes the Kubernetes API. It is the front end of the control plane and is designed to scale horizontally.

2. Etcd

Etcd is a consistent, highly available key-value store used as the backing store for Kubernetes cluster data, so you need to pay attention to how it is backed up in your clusters.

3. Kube-scheduler

The Kube scheduler is a core component of Kubernetes, responsible for assigning pods to nodes within a cluster. It ensures efficient resource utilization and adherence to various scheduling policies by filtering out nodes that don’t meet the pod’s requirements, scoring the remaining nodes based on criteria such as resource availability and affinity rules, and then binding the pod to the highest-scoring node. This process ensures that pods are placed on appropriate nodes, maintaining a balanced distribution of resources and adhering to constraints and priorities.
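The filtering and scoring described above are driven by hints in the pod spec itself. As a hedged sketch (the disktype: ssd label and the pod name are hypothetical examples, not a required convention):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod         # hypothetical name
spec:
  nodeSelector:
    disktype: ssd          # filtering: only nodes carrying this label remain candidates
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sleep", "3600"]
      resources:
        requests:          # filtering: nodes lacking this much free capacity are excluded
          cpu: 500m
          memory: 256Mi
```

Among the nodes that pass both filters, the scheduler scores the remainder (resource balance, affinity rules) and binds the pod to the winner.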

4. Kube-controller-manager

The Kube controller manager is a key component of Kubernetes, responsible for running various controllers that regulate the state of the cluster. It includes controllers that handle tasks such as node management, replication, and endpoint updates.

5. Cloud-controller-manager

The cloud controller manager is a component in Kubernetes that integrates the cluster with the underlying cloud services. It runs controllers specific to cloud environments that handle tasks such as node lifecycle, route management, and service load balancers.

πŸ’» Node Components

1. Kubelet

Kubelet is a critical agent that runs on each node in a Kubernetes cluster and is responsible for managing the state of the pods on that node. It ensures that the containers described in the pod specifications are running and healthy, by interacting with the container runtime.

2. Kube-proxy

Kube-proxy is a network component in Kubernetes that runs on each node and is responsible for maintaining network rules and facilitating communication between services. It handles traffic routing to ensure that requests are properly routed to the appropriate pods, supporting both internal and external service access. Kube-proxy uses methods such as iptables, IPVS, or userspace proxying to manage and balance network traffic, ensuring reliable and efficient connectivity within the Kubernetes cluster.

3. Container runtime

A container runtime is software that manages the lifecycle of containers, including creating, starting, stopping, and deleting them. It provides the tools and services necessary to run containerized applications, ensuring that they are isolated from each other and the host system. Popular container runtimes include Docker, containerd, rktlet, and CRI-O. In Kubernetes, the container runtime interfaces with the kubelet to manage containers as part of the orchestration of the cluster.

🍨 Addon Components

Addons are pods and services that implement features required by the cluster.

1. DNS

DNS, or Domain Name System, is an important Kubernetes add-on that provides service discovery and name resolution within a cluster. It allows pods to communicate with each other, and with external services, by translating human-readable service names into IP addresses.
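For example, given a Service like the hypothetical sketch below, pods in the same namespace can reach it simply as my-service, and pods elsewhere can use the fully qualified name my-service.default.svc.cluster.local:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service         # resolvable in-cluster via DNS
  namespace: default
spec:
  selector:
    app: web               # traffic is routed to pods carrying this label
  ports:
    - protocol: TCP
      port: 80             # the port the service exposes
      targetPort: 8080     # the port on the selected pods
```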

2. Web UI

Web UI add-ons in Kubernetes are additional components that provide graphical interfaces for managing and monitoring the cluster. These tools, such as the Kubernetes Dashboard, provide an easy-to-use web-based interface that allows administrators to interact with the cluster, view resource utilization, manage deployments, and troubleshoot issues.

3. Container Resource Monitoring

Container resource monitoring addons are tools or services used in Kubernetes to track and analyze container resource usage, such as CPU, memory, and disk I/O. These add-ons collect time-series metrics from running containers and provide insight into their performance and resource consumption, improving resource allocation, scaling decisions, and troubleshooting.

4. Cluster-level logging

Cluster-level logging is an add-on component in Kubernetes that collects and aggregates log data from all nodes and pods in the cluster. It helps centralize logs, making it easier to monitor, analyze, and troubleshoot applications and infrastructure issues.


I think that’s all for this post; next, maybe I will explain Kubernetes Objects and the other components.

Install Docker using Ansible

Setup

First, you need to install Ansible. To install it on your operating system, just follow this link: installation guide.

After installation, create a new directory called ansible-docker.

$ mkdir ansible-docker && cd ansible-docker

Create a new file called ansible.cfg for the Ansible configuration, and define the inventory file name in it.

[defaults]
inventory = hosts
host_key_checking = True
deprecation_warnings = False
collections = ansible.posix, community.general

Then create a new file called hosts, matching the inventory name defined in ansible.cfg.

[example-server]
0.0.0.0 ansible_user=root

NB: Don’t forget to change the IP address and the user to your own.

After setting up the Ansible configuration and inventory file, create a YAML file called playbook.yml:

---
- name: Setup Docker on Ubuntu Server 22.04
  hosts: all
  become: true
  remote_user: root
  roles:
    - config
    - docker

Then create the role directories:

  • config: in this directory, create a subdirectory called tasks. Inside it, create a YAML file called main.yml that updates the apt cache and installs the required dependencies.
---
- name: Update & upgrade
  ansible.builtin.apt:
    name: aptitude
    state: present
    update_cache: true

- name: Install dependencies
  ansible.builtin.apt:
    name:
      - net-tools
      - apt-transport-https
      - ca-certificates
      - curl
      - software-properties-common
      - python3-pip
      - virtualenv
      - python3-setuptools
      - gnupg-agent
      - autoconf
      - dpkg-dev
      - file
      - g++
      - gcc
      - libc-dev
      - make
      - pkg-config
      - re2c
      - wget
    state: present
    update_cache: true
  • docker: in this directory, create two subdirectories called tasks and templates.

In the tasks directory, create a new file called main.yml. This file contains the Docker installation, the Docker Compose installation, and the private registry setup.

---
- name: Add Docker GPG apt key
  ansible.builtin.apt_key:
    url: https://download.docker.com/linux/ubuntu/gpg
    state: present

- name: Add repository into sources list
  ansible.builtin.apt_repository:
    repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_lsb.codename }} stable
    state: present
    filename: docker

- name: Install Docker 23.0.1-1
  ansible.builtin.apt:
    name:
      - docker-ce=5:23.0.1-1~ubuntu.22.04~jammy
      - docker-ce-cli=5:23.0.1-1~ubuntu.22.04~jammy
      - containerd.io
    state: present
    update_cache: true

- name: Setup docker user
  ansible.builtin.user:
    name: docker
    groups: docker
    append: true

- name: Install Docker module for Python
  ansible.builtin.pip:
    name: docker

- name: Install Docker-Compose & set permissions
  ansible.builtin.get_url:
    url: https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64
    dest: /usr/local/bin/docker-compose
    mode: '0755'

- name: Create Docker-Compose symlink
  ansible.builtin.command:
    cmd: ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
    creates: /usr/bin/docker-compose

- name: Add private registry
  ansible.builtin.template:
    src: daemon.j2
    dest: /etc/docker/daemon.json
    mode: preserve

- name: Restart Docker
  ansible.builtin.service:
    name: docker
    state: restarted
    enabled: true

In the templates directory, create a Jinja2 template file named daemon.j2. This file contains the configuration for the private registry settings (optional).

{
  "insecure-registries": ["http://0.0.0.0:5000"]
}

NB: Fill in the IP using your remote server’s private IP.

After all the setup, your project directory should look like this:

$ tree
.
├── ansible.cfg
├── config
│   └── tasks
│       └── main.yml
├── docker
│   ├── tasks
│   │   └── main.yml
│   └── templates
│       └── daemon.j2
├── hosts
└── playbook.yml

Test & Run

Okay, now check the syntax of your playbook.yml file using this command.

$ ansible-playbook --syntax-check playbook.yml

If there are no errors, run the playbook using this command.

$ ansible-playbook -i hosts playbook.yml

Wait until it finishes.

 ____________________________________________
< PLAY [Setup Docker on Ubuntu Server 22.04] >
 --------------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
 ________________________
< TASK [Gathering Facts] >
 ------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
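If you also want the playbook to confirm the result, you could append a small verification to the docker role’s tasks. This is my own sketch, not part of the original playbook:

```yaml
- name: Verify Docker is installed and responding
  ansible.builtin.command:
    cmd: docker --version
  register: docker_version
  changed_when: false      # a read-only check should never report "changed"

- name: Show the installed Docker version
  ansible.builtin.debug:
    msg: "{{ docker_version.stdout }}"
```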

Conclusion

In this post, I showed you how to install a specific version of Docker using an Ansible playbook, whether you have one server or many.

Thank you for reading this post. If you have suggestions or questions, please leave them below. Thanks!

NB: In this case, I simply used the root user, and I installed Docker on Ubuntu Server 22.04. For the full code, follow this link: ansible-docker.