
Khizer Jan

Understanding Kubernetes: An Open-Source Container Orchestration Framework

Introduction

Kubernetes, an open-source container orchestration framework originally developed by Google, forms the bedrock for efficiently managing containers across diverse environments such as physical machines, virtual machines, cloud platforms, and hybrid deployments.


Challenges addressed by Kubernetes

The surge in microservices drove increased adoption of container technology, as containers proved to be an ideal environment for hosting small, independent applications. Consequently, applications evolved into intricate structures comprising hundreds or even thousands of containers. The complexity of managing this multitude across various environments spurred the need for container orchestration tools. Kubernetes, as one such tool, addresses critical challenges, ensuring high availability, scalability, and robust disaster recovery.


Kubernetes basic architecture

A Kubernetes (K8s) cluster comprises at least one master node and multiple connected worker nodes, each running a kubelet process. The kubelet communicates with the rest of the cluster and executes tasks, such as running application containers, on its node. The containers of the various applications are deployed on these worker nodes, where the actual work happens.



Master node responsibilities

The master node runs the essential Kubernetes control-plane processes. The API server is the entry point for all clients, whether a UI, an API call, or the CLI. The controller manager monitors the cluster's state and reacts to issues such as container failures. The scheduler assigns containers to worker nodes based on workload and available resources. Vital to the cluster is etcd, a key-value store that holds the cluster's configuration and current state; backing up and restoring the entire cluster relies on etcd snapshots.


Virtual network

The virtual network enables communication among the nodes and turns the cluster into one cohesive unit, pooling the resources of the individual machines. Worker nodes generally need more resources because they run the application workloads, whereas master nodes, although critical for accessing the cluster, require fewer, but still essential, resources.


Ensuring cluster reliability

Because master nodes are the point of access to the cluster, redundancy is essential. In production environments, it is recommended to run an odd number of master nodes, typically three or more, so that a majority (quorum) can always be reached. The cluster needs a quorum to make decisions and operate effectively.


If you have an odd number of master nodes (e.g., 3), the quorum is floor(3/2) + 1 = 2, meaning the cluster can tolerate the failure of one node and still maintain a majority. However, if you have an even number of master nodes (e.g., 2), the quorum is also (2/2) + 1 = 2, which means the cluster cannot tolerate any node failure: if one node goes down, only one remains, quorum is lost, and the cluster becomes unavailable.
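In general, the quorum for n master nodes is floor(n/2) + 1. A quick comparison of common node counts illustrates why odd numbers are preferred:

Master nodes    Quorum (floor(n/2) + 1)    Failures tolerated
1               1                          0
2               2                          0
3               2                          1
5               3                          2

Adding a second master therefore buys no extra fault tolerance over a single one, which is why three or five masters are the usual recommendation.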


Basic concepts and configuration


Pods and containers:

In the realm of Kubernetes (K8s), the fundamental building block is the “pod”. As a K8s user, your primary interaction revolves around configuring and managing pods. A pod essentially acts as a wrapper for containers. On each worker node, multiple pods coexist, and within a pod you can host multiple containers. Typically, one pod corresponds to a single application, but exceptions arise when a primary application needs auxiliary containers alongside it. The virtual network provided by K8s gives each pod its own IP address, enabling pod-to-pod communication.
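As a minimal sketch, a single pod wrapping one container could be declared as follows; the pod name, label, and image are placeholders used purely for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod          # illustrative name
  labels:
    app: my-app             # label used later to select this pod
spec:
  containers:
  - name: my-app-container  # the main application container
    image: my-image         # placeholder image, as in the Deployment example below
    ports:
    - containerPort: 80     # port the container listens on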


Pod lifecycle and services:

Pods are dynamic, ephemeral components. If a container within a pod stops or fails, K8s automatically restarts it, so pods may be recreated frequently. This is where the concept of a “service” comes in. When a pod is recreated, it receives a new IP address, which is inconvenient for applications communicating via that IP. To address this, K8s provides services, which act as a stable alternative to the changing pod IP addresses. A service has two roles: it offers a permanent IP address for reaching a set of pods, and it load-balances traffic across the pod replicas.
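As a sketch, a service that gives such pods a stable address could be declared like this; the service name, label selector, and ports are assumptions for illustration and would need to match your actual pods:

apiVersion: v1
kind: Service
metadata:
  name: my-app-service      # illustrative name
spec:
  selector:
    app: my-app             # routes traffic to pods carrying this label
  ports:
  - port: 80                # port exposed by the service
    targetPort: 80          # port the container listens on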


Configuration process:

To configure a K8s cluster and its components, such as pods and services, all requests go through the master node, specifically through the API server. K8s clients, whether a UI like the K8s dashboard, an API (scripts or curl commands), or a command-line tool like kubectl, communicate exclusively with the API server. Requests are submitted in YAML or JSON format and are declarative in nature: they describe the desired outcome, specifying the components and their properties.


Example configuration:

Consider the following YAML configuration as an example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                # run two pod replicas
  selector:
    matchLabels:
      app: my-app            # manage pods carrying this label
  template:                  # pod template used for each replica
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: my-image      # placeholder image name
        ports:
        - containerPort: 80  # port the container listens on
        env:
        - name: ENV_VARIABLE # example environment variable
          value: "example"

In this declarative configuration, a “Deployment” component named “my-app” is defined, specifying the creation of two pod replicas. Each replica hosts a container based on the “my-image” image, running on port 80 with an environment variable set. K8s strives to maintain the declared state, automatically addressing discrepancies between the desired and actual states, such as restarting failed pods.


Feel free to explore our Kubernetes platform assessment if you are interested in finding out where you stand with your platform on Kubernetes or one of its distributions.


In conclusion, understanding the fundamentals and architecture of Kubernetes equips users with the knowledge needed to manage containerized applications proficiently across a spectrum of environments, and to orchestrate resilient clusters with precision and efficiency.


