Kubernetes is an open-source container orchestration platform used for running distributed applications and services at scale. Merely knowing the basics of Kubernetes isn’t enough to leverage the many advantages it offers. To understand how Kubernetes actually works, it’s important to first understand the complete Kubernetes architecture, its components, and how they interact with each other. Let’s take a brief look at how the different components of Kubernetes work together.
This blog outlines the various components that make up a complete, working Kubernetes cluster. Here, we cover:
– What is a Kubernetes Cluster?
– The Compute Machines or Nodes (Worker Nodes)
– The Control Plane (Master Node)
– Components of the Control Plane
– Node Components
Let’s dive in!
What is a Kubernetes Cluster?
A Kubernetes cluster is a set of nodes that run containerized applications and workloads. A Kubernetes cluster has two parts:
– The Control Plane (Master Node)
– The Compute Machines or Nodes (Worker Nodes)
The Compute Machines or Worker Nodes
A node in a Kubernetes cluster is a worker machine, either virtual or physical depending on the cluster. Each node is managed by the control plane and contains the services necessary to run Pods, which are made up of containers.
Wondering what Pods in Kubernetes are? A Pod is the smallest and simplest deployable unit in the Kubernetes object model. It represents a set of one or more running containers within the cluster.
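To make this concrete, here is a minimal Pod manifest. The names, labels, and image tag are placeholders chosen for this example, not values from any particular cluster:

```yaml
# A minimal, hypothetical Pod manifest: one Pod running a single nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: web            # example Pod name
  labels:
    app: web           # example label
spec:
  containers:
    - name: web
      image: nginx:1.25   # example image; use whatever your workload needs
      ports:
        - containerPort: 80
```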
Every cluster has at least one worker node. The worker node(s) are responsible for hosting the Pods. The Kubernetes control plane automatically schedules these Pods across the different nodes in the cluster, taking into account the available resources on each node.
The Control Plane or Master Node
The control plane (master node) is an important part of the Kubernetes cluster. It exposes the API and interfaces for deploying and managing the complete lifecycle of containers, and it manages the worker nodes and the Pods within the cluster. The components of the control plane make decisions about the cluster (like scheduling Pods) and identify and respond to cluster events (such as starting a new Pod when a Deployment’s replicas field is unsatisfied).
Here’s the Kubernetes architecture diagram with all the components tied together.
Components of the Control Plane
Let’s talk about the most crucial part of the Kubernetes cluster: the control plane. The control plane contains the core Kubernetes components that control the complete cluster, along with data that specifies the cluster’s state and configuration. With the help of these components, the control plane responds to cluster events and makes sure that the containers within the cluster are running in sufficient numbers and with the necessary resources.
kube-apiserver
The API server is a crucial component of the Kubernetes control plane that exposes the Kubernetes API. As the front end of the control plane, the API server is responsible for handling internal and external requests: it determines whether a request is valid and, if it is, processes it.
etcd
etcd is the key-value store that holds the configuration data and information about the state of the Kubernetes cluster. It is in etcd that the complete, true state of the cluster is recorded.
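To illustrate the role etcd plays, here is a toy sketch of a key-value store with watch notifications, which is the core interaction pattern control plane components rely on. The key names loosely imitate etcd's `/registry/...` layout but are simplified for the example; real etcd is a distributed, consistent store, not an in-memory dict:

```python
# Toy sketch of etcd-style key/value storage with watches (NOT real etcd).
class KVStore:
    def __init__(self):
        self._data = {}
        self._watchers = []  # list of (prefix, callback) pairs

    def put(self, key, value):
        # Store the value, then notify every watcher whose prefix matches.
        self._data[key] = value
        for prefix, callback in self._watchers:
            if key.startswith(prefix):
                callback(key, value)

    def get(self, key):
        return self._data.get(key)

    def watch(self, prefix, callback):
        # Components "watch" a key prefix to react to state changes.
        self._watchers.append((prefix, callback))

store = KVStore()
events = []
store.watch("/registry/pods/", lambda k, v: events.append((k, v)))
store.put("/registry/pods/default/nginx", {"phase": "Pending"})
print(events)  # the watcher saw the new Pod record
```

This watch mechanism is what lets the scheduler and controllers react promptly when the cluster's desired state changes, instead of polling.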
kube-scheduler
kube-scheduler is the control plane component that watches for newly created Pods with no assigned node and selects a node for them to run on. In other words, it identifies where new containers should be placed.
The Kubernetes scheduler considers the resources a Pod needs, such as CPU or memory, along with the overall state of the cluster. It then schedules the Pod onto an appropriate compute node based on factors including resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, inter-workload interference, and more.
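The paragraph above can be sketched as a filter-then-score cycle, which is how the real scheduler is structured. The node names, free-resource numbers, and the "most free CPU wins" scoring rule below are invented for this example; the actual scheduler applies many plugins:

```python
# Simplified sketch of the scheduler's filter-then-score cycle (illustrative only).
nodes = [
    {"name": "node-a", "cpu_free": 500,  "mem_free": 1024},
    {"name": "node-b", "cpu_free": 2000, "mem_free": 4096},
]
pod = {"name": "web-1", "cpu_req": 1000, "mem_req": 2048}

def fits(node, pod):
    # Filtering: drop nodes that cannot satisfy the Pod's resource requests.
    return node["cpu_free"] >= pod["cpu_req"] and node["mem_free"] >= pod["mem_req"]

def score(node):
    # Scoring: here we simply prefer the node with the most free CPU.
    return node["cpu_free"]

feasible = [n for n in nodes if fits(n, pod)]
best = max(feasible, key=score)
print(best["name"])  # only node-b has enough free CPU and memory
```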
kube-controller-manager
Within the Kubernetes architecture, kube-controller-manager is the control plane component that runs controller processes. Each controller watches over one aspect of the cluster, and although each is logically a separate process, they are compiled into a single binary to reduce complexity. Some of these controllers are:
- Node controller: It is responsible for identifying and responding when nodes go down.
- Job controller: The task of this component is to watch for Job objects that represent one-off/independent tasks. It then creates Pods for running those tasks to completion.
- Endpoints controller: It populates the Endpoints objects, i.e., it joins Services and Pods.
- Service Account & Token controllers: They create default service accounts and API access tokens for new namespaces.
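All of these controllers share one pattern: compare the desired state with the observed state and act on the difference. The sketch below illustrates that reconciliation loop for a replica count; the function name and action tuples are invented for this example and are not Kubernetes APIs:

```python
# Sketch of the reconciliation pattern shared by controllers (illustrative only):
# compare desired state with observed state and return the actions that converge them.

def reconcile(desired_replicas, running_pods):
    """Return the actions needed to move running_pods toward desired_replicas."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few Pods: create the missing ones.
        return [("create-pod", i) for i in range(diff)]
    if diff < 0:
        # Too many Pods: delete the surplus.
        return [("delete-pod", pod) for pod in running_pods[:(-diff)]]
    return []  # desired state already matches observed state

print(reconcile(3, ["pod-a"]))           # two Pods must be created
print(reconcile(1, ["pod-a", "pod-b"]))  # one Pod must be deleted
```

A real controller runs this comparison in a loop, driven by watch events from the API server, so the cluster continuously converges toward the declared state.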
cloud-controller-manager
cloud-controller-manager is the Kubernetes control plane component that embeds cloud-specific control logic. It links the cluster into the cloud provider’s API and separates the components that interact with the cloud platform from those that only interact with the cluster. In simpler words, cloud-controller-manager runs only those controllers that are specific to the cloud provider in use.
This Kubernetes architecture diagram shows how different parts of a Kubernetes cluster are related to each other.
Looking at this architecture, we can clearly see that Kubernetes runs workloads by placing containers into Pods that run on nodes. Now that we know what Pods are, let’s look at the components of a node, or compute machine.
Here’s a brief description of each node component.
kubelet
kubelet is an agent that runs on each node in the cluster and makes sure that the containers within a Pod are running. But how is this done? kubelet takes a set of PodSpecs and ensures that the containers defined in those PodSpecs are running and healthy.
kube-proxy
kube-proxy is a network proxy that runs on each node within a Kubernetes cluster, facilitating Kubernetes networking services. It handles communication inside and outside of the cluster, using the operating system’s packet filtering layer if one is available; otherwise, it forwards the traffic itself.
Container runtime
Each node has a container runtime engine that is responsible for running the containers. Kubernetes supports many container runtimes, such as Docker, containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).
kubelet, kube-proxy, and the container runtime are node components, but they can also be present on the master node, i.e., the control plane.
Wrapping it all up!
This was a complete Kubernetes architecture explanation. I hope it gives you a clear picture of how Kubernetes works. Along with all the incredible advantages that come with deploying Kubernetes, there are challenges too. A powerful and reliable Kubernetes and microservices management platform such as BuildPiper can help overcome these complex Kubernetes challenges and allow enterprises to extract the most out of their investments.
Opstree is an End to End DevOps solution provider