Kubernetes: DaemonSet

Kubernetes is one of the most widely used orchestration tools for deploying and managing containerized applications. With its variety of features and options, it helps organizations remove manual intervention at every stage. Because requirements and scenarios vary so much, users and organizations deal with many Kubernetes resource types, which means they need a proper understanding of each resource type to match specific resources, or combinations of them, to the scenarios they face. To learn more about the different Kubernetes resource types, you can visit the official Kubernetes documentation.

This blog covers one Kubernetes resource that exists for a specific use-case. There are several resources we could examine through the lens of a particular use-case, but for now we are focusing only on the DaemonSet, which is very important and has unique functionality that no other pod controller can cover.


Before diving into the DaemonSet and its workflow, let's discuss a scenario and set of requirements that come up often, and whether those requirements can be achieved with other Kubernetes resources. Many Kubernetes terms will appear along the way that we will not explain here; this blog is dedicated only to the Kubernetes DaemonSet and its use-cases.

Scenario | Requirements

There can be many requirements of this kind, but for now we are discussing only a few scenarios.

– We want to ship the logs stored at each node's log location to Elasticsearch or any other log server.

– Another requirement is to monitor each node's resources: its memory, free space, load, and so on.

There are many requirements we could list, but here we are focusing on a specific kind: one that cannot be covered by just any pod controller, only by a particular one. Such requirements can be critical, so we need to handle them carefully.

Without DaemonSet: The pod devil

Let's say we try to achieve the above requirements with a Deployment or another pod controller. Since we currently have 3 nodes, we set the replica count to 3 in the Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

According to the official documentation, kube-scheduler selects a node for a pod in a 2-step operation: Filtering and Scoring. Long story short, Kubernetes compares the resources requested by the pod with the resources available on each node, scores each node accordingly, and kube-scheduler assigns the pod to the node with the highest ranking. If more than one node has an equal score, kube-scheduler selects one of them at random. Nothing in this process guarantees one pod per node.

But suppose we somehow manage to place one pod on each worker node; a Deployment configured that way could appear to achieve our requirement.
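One way to force such spreading (a sketch; the names and labels are illustrative) is a required pod anti-affinity on the hostname topology key, which forbids two matching pods from landing on the same node:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # Require that no two pods labeled app=nginx share a node
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: nginx:1.14.2
```

Even then, the replica count stays fixed at 3: if a fourth node joins the cluster, nothing is scheduled on it until someone edits `replicas` by hand.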

But we already know that we use Kubernetes mainly for dynamic requirements, like:

  • What if there is a dynamic requirement?
  • What if the node goes down?
  • What if a new node comes up due to resource constraints?

Kubernetes is one of the best fits for such dynamic requirements, but a plain Deployment cannot handle them on its own.

DaemonSet: Node partner

We have seen that the above scenarios cannot be achieved directly and efficiently using a Deployment or other pod controllers alone. So we need built-in functionality that manages and monitors this requirement for us.

DaemonSet is one of the important built-in Kubernetes pod controllers, and it manages pods in a particular way. Even though several pod controllers can produce similar outcomes, DaemonSet provides functionality that no other pod controller covers.

The basic functionality of a DaemonSet is to run one pod on each node present in the cluster. Most importantly, when the set of nodes changes (a node goes down, or a new node comes up), the DaemonSet proactively keeps the pod-to-node ratio at one to one.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2

If you look at the DaemonSet YAML, it is very similar to a Deployment YAML and other pod controller YAMLs. But look closely and you will notice that the DaemonSet spec does not provide any replica count; in fact, a DaemonSet will throw an error if you add one. That is because a DaemonSet's replica count is dynamic in nature: it depends on the node count of the cluster.
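One practical detail when adapting a manifest like the one above: by default, DaemonSet pods are kept off tainted nodes, such as control-plane nodes. A toleration in the pod template lets the daemon run there too (a sketch; the exact taint key depends on your cluster's version and setup):

```yaml
# Fragment of the DaemonSet's pod template spec (illustrative)
spec:
  template:
    spec:
      tolerations:
      # Allow the pod onto control-plane nodes, which are usually tainted
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
```

This matters for log shippers and node monitors, which typically need to run on every node, including the control plane.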

That is the main functionality of a DaemonSet: it proactively keeps the pod count matched to the node count in the cluster. If there is any fluctuation, it adds or removes pods on its own, without any manual intervention. That is the beauty of the DaemonSet, and the true meaning of a pure container orchestration tool.

The same applies when an existing node is drained or goes down: the DaemonSet checks whether the current pod count still matches the number of nodes, and if a node has gone, it removes the corresponding pod so that the count again equals the number of nodes in the cluster.
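If you need the daemon only on a subset of nodes rather than all of them, a `nodeSelector` in the pod template restricts scheduling to nodes that carry a matching label (the `disktype: ssd` label here is purely illustrative):

```yaml
# Fragment of a DaemonSet's pod template spec (illustrative)
spec:
  template:
    spec:
      # Run daemon pods only on nodes carrying this label
      nodeSelector:
        disktype: ssd
```

The DaemonSet then maintains one pod per matching node, and still adds or removes pods automatically as labeled nodes join or leave the cluster.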

Conclusion

There are many scenarios and requirements that cannot be covered by a single set of resources. We discussed a requirement where we need a specific pod on each node so that every node benefits from it, and why other pod controllers are not a good fit. Even if they meet the requirement initially, they leave a lot of manual work for users, such as checking whether a new node has come up and ensuring that, after a pod terminates, it is rescheduled on the same node.

We also discussed the YAML differences between a DaemonSet and other pod controllers. The DaemonSet is a good fit for the scenarios we discussed, where we require a pod on every node, and it proactively monitors the pod-to-node ratio and acts on any change.

Please let us know in the comments if you have any feedback. Also, let us know if you would like more blogs on Kubernetes resources or any other technology.


Opstree is an End to End DevOps solution provider

