Logging is a critical part of monitoring, and there are many tools for log monitoring, such as Splunk, Sumo Logic, and Elasticsearch. Now that Kubernetes has become so popular, running multiple applications and services on a Kubernetes cluster calls for a centralized, cluster-level stack to analyze the logs created by pods.
One popular centralized logging solution is the combination of multiple open-source tools: Elasticsearch, Fluentd, and Kibana (EFK). In this blog, we will talk about setting up this logging stack on a Kubernetes cluster with our newly developed operator, named “Logging Operator”.
If you want to set up EFK on a Kubernetes cluster manually, you can read our blog on EFK here.
Before going further, we assume that you are already familiar with the functionality of Elasticsearch, Fluentd, and Kibana. If not, you can read about them here.
EFK Challenges on Kubernetes
An EFK setup is easy to get started with on Kubernetes, but it can become complex as requirements grow. Some of the challenging use cases are:
- Scaling the different Elasticsearch node types (master, data, ingestion, and client) up and down in an existing cluster.
- TLS management has to be done manually, and a new client certificate is needed for every new Elasticsearch node.
- Elasticsearch and Fluentd configuration updates need to be rolled out manually.
- Custom configuration in Fluentd for parsing structured (JSON) and unstructured logs.
As a DevOps consulting organization, we have helped many people design their Elasticsearch clusters with security and industry best practices in mind. We used that experience and knowledge to create a custom CRD (Custom Resource Definition) to set up and manage the logging stack on a Kubernetes cluster. If you are new to CRDs and want to read about them in depth, we suggest the official documentation.
The API group we have created is “logging.opstreelabs.in/v1alpha1”, and this operator is also published in the OperatorHub catalog.
Some of the key features of the Logging Operator are:
- Elasticsearch setup with or without TLS on the transport and HTTP layers
- Customizable Elasticsearch configuration and configurable heap size
- Fluentd as a lightweight log shipper, with JSON field-separation support
- Kibana integration with Elasticsearch for log visualization
- Kubernetes setup best practices baked in, such as:
- Loosely coupled setup, i.e. Elasticsearch, Fluentd, and Kibana can each be set up individually
- Index lifecycle support to manage rollover and cleanup of indexes
- Index template support for configuring index settings like policy, replicas, shards, etc.
- Support for the different Elasticsearch node types:
- Master node
- Client node
- Ingestion node
- Data node
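To give a feel for how these node types are expressed, here is a rough sketch of what an Elasticsearch custom resource for this operator might look like. The field names below (`esMaster`, `esData`, etc.) are illustrative assumptions, not the operator's actual schema; the file config/samples/elasticsearch-example.yaml in the repository is the authoritative example.

```yaml
# Illustrative sketch only -- field names are assumptions, not the
# operator's actual schema; see config/samples for the real examples.
apiVersion: logging.opstreelabs.in/v1alpha1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  clusterName: prod-logging
  security:
    tlsEnabled: true          # TLS on the transport and HTTP layers
  esMaster:
    replicas: 3               # dedicated master-eligible nodes
    jvmOptions: "-Xmx1g -Xms1g"   # configurable heap size
  esData:
    replicas: 2               # data nodes holding the indices
  esIngestion:
    replicas: 1               # ingest-pipeline nodes
  esClient:
    replicas: 1               # coordinating/client nodes
```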
Setup of Logging Stack
To deploy the EFK setup using the operator, all we need is a Kubernetes cluster (version 1.11+). Let's deploy the Logging Operator first.
git clone https://github.com/OT-CONTAINER-KIT/logging-operator
cd logging-operator
Once the repo is cloned, we can deploy the CRD objects to Kubernetes.
kubectl apply -f config/crd/bases
Similar to the CRDs, we have pre-baked RBAC config files inside config/rbac, which can be installed and configured with:
kubectl apply -f config/rbac/
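These RBAC manifests follow the standard Kubernetes pattern of a ServiceAccount bound to a ClusterRole. As a simplified illustration only (the resource names and rules here are assumptions; the actual files under config/rbac/ are authoritative), they look roughly like this:

```yaml
# Simplified illustration of the usual operator RBAC pattern;
# the real rules live in config/rbac/ in the repository.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: logging-operator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: logging-operator
rules:
  - apiGroups: ["logging.opstreelabs.in"]   # the operator's own CRDs
    resources: ["*"]
    verbs: ["*"]
  - apiGroups: ["", "apps"]                 # resources the operator manages
    resources: ["pods", "services", "configmaps", "secrets", "statefulsets", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: logging-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: logging-operator
subjects:
  - kind: ServiceAccount
    name: logging-operator
    namespace: default
```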
Once all the initial steps are done, we can create the deployment for the “Logging Operator”. The deployment manifest for the operator is in the config/manager/manager.yaml file.
kubectl apply -f config/manager/manager.yaml
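For reference, an operator Deployment like the one in config/manager/manager.yaml typically looks like the sketch below. The image tag, labels, and resource limits are placeholders, not the project's real values; use the file in the repository as-is.

```yaml
# Rough sketch of a typical operator Deployment; the image and
# labels are placeholders -- config/manager/manager.yaml is authoritative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logging-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      control-plane: logging-operator
  template:
    metadata:
      labels:
        control-plane: logging-operator
    spec:
      serviceAccountName: logging-operator
      containers:
        - name: manager
          image: opstree/logging-operator:latest   # placeholder image tag
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
```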
Then we can create a sample EFK cluster using the samples directory:
kubectl apply -f config/samples/elasticsearch-example.yaml
kubectl apply -f config/samples/fluentd-example.yaml
kubectl apply -f config/samples/kibana-example.yaml
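The Fluentd and Kibana samples follow the same custom-resource pattern as Elasticsearch. A loose sketch for orientation (all field names here are assumptions; the files under config/samples/ hold the real definitions):

```yaml
# Illustrative only -- consult config/samples/ for the actual schemas.
apiVersion: logging.opstreelabs.in/v1alpha1
kind: Fluentd
metadata:
  name: fluentd
spec:
  elasticsearch:
    host: elasticsearch-master   # assumed Elasticsearch service name
  logPrefix: kubernetes          # assumed index prefix for shipped logs
---
apiVersion: logging.opstreelabs.in/v1alpha1
kind: Kibana
metadata:
  name: kibana
spec:
  replicas: 1
  elasticsearch:
    host: elasticsearch-master
```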
The “Logging Operator” is still in the development phase, and we are trying to add some exciting features defined in our ROADMAP.
If you want to know more about it, please visit our documentation site.
If you face any issues or want to ask a question, please feel free to use the comment section of this blog. Also, if you have an issue or feature request regarding the Logging Operator, you can raise it on GitHub.
Thanks for reading. I'd really appreciate any and all feedback; please leave a comment below.
Cheers Till the Next time!!!
Opstree is an End to End DevOps solution provider