Kubernetes is one of the most popular projects for container orchestration, but it is quite interesting that Kubernetes itself has no code to run or manage Linux/Windows containers. So, what is running the containers within your Kubernetes pods?
Yes… Kubernetes doesn’t run your containers
It is just an orchestration platform sitting above container runtimes. Kubernetes has no code of its own to run a container or to manage a container’s lifecycle; instead, dockershim was implemented (inside the kubelet) to talk to Docker as the container runtime. I will talk about dockershim in a later section of this blog.
Also, Docker has grown and matured over the last few years and has been split into a stack of components such as runc (an Open Container Initiative project) and containerd (a CNCF project). After the OCI was established in June 2015, Docker was split into two parts: 1) the Docker engine, which handles the Docker CLI and processes requests, and 2) runC, which handles the low-level work of actually running containers.
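To illustrate how the kubelet delegates container execution to a runtime, here is a sketch of pointing the kubelet at containerd’s CRI socket. This is only an assumption-laden example: the file path and socket path are typical defaults and the flags below are from kubelet versions that still accepted runtime flags; your distribution may differ.

```ini
# /etc/default/kubelet (illustrative sketch; paths and flags may differ on your setup)
# Tell the kubelet to use a remote CRI runtime (containerd) instead of dockershim.
KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock
```

With this in place, the kubelet speaks CRI over the containerd socket, and containerd in turn invokes runC to actually start the containers.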
In this blog, we will see how to deploy the Elasticsearch, Fluent Bit, and Kibana (EFK) stack on Kubernetes. The EFK stack’s prime objective is to reliably and securely collect log data from the K8s cluster in any format, and to make it easy to search, analyze, and visualize that data at any time.
What is EFK Stack?
EFK stands for Elasticsearch, Fluent Bit, and Kibana.
Elasticsearch is a scalable and distributed search engine that is commonly used to store large amounts of log data. It is a NoSQL database. Its primary function here is to store the logs shipped by Fluent Bit and make them available for search and retrieval.
Fluent Bit is an extremely fast, lightweight, and highly scalable logging and metrics processor and forwarder. Because of its performance-oriented design, it is simple to collect events from various sources and ship them to various destinations without added complexity. Kibana is the visualization layer of the stack: a dashboard and exploration UI on top of Elasticsearch that lets us search, filter, and chart the stored logs.
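As a minimal sketch of how the pieces connect, a Fluent Bit configuration that tails container logs and forwards them to Elasticsearch might look like the following. The Elasticsearch host below assumes a Service named `elasticsearch` in a `logging` namespace; adjust it to your cluster.

```ini
[SERVICE]
    Flush        5
    Log_Level    info

[INPUT]
    # Tail container log files written by the runtime on each node.
    Name         tail
    Path         /var/log/containers/*.log
    Parser       docker
    Tag          kube.*

[FILTER]
    # Enrich records with Kubernetes metadata (pod name, namespace, labels).
    Name         kubernetes
    Match        kube.*

[OUTPUT]
    # Ship the enriched records to Elasticsearch.
    Name         es
    Match        kube.*
    Host         elasticsearch.logging.svc.cluster.local
    Port         9200
    Logstash_Format On
```

In a Kubernetes deployment, Fluent Bit typically runs as a DaemonSet so that every node ships its own container logs, and a config like this is mounted from a ConfigMap.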
In the previous blog, we discussed setting up an offline Kubernetes cluster on on-premises servers. After setting up the Kubernetes cluster, we need some basic components to manage the orchestration and monitoring of the cluster, which will help the Horizontal Pod Autoscaler and Vertical Pod Autoscaler get information about CPU/memory usage. We also have to limit access to all the components and microservices we have set up, for which we use the SSO tool.
To begin with, we need a service mesh tool to manage the traffic flow between multiple microservices. There are many tools for this, such as Istio, Linkerd, Cilium Service Mesh, Consul Connect, etc. Here I am considering Istio.
First, we will walk through setting up Istio on the Kubernetes cluster.
Istio is an open source service mesh that helps organizations run distributed, microservices-based apps anywhere. Istio enables organizations to secure, connect, and monitor microservices, so they can modernize their enterprise apps more swiftly and securely. Istio allows organizations to deliver distributed applications at scale. It simplifies service-to-service network operations like traffic management, authorization, and encryption, as well as auditing and observability.
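To make the traffic-management idea concrete, here is a hedged sketch of an Istio VirtualService that splits traffic between two versions of a service. The service name `reviews` and the subsets `v1`/`v2` are illustrative; the subsets would be defined in a matching DestinationRule.

```yaml
# Illustrative sketch: route 90% of traffic to v1 and 10% to v2
# (a canary-style split); names are hypothetical.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

This kind of weighted routing is applied by the Envoy sidecars Istio injects into each pod, so no application code changes are needed to shift traffic between versions.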
In an organization, there are multiple projects, and every project has multiple users. Each user has a different role to perform, and based on that role (owner, maintainer, developer, reporter, or guest) we assign permissions to that user. The main problem is that when we need those same users on a different project, we have to repeat all of this work. There is a better way to manage users in GitLab: create groups and assign those groups to the project.
What is a GitLab Group?
In GitLab, we use groups to manage one or more related projects at the same time. We can use groups to manage permissions for our projects: if someone has access to the group, they get access to all the projects in the group. We can also view all of the issues and merge requests for the projects in the group, and view analytics that show the group’s activity. We can also create subgroups within a group.
Our journey began in 2014 with the goal of becoming a reliable DevOps, Cloud, and Security partner, and 2022 has proved to be a very significant part of that continuum. It was very heartening to experience the trust that large, medium, and small enterprises reposed in us throughout the year. I want to sincerely thank our customers and partners who rely on us for their Cloud, DevOps, and Security outcomes. Some large retailers, established and emerging fintechs, superapps, and one of the Fortune 10 companies, amongst many others – this overwhelming trust and validation makes us much more confident and ready to further cement our position as a DevOps leader in 2023.
In 2022, we saw an increasing trend of enterprises and large customers expecting us to take complete ownership of their systems. They expected us to take care of both BAU and R&D work streams and to deliver business-critical outcomes. I want to thank our people for stepping up beyond expectations. As a result of the team’s zeal, passion, and professionalism, we were able to deliver very promising results in a very short time, leading to robust customer satisfaction and confidence. Our leaders (Growth Partners and Consulting Partners) have played a pivotal role in making us immensely successful in this arena.