Introducing Kubernetes Vault Web-hook

Initially, we had the DevOps framework, in which the Development and Operations teams collaborated to create an “Agile” ecosystem. But nowadays a lot of people are talking about the “DevSecOps” realm, in which security is no longer treated as an afterthought; instead, it is inculcated into development and operations practices from the start.

Continue reading “Introducing Kubernetes Vault Web-hook”

Running Non-containerized Microservices

Whenever someone says orchestration for microservices, the first thing that usually comes to mind is Kubernetes. I believe that’s normal. I used to think the same, but then I came across an interesting scenario that completely changed the way I think about microservice orchestration.

Usually, people think microservices mean containers, and hence they build their applications in a cloud-native fashion so that they can easily run on any platform using the containerized approach. Well, I agree that containerization is a decent way of designing a cloud-native application, especially when we integrate it with orchestrators like Kubernetes or OpenShift. It takes away a lot of overhead from us, like scaling, failover, and deployment, but it doesn’t imply that microservices can only be managed inside a containerized ecosystem. Microservices are an ideology, a mindset for designing applications, and containerization is a power-up that supports that ideology.

Continue reading “Running Non-containerized Microservices”

Prometheus at Scale – Part 1

Prometheus has gained a lot of popularity because of its cloud-native approach to monitoring systems. Its popularity has reached a level where software and applications such as Kubernetes and Envoy now support it natively, and for most other applications there are already exporters (agents) available to expose their metrics.
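To make the exporter idea concrete, here is a minimal sketch of a Prometheus scrape configuration that pulls metrics from one such exporter. The job name and target address are assumptions for this example (a node_exporter service on its default port 9100), not part of the original setup:

```yaml
# prometheus.yml -- a minimal sketch of scraping an exporter
global:
  scrape_interval: 30s        # how often targets are scraped

scrape_configs:
  # Kubernetes and Envoy expose /metrics natively; other software is
  # typically covered by an exporter such as node_exporter
  - job_name: "node-exporter"
    static_configs:
      - targets: ["node-exporter.monitoring.svc:9100"]   # hypothetical service address
```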

Since I have been working with Prometheus for quite a long time, and recently started doing development on it, I was confident that I could handle any kind of scenario. In this blog, I am going to discuss a scenario that was a very good learning experience for me.

One thing I love about working with a service-based organization is that it keeps you on your toes, so you have to learn constantly. The same is the case with the current organization I am associated with.

Recently I got an opportunity to work on a project in which the client had a requirement to implement a Prometheus HA solution. Here is a brief overview of the requirements:

  • They had a 100+ node Kubernetes cluster and wanted to keep the data for a longer period. Moreover, the storage available on the node was a blocker for them.
  • In the case of a Prometheus failure, they didn’t have a backup plan ready.
  • They needed a scaling solution for Prometheus as well.

Our Solution

So, we started our research into the best possible options. For the HA part, we thought we could implement the Federated Prometheus concept, and for long-term storage, we thought of implementing the Thanos project. But while doing the research, we came across one more interesting project called Cortex.

So, we did a comparison between Thanos and Cortex. Here are some interesting highlights:

Cortex | Thanos
--- | ---
Recent data is stored in ingesters | Recent data is stored in Prometheus
Uses the Prometheus remote-write API to write data to a remote location | Uses a sidecar approach to write data to a remote location
Supports long-term storage | Supports long-term storage
HA is supported | HA is not supported
A single setup can be integrated with multiple Prometheus instances | A single setup can be associated with only a single Prometheus instance

So after this comparison, we decided to go with the Cortex solution, as it was able to fulfill all of the client’s above-mentioned requirements.
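To make the remote-write row in the comparison above concrete, here is a minimal sketch of how Prometheus can be pointed at Cortex using the standard remote_write configuration. The URL is a placeholder assumption; the actual host and push path depend on how Cortex is exposed in your environment:

```yaml
# prometheus.yml (fragment) -- a sketch of shipping samples to Cortex
remote_write:
  # hypothetical Cortex endpoint; adjust host, port, and path to your setup
  - url: "http://cortex-distributor:8080/api/v1/push"
    queue_config:
      max_samples_per_send: 1000   # tune to your ingestion rate
```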

But the Cortex solution is not free of complications either:

  • As the architecture is a bit complex, it requires an in-depth understanding of Prometheus as a TSDB.
  • It requires a decent amount of computing power in terms of memory and CPU.
  • It can increase your remote storage costs on S3, GCS, Azure Storage, etc.

Since none of these complications were blockers for us, we moved ahead with the Cortex approach, implemented it in the project, and it worked fine right from day one.

But in terms of scaling, we have to scale Prometheus vertically, not horizontally, because it is not designed to scale horizontally.

[Image: vertical scaling, horizontal scaling, and Prometheus at scale, explained with cats]

If we try to scale Prometheus horizontally, we will end up with scattered data that cannot be consolidated easily, so for the scaling part, we would suggest going with the vertical approach.

To automate the vertical scaling of Prometheus in Kubernetes, we used the VPA (Vertical Pod Autoscaler). It can both down-scale pods that are over-requesting resources and up-scale pods that are under-requesting resources, based on their usage over time.
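For illustration, here is a minimal sketch of what such a VPA object could look like. The StatefulSet name, container name, and resource bounds are assumptions for this example and should be adapted to your deployment:

```yaml
# vpa.yaml -- a sketch of a VerticalPodAutoscaler for Prometheus,
# assuming Prometheus runs as a StatefulSet named "prometheus" and
# the VPA components are already installed in the cluster
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: prometheus-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: prometheus            # hypothetical workload name
  updatePolicy:
    updateMode: "Auto"          # VPA evicts pods and recreates them with updated requests
  resourcePolicy:
    containerPolicies:
      - containerName: prometheus
        minAllowed:             # illustrative lower bound
          cpu: 500m
          memory: 2Gi
        maxAllowed:             # illustrative upper bound
          cpu: "4"
          memory: 16Gi
```

Note that with updateMode set to "Auto", the VPA restarts pods to apply new resource requests, so Prometheus should be backed by persistent storage to avoid losing recent data during an update.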

Conclusion

So in this blog, we have seen what approach we took for implementing high availability, scalability, and long-term storage in Prometheus. In the next part of the blog, we will see how we actually set these things up in our environment.

If you have any other ideas or suggestions around this approach, please share them in the comment section. Thanks for reading; I’d really appreciate your suggestions and feedback.

Image Reference-1 

Monitoring Druid with Prometheus

Druid Exporter – A Prometheus agent for Druid Database

A while back, we got a requirement to work on Apache Druid. By working on Apache Druid, we mean setup, management, and monitoring. Since it was a new topic for us, we started evaluating it, and we found that it actually has a lot of amazing features.

So for the people who don’t have any idea about Druid and are just starting with it, let me give a quick walk-through.

Continue reading “Monitoring Druid with Prometheus”

Opstree’s Logging (EFK) Operator

Logging is a critical part of monitoring, and there are a lot of tools for log monitoring, like Splunk, Sumo Logic, and Elasticsearch. Since Kubernetes is becoming so popular now, running multiple applications and services on a Kubernetes cluster requires a centralized, cluster-level stack to analyze the logs created by the pods.
One of the well-liked centralized logging solutions is the combination of multiple open-source tools, i.e. Elasticsearch, Fluentd, and Kibana. In this blog, we will talk about setting up a logging stack on a Kubernetes cluster with our newly developed operator, named “Logging Operator”.

Continue reading “Opstree’s Logging (EFK) Operator”