Our team hosted a workshop on Prometheus that covered both its core principles and their practical application. The turnout was impressive, and we also received many queries from people who couldn’t join because of the virtual meeting limit, so we shared the recording and the slides on social platforms. Here we try to share a glimpse of the workshop while keeping the same essence.

Continue reading “Introduction to Prometheus Monitoring”
If you are using cloud-based services, it is paramount to track the events that happen in your environment, isn’t it? Monitoring events in the cloud is important.
Suppose you are using AWS and you find that an Auto Scaling group in your account has been deleted. What would your response be? How would you find out who did it?

Continue reading “Event Monitoring Using AWS CloudTrail”
As promised in our previous blog, Prometheus as Scale – Part 1, this post covers the implementation of Cortex with Prometheus. Before diving into the implementation, we suggest you go through the first blog to understand the need for it.
Previously, we discussed how Prometheus is becoming a go-to option for people who want to implement event-based monitoring and alerting. Implementing and managing Prometheus is quite easy, but when you have a large infrastructure to monitor, or your infrastructure has started to grow, you need to scale your monitoring solution as well.
Prometheus has gained a lot of popularity because of its cloud-native approach to monitoring systems. Its popularity has reached the point where software such as Kubernetes and Envoy now support it natively. For other applications, there are already exporters (agents) available to monitor them.
Since I have been working with Prometheus for quite a long time, and have recently started contributing to its development, I was confident that I could handle any kind of scenario. In this blog, I am going to discuss a scenario that was a very good learning experience for me.
One thing I love about working with a service-based organization is that it keeps you on your toes, so you have to learn constantly. The same is the case with the current organization I am associated with.
Recently I got the opportunity to work on a project in which the client required a Prometheus HA solution. Here is a brief overview of the requirements:
- They had a 100+ node Kubernetes cluster and wanted to retain metrics for a longer period. Moreover, the storage available on the nodes was a blocker for them.
- In the case of Prometheus failure, they didn’t have a backup plan ready.
- They needed a scaling solution for Prometheus as well.
So, we started researching the best possible options: for the HA part, we thought we could implement the Prometheus federation concept, and for long-term storage, we thought of implementing the Thanos project. But while doing this research, we came across another interesting project called Cortex.
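For reference, the federation approach we initially considered amounts to a "global" Prometheus scraping selected series from other Prometheus servers via their /federate endpoint. A minimal sketch (the job name, match[] selectors, and target addresses below are illustrative, not from our actual setup):

```yaml
# Illustrative federation job for a global Prometheus instance.
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 30s
    honor_labels: true          # keep the labels set by the source Prometheus
    metrics_path: '/federate'
    params:
      'match[]':
        # Pull only selected series to limit data volume
        - '{job="kubernetes-nodes"}'
        - '{__name__=~"job:.*"}'
    static_configs:
      - targets:
          - 'prometheus-cluster-a:9090'   # hypothetical per-cluster servers
          - 'prometheus-cluster-b:9090'
```

Federation works well for aggregating a subset of metrics, but it does not by itself solve long-term storage, which is why we kept looking.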
So, we compared Thanos and Cortex. Here are some interesting highlights:
|Cortex|Thanos|
|---|---|
|Recent data stored in ingesters|Recent data stored in Prometheus|
|Uses the Prometheus remote-write API to write data to a remote location|Uses a sidecar approach to write data to a remote location|
|Supports long-term storage|Supports long-term storage|
|HA is supported|HA is not supported|
|A single setup can be integrated with multiple Prometheus servers|A single setup can be associated with a single Prometheus server|
So, after this feature comparison, we decided to go with the Cortex solution, as it was able to fulfill the above-mentioned requirements of the client.
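On the Prometheus side, the Cortex integration essentially comes down to a remote_write entry pointing at the Cortex push endpoint. A sketch of what this could look like (the URL, tenant ID, and queue settings are placeholders; the exact push path can vary by Cortex version):

```yaml
# Prometheus side of a Cortex setup: ship samples via remote write.
remote_write:
  - url: http://cortex-distributor.cortex.svc.cluster.local/api/v1/push  # hypothetical in-cluster address
    headers:
      X-Scope-OrgID: tenant-1   # Cortex tenant ID, required in multi-tenant mode
    queue_config:
      capacity: 10000           # buffer samples if Cortex is briefly unavailable
      max_shards: 30            # upper bound on parallel send shards
```

Because each Prometheus replica writes with the same tenant ID, Cortex can deduplicate samples from an HA pair, which is what makes the HA story work.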
But the Cortex solution is not free of complications either. There are some complications with the Cortex project as well:
- As the architecture is a bit complex, it requires an in-depth understanding of Prometheus and its TSDB.
- These projects require a decent amount of computing power in terms of memory and CPU.
- It can increase your costs for remote storage such as S3, GCS, Azure storage, etc.
Since none of these complications were blockers for us, we moved ahead with the Cortex approach, implemented it in the project, and it worked fine right from day one.
In terms of scaling, however, Prometheus itself has to be scaled vertically, not horizontally, because it is not designed for horizontal scaling. If we try to scale Prometheus horizontally, we end up with scattered data that cannot easily be consolidated, so for the scaling part, we suggest going with the vertical approach.
To automate the vertical scaling of Prometheus in Kubernetes, we used the VPA (Vertical Pod Autoscaler). Based on usage over time, it can down-scale pods that are over-requesting resources and up-scale pods that are under-requesting them.
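As a minimal sketch, a VPA object targeting a Prometheus workload could look like the following (the StatefulSet name, container name, and resource bounds are illustrative, and the VPA CRDs must already be installed in the cluster):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: prometheus-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: prometheus            # hypothetical Prometheus StatefulSet name
  updatePolicy:
    updateMode: "Auto"          # VPA evicts pods and recreates them with new requests
  resourcePolicy:
    containerPolicies:
      - containerName: prometheus
        minAllowed:             # floor, so recommendations never starve Prometheus
          cpu: 500m
          memory: 1Gi
        maxAllowed:             # ceiling, so a runaway recommendation can't exhaust the node
          cpu: "4"
          memory: 16Gi
```

Note that in "Auto" mode the VPA restarts the pod to apply new requests, so Prometheus will briefly stop scraping during each resize; the min/max bounds keep those resizes within sane limits.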
So in this blog, we have seen what approach we took to implement high availability, scalability, and long-term storage for Prometheus. In the next part, we will see how we actually set these things up in our environment.
If you have any other ideas or suggestions around this approach, please share them in the comments. Thanks for reading; I’d really appreciate your feedback.
Torture the data, and it will confess to anything. – Ronald Coase
WHAT IS ELASTIC SIEM
Elastic SIEM (Security Information and Event Management) is a new feature provided by Elastic NV. Using Elastic SIEM we can track and maintain important events that concern us.
Events are actions that reflect something that has happened.