Monitoring Druid with Prometheus

Druid Exporter – A Prometheus exporter for the Apache Druid database

A while back, we got a requirement to work on Apache Druid; by that we mean its setup, management, and monitoring. Since it was a new topic for us, we started evaluating it and found that it has a lot of amazing features.

For the people who are not familiar with Druid and are just getting started with it, let me give a quick walk-through.

Druid is a columnar database that offers high throughput for both queries and ingestion. It performs really well for aggregation operations, much like a time-series database (TSDB). In addition, it is highly scalable and can easily be scaled out by adding nodes to the cluster.

When we started setting up Druid, we didn't face any difficulties because the official setup documentation is pretty clear. When we got to the monitoring part, however, we searched for a solution that would fit our requirements and didn't find one. The reason was that we wanted to monitor Druid with either Prometheus or Elasticsearch, because these were the tools already being used for monitoring, and we didn't want to add a special monitoring tool just for Druid.

The Druid Exporter

Since we had been using Prometheus for a long time and have an in-depth understanding of it, we decided to write our own custom exporter: the "Druid Exporter".

https://github.com/opstree/druid-exporter

Druid Exporter is a Golang-based exporter that captures Druid's API metrics as well as its JSON-emitted metrics and converts them into the Prometheus time-series format (a sample of the exposed output is shown after the list below). Some of the key metrics it exposes are:

  • Druid’s health metrics
  • Druid’s datasource metrics
  • Druid’s segment metrics
  • Druid’s supervisor metrics
  • Druid’s task metrics
  • Druid’s component metrics, such as broker, historical, ingestion (Kafka), coordinator, and sys
  • Druid’s JVM metrics
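
To give a feel for what the exporter serves, here is an illustrative snippet of the Prometheus exposition format it produces. The metric names and labels below are only examples of the pattern, not the definitive names; the actual metrics are documented in the project's README.

# HELP druid_health_status Whether the Druid cluster reports itself healthy (illustrative name)
# TYPE druid_health_status gauge
druid_health_status 1
druid_segment_count{datasource="wikipedia"} 24
druid_tasks_duration{datasource="wikipedia",task_status="SUCCESS"} 4520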

Once we saw that the project was mature enough to be shared with the open-source community, we open-sourced it and contributed it to the Prometheus community as well. It is listed on the Prometheus site here:

https://prometheus.io/docs/instrumenting/exporters/

Installation/Configuration

Some changes are needed in the Druid cluster to use the full capabilities of the Druid Exporter. Druid can emit its metrics to different emitters, so we must enable the HTTP emitter and point it at the exporter.

If you are configuring Druid through its properties files, add these entries to the common runtime properties file (common.runtime.properties):

druid.emitter.http.recipientBaseUrl=http://<druid_exporter_url>:<druid_exporter_port>/druid
druid.emitter=http

If your Druid configuration is managed through environment variables instead (for example, when running Druid in containers), set the equivalent variables:

export druid_emitter_http_recipientBaseUrl=http://<druid_exporter_url>:<druid_exporter_port>/druid
export druid_emitter=http
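
For reference, the official Apache Druid Docker image translates druid_-prefixed environment variables into runtime properties, so the same settings can be passed through a docker-compose service definition. The service name, image tag, and exporter hostname below are placeholders for illustration only:

services:
  coordinator:
    image: apache/druid:25.0.0    # placeholder version, use your own
    environment:
      - druid_emitter=http
      - druid_emitter_http_recipientBaseUrl=http://druid-exporter:8080/druid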

The Druid Exporter binary can be downloaded from the releases page and then run like this:

# Export the Druid Coordinator or Router URL
export DRUID_URL="http://druid.opstreelabs.in"

# Port on which the exporter will listen
export PORT="8080"

./druid-exporter [<flags>]
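
Once the exporter is running, you can check that it is serving metrics on the standard /metrics endpoint and then add a scrape job for it to your Prometheus configuration. The job name and target below are examples, assuming the exporter listens on port 8080 as configured above:

# Quick sanity check against the running exporter
curl http://localhost:8080/metrics

# prometheus.yml - example scrape job
scrape_configs:
  - job_name: 'druid-exporter'
    static_configs:
      - targets: ['<druid_exporter_url>:8080']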

The Druid Exporter can also be installed inside a Kubernetes cluster using its Helm chart:

helm show values prometheus-community/prometheus-druid-exporter
helm install [RELEASE_NAME] prometheus-community/prometheus-druid-exporter
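
Note that these commands assume the prometheus-community Helm repository has already been added to your Helm client; if not, add it first:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update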

And finally, a Grafana dashboard for these metrics can be found here:

https://grafana.com/grafana/dashboards/12155

Conclusion

Druid Exporter development and releases can be tracked here:

https://github.com/opstree/druid-exporter/releases

If you like, you can contribute to the Druid Exporter by opening pull requests and issues. And if you have any input, you can tell us in the comments section as well.

Thanks for reading. I'd really appreciate any and all feedback, so please leave a comment below.

Cheers till the next time!

 
