Autoscaling in Nomad Cluster

We are living in the microservices era, where a number of applications together support a business model. But an application's success cannot be determined by its features alone; it needs a scalable model as well.

When we talk about scaling microservices, people generally think of applications running inside Kubernetes as containers. Since Kubernetes has its own autoscaling mechanism built on the metrics-server, we don't have to worry much about scaling the applications inside it.

But as we discussed in our previous blog on running non-containerized microservices, we will not always use Kubernetes as the orchestrator; there are scenarios that call for a different solution. In that blog, we used Nomad to run Windows IIS-based applications with orchestrator features, and we successfully deployed those applications on the Nomad platform.

Then another challenge came up: scaling those applications. We started evaluating scaling solutions for Nomad-based applications and found that Nomad has a very interesting way of handling autoscaling.

Autoscaling for Nomad applications

Nomad task groups can easily be scaled out and in to handle traffic. Nomad uses a separate agent called the nomad-autoscaler. The Nomad Autoscaler supports horizontal application scaling as well as horizontal cluster scaling. For application autoscaling, it provides a plugin interface through which we can connect different metrics sources (APMs). For example, if our organization is using Prometheus for monitoring, we can easily integrate it with the Prometheus APM plugin and define scaling policies using PromQL expressions.

Some of the available autoscaling plugins are:

  • Datadog
  • Prometheus
  • DigitalOcean
  • Openstack

Nomad application autoscaling using Prometheus

In this example, we are going to see how we can integrate Nomad with Prometheus to autoscale applications. As a first-level change, we have to update the Nomad server configuration so it exposes metrics in Prometheus format, and then point the autoscaler agent at both Nomad and Prometheus.

# Nomad server configuration: expose telemetry in Prometheus format
telemetry {
  prometheus_metrics = true
  disable_hostname   = true
}

# Autoscaler agent configuration
nomad {
  address = ""   # address of the Nomad server
}

apm "prometheus" {
  driver = "prometheus"
  config = {
    address = ""   # address of the Prometheus server
  }
}

strategy "target-value" {
  driver = "target-value"
}
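With both configurations in place, the autoscaler runs as its own agent process alongside the Nomad servers. A minimal way to start it might look like this (the config file name is just an assumption for this example):

```shell
# Start the Nomad Autoscaler agent, loading the configuration above
# (autoscaler.hcl is a hypothetical file name containing the nomad,
#  apm, and strategy blocks shown earlier)
nomad-autoscaler agent -config ./autoscaler.hcl
```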

Once Prometheus is set up as the metrics store, the next step is to define scaling policies inside the Nomad jobs. For horizontal application scaling, the scaling block is defined at the task group level of the job specification.

scaling {
  min     = 1
  max     = 4
  enabled = true

  policy {
    evaluation_interval = "2s"
    cooldown            = "5s"

    check "cpu_usage" {
      source = "prometheus"
      query  = "avg(nomad_client_allocs_cpu_total_percent{task='api'})"

      strategy "target-value" {
        target = 50
      }
    }
  }
}
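To build some intuition for what this policy does, here is a simplified Python sketch of a proportional target-value calculation. This is not the autoscaler's exact algorithm, just an illustration of the idea: scale the count so the observed metric converges on the target, clamped to the min/max bounds.

```python
import math

def target_value_count(current_count, metric, target, min_count, max_count):
    """Simplified target-value strategy: scale the task group count
    proportionally so the observed metric moves toward the target."""
    if current_count == 0 or metric <= 0:
        return max(current_count, min_count)
    # Proportional factor: how far the metric is from the target
    factor = metric / target
    desired = math.ceil(current_count * factor)
    # Clamp to the min/max bounds from the scaling block
    return max(min_count, min(desired, max_count))

# CPU at 80% against a 50% target with 2 allocations -> scale out
print(target_value_count(2, 80, 50, 1, 4))  # 4 (2 * 1.6 = 3.2, ceil -> 4)
# CPU at 20% against a 50% target with 4 allocations -> scale in
print(target_value_count(4, 20, 50, 1, 4))  # 2
```

With the policy above, CPU averaging well over 50% pushes the count toward the max of 4, and sustained low CPU brings it back down toward the min of 1.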

After configuring the scaling policies, we can test application scaling by generating load with tools like Apache Bench (ab), Siege, etc.
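For example, a quick load test with Apache Bench could look like this (the URL is a placeholder for whatever address your Nomad job exposes):

```shell
# Send 10,000 requests with 100 concurrent clients at the service,
# then watch the allocation count change with `nomad job status`
ab -n 10000 -c 100 http://my-service.example.com:8080/
```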


So, as promised in the previous blog, we have now covered Nomad's autoscaling parameters. I hope you enjoyed the read; if you have any feedback or suggestions, please reach out to me. In the upcoming blog on Nomad, we will discuss the Consul configuration for Nomad-based applications.

Opstree is an End to End DevOps solution provider
