I have started adding more pieces to my pet project, and I would like some visibility into what is happening, and where.

We can classify monitoring frameworks into two big groups:

  • Those that actively report their stats to a central server (like statsd).
  • Those that expose their metrics, so that a server can collect the data (like Prometheus).
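
To make the difference concrete, here is roughly what each one looks like on the wire (metric names and values are made up for illustration):

```shell
# Push model (statsd line protocol): the app actively sends lines like
# this over UDP; the trailing "c" marks the metric as a counter.
echo 'jobs.processed:1|c'

# Pull model (Prometheus text exposition format): the app serves lines
# like this on an HTTP endpoint, and the server scrapes them periodically.
echo 'jobs_processed_total 42'
```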

I've selected Prometheus, because it is widely used and the project is under the umbrella of the Cloud Native Computing Foundation.

And also, I like the aesthetics of the Grafana dashboards :)

Minimal setup

My setup will have two different VMs:

  • One that will host Prometheus and Grafana, and will store the collected data.
  • Another VM with a RabbitMQ server (that will be used as a jobs queue).

We have three main ways to install Prometheus: the distribution package, the official precompiled binary, or the Docker image.

I chose the official binary.

Installing Prometheus

There is a fine DigitalOcean blog post on how to set up Prometheus.

I am following the steps from the tutorial, except for the part about not creating a home dir: I prefer prometheus to have its own home to store the binary and the database.

adduser prometheus --shell /bin/false

I did not give it a shell, so if we have to perform any action as the prometheus user:

sudo -u prometheus bash

In order to have Prometheus running at startup, we can create a startup script, or run it under supervisord or similar.

Running prometheus as a service

(This is copy / paste from the DO article).

Systemd script: /etc/systemd/system/prometheus.service


[Unit]
Description=Prometheus
After=network.target

[Service]
User=prometheus
ExecStart=/home/prometheus/prometheus/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /home/prometheus/tsdb/ \
    --web.console.templates=/home/prometheus/prometheus/consoles

[Install]
WantedBy=multi-user.target

sudo systemctl daemon-reload
sudo systemctl start prometheus
sudo systemctl enable prometheus
sudo systemctl status prometheus

Installing RabbitMQ

The version that comes in the default Debian 9 / Ubuntu Xenial repos is a little bit outdated, so it is better to download the package from the official RabbitMQ downloads page, or to add the RabbitMQ Debian repository directly.

Currently, the default Debian 9 RabbitMQ package is 3.5.x, while the Prometheus exporter plugin requires at least 3.6.x.

Also, we will need an up-to-date Erlang version (19.x or above), so again we need to add an APT repository, following the instructions found on the official Erlang Solutions page.

sudo dpkg -i erlang-solutions_1.0_all.deb

which adds an APT source line like:

deb https://packages.erlang-solutions.com/ubuntu precise contrib

Install the Erlang APT repo key

sudo apt-key add erlang_solutions.asc

Install the RabbitMQ Prometheus exporter

RabbitMQ supports adding plugins, and among them there is the Prometheus RabbitMQ plugin.

The plugins must be downloaded (.ez files are compiled Erlang modules) and saved into the RabbitMQ plugins directory.

In my Debian installation that is /usr/lib/rabbitmq/lib/rabbitmq-server-[version]/plugins (or /usr/lib/rabbitmq/plugins; I'm not sure if updating the server without updating the plugins would work :/ ).


You should be careful when upgrading RabbitMQ, since those files may need to be updated too.

Then we must enable the plugins:

rabbitmq-plugins enable prometheus
rabbitmq-plugins enable prometheus_cowboy
rabbitmq-plugins enable prometheus_httpd
rabbitmq-plugins enable prometheus_rabbitmq_exporter

Now the metrics can be accessed through the Management API, so it makes sense to tell RabbitMQ to use HTTPS:


{rabbitmq_management, [
   {listener, [
     {port, 15672},
     {ssl, true}
     %% (the ssl_opts entry with the certificate paths also goes in this list)
   ]}
]}
In RabbitMQ we have the guest user, which by default is an admin, but is only allowed to make requests from localhost. So we must add a new user:

rabbitmqctl add_user watcher watcherpass
rabbitmqctl set_user_tags watcher monitoring

We can test that the data is being exported correctly with a command:

curl -k -i -u watcher:watcherpass https://rabbitmq.lh:15672/api/metrics

(We must use the -k option because we are using a self-signed certificate.)
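
The response comes in the Prometheus text exposition format. As an illustration of what the scraper sees, here is a made-up sample of that format, and a one-liner to pull a single value out of it:

```shell
# Made-up sample of a /api/metrics response in the Prometheus text
# exposition format, inlined so the snippet is self-contained.
metrics='# HELP rabbitmq_connections Number of open connections
# TYPE rabbitmq_connections gauge
rabbitmq_connections 3
rabbitmq_queue_messages{queue="jobs"} 42'

# Skip comment lines and print the value of one metric by exact name.
echo "$metrics" | awk '$1 == "rabbitmq_connections" { print $2 }'
```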

Prometheus needs a user that can extract the stats data.

Configure prometheus to get the RabbitMQ exported data

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'rabbitmq'
    metrics_path: /api/metrics
    scheme: https
    basic_auth:
      username: gachapin
      password: gachapin
    tls_config:
      server_name: 'rabbitmq.lh'
      insecure_skip_verify: true
    static_configs:
      - targets: ['rabbitmq.lh:15672']

Run it from the command line:

sudo -u prometheus /home/prometheus/prometheus/prometheus \
     --config.file /etc/prometheus/prometheus.yml \
     --storage.tsdb.path /home/prometheus/tsdb/  \
     --web.console.templates=/home/prometheus/prometheus/consoles

Let's open the browser and go to http://watcher.lh:9090/ to see the Prometheus web console. We can check that we are collecting the RabbitMQ data by querying one of the exported metrics, like rabbitmq_queues.
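
The same query can also be run against the HTTP API, e.g. curl 'http://watcher.lh:9090/api/v1/query?query=rabbitmq_queues'. The response shape, with made-up values, looks roughly like this, and can be crudely picked apart even without jq:

```shell
# Trimmed-down, hypothetical example of a /api/v1/query JSON response.
resp='{"status":"success","data":{"result":[{"metric":{"queue":"jobs"},"value":[1500000000,"42"]}]}}'

# Grab the [timestamp, value] pair with plain grep.
echo "$resp" | grep -o '"value":\[[^]]*]'
```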

Add Grafana to plot it beautifully

We go to the Grafana downloads page, and again we opt for using their APT repository, so we can easily get updates.

The same repository is used for both Ubuntu and Debian. Add the following line to your /etc/apt/sources.list file:

deb https://packagecloud.io/grafana/stable/debian/ jessie main

and add the packagecloud key:

curl https://packagecloud.io/gpg.key | sudo apt-key add -

We should change the admin user / pass in the /etc/grafana/grafana.ini file.
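
The defaults live under the [security] section of that file. As a sketch of a non-interactive way to change them (shown here on an inlined copy of the relevant lines, with a made-up password):

```shell
# Hypothetical excerpt of /etc/grafana/grafana.ini with the default credentials.
ini='[security]
admin_user = admin
admin_password = admin'

# Swap in a new password; against the real file this would be a
# sed -i on /etc/grafana/grafana.ini, followed by a service restart.
echo "$ini" | sed 's/^admin_password = .*/admin_password = s3cret/'
```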

After installation we want to enable it on startup:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable grafana-server
sudo /bin/systemctl start grafana-server

After adding the Prometheus data source, we can look for an existing RabbitMQ Grafana dashboard, and select one that uses the rabbitmq_exporter for Prometheus.

Some of the graphs require minor changes, like changing rabbitmq_connectionsTotal to rabbitmq_connections.

Access Prometheus only locally

Since we aren't going to access the Prometheus console from outside, we can make it serve the data only on localhost by adding the flag:

--web.listen-address=localhost:9090

The local Grafana instance can still access it, and in case we want the console, we can still use SSH port forwarding (e.g. ssh -L 9090:localhost:9090 watcher.lh).

Minimal setup done

With this we have the minimal setup to monitor how the RabbitMQ server and its queues are doing. Next steps would be to set up alerts for when a queue grows too much, when we are running out of memory, or when the CPU is overloaded.
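
As a sketch of that next step, a Prometheus alerting rule for a growing queue could look like this (the threshold and durations are arbitrary examples):

```yaml
groups:
  - name: rabbitmq
    rules:
      - alert: QueueBacklog
        # rabbitmq_queue_messages comes from the exporter; 1000 is an arbitrary threshold
        expr: rabbitmq_queue_messages > 1000
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Queue {{ $labels.queue }} has more than 1000 messages"
```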