This chapter covers additional configuration options and general system configuration.

It is meant for more advanced usage.

Using Fully Qualified Domain Name for my server

This requires that you have set up DNS service to point a fully qualified domain name to your server’s IP address.

For example, your fully qualified domain name points to your server's IP address (if you simply ping the domain name you will see the IP address).

Instead of using an IP-and-port combination (e.g. ports 8811 and 5555), we can use a FQDN to reach the Peer Manager on our server.

In this chapter we are going to configure nginx to serve IOTA Peer Manager and Grafana on port 80, while using a fully qualified domain name.

You should be able to create subdomains for your main domain name. For example, given your FQDN, you can create in your DNS service one entry per service (e.g. one subdomain for Peer Manager and one for Grafana).
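For example, with a hypothetical domain example.com, the DNS entries might look like the following (the hostnames and record layout are illustrative; create the equivalent entries in your DNS provider's interface):

```text
pm.example.com.       IN A   <your server's IP>
grafana.example.com.  IN A   <your server's IP>
```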


Here’s what you have to change:

For Peer Manager, edit the file /etc/nginx/conf.d/iotapm.conf:

upstream iotapm {
    # Peer Manager's local address; verify this matches your installation
    server 127.0.0.1:8011;
}

server {
  listen 80;
  # Replace with your own FQDN, e.g. the subdomain you created above
  server_name pm.example.com;
  server_tokens off;

  # Redirect same port from http to https
  # The two lines here under are included in newer
  # versions of the playbook. Omit those if they were
  # not present in your configuration file.
  error_page 497 https://$host:$server_port$request_uri;
  include /etc/nginx/conf.d/ssl.cfg;

  auth_basic "Restricted";
  auth_basic_user_file /etc/nginx/.htpasswd;

  location / {
      proxy_pass http://iotapm;
  }
}

Of course, don’t forget to replace the example domain name with your own FQDN.

Now, test nginx is okay with the change:

nginx -t

Output should look like this:

# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Then, reload nginx configuration:

systemctl reload nginx

You should be able to point your browser to the FQDN you configured and see the Peer Manager.


For Ubuntu/Debian you will have to allow http port in ufw firewall:

ufw allow http

For Centos:

firewall-cmd --add-service=http --permanent --zone=public && firewall-cmd --reload

The same can be done for Grafana in /etc/nginx/conf.d/grafana.conf:

upstream grafana {
    # Grafana's local address; verify this matches your installation
    server 127.0.0.1:3000;
}

server {
    listen 80;
    # Replace with your own FQDN, e.g. the subdomain you created above
    server_name grafana.example.com;
    server_tokens off;

  location / {
      proxy_pass http://grafana;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection 'upgrade';
      proxy_set_header Host $host;
      proxy_cache_bypass $http_upgrade;
  }
}

Again, test nginx: nginx -t and reload nginx: systemctl reload nginx.

Now you should be able to point your browser to the Grafana FQDN you configured.


Using SSL/HTTPS for accessing your panels ensures all traffic and passwords are impossible to “sniff”. The iri-playbook enables HTTPS by default but uses a self-signed certificate.

Configuring my server with HTTPS

Note that you can configure your node with HTTPS via iric. This document is kept here for reference and for advanced users.

There are amazing tutorials out there explaining how to achieve this. What is important to realize is that you can either create your own “self-signed” certificates (you become the Certificate Authority which isn’t recognized by anyone else), or use valid certificate authorities.

For a while now, the IRI Playbook has generated a self-signed certificate by default. You can replace the certificate and key with your own pair in /etc/nginx/conf.d/ssl.cfg (this file is included by most of the nginx configurations).
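For reference, the certificate and key are set with standard nginx directives; a hypothetical ssl.cfg pointing at your own pair might look like this (the paths are illustrative, not the playbook's defaults):

```nginx
# /etc/nginx/conf.d/ssl.cfg (illustrative paths)
ssl_certificate     /etc/ssl/private/fullnode.crt;
ssl_certificate_key /etc/ssl/private/fullnode.key;
```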

Let’s Encrypt is a free service which allows you to create a certificate per domain name. Another option would be to purchase a certificate.

By having a “valid” certificate for your server (signed by a trusted authority), you will get the green lock next to the URL in the browser, indicating that your connection is secure.

Your connection will also be encrypted if you opt for a self-signed certificate. However, the browser cannot verify who signed the certificate and will report a certificate error (in most cases you can just accept it as an exception and proceed).
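If you want to generate a replacement self-signed certificate yourself, a minimal openssl sketch looks like this (the CN, paths and validity period are illustrative, not part of the playbook):

```shell
# Generate a self-signed certificate valid for one year.
# The CN and output paths are illustrative -- adjust to your own FQDN
# and preferred locations (e.g. under /etc/ssl):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=fullnode.example.com" \
  -keyout /tmp/fullnode.key -out /tmp/fullnode.crt

# Inspect the resulting certificate:
openssl x509 -in /tmp/fullnode.crt -noout -subject -dates
```

You can then point the nginx ssl_certificate and ssl_certificate_key directives at the generated files.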

Here is a great tutorial on how to add HTTPS to your nginx, choose nginx and the OS version you are using (Ubuntu/Debian/CentOS):

(For iri-playbook installations you can configure the generated certificate and key in /etc/nginx/conf.d/ssl.cfg)


I encourage you to refer to the previous chapter about configuring FQDN for Peer Manager and Grafana. From there you can proceed to adding HTTPS to those configurations.


For Ubuntu/Debian you will have to allow https port in ufw firewall:

ufw allow https

For Centos:

firewall-cmd --add-service=https --permanent --zone=public && firewall-cmd --reload

Reverse Proxy for IRI API (wallet)

If you read the two chapters above about configuring nginx to support FQDN or HTTPS you might be wondering whether you should reverse proxy from the web server to IRI API port (for wallet connections etc).

iri-playbook installs HAProxy, with which you can reverse proxy to the IRI API port and benefit from logging and security policies. In addition, you can add an HTTPS certificate. IOTA’s Trinity wallet requires nodes to have a valid SSL certificate.

See Running IRI API Port Behind HAProxy on how to enable HAproxy for wallet via reverse proxy and how to enable HTTPS(SSL) for it.

Sending Alert Notifications

Since release v1.1 a new feature has been introduced to support alerting.


This is considered an advanced feature. Configuring it requires some basic Linux and system configuration experience.


To edit files you can use nano which is a simple editor. See Using Nano to Edit Files for instructions.

TL;DR version

  1. Edit the file /opt/prometheus/alertmanager/config.yml using nano or any other editor.
  2. Find the following lines:
# Send using postfix local mailer
# You can send to a gmail or hotmail address
# but these will most probably be put into junkmail
# unless you configure your DNS and the from address
- name: email-me
  email_configs:
  - to: root@localhost
    from: alertmanager@test001
    html: '{{ template "email.tmpl" . }}'
    smarthost: localhost:25
    send_resolved: true
  3. Replace the email address in the line - to: root@localhost with your email address.
  4. Replace the email address in the line from: alertmanager@test001 with your node’s name, e.g.: alertmanager@fullnode01.
  5. Save the file (in nano: CTRL-X and confirm with ‘y’)
  6. Restart alertmanager: systemctl restart alertmanager


Emails generated by your server will most certainly end up in junk mail, because your server is not configured as a verified email sender.

You can, alternatively, try to send emails to your gmail account if you have one (or any other email account).

You will find examples in the /opt/prometheus/alertmanager/config.yml on how to authenticate.
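For example, sending via an external SMTP server with authentication might look roughly like this (the server, addresses and password below are placeholders; check the Alertmanager documentation for the exact fields):

```yaml
- name: email-me
  email_configs:
  - to: you@example.com
    from: alertmanager@fullnode01
    smarthost: smtp.example.com:587
    auth_username: you@example.com
    auth_password: your-smtp-password
    require_tls: true
    send_resolved: true
```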

For more information about alertmanager’s configuration consult the documentation.


The monitoring system has a set of default alerting rules. These are configured to monitor various data of the full node.

For example:

  • CPU load high
  • Memory usage high
  • Swap usage high
  • Disk space low
  • Too few or too many neighbors
  • Inactive neighbors
  • Milestones sync

Prometheus is the service responsible for collecting metrics data from the node’s services and status.

Alert Manager is the service responsible for sending out notifications.

Configuration Files

It is possible to add new rules or tweak existing ones:


The alerting rules are part of Prometheus and are configured in /etc/prometheus/alert.rules.yml.
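As an illustration, rules in that file follow the standard Prometheus alerting-rule format; a hypothetical disk-space rule might look like this (the expression, threshold and labels are illustrative, not the playbook's defaults):

```yaml
groups:
- name: example
  rules:
  - alert: DiskSpaceLow
    # Fire when less than 10% of the root filesystem remains for 5 minutes
    expr: node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"} < 0.10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Less than 10% disk space left on {{ $labels.instance }}"
```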


Changes to Prometheus’s configuration require a restart of prometheus.


The configuration file for alertmanager can be found in /opt/prometheus/alertmanager/config.yml.

This is where you can set your email address and/or slack channel (not from iota!) to where you want to send the notifications.

The email template used for the emails can be found in /opt/prometheus/alertmanager/template/email.tmpl.


Changes to Alert Manager configuration files require a restart of alertmanager.


Prometheus can be controlled via systemctl, for example:

To restart: systemctl restart prometheus
To stop: systemctl stop prometheus
Status: systemctl status prometheus
Log: journalctl -u prometheus

The same can be done with alertmanager.

For more information see the Prometheus and Alertmanager documentation.

Configuring Multiple Nodes for Ansible

Using the Ansible playbook, it is possible to configure multiple full nodes at once.

How does it work?

Basically, following the manual installation instructions should get you there: Installation.

This chapter includes some information on how to prepare your nodes.


The idea is to clone the iri-playbook repository onto one of the servers/nodes, configure values and run the playbook.

The node from where you run the playbook will SSH connect to the rest of the nodes and configure them. Of course, it will also become a full node by itself.
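As an illustration, the playbook's inventory file would then list all the nodes to configure (the hostnames and group name here are hypothetical; use the group(s) defined in the inventory file shipped with the repository):

```text
[fullnode]
node01.example.com
node02.example.com
10.0.0.3
```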

SSH Access

For simplicity, let’s call the node from where you run the playbook the “master node”.

In order for this to work, you need SSH access to all nodes from the master node. This guide assumes root access. It is also possible to run as an unprivileged user and become root, but we will skip this for simplicity.

Assuming you already have SSH access to all the nodes (e.g. with a password), let’s set up SSH key authentication, which allows you to connect without entering a password each time.

Make sure you are root (run whoami to check). If not, run sudo su - to become root.

Create New SSH Key

Let’s create a new SSH key:

ssh-keygen -b 2048 -t rsa

You will be asked to enter a path (accept the default /root/.ssh/id_rsa) and a passphrase (for simplicity, just press Enter to use no passphrase).

Output should look similar to this:

# ssh-keygen -b 2048 -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/
The key fingerprint is:
SHA256:tCmiLASAsDLPAhH3hcI0s0TKDCXg/QwQukVQZCHL3Ok root@test001
The key's randomart image is:
+---[RSA 2048]----+
|#%/. ..          |
|@%*=o.           |
|X*o*.   .        |
|+*. +  . o       |
|o.oE.o. S        |
|.o . . .         |
|. o              |
| .               |
|                 |
+----[SHA256]-----+

The generated key is the default key to be used by SSH when authenticating to other nodes (/root/.ssh/id_rsa).

Copy SSH Key Identity

Next, we copy the public key to the other nodes:

ssh-copy-id -i /root/.ssh/id_rsa root@other-node-name-or-ip

Given that you have root SSH access to the other nodes, you will be asked to enter a password, and possibly a question about host authenticity.

Output should look like:

# ssh-copy-id root@other-node-name-or-ip
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/"
The authenticity of host 'node-name (' can't be established.
ECDSA key fingerprint is SHA256:4QAhCxldhxR2bWes4uSVGl7ZAKiVXqgNT7geWAS043M.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@other-node-name-or-ip's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@other-node-name-or-ip'"
and check to make sure that only the key(s) you wanted were added.

Perform the authentication test, e.g. ssh 'root@other-node-name-or-ip'. This should work without a password prompt.

Run ssh-copy-id -i /root/.ssh/id_rsa root@other-node-name-or-ip for each node you want to configure.

Once this is done you can use Ansible to configure these nodes.

Using Nano to Edit Files

Nano is a Linux editor with which you can easily edit files. Of course, it is nothing like a graphical editor (e.g. Notepad), but it does its job.

Most Linux experts use vi or vim, which are much harder for beginners.

First, ensure you have nano installed:

  • On Ubuntu/Debian: apt-get install nano -y
  • On CentOS: yum install nano -y

Next, you can use nano to create a new file or edit an existing one. For example, to create a new file /tmp/test.txt, run:

nano /tmp/test.txt

Nano opens the file and we can start writing. Let’s add the following lines:

IRI_NEIGHBORS="tcp:// tcp://testing:15600"

Instead of typing this, you can copy and paste it. Pasting can be done with a right mouse click or SHIFT-INSERT.

To save the file, press F3. Alternatively, press CTRL-X to exit; if there are unsaved modifications, nano will ask whether to save them.

After having saved the file, you can run nano /tmp/test.txt again in order to edit the existing file.


Please check Nano’s Tutorial for more information.

Running IRI API Port Behind HAProxy

On the IRI Dockerized version, IRI is already configured to run behind HAProxy on port 14267. You do not need to follow the instructions below; they are kept here as reference only.

The IRI API port can be configured to be accessible via HAProxy. The benefits of doing so are:

  • Logging
  • Whitelist/blacklisting
  • Password protection
  • Rate limiting per IP, or per command
  • Denying invalid requests

To get it configured and installed you can use iric or run:

cd /opt/iri-playbook && git pull && ansible-playbook -i inventory -v site.yml --tags=iri_ssl,loadbalancer_role -e '{"lb_bind_addresses": [""]}' -e overwrite=yes

Please read this important information:

The API port will be accessible on 14267 by default.

Note that if you have previously enabled IRI with the --remote option or the API_HOST setting, you can disable those now. HAProxy will take care of that.

In addition, the REMOTE_LIMIT_API value in the configuration file no longer plays any role; HAProxy has taken control over the limited commands.

To see the configured denied/limited commands, see group_vars/all/lb.yml, or edit /etc/haproxy/haproxy.cfg after installation. Note that the regex format differs from what you may be used to.

Rate Limits

HAProxy enables rate limiting. In some cases, if you are loading a seed which has a lot of transactions on it, HAProxy might block too many requests.

One solution is to increase the rate limiting values in /etc/haproxy/haproxy.cfg. Find those lines and set the number accordingly:

# dynamic stuff for frontend + raise gpc0 counter
tcp-request content  track-sc2 src
acl conn_rate_abuse  sc2_conn_rate gt 250
acl http_rate_abuse  sc2_http_req_rate gt 400
acl conn_cur_abuse  sc2_conn_cur gt 21

Don’t forget to restart HAProxy afterwards: systemctl restart haproxy.

Enabling HTTPS for HAProxy

To enable HTTPS for HAProxy, run the following command or find the option in the main menu of iric. It will enable HAProxy to serve the IRI API on port 14267 with HTTPS. (Warning: this will override any manual changes you might have previously applied to /etc/haproxy/haproxy.cfg.)

cd /opt/iri-playbook && git pull && ansible-playbook -i inventory site.yml -v --tags=iri_ssl,loadbalancer_role -e lb_bind_address= -e haproxy_https=yes -e overwrite=yes

Note that this will apply a default self-signed certificate, but the command is required to enable HTTPS in the first place. If you want to use a valid certificate from a trusted certificate authority you can provide your own certificate + key file manually after running the above command. Alternatively, check the section below for installing a Let’s Encrypt certificate which is free:

Let’s Encrypt Free Certificate

You can install a Let’s Encrypt certificate; one prerequisite is that you have a fully qualified domain name pointing to the IP of your node.

If you already have a domain name, and ran the above command to enable HTTPS, you can run the following script:


The script will ask you for your email address which is used as an account at Let’s Encrypt. It will also ask for the domain name that points to your server’s public IP address.

The script will install the required utilities and request the certificate for you. It will proceed to install the certificate with HAProxy and add a cron job to automatically renew the certificate before it expires.
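For reference, an automatic renewal cron job typically looks something like the following (the exact entry and renewal tool the script installs may differ):

```text
# /etc/cron.d entry (illustrative): attempt renewal twice a day
0 3,15 * * * root certbot renew --quiet
```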

Once the script is finished, you can point your browser to https://your-domain-name:14267; you should get a 403 Forbidden page. You will also see the green lock icon to the left of the URL, which means the certificate is valid.

If you need help with this, please find help on Discord #fullnodes channel.


This setup is not yet fully automated via iric. For that reason, please avoid re-running the HAProxy enable commands, as that will overwrite the certificate configuration in the haproxy configuration file. If you did that accidentally, you can always run the /usr/local/bin/ script once more and it will set the correct configuration file for haproxy.


If you previously used a script to configure Let’s Encrypt with Nginx and your Nginx is no longer working, please follow the instructions at Fix Nginx

Installation Options

This is an explanation about the select-options provided by the fully automated installer.


This installation runs all the services inside Docker containers. If you already have Docker installed on your system you might choose to skip this step.


Nginx is a fast and versatile webserver. Its main function in this configuration is to allow access to GUIs in the browser such as IOTA Peer Manager, Prometheus, Grafana and more.

System Dependencies

Although all services are going to run inside Docker, some additional packages are required on the system. If you choose not to install the dependencies, some things might not function as expected and you will have to resolve them manually.


The installation takes care of the firewalls: it ensures the firewall is running and configures the required ports. You can choose not to let the installer configure the firewall should you wish to do this manually.


HAProxy is a proxy/load-balancer. In the context of this installation it can be enabled to serve the IRI API port.

You can read more about it here: Running IRI API Port Behind HAProxy.

IOTA Caddy

IOTA Caddy is a middleware used to perform more efficient PoW. All attachToTangle requests are proxied from HAProxy to the IOTA Caddy middleware.

Thanks to Luca Moser


The monitoring refers to installation of:

  • Prometheus (metrics collector)
  • Alertmanager (trigger alerts based on certain rules)
  • Grafana (Metrics dashboard)
  • Iota-prom-exporter (IRI full node metrics exporter for Prometheus)

It is recommended to install those to have a full overview of your node’s performance.

ZMQ Metrics

IRI can provide internal metrics and data by exposing ZeroMQ port (locally by default). If enabled, this will allow the iota-prom-exporter to read this data and create additional graphs in Grafana (e.g. transactions confirmation rate etc).
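Enabling ZMQ boils down to one option in IRI's configuration file (the file path below is typical for iri-playbook installations; verify it on your own system):

```text
# in /var/lib/iri/iri.ini (illustrative path)
ZMQ_ENABLED = true
```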

Upgrade IRI and Remove Existing Database

(option #3 from the IOTA Snapshot Blog)

A snapshot of the database normally involves a new version of IRI. This is also the case in the upcoming snapshot of April 29th, 2018.

Here are the steps you should follow in order to get a new version of IRI and remove the old database:

Run the following commands as user root (you can run sudo su to become user root).

  1. Stop IRI:
systemctl stop iri
  2. Remove the existing database:
rm -rf /var/lib/iri/target/mainnet*
  3. Run the command-line utility iric and choose “Update IRI Software”. This will download the latest version and restart IRI.

If you don’t have iric installed, refer to this chapter on how to upgrade IRI manually: Upgrade IRI.

Upgrade IRI and Keep Existing Database

(option #2 from the IOTA Snapshot Blog)

If you want to keep the existing database, the instructions provided by the IF include steps to compile the RC version (v1.4.2.4_RC) and apply a database migration tool.

To make this process easy, I included a script that will automate this process. This script works for both CentOS and Ubuntu/Debian (but only for iri-playbook installations).

You will be asked whether you want to download a pre-compiled IRI from my server or compile it on your own server.

Please read the warning below and use the following command (as root) in order to upgrade IRI and keep the existing database:

bash <(curl -s


This script will only work with installations of the iri-playbook. I provide this script to assist, but I do not take any responsibility for any damages, loss of data or breakage. By running this command you agree to the above and you take full responsibility.

For assistance and questions you can find help on IOTA’s #fullnodes channel (discord).