Cluster Setup

Warning! This page documents an old version of InfluxDB, which is no longer actively developed. InfluxDB v1.2 is the most recent stable version of InfluxDB.

Note: Clustering is now a commercial product called InfluxEnterprise. More information can be found here.

This guide briefly introduces the InfluxDB cluster model and provides step-by-step instructions for setting up a cluster.

InfluxDB cluster model

InfluxDB supports arbitrarily sized clusters and any replication factor from 1 to the number of nodes in the cluster.

There are three types of nodes in an InfluxDB cluster: consensus nodes, data nodes, and hybrid nodes. A cluster must have an odd number of nodes running the consensus service to form a Raft consensus group and remain in a healthy state.

Hardware requirements vary for the different node types. See Hardware Sizing for cluster hardware requirements.

Cluster setup

The following steps configure and start an InfluxDB cluster with three hybrid nodes. If you want to run other node types, see Cluster Node Configuration for their configuration details. Note that your first three nodes must be either hybrid nodes or consensus nodes.

We assume that you are running some version of Linux. While it is possible to build a cluster on a single server, doing so is not recommended.

1. Install InfluxDB on three machines. Do not start the daemon on any of the machines.

2. Configure the three nodes.

Where IP is the node’s IP address or hostname, each node’s /etc/influxdb/influxdb.conf file should have the following settings:

[meta]
  enabled = true
  ...
  bind-address = "<IP>:8088"
  ...
  http-bind-address = "<IP>:8091"

...

[data]
  enabled = true

...

[http]
  ...
  bind-address = "<IP>:8086"

  • Setting [meta] enabled = true and [data] enabled = true makes the node a hybrid node.
  • The [meta] bind-address is the address for cluster-wide communication.
  • The [meta] http-bind-address is the address for consensus (meta) node communication.
  • The [http] bind-address is the address for the HTTP API.
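As a concrete illustration, the relevant sections of one node's influxdb.conf might look as follows, assuming a hypothetical address of 10.0.0.1 (substitute each node's own address or hostname):

```toml
[meta]
  enabled = true
  bind-address = "10.0.0.1:8088"
  http-bind-address = "10.0.0.1:8091"

[data]
  enabled = true

[http]
  bind-address = "10.0.0.1:8086"
```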

NOTE: The hostnames for each machine must be resolvable by all members of the cluster.
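One way to sanity-check resolution is with getent on each machine; a minimal sketch, assuming hypothetical hostnames node1, node2, and node3:

```shell
# Confirm that every cluster member's hostname resolves from this machine.
# node1..node3 are placeholder hostnames; substitute your actual ones.
for host in node1 node2 node3; do
  if getent hosts "$host" >/dev/null; then
    echo "$host resolves"
  else
    echo "$host DOES NOT resolve"
  fi
done
```

Run the same check from every machine in the cluster, since each member must be able to resolve all of the others.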

3. Point all nodes to each other.

On all three nodes, set INFLUXD_OPTS in /etc/default/influxdb:

INFLUXD_OPTS="-join <IP1>:8091,<IP2>:8091,<IP3>:8091"

where IP1 is the first node’s IP address or hostname, IP2 is the second node’s IP address or hostname, and IP3 is the third node’s IP address or hostname.

If the /etc/default/influxdb file does not exist, create it.
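The join list can be assembled in the shell before writing it out; a sketch, assuming the example addresses 10.0.0.1 through 10.0.0.3:

```shell
# Build the INFLUXD_OPTS line from the three node addresses (assumed examples).
IP1=10.0.0.1 IP2=10.0.0.2 IP3=10.0.0.3
line="INFLUXD_OPTS=\"-join ${IP1}:8091,${IP2}:8091,${IP3}:8091\""
echo "$line"
# prints: INFLUXD_OPTS="-join 10.0.0.1:8091,10.0.0.2:8091,10.0.0.3:8091"

# On each node, write it out with:
#   echo "$line" | sudo tee /etc/default/influxdb
```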

4. Start InfluxDB on each node:

sudo service influxdb start

5. Verify that the cluster is healthy.

Issue a SHOW SERVERS query to each node in your cluster using the influx CLI. The output should show that your cluster is made up of three hybrid nodes (hybrid nodes appear as both data_nodes and meta_nodes in the SHOW SERVERS query results):

> SHOW SERVERS
name: data_nodes
----------------
id    http_addr     tcp_addr
1     <IP1>:8086    <IP1>:8088
2     <IP2>:8086    <IP2>:8088
3     <IP3>:8086    <IP3>:8088

name: meta_nodes
----------------
id    http_addr     tcp_addr
1     <IP1>:8091    <IP1>:8088
2     <IP2>:8091    <IP2>:8088
3     <IP3>:8091    <IP3>:8088

Note: The SHOW SERVERS query groups results into data_nodes and meta_nodes. The term meta_nodes is outdated and refers to a node that runs the consensus service.
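Checking every node can be scripted; a sketch, assuming the example addresses above and a CLI build that supports the -host and -execute flags:

```shell
# Run SHOW SERVERS against each node's HTTP API (addresses are assumed examples).
for ip in 10.0.0.1 10.0.0.2 10.0.0.3; do
  echo "--- $ip ---"
  influx -host "$ip" -port 8086 -execute 'SHOW SERVERS'
done
```

If any node reports fewer members than the others, that node has not fully joined the cluster.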

And that’s your three-node cluster!

If you followed the steps above correctly but are still experiencing problems, try restarting each node in your cluster.

Adding nodes to your cluster

Once your initial cluster is healthy and running appropriately, you can start adding nodes to the cluster. Additional nodes can be consensus nodes, data nodes, or hybrid nodes. See Cluster Node Configuration for how to configure the different node types.

Adding a node to your cluster follows the same procedure outlined above. Note that in step 3, when you point your new node to the cluster, INFLUXD_OPTS must list every node in the cluster, including the new node itself.
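For example, a fourth node would set the following in its /etc/default/influxdb, where <IP4> is a placeholder for the new node’s own address, following the same convention as above:

```shell
INFLUXD_OPTS="-join <IP1>:8091,<IP2>:8091,<IP3>:8091,<IP4>:8091"
```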

Removing nodes from your cluster

Please see the reference documentation on DROP SERVER.