Note: Clustering is now a commercial product called InfluxEnterprise.
This guide briefly introduces the InfluxDB cluster model and provides step-by-step instructions for setting up a cluster.
InfluxDB cluster model
InfluxDB supports arbitrarily sized clusters and any replication factor from 1 to the number of nodes in the cluster.
There are three types of nodes in an InfluxDB cluster: consensus nodes, data nodes, and hybrid nodes. A cluster must have an odd number of nodes running the consensus service to form a Raft consensus group and remain in a healthy state.
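The odd-number requirement comes from Raft's majority rule: the cluster stays healthy only while a majority (a quorum) of the consensus nodes is reachable. A quick sketch of the arithmetic:

```shell
# Raft quorum: for n consensus nodes, a majority of (n / 2) + 1 must be
# reachable, so the cluster tolerates n - quorum node failures. Note that
# an even count adds no fault tolerance over the odd count below it.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  echo "consensus nodes: $n, quorum: $quorum, failures tolerated: $(( n - quorum ))"
done
```

This is why three or five consensus nodes are the usual choices: four nodes tolerate no more failures than three.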
Hardware requirements vary for the different node types. See Hardware Sizing for cluster hardware requirements.
The following steps configure and start an InfluxDB cluster with three hybrid nodes. If you're interested in running any of the other node types, see Cluster Node Configuration for their configuration details. Note that your first three nodes must be either hybrid nodes or consensus nodes.
We assume that you are running some version of Linux. While it is possible to build a cluster on a single server, doing so is not recommended.
Note: Always use the most recent release for clustering as there are significant improvements with each release.
1 Install InfluxDB on three machines. Do not start the daemon on any of the machines.
2 Configure the three nodes.
Where <IP> is the node's IP address or hostname, each node's /etc/influxdb/influxdb.conf file should have the following settings:
```toml
[meta]
  enabled = true
  ...
  bind-address = "<IP>:8088"
  http-bind-address = "<IP>:8091"
  ...

[data]
  enabled = true

[http]
  ...
  bind-address = "<IP>:8086"
```
- [meta] enabled = true and [data] enabled = true together make the node a hybrid node.
- [meta] bind-address is the address for cluster-wide communication.
- [meta] http-bind-address is the address for meta node communication.
- [http] bind-address is the address for the HTTP API.
NOTE: The hostnames for each machine must be resolvable by all members of the cluster.
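One way to sanity-check resolvability before starting the daemons is to run a loop like the following on every node (node-1 through node-3 are hypothetical placeholders; substitute your machines' hostnames):

```shell
# Verify that each cluster member's hostname resolves on this machine.
# node-1 .. node-3 are placeholder names; substitute your own hosts.
for host in node-1 node-2 node-3; do
  if getent hosts "$host" > /dev/null; then
    echo "$host resolves"
  else
    echo "WARNING: $host does not resolve"
  fi
done
```

Any WARNING line should be fixed (for example via DNS or /etc/hosts entries) before continuing.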
3 Start InfluxDB on the first node:
sudo service influxdb start
4 Point the second and third nodes to the first node.
On the second and third nodes, set INFLUXD_OPTS in /etc/default/influxdb so that the node joins the first node, where IP1 is the first node's IP address or hostname. If the /etc/default/influxdb file does not exist, create it.
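For illustration, the file's contents might look like the following. The -join flag name is an assumption on our part (verify it with influxd -h on your version); the 8091 port matches the [meta] http-bind-address configured in step 2:

```shell
# /etc/default/influxdb on the second and third nodes.
# Points the starting daemon at the first node's meta HTTP address.
# The "-join" flag name is assumed; verify it against your influxd version.
INFLUXD_OPTS="-join <IP1>:8091"
```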
5 Start InfluxDB on the second and third nodes:
sudo service influxdb start
6 Verify that the cluster is healthy.
Issue a SHOW SERVERS query against each node in your cluster. The output should show that your cluster is made up of three hybrid nodes (hybrid nodes appear as both data_nodes and meta_nodes in the SHOW SERVERS query results):
```
> SHOW SERVERS
name: data_nodes
----------------
id    http_addr    tcp_addr
1     IP1:8086     IP1:8088
3     IP2:8086     IP2:8088
5     IP3:8086     IP3:8088

name: meta_nodes
----------------
id    http_addr    tcp_addr
1     IP1:8091     IP1:8088
2     IP2:8091     IP2:8088
4     IP3:8091     IP3:8088
```
- Currently, the SHOW SERVERS query groups results into data_nodes and meta_nodes. The term meta_nodes is outdated and refers to a node that runs the consensus service.
- The irregular node id numbers in the SHOW SERVERS results are a known issue and a fix is underway. For now, it may be easier to identify data nodes and consensus nodes by the IP addresses and ports reported in the query output.
And that’s your three node cluster!
If you followed the steps above correctly but are still experiencing problems, try restarting each node in your cluster.
Adding nodes to your cluster
Once your initial cluster is healthy and running appropriately, you can start adding nodes to the cluster. Additional nodes can be consensus nodes, data nodes, or hybrid nodes. See Cluster Node Configuration for how to configure the different node types.
Adding a node to your cluster follows the same procedure that we outlined above.
Note that in step 4, when you point your new node at the cluster, you must set INFLUXD_OPTS to the hostname:port pair of a pre-existing cluster member that is running the consensus service. If you specify more than one hostname:port pair in a comma-delimited list, InfluxDB will try the additional pairs if it cannot connect with the first.
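As a hedged sketch, the comma-delimited form might look like this in /etc/default/influxdb (again, the -join flag name is our assumption; the 8091 ports match the meta http-bind-address from the initial setup):

```shell
# Fallback join targets: if <IP1> is unreachable, <IP2> then <IP3> are tried.
# The "-join" flag name is assumed; verify it against your influxd version.
INFLUXD_OPTS="-join <IP1>:8091,<IP2>:8091,<IP3>:8091"
```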
Removing nodes from your cluster
Please see the reference documentation.