Create a multi-node cluster

Create a multi-node InfluxDB 3 Enterprise cluster for high availability, performance, and workload isolation. Configure nodes with specific modes (ingest, query, process, compact) to optimize for your use case.

Prerequisites

  • Shared object store
  • Network connectivity between nodes

Basic multi-node setup

## NODE 1 compacts stored data

# Example variables
# node-id: 'host01'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

influxdb3 serve \
  --node-id host01 \
  --cluster-id cluster01 \
  --mode ingest,query,compact \
  --object-store s3 \
  --bucket influxdb-3-enterprise-storage \
  --http-bind localhost:8181 \
  --aws-access-key-id <AWS_ACCESS_KEY_ID> \
  --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>

## NODE 2 handles writes and queries

# Example variables
# node-id: 'host02'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

influxdb3 serve \
  --node-id host02 \
  --cluster-id cluster01 \
  --mode ingest,query \
  --object-store s3 \
  --bucket influxdb-3-enterprise-storage \
  --http-bind localhost:8282 \
  --aws-access-key-id <AWS_ACCESS_KEY_ID> \
  --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>

Learn how to set up a multi-node cluster for different use cases, including high availability, read replicas, processing data, and workload isolation.

Create an object store

With the InfluxDB 3 Enterprise diskless architecture, all data is stored in a common object store. In a multi-node cluster, you connect all nodes to the same object store.

Enterprise supports the following object stores:

  • AWS S3 (or S3-compatible)
  • Azure Blob Storage
  • Google Cloud Storage

Refer to your object storage provider’s documentation for setting up an object store.

Connect to your object store

When starting your InfluxDB 3 Enterprise node, include provider-specific options for connecting to your object store. For example:

To use an AWS S3 or S3-compatible object store, provide the following options with your influxdb3 serve command:

  • --object-store: s3
  • --bucket: Your AWS S3 bucket name
  • --aws-access-key-id: Your AWS access key ID
    (can also be defined using the AWS_ACCESS_KEY_ID environment variable)
  • --aws-secret-access-key: Your AWS secret access key
    (can also be defined using the AWS_SECRET_ACCESS_KEY environment variable)
influxdb3 serve \
  # ...
  --object-store s3 \
  --bucket AWS_BUCKET_NAME \
  --aws-access-key-id AWS_ACCESS_KEY_ID \
  --aws-secret-access-key AWS_SECRET_ACCESS_KEY

For information about other S3-specific settings, see Configuration options - AWS.
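Because the AWS credentials can also be supplied through environment variables, an equivalent invocation can omit the credential flags. The following is a sketch of that pattern, following the abbreviated `# ...` style of the examples above; the bucket name and credential values are placeholders:

```shell
# Provide AWS credentials through the environment instead of CLI flags.
# Replace the placeholder values with your own credentials and bucket name.
export AWS_ACCESS_KEY_ID=AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=AWS_SECRET_ACCESS_KEY

influxdb3 serve \
  # ...
  --object-store s3 \
  --bucket AWS_BUCKET_NAME
```

This keeps secrets out of shell history and process listings, which is often preferable in production deployments.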

To use Azure Blob Storage as your object store, provide the following options with your influxdb3 serve command:

  • --object-store: azure
  • --bucket: Your Azure Blob Storage container name
  • --azure-storage-account: Your Azure Blob Storage storage account name
    (can also be defined using the AZURE_STORAGE_ACCOUNT environment variable)
  • --azure-storage-access-key: Your Azure Blob Storage access key
    (can also be defined using the AZURE_STORAGE_ACCESS_KEY environment variable)
influxdb3 serve \
  # ...
  --object-store azure \
  --bucket AZURE_CONTAINER_NAME \
  --azure-storage-account AZURE_STORAGE_ACCOUNT \
  --azure-storage-access-key AZURE_STORAGE_ACCESS_KEY

To use Google Cloud Storage as your object store, provide the following options with your influxdb3 serve command:

  • --object-store: google
  • --bucket: Your Google Cloud Storage bucket name
  • --google-service-account: The path to your Google credentials JSON file (can also be defined using the GOOGLE_SERVICE_ACCOUNT environment variable)
influxdb3 serve \
  # ...
  --object-store google \
  --bucket GOOGLE_BUCKET_NAME \
  --google-service-account GOOGLE_SERVICE_ACCOUNT

Server modes

InfluxDB 3 Enterprise modes determine what subprocesses the Enterprise node runs. These subprocesses fulfill required tasks including data ingestion, query processing, compaction, and running the processing engine.

The influxdb3 serve --mode option defines what subprocesses a node runs. Each node can run in one or more of the following modes:

  • all (default): Runs all necessary subprocesses.

  • ingest: Runs the data ingestion subprocess to handle writes.

  • query: Runs the query processing subprocess to handle queries.

  • process: Runs the processing engine subprocess to trigger and execute plugins.

  • compact: Runs the compactor subprocess to optimize data in object storage.

    Only one node in your cluster can run in compact mode.

Server mode examples

Configure a node to only handle write requests

influxdb3 serve \
  # ...
  --mode ingest

Configure a node to only run the Compactor

influxdb3 serve \
  # ...
  --mode compact

Configure a node to handle queries and run the processing engine

influxdb3 serve \
  # ...
  --mode query,process

Cluster configuration examples

High availability cluster

A minimum of two nodes is required for basic high availability (HA), with both nodes reading and writing data.

Basic high availability setup

In a basic HA setup:

  • Two nodes both write data to the same object store and both handle queries
  • Node 1 and Node 2 are read replicas that read from each other’s object store directories
  • One of the nodes is designated as the Compactor node

Only one node can be designated as the Compactor. Compacted data is meant for a single writer, and many readers.

The following examples show how to configure and start two nodes for a basic HA setup.

  • Node 1 is for compaction
  • Node 2 is for ingest and query
## NODE 1

# Example variables
# node-id: 'host01'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

influxdb3 serve \
  --node-id host01 \
  --cluster-id cluster01 \
  --mode ingest,query,compact \
  --object-store s3 \
  --bucket influxdb-3-enterprise-storage \
  --http-bind localhost:8181 \
  --aws-access-key-id <AWS_ACCESS_KEY_ID> \
  --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>

## NODE 2

# Example variables
# node-id: 'host02'
# cluster-id: 'cluster01'
# bucket: 'influxdb-3-enterprise-storage'

influxdb3 serve \
  --node-id host02 \
  --cluster-id cluster01 \
  --mode ingest,query \
  --object-store s3 \
  --bucket influxdb-3-enterprise-storage \
  --http-bind localhost:8282 \
  --aws-access-key-id <AWS_ACCESS_KEY_ID> \
  --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>

After the nodes have started, querying either node returns data written to both nodes, and NODE 1 runs compaction. To add nodes to this setup, start more read replicas with the same cluster ID.
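For example, a third read replica added to this cluster might be started as follows; the node ID and port are illustrative, and the credentials are placeholders:

```shell
## NODE 3 — additional read replica (illustrative)

influxdb3 serve \
  --node-id host03 \
  --cluster-id cluster01 \
  --mode ingest,query \
  --object-store s3 \
  --bucket influxdb-3-enterprise-storage \
  --http-bind localhost:8383 \
  --aws-access-key-id <AWS_ACCESS_KEY_ID> \
  --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
```

The node joins the cluster because it shares the same cluster ID and object store; only the node ID and bind address differ.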

High availability with a dedicated Compactor

Data compaction in InfluxDB 3 Enterprise is one of the more computationally demanding operations. To ensure stable performance in ingest and query nodes, set up a compactor-only node to isolate the compaction workload.

Dedicated Compactor setup

The following example sets up high availability with a dedicated Compactor node:

  1. Start two read-write nodes as read replicas, similar to the previous example.

    ## NODE 1 — Writer/Reader Node #1
    
    # Example variables
    # node-id: 'host01'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'
    
    influxdb3 serve \
      --node-id host01 \
      --cluster-id cluster01 \
      --mode ingest,query \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --http-bind localhost:8181 \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    
    ## NODE 2 — Writer/Reader Node #2
    
    # Example variables
    # node-id: 'host02'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'
    
    influxdb3 serve \
      --node-id host02 \
      --cluster-id cluster01 \
      --mode ingest,query \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --http-bind localhost:8282 \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    
  2. Start the dedicated Compactor node with the --mode compact option to ensure the node only runs compaction.

    ## NODE 3 — Compactor Node
    
    # Example variables
    # node-id: 'host03'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'
    
    influxdb3 serve \
      --node-id host03 \
      --cluster-id cluster01 \
      --mode compact \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    

High availability with read replicas and a dedicated Compactor

For a robust and effective setup for managing time-series data, you can run ingest nodes alongside query nodes and a dedicated Compactor node.

Workload isolation setup

  1. Start ingest nodes with the ingest mode.

    Send all write requests to only your ingest nodes.

    ## NODE 1 — Writer Node #1
    
    # Example variables
    # node-id: 'host01'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'
    
    influxdb3 serve \
      --node-id host01 \
      --cluster-id cluster01 \
      --mode ingest \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --http-bind localhost:8181 \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    
    ## NODE 2 — Writer Node #2
    
    # Example variables
    # node-id: 'host02'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'
    
    influxdb3 serve \
      --node-id host02 \
      --cluster-id cluster01 \
      --mode ingest \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --http-bind localhost:8282 \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    
  2. Start the dedicated Compactor node with the compact mode.

    ## NODE 3 — Compactor Node
    
    # Example variables
    # node-id: 'host03'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'
    
    influxdb3 serve \
      --node-id host03 \
      --cluster-id cluster01 \
      --mode compact \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    
  3. Finally, start the query nodes using the query mode.

    Send all query requests to only your query nodes.

    ## NODE 4 — Read Node #1
    
    # Example variables
    # node-id: 'host04'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'
    
    influxdb3 serve \
      --node-id host04 \
      --cluster-id cluster01 \
      --mode query \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --http-bind localhost:8383 \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    
    ## NODE 5 — Read Node #2
    
    # Example variables
    # node-id: 'host05'
    # cluster-id: 'cluster01'
    # bucket: 'influxdb-3-enterprise-storage'
    
    influxdb3 serve \
      --node-id host05 \
      --cluster-id cluster01 \
      --mode query \
      --object-store s3 \
      --bucket influxdb-3-enterprise-storage \
      --http-bind localhost:8484 \
      --aws-access-key-id <AWS_ACCESS_KEY_ID> \
      --aws-secret-access-key <AWS_SECRET_ACCESS_KEY>
    

Writing and querying in multi-node clusters

The influxdb3 CLI targets port 8181 by default, so write and query requests to a node bound to the default port work without changing any of the commands.

Specify hosts for write and query requests

To benefit from this multi-node, isolated architecture:

  • Send write requests to a node that you have designated as an ingester.
  • Send query requests to a node that you have designated as a querier.

When running multiple local instances for testing or separate nodes in production, specifying the host ensures writes and queries are routed to the correct instance.

# Example querying a specific host
# HTTP-bound Port: 8585
influxdb3 query \
  --host http://localhost:8585 \
  --token AUTH_TOKEN \
  --database DATABASE_NAME \
  "QUERY"

Replace the following placeholders with your values:

  • http://localhost:8585: the host and port of the node to query
  • AUTH_TOKEN: your database token with permission to query the specified database
  • DATABASE_NAME: the name of the database to query
  • QUERY: the SQL or InfluxQL query to run against the database
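The routing rule above can also be sketched as a small shell helper; the function name and node URLs are illustrative examples, not part of the influxdb3 CLI:

```shell
# Illustrative routing helper: writes go to an ingest node, queries go to a
# query node. Replace the URLs with your own node addresses.
INGEST_HOST="http://localhost:8181"
QUERY_HOST="http://localhost:8383"

host_for() {
  case "$1" in
    write) echo "$INGEST_HOST" ;;
    query) echo "$QUERY_HOST" ;;
    *) echo "unknown request type: $1" >&2; return 1 ;;
  esac
}

host_for write   # prints http://localhost:8181
host_for query   # prints http://localhost:8383
```

A wrapper script could then invoke, for example, `influxdb3 query --host "$(host_for query)" ...` so that queries never reach ingest-only nodes.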
