Back up and restore data

InfluxDB 3 Enterprise persists all data and metadata to object storage. Back up your data by copying object storage files in a specific order to ensure consistency and reliability.

Currently, InfluxDB 3 Enterprise does not include built-in backup and restore tools. Because copying files while the database is active can capture data in transition, we strongly recommend following the procedures and copy order below to minimize the risk of creating an inconsistent backup.

Supported object storage

InfluxDB 3 supports the following object storage backends for data persistence:

  • File system (local directory)
  • AWS S3 and S3-compatible storage (for example, MinIO)
  • Azure Blob Storage
  • Google Cloud Storage

Backup and restore procedures don’t apply to memory-based object stores.
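The backend is selected when the server starts. As a hedged sketch (flag names follow the `influxdb3` CLI but may vary by version, so verify with `influxdb3 serve --help`; `cluster0`, `node0`, and `my-influxdb-bucket` are placeholder values):

```shell
# Hypothetical examples; verify flag names with `influxdb3 serve --help`.

# File system backend (local directory)
influxdb3 serve --cluster-id cluster0 --node-id node0 \
  --object-store file --data-dir /path/to/data

# S3 backend (credentials can also be supplied via the environment)
influxdb3 serve --cluster-id cluster0 --node-id node0 \
  --object-store s3 --bucket my-influxdb-bucket
```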

File structure

Cluster files:

  • <cluster_id>/_catalog_checkpoint: Catalog state checkpoint file
  • <cluster_id>/catalog/: Catalog log files tracking catalog state changes
  • <cluster_id>/commercial_license: Commercial license file (if applicable)
  • <cluster_id>/trial_or_home_license: Trial or home license file (if applicable)
  • <cluster_id>/enterprise: Enterprise configuration file

Node files:

  • <node_id>/wal/: Write-ahead log files containing written data
  • <node_id>/snapshots/: Snapshot files
  • <node_id>/dbs/<db>/<table>/<date>/: Parquet files organized by database, table, and time
  • <node_id>/table-snapshots/<db>/<table>/: Table snapshot files (regenerated on restart, optional for backup)

Additional compactor node files:

  • <node_id>/cs: Compaction summary files
  • <node_id>/cd: Compaction detail files
  • <node_id>/c: Generation detail and Parquet files
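To visualize this layout, the following sketch recreates the directory structure locally using placeholder IDs (`cluster0`, `node0`) and a sample database and table that are not part of any real deployment:

```shell
# Build a miniature copy of the object store layout with placeholder names.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/cluster0/catalog" \
         "$DEMO/node0/wal" \
         "$DEMO/node0/snapshots" \
         "$DEMO/node0/dbs/mydb/mytable/2024-01-01" \
         "$DEMO/node0/table-snapshots/mydb/mytable"
touch "$DEMO/cluster0/_catalog_checkpoint" "$DEMO/cluster0/enterprise"

# Print the resulting tree
( cd "$DEMO" && find . -mindepth 1 | sort )
```

In a real deployment the same paths exist under your bucket, container, or data directory rather than a temporary directory.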

Backup process

Copy files in the recommended order to reduce risk of creating inconsistent backups. Perform backups during downtime or minimal load periods when possible.

Recommended backup order:

  1. Compactor node directories (cs, cd, c)
  2. All nodes’ snapshots, dbs, and wal directories
  3. Cluster catalog and checkpoint
  4. License files
S3 backup example

#!/bin/bash
CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
SOURCE_BUCKET="SOURCE_BUCKET"
BACKUP_BUCKET="BACKUP_BUCKET"
BACKUP_PREFIX="backup-$(date +%Y%m%d-%H%M%S)"

# 1. Back up the compactor node first
echo "Backing up compactor node..."
aws s3 sync s3://${SOURCE_BUCKET}/${COMPACTOR_NODE}/cs \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${COMPACTOR_NODE}/cs/
aws s3 sync s3://${SOURCE_BUCKET}/${COMPACTOR_NODE}/cd \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${COMPACTOR_NODE}/cd/
aws s3 sync s3://${SOURCE_BUCKET}/${COMPACTOR_NODE}/c \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${COMPACTOR_NODE}/c/

# 2. Back up all nodes (including the compactor)
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
  echo "Backing up node: ${NODE_ID}"
  aws s3 sync s3://${SOURCE_BUCKET}/${NODE_ID}/snapshots \
    s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/snapshots/
  aws s3 sync s3://${SOURCE_BUCKET}/${NODE_ID}/dbs \
    s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/dbs/
  aws s3 sync s3://${SOURCE_BUCKET}/${NODE_ID}/wal \
    s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/wal/
done

# 3. Back up the cluster catalog
echo "Backing up cluster catalog..."
aws s3 sync s3://${SOURCE_BUCKET}/${CLUSTER_ID}/catalog \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/catalog/
aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/_catalog_checkpoint \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/
aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/enterprise \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/

# 4. Back up license files (may not exist)
aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/commercial_license \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/ 2>/dev/null || true
aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/trial_or_home_license \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/ 2>/dev/null || true

echo "Backup completed to s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}"

Replace the following:

  • CLUSTER_ID: your cluster ID
  • COMPACTOR_NODE: your compactor node ID
  • NODE1, NODE2, NODE3: your data node IDs
  • SOURCE_BUCKET: your InfluxDB data bucket
  • BACKUP_BUCKET: your backup destination bucket
File system backup example

#!/bin/bash
CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
DATA_DIR="/path/to/data"
BACKUP_DIR="/backup/$(date +%Y%m%d-%H%M%S)"

mkdir -p "$BACKUP_DIR"

# 1. Back up the compactor node first
echo "Backing up compactor node..."
mkdir -p "$BACKUP_DIR/${COMPACTOR_NODE}"
cp -r $DATA_DIR/${COMPACTOR_NODE}/cs "$BACKUP_DIR/${COMPACTOR_NODE}/"
cp -r $DATA_DIR/${COMPACTOR_NODE}/cd "$BACKUP_DIR/${COMPACTOR_NODE}/"
cp -r $DATA_DIR/${COMPACTOR_NODE}/c "$BACKUP_DIR/${COMPACTOR_NODE}/"

# 2. Back up all nodes (including the compactor)
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
  echo "Backing up node: ${NODE_ID}"
  mkdir -p "$BACKUP_DIR/${NODE_ID}"
  cp -r $DATA_DIR/${NODE_ID}/snapshots "$BACKUP_DIR/${NODE_ID}/"
  cp -r $DATA_DIR/${NODE_ID}/dbs "$BACKUP_DIR/${NODE_ID}/"
  cp -r $DATA_DIR/${NODE_ID}/wal "$BACKUP_DIR/${NODE_ID}/"
done

# 3. Back up the cluster catalog
echo "Backing up cluster catalog..."
mkdir -p "$BACKUP_DIR/${CLUSTER_ID}"
cp -r $DATA_DIR/${CLUSTER_ID}/catalog "$BACKUP_DIR/${CLUSTER_ID}/"
cp $DATA_DIR/${CLUSTER_ID}/_catalog_checkpoint "$BACKUP_DIR/${CLUSTER_ID}/"
cp $DATA_DIR/${CLUSTER_ID}/enterprise "$BACKUP_DIR/${CLUSTER_ID}/"

# 4. Back up license files (if they exist)
[ -f "$DATA_DIR/${CLUSTER_ID}/commercial_license" ] && \
  cp $DATA_DIR/${CLUSTER_ID}/commercial_license "$BACKUP_DIR/${CLUSTER_ID}/"
[ -f "$DATA_DIR/${CLUSTER_ID}/trial_or_home_license" ] && \
  cp $DATA_DIR/${CLUSTER_ID}/trial_or_home_license "$BACKUP_DIR/${CLUSTER_ID}/"

echo "Backup completed to $BACKUP_DIR"

Replace the following:

  • CLUSTER_ID: your cluster ID
  • COMPACTOR_NODE: your compactor node ID
  • NODE1, NODE2, NODE3: your data node IDs
  • /path/to/data: your InfluxDB data directory
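After a file system backup completes, you can sanity-check it by recursively comparing the backup against the source. The following sketch demonstrates the check on throwaway demo directories; substitute your real data and backup paths, and note that files written after the backup started will show up as differences:

```shell
# Demo stand-ins for /path/to/data and /backup/<timestamp>
DATA_DIR=$(mktemp -d)
BACKUP_DIR=$(mktemp -d)
mkdir -p "$DATA_DIR/node0/dbs"
echo demo > "$DATA_DIR/node0/dbs/file.parquet"

# Back up one node directory, then verify the copy
cp -r "$DATA_DIR/node0" "$BACKUP_DIR/"
if diff -r "$DATA_DIR/node0" "$BACKUP_DIR/node0" >/dev/null; then
  echo "backup verified"
fi
```

`diff -r` exits non-zero if any file differs or is missing, which makes it easy to script the check per node directory.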

Restore process

Restoring overwrites existing data. Always verify you have correct backups before proceeding.

Recommended restore order:

  1. Cluster catalog and checkpoint
  2. License files
  3. All nodes’ snapshots, dbs, and wal directories
  4. Compactor node directories (cs, cd, c)

S3 restore example

#!/bin/bash
CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
BACKUP_BUCKET="BACKUP_BUCKET"
BACKUP_PREFIX="backup-BACKUP_DATE"
TARGET_BUCKET="TARGET_BUCKET"

# 1. Stop all InfluxDB 3 Enterprise nodes
#    (implementation depends on your orchestration)

# 2. Restore the cluster catalog and license first
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/_catalog_checkpoint \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/
aws s3 sync s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/catalog \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/catalog/
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/enterprise \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/

# Restore license files if they exist
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/commercial_license \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/ 2>/dev/null || true
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/trial_or_home_license \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/ 2>/dev/null || true

# 3. Restore all nodes
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
  echo "Restoring node: ${NODE_ID}"
  aws s3 sync s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/ \
    s3://${TARGET_BUCKET}/${NODE_ID}/
done

# 4. Start InfluxDB 3 Enterprise nodes
#    Start in order: data nodes, then compactor

Replace the following:

  • CLUSTER_ID: your cluster ID
  • COMPACTOR_NODE: your compactor node ID
  • NODE1, NODE2, NODE3: your data node IDs
  • BACKUP_DATE: backup timestamp
  • BACKUP_BUCKET: bucket containing backup
  • TARGET_BUCKET: target bucket for restoration

File system restore example

#!/bin/bash
CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
BACKUP_DIR="/backup/BACKUP_DATE"
DATA_DIR="/path/to/data"

# 1. Stop all InfluxDB 3 Enterprise nodes
#    (implementation depends on your deployment method)

# 2. Optional: Clear existing data
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
  rm -rf ${DATA_DIR}/${NODE_ID}/*
done
rm -rf ${DATA_DIR}/${CLUSTER_ID}/*

# 3. Restore the cluster catalog and license
mkdir -p ${DATA_DIR}/${CLUSTER_ID}
cp ${BACKUP_DIR}/${CLUSTER_ID}/_catalog_checkpoint ${DATA_DIR}/${CLUSTER_ID}/
cp -r ${BACKUP_DIR}/${CLUSTER_ID}/catalog ${DATA_DIR}/${CLUSTER_ID}/
cp ${BACKUP_DIR}/${CLUSTER_ID}/enterprise ${DATA_DIR}/${CLUSTER_ID}/

# Restore license files if they exist
[ -f "${BACKUP_DIR}/${CLUSTER_ID}/commercial_license" ] && \
  cp ${BACKUP_DIR}/${CLUSTER_ID}/commercial_license ${DATA_DIR}/${CLUSTER_ID}/
[ -f "${BACKUP_DIR}/${CLUSTER_ID}/trial_or_home_license" ] && \
  cp ${BACKUP_DIR}/${CLUSTER_ID}/trial_or_home_license ${DATA_DIR}/${CLUSTER_ID}/

# 4. Restore all nodes
for NODE_ID in "${ALL_NODES[@]}"; do
  echo "Restoring node: ${NODE_ID}"
  mkdir -p ${DATA_DIR}/${NODE_ID}
  cp -r ${BACKUP_DIR}/${NODE_ID}/* ${DATA_DIR}/${NODE_ID}/
done

# 5. Set correct permissions
# NOTE: Adjust 'influxdb:influxdb' to match your actual deployment user/group configuration.
chown -R influxdb:influxdb ${DATA_DIR}

# 6. Start InfluxDB 3 Enterprise nodes
#    Start in order: data nodes, then compactor

Replace the following:

  • CLUSTER_ID: your cluster ID
  • COMPACTOR_NODE: your compactor node ID
  • NODE1, NODE2, NODE3: your data node IDs
  • BACKUP_DATE: backup directory timestamp
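Before restarting nodes after a file system restore, it can help to confirm that the required cluster files are in place. A minimal sketch, using a temporary demo directory and a placeholder cluster0 ID in place of your real DATA_DIR and cluster ID:

```shell
# Demo setup standing in for a completed restore
DATA_DIR=$(mktemp -d)
CLUSTER_ID="cluster0"
mkdir -p "$DATA_DIR/$CLUSTER_ID/catalog"
touch "$DATA_DIR/$CLUSTER_ID/_catalog_checkpoint" "$DATA_DIR/$CLUSTER_ID/enterprise"

# Check that the catalog checkpoint, catalog directory, and enterprise file exist
for f in _catalog_checkpoint enterprise; do
  [ -f "$DATA_DIR/$CLUSTER_ID/$f" ] || { echo "missing $f"; exit 1; }
done
[ -d "$DATA_DIR/$CLUSTER_ID/catalog" ] || { echo "missing catalog/"; exit 1; }
echo "catalog files present"
```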

Important considerations

Recovery expectations

Recovery restores the system to a consistent point in time: the latest snapshot included in the backup. Data written after that snapshot may be missing if its WAL files were deleted after the backup was taken. Parquet files without a snapshot reference are ignored.
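To see which recovery point a backup can reach, you can inspect the newest snapshot it contains. The sketch below lists a snapshots directory and picks the lexically last entry; the directory and the file names (00001.info.json, 00002.info.json) are made-up placeholders, and real snapshot file names may differ:

```shell
# Demo snapshots directory with placeholder file names
SNAP_DIR=$(mktemp -d)
touch "$SNAP_DIR/00001.info.json" "$SNAP_DIR/00002.info.json"

# The lexically last entry stands in for the newest snapshot
LATEST=$(ls "$SNAP_DIR" | sort | tail -n 1)
echo "newest snapshot file: $LATEST"
```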

License files

License files are tied to the complete object store configuration and the cluster ID, not just the bucket or container name. Changing any component of your object store configuration or restoring to a different cluster invalidates your license.

License binding varies by cloud provider:

  • AWS S3: Endpoint + bucket name + region + cluster ID
  • Azure Blob Storage: Storage account name + container name + cluster ID
  • Google Cloud Storage: GCS base URL + bucket name + cluster ID
  • File storage: Exact file path + cluster ID

For disaster recovery planning:

  • Pre-provision a DR license: Request a license for your DR storage configuration before an incident occurs.
  • Contact support for migrations: If you need to restore to a different storage account, endpoint, bucket, or cluster, contact InfluxData support to obtain a new license.

Docker considerations

When running InfluxDB 3 Enterprise in containers:

  • Volume consistency: Use the same volume mounts for backup and restore operations
  • File permissions: Ensure container user can read restored files (use chown if needed)
  • Backup access: Mount a backup directory to copy files from containers to the host
  • Node coordination: Stop and start all Enterprise nodes (querier, ingester, compactor) in the correct order
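As a hedged sketch of the backup-access point above, data can be copied out of a stopped container by mounting a host backup directory; the container name (influxdb3-enterprise), volume name (influxdb3-data), and all paths here are hypothetical and must be adjusted to your setup:

```shell
# Stop the container so files are not written during the copy
docker stop influxdb3-enterprise

# Copy the data volume's contents to a host backup directory
docker run --rm \
  -v influxdb3-data:/var/lib/influxdb3 \
  -v /backup/20240101-000000:/backup \
  alpine cp -r /var/lib/influxdb3/. /backup/

docker start influxdb3-enterprise
```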

Table snapshot files

Files in <node_id>/table-snapshots/ are intentionally excluded from backup:

  • These files are periodically overwritten
  • They regenerate automatically on server restart
  • Including them does no harm, but it unnecessarily increases backup size
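If you copy whole node directories, you can skip table-snapshots during the copy. A sketch using find with -prune, run here on throwaway demo directories (substitute your real source and destination):

```shell
# Demo source and destination directories
SRC=$(mktemp -d)
DEST=$(mktemp -d)
mkdir -p "$SRC/dbs/mydb" "$SRC/table-snapshots/mydb"
echo data > "$SRC/dbs/mydb/file.parquet"
echo snap > "$SRC/table-snapshots/mydb/snap.json"

# Copy every file except those under table-snapshots/
find "$SRC" -type d -name table-snapshots -prune -o -type f -print |
while read -r f; do
  rel="${f#"$SRC"/}"
  mkdir -p "$DEST/$(dirname "$rel")"
  cp "$f" "$DEST/$rel"
done
```

When syncing to S3, an `aws s3 sync --exclude` pattern can serve the same purpose.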

Timing recommendations

  • Perform backups during downtime or minimal load periods
  • Copying files while the database is active may create inconsistent backups
  • Consider using filesystem or storage snapshots if available
  • Compression is optional but recommended for long-term storage
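For example, a completed backup directory can be compressed into a single archive with tar; the sketch below builds a throwaway demo directory, so substitute your real backup path:

```shell
# Demo backup directory standing in for /backup/<timestamp>
BACKUP_DIR="$(mktemp -d)/backup-20240101-000000"
mkdir -p "$BACKUP_DIR/node0/dbs"
echo demo > "$BACKUP_DIR/node0/dbs/file.parquet"

# Create a compressed archive alongside the directory
tar -czf "${BACKUP_DIR}.tar.gz" -C "$(dirname "$BACKUP_DIR")" "$(basename "$BACKUP_DIR")"
ls -lh "${BACKUP_DIR}.tar.gz"
```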


