Back up and restore data
InfluxDB 3 Enterprise persists all data and metadata to object storage. Back up your data by copying object storage files in a specific order to ensure consistency and reliability.
Currently, InfluxDB 3 Enterprise does not include built-in backup and restore tools. Because files can change while a copy is in progress, we strongly recommend that you follow the procedures and copy order below to minimize the risk of creating inconsistent backups.
Supported object storage
InfluxDB 3 supports the following object storage backends for data persistence:
- File system (local directory)
- AWS S3 and S3-compatible storage (such as MinIO)
- Azure Blob Storage
- Google Cloud Storage
Backup and restore procedures don’t apply to memory-based object stores.
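The procedures below copy files with standard storage tooling, so the same approach works for any supported backend. For reference, the following sketch shows how a node might be started against a file system or S3 backend; the flag names are assumptions based on common InfluxDB 3 options, so verify them with influxdb3 serve --help for your version:
# File system object store (hypothetical cluster and node IDs)
influxdb3 serve --cluster-id cluster01 --node-id node01 \
--object-store file --data-dir /path/to/data
# S3 object store
influxdb3 serve --cluster-id cluster01 --node-id node01 \
--object-store s3 --bucket my-influxdb-bucket \
--aws-access-key-id "$AWS_ACCESS_KEY_ID" \
--aws-secret-access-key "$AWS_SECRET_ACCESS_KEY"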
File structure
| Location | Description |
|---|---|
| **Cluster files** | |
| <cluster_id>/_catalog_checkpoint | Catalog state checkpoint file |
| <cluster_id>/catalogs/ | Catalog log files tracking catalog state changes |
| <cluster_id>/commercial_license | Commercial license file (if applicable) |
| <cluster_id>/trial_or_home_license | Trial or home license file (if applicable) |
| <cluster_id>/enterprise | Enterprise configuration file |
| **Node files** | |
| <node_id>/wal/ | Write-ahead log files containing written data |
| <node_id>/snapshots/ | Snapshot files |
| <node_id>/dbs/<db>/<table>/<date>/ | Parquet files organized by database, table, and time |
| <node_id>/table-snapshots/<db>/<table>/ | Table snapshot files (regenerated on restart; optional for backup) |
| **Compactor node additional files** | |
| <node_id>/cs | Compaction summary files |
| <node_id>/cd | Compaction detail files |
| <node_id>/c | Generation detail and Parquet files |
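To inspect this layout in your own deployment, list the top level of the object store. A quick sketch (the bucket name and data path are placeholders):
# S3-backed deployments
aws s3 ls s3://my-influxdb-bucket/
# File-system-backed deployments
ls /path/to/data/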
Backup process
Copy files in the recommended order to reduce the risk of creating inconsistent backups. Perform backups during downtime or periods of minimal load when possible.
Recommended backup order:
- Compactor node directories (cs, cd, c)
- All nodes’ snapshots, dbs, and wal directories
- Cluster catalog and checkpoint
- License files
S3 backup example
#!/bin/bash
CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
SOURCE_BUCKET="SOURCE_BUCKET"
BACKUP_BUCKET="BACKUP_BUCKET"
BACKUP_PREFIX="backup-$(date +%Y%m%d-%H%M%S)"
# 1. Backup compactor node first
echo "Backing up compactor node..."
aws s3 sync s3://${SOURCE_BUCKET}/${COMPACTOR_NODE}/cs \
s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${COMPACTOR_NODE}/cs/
aws s3 sync s3://${SOURCE_BUCKET}/${COMPACTOR_NODE}/cd \
s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${COMPACTOR_NODE}/cd/
aws s3 sync s3://${SOURCE_BUCKET}/${COMPACTOR_NODE}/c \
s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${COMPACTOR_NODE}/c/
# 2. Backup all nodes (including compactor)
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
echo "Backing up node: ${NODE_ID}"
aws s3 sync s3://${SOURCE_BUCKET}/${NODE_ID}/snapshots \
s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/snapshots/
aws s3 sync s3://${SOURCE_BUCKET}/${NODE_ID}/dbs \
s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/dbs/
aws s3 sync s3://${SOURCE_BUCKET}/${NODE_ID}/wal \
s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/wal/
done
# 3. Backup cluster catalog
echo "Backing up cluster catalog..."
aws s3 sync s3://${SOURCE_BUCKET}/${CLUSTER_ID}/catalogs \
s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/catalogs/
aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/_catalog_checkpoint \
s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/
aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/enterprise \
s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/
# 4. Backup license files (may not exist)
aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/commercial_license \
s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/ 2>/dev/null || true
aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/trial_or_home_license \
s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/ 2>/dev/null || true
echo "Backup completed to s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}"
Replace the following:
- CLUSTER_ID: your cluster ID
- COMPACTOR_NODE: your compactor node ID
- NODE1, NODE2, NODE3: your data node IDs
- SOURCE_BUCKET: your InfluxDB data bucket
- BACKUP_BUCKET: your backup destination bucket
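Before relying on a backup, sanity-check that it landed where you expect. A minimal sketch using the variable names from the script above:
# Verify the backup contains the expected top-level prefixes
aws s3 ls s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/
# Compare object counts for a node between source and backup
aws s3 ls --recursive s3://${SOURCE_BUCKET}/${COMPACTOR_NODE}/ | wc -l
aws s3 ls --recursive s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${COMPACTOR_NODE}/ | wc -l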
File system backup example
#!/bin/bash
CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
DATA_DIR="/path/to/data"
BACKUP_DIR="/backup/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BACKUP_DIR"
# 1. Backup compactor node first
echo "Backing up compactor node..."
mkdir -p "$BACKUP_DIR/${COMPACTOR_NODE}"
cp -r "$DATA_DIR/${COMPACTOR_NODE}/cs" "$BACKUP_DIR/${COMPACTOR_NODE}/"
cp -r "$DATA_DIR/${COMPACTOR_NODE}/cd" "$BACKUP_DIR/${COMPACTOR_NODE}/"
cp -r "$DATA_DIR/${COMPACTOR_NODE}/c" "$BACKUP_DIR/${COMPACTOR_NODE}/"
# 2. Backup all nodes
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
echo "Backing up node: ${NODE_ID}"
mkdir -p "$BACKUP_DIR/${NODE_ID}"
cp -r $DATA_DIR/${NODE_ID}/snapshots "$BACKUP_DIR/${NODE_ID}/"
cp -r $DATA_DIR/${NODE_ID}/dbs "$BACKUP_DIR/${NODE_ID}/"
cp -r $DATA_DIR/${NODE_ID}/wal "$BACKUP_DIR/${NODE_ID}/"
done
# 3. Backup cluster catalog
echo "Backing up cluster catalog..."
mkdir -p "$BACKUP_DIR/${CLUSTER_ID}"
cp -r $DATA_DIR/${CLUSTER_ID}/catalogs "$BACKUP_DIR/${CLUSTER_ID}/"
cp $DATA_DIR/${CLUSTER_ID}/_catalog_checkpoint "$BACKUP_DIR/${CLUSTER_ID}/"
cp $DATA_DIR/${CLUSTER_ID}/enterprise "$BACKUP_DIR/${CLUSTER_ID}/"
# 4. Backup license files (if they exist)
[ -f "$DATA_DIR/${CLUSTER_ID}/commercial_license" ] && \
cp $DATA_DIR/${CLUSTER_ID}/commercial_license "$BACKUP_DIR/${CLUSTER_ID}/"
[ -f "$DATA_DIR/${CLUSTER_ID}/trial_or_home_license" ] && \
cp $DATA_DIR/${CLUSTER_ID}/trial_or_home_license "$BACKUP_DIR/${CLUSTER_ID}/"
echo "Backup completed to $BACKUP_DIR"
Replace the following:
- CLUSTER_ID: your cluster ID
- COMPACTOR_NODE: your compactor node ID
- NODE1, NODE2, NODE3: your data node IDs
- DATA_DIR: your InfluxDB object store directory
Restore process
Restoring overwrites existing data. Always verify you have correct backups before proceeding.
Recommended restore order:
- Cluster catalog and checkpoint
- License files
- All nodes’ snapshots, dbs, and wal directories
- Compactor node directories (cs, cd, c)
S3 restore example
#!/bin/bash
CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
BACKUP_BUCKET="BACKUP_BUCKET"
BACKUP_PREFIX="backup-BACKUP_DATE"
TARGET_BUCKET="TARGET_BUCKET"
# 1. Stop all InfluxDB 3 Enterprise nodes
# Implementation depends on your orchestration
# 2. Restore cluster catalog and license first
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/_catalog_checkpoint \
s3://${TARGET_BUCKET}/${CLUSTER_ID}/
aws s3 sync s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/catalogs \
s3://${TARGET_BUCKET}/${CLUSTER_ID}/catalogs/
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/enterprise \
s3://${TARGET_BUCKET}/${CLUSTER_ID}/
# Restore license files if they exist
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/commercial_license \
s3://${TARGET_BUCKET}/${CLUSTER_ID}/ 2>/dev/null || true
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/trial_or_home_license \
s3://${TARGET_BUCKET}/${CLUSTER_ID}/ 2>/dev/null || true
# 3. Restore all nodes
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
echo "Restoring node: ${NODE_ID}"
aws s3 sync s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/ \
s3://${TARGET_BUCKET}/${NODE_ID}/
done
# 4. Start InfluxDB 3 Enterprise nodes
# Start in order: data nodes, compactor
Replace the following:
- CLUSTER_ID: your cluster ID
- COMPACTOR_NODE: your compactor node ID
- NODE1, NODE2, NODE3: your data node IDs
- BACKUP_DATE: backup timestamp
- BACKUP_BUCKET: bucket containing the backup
- TARGET_BUCKET: target bucket for restoration
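After the nodes start, confirm they came up against the restored data. A minimal sketch, assuming the default HTTP port (8181) and an influxdb3 CLI configured for your deployment; both commands may require a host URL and auth token:
# Check node health
curl --silent http://localhost:8181/health
# Confirm restored databases are visible
influxdb3 show databases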
File system restore example
#!/bin/bash
CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
BACKUP_DIR="/backup/BACKUP_DATE"
DATA_DIR="/path/to/data"
# 1. Stop all InfluxDB 3 Enterprise nodes
# Implementation depends on your deployment method
# 2. Optional: Clear existing data
# ${VAR:?} aborts if the variable is unset or empty, guarding the rm -rf
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
rm -rf "${DATA_DIR:?}/${NODE_ID:?}"/*
done
rm -rf "${DATA_DIR:?}/${CLUSTER_ID:?}"/*
# 3. Restore cluster catalog and license
mkdir -p ${DATA_DIR}/${CLUSTER_ID}
cp ${BACKUP_DIR}/${CLUSTER_ID}/_catalog_checkpoint ${DATA_DIR}/${CLUSTER_ID}/
cp -r ${BACKUP_DIR}/${CLUSTER_ID}/catalogs ${DATA_DIR}/${CLUSTER_ID}/
cp ${BACKUP_DIR}/${CLUSTER_ID}/enterprise ${DATA_DIR}/${CLUSTER_ID}/
# Restore license files if they exist
[ -f "${BACKUP_DIR}/${CLUSTER_ID}/commercial_license" ] && \
cp ${BACKUP_DIR}/${CLUSTER_ID}/commercial_license ${DATA_DIR}/${CLUSTER_ID}/
[ -f "${BACKUP_DIR}/${CLUSTER_ID}/trial_or_home_license" ] && \
cp ${BACKUP_DIR}/${CLUSTER_ID}/trial_or_home_license ${DATA_DIR}/${CLUSTER_ID}/
# 4. Restore all nodes
for NODE_ID in "${ALL_NODES[@]}"; do
echo "Restoring node: ${NODE_ID}"
mkdir -p ${DATA_DIR}/${NODE_ID}
cp -r ${BACKUP_DIR}/${NODE_ID}/* ${DATA_DIR}/${NODE_ID}/
done
# 5. Set correct permissions
# NOTE: Adjust 'influxdb:influxdb' to match your actual deployment user/group configuration.
chown -R influxdb:influxdb ${DATA_DIR}
# 6. Start InfluxDB 3 Enterprise nodes
# Start in order: data nodes, compactor
Replace the following:
- CLUSTER_ID: your cluster ID
- COMPACTOR_NODE: your compactor node ID
- NODE1, NODE2, NODE3: your data node IDs
- BACKUP_DATE: backup directory timestamp
- DATA_DIR: your InfluxDB object store directory
Important considerations
Recovery expectations
Recovery restores the database to a consistent point in time: the latest snapshot included in the backup. Data written after that snapshot may be missing if its WAL files were deleted after the backup was taken. Parquet files that lack a snapshot reference are ignored.
License files
License files are tied to:
- The specific cloud provider (AWS, Azure, GCS)
- The specific bucket name
- For file storage: the exact file path
You cannot restore a license file to a different bucket or path. Contact InfluxData support if you need to migrate to a different bucket.
Docker considerations
When running InfluxDB 3 Enterprise in containers, keep the following in mind (a volume-mount sketch follows this list):
- Volume consistency: Use the same volume mounts for backup and restore operations
- File permissions: Ensure the container user can read restored files (use chown if needed)
- Backup access: Mount a backup directory to copy files from containers to the host
- Node coordination: Stop and start all Enterprise nodes (querier, ingester, compactor) in the correct order
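A minimal docker run sketch showing consistent volume mounts; the image tag, container name, serve flags, and paths are assumptions to adapt to your deployment:
# Mount the object store directory and a backup directory into the container
docker run -d --name influxdb3-enterprise \
-v /path/to/data:/var/lib/influxdb3 \
-v /path/to/backup:/backup \
influxdb:3-enterprise serve \
--cluster-id cluster01 --node-id node01 \
--object-store file --data-dir /var/lib/influxdb3
# Backups can then be copied on the host through the mounted paths
cp -r /path/to/data/cluster01 /path/to/backup/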
Table snapshot files
Files in <node_id>/table-snapshots/ are intentionally excluded from backups (see the exclude sketch after this list):
- These files are periodically overwritten
- They regenerate automatically on server restart
- Including them does no harm but unnecessarily increases backup size
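To skip these files when backing up with the AWS CLI, add an exclude pattern to the sync command. A minimal sketch using the variable names from the S3 backup script above:
# Back up a node's files while skipping table snapshots
aws s3 sync s3://${SOURCE_BUCKET}/${NODE_ID}/ \
s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/ \
--exclude "table-snapshots/*"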
Timing recommendations
- Perform backups during downtime or minimal load periods
- Copying files while the database is active may create inconsistent backups
- Consider using filesystem or storage snapshots if available
- Compression is optional but recommended for long-term storage
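For file system backups, a tar archive is a simple way to compress for long-term storage. A minimal sketch, assuming the BACKUP_DIR variable from the file system backup script above:
# Compress a completed backup directory
tar -czf "${BACKUP_DIR}.tar.gz" -C "$(dirname "$BACKUP_DIR")" "$(basename "$BACKUP_DIR")"
# Extract it before running the restore steps
tar -xzf "${BACKUP_DIR}.tar.gz" -C /backup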