Back up and restore data

InfluxDB 3 Enterprise persists all data and metadata to object storage. Back up your data by copying object storage files in a specific order to ensure consistency and reliability.

Currently, InfluxDB 3 Enterprise does not include built-in backup and restore tools. Because files can change while you are copying them, we strongly recommend that you follow the procedures and copy order below to minimize the risk of creating inconsistent backups.

Supported object storage

InfluxDB 3 supports the following object storage backends for data persistence:

  • File system (local directory)
  • AWS S3 and S3-compatible storage (for example, MinIO)
  • Azure Blob Storage
  • Google Cloud Storage

Backup and restore procedures don’t apply to memory-based object stores.
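For example, the same server binary can target different backends at startup. The following is a minimal sketch of the file system and S3 cases; verify the exact flag names against influxdb3 serve --help for your version:

# File system (local directory) backend
influxdb3 serve \
  --cluster-id cluster01 \
  --node-id node01 \
  --object-store file \
  --data-dir /path/to/data

# AWS S3 backend (credentials can also be supplied via environment variables)
influxdb3 serve \
  --cluster-id cluster01 \
  --node-id node01 \
  --object-store s3 \
  --bucket SOURCE_BUCKET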

File structure

Cluster files:

  • <cluster_id>/_catalog_checkpoint: Catalog state checkpoint file
  • <cluster_id>/catalogs/: Catalog log files tracking catalog state changes
  • <cluster_id>/commercial_license: Commercial license file (if applicable)
  • <cluster_id>/trial_or_home_license: Trial or home license file (if applicable)
  • <cluster_id>/enterprise: Enterprise configuration file

Node files:

  • <node_id>/wal/: Write-ahead log files containing written data
  • <node_id>/snapshots/: Snapshot files
  • <node_id>/dbs/<db>/<table>/<date>/: Parquet files organized by database, table, and time
  • <node_id>/table-snapshots/<db>/<table>/: Table snapshot files (regenerated on restart, optional for backup)

Additional compactor node files:

  • <node_id>/cs: Compaction summary files
  • <node_id>/cd: Compaction detail files
  • <node_id>/c: Generation detail and Parquet files
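Before taking a backup, you can confirm that your store matches this layout. A quick sketch using the AWS CLI for S3 (bucket and IDs are the same placeholders used in the examples below):

# List cluster-level and node-level prefixes to confirm the expected layout
aws s3 ls s3://SOURCE_BUCKET/CLUSTER_ID/
aws s3 ls s3://SOURCE_BUCKET/NODE1/

# For a file system store, inspect the directory tree instead
ls -R /path/to/data/NODE1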

Backup process

Copy files in the recommended order to reduce the risk of creating inconsistent backups. When possible, perform backups during downtime or periods of minimal load.

Recommended backup order:

  1. Compactor node directories (cs, cd, c)
  2. All nodes’ snapshots, dbs, wal directories
  3. Cluster catalog and checkpoint
  4. License files

S3 backup example

#!/bin/bash

CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
SOURCE_BUCKET="SOURCE_BUCKET"
BACKUP_BUCKET="BACKUP_BUCKET"
BACKUP_PREFIX="backup-$(date +%Y%m%d-%H%M%S)"

# 1. Back up the compactor node first
echo "Backing up compactor node..."
aws s3 sync s3://${SOURCE_BUCKET}/${COMPACTOR_NODE}/cs \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${COMPACTOR_NODE}/cs/
aws s3 sync s3://${SOURCE_BUCKET}/${COMPACTOR_NODE}/cd \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${COMPACTOR_NODE}/cd/
aws s3 sync s3://${SOURCE_BUCKET}/${COMPACTOR_NODE}/c \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${COMPACTOR_NODE}/c/

# 2. Back up all nodes (including the compactor)
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
  echo "Backing up node: ${NODE_ID}"
  aws s3 sync s3://${SOURCE_BUCKET}/${NODE_ID}/snapshots \
    s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/snapshots/
  aws s3 sync s3://${SOURCE_BUCKET}/${NODE_ID}/dbs \
    s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/dbs/
  aws s3 sync s3://${SOURCE_BUCKET}/${NODE_ID}/wal \
    s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/wal/
done

# 3. Back up the cluster catalog
echo "Backing up cluster catalog..."
aws s3 sync s3://${SOURCE_BUCKET}/${CLUSTER_ID}/catalogs \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/catalogs/
aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/_catalog_checkpoint \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/
aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/enterprise \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/

# 4. Back up license files (may not exist)
aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/commercial_license \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/ 2>/dev/null || true
aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/trial_or_home_license \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/ 2>/dev/null || true

echo "Backup completed to s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}"

Replace the following:

  • CLUSTER_ID: your cluster ID
  • COMPACTOR_NODE: your compactor node ID
  • NODE1, NODE2, NODE3: your data node IDs
  • SOURCE_BUCKET: your InfluxDB data bucket
  • BACKUP_BUCKET: your backup destination bucket
File system backup example

#!/bin/bash

CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
DATA_DIR="/path/to/data"
BACKUP_DIR="/backup/$(date +%Y%m%d-%H%M%S)"

mkdir -p "$BACKUP_DIR"

# 1. Back up the compactor node first
# (create the destination directory so the copies below don't fail)
echo "Backing up compactor node..."
mkdir -p "$BACKUP_DIR/${COMPACTOR_NODE}"
cp -r $DATA_DIR/${COMPACTOR_NODE}/cs "$BACKUP_DIR/${COMPACTOR_NODE}/"
cp -r $DATA_DIR/${COMPACTOR_NODE}/cd "$BACKUP_DIR/${COMPACTOR_NODE}/"
cp -r $DATA_DIR/${COMPACTOR_NODE}/c "$BACKUP_DIR/${COMPACTOR_NODE}/"

# 2. Back up all nodes (including the compactor)
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
  echo "Backing up node: ${NODE_ID}"
  mkdir -p "$BACKUP_DIR/${NODE_ID}"
  cp -r $DATA_DIR/${NODE_ID}/snapshots "$BACKUP_DIR/${NODE_ID}/"
  cp -r $DATA_DIR/${NODE_ID}/dbs "$BACKUP_DIR/${NODE_ID}/"
  cp -r $DATA_DIR/${NODE_ID}/wal "$BACKUP_DIR/${NODE_ID}/"
done

# 3. Back up the cluster catalog
echo "Backing up cluster catalog..."
mkdir -p "$BACKUP_DIR/${CLUSTER_ID}"
cp -r $DATA_DIR/${CLUSTER_ID}/catalogs "$BACKUP_DIR/${CLUSTER_ID}/"
cp $DATA_DIR/${CLUSTER_ID}/_catalog_checkpoint "$BACKUP_DIR/${CLUSTER_ID}/"
cp $DATA_DIR/${CLUSTER_ID}/enterprise "$BACKUP_DIR/${CLUSTER_ID}/"

# 4. Back up license files (if they exist)
[ -f "$DATA_DIR/${CLUSTER_ID}/commercial_license" ] && \
  cp $DATA_DIR/${CLUSTER_ID}/commercial_license "$BACKUP_DIR/${CLUSTER_ID}/"
[ -f "$DATA_DIR/${CLUSTER_ID}/trial_or_home_license" ] && \
  cp $DATA_DIR/${CLUSTER_ID}/trial_or_home_license "$BACKUP_DIR/${CLUSTER_ID}/"

echo "Backup completed to $BACKUP_DIR"

Replace the following:

  • CLUSTER_ID: your cluster ID
  • COMPACTOR_NODE: your compactor node ID
  • NODE1, NODE2, NODE3: your data node IDs

Restore process

Restoring overwrites existing data. Always verify you have correct backups before proceeding.
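One way to sanity-check a backup before restoring is to list its contents and count the objects under the backup prefix (a sketch using the AWS CLI; placeholders match the examples below):

# Confirm the backup prefix contains the expected cluster and node directories
aws s3 ls s3://BACKUP_BUCKET/backup-BACKUP_DATE/

# Count objects in the backup as a rough completeness check
aws s3 ls --recursive s3://BACKUP_BUCKET/backup-BACKUP_DATE/ | wc -l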

Recommended restore order:

  1. Cluster catalog and checkpoint
  2. License files
  3. All nodes’ snapshots, dbs, wal directories
  4. Compactor node directories (cs, cd, c)

S3 restore example

#!/bin/bash

CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
BACKUP_BUCKET="BACKUP_BUCKET"
BACKUP_PREFIX="backup-BACKUP_DATE"
TARGET_BUCKET="TARGET_BUCKET"

# 1. Stop all InfluxDB 3 Enterprise nodes
# (implementation depends on your orchestration)

# 2. Restore the cluster catalog and license first
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/_catalog_checkpoint \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/
aws s3 sync s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/catalogs \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/catalogs/
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/enterprise \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/

# Restore license files if they exist
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/commercial_license \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/ 2>/dev/null || true
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/trial_or_home_license \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/ 2>/dev/null || true

# 3. Restore all nodes
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
  echo "Restoring node: ${NODE_ID}"
  aws s3 sync s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/ \
    s3://${TARGET_BUCKET}/${NODE_ID}/
done

# 4. Start InfluxDB 3 Enterprise nodes
# Start in order: data nodes, then the compactor

Replace the following:

  • CLUSTER_ID: your cluster ID
  • COMPACTOR_NODE: your compactor node ID
  • NODE1, NODE2, NODE3: your data node IDs
  • BACKUP_DATE: backup timestamp
  • BACKUP_BUCKET: bucket containing backup
  • TARGET_BUCKET: target bucket for restoration

File system restore example

#!/bin/bash

CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
BACKUP_DIR="/backup/BACKUP_DATE"
DATA_DIR="/path/to/data"

# 1. Stop all InfluxDB 3 Enterprise nodes
# (implementation depends on your deployment method)

# 2. Optional: clear existing data
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
  rm -rf ${DATA_DIR}/${NODE_ID}/*
done
rm -rf ${DATA_DIR}/${CLUSTER_ID}/*

# 3. Restore the cluster catalog and license
mkdir -p ${DATA_DIR}/${CLUSTER_ID}
cp ${BACKUP_DIR}/${CLUSTER_ID}/_catalog_checkpoint ${DATA_DIR}/${CLUSTER_ID}/
cp -r ${BACKUP_DIR}/${CLUSTER_ID}/catalogs ${DATA_DIR}/${CLUSTER_ID}/
cp ${BACKUP_DIR}/${CLUSTER_ID}/enterprise ${DATA_DIR}/${CLUSTER_ID}/

# Restore license files if they exist
[ -f "${BACKUP_DIR}/${CLUSTER_ID}/commercial_license" ] && \
  cp ${BACKUP_DIR}/${CLUSTER_ID}/commercial_license ${DATA_DIR}/${CLUSTER_ID}/
[ -f "${BACKUP_DIR}/${CLUSTER_ID}/trial_or_home_license" ] && \
  cp ${BACKUP_DIR}/${CLUSTER_ID}/trial_or_home_license ${DATA_DIR}/${CLUSTER_ID}/

# 4. Restore all nodes
for NODE_ID in "${ALL_NODES[@]}"; do
  echo "Restoring node: ${NODE_ID}"
  mkdir -p ${DATA_DIR}/${NODE_ID}
  cp -r ${BACKUP_DIR}/${NODE_ID}/* ${DATA_DIR}/${NODE_ID}/
done

# 5. Set correct permissions
# NOTE: Adjust 'influxdb:influxdb' to match your deployment's user/group configuration
chown -R influxdb:influxdb ${DATA_DIR}

# 6. Start InfluxDB 3 Enterprise nodes
# Start in order: data nodes, then the compactor

Replace the following:

  • CLUSTER_ID: your cluster ID
  • COMPACTOR_NODE: your compactor node ID
  • NODE1, NODE2, NODE3: your data node IDs
  • BACKUP_DATE: backup directory timestamp

Important considerations

Recovery expectations

Recovery restores the system to a consistent point in time: the latest snapshot included in the backup. Data written after that snapshot may be missing if its WAL files were deleted after the backup was taken. Any Parquet files without a snapshot reference are ignored.

License files

License files are tied to:

  • The specific cloud provider (AWS, Azure, GCS)
  • The specific bucket name
  • For file storage: the exact file path

You cannot restore a license file to a different bucket or path. Contact InfluxData support if you need to migrate to a different bucket.

Docker considerations

When running InfluxDB 3 Enterprise in containers:

  • Volume consistency: Use the same volume mounts for backup and restore operations
  • File permissions: Ensure the container user can read restored files (use chown if needed)
  • Backup access: Mount a backup directory to copy files from containers to the host
  • Node coordination: Stop and start all Enterprise nodes (querier, ingester, compactor) in the correct order
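As a minimal sketch, assuming the influxdb:3-enterprise image, a file system object store, and host bind mounts (the image tag, paths, and IDs here are assumptions to adjust for your deployment):

# Run an Enterprise node with the data directory and a backup directory
# bind-mounted from the host (image tag, paths, and IDs are assumptions)
docker run -d \
  --name influxdb3-node01 \
  -v /host/influxdb3/data:/var/lib/influxdb3 \
  -v /host/influxdb3/backup:/backup \
  influxdb:3-enterprise \
  serve --cluster-id cluster01 --node-id node01 \
    --object-store file --data-dir /var/lib/influxdb3

# Because both directories are bind mounts, back up from the host side:
mkdir -p /host/influxdb3/backup/node01
cp -r /host/influxdb3/data/node01/wal /host/influxdb3/backup/node01/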

Table snapshot files

Files in <node_id>/table-snapshots/ are intentionally excluded from backup:

  • These files are periodically overwritten
  • They regenerate automatically on server restart
  • Including them doesn’t harm but increases backup size unnecessarily
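If you sync an entire node prefix in one command instead of directory by directory, you can skip these files with an exclude filter (a sketch using the AWS CLI, with the same placeholders as above):

# Sync a whole node prefix while skipping the regenerable table snapshots
aws s3 sync s3://SOURCE_BUCKET/NODE1/ \
  s3://BACKUP_BUCKET/backup-BACKUP_DATE/NODE1/ \
  --exclude "table-snapshots/*"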

Timing recommendations

  • Perform backups during downtime or minimal load periods
  • Copying files while the database is active may create inconsistent backups
  • Consider using filesystem or storage snapshots if available
  • Compression is optional but recommended for long-term storage (see the sketch below)
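For example, a completed file system backup directory can be compressed and verified before archiving (BACKUP_DATE is the backup directory timestamp):

# Compress a finished backup directory for long-term storage
tar -czf "backup-BACKUP_DATE.tar.gz" -C /backup BACKUP_DATE

# Verify the archive is readable before removing the uncompressed copy
tar -tzf "backup-BACKUP_DATE.tar.gz" > /dev/null && echo "archive OK"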
