---
title: Back up and restore data
description: Back up and restore your InfluxDB 3 Enterprise cluster by copying object storage files in the recommended order for each node type.
url: https://docs.influxdata.com/influxdb3/enterprise/admin/backup-restore/
estimated_tokens: 21892
product: InfluxDB 3 Enterprise
version: enterprise
---

# Back up and restore data

InfluxDB 3 Enterprise persists all data and metadata to object storage. Back up your data by copying object storage files in a specific order to ensure consistency and reliability.

Currently, InfluxDB 3 Enterprise does not include built-in backup and restore tools. Because files copied while the database is actively writing can capture the object store in an inconsistent state, we highly recommend that you follow the procedures and copy order below to minimize the risk of creating inconsistent backups.

## Supported object storage

InfluxDB 3 supports the following object storage backends for data persistence:

-   **File system** (local directory)
-   **AWS S3** and S3-compatible storage ([MinIO](/influxdb3/enterprise/object-storage/minio/))
-   **Azure Blob Storage**
-   **Google Cloud Storage**

Backup and restore procedures don’t apply to memory-based [object stores](/influxdb3/enterprise/reference/config-options/#object-store).
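For context, a node selects its backend at startup with the `--object-store` option. The following is a minimal sketch of pointing a node at S3; all values are placeholders, and flag names follow the [configuration options](/influxdb3/enterprise/reference/config-options/) reference:

```shell
# Sketch: start an InfluxDB 3 Enterprise node against S3 object storage.
# All values below are placeholders; adjust them for your deployment.
influxdb3 serve \
  --cluster-id cluster0 \
  --node-id node0 \
  --object-store s3 \
  --bucket my-influxdb-bucket \
  --aws-access-key-id "$AWS_ACCESS_KEY_ID" \
  --aws-secret-access-key "$AWS_SECRET_ACCESS_KEY"
```

The `--cluster-id` and `--node-id` values determine the `<cluster_id>` and `<node_id>` prefixes in the file structure described below.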

## File structure

| Location | Description |
| --- | --- |
| **Cluster files** |  |
| `<cluster_id>/_catalog_checkpoint` | Catalog state checkpoint file |
| `<cluster_id>/catalog/` | Catalog log files tracking catalog state changes |
| `<cluster_id>/commercial_license` | Commercial license file (if applicable) |
| `<cluster_id>/trial_or_home_license` | Trial or home license file (if applicable) |
| `<cluster_id>/enterprise` | Enterprise configuration file |
| **Node files** |  |
| `<node_id>/wal/` | Write-ahead log (WAL) files containing written data |
| `<node_id>/snapshots/` | Snapshot files |
| `<node_id>/dbs/<db>/<table>/<date>/` | Parquet files organized by database, table, and time |
| `<node_id>/table-snapshots/<db>/<table>/` | Table snapshot files (regenerated on restart, optional for backup) |
| **Compactor node additional files** |  |
| `<node_id>/cs` | Compaction summary files |
| `<node_id>/cd` | Compaction detail files |
| `<node_id>/c` | Generation detail and Parquet files |

## Backup process

Copy files in the recommended order to reduce risk of creating inconsistent backups. Perform backups during downtime or minimal load periods when possible.

**Recommended backup order:**

1. Compactor node directories (`cs`, `cd`, `c`)
2. All nodes’ `snapshots`, `dbs`, and `wal` directories
3. Cluster catalog and checkpoint
4. License files

<!-- Tabbed content: Select one of the following options -->

**S3:**

```bash
#!/bin/bash
CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
SOURCE_BUCKET="SOURCE_BUCKET"
BACKUP_BUCKET="BACKUP_BUCKET"
BACKUP_PREFIX="backup-$(date +%Y%m%d-%H%M%S)"

# 1. Backup compactor node first
echo "Backing up compactor node..."
aws s3 sync s3://${SOURCE_BUCKET}/${COMPACTOR_NODE}/cs \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${COMPACTOR_NODE}/cs/

aws s3 sync s3://${SOURCE_BUCKET}/${COMPACTOR_NODE}/cd \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${COMPACTOR_NODE}/cd/

aws s3 sync s3://${SOURCE_BUCKET}/${COMPACTOR_NODE}/c \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${COMPACTOR_NODE}/c/

# 2. Backup all nodes (including compactor)
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
  echo "Backing up node: ${NODE_ID}"
  aws s3 sync s3://${SOURCE_BUCKET}/${NODE_ID}/snapshots \
    s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/snapshots/
  
  aws s3 sync s3://${SOURCE_BUCKET}/${NODE_ID}/dbs \
    s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/dbs/
  
  aws s3 sync s3://${SOURCE_BUCKET}/${NODE_ID}/wal \
    s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/wal/
done

# 3. Backup cluster catalog
echo "Backing up cluster catalog..."
aws s3 sync s3://${SOURCE_BUCKET}/${CLUSTER_ID}/catalog \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/catalog/

aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/_catalog_checkpoint \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/

aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/enterprise \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/

# 4. Backup license files (may not exist)
aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/commercial_license \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/ 2>/dev/null || true

aws s3 cp s3://${SOURCE_BUCKET}/${CLUSTER_ID}/trial_or_home_license \
  s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/ 2>/dev/null || true

echo "Backup completed to s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}"
```

Replace the following:

-   `CLUSTER_ID`: your [cluster ID](/influxdb3/enterprise/reference/config-options/#cluster-id)
-   `COMPACTOR_NODE`: your [compactor](/influxdb3/enterprise/get-started/multi-server/#high-availability-with-a-dedicated-compactor) node ID
-   `NODE1`, `NODE2`, `NODE3`: your data [node IDs](/influxdb3/enterprise/reference/config-options/#node-id)
-   `SOURCE_BUCKET`: your InfluxDB data bucket
-   `BACKUP_BUCKET`: your backup destination bucket

**File system:**

```bash
#!/bin/bash
CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
DATA_DIR="/path/to/data"
BACKUP_DIR="/backup/$(date +%Y%m%d-%H%M%S)"

mkdir -p "$BACKUP_DIR"

# 1. Backup compactor node first
echo "Backing up compactor node..."
mkdir -p "$BACKUP_DIR/${COMPACTOR_NODE}"
cp -r "$DATA_DIR/${COMPACTOR_NODE}/cs" "$BACKUP_DIR/${COMPACTOR_NODE}/"
cp -r "$DATA_DIR/${COMPACTOR_NODE}/cd" "$BACKUP_DIR/${COMPACTOR_NODE}/"
cp -r "$DATA_DIR/${COMPACTOR_NODE}/c" "$BACKUP_DIR/${COMPACTOR_NODE}/"

# 2. Backup all nodes
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
  echo "Backing up node: ${NODE_ID}"
  mkdir -p "$BACKUP_DIR/${NODE_ID}"
  cp -r "$DATA_DIR/${NODE_ID}/snapshots" "$BACKUP_DIR/${NODE_ID}/"
  cp -r "$DATA_DIR/${NODE_ID}/dbs" "$BACKUP_DIR/${NODE_ID}/"
  cp -r "$DATA_DIR/${NODE_ID}/wal" "$BACKUP_DIR/${NODE_ID}/"
done

# 3. Backup cluster catalog
echo "Backing up cluster catalog..."
mkdir -p "$BACKUP_DIR/${CLUSTER_ID}"
cp -r "$DATA_DIR/${CLUSTER_ID}/catalog" "$BACKUP_DIR/${CLUSTER_ID}/"
cp "$DATA_DIR/${CLUSTER_ID}/_catalog_checkpoint" "$BACKUP_DIR/${CLUSTER_ID}/"
cp "$DATA_DIR/${CLUSTER_ID}/enterprise" "$BACKUP_DIR/${CLUSTER_ID}/"

# 4. Backup license files (if they exist)
[ -f "$DATA_DIR/${CLUSTER_ID}/commercial_license" ] && \
  cp "$DATA_DIR/${CLUSTER_ID}/commercial_license" "$BACKUP_DIR/${CLUSTER_ID}/"
[ -f "$DATA_DIR/${CLUSTER_ID}/trial_or_home_license" ] && \
  cp "$DATA_DIR/${CLUSTER_ID}/trial_or_home_license" "$BACKUP_DIR/${CLUSTER_ID}/"

echo "Backup completed to $BACKUP_DIR"
```

Replace the following:

-   `CLUSTER_ID`: your [cluster ID](/influxdb3/enterprise/reference/config-options/#cluster-id)
-   `COMPACTOR_NODE`: your [compactor](/influxdb3/enterprise/get-started/multi-server/#high-availability-with-a-dedicated-compactor) node ID
-   `NODE1`, `NODE2`, `NODE3`: your data [node IDs](/influxdb3/enterprise/reference/config-options/#node-id)
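Before relying on a backup, verify that it contains the expected layout. The following sketch defines a hypothetical `verify_backup` helper (not part of InfluxDB); the paths it checks mirror the file structure table above:

```shell
# Sketch: check that a file-system backup contains the expected directory layout.
# verify_backup <backup_dir> <cluster_id> <node_id>...
# Prints any missing paths and returns non-zero if the backup is incomplete.
verify_backup() {
  local backup_dir="$1" cluster_id="$2"
  shift 2
  local missing=0 path node sub
  # Cluster-level files
  for path in catalog _catalog_checkpoint; do
    [ -e "${backup_dir}/${cluster_id}/${path}" ] || { echo "MISSING: ${cluster_id}/${path}"; missing=1; }
  done
  # Per-node directories
  for node in "$@"; do
    for sub in snapshots dbs wal; do
      [ -e "${backup_dir}/${node}/${sub}" ] || { echo "MISSING: ${node}/${sub}"; missing=1; }
    done
  done
  return "$missing"
}
```

Call it with the same values the backup script used, for example `verify_backup "$BACKUP_DIR" "$CLUSTER_ID" "${ALL_NODES[@]}"`.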

<!-- End tabbed content -->

## Restore process

Restoring overwrites existing data. Always verify you have correct backups before proceeding.

**Recommended restore order:**

1. Cluster catalog and checkpoint
2. License files
3. All nodes’ `snapshots`, `dbs`, and `wal` directories
4. Compactor node directories (`cs`, `cd`, `c`)

### S3 restore example

```bash
#!/bin/bash
CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
BACKUP_BUCKET="BACKUP_BUCKET"
BACKUP_PREFIX="backup-BACKUP_DATE"
TARGET_BUCKET="TARGET_BUCKET"

# 1. Stop all InfluxDB 3 Enterprise nodes
# Implementation depends on your orchestration

# 2. Restore cluster catalog and license first
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/_catalog_checkpoint \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/

aws s3 sync s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/catalog \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/catalog/

aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/enterprise \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/

# Restore license files if they exist
aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/commercial_license \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/ 2>/dev/null || true

aws s3 cp s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${CLUSTER_ID}/trial_or_home_license \
  s3://${TARGET_BUCKET}/${CLUSTER_ID}/ 2>/dev/null || true

# 3. Restore all nodes
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
  echo "Restoring node: ${NODE_ID}"
  aws s3 sync s3://${BACKUP_BUCKET}/${BACKUP_PREFIX}/${NODE_ID}/ \
    s3://${TARGET_BUCKET}/${NODE_ID}/
done

# 4. Start InfluxDB 3 Enterprise nodes
# Start in order: data nodes first, then the compactor
```

Replace the following:

-   `CLUSTER_ID`: your [cluster ID](/influxdb3/enterprise/reference/config-options/#cluster-id)
-   `COMPACTOR_NODE`: your [compactor](/influxdb3/enterprise/get-started/multi-server/#high-availability-with-a-dedicated-compactor) node ID
-   `NODE1`, `NODE2`, `NODE3`: your data [node IDs](/influxdb3/enterprise/reference/config-options/#node-id)
-   `BACKUP_DATE`: backup timestamp
-   `BACKUP_BUCKET`: bucket containing backup
-   `TARGET_BUCKET`: target bucket for restoration

### File system restore example

```bash
#!/bin/bash
CLUSTER_ID="CLUSTER_ID"
COMPACTOR_NODE="COMPACTOR_NODE"
DATA_NODES=("NODE1" "NODE2" "NODE3")
BACKUP_DIR="/backup/BACKUP_DATE"
DATA_DIR="/path/to/data"

# 1. Stop all InfluxDB 3 Enterprise nodes
# Implementation depends on your deployment method

# 2. Optional: Clear existing data
# The ${VAR:?} expansions guard against an unset variable expanding to "/"
ALL_NODES=("${DATA_NODES[@]}" "$COMPACTOR_NODE")
for NODE_ID in "${ALL_NODES[@]}"; do
  rm -rf "${DATA_DIR:?}/${NODE_ID:?}"/*
done
rm -rf "${DATA_DIR:?}/${CLUSTER_ID:?}"/*

# 3. Restore cluster catalog and license
mkdir -p "${DATA_DIR}/${CLUSTER_ID}"
cp "${BACKUP_DIR}/${CLUSTER_ID}/_catalog_checkpoint" "${DATA_DIR}/${CLUSTER_ID}/"
cp -r "${BACKUP_DIR}/${CLUSTER_ID}/catalog" "${DATA_DIR}/${CLUSTER_ID}/"
cp "${BACKUP_DIR}/${CLUSTER_ID}/enterprise" "${DATA_DIR}/${CLUSTER_ID}/"

# Restore license files if they exist
[ -f "${BACKUP_DIR}/${CLUSTER_ID}/commercial_license" ] && \
  cp "${BACKUP_DIR}/${CLUSTER_ID}/commercial_license" "${DATA_DIR}/${CLUSTER_ID}/"
[ -f "${BACKUP_DIR}/${CLUSTER_ID}/trial_or_home_license" ] && \
  cp "${BACKUP_DIR}/${CLUSTER_ID}/trial_or_home_license" "${DATA_DIR}/${CLUSTER_ID}/"

# 4. Restore all nodes
for NODE_ID in "${ALL_NODES[@]}"; do
  echo "Restoring node: ${NODE_ID}"
  mkdir -p "${DATA_DIR}/${NODE_ID}"
  cp -r "${BACKUP_DIR}/${NODE_ID}"/* "${DATA_DIR}/${NODE_ID}/"
done

# 5. Set correct permissions
# NOTE: Adjust 'influxdb:influxdb' to match your actual deployment user/group configuration.
chown -R influxdb:influxdb "${DATA_DIR}"

# 6. Start InfluxDB 3 Enterprise nodes
# Start in order: data nodes first, then the compactor
```

Replace the following:

-   `CLUSTER_ID`: your [cluster ID](/influxdb3/enterprise/reference/config-options/#cluster-id)
-   `COMPACTOR_NODE`: your [compactor](/influxdb3/enterprise/get-started/multi-server/#high-availability-with-a-dedicated-compactor) node ID
-   `NODE1`, `NODE2`, `NODE3`: your data [node IDs](/influxdb3/enterprise/reference/config-options/#node-id)
-   `BACKUP_DATE`: backup directory timestamp
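After a file-system restore, you can confirm the restored tree matches the backup before starting nodes. The following sketch defines a hypothetical `compare_restore` helper (not part of InfluxDB) built on `diff -r`:

```shell
# Sketch: compare restored paths against the backup they came from.
# compare_restore <backup_dir> <data_dir> <relative_path>...
# Returns non-zero if any restored path differs from its backup copy.
compare_restore() {
  local backup_dir="$1" data_dir="$2"
  shift 2
  local rc=0 p
  for p in "$@"; do
    if diff -r "${backup_dir}/${p}" "${data_dir}/${p}" >/dev/null 2>&1; then
      echo "${p}: OK"
    else
      echo "${p}: MISMATCH"
      rc=1
    fi
  done
  return "$rc"
}
```

For example, `compare_restore "$BACKUP_DIR" "$DATA_DIR" "${ALL_NODES[@]}" "${CLUSTER_ID}/catalog"` checks every node directory plus the catalog.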

## Important considerations

### Recovery expectations

Recovery succeeds to a consistent point in time, which is the **latest snapshot included** in the backup. Data written after that snapshot may not be present if its WAL was deleted after the backup. Any Parquet files without a snapshot reference are ignored.

### License files

License files are tied to the complete object store configuration and the cluster ID, not just the bucket or container name. Changing any component of your object store configuration or restoring to a different cluster invalidates your license.

License binding varies by cloud provider:

| Provider | License bound to |
| --- | --- |
| AWS S3 | Endpoint + bucket name + region + cluster ID |
| Azure Blob Storage | Storage account name + container name + cluster ID |
| Google Cloud Storage | GCS base URL + bucket name + cluster ID |
| File storage | Exact file path + cluster ID |

For disaster recovery planning:

-   **Pre-provision a DR license**: Request a license for your DR storage configuration before an incident occurs.
-   **Contact support for migrations**: If you need to restore to a different storage account, endpoint, bucket, or cluster, [contact InfluxData support](https://support.influxdata.com) to obtain a new license.

### Docker considerations

When running InfluxDB 3 Enterprise in containers:

-   **Volume consistency**: Use the same volume mounts for backup and restore operations
-   **File permissions**: Ensure container user can read restored files (use `chown` if needed)
-   **Backup access**: Mount a backup directory to copy files from containers to the host
-   **Node coordination**: Stop and start all Enterprise nodes (querier, ingester, compactor) in the correct order

### Table snapshot files

Files in `<node_id>/table-snapshots/` are intentionally excluded from backup:

-   These files are periodically overwritten
-   They regenerate automatically on server restart
-   Including them does no harm, but it increases backup size unnecessarily
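If you script your own copies, you can skip this directory explicitly. For S3, `aws s3 sync` accepts an exclusion pattern such as `--exclude "table-snapshots/*"`. For a file-system store, the following sketch (a hypothetical `copy_node_excluding_table_snapshots` helper, not part of InfluxDB) prunes the directory during the copy:

```shell
# Sketch: copy one node directory while skipping table-snapshots (file-system store).
# copy_node_excluding_table_snapshots <src_node_dir> <dest_node_dir>
copy_node_excluding_table_snapshots() {
  local src="$1" dest="$2"
  # List files relative to the source, pruning the table-snapshots subtree
  ( cd "$src" && find . -path ./table-snapshots -prune -o -type f -print ) |
  while IFS= read -r f; do
    mkdir -p "$dest/$(dirname "$f")"
    cp "$src/$f" "$dest/$f"
  done
}
```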

### Timing recommendations

-   Perform backups during downtime or minimal load periods
-   Copying files while the database is active may create inconsistent backups
-   Consider using filesystem or storage snapshots if available
-   Compression is optional but recommended for long-term storage
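As a sketch of the compression step, a hypothetical `compress_backup` helper (not part of InfluxDB) archives a completed backup directory and verifies the archive is readable before you rely on it:

```shell
# Sketch: compress a completed backup directory into a .tar.gz archive.
# compress_backup <backup_dir>
compress_backup() {
  local dir="$1"
  # Archive the directory by name relative to its parent, then list the
  # archive contents to confirm it is readable.
  tar -czf "${dir}.tar.gz" -C "$(dirname "$dir")" "$(basename "$dir")" &&
    tar -tzf "${dir}.tar.gz" >/dev/null &&
    echo "Archive OK: ${dir}.tar.gz"
}
```

For example, `compress_backup "$BACKUP_DIR"` after the backup script completes. Delete the uncompressed copy only after the verification step succeeds.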

## Related

-   [Manage databases](/influxdb3/enterprise/admin/databases/)
-   [Manage your InfluxDB 3 Enterprise license](/influxdb3/enterprise/admin/license/)
-   [influxdb3 serve](/influxdb3/enterprise/reference/cli/influxdb3/serve/)
-   [Install InfluxDB 3 Enterprise](/influxdb3/enterprise/install/)
-   [Create a multi-node cluster](/influxdb3/enterprise/get-started/multi-server/)
-   [InfluxDB 3 Enterprise Internals](/influxdb3/enterprise/reference/internals/durability/)

[backup](/influxdb3/enterprise/tags/backup/) [restore](/influxdb3/enterprise/tags/restore/) [administration](/influxdb3/enterprise/tags/administration/) [cluster](/influxdb3/enterprise/tags/cluster/) [object storage](/influxdb3/enterprise/tags/object-storage/)
