
InfluxDB Cloud Dedicated data durability

InfluxDB Cloud Dedicated writes data to multiple Write-Ahead-Log (WAL) files on local storage and retains WALs until the data is persisted to Parquet files in object storage. Parquet data files in object storage are redundantly stored on multiple devices across a minimum of three availability zones in a cloud region.

Data storage

In InfluxDB Cloud Dedicated, all measurements are stored in Apache Parquet files that represent a point-in-time snapshot of the data. The Parquet files are immutable; they are never replaced or modified. Parquet files are stored in object storage and referenced in the Catalog, which InfluxDB uses to find the appropriate Parquet files for a particular set of data.
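
The Catalog's role can be pictured as a mapping from a table and time range to the Parquet files that cover it. The following minimal Python sketch illustrates that idea only; CatalogEntry and files_for_query are illustrative names, not part of InfluxDB's actual schema or API.

    from dataclasses import dataclass

    @dataclass
    class CatalogEntry:
        """One Parquet file registered in the catalog (illustrative model)."""
        table: str              # measurement/table the file belongs to
        object_store_path: str  # location of the Parquet file in object storage
        min_time_ns: int        # earliest timestamp contained in the file
        max_time_ns: int        # latest timestamp contained in the file
        deleted: bool = False   # soft-delete marker (see "Data deletion")

    def files_for_query(catalog: list[CatalogEntry], table: str,
                        start_ns: int, end_ns: int) -> list[str]:
        """Return the Parquet files a query would need to read for a table and
        time range, skipping files that are marked as deleted."""
        return [
            e.object_store_path
            for e in catalog
            if e.table == table
            and not e.deleted
            and e.max_time_ns >= start_ns
            and e.min_time_ns <= end_ns
        ]

    # Example: a query window that spans both files; the soft-deleted file
    # is excluded from the result.
    catalog = [
        CatalogEntry("cpu", "cpu/file-001.parquet", 0, 10_000),
        CatalogEntry("cpu", "cpu/file-002.parquet", 10_001, 20_000, deleted=True),
    ]
    print(files_for_query(catalog, "cpu", 5_000, 15_000))  # ['cpu/file-001.parquet']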

Data deletion

When data is deleted or expires (reaches the database’s retention period), InfluxDB performs the following steps (see the sketch after this list):

  1. Marks the associated Parquet files as deleted in the catalog.
  2. Filters out data marked for deletion from all queries.
  3. Retains Parquet files marked for deletion in object storage for approximately 30 days after the youngest data in the file ages out of retention.
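
A conceptual sketch of that flow in Python: the approximately 30-day grace period is modeled as a constant, the retention period is a parameter, and none of the names correspond to actual InfluxDB configuration.

    from datetime import datetime, timedelta, timezone

    GRACE_PERIOD = timedelta(days=30)  # approximate hold on soft-deleted Parquet files

    def aged_out(youngest_data: datetime, retention: timedelta, now: datetime) -> bool:
        """Step 1: a Parquet file can be soft-deleted once the youngest data it
        contains is past the database's retention period."""
        return youngest_data < now - retention

    def removable_from_object_store(youngest_data: datetime, retention: timedelta,
                                    now: datetime) -> bool:
        """Step 3: the file may be removed from object storage only after the
        retention period plus the ~30-day grace period have both elapsed."""
        return youngest_data < now - retention - GRACE_PERIOD

    # Example: with a 90-day retention period, a file whose youngest point is
    # 100 days old is already filtered from queries but still held in object storage.
    now = datetime.now(timezone.utc)
    youngest = now - timedelta(days=100)
    print(aged_out(youngest, timedelta(days=90), now))                     # True
    print(removable_from_object_store(youngest, timedelta(days=90), now))  # False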

Data ingest

When data is written to InfluxDB Cloud Dedicated, InfluxDB first writes the data to a Write-Ahead-Log (WAL) on locally attached storage on the Ingester node before acknowledging the write request. After acknowledging the write, the Ingester temporarily holds the data in memory, then writes the contents of the WAL to Parquet files in object storage and updates the Catalog to reference the newly created files. If an Ingester node is gracefully shut down (for example, during a new software deployment), it flushes the contents of the WAL to Parquet files before shutting down.
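
The ordering matters: the write is durable in the WAL before the client receives an acknowledgement, and persistence to Parquet plus the Catalog update happen afterwards. The Python sketch below models only that ordering; the class, the helper functions, and the single-file WAL are illustrative assumptions, not InfluxDB's actual Ingester implementation.

    import os

    def write_parquet_to_object_storage(rows):
        """Placeholder: stands in for writing Parquet files to object storage."""
        return [f"parquet-file-{i}" for i, _ in enumerate(rows)]

    def register_files_in_catalog(files):
        """Placeholder: stands in for recording the new files in the catalog."""
        print(f"catalog now references {len(files)} new file(s)")

    class IngesterSketch:
        """Illustrative write path: WAL first, then acknowledge, then persist."""

        def __init__(self, wal_path: str):
            self.wal = open(wal_path, "ab")
            self.buffer = []  # data held in memory until it is persisted

        def write(self, line_protocol: bytes) -> str:
            # 1. Append to the write-ahead log on locally attached storage and
            #    make it durable before acknowledging the client.
            self.wal.write(line_protocol + b"\n")
            self.wal.flush()
            os.fsync(self.wal.fileno())
            self.buffer.append(line_protocol)
            # 2. Only now is the write acknowledged.
            return "write acknowledged"

        def persist(self):
            # 3. Later, buffered data is written to Parquet files in object
            #    storage and the catalog is updated to reference the new files;
            #    the WAL contents are then no longer needed.
            files = write_parquet_to_object_storage(self.buffer)
            register_files_in_catalog(files)
            self.buffer.clear()

        def shutdown(self):
            # A graceful shutdown flushes buffered data to Parquet first.
            self.persist()
            self.wal.close()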

Backups

InfluxDB Cloud Dedicated implements the following data backup strategies:

  • Backup of WAL file: The WAL file is written on locally attached storage. If an ingester process fails, the replacement ingester reads the WAL file on startup and continues normal operation. WAL files are retained until their contents have been written to Parquet files in object storage. For added protection, ingesters can be configured for write replication, where each measurement is written to two different WAL files before the write is acknowledged (see the sketch after this list).

  • Backup of Parquet files: Parquet files are kept in object storage, redundantly stored on multiple devices across a minimum of three availability zones in a cloud region. Parquet files associated with each database are retained in object storage for the duration of the database’s retention period plus an additional period (approximately 30 days).

  • Backup of catalog: InfluxData keeps a transaction log of all recent updates to the InfluxDB catalog and generates a daily backup of the catalog. Backups are preserved for at least 30 days in object storage across a minimum of three availability zones.
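
A minimal sketch of the write replication mentioned in the first bullet: the write is acknowledged only after it has been made durable in two separate WAL files. The two-file setup and class name are assumptions for illustration, not configuration taken from the product.

    import os

    class ReplicatedWAL:
        """Illustrative sketch: append and fsync to two WAL files before
        acknowledging the client write."""

        def __init__(self, path_a: str, path_b: str):
            self.wals = [open(path_a, "ab"), open(path_b, "ab")]

        def append(self, record: bytes) -> str:
            for wal in self.wals:
                wal.write(record + b"\n")
                wal.flush()
                os.fsync(wal.fileno())  # durable on both copies first
            return "write acknowledged"  # only after both appends succeed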

Recovery

InfluxData can perform the following recovery operations:

  • Recovery after ingester failure: If an ingester fails, a new ingester is started and reads the recently ingested data from the WAL file.

  • Recovery of Parquet files: InfluxDB Cloud Dedicated relies on the durability guarantees of the underlying object storage to recover Parquet files.

  • Recovery of the catalog: InfluxData can restore the Catalog from the most recent daily backup and then reapply any transactions from the transaction log that occurred after that backup was taken.
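
A restore of that shape (daily snapshot plus replay of the transaction log) can be sketched as follows; the dictionary-based data model is illustrative, not the actual catalog schema.

    def restore_catalog(daily_backup: dict, transaction_log: list[dict]) -> dict:
        """Illustrative recovery: start from the most recent daily backup and
        reapply every transaction recorded after that backup was taken."""
        catalog = dict(daily_backup)  # catalog state at backup time
        for txn in transaction_log:
            if txn["op"] == "add_file":
                catalog[txn["path"]] = {"deleted": False}
            elif txn["op"] == "soft_delete":
                catalog[txn["path"]]["deleted"] = True
        return catalog

    # Example: one file added and another soft-deleted since the daily backup.
    backup = {"t1/file-001.parquet": {"deleted": False}}
    log = [
        {"op": "add_file", "path": "t1/file-002.parquet"},
        {"op": "soft_delete", "path": "t1/file-001.parquet"},
    ]
    print(restore_catalog(backup, log))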

