Configure the performance upgrade preview

Performance preview beta

The performance upgrade preview is available to InfluxDB 3 Enterprise Trial and Commercial users as a beta. These features are subject to breaking changes and should not be used for production workloads.

This page provides a complete reference for all configuration options available with the performance upgrade preview. All --pt-* performance upgrade options require the --use-pacha-tree flag.

If an option is omitted, the preview either derives a value from the existing influxdb3 serve configuration or falls back to an engine-specific default that balances resource usage and throughput.

Set --num-io-threads to the number of cores on the machine when using the performance upgrade preview.

General

| Option | Description | Default |
| --- | --- | --- |
| --use-pacha-tree | Enable the performance upgrade preview. Required for any other --pt- option to take effect. | Disabled |
| --pt-engine-path-prefix | Optional path prefix for all engine data (WAL and compaction generations). Maximum 32 characters. Must start and end with an alphanumeric character; inner characters may be [a-zA-Z0-9._-]. Shorter paths improve partitioning in object stores. | No prefix |
| --pt-max-columns | Maximum total columns across the entire instance. Must be at least 2. | 10,000,000 (10M) |
| --pt-enable-retention | Enable retention enforcement. | true |
| --pt-disable-hybrid-query | Disable hybrid query mode. When the preview is enabled with existing Parquet data, queries normally merge results across both Parquet and .pt files. Set this flag to query only .pt data. | false |
| --pt-enable-auto-dvc | Enable automatic distinct value caching for SHOW TAG VALUES queries and the tag_values() SQL function. | Disabled |
| --pt-upgrade-poll-interval | Polling interval for monitoring Parquet-to-PachaTree upgrade status. See Upgrade from Parquet. | 5s |
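Automatic distinct value caching

For example, to enable automatic distinct value caching for SHOW TAG VALUES queries and the tag_values() SQL function, pass the flag described in the table above (a minimal sketch following the pattern of the other examples on this page):

```shell
influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-enable-auto-dvc
```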

Engine path prefix

Use a short prefix to improve partitioning in object stores:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-engine-path-prefix mydata

Hybrid query mode

When you enable the preview on an instance with existing Parquet data, hybrid query mode merges results across both Parquet and .pt files. Disable hybrid mode to query only .pt data:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-disable-hybrid-query

WAL

Configure Write-Ahead Log (WAL) behavior for durability and performance.

| Option | Description | Default |
| --- | --- | --- |
| --pt-wal-flush-interval | Flush interval for the WAL. | Inherits --wal-flush-interval (1s) |
| --pt-wal-flush-concurrency | WAL flush concurrency. | max(io_threads - 2, 2) |
| --pt-wal-max-buffer-size | Maximum in-memory WAL buffer size before a flush is triggered regardless of the flush interval. Increase this if WAL files are flushed before the interval elapses. | 15MB |
| --pt-wal-snapshots-to-keep | Number of snapshot manifests' worth of WAL history to retain. Must be greater than 0. | 5 |

WAL buffer size

The WAL buffer accumulates incoming writes before flushing to object storage. Larger buffers reduce flush frequency and produce larger WAL files, but increase memory usage:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-wal-max-buffer-size 30MB

Flush interval and concurrency

Control how frequently the WAL flushes and how many workers run flushes in parallel:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-wal-flush-interval 2s \
  --pt-wal-flush-concurrency 8
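WAL history retention

To retain more WAL history, increase the number of snapshot manifests' worth of WAL files to keep. A sketch using the option described above, with an assumed value of 10:

```shell
influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-wal-snapshots-to-keep 10
```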

Snapshot

Configure snapshot buffer behavior, which controls how WAL files are merged into Gen0 files.

| Option | Description | Default |
| --- | --- | --- |
| --pt-snapshot-size | Maximum size of the active snapshot bucket before it is rotated for snapshotting. | 250MB |
| --pt-snapshot-duration | Time-based snapshot rotation trigger. Controls how often the ingester creates snapshots. Also used on query nodes as the bucket rotation interval for the replica buffer. | 10s |
| --pt-max-concurrent-snapshots | Maximum number of concurrent snapshot operations before applying backpressure to writers. | 5 |
| --pt-merge-threshold-size | Maximum unmerged file size before triggering a merge operation. | --pt-snapshot-size / 4 (62.5MB) |

Snapshot size and duration

Control when snapshot rotation triggers:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-snapshot-size 500MB \
  --pt-snapshot-duration 15s

Merge threshold

Set the size threshold that triggers background merge operations. Lower values result in more frequent merges:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-merge-threshold-size 125MB

Gen0

Control the size of Gen0 files produced during merge operations.

| Option | Description | Default |
| --- | --- | --- |
| --pt-gen0-max-rows-per-file | Upper bound on rows per Gen0 file emitted during merge. | 10,000,000 (10M) |
| --pt-gen0-max-bytes-per-file | Upper bound on bytes per Gen0 file emitted during merge. | 100MB |

Gen0 file size limits

Control the size of Gen0 files for query and compaction performance:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-gen0-max-rows-per-file 5000000 \
  --pt-gen0-max-bytes-per-file 50MB

File cache

Configure data file caching for query performance.

| Option | Description | Default |
| --- | --- | --- |
| --pt-file-cache-size | Size of the data file cache (bytes or %). Set to 0 on dedicated ingest nodes. | Mirrors --parquet-mem-cache-size |
| --pt-disable-data-file-cache | Disable data file caching. Set to true on dedicated ingest nodes. | false (automatically true if --disable-parquet-mem-cache is set) |
| --pt-file-cache-recency | Only cache files newer than this age. Pre-caching on all-in-one and query nodes is based on this value. | Mirrors --parquet-mem-cache-query-path-duration (3d) |
| --pt-file-cache-evict-after | Evict cached files that have not been read within this duration. | 24h |

File cache size

Set the maximum size for the data file cache:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-file-cache-size 8GB

Cache recency filter

Only cache files containing data within a recent time window:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-file-cache-recency 24h

Disable caching on ingest nodes

For dedicated ingest nodes, disable the data file cache to save memory:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --mode ingest \
  --pt-disable-data-file-cache
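Cache eviction window

To reclaim cache space more aggressively, shorten the idle eviction window so files that have not been read recently are evicted sooner. A sketch using the --pt-file-cache-evict-after option from the table above, with an assumed 12h window:

```shell
influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-file-cache-evict-after 12h
```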

Replication (query nodes)

Configure replication behavior for query nodes in distributed deployments.

| Option | Description | Default |
| --- | --- | --- |
| --pt-wal-replication-interval | Polling interval to check for new WAL files to replicate from ingest nodes. | 250ms |
| --pt-wal-replica-recovery-concurrency | Number of concurrent WAL file fetches during replica recovery or catchup. | 8 |
| --pt-wal-replica-steady-concurrency | Number of concurrent WAL file fetches during steady-state replication. | 8 |
| --pt-wal-replica-queue-size | Size of the queue between WAL file fetching and replica buffer merging. | 100 |
| --pt-wal-replica-recovery-tail-skip-limit | Number of consecutive missing WAL files before stopping replica recovery. | 128 |
| --pt-replica-gen0-load-concurrency | Limit on the number of Gen0 files loaded concurrently when the replica starts. | 16 |
| --pt-replica-max-buffer-size | Maximum replica buffer size (bytes or %). Used by query nodes to store WAL files replicated from ingest nodes. | 50% of available memory, capped at 16GB |

Recovery concurrency

Control parallelism during query node recovery or catchup:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --mode query \
  --pt-wal-replica-recovery-concurrency 16

Steady-state replication

Configure ongoing replication performance:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --mode query \
  --pt-wal-replica-steady-concurrency 4 \
  --pt-wal-replica-queue-size 200

Replica buffer size

Control the maximum buffer size for replicated data on query nodes:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --mode query \
  --pt-replica-max-buffer-size 8GB
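Replication polling interval

Control how often query nodes poll for new WAL files from ingest nodes. Shorter intervals typically reduce replication lag at the cost of more frequent object store requests. A sketch with an assumed 100ms interval:

```shell
influxdb3 serve \
  # ...
  --use-pacha-tree \
  --mode query \
  --pt-wal-replication-interval 100ms
```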

Compactor

Configure background compaction behavior. The compactor organizes data into fixed 24-hour UTC windows and progresses data through four compaction levels (L1 through L4).

| Option | Description | Default |
| --- | --- | --- |
| --pt-partition-count | Target number of partitions per compaction window. | 1 |
| --pt-compactor-input-size-budget | Maximum total input bytes across all active compaction jobs. Acts as an admission control budget for the compactor scheduler. | 50% of system memory at startup |
| --pt-final-compaction-age | Age threshold for final compaction. When all L1-L3 run sets in a window are older than this, a final compaction merges everything into L4. | 72h |
| --pt-compactor-cleanup-cooldown | Cooldown after checkpoint publish before replaced files can be cleaned up. | 10min |

Compaction budget

Control total memory allocated to active compaction jobs:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-compactor-input-size-budget 8GB

Final compaction age

Control when windows receive their final compaction into L4:

influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-final-compaction-age 48h
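Cleanup cooldown

Control how long the compactor waits after publishing a checkpoint before cleaning up replaced files; a longer cooldown can give in-flight readers more time before old files are removed. A sketch with an assumed 30min cooldown:

```shell
influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-compactor-cleanup-cooldown 30min
```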

L1-L4 level tuning

These options control per-level compaction parameters. Data enters L1 from snapshot batch compaction and promotes through levels based on run set count triggers.

| Level | Role | Default tail target | Default file size | Promotion trigger |
| --- | --- | --- | --- | --- |
| L1 | Ingest landing zone | 600MB | 25MB | 3 run sets |
| L2 | First promotion tier | 1.2GB | 40MB | 3 run sets |
| L3 | Second promotion tier | 2.5GB | 75MB | 4 run sets |
| L4 | Terminal (fully compacted) | 50GB | 125MB | N/A |

L1 options

| Option | Description | Default |
| --- | --- | --- |
| --pt-l1-tail-target-bytes | L1 tail run set target size. | 600MB |
| --pt-l1-target-file-bytes | L1 target file size. | 25MB |
| --pt-l1-promotion-count | Number of L1 run sets that triggers promotion to L2. | 3 |

L2 options

| Option | Description | Default |
| --- | --- | --- |
| --pt-l2-tail-target-bytes | L2 tail run set target size. | 1.2GB |
| --pt-l2-target-file-bytes | L2 target file size. | 40MB |
| --pt-l2-promotion-count | Number of L2 run sets that triggers promotion to L3. | 3 |

L3 options

| Option | Description | Default |
| --- | --- | --- |
| --pt-l3-tail-target-bytes | L3 tail run set target size. | 2.5GB |
| --pt-l3-target-file-bytes | L3 target file size. | 75MB |
| --pt-l3-promotion-count | Number of L3 run sets that triggers promotion to L4. | 4 |

L4 options

| Option | Description | Default |
| --- | --- | --- |
| --pt-l4-tail-target-bytes | L4 tail run set target size. | 50GB |
| --pt-l4-target-file-bytes | L4 target file size. | 125MB |
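Per-level tuning example

Per-level options can be combined. For example, a hypothetical configuration that promotes L1 run sets to L2 sooner and emits larger L4 files (the specific values here are illustrative, not recommendations):

```shell
influxdb3 serve \
  # ...
  --use-pacha-tree \
  --pt-l1-promotion-count 2 \
  --pt-l4-target-file-bytes 250MB
```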

Example configurations

Development (minimal resources)

influxdb3 serve \
  --node-id dev01 \
  --cluster-id dev-cluster \
  --object-store file \
  --data-dir ~/.influxdb3 \
  --use-pacha-tree \
  --num-io-threads 2 \
  --pt-file-cache-size 512MB \
  --pt-wal-max-buffer-size 5MB \
  --pt-snapshot-size 100MB

Production all-in-one (8 cores, 32 GB RAM)

influxdb3 serve \
  --node-id prod01 \
  --cluster-id prod-cluster \
  --object-store s3 \
  --bucket S3_BUCKET \
  --aws-access-key-id AWS_ACCESS_KEY_ID \
  --aws-secret-access-key AWS_SECRET_ACCESS_KEY \
  --use-pacha-tree \
  --num-io-threads 8 \
  --pt-file-cache-size 8GB \
  --pt-wal-max-buffer-size 30MB \
  --pt-snapshot-size 500MB \
  --pt-wal-flush-concurrency 4

High-throughput ingest node

influxdb3 serve \
  --node-id ingest01 \
  --cluster-id prod-cluster \
  --object-store s3 \
  --bucket S3_BUCKET \
  --aws-access-key-id AWS_ACCESS_KEY_ID \
  --aws-secret-access-key AWS_SECRET_ACCESS_KEY \
  --use-pacha-tree \
  --mode ingest \
  --num-io-threads 16 \
  --pt-wal-max-buffer-size 50MB \
  --pt-wal-flush-interval 2s \
  --pt-wal-flush-concurrency 8 \
  --pt-snapshot-size 1GB \
  --pt-disable-data-file-cache

Query-optimized node

influxdb3 serve \
  --node-id query01 \
  --cluster-id prod-cluster \
  --object-store s3 \
  --bucket S3_BUCKET \
  --aws-access-key-id AWS_ACCESS_KEY_ID \
  --aws-secret-access-key AWS_SECRET_ACCESS_KEY \
  --use-pacha-tree \
  --mode query \
  --num-io-threads 16 \
  --pt-file-cache-size 16GB \
  --pt-file-cache-recency 24h \
  --pt-replica-max-buffer-size 8GB

Dedicated compactor

influxdb3 serve \
  --node-id compact01 \
  --cluster-id prod-cluster \
  --object-store s3 \
  --bucket S3_BUCKET \
  --aws-access-key-id AWS_ACCESS_KEY_ID \
  --aws-secret-access-key AWS_SECRET_ACCESS_KEY \
  --use-pacha-tree \
  --mode compact \
  --num-io-threads 8 \
  --pt-compactor-input-size-budget 12GB

Downgrade options

The influxdb3 downgrade-to-parquet command reverts a cluster from the performance preview back to standard Parquet storage. For the downgrade procedure, see Downgrade to Parquet.

| Option | Description |
| --- | --- |
| --cluster-id | (Required) Cluster identifier. |
| --object-store | (Required) Object storage type (file, s3, gcs, azure). |
| --data-dir | Location of data files for a local (file) object store. |
| --bucket | Object store bucket name (for s3, gcs, azure). |
| --dry-run | Preview mode: list files that would be deleted without making changes. |
| --yes | Skip the confirmation prompt. |
| --ignore-running | Proceed even if nodes appear to be running. Warning: may cause data inconsistency if nodes are actively writing. |
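Preview a downgrade

Before downgrading, you can combine --dry-run with the required options to list the files that would be deleted without making any changes. A sketch using the options in the table above, with assumed cluster and bucket names:

```shell
influxdb3 downgrade-to-parquet \
  --cluster-id prod-cluster \
  --object-store s3 \
  --bucket S3_BUCKET \
  --dry-run
```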
