
Performance tuning

Configure thread allocation, memory settings, and other parameters to optimize InfluxDB 3 Core performance based on your workload characteristics.

Best practices

  1. Start with monitoring: Understand your current bottlenecks before tuning
  2. Change one parameter at a time: Isolate the impact of each change
  3. Test with production-like workloads: Use realistic data and query patterns
  4. Document your configuration: Keep track of what works for your workload
  5. Plan for growth: Leave headroom for traffic increases
  6. Review regularly: Periodically reassess your configuration as workloads evolve

General monitoring principles

Before tuning performance, establish baseline metrics to identify bottlenecks:

Key metrics to monitor

  1. CPU usage per core

    • Monitor individual core utilization to identify thread pool imbalances
    • Watch for cores at 100% while others are idle (indicates thread allocation issues)
    • Use top -H or htop to view per-thread CPU usage
  2. Memory consumption

    • Track heap usage vs available RAM
    • Monitor query execution memory pool utilization
    • Watch for OOM errors or excessive swapping
  3. IO and network

    • Measure write throughput (points/second)
    • Track query response times
    • Monitor object store latency for cloud deployments
    • Check disk IO wait times with iostat

Establish baselines

# Monitor CPU per thread
top -H -p $(pgrep influxdb3)

# Track memory usage
free -h
watch -n 1 "free -h"

# Check IO wait
iostat -x 1

For comprehensive metrics monitoring, see Monitor metrics.

Essential settings for performance

Thread allocation (--num-io-threads)

IO threads handle HTTP requests and line protocol parsing. Default: 2 (often insufficient).

InfluxDB 3 Core automatically allocates remaining cores to DataFusion after reserving IO threads. You can configure both thread pools explicitly by setting the --num-io-threads and --datafusion-num-threads options.

# Write-heavy: More IO threads
influxdb3 --num-io-threads=12 serve \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3

# Query-heavy: Fewer IO threads
influxdb3 --num-io-threads=4 serve \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3
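
The examples above let InfluxDB 3 Core allocate the remaining cores to DataFusion automatically. To pin both pools explicitly, set --datafusion-num-threads as well. A minimal sketch assuming a 16-core host (the 6/10 split is illustrative; adjust to your hardware and workload):

# 16-core host: pin both thread pools explicitly
influxdb3 --num-io-threads=6 serve \
  --datafusion-num-threads=10 \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3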

Increase IO threads for concurrent writers

If you have multiple concurrent writers (for example, Telegraf agents), the default of 2 IO threads can bottleneck write performance.

Memory pool (--exec-mem-pool-bytes)

Controls memory for query execution. Default: 70% of RAM.

# Increase for query-heavy workloads
influxdb3 --exec-mem-pool-bytes=90% serve \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3

# Decrease if experiencing memory pressure
influxdb3 --exec-mem-pool-bytes=60% serve \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3

Parquet cache (--parquet-mem-cache-size)

Caches frequently accessed data files in memory.

# Enable caching for better query performance
influxdb3 serve \
  --parquet-mem-cache-size=4096 \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3

WAL flush interval (--wal-flush-interval)

Controls write latency vs throughput. Default: 1s.

# Reduce latency for real-time data
influxdb3 --wal-flush-interval=100ms serve \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3

Common performance issues

High write latency

Symptoms: Increasing write response times, timeouts, points dropped

Solutions:

  1. Increase IO threads (default is only 2)
  2. Reduce WAL flush interval (from 1s to 100ms)
  3. Check disk IO performance

Slow query performance

Symptoms: Long execution times, high memory usage, query timeouts

Solutions:

  1. Increase execution memory pool (to 90%)
  2. Enable Parquet caching

Memory pressure

Symptoms: OOM errors, swapping, high memory usage

Solutions:

  1. Reduce execution memory pool (to 60%)
  2. Lower snapshot threshold (--force-snapshot-mem-threshold=70%)

CPU bottlenecks

Symptoms: 100% CPU utilization, uneven thread usage (only 2 cores for writes)

Solutions:

  1. Rebalance thread allocation
  2. Check if only 2 cores are used for write parsing (increase IO threads)

“My ingesters are only using 2 cores”

Increase --num-io-threads to 8-16+ for ingest nodes.
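
For example, an ingest-focused node on a 16-core host might be started like this (the thread count is illustrative; stay within the 8-16+ range suggested above):

# Ingest node: raise IO threads well above the default of 2
influxdb3 --num-io-threads=12 serve \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3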

Configuration examples by workload

Write-heavy workloads (>100k points/second)

# 32-core system, high ingest rate
influxdb3 --num-io-threads=12 \
  --exec-mem-pool-bytes=80% \
  --wal-flush-interval=100ms \
  serve \
  --node-id=node0 \
  --object-store=file \
  --data-dir=~/.influxdb3

Query-heavy workloads (complex analytics)

# 32-core system, analytical queries
influxdb3 --num-io-threads=4 serve \
  --exec-mem-pool-bytes=90% \
  --parquet-mem-cache-size=2048 \
  --node-id=node0 \
  --object-store=file \
  --data-dir=~/.influxdb3

Mixed workloads (real-time dashboards)

# 32-core system, balanced operations
influxdb3 --num-io-threads=8 serve \
  --exec-mem-pool-bytes=70% \
  --parquet-mem-cache-size=1024 \
  --node-id=node0 \
  --object-store=file \
  --data-dir=~/.influxdb3

Thread allocation details

Calculate optimal thread counts

Use this formula as a starting point:

Total cores = N
Concurrent writers = W
Query complexity factor = Q (1-10, where 10 is most complex)

IO threads = min(W + 2, N * 0.4)
DataFusion threads = N - IO threads
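
For example, on a 32-core host (N = 32) with 8 concurrent writers (W = 8):

IO threads = min(8 + 2, 32 * 0.4) = min(10, 12.8) = 10
DataFusion threads = 32 - 10 = 22

which, as a starting point, corresponds to:

influxdb3 --num-io-threads=10 serve \
  --datafusion-num-threads=22 \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3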

Example configurations by system size

Small system (4 cores, 16 GB RAM)

# Balanced configuration
influxdb3 --num-io-threads=2 serve \
  --exec-mem-pool-bytes=10GB \
  --parquet-mem-cache-size=500 \
  --node-id=node0 \
  --object-store=file \
  --data-dir=~/.influxdb3

Medium system (16 cores, 64 GB RAM)

# Write-optimized configuration
influxdb3 --num-io-threads=6 serve \
  --exec-mem-pool-bytes=45GB \
  --parquet-mem-cache-size=2048 \
  --node-id=node0 \
  --object-store=file \
  --data-dir=~/.influxdb3

Large system (64 cores, 256 GB RAM)

# Query-optimized configuration
influxdb3 --num-io-threads=8 serve \
  --exec-mem-pool-bytes=200GB \
  --parquet-mem-cache-size=10240 \
  --object-store-connection-limit=200 \
  --node-id=node0 \
  --object-store=file \
  --data-dir=~/.influxdb3

Memory tuning

Execution memory pool

Configure the query execution memory pool:

# Absolute value in bytes
--exec-mem-pool-bytes=8589934592  # 8GB

# Percentage of available RAM
--exec-mem-pool-bytes=80%  # 80% of system RAM

Guidelines:

  • Write-heavy: 60-70% (leave room for OS cache)
  • Query-heavy: 80-90% (maximize query memory)
  • Mixed: 70% (balanced approach)
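
As a worked example, on a 64 GB host the mixed-workload guideline of 70% is roughly 45 GB, so the two forms below are effectively equivalent (use one or the other):

# 64 GB host, mixed workload: ~70% of RAM for query execution
--exec-mem-pool-bytes=70%
--exec-mem-pool-bytes=45GB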

Parquet cache configuration

Cache frequently accessed Parquet files:

# Set cache size
--parquet-mem-cache-size=2147483648  # 2GB

# Configure cache behavior
--parquet-mem-cache-prune-interval=1m \
--parquet-mem-cache-prune-percentage=20

WAL and snapshot tuning

Control memory pressure from write buffers:

# Force snapshot when memory usage exceeds threshold
--force-snapshot-mem-threshold=80%

# Configure WAL rotation
--wal-flush-interval=10s \
--wal-snapshot-size=100MB

Advanced tuning options

For less common performance optimizations and detailed configuration options, see:

  • DataFusion engine tuning: advanced DataFusion runtime parameters
  • HTTP and network tuning: request size and network optimization
  • Object store optimization: performance tuning for cloud object stores
  • Complete configuration reference: all available configuration options

Monitoring and validation

Monitor thread utilization

# Linux: View per-thread CPU usage
top -H -p $(pgrep influxdb3)

# Monitor specific threads
watch -n 1 "ps -eLf | grep influxdb3 | head -20"

Check performance metrics

Monitor key indicators:

-- Query system.threads table (Enterprise)
SELECT * FROM system.threads
WHERE cpu_usage > 90
ORDER BY cpu_usage DESC;

-- Check write throughput
SELECT
  count(*) as points_written,
  max(timestamp) - min(timestamp) as time_range
FROM your_measurement
WHERE timestamp > now() - INTERVAL '1 minute';

Validate configuration

Verify your tuning changes:

# List thread-related options and their defaults
influxdb3 serve --help-all | grep -E "num-io-threads|datafusion-num-threads"

# Monitor memory usage
free -h
watch -n 1 "free -h"

# Check IO wait
iostat -x 1

Common performance issues

High write latency

Symptoms:

  • Increasing write response times
  • Timeouts from write clients
  • Points dropped or rejected

Solutions:

  1. Increase IO threads: --num-io-threads=16
  2. Reduce batch sizes in writers
  3. Increase WAL flush frequency
  4. Check disk IO performance
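
For example, remedies 1 and 3 combined into a single restart (values are illustrative and assume the single-node, file-store setup used throughout this page):

# Write-latency remediation: more IO threads, more frequent WAL flushes
influxdb3 --num-io-threads=16 \
  --wal-flush-interval=100ms \
  serve \
  --node-id=node0 \
  --object-store=file \
  --data-dir=~/.influxdb3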

Slow query performance

Symptoms:

  • Long query execution times
  • High memory usage during queries
  • Query timeouts

Solutions:

  1. Increase execution memory pool: --exec-mem-pool-bytes=90%
  2. Enable Parquet caching: --parquet-mem-cache-size=4GB
  3. Optimize query patterns (smaller time ranges, fewer fields)
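
For remedy 3, select only the fields you need and bound the time range explicitly. A sketch using the influxdb3 CLI (the database name my_db and measurement home are placeholders):

# Narrow time range and explicit field list instead of SELECT *
influxdb3 query --database my_db \
  "SELECT temp, time FROM home WHERE time >= now() - INTERVAL '15 minutes'"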

Memory pressure

Symptoms:

  • Out of memory errors
  • Frequent garbage collection
  • System swapping

Solutions:

  1. Reduce execution memory pool: --exec-mem-pool-bytes=60%
  2. Lower snapshot threshold: --force-snapshot-mem-threshold=70%
  3. Decrease cache sizes
  4. Add more RAM or reduce workload
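
For example, remedies 1 and 2 combined into a single restart (thresholds shown are the values suggested above):

# Memory-pressure remediation: smaller query pool, earlier snapshots
influxdb3 --exec-mem-pool-bytes=60% serve \
  --force-snapshot-mem-threshold=70% \
  --node-id=node0 \
  --object-store=file --data-dir=~/.influxdb3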

CPU bottlenecks

Symptoms:

  • 100% CPU utilization
  • Uneven thread pool usage
  • Performance plateaus

Solutions:

  1. Rebalance thread allocation based on workload
  2. Add more CPU cores
  3. Optimize client batching
