
Scale your InfluxDB cluster

InfluxDB Clustered lets you scale individual components of your cluster both vertically and horizontally to match your specific workload. Use the AppInstance resource defined in your influxdb.yml to manage resources available to each component.

Scaling strategies

The following scaling strategies can be applied to components in your InfluxDB cluster.

Vertical scaling

Vertical scaling (also known as “scaling up”) involves increasing the resources (such as RAM or CPU) available to a process or system. Vertical scaling is typically used to handle resource-intensive tasks that require more processing power.

Horizontal scaling

Horizontal scaling (also known as “scaling out”) involves increasing the number of nodes or processes available to perform a given task. Horizontal scaling is typically used to increase the amount of workload or throughput a system can manage, but also provides additional redundancy and failover.

Scale your cluster as a whole

Scaling your entire InfluxDB cluster is done by scaling your underlying Kubernetes cluster and is managed outside of InfluxDB. The process of scaling your Kubernetes cluster depends on your Kubernetes provider. You can also use Kubernetes autoscaling to automatically scale your cluster as needed.

Scale components in your cluster

The following components of your InfluxDB cluster are scaled by modifying properties in your AppInstance resource:

  • Ingester
  • Querier
  • Compactor
  • Router
  • Garbage collector
  • Catalog service

Scale your Catalog and Object store

Your InfluxDB Catalog and Object store are managed outside of your AppInstance resource. Scaling mechanisms for these components depend on the technology and underlying provider used for each.

Use the spec.package.spec.resources property in your AppInstance resource defined in your influxdb.yml to define system resource minimums and limits for each pod and the number of replicas per component. requests are the minimum that the Kubernetes scheduler should reserve for a pod. limits are the maximum that a pod should be allowed to use.

Your AppInstance resource can include the following properties to define resource minimums and limits per pod and replicas per component:

  • spec.package.spec.resources
    • ingester
      • requests
        • cpu: Minimum CPU resource units to assign to ingesters
        • memory: Minimum memory resource units to assign to ingesters
        • replicas: Number of ingester replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to ingesters
        • memory: Maximum memory resource units to assign to ingesters
    • compactor
      • requests
        • cpu: Minimum CPU resource units to assign to compactors
        • memory: Minimum memory resource units to assign to compactors
        • replicas: Number of compactor replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to compactors
        • memory: Maximum memory resource units to assign to compactors
    • querier
      • requests
        • cpu: Minimum CPU resource units to assign to queriers
        • memory: Minimum memory resource units to assign to queriers
        • replicas: Number of querier replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to queriers
        • memory: Maximum memory resource units to assign to queriers
    • router
      • requests
        • cpu: Minimum CPU resource units to assign to routers
        • memory: Minimum memory resource units to assign to routers
        • replicas: Number of router replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to routers
        • memory: Maximum memory resource units to assign to routers
    • garbage-collector
      • requests
        • cpu: Minimum CPU resource units to assign to the garbage collector
        • memory: Minimum memory resource units to assign to the garbage collector
      • limits
        • cpu: Maximum CPU resource units to assign to the garbage collector
        • memory: Maximum memory resource units to assign to the garbage collector

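For example, the following AppInstance fragment sketches requests, limits, and replicas for the Ingester and Querier. The values are illustrative only, not sizing recommendations; choose values based on your workload:

```yaml
apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      resources:
        ingester:
          requests:
            cpu: "2"          # minimum CPU reserved per Ingester pod
            memory: 2Gi       # minimum memory reserved per Ingester pod
            replicas: 3       # number of Ingester pods
          limits:
            cpu: "4"          # maximum CPU per Ingester pod
            memory: 4Gi       # maximum memory per Ingester pod
        querier:
          requests:
            cpu: "1"
            memory: 2Gi
            replicas: 2
          limits:
            cpu: "2"
            memory: 4Gi
```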

If using Helm, use the resources property in your values.yaml to define system resource minimums and limits for each pod and the number of replicas per component. requests are the minimum that the Kubernetes scheduler should reserve for a pod. limits are the maximum that a pod should be allowed to use.

Use the following properties to define resource minimums and limits per pod and replicas per component:

  • resources
    • ingester
      • requests
        • cpu: Minimum CPU resource units to assign to ingesters
        • memory: Minimum memory resource units to assign to ingesters
        • replicas: Number of ingester replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to ingesters
        • memory: Maximum memory resource units to assign to ingesters
    • compactor
      • requests
        • cpu: Minimum CPU resource units to assign to compactors
        • memory: Minimum memory resource units to assign to compactors
        • replicas: Number of compactor replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to compactors
        • memory: Maximum memory resource units to assign to compactors
    • querier
      • requests
        • cpu: Minimum CPU resource units to assign to queriers
        • memory: Minimum memory resource units to assign to queriers
        • replicas: Number of querier replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to queriers
        • memory: Maximum memory resource units to assign to queriers
    • router
      • requests
        • cpu: Minimum CPU resource units to assign to routers
        • memory: Minimum memory resource units to assign to routers
        • replicas: Number of router replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to routers
        • memory: Maximum memory resource units to assign to routers
    • garbage-collector
      • requests
        • cpu: Minimum CPU resource units to assign to the garbage collector
        • memory: Minimum memory resource units to assign to the garbage collector
      • limits
        • cpu: Maximum CPU resource units to assign to the garbage collector
        • memory: Maximum memory resource units to assign to the garbage collector

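For example, the following values.yaml fragment sketches the same settings for the Ingester when deploying with Helm. The values are illustrative only, not sizing recommendations:

```yaml
resources:
  ingester:
    requests:
      cpu: "2"          # minimum CPU reserved per Ingester pod
      memory: 2Gi       # minimum memory reserved per Ingester pod
      replicas: 3       # number of Ingester pods
    limits:
      cpu: "4"          # maximum CPU per Ingester pod
      memory: 4Gi       # maximum memory per Ingester pod
```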

Applying resource limits to pods is optional, but provides better resource isolation and protects against pods using more resources than intended. For more information, see Kubernetes resource requests and limits.

Horizontally scale a component

To horizontally scale a component in your InfluxDB cluster, increase or decrease the number of replicas for the component and apply the change.

Only use the AppInstance to scale component replicas

Only use the AppInstance resource to scale component replicas. Manually scaling replicas (for example, with kubectl scale) may cause errors.

For example, to horizontally scale your Ingester in your AppInstance resource:

apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      resources:
        ingester:
          requests:
            # ...
            replicas: 6

Or, if using Helm, set the same property in your values.yaml:

# ...
resources:
  ingester:
    requests:
      # ...
      replicas: 6

Vertically scale a component

To vertically scale a component in your InfluxDB cluster, increase or decrease the CPU and memory resource units assigned to component pods and apply the change. For example, to vertically scale your Ingester in your AppInstance resource:

apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      resources:
        ingester:
          requests:
            cpu: "500m"
            memory: "512Mi"
            # ...
          limits:
            cpu: "1000m"
            memory: "1024Mi"

Or, if using Helm, set the same properties in your values.yaml:

# ...
resources:
  ingester:
    requests:
      cpu: "500m"
      memory: "512Mi"
      # ...
    limits:
      cpu: "1000m"
      memory: "1024Mi"

Apply your changes

After modifying the AppInstance resource, use kubectl apply to apply the configuration changes to your cluster and scale the updated components:

kubectl apply \
  --filename myinfluxdb.yml \
  --namespace influxdb

Or, if using Helm, use helm upgrade to apply the changes:

# Replace influxdb with the name of your Helm release
helm upgrade \
  influxdb \
  influxdata/influxdb3-clustered \
  -f ./values.yaml \
  --namespace influxdb

Router

The Router can be scaled both vertically and horizontally.

  • Recommended: Horizontal scaling increases write throughput and is typically the most effective scaling strategy for the Router.
  • Vertical scaling (specifically increased CPU) improves the Router’s ability to parse incoming line protocol with lower latency.

Router latency

Latency of the Router’s write endpoint is directly impacted by:

  • Ingester latency: the Router calls the Ingester during each client write request
  • Catalog latency during schema validation

Ingester

The Ingester can be scaled both vertically and horizontally.

  • Recommended: Vertical scaling is typically the most effective scaling strategy for the Ingester. Compared to horizontal scaling, vertical scaling not only increases write throughput but also lessens query, catalog, and compaction overheads as well as Object store costs.
  • Horizontal scaling can help distribute write load but comes with additional coordination overhead.

Ingester storage volume

Ingesters use an attached storage volume to store the Write-Ahead Log (WAL). With more storage available, Ingesters can keep larger WAL buffers, which improves query performance and reduces pressure on the Compactor. Faster storage volumes also improve query performance.

Configure the storage volume attached to Ingester pods in the spec.package.spec.ingesterStorage property of your AppInstance resource or, if using Helm, the ingesterStorage property of your values.yaml.

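The following fragment sketches an Ingester storage configuration in the AppInstance resource, assuming the ingesterStorage property accepts storageClassName and storage fields; the storage class name below is hypothetical and depends on the classes available in your cluster:

```yaml
apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      ingesterStorage:
        # Hypothetical storage class; use a class available in your cluster
        storageClassName: premium-rwo
        # Size of the volume attached to each Ingester pod
        storage: 2Gi
```

If using Helm, the same settings would go under the ingesterStorage property of your values.yaml.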

Querier

The Querier can be scaled both vertically and horizontally.

  • Recommended: Vertical scaling improves the Querier’s ability to process concurrent or computationally intensive queries, and increases the effective cache capacity.
  • Horizontal scaling increases query throughput to handle more concurrent queries. Consider horizontal scaling if vertical scaling doesn’t adequately address concurrency demands or reaches the hardware limits of your underlying nodes.

Compactor

  • Recommended: Maintain 1 Compactor pod and use vertical scaling (especially increasing the available CPU) for the Compactor.
  • Because compaction is a compute-heavy process, horizontal scaling increases compaction throughput, but not as efficiently as vertical scaling.

Garbage collector

The Garbage collector is a lightweight process that typically doesn’t require significant system resources.

  • Don’t horizontally scale the Garbage collector; it isn’t designed for distributed load.
  • Consider vertical scaling only if you observe consistently high CPU usage or if the container regularly runs out of memory.

Catalog store

The Catalog store is a PostgreSQL-compatible database that stores critical metadata for your InfluxDB cluster. An underprovisioned Catalog store can cause write outages and system-wide performance issues.

  • Scaling strategies depend on your specific PostgreSQL implementation.
  • All PostgreSQL implementations support vertical scaling.
  • Most implementations support horizontal scaling for improved redundancy and failover.

Catalog service

The Catalog service (iox-shared-catalog statefulset) caches and manages access to the Catalog store.

  • Recommended: Maintain exactly 3 replicas of the Catalog service for optimal redundancy. Additional replicas are discouraged.
  • If performance improvements are needed, use vertical scaling.

Managing Catalog components

The Catalog service is managed through the AppInstance resource, while the Catalog store is managed separately according to your PostgreSQL implementation.

Object store

The Object store contains time series data in Parquet format.

Scaling strategies depend on the underlying object storage services used. Most services support horizontal scaling for redundancy, failover, and increased capacity.

