
Scale your InfluxDB cluster

InfluxDB Clustered lets you scale individual components of your cluster both vertically and horizontally to match your specific workload. Use the AppInstance resource defined in your myinfluxdb.yml to manage the resources available to each component.

Scaling strategies

The following scaling strategies can be applied to components in your InfluxDB cluster.

Vertical scaling

Vertical scaling (also known as “scaling up”) involves increasing the resources (such as RAM or CPU) available to a process or system. Vertical scaling is typically used to handle resource-intensive tasks that require more processing power.

Horizontal scaling

Horizontal scaling (also known as “scaling out”) involves increasing the number of nodes or processes available to perform a given task. Horizontal scaling is typically used to increase the amount of workload or throughput a system can manage, but also provides additional redundancy and failover.

Scale components in your cluster

The following components of your InfluxDB cluster are scaled by modifying properties in your AppInstance resource:

  • Ingester
  • Querier
  • Compactor
  • Router
  • Garbage collector

Scale your Catalog and Object store

Your InfluxDB Catalog and Object store are managed outside of your AppInstance resource. Scaling mechanisms for these components depend on the technology and underlying provider used for each.

Use the spec.package.spec.resources property in the AppInstance resource defined in your myinfluxdb.yml to define system resource minimums and limits for each pod and the number of replicas per component. requests are the minimum that the Kubernetes scheduler should reserve for a pod. limits are the maximum that a pod is allowed to use.

Your AppInstance resource can include the following properties to define resource minimums and limits per pod and replicas per component:

  • spec.package.spec.resources
    • ingester
      • requests
        • cpu: Minimum CPU resource units to assign to ingesters
        • memory: Minimum memory resource units to assign to ingesters
        • replicas: Number of ingester replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to ingesters
        • memory: Maximum memory resource units to assign to ingesters
    • compactor
      • requests
        • cpu: Minimum CPU resource units to assign to compactors
        • memory: Minimum memory resource units to assign to compactors
        • replicas: Number of compactor replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to compactors
        • memory: Maximum memory resource units to assign to compactors
    • querier
      • requests
        • cpu: Minimum CPU resource units to assign to queriers
        • memory: Minimum memory resource units to assign to queriers
        • replicas: Number of querier replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to queriers
        • memory: Maximum memory resource units to assign to queriers
    • router
      • requests
        • cpu: Minimum CPU resource units to assign to routers
        • memory: Minimum memory resource units to assign to routers
        • replicas: Number of router replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to routers
        • memory: Maximum memory resource units to assign to routers
    • garbage-collector
      • requests
        • cpu: Minimum CPU resource units to assign to the garbage collector
        • memory: Minimum memory resource units to assign to the garbage collector
      • limits
        • cpu: Maximum CPU resource units to assign to the garbage collector
        • memory: Maximum memory resource units to assign to the garbage collector

View example AppInstance with resource requests and limits
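The following is a minimal sketch of an AppInstance resource that sets requests, limits, and replica counts for the Ingester and Querier. The CPU, memory, and replica values are illustrative assumptions only; size them for your own workload and apply the same pattern to the router, compactor, and garbage-collector objects.

apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      resources:
        ingester:
          requests:
            cpu: "500m"       # Minimum CPU reserved for each ingester pod
            memory: "512Mi"   # Minimum memory reserved for each ingester pod
            replicas: 3       # Number of ingester pods to provision
          limits:
            cpu: "1000m"      # Maximum CPU each ingester pod may use
            memory: "1024Mi"  # Maximum memory each ingester pod may use
        querier:
          requests:
            cpu: "1000m"
            memory: "2Gi"
            replicas: 2
          limits:
            cpu: "2000m"
            memory: "4Gi"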

If you use the InfluxDB Clustered Helm chart, use the resources property in your values.yaml to define system resource minimums and limits for each pod and the number of replicas per component. requests are the minimum that the Kubernetes scheduler should reserve for a pod. limits are the maximum that a pod is allowed to use.

Use the following properties to define resource minimums and limits per pod and replicas per component:

  • resources
    • ingester
      • requests
        • cpu: Minimum CPU resource units to assign to ingesters
        • memory: Minimum memory resource units to assign to ingesters
        • replicas: Number of ingester replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to ingesters
        • memory: Maximum memory resource units to assign to ingesters
    • compactor
      • requests
        • cpu: Minimum CPU resource units to assign to compactors
        • memory: Minimum memory resource units to assign to compactors
        • replicas: Number of compactor replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to compactors
        • memory: Maximum memory resource units to assign to compactors
    • querier
      • requests
        • cpu: Minimum CPU resource units to assign to queriers
        • memory: Minimum memory resource units to assign to queriers
        • replicas: Number of querier replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to queriers
        • memory: Maximum memory resource units to assign to queriers
    • router
      • requests
        • cpu: Minimum CPU resource units to assign to routers
        • memory: Minimum memory resource units to assign to routers
        • replicas: Number of router replicas to provision
      • limits
        • cpu: Maximum CPU resource units to assign to routers
        • memory: Maximum memory resource units to assign to routers
    • garbage-collector
      • requests
        • cpu: Minimum CPU resource units to assign to the garbage collector
        • memory: Minimum memory resource units to assign to the garbage collector
      • limits
        • cpu: Maximum CPU resource units to assign to the garbage collector
        • memory: Maximum memory resource units to assign to the garbage collector

View example values.yaml with resource requests and limits
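A comparable sketch for a Helm-based install uses the top-level resources property in values.yaml. Again, the values shown are illustrative assumptions only:

resources:
  ingester:
    requests:
      cpu: "500m"
      memory: "512Mi"
      replicas: 3
    limits:
      cpu: "1000m"
      memory: "1024Mi"
  querier:
    requests:
      cpu: "1000m"
      memory: "2Gi"
      replicas: 2
    limits:
      cpu: "2000m"
      memory: "4Gi"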

Applying resource limits to pods is optional, but provides better resource isolation and protects against pods using more resources than intended. For more information, see Kubernetes resource requests and limits.

Horizontally scale a component

To horizontally scale a component in your InfluxDB cluster, increase or decrease the number of replicas for the component and apply the change.

Only use the AppInstance to scale component replicas

Only use the AppInstance resource to scale component replicas. Manually scaling replicas may cause errors.

For example, to horizontally scale your Ingester:

AppInstance (myinfluxdb.yml):

apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      resources:
        ingester:
          requests:
            # ...
            replicas: 6

Helm (values.yaml):

resources:
  ingester:
    requests:
      # ...
      replicas: 6

Vertically scale a component

To vertically scale a component in your InfluxDB cluster, increase or decrease the CPU and memory resource units to assign to component pods and apply the change.

AppInstance (myinfluxdb.yml):

apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      resources:
        ingester:
          requests:
            cpu: "500m"
            memory: "512Mi"
            # ...
          limits:
            cpu: "1000m"
            memory: "1024Mi"

Helm (values.yaml):

resources:
  ingester:
    requests:
      cpu: "500m"
      memory: "512Mi"
      # ...
    limits:
      cpu: "1000m"
      memory: "1024Mi"

Apply your changes

After modifying your AppInstance resource or Helm values.yaml, apply the configuration changes to your cluster to scale the updated components.

kubectl:

kubectl apply \
  --filename myinfluxdb.yml \
  --namespace influxdb

Helm:

helm upgrade \
  influxdb \
  influxdata/influxdb3-clustered \
  -f ./values.yaml \
  --namespace influxdb

Scale your cluster as a whole

You scale your entire InfluxDB cluster by scaling your underlying Kubernetes cluster; this is managed outside of InfluxDB. The process of scaling your Kubernetes cluster depends on your underlying Kubernetes provider. You can also use Kubernetes autoscaling to automatically scale your cluster as needed.

Router

The Router can be scaled both vertically and horizontally. Horizontal scaling increases write throughput and is typically the most effective scaling strategy for the Router. Vertical scaling (specifically increased CPU) improves the Router’s ability to parse incoming line protocol with lower latency.
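For example, a sketch of horizontally scaling the Router by raising its replica count in the AppInstance resource (the replica count shown is an illustrative assumption):

spec:
  package:
    spec:
      resources:
        router:
          requests:
            # ...
            replicas: 4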

Ingester

The Ingester can be scaled both vertically and horizontally. Vertical scaling increases write throughput and is typically the most effective scaling strategy for the Ingester.

Ingester storage volume

Ingesters use an attached storage volume to store the Write-Ahead Log (WAL). With more storage available, Ingesters can keep bigger WAL buffers, which improves query performance and reduces pressure on the Compactor. Storage speed also helps with query performance.

Configure the storage volume attached to Ingester pods in the spec.package.spec.ingesterStorage property of your AppInstance resource or, if using Helm, the ingesterStorage property of your values.yaml.

View example Ingester storage configuration
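For reference, a minimal sketch of an Ingester storage configuration, assuming the ingesterStorage object accepts a storage class name and a volume size (the class name and size shown are illustrative; use values supported by your Kubernetes provider):

AppInstance (myinfluxdb.yml):

spec:
  package:
    spec:
      ingesterStorage:
        storageClassName: gp3  # Storage class available in your Kubernetes cluster (assumed example)
        storage: 2Gi           # Size of the volume attached to each Ingester pod

Helm (values.yaml):

ingesterStorage:
  storageClassName: gp3
  storage: 2Gi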

Querier

The Querier can be scaled both vertically and horizontally. Horizontal scaling increases query throughput to handle more concurrent queries. Vertical scaling improves the Querier’s ability to process computationally intensive queries.

Compactor

The Compactor can be scaled both vertically and horizontally. Because compaction is a compute-heavy process, vertical scaling (especially increasing the available CPU) is the most effective scaling strategy for the Compactor. Horizontal scaling increases compaction throughput, but not as efficiently as vertical scaling.
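For example, a sketch of vertically scaling the Compactor by increasing the CPU and memory assigned to its pods in the AppInstance resource (the values shown are illustrative assumptions):

spec:
  package:
    spec:
      resources:
        compactor:
          requests:
            cpu: "2000m"
            memory: "4Gi"
          limits:
            cpu: "4000m"
            memory: "8Gi"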

Garbage collector

The Garbage collector can be scaled vertically. It is a lightweight process that typically doesn't require many system resources, but if you begin to see high resource consumption on the garbage collector, you can scale it vertically to address the added workload.

Catalog

Scaling strategies available for the Catalog depend on the PostgreSQL-compatible database used to run the catalog. All support vertical scaling. Most support horizontal scaling for redundancy and failover.

Object store

Scaling strategies available for the Object store depend on the underlying object storage services used to run the object store. Most support horizontal scaling for redundancy, failover, and increased capacity.

