Use the InfluxDB AppInstance resource configuration

Configure your InfluxDB cluster by editing configuration options in the AppInstance resource. InfluxData provides the following files and credentials for configuring your cluster:

  • influxdb-docker-config.json: an authenticated Docker configuration file. The InfluxDB Clustered software is in a secure container registry. This file grants access to the collection of container images required to install InfluxDB Clustered.

  • A tarball that contains the following files:

    • app-instance-schema.json: Defines the schema that you can use to validate example-customer.yml and your cluster configuration in tools like Visual Studio Code (VS Code).

    • example-customer.yml: Configuration for your InfluxDB cluster that includes information about prerequisites.

      The following sections refer to a myinfluxdb.yml file that you copy from example-customer.yml and edit for your InfluxDB cluster.

Configuration data

When ready to configure your InfluxDB cluster, have the following information available:

  • InfluxDB cluster hostname: the hostname Kubernetes uses to expose InfluxDB API endpoints
  • PostgreSQL-style data source name (DSN): used to access your PostgreSQL-compatible database that stores the InfluxDB Catalog.
  • Object store credentials (AWS S3 or S3-compatible)
    • Endpoint URL
    • Access key
    • Bucket name
    • Region (required for S3, might not be required for other object stores)
  • Local or attached storage information (for ingester pods)
    • Storage class
    • Storage size

You deploy InfluxDB Clustered to a Kubernetes namespace, referred to as the target namespace in the following procedures. For simplicity, these procedures assume the namespace is named influxdb, but you can use any name you like.
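If the target namespace doesn't exist yet, you can create it with kubectl. This minimal example assumes the default influxdb name; adjust it if you use a different namespace:

kubectl create namespace influxdb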

To manage InfluxDB Clustered installations, updates, and upgrades, you edit and apply an AppInstance Kubernetes custom resource, which you define in a YAML file that conforms to the app-instance-schema.json schema.

InfluxDB Clustered includes example-customer.yml as a configuration template.

The AppInstance resource contains key information, such as:

  • Name of the target namespace
  • Version of the InfluxDB package
  • Reference to the InfluxDB container registry pull secrets
  • Hostname of your cluster’s InfluxDB API
  • Parameters to connect to external prerequisites
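For orientation, the following is a rough sketch of how those pieces fit together in the AppInstance resource. It only assembles fields that are covered later on this page; the metadata.name value is a hypothetical example, and example-customer.yml and app-instance-schema.json remain the authoritative references for the full structure:

apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
metadata:
  # Hypothetical resource name; the target namespace is covered below
  name: influxdb
  namespace: influxdb
spec:
  # Reference to the InfluxDB container registry pull secret
  imagePullSecrets:
    - name: gar-docker-secret
  package:
    # The InfluxDB package version is also set in this resource;
    # see example-customer.yml for the exact field.
    spec:
      # Hostname of your cluster's InfluxDB API
      ingress:
        hosts:
          - cluster-host.com
      # Parameters to connect to external prerequisites
      objectStore:
        # ...
      catalog:
        # ...
      ingesterStorage:
        # ...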

Update your namespace if using a namespace other than influxdb

If you use a namespace name other than influxdb, update the metadata.namespace property in your myinfluxdb.yml to use your custom namespace name.
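For example, assuming a hypothetical custom namespace named my-influxdb-namespace:

apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
metadata:
  namespace: my-influxdb-namespace
# ...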

Configure your cluster

  1. Create a cluster configuration file
  2. Configure access to the InfluxDB container registry
  3. Modify the configuration file to point to prerequisites

Create a cluster configuration file

Copy the provided example-customer.yml file to create a new configuration file specific to your InfluxDB cluster. For example, myinfluxdb.yml.

cp example-customer.yml myinfluxdb.yml

Use VS Code to edit your configuration file

We recommend using Visual Studio Code (VS Code) to edit your myinfluxdb.yml configuration file due to its JSON Schema support, including autocompletion and validation features that help when editing your InfluxDB configuration. InfluxData provides an app-instance-schema.json JSON schema file that VS Code can use to validate your configuration settings.
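One way to associate the schema with your file is an inline modeline at the top of myinfluxdb.yml. This is a sketch that assumes you use the Red Hat YAML extension (yaml-language-server) in VS Code and that app-instance-schema.json is in the same directory as your configuration file:

# yaml-language-server: $schema=app-instance-schema.json
apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...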

Configure access to the InfluxDB container registry

The provided influxdb-docker-config.json grants access to a collection of container images required to run InfluxDB Clustered. Your Kubernetes cluster needs access to the container registry to pull and install InfluxDB.

When pulling the InfluxDB Clustered image, you’re likely in one of two scenarios:

  • Your Kubernetes cluster can pull from the InfluxData container registry.
  • Your cluster runs in an environment without network interfaces (“air-gapped”) and can only access a private container registry.

In both scenarios, you need a valid pull secret.

Public registry (non-air-gapped)

To pull from the InfluxData registry, you need to create a Kubernetes secret in the target namespace.

kubectl create secret docker-registry gar-docker-secret \
  --from-file=.dockerconfigjson=influxdb-docker-config.json \
  --namespace influxdb

If successful, the output is the following:

secret/gar-docker-secret created

By default, the name of the secret is gar-docker-secret. If you change the name of the secret, you must also change the value of the spec.imagePullSecrets field in the AppInstance custom resource to match.
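For example, if you named the secret my-registry-pull-secret (a hypothetical name), the AppInstance resource would reference it like this:

apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  imagePullSecrets:
    - name: my-registry-pull-secret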

Private registry (air-gapped)

If your Kubernetes cluster can’t use a public network to download container images from the InfluxData container registry, do the following:

  1. Copy the images from the InfluxData registry to your own private registry.
  2. Configure your AppInstance resource with a reference to your private registry name.
  3. Provide credentials to your private registry.

Copy the images

We recommend using crane to copy images into your private registry.

  1. Install crane for your system.
  2. Use the following command to create a container registry secret file and retrieve the necessary secrets:
mkdir /tmp/influxdbsecret
cp influxdb-docker-config.json /tmp/influxdbsecret/config.json
DOCKER_CONFIG=/tmp/influxdbsecret \
  crane manifest \
  us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:PACKAGE_VERSION

Replace PACKAGE_VERSION with your InfluxDB Clustered package version.


If your Docker configuration is valid and you're able to connect to the container registry, the command succeeds and the output is the JSON manifest for the Docker image.


If there’s a problem with the Docker configuration, crane won’t retrieve the manifest and the output is similar to the following error:

Error: fetching manifest us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:<package-version>: GET https://us-docker.pkg.dev/v2/token?scope=repository%3Ainfluxdb2-artifacts%2Fclustered%2Finfluxdb%3Apull&service=: DENIED: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/influxdb2-artifacts/locations/us/repositories/clustered" (or it may not exist)

The list of images that you need to copy is included in the package metadata. You can obtain it with any standard OCI image inspection tool. For example:

DOCKER_CONFIG=/tmp/influxdbsecret \
  crane config \
  us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:PACKAGE_VERSION \
  | jq -r '.metadata["oci.image.list"].images[]' > /tmp/images.txt

The output is a list of image names, similar to the following:

us-docker.pkg.dev/influxdb2-artifacts/idpe/idpe-cd-ioxauth@sha256:5f015a7f28a816df706b66d59cb9d6f087d24614f485610619f0e3a808a73864
us-docker.pkg.dev/influxdb2-artifacts/iox/iox@sha256:b59d80add235f29b806badf7410239a3176bc77cf2dc335a1b07ab68615b870c
...

Use crane to copy the images to your private registry:

</tmp/images.txt xargs -I% crane cp % REGISTRY_HOSTNAME/%

Replace REGISTRY_HOSTNAME with the hostname of your private registry. For example:

myregistry.mydomain.io

Configure your AppInstance

Set the spec.package.spec.images.registryOverride field in your myinfluxdb.yml to the location of your private registry. For example:

apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      images:
        registryOverride: REGISTRY_HOSTNAME

Provide credentials to your private registry

If your private container registry requires pull secrets to access images, create the required Kubernetes secrets, and then configure them in your AppInstance resource. For example:

apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  imagePullSecrets:
    - name: PULL_SECRET_NAME
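As a sketch, if your private registry uses username and password authentication, you could create the pull secret with kubectl. PULL_SECRET_NAME, REGISTRY_HOSTNAME, REGISTRY_USERNAME, and REGISTRY_PASSWORD are placeholders for your own values:

kubectl create secret docker-registry PULL_SECRET_NAME \
  --docker-server=REGISTRY_HOSTNAME \
  --docker-username=REGISTRY_USERNAME \
  --docker-password=REGISTRY_PASSWORD \
  --namespace influxdb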

Modify the configuration file to point to prerequisites

Update your myinfluxdb.yml configuration file with credentials necessary to connect your cluster to your prerequisites.

Configure ingress

To configure ingress, provide values for the following fields in your myinfluxdb.yml configuration file:

  • spec.package.spec.ingress.hosts: Cluster hostnames

    Provide the hostnames that Kubernetes should use to expose the InfluxDB API endpoints. For example: cluster-host.com.

    You can provide multiple hostnames. The ingress layer accepts incoming requests for all listed hostnames. This can be useful if you want to have distinct paths for your internal and external traffic.

    You are responsible for configuring and managing DNS. Options include:

    • Manually managing DNS records
    • Using external-dns to synchronize exposed Kubernetes services and ingresses with DNS providers.
  • spec.package.spec.ingress.tlsSecretName: TLS certificate secret name

    (Optional): Provide the name of the secret that contains your TLS certificate and key. The examples in this guide use the name ingress-tls.

    Writing to and querying data from InfluxDB does not require TLS. For simplicity, you can wait and enable TLS just before moving into production. For more information, see Phase 4 of the InfluxDB Clustered installation process, Secure your cluster.

apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      # ...
      ingress:
        hosts:
          - cluster-host.com
        tlsSecretName: ingress-tls
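If you already have a TLS certificate and key for your cluster hostname and want to enable TLS now, one way to create the ingress-tls secret is with kubectl. The certificate and key file paths are placeholders:

kubectl create secret tls ingress-tls \
  --cert=/path/to/tls.crt \
  --key=/path/to/tls.key \
  --namespace influxdb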

Configure the object store

To connect your InfluxDB cluster to your object store, provide the required credentials in your myinfluxdb.yml. The credentials required depend on your object storage provider.

If using Amazon S3 or an S3-compatible object store, provide values for the following fields in your myinfluxdb.yml:

  • spec.package.spec.objectStore
    • bucket: Object storage bucket name
    • s3
      • endpoint: Object storage endpoint URL
      • allowHttp: Set to true to allow unencrypted HTTP connections
      • accessKey.value: Object storage access key
      • secretKey.value: Object storage secret key
      • region: Object storage region
apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      objectStore:
        s3:
          # URL for S3 Compatible object store
          endpoint: S3_URL

          # Set to true to allow communication over HTTP (instead of HTTPS)
          allowHttp: 'true'

          # S3 Access Key
          # This can also be provided as a valueFrom: secretKeyRef:
          accessKey:
            value: S3_ACCESS_KEY

          # S3 Secret Key
          # This can also be provided as a valueFrom: secretKeyRef:
          secretKey:
            value: S3_SECRET_KEY

          # This value is required for AWS S3, it may or may not be required
          # for other providers.
          region: S3_REGION

        # Bucket that the Parquet files will be stored in
        bucket: S3_BUCKET_NAME

Replace the following:

  • S3_URL: Object storage endpoint URL
  • S3_ACCESS_KEY: Object storage access key
  • S3_SECRET_KEY: Object storage secret key
  • S3_BUCKET_NAME: Object storage bucket name
  • S3_REGION: Object storage region
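As noted in the comments above, you can also load the access key and secret key from a Kubernetes secret instead of embedding literal values. The following sketch assumes a hypothetical secret named object-store-credentials that contains access-key and secret-key entries:

apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      objectStore:
        s3:
          accessKey:
            valueFrom:
              secretKeyRef:
                name: object-store-credentials
                key: access-key
          secretKey:
            valueFrom:
              secretKeyRef:
                name: object-store-credentials
                key: secret-key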

If using Azure Blob Storage as your object store, provide values for the following fields in your myinfluxdb.yml:

  • spec.package.spec.objectStore
    • bucket: Azure Blob Storage bucket name
    • azure:
      • accessKey.value: Azure Blob Storage access key (can use a value literal or valueFrom to retrieve the value from a secret)
      • account.value: Azure Blob Storage account ID (can use a value literal or valueFrom to retrieve the value from a secret)
apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      objectStore:
        # Bucket that the Parquet files will be stored in
        bucket: AZURE_BUCKET_NAME

        azure:
          # Azure Blob Storage Access Key
          # This can also be provided as a valueFrom:
          accessKey:
            value: AZURE_ACCESS_KEY

          # Azure Blob Storage Account
          # This can also be provided as a valueFrom: secretKeyRef:
          account:
            value: AZURE_STORAGE_ACCOUNT

Replace the following:

  • AZURE_BUCKET_NAME: Object storage bucket name
  • AZURE_ACCESS_KEY: Azure Blob Storage access key
  • AZURE_STORAGE_ACCOUNT: Azure Blob Storage account ID

If using Google Cloud Storage as your object store, provide values for the following fields in your myinfluxdb.yml:

  • spec.package.spec.objectStore
    • bucket: Google Cloud Storage bucket name
    • google:
      • serviceAccountSecret.name: the Kubernetes Secret name that contains your Google IAM service account credentials
      • serviceAccountSecret.key: the key inside of your Google IAM secret that contains your Google IAM account credentials
apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      objectStore:
        # Bucket that the Parquet files will be stored in
        bucket: GOOGLE_BUCKET_NAME

        google:
          # This section is not needed if you are using GKE Workload Identity.
          # It is only required to use explicit service account secrets (JSON files).
          serviceAccountSecret:
            # Kubernetes Secret name containing the credentials for a Google IAM
            # Service Account.
            name: GOOGLE_IAM_SECRET
            # The key within the Secret containing the credentials.
            key: GOOGLE_CREDENTIALS_KEY

Replace the following:

  • GOOGLE_BUCKET_NAME: Google Cloud Storage bucket name
  • GOOGLE_IAM_SECRET: the Kubernetes Secret name that contains your Google IAM service account credentials
  • GOOGLE_CREDENTIALS_KEY: the key inside of your Google IAM secret that contains your Google IAM account credentials
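If you use explicit service account credentials (rather than GKE Workload Identity), one way to create the referenced secret is from a downloaded JSON key file. This is a sketch; the file path is a placeholder, and the --from-file key must match GOOGLE_CREDENTIALS_KEY:

kubectl create secret generic GOOGLE_IAM_SECRET \
  --from-file=GOOGLE_CREDENTIALS_KEY=/path/to/service-account-key.json \
  --namespace influxdb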

Configure the catalog database

The InfluxDB catalog is a PostgreSQL-compatible relational database that stores metadata about your time series data. To connect your InfluxDB cluster to your PostgreSQL-compatible database, provide values for the following fields in your myinfluxdb.yml configuration file:

We recommend storing sensitive credentials, such as your PostgreSQL-compatible DSN, as secrets in your Kubernetes cluster.

  • spec.package.spec.catalog.dsn.valueFrom.secretKeyRef
    • .name: Secret name
    • .key: Key in the secret that contains the DSN
apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      catalog:
        # A postgresql style DSN that points to a postgresql compatible database.
        # postgres://[user[:password]@][netloc][:port][/dbname][?param1=value1&...]
        dsn:
          valueFrom:
            secretKeyRef:
              name: SECRET_NAME
              key: SECRET_KEY

Replace the following:

  • SECRET_NAME: Name of the secret containing your PostgreSQL-compatible DSN
  • SECRET_KEY: Key in the secret that references your PostgreSQL-compatible DSN
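For example, one way to create that secret with kubectl is shown below; the secret name, key, and DSN are placeholders, and the namespace should match your target namespace:

kubectl create secret generic SECRET_NAME \
  --from-literal=SECRET_KEY='postgres://username:password@hostname:5432/influxdb' \
  --namespace influxdb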

Percent-encode special symbols in PostgreSQL DSNs

Percent-encode special symbols in PostgreSQL DSNs to ensure InfluxDB Clustered parses them correctly. Consider this when using DSNs with auto-generated passwords that include special symbols for added security.

If a DSN contains special characters that aren’t percent-encoded, you might encounter an error similar to the following:

Catalog DSN error: A catalog error occurred: unhandled external error: error with configuration: invalid port number

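For example, a hypothetical password such as pass#word! must have its special characters encoded before it's placed in the DSN: # becomes %23 and ! becomes %21:

postgres://username:pass%23word%21@mydomain:5432/influxdb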

For more information, see the PostgreSQL Connection URI docs.

PostgreSQL instances without TLS or SSL

If your PostgreSQL-compatible instance runs without TLS or SSL, you must include the sslmode=disable parameter in the DSN. For example:

postgres://username:passw0rd@mydomain:5432/influxdb?sslmode=disable

Configure local storage for ingesters

InfluxDB ingesters require local storage to store the Write Ahead Log (WAL) for incoming data. To connect your InfluxDB cluster to local storage, provide values for the following fields in your myinfluxdb.yml configuration file:

  • spec.package.spec.ingesterStorage
    • .storageClassName: Kubernetes storage class. This differs based on the Kubernetes environment and desired storage characteristics.
    • .storage: Storage size. We recommend a minimum of 2 gibibytes (2Gi).
apiVersion: kubecfg.dev/v1alpha1
kind: AppInstance
# ...
spec:
  package:
    spec:
      ingesterStorage:
        storageClassName: STORAGE_CLASS
        storage: STORAGE_SIZE

Replace the following:

  • STORAGE_CLASS: Kubernetes storage class name
  • STORAGE_SIZE: Storage size (for example, 2Gi)
