Configure your InfluxDB cluster using Helm
Manage your InfluxDB Clustered deployments using Kubernetes and apply configuration settings using a YAML configuration file.
The InfluxDB Clustered Helm chart
provides an alternative method for deploying your InfluxDB cluster using
Helm. It acts as a wrapper for the InfluxDB AppInstance
resource. When using Helm, apply configuration options in a values.yaml file on your local machine.
InfluxData provides the following items:

- influxdb-docker-config.json: an authenticated Docker configuration file. The InfluxDB Clustered software is in a secure container registry. This file grants access to the collection of container images required to install InfluxDB Clustered.
Configuration data
When ready to configure your InfluxDB cluster, have the following information available:
- InfluxDB cluster hostname: the hostname Kubernetes uses to expose InfluxDB API endpoints
- PostgreSQL-style data source name (DSN): used to access your PostgreSQL-compatible database that stores the InfluxDB Catalog.
- Object store credentials (AWS S3 or S3-compatible)
- Endpoint URL
- Access key
- Bucket name
- Region (required for S3, may not be required for other object stores)
- Local storage information (for ingester pods)
- Storage class
- Storage size
InfluxDB is deployed to a Kubernetes namespace which, throughout the following installation procedure, is referred to as the target namespace. For simplicity, we assume this namespace is influxdb; however, you may use any name you like.
Set namespaceOverride if using a namespace other than influxdb
If you use a namespace name other than influxdb, update the namespaceOverride field in your values.yaml to use your custom namespace name.
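For example, a values.yaml that deploys into a hypothetical namespace named my-influxdb would include:

```yaml
# "my-influxdb" is a hypothetical namespace name; replace it with your own
namespaceOverride: my-influxdb
```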
AppInstance resource
The InfluxDB installation, update, and upgrade processes are driven by editing and applying a Kubernetes custom resource, defined by a custom resource definition (CRD), called AppInstance. The AppInstance CRD is included in the InfluxDB Clustered Helm chart and can be configured by applying custom settings in the values.yaml included in the chart.
The AppInstance resource contains key information, such as:
- Name of the target namespace
- Version of the InfluxDB package
- Reference to the InfluxDB container registry pull secrets
- Hostname where the InfluxDB API is exposed
- Parameters to connect to external prerequisites
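Most of this information maps onto top-level fields in the chart's values.yaml. The following is a partial sketch only, with placeholder values; consult the base values.yaml you download later in this guide for the exact field structure:

```yaml
# Sketch with placeholder values; not a complete configuration
namespaceOverride: influxdb        # target namespace
imagePullSecrets:
  - name: gar-docker-secret        # container registry pull secret
ingress:
  hosts:
    - cluster-host.com             # hostname where the InfluxDB API is exposed
```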
kubecfg kubit operator
The InfluxDB Clustered Helm chart also includes the kubecfg kubit operator (maintained by InfluxData), which simplifies the installation and management of the InfluxDB Clustered package. It manages the application of the jsonnet templates used to install, manage, and update an InfluxDB cluster.
If you already installed the kubecfg kubit operator separately when setting up prerequisites for your cluster, set skipOperator to true in your values.yaml:

```yaml
skipOperator: true
```
Configure your cluster
- Install Helm
- Create a values.yaml file
- Configure access to the InfluxDB container registry
- Modify the configuration file to point to prerequisites
Install Helm
If you haven’t already, install Helm on your local machine.
Create a values.yaml file
Download or copy the base values.yaml for the InfluxDB Clustered Helm chart from GitHub and store it locally. For example, if using curl:

```shell
curl -O https://raw.githubusercontent.com/influxdata/helm-charts/master/charts/influxdb3-clustered/values.yaml
```
Or you can copy the default values.yaml directly from GitHub.
Configure access to the InfluxDB container registry
The provided influxdb-docker-config.json grants access to a collection of container images required to run InfluxDB Clustered. Your Kubernetes cluster needs access to the container registry to pull down and install InfluxDB.
When pulling images, there are two main scenarios:
- You have a Kubernetes cluster that can pull from the InfluxData container registry.
- You run in an environment with no network interfaces (“air-gapped”) and you can only access a private container registry.
In both scenarios, you need a valid container registry secret file. Use crane to create a container registry secret file.
- Install crane
- Use the following command to create a container registry secret file and retrieve the necessary secrets:
```shell
mkdir /tmp/influxdbsecret
cp influxdb-docker-config.json /tmp/influxdbsecret/config.json
DOCKER_CONFIG=/tmp/influxdbsecret \
  crane manifest \
  us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:PACKAGE_VERSION
```
Replace PACKAGE_VERSION with your InfluxDB Clustered package version.
If your Docker configuration is valid and you’re able to connect to the container registry, the command succeeds and the output is the JSON manifest for the Docker image, similar to the following:
If there’s a problem with the Docker configuration, crane won’t retrieve the manifest and the output is similar to the following error:
```
Error: fetching manifest us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:<package-version>: GET https://us-docker.pkg.dev/v2/token?scope=repository%3Ainfluxdb2-artifacts%2Fclustered%2Finfluxdb%3Apull&service=: DENIED: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/influxdb2-artifacts/locations/us/repositories/clustered" (or it may not exist)
```
Public registry (non-air-gapped)
To pull from the InfluxData registry, you need to create a Kubernetes secret in the target namespace.
```shell
kubectl create secret docker-registry gar-docker-secret \
  --from-file=.dockerconfigjson=influxdb-docker-config.json \
  --namespace influxdb
```
If successful, the output is the following:

```
secret/gar-docker-secret created
```
By default, this secret is named gar-docker-secret. If you change the name of this secret, you must also change the value of the imagePullSecrets.name field in your values.yaml.
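To confirm the secret exists in the target namespace, you can list it with kubectl. A quick check, assuming the default secret and namespace names used above:

```
kubectl get secret gar-docker-secret --namespace influxdb
```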
Private registry (air-gapped)
If your Kubernetes cluster can’t use a public network to download container images from our container registry, do the following:
- Copy the images from the InfluxDB registry to your own private registry.
- Configure your AppInstance resource with a reference to your private registry name.
- Provide credentials to your private registry.
The list of images that you need to copy is included in the package metadata. You can obtain it with any standard OCI image inspection tool. For example:
```shell
DOCKER_CONFIG=/tmp/influxdbsecret \
  crane config \
  us-docker.pkg.dev/influxdb2-artifacts/clustered/influxdb:PACKAGE_VERSION \
  | jq -r '.metadata["oci.image.list"].images[]' \
  > /tmp/images.txt
```
The output is a list of image names, similar to the following:
```
us-docker.pkg.dev/influxdb2-artifacts/idpe/idpe-cd-ioxauth@sha256:5f015a7f28a816df706b66d59cb9d6f087d24614f485610619f0e3a808a73864
us-docker.pkg.dev/influxdb2-artifacts/iox/iox@sha256:b59d80add235f29b806badf7410239a3176bc77cf2dc335a1b07ab68615b870c
...
```
Use crane to copy the images to your private registry:

```shell
</tmp/images.txt xargs -I% crane cp % REGISTRY_HOSTNAME/%
```

Replace REGISTRY_HOSTNAME with the hostname of your private registry, for example: myregistry.mydomain.io
Set the images.registryOverride field in your values.yaml to the location of your private registry, for example:

```yaml
images:
  registryOverride: REGISTRY_HOSTNAME
```
Modify the configuration file to point to prerequisites
Update your values.yaml file with the credentials necessary to connect your cluster to your prerequisites.
- Configure ingress
- Configure the object store
- Configure the catalog database
- Configure local storage for ingesters
Configure ingress
To configure ingress, provide values for the following fields in your values.yaml:
- ingress.hosts: Cluster hostnames. Provide the hostnames that Kubernetes should use to expose the InfluxDB API endpoints, for example: cluster-host.com. You can provide multiple hostnames. The ingress layer accepts incoming requests for all listed hostnames. This can be useful if you want to have distinct paths for your internal and external traffic.

  You are responsible for configuring and managing DNS. Options include:

  - Manually managing DNS records
  - Using external-dns to synchronize exposed Kubernetes services and ingresses with DNS providers.
- ingress.tlsSecretName: TLS certificate secret name (optional). Provide the name of the secret that contains your TLS certificate and key. The examples in this guide use the name ingress-tls. The tlsSecretName field is optional. You may want to use it if you already have a TLS certificate for your DNS name. Writing to and querying data from InfluxDB does not require TLS. For simplicity, you can wait to enable TLS before moving into production. For more information, see Phase 4 of the InfluxDB Clustered installation process, Secure your cluster.
```yaml
ingress:
  hosts:
    - cluster-host.com
  tlsSecretName: ingress-tls
```
Configure the object store
To connect your InfluxDB cluster to your object store, provide the required credentials in your values.yaml. The credentials required depend on your object storage provider.
If using Amazon S3 or an S3-compatible object store, provide values for the following fields in your values.yaml:
- objectStore
  - bucket: Object storage bucket name
  - s3:
    - endpoint: Object storage endpoint URL
    - allowHttp: Set to true to allow unencrypted HTTP connections
    - accessKey.value: Object storage access key (can use a value literal or valueFrom to retrieve the value from a secret)
    - secretKey.value: Object storage secret key (can use a value literal or valueFrom to retrieve the value from a secret)
    - region: Object storage region
```yaml
objectStore:
  # Bucket that the Parquet files will be stored in
  bucket: S3_BUCKET_NAME
  s3:
    # URL for S3 Compatible object store
    endpoint: S3_URL
    # Set to true to allow communication over HTTP (instead of HTTPS)
    allowHttp: 'true'
    # S3 Access Key
    # This can also be provided as a valueFrom: secretKeyRef:
    accessKey:
      value: S3_ACCESS_KEY
    # S3 Secret Key
    # This can also be provided as a valueFrom: secretKeyRef:
    secretKey:
      value: S3_SECRET_KEY
    # This value is required for AWS S3, it may or may not be required for other providers.
    region: S3_REGION
```
Replace the following:

- S3_BUCKET_NAME: Object storage bucket name
- S3_URL: Object storage endpoint URL
- S3_ACCESS_KEY: Object storage access key
- S3_SECRET_KEY: Object storage secret key
- S3_REGION: Object storage region
If using Azure Blob Storage as your object store, provide values for the following fields in your values.yaml:
- objectStore
  - bucket: Azure Blob Storage bucket name
  - azure:
    - accessKey.value: Azure Blob Storage access key (can use a value literal or valueFrom to retrieve the value from a secret)
    - account.value: Azure Blob Storage account ID (can use a value literal or valueFrom to retrieve the value from a secret)
```yaml
objectStore:
  # Bucket that the Parquet files will be stored in
  bucket: AZURE_BUCKET_NAME
  azure:
    # Azure Blob Storage Access Key
    # This can also be provided as a valueFrom:
    accessKey:
      value: AZURE_ACCESS_KEY
    # Azure Blob Storage Account
    # This can also be provided as a valueFrom: secretKeyRef:
    account:
      value: AZURE_STORAGE_ACCOUNT
```
Replace the following:

- AZURE_BUCKET_NAME: Object storage bucket name
- AZURE_ACCESS_KEY: Azure Blob Storage access key
- AZURE_STORAGE_ACCOUNT: Azure Blob Storage account ID
If using Google Cloud Storage as your object store, provide values for the following fields in your values.yaml:
- objectStore
  - bucket: Google Cloud Storage bucket name
  - google:
    - serviceAccountSecret.name: the Kubernetes Secret name that contains your Google IAM service account credentials
    - serviceAccountSecret.key: the key inside of your Google IAM secret that contains your Google IAM account credentials
```yaml
objectStore:
  # Bucket that the Parquet files will be stored in
  bucket: GOOGLE_BUCKET_NAME
  google:
    # This section is not needed if you are using GKE Workload Identity.
    # It is only required to use explicit service account secrets (JSON files)
    serviceAccountSecret:
      # Kubernetes Secret name containing the credentials for a Google IAM
      # Service Account.
      name: GOOGLE_IAM_SECRET
      # The key within the Secret containing the credentials.
      key: GOOGLE_CREDENTIALS_KEY
```
Replace the following:

- GOOGLE_BUCKET_NAME: Google Cloud Storage bucket name
- GOOGLE_IAM_SECRET: the Kubernetes Secret name that contains your Google IAM service account credentials
- GOOGLE_CREDENTIALS_KEY: the key inside of your Google IAM secret that contains your Google IAM account credentials
Configure the catalog database
The InfluxDB catalog is a PostgreSQL-compatible relational database that stores
metadata about your time series data.
To connect your InfluxDB cluster to your PostgreSQL-compatible database, provide values for the following fields in your values.yaml:
We recommend storing sensitive credentials, such as your PostgreSQL-compatible DSN, as secrets in your Kubernetes cluster.
- catalog.dsn
  - SecretName: Secret name
  - SecretKey: Key in the secret that contains the DSN
```yaml
catalog:
  # Secret name and key within the secret containing the dsn string to connect
  # to the catalog
  dsn:
    # Kubernetes Secret name containing the dsn for the catalog.
    SecretName: SECRET_NAME
    # The key within the Secret containing the dsn.
    SecretKey: SECRET_KEY
```
Replace the following:

- SECRET_NAME: Name of the secret containing your PostgreSQL-compatible DSN
- SECRET_KEY: Key in the secret that references your PostgreSQL-compatible DSN
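For example, you could store your DSN as a Kubernetes Secret with kubectl. In this sketch, the secret name influxdb-dsn and key name dsn are hypothetical; they must match the SecretName and SecretKey values in your values.yaml:

```
kubectl create secret generic influxdb-dsn \
  --from-literal=dsn='postgres://username:passw0rd@mydomain:5432/influxdb' \
  --namespace influxdb
```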
Percent-encode special symbols in PostgreSQL DSNs
Special symbols in PostgreSQL DSNs should be percent-encoded to ensure they are parsed correctly by InfluxDB Clustered. This is important to consider when using DSNs containing auto-generated passwords which may include special symbols to make passwords more secure.
A DSN with special characters that are not percent-encoded results in an error similar to:

```
Catalog DSN error: A catalog error occurred: unhandled external error: error with configuration: invalid port number
```
For more information, see the PostgreSQL Connection URI docs.
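For example, the reserved characters in a hypothetical password p@ss/w0rd can be percent-encoded with jq's @uri filter before embedding the password in the DSN. This is a sketch; any URL-encoding tool works:

```shell
# Hypothetical password containing reserved characters (@ and /)
PASSWORD='p@ss/w0rd'
# jq's @uri filter percent-encodes everything outside the unreserved set
ENCODED=$(jq -rn --arg v "$PASSWORD" '$v | @uri')
echo "$ENCODED"  # p%40ss%2Fw0rd
# Embed the encoded password in the DSN
echo "postgres://username:${ENCODED}@mydomain:5432/influxdb"
```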
PostgreSQL instances without TLS or SSL
If your PostgreSQL-compatible instance runs without TLS or SSL, you must include the sslmode=disable parameter in the DSN. For example:

```
postgres://username:passw0rd@mydomain:5432/influxdb?sslmode=disable
```
Configure local storage for ingesters
InfluxDB ingesters require local storage to store the Write Ahead Log (WAL) for
incoming data.
To connect your InfluxDB cluster to local storage, provide values for the following fields in your values.yaml:
- ingesterStorage
  - storageClassName: Kubernetes storage class. This differs based on the Kubernetes environment and desired storage characteristics.
  - storage: Storage size. We recommend a minimum of 2 gibibytes (2Gi).
```yaml
ingesterStorage:
  # (Optional) Set the storage class. This will differ based on the K8s
  # environment and desired storage characteristics.
  # If not set, the default storage class will be used.
  storageClassName: STORAGE_CLASS
  # Set the storage size (minimum 2Gi recommended)
  storage: STORAGE_SIZE
```
Replace the following:

- STORAGE_CLASS: Kubernetes storage class
- STORAGE_SIZE: Storage size (example: 2Gi)