Python client library for InfluxDB v3

The InfluxDB v3 influxdb3-python client library integrates InfluxDB Cloud Serverless write and query operations with Python scripts and applications.

InfluxDB client libraries provide configurable batch writing of data to InfluxDB Cloud Serverless. Client libraries can be used to construct line protocol data, transform data from other formats to line protocol, and batch write line protocol data to InfluxDB HTTP APIs.

InfluxDB v3 client libraries can query InfluxDB Cloud Serverless using SQL or InfluxQL. The influxdb3-python Python client library wraps the Apache Arrow pyarrow.flight client in a convenient InfluxDB v3 interface for executing SQL and InfluxQL queries, requesting server metadata, and retrieving data from InfluxDB Cloud Serverless using the Flight protocol with gRPC.

Code samples on this page use the Get started home sensor sample data.

Installation

Install the client library and dependencies using pip:

pip install influxdb3-python

Importing the module

The influxdb3-python client library package provides the influxdb_client_3 module.

Import the module:

import influxdb_client_3

Import specific classes and functions from the module:

from influxdb_client_3 import InfluxDBClient3, Point, WriteOptions

API reference

The influxdb_client_3 module includes the following classes and functions.

Classes

Class InfluxDBClient3

Provides an interface for interacting with InfluxDB APIs for writing and querying data.

The InfluxDBClient3 constructor initializes and returns a client instance with the following:

  • A singleton write client configured for writing to the database.
  • A singleton Flight client configured for querying the database.

Parameters

  • host (string): The host URL of the InfluxDB instance.
  • database (string): The bucket to use for writing and querying.
  • token (string): An API token with read/write permissions.
  • Optional write_client_options (dict): Options to use when writing to InfluxDB. If None, writes are synchronous.
  • Optional flight_client_options (dict): Options to use when querying InfluxDB.

Writing modes

When writing data, the client uses one of the following modes:

Synchronous writing

Default. When no write_client_options are provided during the initialization of InfluxDBClient3, writes are synchronous. When writing data in synchronous mode, the client immediately tries to write the provided data to InfluxDB, doesn’t retry failed requests, and doesn’t invoke response callbacks.

Example: initialize a client with synchronous (non-batch) defaults

The following example initializes a client for writing and querying data in an InfluxDB Cloud Serverless database. Given that write_client_options isn’t specified, the client uses the default synchronous writing mode.

from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                        database=f"
BUCKET_NAME
"
,
token=f"
API_TOKEN
"
)

Replace the following:

  • BUCKET_NAME: the name of your InfluxDB Cloud Serverless database
  • API_TOKEN: an InfluxDB Cloud Serverless API token with read/write permissions on the specified database

To explicitly specify synchronous mode, create a client with write_client_options and write_options=SYNCHRONOUS. For example:

from influxdb_client_3 import InfluxDBClient3, write_client_options, SYNCHRONOUS

wco = write_client_options(write_options=SYNCHRONOUS)

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                        database=f"
BUCKET_NAME
"
,
token=f"
API_TOKEN
"
,
write_client_options=wco, flight_client_options=None)

Replace the following:

  • BUCKET_NAME: the name of your InfluxDB Cloud Serverless database
  • API_TOKEN: an InfluxDB Cloud Serverless API token with write permissions on the specified database

Batch writing

Batch writing is particularly useful for efficient bulk data operations. Options include setting batch size, flush intervals, retry intervals, and more.

Batch writing groups multiple writes into a single request to InfluxDB. In batching mode, the client adds the record or records to a batch, and then schedules the batch for writing to InfluxDB. The client writes the batch to InfluxDB after reaching write_client_options.batch_size or write_client_options.flush_interval. If a write fails, the client reschedules the write according to the write_client_options retry options.
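
The following sketch shows how these thresholds map to WriteOptions and write_client_options; the values are illustrative, not defaults:

from influxdb_client_3 import write_client_options, WriteOptions

# Flush a batch after 100 records or after 5 seconds, whichever comes first,
# and retry a failed batch up to 3 times.
write_options = WriteOptions(batch_size=100,
                             flush_interval=5_000,
                             retry_interval=2_000,
                             max_retries=3)

wco = write_client_options(write_options=write_options)
# Pass wco to InfluxDBClient3(write_client_options=wco), as shown in the examples below.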

Configuring write client options

Use WriteOptions and write_client_options to configure batch writing and response handling for the client:

  1. Instantiate WriteOptions. To use batch defaults, call the constructor without specifying parameters.
  2. Call write_client_options and use the write_options parameter to specify the WriteOptions instance from the preceding step. Specify callback parameters (success, error, and retry) to invoke functions on success or error.
  3. Instantiate InfluxDBClient3 and use the write_client_options parameter to specify the dict output from the preceding step.

Example: initialize a client using batch defaults and callbacks

The following example shows how to use batch mode with defaults and specify callback functions for the response status (success, error, or retryable error).

from influxdb_client_3 import (InfluxDBClient3,
                              write_client_options,
                              WriteOptions,
                              InfluxDBError)

# Define callbacks for write responses
def success(self, data: str):
    status = f"Success writing batch: data: {data}"
    assert status.startswith('Success'), f"Expected {status} to be success"

def error(self, data: str, err: InfluxDBError):
    status = f"Error writing batch: config: {self}, data: {data}, error: {err}"
    assert status.startswith('Success'), f"Expected {status} to be success"


def retry(self, data: str, err: InfluxDBError):
    status = f"Retry error writing batch: config: {self}, data: {data}, error: {err}"
    assert status.startswith('Success'), f"Expected {status} to be success"

# Instantiate WriteOptions for batching
write_options = WriteOptions()
wco = write_client_options(success_callback=success,
                            error_callback=error,
                            retry_callback=retry,
                            write_options=write_options)

# Use the with...as statement to ensure the client is properly closed and
# resources are released.
with InfluxDBClient3(host=f"cloud2.influxdata.com",
                     database=f"BUCKET_NAME",
                     token=f"API_TOKEN",
                     write_client_options=wco) as client:

    client.write_file(file='./data/home-sensor-data.csv',
                      timestamp_column='time',
                      tag_columns=["room"],
                      write_precision='s')

Replace the following:

  • BUCKET_NAME: the name of your InfluxDB Cloud Serverless database
  • API_TOKEN: an InfluxDB Cloud Serverless API token with write permissions on the specified database

InfluxDBClient3 instance methods

InfluxDBClient3.write

Writes a record or a list of records to InfluxDB.

Parameters

  • record (record or list): A record or list of records to write. A record can be a Point object, a dict that represents a point, a line protocol string, or a DataFrame.
  • bucket (string): Optional. The bucket to write to. Default is to write to the bucket specified for the client.
  • **kwargs: Additional write options. For example:
    • write_precision (string): Optional. Default is "ns". Specifies the precision ("ms", "s", "us", "ns") for timestamps in record.
    • write_client_options (dict): Optional. Specifies callback functions and options for batch writing mode. To generate the dict, use the write_client_options function.

Example: write a line protocol string

from influxdb_client_3 import InfluxDBClient3

point = "home,room=Living\\ Room temp=21.1,hum=35.9,co=0i 1641024000"

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")
client.write(record=point, write_precision="s")

The following sample code executes an SQL query to retrieve the point:

# Execute an SQL query
table = client.query(query='''SELECT room
                            FROM home
                            WHERE temp=21.1
                              AND time=from_unixtime(1641024000)''')
# table is a pyarrow.Table
room = table[0][0]
assert f"{room}" == 'Living Room', f"Expected {room} to be Living Room"

Replace the following:

  • BUCKET_NAME: the name of your InfluxDB Cloud Serverless bucket
  • API_TOKEN: an InfluxDB Cloud Serverless API token with read/write permissions on the specified bucket

Example: write data using points

The influxdb_client_3.Point class provides an interface for constructing a data point for a measurement and setting fields, tags, and the timestamp for the point. The following example shows how to create a Point object, and then write the data to InfluxDB.

from influxdb_client_3 import Point, InfluxDBClient3

point = Point("home").tag("room", "Kitchen").field("temp", 21.5).field("hum", .25)
client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")
client.write(point)

The following sample code executes an InfluxQL query to retrieve the written data:

# Execute an InfluxQL query
table = client.query(query='''SELECT DISTINCT(temp) as val
                              FROM home
                              WHERE temp > 21.0
                              AND time >= now() - 10m''', language="influxql")
# table is a pyarrow.Table
df = table.to_pandas()
assert 21.5 in df['val'].values, f"Expected value in {df['val']}"

Replace the following:

  • BUCKET_NAME: the name of your InfluxDB Cloud Serverless database
  • API_TOKEN: an InfluxDB Cloud Serverless API token with read/write permissions on the specified database

Example: write data using a dict

InfluxDBClient3 can serialize a dictionary object into line protocol. If you pass a dict to InfluxDBClient3.write, the client expects the dict to have the following point attributes:

  • measurement (string): the measurement name
  • tags (dict): a dictionary of tag key-value pairs
  • fields (dict): a dictionary of field key-value pairs
  • time: the timestamp for the record

The following example shows how to define a dict that represents a point, and then write the data to InfluxDB.

from influxdb_client_3 import InfluxDBClient3

# Using point dictionary structure
points = {
          "measurement": "home",
          "tags": {"room": "Kitchen", "sensor": "K001"},
          "fields": {"temp": 72.2, "hum": 36.9, "co": 4},
          "time": 1641067200
          }

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")
client.write(record=points, write_precision="s")

Replace the following:

  • BUCKET_NAME: the name of your InfluxDB Cloud Serverless database
  • API_TOKEN: an InfluxDB Cloud Serverless API token with write permissions on the specified database
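
The record parameter also accepts a pandas DataFrame. The following is a minimal sketch; it assumes the underlying write API accepts the data_frame_measurement_name and data_frame_tag_columns keyword arguments through **kwargs (these names come from the wrapped write API, so verify them against the client version you use):

import pandas as pd
from influxdb_client_3 import InfluxDBClient3

# Two rows for the home measurement, indexed by timestamp.
df = pd.DataFrame({"room": ["Kitchen", "Living Room"],
                   "temp": [22.1, 21.4],
                   "hum": [36.5, 35.9]},
                  index=pd.to_datetime(["2022-01-01T20:00:00Z",
                                        "2022-01-01T20:00:00Z"]))

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")

# Assumed keyword arguments: data_frame_measurement_name, data_frame_tag_columns.
client.write(record=df,
             data_frame_measurement_name="home",
             data_frame_tag_columns=["room"])

Replace BUCKET_NAME and API_TOKEN as in the previous examples.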

InfluxDBClient3.write_file

Writes data from a file to InfluxDB. Execution is synchronous.

Parameters

  • file (string): A path to a file containing records to write to InfluxDB. The filename must end with a supported extension (for example, .csv or .json). For more information about encoding and formatting data, see the documentation for each supported format.

  • measurement_name (string): Defines the measurement name for records in the file. The specified value takes precedence over measurement and iox::measurement columns in the file. If the parameter isn't specified and a measurement column exists in the file, the measurement column value is used for the measurement name. If the parameter isn't specified and no measurement column exists, the iox::measurement column value is used (see the sketch after this parameter list).

  • tag_columns (list): Tag column names. Columns not included in the list and not specified by another parameter are assumed to be fields.

  • timestamp_column (string): The name of the column that contains timestamps. Default is 'time'.

  • database (string): The bucket to write to. Default is to write to the bucket specified for the client.

  • file_parser_options (callable): A function for providing additional arguments to the file parser.

  • **kwargs: Additional options to pass to the WriteAPI. For example:

    • write_precision (string): Optional. Default is "ns". Specifies the precision ("ms", "s", "us", "ns") for timestamps in record.
    • write_client_options (dict): Optional. Specifies callback functions and options for batch writing mode. To generate the dict, use the write_client_options function.
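
As referenced in the measurement_name description, the following is a minimal sketch of overriding the measurement name for all records in a file (the file path is reused from the examples on this page):

from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")

# Use "home" as the measurement name for every record in the file,
# regardless of any measurement or iox::measurement column.
client.write_file(file='./data/home-sensor-data.csv',
                  measurement_name='home',
                  timestamp_column='time',
                  tag_columns=["room"],
                  write_precision='s')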

Example: use batch options when writing file data

The following example shows how to specify customized write options for batching, retries, and response callbacks, and how to write data from CSV and JSON files to InfluxDB:

from influxdb_client_3 import (InfluxDBClient3, write_client_options,
                              WritePrecision, WriteOptions, InfluxDBError)

# Define the result object
result = {
    'config': None,
    'status': None,
    'data': None,
    'error': None
}

# Define callbacks for write responses
def success_callback(self, data: str):
    result['config'] = self
    result['status'] = 'success'
    result['data'] = data

    assert result['data'] is not None, f"Expected {result['data']}"
    print(f"Successfully wrote data: {result['data']}")

def error_callback(self, data: str, exception: InfluxDBError):
    result['config'] = self
    result['status'] = 'error'
    result['data'] = data
    result['error'] = exception

    assert result['status'] == "success", f"Expected {result['error']} to be success for {result['config']}"

def retry_callback(self, data: str, exception: InfluxDBError):
    result['config'] = self
    result['status'] = 'retry_error'
    result['data'] = data
    result['error'] = exception

    assert result['status'] == "success", f"Expected {result['status']} to be success for {result['config']}"

write_options = WriteOptions(batch_size=500,
                            flush_interval=10_000,
                            jitter_interval=2_000,
                            retry_interval=5_000,
                            max_retries=5,
                            max_retry_delay=30_000,
                            exponential_base=2)


wco = write_client_options(success_callback=success_callback,
                          error_callback=error_callback,
                          retry_callback=retry_callback,
                          write_options=write_options)

with InfluxDBClient3(host=f"cloud2.influxdata.com",
                     database=f"BUCKET_NAME",
                     token=f"API_TOKEN",
                     write_client_options=wco) as client:

    client.write_file(file='./data/home-sensor-data.csv',
                      timestamp_column='time',
                      tag_columns=["room"],
                      write_precision='s')

    client.write_file(file='./data/home-sensor-data.json',
                      timestamp_column='time',
                      tag_columns=["room"],
                      write_precision='s')

Replace the following:

  • BUCKET_NAME: the name of your InfluxDB Cloud Serverless database
  • API_TOKEN: an InfluxDB Cloud Serverless API token with write permissions on the specified database

InfluxDBClient3.query

Sends a Flight request to execute the specified SQL or InfluxQL query. Returns all data in the query result as an Arrow table (pyarrow.Table instance).

Parameters

  • query (string): The SQL or InfluxQL query to execute.
  • language (string): The query language used in the query parameter: "sql" or "influxql". Default is "sql".
  • mode (string): Specifies the output to return from the pyarrow.flight.FlightStreamReader. Default is "all". A sketch of the chunk and reader modes follows this list.
    • all: Read the entire contents of the stream and return it as a pyarrow.Table.
    • chunk: Read the next message (a FlightStreamChunk) and return data and app_metadata. Returns None if there are no more messages.
    • pandas: Read the contents of the stream and return it as a pandas.DataFrame.
    • reader: Convert the FlightStreamReader into a pyarrow.RecordBatchReader.
    • schema: Return the schema for all record batches in the stream.
  • **kwargs: Additional options passed as FlightCallOptions (for example, timeout).
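
The examples below cover the all, pandas, and schema modes. The following is a minimal sketch of the reader and chunk modes, assuming the same home sample data:

from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")

query = "SELECT * from home WHERE time >= now() - INTERVAL '90 days'"

# mode="reader" returns a pyarrow.RecordBatchReader for incremental processing.
reader = client.query(query=query, mode="reader")
for batch in reader:
    print(batch.num_rows)

# mode="chunk" returns the first FlightStreamChunk of a new request;
# its data and app_metadata attributes hold the record batch and any metadata.
chunk = client.query(query=query, mode="chunk")
print(chunk.data)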

Example: query using SQL

from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")

table = client.query("SELECT * from home WHERE time >= now() - INTERVAL '90 days'")

# Filter columns.
print(table.select(['room', 'temp']))

# Use PyArrow to aggregate data.
print(table.group_by('hum').aggregate([]))

In the examples, replace the following:

  • BUCKET_NAME: the name of your InfluxDB Cloud Serverless database
  • API_TOKEN: an InfluxDB Cloud Serverless API token with read permission on the specified database

Example: query using InfluxQL

from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")

query = "SELECT * from home WHERE time >= -90d"
table = client.query(query=query, language="influxql")

# Filter columns.
print(table.select(['room', 'temp']))

Example: read all data from the stream and return a pandas DataFrame

from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")

query = "SELECT * from home WHERE time >= now() - INTERVAL '90 days'"
dataframe = client.query(query=query, mode="pandas")

# Print the pandas DataFrame formatted as a Markdown table.
print(dataframe.to_markdown())

Example: view the schema for all batches in the stream

from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")

table = client.query("""SELECT * from home WHERE time >= now() - INTERVAL '90 days'""")

# View the table schema.
print(table.schema)

Example: retrieve the result schema and no data

from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")

query = "SELECT * from home WHERE time >= now() - INTERVAL '90 days'"
schema = client.query(query=query, mode="schema")
print(schema)

Specify a timeout

Pass timeout=<number of seconds> for FlightCallOptions to use a custom timeout.

from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")

query = "SELECT * from home WHERE time >= now() - INTERVAL '90 days'"
client.query(query=query, timeout=5)

InfluxDBClient3.close

Sends all remaining records from the batch to InfluxDB, and then closes the underlying write client and Flight client to release resources.

Example: close a client

from influxdb_client_3 import InfluxDBClient3

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")
client.close()

Class Point

Provides an interface for constructing a time series data point for a measurement, and setting fields, tags, and timestamp.

from influxdb_client_3 import Point
point = Point("home").tag("room", "Living Room").field("temp", 72)

See how to write data using points.

Class WriteOptions

Provides an interface for constructing options that customize batch writing behavior, such as batch size and retry.

from influxdb_client_3 import WriteOptions

write_options = WriteOptions(batch_size=500,
                             flush_interval=10_000,
                             jitter_interval=2_000,
                             retry_interval=5_000,
                             max_retries=5,
                             max_retry_delay=30_000,
                             exponential_base=2)

See how to use batch options for writing data.

Parameters

  • batch_size: Number of records to collect before writing a batch. Default is 1000.
  • flush_interval: Milliseconds to wait before writing a partially filled batch. Default is 1000.
  • jitter_interval: Milliseconds of random jitter added to the flush interval. Default is 0.
  • retry_interval: Milliseconds to wait before the first retry of a failed write. Default is 5000.
  • max_retries: Maximum number of retries for a failed write. Default is 5.
  • max_retry_delay: Maximum milliseconds to wait between retries. Default is 125000.
  • max_retry_time: Maximum total milliseconds to spend retrying a write. Default is 180000.
  • exponential_base: Base for the exponential backoff between retries. Default is 2.
  • max_close_wait: Maximum milliseconds to wait for batches to flush when closing the client. Default is 300000.
  • write_scheduler: Default is ThreadPoolScheduler(max_workers=1).

Functions

Function write_client_options(**kwargs)

Returns a dict with the specified write client options.

Parameters

The function takes the following keyword arguments:

  • write_options (WriteOptions): Specifies whether the client writes data using synchronous mode or batching mode. If using batching mode, the client uses the specified batching options.
  • point_settings (dict): Default tags that the client will add to each point when writing the data to InfluxDB.
  • success_callback (callable): If using batching mode, a function to call after data is written successfully to InfluxDB (HTTP status 204).
  • error_callback (callable): If using batching mode, a function to call if data isn't written successfully (the response has a non-204 HTTP status).
  • retry_callback (callable): If using batching mode, a function to call if the write request is retried and data still isn't written successfully.

Example: instantiate options for batch writing

from influxdb_client_3 import write_client_options, WriteOptions
from influxdb_client_3.write_client.client.write_api import WriteType

def success():
  print("Success")
def error():
  print("Error")
def retry():
  print("Retry error")

write_options = WriteOptions()
wco = write_client_options(success_callback=success,
                            error_callback=error,
                            retry_callback=retry,
                            write_options=write_options)

assert wco['success_callback']
assert wco['error_callback']
assert wco['retry_callback']
assert wco['write_options'].write_type == WriteType.batching

Example: instantiate options for synchronous writing

from influxdb_client_3 import write_client_options, SYNCHRONOUS
from influxdb_client_3.write_client.client.write_api import WriteType

wco = write_client_options(write_options=SYNCHRONOUS)

assert wco['write_options'].write_type == WriteType.synchronous
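
The point_settings parameter isn't shown in the preceding examples. The following is a minimal sketch, assuming it accepts a dict of default tag key-value pairs as described in the parameter list (the tag names and values are illustrative):

from influxdb_client_3 import write_client_options, WriteOptions

# Tags the client adds to every written point.
default_tags = {"location": "home-office", "source": "sensor-gateway"}

wco = write_client_options(write_options=WriteOptions(),
                           point_settings=default_tags)

assert wco['point_settings'] == default_tags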

Function flight_client_options(**kwargs)

Returns a dict with the specified FlightClient parameters.

Parameters

  • **kwargs: Keyword arguments used as FlightClient parameters (for example, tls_root_certs).

Example: specify the root certificate path

from influxdb_client_3 import InfluxDBClient3, flight_client_options
import certifi

fh = open(certifi.where(), "r")
cert = fh.read()
fh.close()

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN",
                         flight_client_options=flight_client_options(tls_root_certs=cert))

Replace the following:

  • BUCKET_NAME: the name of your InfluxDB Cloud Serverless database
  • API_TOKEN: an InfluxDB Cloud Serverless API token with read permission on the specified database

Constants

  • influxdb_client_3.SYNCHRONOUS: Represents synchronous write mode
  • influxdb_client_3.WritePrecision: Enum class that represents write precision

Exceptions

  • influxdb_client_3.InfluxDBError: Exception class raised for InfluxDB-related errors
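
The following is a minimal sketch of catching errors from a synchronous write. It assumes the client raises InfluxDBError for failed writes; verify the exact exception type against your client version. In batching mode, failures are reported through the error and retry callbacks instead.

from influxdb_client_3 import InfluxDBClient3, InfluxDBError

client = InfluxDBClient3(host=f"cloud2.influxdata.com",
                         database=f"BUCKET_NAME",
                         token=f"API_TOKEN")

try:
    client.write(record="home,room=Kitchen temp=21.5 1641067200",
                 write_precision="s")
except InfluxDBError as err:
    # The exception message describes why the write failed.
    print(f"Write failed: {err}")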

View the InfluxDB v3 Python client library

