
Python Flight SQL DBAPI client

The flightsql-dbapi library provides a Flight SQL DBAPI interface for Python applications, letting you use SQL to query data stored in an InfluxDB Cloud Dedicated database. The library uses the Flight SQL protocol to query and retrieve data.

Use InfluxDB v3 client libraries

We recommend using the influxdb3-python Python client library for integrating InfluxDB v3 with your Python application code.

InfluxDB v3 client libraries wrap Apache Arrow Flight clients and provide convenient methods for writing, querying, and processing data stored in InfluxDB Cloud Dedicated. Client libraries can query using SQL or InfluxQL.
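
For example, a minimal sketch of querying with the influxdb3-python client might look like the following. The InfluxDBClient3 constructor arguments and query() parameters shown here are assumptions based on that client library; check its documentation for the exact API.

from influxdb_client_3 import InfluxDBClient3

# Instantiate the v3 client (host, token, and database are placeholders)
client = InfluxDBClient3(host='cluster-id.influxdb.io',
                         token='DATABASE_TOKEN',
                         database='DATABASE_NAME')

# Query with SQL; results are returned as Arrow data
table = client.query("SELECT * FROM home", language="sql")
print(table)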

Installation

The flightsql-dbapi Flight SQL library for Python provides a DB API 2 interface and SQLAlchemy dialect for Flight SQL. Installing flightsql-dbapi also installs the pyarrow library that you’ll use for working with Arrow data.

In your terminal, use pip to install flightsql-dbapi:

pip install flightsql-dbapi
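
To verify the installation (and that pyarrow was installed alongside it), you can try importing both modules. This is only a quick sanity check:

python -c "import flightsql, pyarrow; print(pyarrow.__version__)"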

Importing the module

The flightsql-dbapi package provides the flightsql module. From the module, import the FlightSQLClient class:

from flightsql import FlightSQLClient
  • flightsql.FlightSQLClient class: an interface for initializing a client and interacting with a Flight SQL server.

API reference

Class FlightSQLClient

Provides an interface for initializing a client and interacting with a Flight SQL server.

Syntax

__init__(self, host=None, token=None, metadata=None, features=None)

Initializes and returns a FlightSQLClient instance for interacting with the server.

Initialize a client

The following example shows how to use Python with flightsql-dbapi to instantiate a Flight SQL client configured for an InfluxDB database.

from flightsql import FlightSQLClient

# Instantiate a FlightSQLClient configured for a database
client = FlightSQLClient(host='cluster-id.influxdb.io',
                         token='DATABASE_TOKEN',
                         metadata={'database': 'DATABASE_NAME'},
                         features={'metadata-reflection': 'true'})

Replace the following:

  • DATABASE_TOKEN: an InfluxDB Cloud Dedicated database token with read permissions on the databases you want to query
  • DATABASE_NAME: the name of your InfluxDB Cloud Dedicated database
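
You can also use the DB API 2 interface mentioned in the installation section. The following is a rough sketch that assumes the connect() helper exported by the flightsql module and follows the standard PEP 249 connection and cursor pattern; confirm the exact names in the flightsql-dbapi documentation.

from flightsql import FlightSQLClient, connect

# Instantiate a FlightSQLClient configured for a database
client = FlightSQLClient(host='cluster-id.influxdb.io',
                         token='DATABASE_TOKEN',
                         metadata={'database': 'DATABASE_NAME'},
                         features={'metadata-reflection': 'true'})

# Wrap the client in a DB API 2 connection and query with a cursor
conn = connect(client)
cursor = conn.cursor()
cursor.execute("SELECT * FROM home")
print(cursor.description)  # column metadata
print(cursor.fetchall())   # query results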

Instance methods

FlightSQLClient.execute

Sends a Flight SQL RPC request to execute the specified SQL query.

Syntax

execute(query: str, call_options: Optional[FlightSQLCallOptions] = None)

Example

# Execute the query
info = client.execute("SELECT * FROM home")

The response is a pyarrow.flight.FlightInfo object that contains metadata and an endpoints list. Each endpoint contains the following:

  • A list of addresses where you can retrieve query result data.
  • A ticket value that identifies the data to retrieve.
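
For example, you can inspect the endpoints before retrieving data (a brief illustrative sketch using the info object from the execute example above):

# Inspect each endpoint in the FlightInfo response
for endpoint in info.endpoints:
    print(endpoint.ticket)     # identifies the data to retrieve
    print(endpoint.locations)  # addresses where the data can be retrieved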

FlightSQLClient.do_get

Passes a Flight ticket (obtained from a FlightSQLClient.execute response) and retrieves Arrow data identified by the ticket. Returns a pyarrow.flight.FlightStreamReader for streaming the data.

Syntax

do_get(ticket, call_options: Optional[FlightSQLCallOptions] = None)

Example

The following sample shows how to use Python with flightsql-dbapi and pyarrow to query InfluxDB and retrieve data.

from flightsql import FlightSQLClient

# Instantiate a FlightSQLClient configured for a database
client = FlightSQLClient(host='cluster-id.influxdb.io',
    token='DATABASE_TOKEN',
    metadata={'database': 'DATABASE_NAME'},
    features={'metadata-reflection': 'true'})

# Execute the query to retrieve FlightInfo
info = client.execute("SELECT * FROM home")

# Extract the token for retrieving data
ticket = info.endpoints[0].ticket

# Use the ticket to request the Arrow data stream.
# Return a FlightStreamReader for streaming the results.
reader = client.do_get(ticket)

# Read all data to a pyarrow.Table
table = reader.read_all()

print(table)

do_get(ticket) returns a pyarrow.flight.FlightStreamReader for streaming Arrow record batches.

To read data from the stream, call one of the following FlightStreamReader methods:

  • read_all(): Read all record batches as a pyarrow.Table.
  • read_chunk(): Read the next RecordBatch and metadata.
  • read_pandas(): Read all record batches and convert them to a pandas.DataFrame.
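
For example, instead of calling read_all() as in the example above, you might stream the results directly into a pandas DataFrame. This is a short sketch; read_pandas() requires pandas to be installed:

# Convert the streamed record batches to a pandas DataFrame
dataframe = reader.read_pandas()
print(dataframe.head())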


