openapi: 3.0.3
info:
title: InfluxDB 3 Core API Service
description: >
The InfluxDB HTTP API for InfluxDB 3 Core provides a programmatic interface
for
interacting with InfluxDB 3 Core databases and resources.
Use this API to:
- Write data to InfluxDB 3 Core databases
- Query data using SQL or InfluxQL
- Process data using Processing engine plugins
- Manage databases, tables, and Processing engine triggers
- Perform administrative tasks and access system information
The API includes endpoints under the following paths:
- `/api/v3`: InfluxDB 3 Core native endpoints
- `/`: Compatibility endpoints for InfluxDB v1 workloads and clients
- `/api/v2/write`: Compatibility endpoint for InfluxDB v2 workloads and
clients
[Download the OpenAPI
specification](https://docs.influxdata.com/openapi/influxdb3-core-openapi.yaml)
version: v3.8.0
license:
name: MIT
url: https://opensource.org/licenses/MIT
contact:
name: InfluxData
url: https://www.influxdata.com
email: support@influxdata.com
x-source-hash: sha256:1259b96096eab6c8dbf3f76c974924f124e9b3e08eedc6b0c9a66d3108857c52
x-influxdata-short-title: InfluxDB 3 API
x-influxdata-version-matrix:
v1: Compatibility layer for InfluxDB 1.x clients (supported)
v2: Compatibility layer for InfluxDB 2.x clients (supported)
v3: Native API for InfluxDB 3.x (current)
x-influxdata-short-description: >-
The InfluxDB 3 HTTP API provides a programmatic interface for interactions
with InfluxDB, including writing, querying, and processing data, and
managing an InfluxDB 3 instance.
servers:
- url: https://{baseurl}
description: InfluxDB 3 Core API URL
variables:
baseurl:
enum:
- localhost:8181
default: localhost:8181
description: InfluxDB 3 Core URL
security:
- BearerAuthentication: []
- TokenAuthentication: []
- BasicAuthentication: []
- QuerystringAuthentication: []
tags:
- name: Authentication
description: >-
Use one of the following schemes to authenticate to the InfluxDB 3 Core
API:
- [Token authentication](#section/Authentication/TokenAuthentication)
- [Bearer authentication](#section/Authentication/BearerAuthentication)
- [Basic authentication](#section/Authentication/BasicAuthentication)
- [Querystring
authentication](#section/Authentication/QuerystringAuthentication)
x-traitTag: true
x-related:
- title: Authenticate v1 API requests
href: >-
https://docs.influxdata.com/influxdb3/core/guides/api-compatibility/v1/
- title: Manage tokens
href: https://docs.influxdata.com/influxdb3/core/admin/tokens/
- name: Database
description: Create, list, and delete databases in InfluxDB 3 Core.
x-related:
- title: Manage databases
href: https://docs.influxdata.com/influxdb3/core/admin/databases/
- name: Headers and parameters
description: >-
Most InfluxDB API endpoints require parameters in the request--for
example, specifying the database to use.
### Common parameters
The following table shows common parameters used by many InfluxDB API
endpoints.
Many endpoints may require other parameters in the query string or in the
request body that perform functions specific to those endpoints.
| Query parameter | Value type | Description |
|:----------------|:-----------|:------------------|
| `db` | string | The database name |
### Common headers
InfluxDB HTTP API endpoints use standard HTTP request and response
headers.
The following table shows common headers used by many InfluxDB API
endpoints.
Some endpoints may use other headers that perform functions more specific
to those endpoints--for example,
the write endpoints accept the `Content-Encoding` header to indicate that
line protocol is compressed in the request body.
| Header | Value type | Description |
|:-------|:-----------|:------------|
| `Accept` | string | The content type that the client can understand. |
| `Authorization` | string | The authorization scheme and credential. |
| `Content-Length` | integer | The size of the entity-body, in bytes. |
| `Content-Type` | string | The format of the data in the request body. |
x-traitTag: true
- name: Processing engine
description: >-
Manage Processing engine triggers, test plugins, and send requests to
trigger On Request plugins.
InfluxDB 3 Core provides the InfluxDB 3 processing engine, an embedded
Python VM that can dynamically load and trigger Python plugins in response
to events in your database.
Use Processing engine plugins and triggers to run code and perform tasks
for different database events.
x-related:
- title: Processing engine and Python plugins
href: https://docs.influxdata.com/influxdb3/core/processing-engine/
- name: Query data
description: Query data stored in InfluxDB 3 Core using SQL or InfluxQL.
x-related:
- title: Query data
href: https://docs.influxdata.com/influxdb3/core/query-data/
- name: Quick start
description: >-
Authenticate, write, and query with the API:
1. Create an admin token to authorize API requests.
```bash
curl -X POST "http://localhost:8181/api/v3/configure/token/admin"
```
2. Check the status of the InfluxDB server.
```bash
curl "http://localhost:8181/health" \
--header "Authorization: Bearer ADMIN_TOKEN"
```
3. Write data to InfluxDB.
```bash
curl "http://localhost:8181/api/v3/write_lp?db=sensors&precision=auto" \
--header "Authorization: Bearer ADMIN_TOKEN" \
--data-raw "home,room=Kitchen temp=72.0
home,room=Living\ room temp=71.5"
```
If all data is written, the response is `204 No Content`.
4. Query data from InfluxDB.
```bash
curl -G "http://localhost:8181/api/v3/query_sql" \
--header "Authorization: Bearer ADMIN_TOKEN" \
--data-urlencode "db=sensors" \
--data-urlencode "q=SELECT * FROM home WHERE room='Living room'" \
--data-urlencode "format=jsonl"
```
Output:
```jsonl
{"room":"Living room","temp":71.5,"time":"2025-02-25T20:19:34.984098"}
```
For more information about using InfluxDB 3 Core, see the [Get
started](https://docs.influxdata.com/influxdb3/core/get-started/) guide.
x-traitTag: true
- name: Server information
description: >-
Retrieve server metrics, health status, and version information for
InfluxDB 3 Core.
- name: Table
description: Manage table schemas in an InfluxDB 3 Core database.
x-related:
- title: Manage tables
href: https://docs.influxdata.com/influxdb3/core/admin/tables/
- name: Auth token
description: >-
Create and manage tokens used for authenticating and authorizing access to
InfluxDB 3 Core resources.
x-related:
- title: Manage tokens
href: https://docs.influxdata.com/influxdb3/core/admin/tokens/
- name: Write data
description: >-
Write data to InfluxDB 3 Core using line protocol format.
#### Timestamp precision across write APIs
InfluxDB 3 provides multiple write endpoints for compatibility with
different InfluxDB versions.
The following table compares timestamp precision support across v1, v2,
and v3 write APIs:
| Precision | v1 (`/write`) | v2 (`/api/v2/write`) | v3 (`/api/v3/write_lp`) |
|-----------|---------------|----------------------|-------------------------|
| **Auto detection** | ❌ No | ❌ No | ✅ `auto` (default) |
| **Seconds** | ✅ `s` | ✅ `s` | ✅ `second` |
| **Milliseconds** | ✅ `ms` | ✅ `ms` | ✅ `millisecond` |
| **Microseconds** | ✅ `u` or `µ` | ✅ `us` | ✅ `microsecond` |
| **Nanoseconds** | ✅ `ns` | ✅ `ns` | ✅ `nanosecond` |
| **Minutes** | ✅ `m` | ❌ No | ❌ No |
| **Hours** | ✅ `h` | ❌ No | ❌ No |
| **Default** | Nanosecond | Nanosecond | **Auto** (detected) |
All timestamps are stored internally as nanoseconds.
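As a sketch of v3 precision handling (assuming a local instance at `localhost:8181`, a token in `AUTH_TOKEN`, and an illustrative `sensors` database):

```shell
# precision=auto (the v3 default) lets the server detect the timestamp unit;
# pass an explicit unit such as precision=second to disable detection.
curl "http://localhost:8181/api/v3/write_lp?db=sensors&precision=auto" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --data-raw "home,room=Kitchen temp=72.0"
```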
x-related:
- title: Write data using HTTP APIs
href: https://docs.influxdata.com/influxdb3/core/write-data/http-api/
- title: Line protocol reference
href: >-
https://docs.influxdata.com/influxdb3/core/reference/syntax/line-protocol/
- name: Cache distinct values
description: >-
The Distinct Value Cache (DVC) lets you cache distinct
values of one or more columns in a table, improving the performance of
queries that return distinct tag and field values.
The DVC is an in-memory cache that stores distinct values for specific
columns
in a table. When you create a DVC, you can specify what columns' distinct
values to cache, the maximum number of distinct value combinations to
cache, and
the maximum age of cached values. A DVC is associated with a table, which
can
have multiple DVCs.
x-related:
- title: Manage the Distinct Value Cache
href: https://docs.influxdata.com/influxdb3/core/admin/distinct-value-cache/
- name: Cache last value
description: >-
The Last Value Cache (LVC) lets you cache the most recent
values for specific fields in a table, improving the performance of
queries that
return the most recent value of a field for specific series or the last N
values
of a field.
The LVC is an in-memory cache that stores the last N number of values for
specific fields of series in a table. When you create an LVC, you can
specify
what fields to cache, what tags to use to identify each series, and the
number of values to cache for each unique series.
An LVC is associated with a table, which can have multiple LVCs.
x-related:
- title: Manage the Last Value Cache
href: https://docs.influxdata.com/influxdb3/core/admin/last-value-cache/
- name: Cache data
- name: Migrate from InfluxDB v1 or v2
description: >-
Migrate your existing InfluxDB v1 or v2 workloads to InfluxDB 3 Core.
InfluxDB 3 provides compatibility endpoints that work with InfluxDB 1.x
and 2.x client libraries and tools.
Operations marked with v1 or v2 badges are compatible with the respective
InfluxDB version.
### Migration guides
- [Migrate from InfluxDB
v1](https://docs.influxdata.com/influxdb3/core/guides/migrate/influxdb-1x/)
- For users migrating from InfluxDB 1.x
- [Migrate from InfluxDB
v2](https://docs.influxdata.com/influxdb3/core/guides/migrate/influxdb-2x/)
- For users migrating from InfluxDB 2.x or Cloud
- [Use compatibility APIs to write
data](https://docs.influxdata.com/influxdb3/core/write-data/http-api/compatibility-apis/)
- v1 and v2 write endpoints
- [Use the v1 HTTP query
API](https://docs.influxdata.com/influxdb3/core/query-data/execute-queries/influxdb-v1-api/)
- InfluxQL queries via HTTP
x-traitTag: true
paths:
/api/v1/health:
get:
operationId: GetHealthV1
summary: Health check (v1)
description: >
Checks the status of the service.
Returns `OK` if the service is running. This endpoint does not return
version information.
Use the [`/ping`](#operation/GetPing) endpoint to retrieve version
details.
> **Note**: This endpoint requires authentication by default in InfluxDB
3 Core.
responses:
'200':
description: Service is running. Returns `OK`.
content:
text/plain:
schema:
type: string
example: OK
'401':
description: Unauthorized. Authentication is required.
'500':
description: Service is unavailable.
tags:
- Server information
/api/v2/write:
post:
operationId: PostV2Write
x-compatibility-version: v2
responses:
'204':
description: >-
Success ("No Content"). All data in the batch is written and
queryable.
headers:
cluster-uuid:
$ref: '#/components/headers/ClusterUUID'
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Access denied.
'413':
description: Request entity too large.
summary: Write line protocol (v2-compatible)
description: >
Writes line protocol to the specified database.
This endpoint provides backward compatibility for InfluxDB 2.x write
workloads using tools such as InfluxDB 2.x client libraries, the
Telegraf `outputs.influxdb_v2` output plugin, or third-party tools.
Use this endpoint to send data in [line
protocol](https://docs.influxdata.com/influxdb3/core/reference/syntax/line-protocol/)
format to InfluxDB.
Use query parameters to specify options for writing data.
#### Related
- [Use compatibility APIs to write
data](https://docs.influxdata.com/influxdb3/core/write-data/http-api/compatibility-apis/)
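For example, a v2-compatible write using the `bucket` parameter and a v2-style precision unit might look like the following (a sketch assuming a local instance at `localhost:8181` and a token in `AUTH_TOKEN`; the `sensors` database name is illustrative):

```shell
# v2-compatible write; `bucket` names the target InfluxDB 3 database
curl "http://localhost:8181/api/v2/write?bucket=sensors&precision=s" \
  --header "Authorization: Token AUTH_TOKEN" \
  --data-raw "home,room=Kitchen temp=72.0"
```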
parameters:
- name: Content-Type
in: header
description: |
The content type of the request payload.
schema:
$ref: '#/components/schemas/LineProtocol'
required: false
- description: |
The compression applied to the line protocol in the request payload.
To send a gzip payload, pass `Content-Encoding: gzip` header.
in: header
name: Content-Encoding
schema:
default: identity
description: >
Content coding.
Use `gzip` for compressed data or `identity` for unmodified,
uncompressed data.
enum:
- gzip
- identity
type: string
- description: |
The size of the entity-body, in bytes, sent to InfluxDB.
in: header
name: Content-Length
schema:
description: The length in decimal number of octets.
type: integer
- description: >
The content type that the client can understand.
Writes only return a response body if they fail (partially or
completely)--for example,
due to a syntax problem or type mismatch.
in: header
name: Accept
schema:
default: application/json
description: Error content type.
enum:
- application/json
type: string
- name: bucket
in: query
required: true
schema:
type: string
description: >-
A database name.
InfluxDB creates the database if it doesn't already exist, and then
writes all points in the batch to the database.
This parameter is named `bucket` for compatibility with InfluxDB v2
client libraries.
- name: accept_partial
in: query
required: false
schema:
$ref: '#/components/schemas/AcceptPartial'
- $ref: '#/components/parameters/compatibilityPrecisionParam'
requestBody:
$ref: '#/components/requestBodies/lineProtocolRequestBody'
tags:
- Write data
/api/v3/configure/database:
delete:
operationId: DeleteConfigureDatabase
parameters:
- $ref: '#/components/parameters/db'
- name: data_only
in: query
required: false
schema:
type: boolean
default: false
description: >
Delete only data while preserving the database schema and all
associated resources
(tokens, triggers, last value caches, distinct value caches,
processing engine configurations).
When `false` (default), the entire database is deleted.
- name: remove_tables
in: query
required: false
schema:
type: boolean
default: false
description: >
Used with `data_only=true` to remove table resources (caches) while
preserving
database-level resources (tokens, triggers, processing engine
configurations).
Has no effect when `data_only=false`.
- name: hard_delete_at
in: query
required: false
schema:
type: string
format: date-time
description: >-
Schedule the database for hard deletion at the specified time.
If not provided, the database will be soft deleted.
Use ISO 8601 date-time format (for example, "2025-12-31T23:59:59Z").
Also accepts special string values:
- `now` — hard delete immediately
- `never` — soft delete only (default behavior)
- `default` — use the system default hard deletion time
#### Deleting a database cannot be undone
Deleting a database is a destructive action.
Once a database is deleted, data stored in that database cannot be
recovered.
responses:
'200':
description: Success. Database deleted.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Database not found.
summary: Delete a database
description: >
Soft deletes a database.
The database is scheduled for deletion and unavailable for querying.
Use the `hard_delete_at` parameter to schedule a hard deletion.
Use the `data_only` parameter to delete data while preserving the
database schema and resources.
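A minimal sketch of a default soft delete (assuming a local instance at `localhost:8181`, a token in `AUTH_TOKEN`, and an illustrative `sensors` database):

```shell
# Soft delete the database (default); add hard_delete_at to schedule hard deletion
curl -X DELETE "http://localhost:8181/api/v3/configure/database?db=sensors" \
  --header "Authorization: Bearer AUTH_TOKEN"
```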
tags:
- Database
get:
operationId: GetConfigureDatabase
responses:
'200':
description: Success. The response body contains the list of databases.
content:
application/json:
schema:
$ref: '#/components/schemas/ShowDatabasesResponse'
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Database not found.
summary: List databases
description: Retrieves a list of databases.
parameters:
- $ref: '#/components/parameters/formatRequired'
- name: show_deleted
in: query
required: false
schema:
type: boolean
default: false
description: |
Include soft-deleted databases in the response.
By default, only active databases are returned.
tags:
- Database
post:
operationId: PostConfigureDatabase
responses:
'200':
description: Success. Database created.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'409':
description: Database already exists.
summary: Create a database
description: Creates a new database in the system.
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/CreateDatabaseRequest'
tags:
- Database
put:
operationId: PutConfigureDatabase
responses:
'200':
description: Success. The database has been updated.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Database not found.
summary: Update a database
description: |
Updates database configuration, such as retention period.
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/UpdateDatabaseRequest'
tags:
- Database
/api/v3/configure/database/retention_period:
delete:
operationId: DeleteDatabaseRetentionPeriod
summary: Remove database retention period
description: >
Removes the retention period from a database, setting it to infinite
retention.
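A sketch (assuming a local instance at `localhost:8181`, a token in `AUTH_TOKEN`, and an illustrative `sensors` database):

```shell
# Remove the retention period; the database then retains data indefinitely
curl -X DELETE "http://localhost:8181/api/v3/configure/database/retention_period?db=sensors" \
  --header "Authorization: Bearer AUTH_TOKEN"
```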
parameters:
- $ref: '#/components/parameters/db'
responses:
'204':
description: Success. The database retention period has been removed.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Database not found.
tags:
- Database
/api/v3/configure/distinct_cache:
delete:
operationId: DeleteConfigureDistinctCache
responses:
'200':
description: Success. The distinct cache has been deleted.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Cache not found.
summary: Delete distinct cache
description: Deletes a distinct cache.
parameters:
- $ref: '#/components/parameters/db'
- name: table
in: query
required: true
schema:
type: string
description: The name of the table containing the distinct cache.
- name: name
in: query
required: true
schema:
type: string
description: The name of the distinct cache to delete.
tags:
- Cache distinct values
- Table
post:
operationId: PostConfigureDistinctCache
responses:
'201':
description: Success. The distinct cache has been created.
'400':
description: >
Bad request.
The server responds with status `400` if the request would overwrite
an existing cache with a different configuration.
'409':
description: Conflict. A distinct cache with this configuration already exists.
summary: Create distinct cache
description: Creates a distinct cache for a table.
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/DistinctCacheCreateRequest'
tags:
- Cache distinct values
- Table
/api/v3/configure/last_cache:
delete:
operationId: DeleteConfigureLastCache
responses:
'200':
description: Success. The last cache has been deleted.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Cache not found.
summary: Delete last cache
description: Deletes a last cache.
parameters:
- $ref: '#/components/parameters/db'
- name: table
in: query
required: true
schema:
type: string
description: The name of the table containing the last cache.
- name: name
in: query
required: true
schema:
type: string
description: The name of the last cache to delete.
tags:
- Cache last value
- Table
post:
operationId: PostConfigureLastCache
responses:
'201':
description: Success. Last cache created.
'400':
description: >-
Bad request. A cache with this name already exists or the request is
malformed.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Cache not found.
summary: Create last cache
description: Creates a last cache for a table.
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/LastCacheCreateRequest'
tags:
- Cache last value
- Table
/api/v3/configure/plugin_environment/install_packages:
post:
operationId: PostInstallPluginPackages
summary: Install plugin packages
description: >-
Installs the specified Python packages into the processing engine plugin
environment.
This endpoint is synchronous and blocks until the packages are
installed.
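For example, to install packages by name, with an optional version specifier (a sketch assuming a local instance at `localhost:8181` and a token in `AUTH_TOKEN`):

```shell
curl -X POST "http://localhost:8181/api/v3/configure/plugin_environment/install_packages" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"packages": ["influxdb3-python", "pandas==1.5.0"]}'
```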
parameters:
- $ref: '#/components/parameters/ContentType'
requestBody:
required: true
content:
application/json:
schema:
type: object
properties:
packages:
type: array
items:
type: string
description: >
A list of Python package names to install.
Can include version specifiers (for example,
"scipy==1.9.0").
example:
- influxdb3-python
- scipy
- pandas==1.5.0
- requests
required:
- packages
example:
packages:
- influxdb3-python
- scipy
- pandas==1.5.0
- requests
responses:
'200':
description: Success. The packages are installed.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
tags:
- Processing engine
/api/v3/configure/plugin_environment/install_requirements:
post:
operationId: PostInstallPluginRequirements
summary: Install plugin requirements
description: >
Installs requirements from a requirements file (also known as a "pip
requirements file") into the processing engine plugin environment.
This endpoint is synchronous and blocks until the requirements are
installed.
### Related
- [Processing engine and Python
plugins](https://docs.influxdata.com/influxdb3/core/plugins/)
- [Python requirements file
format](https://pip.pypa.io/en/stable/reference/requirements-file-format/)
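For example, to install from a `requirements.txt` file in the plugin directory (a sketch assuming a local instance at `localhost:8181` and a token in `AUTH_TOKEN`):

```shell
curl -X POST "http://localhost:8181/api/v3/configure/plugin_environment/install_requirements" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"requirements_location": "requirements.txt"}'
```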
parameters:
- $ref: '#/components/parameters/ContentType'
requestBody:
required: true
content:
application/json:
schema:
type: object
properties:
requirements_location:
type: string
description: >
The path to the requirements file containing Python packages
to install.
Can be a relative path (relative to the plugin directory) or
an absolute path.
example: requirements.txt
required:
- requirements_location
example:
requirements_location: requirements.txt
responses:
'200':
description: Success. The requirements have been installed.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
tags:
- Processing engine
/api/v3/configure/processing_engine_trigger:
post:
operationId: PostConfigureProcessingEngineTrigger
summary: Create processing engine trigger
description: >-
Creates a processing engine trigger with the specified plugin file and
trigger specification.
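For example, to create an interval-based trigger (a sketch assuming a local instance at `localhost:8181` and a token in `AUTH_TOKEN`; the database, plugin, and trigger names are illustrative):

```shell
curl -X POST "http://localhost:8181/api/v3/configure/processing_engine_trigger" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "db": "mydb",
    "plugin_filename": "schedule.py",
    "trigger_name": "hourly_trigger",
    "trigger_specification": "every:1h"
  }'
```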
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/ProcessingEngineTriggerRequest'
examples:
schedule_cron:
summary: Schedule trigger using cron
description: >
In `"cron:CRON_EXPRESSION"`, `CRON_EXPRESSION` uses extended
6-field cron format.
The cron expression `0 0 6 * * 1-5` means the trigger will run
at 6:00 AM every weekday (Monday to Friday).
value:
db: DATABASE_NAME
plugin_filename: schedule.py
trigger_name: schedule_cron_trigger
trigger_specification: cron:0 0 6 * * 1-5
disabled: false
trigger_settings:
run_async: false
error_behavior: Log
schedule_every:
summary: Schedule trigger using interval
description: >
In `"every:DURATION"`, `DURATION` specifies the interval
between trigger executions.
The duration `1h` means the trigger will run every hour.
value:
db: mydb
plugin_filename: schedule.py
trigger_name: schedule_every_trigger
trigger_specification: every:1h
disabled: false
trigger_settings:
run_async: false
error_behavior: Log
schedule_every_seconds:
summary: Schedule trigger using seconds interval
description: |
Example of scheduling a trigger to run every 30 seconds.
value:
db: mydb
plugin_filename: schedule.py
trigger_name: schedule_every_30s_trigger
trigger_specification: every:30s
disabled: false
trigger_settings:
run_async: false
error_behavior: Log
schedule_every_minutes:
summary: Schedule trigger using minutes interval
description: |
Example of scheduling a trigger to run every 5 minutes.
value:
db: mydb
plugin_filename: schedule.py
trigger_name: schedule_every_5m_trigger
trigger_specification: every:5m
disabled: false
trigger_settings:
run_async: false
error_behavior: Log
all_tables:
summary: All tables trigger example
description: >
Trigger that fires on write events to any table in the
database.
value:
db: mydb
plugin_filename: all_tables.py
trigger_name: all_tables_trigger
trigger_specification: all_tables
disabled: false
trigger_settings:
run_async: false
error_behavior: Log
table_specific:
summary: Table-specific trigger example
description: |
Trigger that fires on write events to a specific table.
value:
db: mydb
plugin_filename: table.py
trigger_name: table_trigger
trigger_specification: table:sensors
disabled: false
trigger_settings:
run_async: false
error_behavior: Log
api_request:
summary: On-demand request trigger example
description: >
Creates an HTTP endpoint `/api/v3/engine/hello-world` for
manual invocation.
value:
db: mydb
plugin_filename: request.py
trigger_name: hello_world_trigger
trigger_specification: request:hello-world
disabled: false
trigger_settings:
run_async: false
error_behavior: Log
cron_friday_afternoon:
summary: Cron trigger for Friday afternoons
description: |
Example of a cron trigger that runs every Friday at 2:30 PM.
value:
db: reports
plugin_filename: weekly_report.py
trigger_name: friday_report_trigger
trigger_specification: cron:0 30 14 * * 5
disabled: false
trigger_settings:
run_async: false
error_behavior: Log
cron_monthly:
summary: Cron trigger for monthly execution
description: >
Example of a cron trigger that runs on the first day of every
month at midnight.
value:
db: monthly_data
plugin_filename: monthly_cleanup.py
trigger_name: monthly_cleanup_trigger
trigger_specification: cron:0 0 0 1 * *
disabled: false
trigger_settings:
run_async: false
error_behavior: Log
responses:
'200':
description: Success. Processing engine trigger created.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Trigger not found.
tags:
- Processing engine
delete:
operationId: DeleteConfigureProcessingEngineTrigger
summary: Delete processing engine trigger
description: Deletes a processing engine trigger.
parameters:
- $ref: '#/components/parameters/db'
- name: trigger_name
in: query
required: true
schema:
type: string
- name: force
in: query
required: false
schema:
type: boolean
default: false
description: |
Force deletion of the trigger even if it has active executions.
By default, deletion fails if the trigger is currently executing.
responses:
'200':
description: Success. The processing engine trigger has been deleted.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Trigger not found.
tags:
- Processing engine
/api/v3/configure/processing_engine_trigger/disable:
post:
operationId: PostDisableProcessingEngineTrigger
summary: Disable processing engine trigger
description: Disables a processing engine trigger.
parameters:
- name: db
in: query
required: true
schema:
type: string
description: The database name.
- name: trigger_name
in: query
required: true
schema:
type: string
description: The name of the trigger.
responses:
'200':
description: Success. The processing engine trigger has been disabled.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Trigger not found.
tags:
- Processing engine
/api/v3/configure/processing_engine_trigger/enable:
post:
operationId: PostEnableProcessingEngineTrigger
summary: Enable processing engine trigger
description: Enables a processing engine trigger.
parameters:
- name: db
in: query
required: true
schema:
type: string
description: The database name.
- name: trigger_name
in: query
required: true
schema:
type: string
description: The name of the trigger.
responses:
'200':
description: Success. The processing engine trigger is enabled.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Trigger not found.
tags:
- Processing engine
/api/v3/configure/table:
delete:
operationId: DeleteConfigureTable
parameters:
- $ref: '#/components/parameters/db'
- name: table
in: query
required: true
schema:
type: string
- name: data_only
in: query
required: false
schema:
type: boolean
default: false
description: >
Delete only data while preserving the table schema and all
associated resources
(last value caches, distinct value caches).
When `false` (default), the entire table is deleted.
- name: hard_delete_at
in: query
required: false
schema:
type: string
format: date-time
description: |-
Schedule the table for hard deletion at the specified time.
If not provided, the table will be soft deleted.
Use ISO 8601 format (for example, "2025-12-31T23:59:59Z").
Also accepts special string values:
- `now` — hard delete immediately
- `never` — soft delete only (default behavior)
- `default` — use the system default hard deletion time
responses:
'200':
description: Success (no content). The table has been deleted.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Table not found.
summary: Delete a table
description: >
Soft deletes a table.
The table is scheduled for deletion and unavailable for querying.
Use the `hard_delete_at` parameter to schedule a hard deletion.
Use the `data_only` parameter to delete data while preserving the table
schema and resources.
#### Deleting a table cannot be undone
Deleting a table is a destructive action.
Once a table is deleted, data stored in that table cannot be recovered.
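A sketch of a default soft delete (assuming a local instance at `localhost:8181`, a token in `AUTH_TOKEN`, and illustrative `sensors` database and `home` table names):

```shell
# Soft delete the table; add data_only=true to keep the schema and caches
curl -X DELETE "http://localhost:8181/api/v3/configure/table?db=sensors&table=home" \
  --header "Authorization: Bearer AUTH_TOKEN"
```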
tags:
- Table
post:
operationId: PostConfigureTable
responses:
'200':
description: Success. The table has been created.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Database not found.
summary: Create a table
description: Creates a new table within a database.
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/CreateTableRequest'
tags:
- Table
/api/v3/configure/token:
delete:
operationId: DeleteToken
parameters:
- name: token_name
in: query
required: true
schema:
type: string
description: The name of the token to delete.
responses:
'200':
description: Success. The token has been deleted.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Token not found.
summary: Delete token
description: |
Deletes a token.
tags:
- Authentication
- Auth token
/api/v3/configure/token/admin:
post:
operationId: PostCreateAdminToken
responses:
'201':
description: |
Success. The admin token has been created.
The response body contains the token string and metadata.
content:
application/json:
schema:
$ref: '#/components/schemas/AdminTokenObject'
'401':
$ref: '#/components/responses/Unauthorized'
summary: Create admin token
description: >
Creates an admin token.
An admin token is a special type of token that has full access to all
resources in the system.
tags:
- Authentication
- Auth token
/api/v3/configure/token/admin/regenerate:
post:
operationId: PostRegenerateAdminToken
summary: Regenerate admin token
description: >
Regenerates an admin token and revokes the previous token with the same
name.
parameters: []
responses:
'201':
description: Success. The admin token has been regenerated.
content:
application/json:
schema:
$ref: '#/components/schemas/AdminTokenObject'
'401':
$ref: '#/components/responses/Unauthorized'
tags:
- Authentication
- Auth token
/api/v3/configure/token/named_admin:
post:
operationId: PostCreateNamedAdminToken
responses:
'201':
description: |
Success. The named admin token has been created.
The response body contains the token string and metadata.
content:
application/json:
schema:
$ref: '#/components/schemas/AdminTokenObject'
'401':
$ref: '#/components/responses/Unauthorized'
'409':
description: A token with this name already exists.
summary: Create named admin token
description: >
Creates a named admin token.
A named admin token is a special type of admin token with a custom name
for identification and management.
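For example, to create a named admin token that expires after one day (a sketch assuming a local instance at `localhost:8181` and an existing admin token in `ADMIN_TOKEN`; the token name is illustrative):

```shell
curl -X POST "http://localhost:8181/api/v3/configure/token/named_admin" \
  --header "Authorization: Bearer ADMIN_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"token_name": "ops-admin", "expiry_secs": 86400}'
```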
tags:
- Authentication
- Auth token
requestBody:
required: true
content:
application/json:
schema:
type: object
properties:
token_name:
type: string
description: The name for the admin token.
expiry_secs:
type: integer
description: >-
Optional expiration time in seconds. If not provided, the
token does not expire.
nullable: true
required:
- token_name
/api/v3/engine/{request_path}:
get:
operationId: GetProcessingEnginePluginRequest
responses:
'200':
description: Success. The plugin request has been executed.
'400':
description: Malformed request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Plugin not found.
'500':
description: Processing failure.
summary: On Request processing engine plugin request
description: >
Executes the On Request processing engine plugin specified in the
trigger's `plugin_filename`.
The request can include request headers, query string parameters, and a
request body, which InfluxDB passes to the plugin.
An On Request plugin implements the following signature:
```python
def process_request(influxdb3_local, query_parameters, request_headers,
request_body, args=None)
```
The response depends on the plugin implementation.
tags:
- Processing engine
post:
operationId: PostProcessingEnginePluginRequest
responses:
'200':
description: Success. The plugin request has been executed.
'400':
description: Malformed request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Plugin not found.
'500':
description: Processing failure.
summary: On Request processing engine plugin request
description: >
Executes the On Request processing engine plugin specified in the
trigger's `plugin_filename`.
The request can include request headers, query string parameters, and a
request body, which InfluxDB passes to the plugin.
An On Request plugin implements the following signature:
```python
def process_request(influxdb3_local, query_parameters, request_headers,
request_body, args=None)
```
The response depends on the plugin implementation.
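For example, for a trigger configured with
`trigger_specification: "request:hello-world"` (`AUTH_TOKEN` and the JSON
body are placeholders):

```shell
curl --request POST "http://localhost:8181/api/v3/engine/hello-world" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"name": "world"}'
```

InfluxDB passes the JSON payload to the plugin's `request_body` argument.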
parameters:
- $ref: '#/components/parameters/ContentType'
requestBody:
required: false
content:
application/json:
schema:
type: object
additionalProperties: true
tags:
- Processing engine
parameters:
- name: request_path
description: >
The path configured in the request trigger specification for the
plugin.
For example, if you define a trigger with the following:
```json
trigger_specification: "request:hello-world"
```
then, the HTTP API exposes the following plugin endpoint:
```
/api/v3/engine/hello-world
```
in: path
required: true
schema:
type: string
/api/v3/plugin_test/schedule:
post:
operationId: PostTestSchedulingPlugin
responses:
'200':
description: Success. The plugin test has been executed.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Plugin not enabled.
summary: Test scheduling plugin
description: Executes a test of a scheduling plugin.
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/SchedulePluginTestRequest'
tags:
- Processing engine
/api/v3/plugin_test/wal:
post:
operationId: PostTestWALPlugin
responses:
'200':
description: Success. The plugin test has been executed.
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'404':
description: Plugin not enabled.
summary: Test WAL plugin
description: Executes a test of a write-ahead logging (WAL) plugin.
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/WALPluginTestRequest'
tags:
- Processing engine
/api/v3/plugins/directory:
put:
operationId: PutPluginDirectory
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/PluginDirectoryRequest'
responses:
'200':
description: Success. The plugin directory has been updated.
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Forbidden. Admin token required.
'500':
description: >-
Plugin not found. The `plugin_name` does not match any registered
trigger.
summary: Update a multi-file plugin directory
description: |
Replaces all files in a multi-file plugin directory. The
`plugin_name` must match a registered trigger name. Each entry in
the `files` array specifies a `relative_path` and `content`—the
server writes them into the trigger's plugin directory.
Use this endpoint to update multi-file plugins (directories with
`__init__.py` and supporting modules). For single-file plugins,
use `PUT /api/v3/plugins/files` instead.
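For example, assuming a registered trigger named `my_trigger` (a
placeholder) and an admin token:

```shell
curl --request PUT "http://localhost:8181/api/v3/plugins/directory" \
  --header "Authorization: Bearer ADMIN_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "plugin_name": "my_trigger",
    "files": [
      {"relative_path": "__init__.py", "content": "# plugin entry point\n"},
      {"relative_path": "helpers.py", "content": "# supporting module\n"}
    ]
  }'
```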
tags:
- Processing engine
x-security-note: Requires an admin token
/api/v3/plugins/files:
post:
operationId: PostCreatePluginFile
summary: Create a plugin file
description: |
Creates a single plugin file in the plugin directory. Writes the
`content` to a file named after `plugin_name`. Does not require an
existing trigger—use this to upload plugin files before creating
triggers that reference them.
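For example, to upload a hypothetical `my_plugin` file using an admin token:

```shell
curl --request POST "http://localhost:8181/api/v3/plugins/files" \
  --header "Authorization: Bearer ADMIN_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"plugin_name": "my_plugin", "content": "# plugin code\n"}'
```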
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/PluginFileRequest'
responses:
'200':
description: Success. The plugin file has been created.
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Forbidden. Admin token required.
tags:
- Processing engine
x-security-note: Requires an admin token
put:
operationId: PutPluginFile
summary: Update a plugin file
description: |
Updates a single plugin file for an existing trigger. The
`plugin_name` must match a registered trigger name—the server
resolves the trigger's `plugin_filename` and overwrites that file
with the provided `content`.
To upload a new plugin file before creating a trigger, use
`POST /api/v3/plugins/files` instead. To update a multi-file
plugin directory, use `PUT /api/v3/plugins/directory`.
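For example, to overwrite the plugin file for a hypothetical registered
trigger named `my_trigger`:

```shell
curl --request PUT "http://localhost:8181/api/v3/plugins/files" \
  --header "Authorization: Bearer ADMIN_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"plugin_name": "my_trigger", "content": "# updated plugin code\n"}'
```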
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/PluginFileRequest'
responses:
'200':
description: Success. The plugin file has been updated.
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Forbidden. Admin token required.
'500':
description: >-
Plugin not found. The `plugin_name` does not match any registered
trigger.
tags:
- Processing engine
x-security-note: Requires an admin token
/api/v3/query_influxql:
get:
operationId: GetExecuteInfluxQLQuery
responses:
'200':
description: Success. The response body contains query results.
content:
application/json:
schema:
$ref: '#/components/schemas/QueryResponse'
text/csv:
schema:
type: string
application/vnd.apache.parquet:
schema:
type: string
application/jsonl:
schema:
type: string
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Access denied.
'404':
description: Database not found.
'405':
description: Method not allowed.
'422':
description: Unprocessable entity.
summary: Execute InfluxQL query
description: Executes an InfluxQL query to retrieve data from the specified database.
parameters:
- $ref: '#/components/parameters/dbQueryParam'
- name: q
in: query
required: true
schema:
type: string
- name: format
in: query
required: false
schema:
type: string
- $ref: '#/components/parameters/AcceptQueryHeader'
- name: params
in: query
required: false
schema:
type: string
description: >-
JSON-encoded query parameters. Use this to pass bind parameters to
parameterized queries.
tags:
- Query data
post:
operationId: PostExecuteQueryInfluxQL
responses:
'200':
description: Success. The response body contains query results.
content:
application/json:
schema:
$ref: '#/components/schemas/QueryResponse'
text/csv:
schema:
type: string
application/vnd.apache.parquet:
schema:
type: string
application/jsonl:
schema:
type: string
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Access denied.
'404':
description: Database not found.
'405':
description: Method not allowed.
'422':
description: Unprocessable entity.
summary: Execute InfluxQL query
description: Executes an InfluxQL query to retrieve data from the specified database.
parameters:
- $ref: '#/components/parameters/AcceptQueryHeader'
- $ref: '#/components/parameters/ContentType'
requestBody:
$ref: '#/components/requestBodies/queryRequestBody'
tags:
- Query data
/api/v3/query_sql:
get:
operationId: GetExecuteQuerySQL
responses:
'200':
description: Success. The response body contains query results.
content:
application/json:
schema:
$ref: '#/components/schemas/QueryResponse'
example:
results:
- series:
- name: mytable
columns:
- time
- value
values:
- - '2024-02-02T12:00:00Z'
- 42
text/csv:
schema:
type: string
application/vnd.apache.parquet:
schema:
type: string
application/jsonl:
schema:
type: string
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Access denied.
'404':
description: Database not found.
'405':
description: Method not allowed.
'422':
description: Unprocessable entity.
summary: Execute SQL query
description: Executes an SQL query to retrieve data from the specified database.
parameters:
- $ref: '#/components/parameters/db'
- $ref: '#/components/parameters/querySqlParam'
- $ref: '#/components/parameters/format'
- $ref: '#/components/parameters/AcceptQueryHeader'
- $ref: '#/components/parameters/ContentType'
- name: params
in: query
required: false
schema:
type: string
description: >-
JSON-encoded query parameters. Use this to pass bind parameters to
parameterized queries.
tags:
- Query data
post:
operationId: PostExecuteQuerySQL
responses:
'200':
description: Success. The response body contains query results.
content:
application/json:
schema:
$ref: '#/components/schemas/QueryResponse'
text/csv:
schema:
type: string
application/vnd.apache.parquet:
schema:
type: string
application/jsonl:
schema:
type: string
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Access denied.
'404':
description: Database not found.
'405':
description: Method not allowed.
'422':
description: Unprocessable entity.
summary: Execute SQL query
description: Executes an SQL query to retrieve data from the specified database.
parameters:
- $ref: '#/components/parameters/AcceptQueryHeader'
- $ref: '#/components/parameters/ContentType'
requestBody:
$ref: '#/components/requestBodies/queryRequestBody'
tags:
- Query data
/api/v3/write_lp:
post:
operationId: PostWriteLP
parameters:
- $ref: '#/components/parameters/dbWriteParam'
- $ref: '#/components/parameters/accept_partial'
- $ref: '#/components/parameters/precisionParam'
- name: no_sync
in: query
schema:
$ref: '#/components/schemas/NoSync'
- name: Content-Type
in: header
description: |
The content type of the request payload.
schema:
$ref: '#/components/schemas/LineProtocol'
required: false
- name: Accept
in: header
description: >
The content type that the client can understand.
Writes return a response body only if they fail, partially or
completely (for example, due to a syntax problem or type mismatch).
schema:
type: string
default: application/json
enum:
- application/json
required: false
- $ref: '#/components/parameters/ContentEncoding'
- $ref: '#/components/parameters/ContentLength'
responses:
'204':
description: >-
Success ("No Content"). All data in the batch is written and
queryable.
headers:
cluster-uuid:
$ref: '#/components/headers/ClusterUUID'
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Access denied.
'413':
description: Request entity too large.
'422':
description: Unprocessable entity.
summary: Write line protocol
description: >
Writes line protocol to the specified database.
This is the native InfluxDB 3 Core write endpoint that provides enhanced
control
over write behavior with advanced parameters for high-performance and
fault-tolerant operations.
Use this endpoint to send data in [line
protocol](https://docs.influxdata.com/influxdb3/core/reference/syntax/line-protocol/)
format to InfluxDB.
Use query parameters to specify options for writing data.
#### Features
- **Partial writes**: Use `accept_partial=true` to allow partial success
when some lines in a batch fail
- **Asynchronous writes**: Use `no_sync=true` to skip waiting for WAL
synchronization, allowing faster response times but sacrificing
durability guarantees
- **Flexible precision**: Automatic timestamp precision detection with
`precision=auto` (default)
#### Auto precision detection
When you use `precision=auto` or omit the precision parameter, InfluxDB
3 automatically detects
the timestamp precision based on the magnitude of the timestamp value:
- Timestamps < 5e9 → Second precision (multiplied by 1,000,000,000 to
convert to nanoseconds)
- Timestamps < 5e12 → Millisecond precision (multiplied by 1,000,000)
- Timestamps < 5e15 → Microsecond precision (multiplied by 1,000)
- Larger timestamps → Nanosecond precision (no conversion needed)
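The thresholds above can be sketched as follows (illustrative only; not
the server implementation):

```python
def to_nanoseconds(ts: int) -> int:
    """Apply the documented auto-detection thresholds to a raw timestamp."""
    if ts < 5_000_000_000:          # interpreted as seconds
        return ts * 1_000_000_000
    if ts < 5_000_000_000_000:      # interpreted as milliseconds
        return ts * 1_000_000
    if ts < 5_000_000_000_000_000:  # interpreted as microseconds
        return ts * 1_000
    return ts                       # already nanoseconds
```

The same instant expressed at any precision converts to the same
nanosecond value, so mixed-precision batches are stored consistently.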
#### Related
- [Use the InfluxDB v3 write_lp API to write
data](https://docs.influxdata.com/influxdb3/core/write-data/http-api/v3-write-lp/)
requestBody:
$ref: '#/components/requestBodies/lineProtocolRequestBody'
tags:
- Write data
x-codeSamples:
- label: cURL - Basic write
lang: Shell
source: >
curl --request POST
"http://localhost:8181/api/v3/write_lp?db=sensors" \
--header "Authorization: Bearer DATABASE_TOKEN" \
--header "Content-Type: text/plain" \
--data-raw "cpu,host=server01 usage=85.2 1638360000000000000"
- label: cURL - Write with millisecond precision
lang: Shell
source: >
curl --request POST
"http://localhost:8181/api/v3/write_lp?db=sensors&precision=ms" \
--header "Authorization: Bearer DATABASE_TOKEN" \
--header "Content-Type: text/plain" \
--data-raw "cpu,host=server01 usage=85.2 1638360000000"
- label: cURL - Asynchronous write with partial acceptance
lang: Shell
source: >
curl --request POST
"http://localhost:8181/api/v3/write_lp?db=sensors&accept_partial=true&no_sync=true&precision=auto"
\
--header "Authorization: Bearer DATABASE_TOKEN" \
--header "Content-Type: text/plain" \
--data-raw "cpu,host=server01 usage=85.2
memory,host=server01 used=4096"
- label: cURL - Multiple measurements with tags
lang: Shell
source: >
curl --request POST
"http://localhost:8181/api/v3/write_lp?db=sensors&precision=ns" \
--header "Authorization: Bearer DATABASE_TOKEN" \
--header "Content-Type: text/plain" \
--data-raw "cpu,host=server01,region=us-west usage=85.2,load=0.75 1638360000000000000
memory,host=server01,region=us-west used=4096,free=12288
1638360000000000000
disk,host=server01,region=us-west,device=/dev/sda1
used=50.5,free=49.5 1638360000000000000"
/health:
get:
operationId: GetHealth
responses:
'200':
description: Service is running. Returns `OK`.
content:
text/plain:
schema:
type: string
example: OK
'401':
description: Unauthorized. Authentication is required.
'500':
description: Service is unavailable.
summary: Health check
description: >
Checks the status of the service.
Returns `OK` if the service is running. This endpoint does not return
version information.
Use the [`/ping`](#operation/GetPing) endpoint to retrieve version
details.
> **Note**: This endpoint requires authentication by default in InfluxDB
3 Core.
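For example, assuming the default `localhost:8181` address and a
placeholder `AUTH_TOKEN`:

```shell
curl "http://localhost:8181/health" \
  --header "Authorization: Bearer AUTH_TOKEN"
```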
tags:
- Server information
/metrics:
get:
operationId: GetMetrics
responses:
'200':
description: Success
summary: Metrics
description: Retrieves Prometheus-compatible server metrics.
tags:
- Server information
/ping:
get:
operationId: GetPing
responses:
'200':
description: Success. The response body contains server information.
headers:
x-influxdb-version:
description: The InfluxDB version number (for example, `3.8.0`).
schema:
type: string
example: 3.8.0
x-influxdb-build:
description: The InfluxDB build type (`Core` or `Enterprise`).
schema:
type: string
example: Core
content:
application/json:
schema:
type: object
properties:
version:
type: string
description: The InfluxDB version number.
example: 3.8.0
revision:
type: string
description: The git revision hash for the build.
example: 83b589b883
process_id:
type: string
description: A unique identifier for the server process.
example: b756d9e0-cecd-4f72-b6d0-19e2d4f8cbb7
'401':
description: Unauthorized. Authentication is required.
'404':
description: |
Not Found. Returned for HEAD requests.
Use a GET request to retrieve version information.
x-client-method: ping
summary: Ping the server
description: >
Returns version information for the server.
**Important**: Use a GET request. HEAD requests return `404 Not Found`.
The response includes version information in both headers and the JSON
body:
- **Headers**: `x-influxdb-version` and `x-influxdb-build`
- **Body**: JSON object with `version`, `revision`, and `process_id`
> **Note**: This endpoint requires authentication by default in InfluxDB
3 Core.
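For example, using a placeholder `AUTH_TOKEN`:

```shell
curl --include "http://localhost:8181/ping" \
  --header "Authorization: Bearer AUTH_TOKEN"
```

The `--include` flag prints the `x-influxdb-version` and
`x-influxdb-build` response headers along with the JSON body.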
tags:
- Server information
post:
operationId: PostPing
responses:
'200':
description: Success. The response body contains server information.
headers:
x-influxdb-version:
description: The InfluxDB version number (for example, `3.8.0`).
schema:
type: string
example: 3.8.0
x-influxdb-build:
description: The InfluxDB build type (`Core` or `Enterprise`).
schema:
type: string
example: Core
content:
application/json:
schema:
type: object
properties:
version:
type: string
description: The InfluxDB version number.
example: 3.8.0
revision:
type: string
description: The git revision hash for the build.
example: 83b589b883
process_id:
type: string
description: A unique identifier for the server process.
example: b756d9e0-cecd-4f72-b6d0-19e2d4f8cbb7
'401':
description: Unauthorized. Authentication is required.
'404':
description: |
Not Found. Returned for HEAD requests.
Use a GET request to retrieve version information.
summary: Ping the server
description: >-
Returns version information for the server. Accepts POST in addition to
GET.
tags:
- Server information
/query:
get:
operationId: GetV1ExecuteQuery
x-compatibility-version: v1
responses:
'200':
description: |
Success. The response body contains query results.
content:
application/json:
schema:
$ref: '#/components/schemas/QueryResponse'
application/csv:
schema:
type: string
headers:
Content-Type:
description: >
The content type of the response.
Default is `application/json`.
If the `Accept` request header is `application/csv` or
`text/csv`, the `Content-Type` response header is
`application/csv`, and the response is formatted as CSV.
schema:
type: string
default: application/json
enum:
- application/json
- application/csv
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Access denied.
'404':
description: Database not found.
'405':
description: Method not allowed.
'422':
description: Unprocessable entity.
summary: Execute InfluxQL query (v1-compatible)
description: >
Executes an InfluxQL query to retrieve data from the specified database.
This endpoint is compatible with InfluxDB 1.x client libraries and
third-party integrations such as Grafana.
Use query parameters to specify the database and the InfluxQL query.
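For example, using placeholder database and token values:

```shell
curl --get "http://localhost:8181/query" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --data-urlencode "db=sensors" \
  --data-urlencode "q=SELECT * FROM cpu LIMIT 10"
```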
#### Related
- [Use the InfluxDB v1 HTTP query API and InfluxQL to query
data](https://docs.influxdata.com/influxdb3/core/query-data/execute-queries/influxdb-v1-api/)
parameters:
- name: Accept
in: header
schema:
type: string
default: application/json
enum:
- application/json
- application/csv
- text/csv
required: false
description: >
The content type that the client can understand.
If `text/csv` is specified, the `Content-Type` response header is
`application/csv`, and the response is formatted as CSV.
Returns an error if the format is invalid or non-UTF8.
- in: query
name: chunked
description: |
If true, the response is divided into chunks of size `chunk_size`.
schema:
type: boolean
default: false
- in: query
name: chunk_size
description: |
The number of records to include in each chunk.
This parameter is only used if `chunked=true`.
schema:
type: integer
default: 10000
- in: query
name: db
description: >-
The database to query. If not provided, the InfluxQL query string
must specify the database.
schema:
type: string
format: InfluxQL
- in: query
name: pretty
description: |
If true, the JSON response is formatted in a human-readable format.
schema:
type: boolean
default: false
- in: query
name: q
description: The InfluxQL query string.
required: true
schema:
type: string
- name: epoch
description: >
Formats timestamps as [unix (epoch)
timestamps](https://docs.influxdata.com/influxdb3/core/reference/glossary/#unix-timestamp)
with the specified precision
instead of [RFC3339
timestamps](https://docs.influxdata.com/influxdb3/core/reference/glossary/#rfc3339-timestamp)
with nanosecond precision.
in: query
schema:
$ref: '#/components/schemas/EpochCompatibility'
- $ref: '#/components/parameters/v1UsernameParam'
- $ref: '#/components/parameters/v1PasswordParam'
- name: rp
in: query
required: false
schema:
type: string
description: >
Retention policy name. Honored but discouraged. InfluxDB 3 doesn't
use retention policies.
- name: Authorization
in: header
required: false
schema:
type: string
description: >
Authorization header for token-based authentication.
Supported schemes:
- `Bearer AUTH_TOKEN` - OAuth bearer token scheme
- `Token AUTH_TOKEN` - InfluxDB v2 token scheme
- `Basic CREDENTIALS` - Basic authentication, where `CREDENTIALS` is the
base64-encoded `username:token` pair (the username is ignored)
tags:
- Query data
post:
operationId: PostExecuteV1Query
x-compatibility-version: v1
responses:
'200':
description: |
Success. The response body contains query results.
content:
application/json:
schema:
$ref: '#/components/schemas/QueryResponse'
application/csv:
schema:
type: string
headers:
Content-Type:
description: >
The content type of the response.
Default is `application/json`.
If the `Accept` request header is `application/csv` or
`text/csv`, the `Content-Type` response header is
`application/csv`, and the response is formatted as CSV.
schema:
type: string
default: application/json
enum:
- application/json
- application/csv
'400':
description: Bad request.
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Access denied.
'404':
description: Database not found.
'405':
description: Method not allowed.
'422':
description: Unprocessable entity.
summary: Execute InfluxQL query (v1-compatible)
description: >
Executes an InfluxQL query to retrieve data from the specified database.
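For example, sending the query as a form-urlencoded body (placeholder
database and token values):

```shell
curl --request POST "http://localhost:8181/query" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --data-urlencode "db=sensors" \
  --data-urlencode "q=SELECT * FROM cpu LIMIT 10"
```

curl sends `--data-urlencode` values as
`application/x-www-form-urlencoded` by default.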
#### Related
- [Use the InfluxDB v1 HTTP query API and InfluxQL to query
data](https://docs.influxdata.com/influxdb3/core/query-data/execute-queries/influxdb-v1-api/)
parameters:
- name: Accept
in: header
schema:
type: string
default: application/json
enum:
- application/json
- application/csv
- text/csv
required: false
description: >
The content type that the client can understand.
If `text/csv` is specified, the `Content-Type` response header is
`application/csv`, and the response is formatted as CSV.
Returns an error if the format is invalid or non-UTF8.
requestBody:
content:
application/json:
schema:
type: object
properties:
db:
type: string
description: >-
The database to query. If not provided, the InfluxQL query
string must specify the database.
q:
description: The InfluxQL query string.
type: string
chunked:
description: >
If true, the response is divided into chunks of size
`chunk_size`.
type: boolean
chunk_size:
description: |
The number of records to include in each chunk.
This parameter is only used if `chunked=true`.
type: integer
default: 10000
epoch:
description: >
A unix timestamp precision.
- `h` for hours
- `m` for minutes
- `s` for seconds
- `ms` for milliseconds
- `u` or `µ` for microseconds
- `ns` for nanoseconds
Formats timestamps as [unix (epoch)
timestamps](https://docs.influxdata.com/influxdb3/core/reference/glossary/#unix-timestamp)
with the specified precision
instead of [RFC3339
timestamps](https://docs.influxdata.com/influxdb3/core/reference/glossary/#rfc3339-timestamp)
with nanosecond precision.
enum:
- ns
- u
- µ
- ms
- s
- m
- h
type: string
pretty:
description: >
If true, the JSON response is formatted in a human-readable
format.
type: boolean
required:
- q
application/x-www-form-urlencoded:
schema:
type: object
properties:
db:
type: string
description: >-
The database to query. If not provided, the InfluxQL query
string must specify the database.
q:
description: The InfluxQL query string.
type: string
chunked:
description: >
If true, the response is divided into chunks of size
`chunk_size`.
type: boolean
chunk_size:
description: |
The number of records to include in each chunk.
This parameter is only used if `chunked=true`.
type: integer
default: 10000
epoch:
description: >
A unix timestamp precision.
- `h` for hours
- `m` for minutes
- `s` for seconds
- `ms` for milliseconds
- `u` or `µ` for microseconds
- `ns` for nanoseconds
Formats timestamps as [unix (epoch)
timestamps](https://docs.influxdata.com/influxdb3/core/reference/glossary/#unix-timestamp)
with the specified precision
instead of [RFC3339
timestamps](https://docs.influxdata.com/influxdb3/core/reference/glossary/#rfc3339-timestamp)
with nanosecond precision.
enum:
- ns
- u
- µ
- ms
- s
- m
- h
type: string
pretty:
description: >
If true, the JSON response is formatted in a human-readable
format.
type: boolean
required:
- q
application/vnd.influxql:
schema:
type: string
description: InfluxQL query string sent as the request body.
tags:
- Query data
/write:
post:
operationId: PostV1Write
x-compatibility-version: v1
responses:
'204':
description: >-
Success ("No Content"). All data in the batch is written and
queryable.
headers:
cluster-uuid:
$ref: '#/components/headers/ClusterUUID'
'400':
description: >
Bad request. Some (a _partial write_) or all of the data from the
batch was rejected and not written.
If a partial write occurred, then some points from the batch are
written and queryable.
The response body:
- indicates if a partial write occurred or all data was rejected.
- contains details about the [rejected points](https://docs.influxdata.com/influxdb3/core/write-data/troubleshoot/#troubleshoot-rejected-points), up to 100 points.
content:
application/json:
examples:
rejectedAllPoints:
summary: Rejected all points in the batch
value: |
{
"error": "write of line protocol failed",
"data": [
{
"original_line": "home,room=Kitchen temp=hi",
"line_number": 2,
"error_message": "No fields were provided"
}
]
}
partialWriteErrorWithRejectedPoints:
summary: Partial write rejected some points in the batch
value: |
{
"error": "partial write of line protocol occurred",
"data": [
{
"original_line": "home,room=Kitchen temp=hi",
"line_number": 2,
"error_message": "No fields were provided"
}
]
}
'401':
$ref: '#/components/responses/Unauthorized'
'403':
description: Access denied.
'413':
description: Request entity too large.
summary: Write line protocol (v1-compatible)
description: >
Writes line protocol to the specified database.
This endpoint provides backward compatibility for InfluxDB 1.x write
workloads using tools such as InfluxDB 1.x client libraries, the
Telegraf `outputs.influxdb` output plugin, or third-party tools.
Use this endpoint to send data in [line
protocol](https://docs.influxdata.com/influxdb3/core/reference/syntax/line-protocol/)
format to InfluxDB.
Use query parameters to specify options for writing data.
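For example, writing a point with a second-precision timestamp
(placeholder database and token values):

```shell
curl --request POST "http://localhost:8181/write?db=sensors&precision=s" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --header "Content-Type: text/plain" \
  --data-raw "cpu,host=server01 usage=85.2 1638360000"
```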
#### Related
- [Use compatibility APIs to write
data](https://docs.influxdata.com/influxdb3/core/write-data/http-api/compatibility-apis/)
parameters:
- $ref: '#/components/parameters/dbWriteParam'
- $ref: '#/components/parameters/compatibilityPrecisionParam'
- $ref: '#/components/parameters/v1UsernameParam'
- $ref: '#/components/parameters/v1PasswordParam'
- name: rp
in: query
required: false
schema:
type: string
description: >
Retention policy name. Honored but discouraged. InfluxDB 3 doesn't
use retention policies.
- name: consistency
in: query
required: false
schema:
type: string
description: >
Write consistency level. Ignored by InfluxDB 3. Provided for
compatibility with InfluxDB 1.x clients.
- name: Authorization
in: header
required: false
schema:
type: string
description: >
Authorization header for token-based authentication.
Supported schemes:
- `Bearer AUTH_TOKEN` - OAuth bearer token scheme
- `Token AUTH_TOKEN` - InfluxDB v2 token scheme
- `Basic CREDENTIALS` - Basic authentication, where `CREDENTIALS` is the
base64-encoded `username:token` pair (the username is ignored)
- name: Content-Type
in: header
description: |
The content type of the request payload.
schema:
$ref: '#/components/schemas/LineProtocol'
required: false
- name: Accept
in: header
description: >
The content type that the client can understand.
Writes return a response body only if they fail, partially or
completely (for example, due to a syntax problem or type mismatch).
schema:
type: string
default: application/json
enum:
- application/json
required: false
- $ref: '#/components/parameters/ContentEncoding'
- $ref: '#/components/parameters/ContentLength'
requestBody:
$ref: '#/components/requestBodies/lineProtocolRequestBody'
tags:
- Write data
components:
parameters:
AcceptQueryHeader:
name: Accept
in: header
schema:
type: string
default: application/json
enum:
- application/json
- application/jsonl
- application/vnd.apache.parquet
- text/csv
required: false
description: |
The content type that the client can understand.
ContentEncoding:
name: Content-Encoding
in: header
description: |
The compression applied to the line protocol in the request payload.
To send a gzip payload, pass `Content-Encoding: gzip` header.
schema:
$ref: '#/components/schemas/ContentEncoding'
required: false
ContentLength:
name: Content-Length
in: header
description: |
The size of the entity-body, in bytes, sent to InfluxDB.
schema:
$ref: '#/components/schemas/ContentLength'
ContentType:
name: Content-Type
description: |
The format of the data in the request body.
in: header
schema:
type: string
enum:
- application/json
required: false
db:
name: db
in: query
required: true
schema:
type: string
description: |
The name of the database.
dbWriteParam:
name: db
in: query
required: true
schema:
type: string
description: |
The name of the database.
InfluxDB creates the database if it doesn't already exist, and then
writes all points in the batch to the database.
dbQueryParam:
name: db
in: query
required: false
schema:
type: string
description: >
The name of the database.
If you provide a query that specifies the database, you can omit the
'db' parameter from your request.
accept_partial:
name: accept_partial
in: query
required: false
schema:
$ref: '#/components/schemas/AcceptPartial'
compatibilityPrecisionParam:
name: precision
in: query
required: false
schema:
$ref: '#/components/schemas/PrecisionWriteCompatibility'
description: The precision for unix timestamps in the line protocol batch.
precisionParam:
name: precision
in: query
required: false
schema:
$ref: '#/components/schemas/PrecisionWrite'
description: The precision for unix timestamps in the line protocol batch.
querySqlParam:
name: q
in: query
required: true
schema:
type: string
format: SQL
description: |
The query to execute.
format:
name: format
in: query
required: false
schema:
$ref: '#/components/schemas/Format'
formatRequired:
name: format
in: query
required: true
schema:
$ref: '#/components/schemas/Format'
v1UsernameParam:
name: u
in: query
required: false
schema:
type: string
description: >
Username for v1 compatibility authentication.
When using Basic authentication or query string authentication, InfluxDB
3 ignores this parameter; any string is accepted for compatibility with
InfluxDB 1.x clients.
v1PasswordParam:
name: p
in: query
required: false
schema:
type: string
description: >
Password for v1 compatibility authentication.
For query string authentication, pass a database token with write
permissions as this parameter.
InfluxDB 3 checks that the `p` value is an authorized token.
requestBodies:
lineProtocolRequestBody:
required: true
content:
text/plain:
schema:
type: string
examples:
line:
summary: Example line protocol
value: measurement,tag=value field=1 1234567890
multiline:
summary: Example line protocol with UTF-8 characters
value: |
measurement,tag=value field=1 1234567890
measurement,tag=value field=2 1234567900
measurement,tag=value field=3 1234568000
queryRequestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/QueryRequestObject'
schemas:
AdminTokenObject:
type: object
properties:
id:
type: integer
name:
type: string
token:
type: string
hash:
type: string
created_at:
type: string
format: date-time
expiry:
type: string
format: date-time
nullable: true
example:
id: 0
name: _admin
token: apiv3_00xx0Xx0xx00XX0x0
hash: 00xx0Xx0xx00XX0x0
created_at: '2025-04-18T14:02:45.331Z'
expiry: null
ContentEncoding:
type: string
enum:
- gzip
- identity
description: >
The content coding of the request body.
Use `gzip` for compressed data or `identity` for unmodified,
uncompressed data.
#### Multi-member gzip support
InfluxDB 3 supports multi-member gzip payloads (concatenated gzip files
per [RFC 1952](https://www.rfc-editor.org/rfc/rfc1952)).
This allows you to:
- Concatenate multiple gzip files and send them in a single request
- Maintain compatibility with InfluxDB v1 and v2 write endpoints
- Simplify batch operations using standard compression tools
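For example, a gzip-compressed write (a sketch; `data.lp`, `mydb`, and
`AUTH_TOKEN` are placeholders):
```bash
# Compress line protocol and stream it to the v3 write endpoint
gzip -c data.lp | curl "http://localhost:8181/api/v3/write_lp?db=mydb" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --header "Content-Encoding: gzip" \
  --data-binary @-
```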
default: identity
LineProtocol:
type: string
enum:
- text/plain
- text/plain; charset=utf-8
description: >
`text/plain` is the content type for line protocol. `UTF-8` is the
default character set.
default: text/plain; charset=utf-8
ContentLength:
type: integer
description: The length of the request body, as a decimal number of octets.
AcceptPartial:
type: boolean
default: true
description: >
Whether to accept a partial write.
When `true` (the default), InfluxDB writes valid lines from the batch
and rejects invalid lines.
When `false`, InfluxDB rejects the entire batch if any line is invalid.
Format:
type: string
enum:
- json
- csv
- parquet
- json_lines
- jsonl
- pretty
description: |-
The format of data in the response body.
`json_lines` is the canonical name; `jsonl` is accepted as an alias.
NoSync:
type: boolean
default: false
description: >
When `true`, InfluxDB acknowledges a successful write without waiting
for WAL persistence, trading durability for lower write latency.
#### Related
- [Use the HTTP API and client libraries to write
data](https://docs.influxdata.com/influxdb3/core/write-data/api-client-libraries/)
- [Data
durability](https://docs.influxdata.com/influxdb3/core/reference/internals/durability/)
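For example, a low-latency write that skips waiting for WAL persistence
(a sketch; assumes the `no_sync` query parameter on the v3 write
endpoint, with placeholder database and token values):
```bash
curl "http://localhost:8181/api/v3/write_lp?db=mydb&no_sync=true" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --data-binary 'home,room=kitchen temp=72 1641024000'
```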
PrecisionWriteCompatibility:
enum:
- ms
- s
- us
- u
- ns
- 'n'
type: string
description: >-
The precision for unix timestamps in the line protocol batch.
Use `ms` for milliseconds, `s` for seconds, `us` or `u` for
microseconds, or `ns` or `n` for nanoseconds.
Optional; defaults to nanosecond precision if omitted.
PrecisionWrite:
enum:
- auto
- nanosecond
- microsecond
- millisecond
- second
type: string
description: >
The precision for unix timestamps in the line protocol batch.
Supported values:
- `auto` (default): Automatically detects precision based on timestamp
magnitude
- `nanosecond`: Nanoseconds
- `microsecond`: Microseconds
- `millisecond`: Milliseconds
- `second`: Seconds
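For example, writing second-precision timestamps to the v3 write
endpoint (a sketch; `mydb` and `AUTH_TOKEN` are placeholders):
```bash
curl "http://localhost:8181/api/v3/write_lp?db=mydb&precision=second" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --data-binary 'home,room=kitchen temp=72 1641024000'
```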
QueryRequestObject:
type: object
properties:
db:
description: |
The name of the database to query.
Required if the query (`q`) doesn't specify the database.
type: string
q:
description: The query to execute.
type: string
format:
description: The format of the query results.
type: string
enum:
- json
- csv
- parquet
- json_lines
- jsonl
- pretty
params:
description: |
Additional parameters for the query.
Use this field to pass query parameters.
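For example, a parameterized SQL query sent to `/api/v3/query_sql`
(a sketch; the database, table, and token names are placeholders):
```bash
curl --request POST "http://localhost:8181/api/v3/query_sql" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"db":"mydb","q":"SELECT * FROM home WHERE room = $room","params":{"room":"kitchen"}}'
```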
type: object
additionalProperties: true
required:
- db
- q
example:
db: mydb
q: SELECT * FROM mytable
format: json
params: {}
CreateDatabaseRequest:
type: object
properties:
db:
type: string
pattern: ^[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]$|^[a-zA-Z0-9]$
description: >-
The database name. Database names cannot contain underscores (_).
Names must start and end with alphanumeric characters and can
contain hyphens (-) in the middle.
retention_period:
type: string
description: >-
The retention period for the database. Specifies how long data
should be retained.
Use duration format (for example, "1d", "1h", "30m", "7d").
example: 7d
required:
- db
CreateTableRequest:
type: object
properties:
db:
type: string
table:
type: string
tags:
type: array
items:
type: string
fields:
type: array
items:
type: object
properties:
name:
type: string
type:
type: string
enum:
- utf8
- int64
- uint64
- float64
- bool
required:
- name
- type
retention_period:
type: string
description: >-
The retention period for the table. Specifies how long data in this
table should be retained.
Use duration format (for example, "1d", "1h", "30m", "7d").
example: 30d
required:
- db
- table
- tags
- fields
DistinctCacheCreateRequest:
type: object
properties:
db:
type: string
table:
type: string
name:
type: string
description: Optional cache name.
columns:
type: array
items:
type: string
max_cardinality:
type: integer
description: Optional maximum cardinality.
max_age:
type: integer
description: Optional maximum age in seconds.
required:
- db
- table
- columns
example:
db: mydb
table: mytable
columns:
- tag1
- tag2
max_cardinality: 1000
max_age: 3600
LastCacheCreateRequest:
type: object
properties:
db:
type: string
table:
type: string
name:
type: string
description: Optional cache name.
key_columns:
type: array
items:
type: string
description: Optional list of key columns.
value_columns:
type: array
items:
type: string
description: Optional list of value columns.
count:
type: integer
description: Optional count.
ttl:
type: integer
description: Optional time-to-live in seconds.
required:
- db
- table
example:
db: mydb
table: mytable
key_columns:
- tag1
value_columns:
- field1
count: 100
ttl: 3600
ProcessingEngineTriggerRequest:
type: object
properties:
db:
type: string
plugin_filename:
type: string
description: >
The path and filename of the plugin to execute. For example,
`schedule.py` or `endpoints/report.py`.
The path can be absolute or relative to the `--plugins-dir`
directory configured when starting InfluxDB 3.
The plugin file must implement the trigger interface associated with
the trigger's specification.
trigger_name:
type: string
trigger_settings:
description: |
Configuration for trigger error handling and execution behavior.
allOf:
- $ref: '#/components/schemas/TriggerSettings'
trigger_specification:
description: >
Specifies when and how the processing engine trigger should be
invoked.
## Supported trigger specifications:
### Cron-based scheduling
Format: `cron:CRON_EXPRESSION`
Uses extended (6-field) cron format (second minute hour day_of_month
month day_of_week):
```
┌───────────── second (0-59)
│ ┌───────────── minute (0-59)
│ │ ┌───────────── hour (0-23)
│ │ │ ┌───────────── day of month (1-31)
│ │ │ │ ┌───────────── month (1-12)
│ │ │ │ │ ┌───────────── day of week (0-6, Sunday=0)
│ │ │ │ │ │
* * * * * *
```
Examples:
- `cron:0 0 6 * * 1-5` - Every weekday at 6:00 AM
- `cron:0 30 14 * * 5` - Every Friday at 2:30 PM
- `cron:0 0 0 1 * *` - First day of every month at midnight
### Interval-based scheduling
Format: `every:DURATION`
Supported durations: `s` (seconds), `m` (minutes), `h` (hours), `d`
(days), `w` (weeks), `M` (months), `y` (years):
- `every:30s` - Every 30 seconds
- `every:5m` - Every 5 minutes
- `every:1h` - Every hour
- `every:1d` - Every day
- `every:1w` - Every week
- `every:1M` - Every month
- `every:1y` - Every year
**Maximum interval**: 1 year
### Table-based triggers
- `all_tables` - Triggers on write events to any table in the
database
- `table:TABLE_NAME` - Triggers on write events to a specific table
### On-demand triggers
Format: `request:REQUEST_PATH`
Creates an HTTP endpoint `/api/v3/engine/REQUEST_PATH` for manual
invocation:
- `request:hello-world` - Creates endpoint
`/api/v3/engine/hello-world`
- `request:data-export` - Creates endpoint
`/api/v3/engine/data-export`
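For example, creating a trigger with an interval specification
(a sketch; assumes the `/api/v3/configure/processing_engine_trigger`
endpoint, with placeholder database, plugin, and token names):
```bash
curl --request POST "http://localhost:8181/api/v3/configure/processing_engine_trigger" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"db":"mydb","plugin_filename":"schedule.py","trigger_name":"every-five-minutes","trigger_specification":"every:5m","trigger_settings":{"run_async":false,"error_behavior":"Log"},"disabled":false}'
```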
pattern: >-
^(cron:[0-9 *,/-]+|every:[0-9]+[smhdwMy]|all_tables|table:[a-zA-Z_][a-zA-Z0-9_]*|request:[a-zA-Z0-9_-]+)$
example: cron:0 0 6 * * 1-5
trigger_arguments:
type: object
additionalProperties: true
description: Optional arguments passed to the plugin.
disabled:
type: boolean
default: false
description: Whether the trigger is disabled.
required:
- db
- plugin_filename
- trigger_name
- trigger_settings
- trigger_specification
- disabled
TriggerSettings:
type: object
description: >
Configuration settings for processing engine trigger error handling and
execution behavior.
properties:
run_async:
type: boolean
default: false
description: >
Whether to run the trigger asynchronously.
When `true`, the trigger executes in the background without
blocking.
When `false`, the trigger executes synchronously.
error_behavior:
type: string
enum:
- Log
- Retry
- Disable
description: |
Specifies how to handle errors that occur during trigger execution:
- `Log`: Log the error and continue (default)
- `Retry`: Retry the trigger execution
- `Disable`: Disable the trigger after an error
default: Log
required:
- run_async
- error_behavior
WALPluginTestRequest:
type: object
description: |
Request body for testing a write-ahead logging (WAL) plugin.
properties:
filename:
type: string
description: |
The path and filename of the plugin to test.
database:
type: string
description: |
The database name to use for the test.
input_lp:
type: string
description: |
Line protocol data to use as input for the test.
cache_name:
type: string
description: |
Optional name of the cache to use in the test.
input_arguments:
type: object
additionalProperties:
type: string
description: |
Optional key-value pairs of arguments to pass to the plugin.
required:
- filename
- database
- input_lp
SchedulePluginTestRequest:
type: object
description: |
Request body for testing a scheduling plugin.
properties:
filename:
type: string
description: |
The path and filename of the plugin to test.
database:
type: string
description: |
The database name to use for the test.
schedule:
type: string
description: |
Optional schedule specification in cron or interval format.
cache_name:
type: string
description: |
Optional name of the cache to use in the test.
input_arguments:
type: object
additionalProperties:
type: string
description: |
Optional key-value pairs of arguments to pass to the plugin.
required:
- filename
- database
PluginFileRequest:
type: object
description: |
Request body for updating a plugin file.
properties:
plugin_name:
type: string
description: |
The name of the plugin file to update.
content:
type: string
description: |
The content of the plugin file.
required:
- plugin_name
- content
PluginDirectoryRequest:
type: object
description: |
Request body for updating plugin directory with multiple files.
properties:
plugin_name:
type: string
description: |
The name of the plugin directory to update.
files:
type: array
items:
$ref: '#/components/schemas/PluginFileEntry'
description: |
List of plugin files to include in the directory.
required:
- plugin_name
- files
PluginFileEntry:
type: object
description: |
Represents a single file in a plugin directory.
properties:
content:
type: string
description: |
The content of the file.
relative_path:
type: string
description: The relative path of the file within the plugin directory.
required:
- relative_path
- content
ShowDatabasesResponse:
type: object
properties:
databases:
type: array
items:
type: string
QueryResponse:
type: object
properties:
results:
type: array
items:
type: object
example:
results:
- series:
- name: mytable
columns:
- time
- value
values:
- - '2024-02-02T12:00:00Z'
- 42
ErrorMessage:
type: object
properties:
error:
type: string
data:
type: object
nullable: true
EpochCompatibility:
description: |
A unix timestamp precision.
- `h` for hours
- `m` for minutes
- `s` for seconds
- `ms` for milliseconds
- `u` or `µ` for microseconds
- `ns` for nanoseconds
enum:
- ns
- u
- µ
- ms
- s
- m
- h
type: string
UpdateDatabaseRequest:
type: object
properties:
retention_period:
type: string
description: >
The retention period for the database. Specifies how long data
should be retained.
Use duration format (for example, "1d", "1h", "30m", "7d").
example: 7d
description: Request schema for updating database configuration.
responses:
Unauthorized:
description: Unauthorized access.
content:
application/json:
schema:
$ref: '#/components/schemas/ErrorMessage'
BadRequest:
description: |
Request failed. Possible reasons:
- Invalid database name
- Malformed request body
- Invalid timestamp precision
content:
application/json:
schema:
$ref: '#/components/schemas/ErrorMessage'
Forbidden:
description: Access denied.
content:
application/json:
schema:
$ref: '#/components/schemas/ErrorMessage'
NotFound:
description: Resource not found.
content:
application/json:
schema:
$ref: '#/components/schemas/ErrorMessage'
headers:
ClusterUUID:
description: |
The catalog UUID of the InfluxDB instance.
This header is included in all HTTP API responses and enables you to:
- Identify which cluster instance handled the request
- Monitor deployments across multiple InfluxDB instances
- Debug and troubleshoot distributed systems
schema:
type: string
format: uuid
example: 01234567-89ab-cdef-0123-456789abcdef
securitySchemes:
BasicAuthentication:
type: http
scheme: basic
description: >-
Use the `Authorization` header with the `Basic` scheme to authenticate
v1 API requests.
Works with v1 compatibility [`/write`](#operation/PostV1Write) and
[`/query`](#operation/GetV1Query) endpoints in InfluxDB 3.
When authenticating requests, InfluxDB 3 checks that the `password` part
of the decoded credential is an authorized token
and ignores the `username` part of the decoded credential.
### Syntax
```http
Authorization: Basic BASE64_ENCODED_CREDENTIALS
```
### Example
```bash
curl "http://localhost:8181/write?db=DATABASE_NAME&precision=s" \
--user "":"AUTH_TOKEN" \
--header "Content-type: text/plain; charset=utf-8" \
--data-binary 'home,room=kitchen temp=72 1641024000'
```
Replace the following:
- **`DATABASE_NAME`**: your InfluxDB 3 Core database
- **`AUTH_TOKEN`**: an admin token or database token authorized for the
database
QuerystringAuthentication:
type: apiKey
in: query
name: u=&p=
description: >-
Use InfluxDB 1.x API parameters to provide credentials through the query
string for v1 API requests.
Querystring authentication works with v1-compatible
[`/write`](#operation/PostV1Write) and [`/query`](#operation/GetV1Query)
endpoints.
When authenticating requests, InfluxDB 3 checks that the `p`
(_password_) query parameter is an authorized token
and ignores the `u` (_username_) query parameter.
### Syntax
```http
https://localhost:8181/query?[u=any]&p=AUTH_TOKEN
https://localhost:8181/write?[u=any]&p=AUTH_TOKEN
```
### Examples
```bash
curl "http://localhost:8181/write?db=DATABASE_NAME&precision=s&p=AUTH_TOKEN" \
--header "Content-type: text/plain; charset=utf-8" \
--data-binary 'home,room=kitchen temp=72 1641024000'
```
Replace the following:
- **`DATABASE_NAME`**: your InfluxDB 3 Core database
- **`AUTH_TOKEN`**: an admin token or database token authorized for the
database
```bash
#######################################
# Use an InfluxDB 1.x compatible username and password
# to query the InfluxDB v1 HTTP API
#######################################
# Use authentication query parameters:
# ?p=AUTH_TOKEN
#######################################
curl --get "http://localhost:8181/query" \
--data-urlencode "p=AUTH_TOKEN" \
--data-urlencode "db=DATABASE_NAME" \
--data-urlencode "q=SELECT * FROM MEASUREMENT"
```
Replace the following:
- **`DATABASE_NAME`**: the database to query
- **`AUTH_TOKEN`**: a database token with sufficient permissions to the
database
BearerAuthentication:
type: http
scheme: bearer
bearerFormat: JWT
description: >
Use the OAuth Bearer authentication
scheme to provide an authorization token to InfluxDB 3.
Bearer authentication works with all endpoints.
In your API requests, send an `Authorization` header.
For the header value, provide the word `Bearer` followed by a space and
a database token.
### Syntax
```http
Authorization: Bearer AUTH_TOKEN
```
### Example
```bash
curl http://localhost:8181/api/v3/query_influxql \
--header "Authorization: Bearer AUTH_TOKEN"
```
TokenAuthentication:
description: >-
Use InfluxDB v2 Token authentication to provide an authorization token
to InfluxDB 3.
The v2 Token scheme works with v1 and v2 compatibility endpoints in
InfluxDB 3.
In your API requests, send an `Authorization` header.
For the header value, provide the word `Token` followed by a space and a
database token.
The word `Token` is case-sensitive.
### Syntax
```http
Authorization: Token AUTH_TOKEN
```
### Example
```sh
########################################################
# Use the Token authentication scheme with /api/v2/write
# to write data.
########################################################
curl --request POST "http://localhost:8181/api/v2/write?bucket=DATABASE_NAME&precision=s" \
--header "Authorization: Token AUTH_TOKEN" \
--data-binary 'home,room=kitchen temp=72 1463683075'
```
in: header
name: Authorization
type: apiKey