openapi: 3.0.3 info: title: InfluxDB 3 Enterprise API Service description: | The InfluxDB HTTP API for InfluxDB 3 Enterprise provides a programmatic interface for interacting with InfluxDB 3 Enterprise databases and resources. Use this API to: - Write data to InfluxDB 3 Enterprise databases - Query data using SQL or InfluxQL - Process data using Processing engine plugins - Manage databases, tables, and Processing engine triggers - Perform administrative tasks and access system information The API includes endpoints under the following paths: - `/api/v3`: InfluxDB 3 Enterprise native endpoints - `/`: Compatibility endpoints for InfluxDB v1 workloads and clients - `/api/v2/write`: Compatibility endpoint for InfluxDB v2 workloads and clients To download the OpenAPI specification for this API, use the **Download** button above. version: v3.8.0 license: name: MIT url: https://opensource.org/licenses/MIT contact: name: InfluxData url: https://www.influxdata.com email: support@influxdata.com x-source-hash: sha256:1259b96096eab6c8dbf3f76c974924f124e9b3e08eedc6b0c9a66d3108857c52 servers: - url: https://{baseurl} description: InfluxDB 3 Enterprise API URL variables: baseurl: enum: - localhost:8181 default: localhost:8181 description: InfluxDB 3 Enterprise URL security: - BearerAuthentication: [] - TokenAuthentication: [] - BasicAuthentication: [] - QuerystringAuthentication: [] tags: - name: Authentication description: | Depending on your workflow, use one of the following schemes to authenticate to the InfluxDB 3 API: | Authentication scheme | Works with | |:-------------------|:-----------| | [Bearer authentication](#section/Authentication/BearerAuthentication) | All endpoints | | [Token authentication](#section/Authentication/TokenAuthentication) | v1, v2 endpoints | | [Basic authentication](#section/Authentication/BasicAuthentication) | v1 endpoints | | [Querystring authentication](#section/Authentication/QuerystringAuthentication) | v1 endpoints | x-traitTag: true 
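The schemes in the table differ mainly in how credentials are presented. The following Python sketch shows the corresponding `Authorization` headers and query string; the `TOKEN`, `USERNAME`, and `PASSWORD` values are placeholders, and the v1-style `u`/`p` query parameters are an assumption based on the InfluxDB v1 API:

```python
import base64
from urllib.parse import urlencode

token = "TOKEN"  # placeholder: substitute a real database token

# Bearer authentication (all endpoints)
bearer = {"Authorization": f"Bearer {token}"}

# Token authentication (v1 and v2 endpoints)
token_hdr = {"Authorization": f"Token {token}"}

# Basic authentication (v1 endpoints): base64-encoded "username:password"
creds = base64.b64encode(b"USERNAME:PASSWORD").decode()
basic = {"Authorization": f"Basic {creds}"}

# Querystring authentication (v1 endpoints): credentials in the URL,
# assuming the v1-style `u` and `p` query parameters
querystring = urlencode({"u": "USERNAME", "p": token})  # "u=USERNAME&p=TOKEN"
```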
x-related: - title: Authenticate v1 API requests href: /influxdb3/enterprise/guides/api-compatibility/v1/ - title: Manage tokens href: /influxdb3/enterprise/admin/tokens/ - name: Cache data description: |- Manage in-memory caches. #### Distinct Value Cache The Distinct Value Cache (DVC) lets you cache distinct values of one or more columns in a table, improving the performance of queries that return distinct tag and field values. The DVC is an in-memory cache that stores distinct values for specific columns in a table. When you create a DVC, you can specify which columns' distinct values to cache, the maximum number of distinct value combinations to cache, and the maximum age of cached values. A DVC is associated with a table, which can have multiple DVCs. #### Last Value Cache The Last Value Cache (LVC) lets you cache the most recent values for specific fields in a table, improving the performance of queries that return the most recent value of a field for specific series or the last N values of a field. The LVC is an in-memory cache that stores the last N values for specific fields of series in a table. When you create an LVC, you can specify which fields to cache, which tags to use to identify each series, and the number of values to cache for each unique series. An LVC is associated with a table, which can have multiple LVCs. x-related: - title: Manage the Distinct Value Cache href: /influxdb3/enterprise/admin/distinct-value-cache/ - title: Manage the Last Value Cache href: /influxdb3/enterprise/admin/last-value-cache/ - name: Compatibility endpoints description: > InfluxDB 3 provides compatibility endpoints for InfluxDB 1.x and InfluxDB 2.x workloads and clients. ### Write data using v1- or v2-compatible endpoints - [`/api/v2/write` endpoint](#operation/PostV2Write) for InfluxDB v2 clients and when you bring existing InfluxDB v2 write workloads to InfluxDB 3.
- [`/write` endpoint](#operation/PostV1Write) for InfluxDB v1 clients and when you bring existing InfluxDB v1 write workloads to InfluxDB 3. For new workloads, use the [`/api/v3/write_lp` endpoint](#operation/PostWriteLP). All endpoints accept the same line protocol format. ### Query data Use the HTTP [`/query`](#operation/GetV1ExecuteQuery) endpoint for InfluxDB v1 clients and v1 query workloads using InfluxQL. For new workloads, use one of the following: - HTTP [`/api/v3/query_sql` endpoint](#operation/GetExecuteQuerySQL) for new query workloads using SQL. - HTTP [`/api/v3/query_influxql` endpoint](#operation/GetExecuteInfluxQLQuery) for new query workloads using InfluxQL. - Flight SQL and InfluxDB 3 _Flight+gRPC_ APIs for querying with SQL or InfluxQL. For more information about using Flight APIs, see [InfluxDB 3 client libraries](https://github.com/InfluxCommunity?q=influxdb3&type=public&language=&sort=). ### Server information Server information endpoints such as `/health` and `/metrics` are compatible with InfluxDB 1.x and InfluxDB 2.x clients. x-related: - title: Use compatibility APIs to write data href: /influxdb3/enterprise/write-data/http-api/compatibility-apis/ - name: Database description: Manage databases - description: > Most InfluxDB API endpoints require parameters in the request--for example, specifying the database to use. ### Common parameters The following table shows common parameters used by many InfluxDB API endpoints. Many endpoints may require other parameters in the query string or in the request body that perform functions specific to those endpoints. | Query parameter | Value type | Description | |:------------------------ |:--------------------- |:-------------------------------------------| | `db` | string | The database name | InfluxDB HTTP API endpoints use standard HTTP request and response headers. The following table shows common headers used by many InfluxDB API endpoints.
Some endpoints may use other headers that perform functions more specific to those endpoints--for example, the write endpoints accept the `Content-Encoding` header to indicate that line protocol is compressed in the request body. | Header | Value type | Description | |:------------------------ |:--------------------- |:-------------------------------------------| | `Accept` | string | The content type that the client can understand. | | `Authorization` | string | The authorization scheme and credential. | | `Content-Length` | integer | The size of the entity-body, in bytes. | | `Content-Type` | string | The format of the data in the request body. | name: Headers and parameters x-traitTag: true - name: Processing engine description: > Manage Processing engine triggers, test plugins, and send requests to trigger On Request plugins. InfluxDB 3 Enterprise provides the InfluxDB 3 processing engine, an embedded Python VM that can dynamically load and trigger Python plugins in response to events in your database. Use Processing engine plugins and triggers to run code and perform tasks for different database events. To get started with the processing engine, see the [Processing engine and Python plugins](/influxdb3/enterprise/processing-engine/) guide. x-related: - title: Processing engine and Python plugins href: /influxdb3/enterprise/plugins/ - name: Query data description: Query data using SQL or InfluxQL x-related: - title: Use the InfluxDB v1 HTTP query API and InfluxQL to query data href: /influxdb3/enterprise/query-data/execute-queries/influxdb-v1-api/ - name: Quick start description: > 1. [Create an admin token](#section/Authentication) to authorize API requests. ```bash curl -X POST "http://localhost:8181/api/v3/configure/token/admin" ``` 2. [Check the status](#section/Server-information) of the InfluxDB server. ```bash curl "http://localhost:8181/health" \ --header "Authorization: Bearer ADMIN_TOKEN" ``` 3. [Write data](#operation/PostWriteLP) to InfluxDB. 
```bash curl "http://localhost:8181/api/v3/write_lp?db=sensors&precision=auto" \ --header "Authorization: Bearer ADMIN_TOKEN" \ --data-raw "home,room=Kitchen temp=72.0 home,room=Living\ room temp=71.5" ``` If all data is written, the response is `204 No Content`. 4. [Query data](#operation/GetExecuteQuerySQL) from InfluxDB. ```bash curl -G "http://localhost:8181/api/v3/query_sql" \ --header "Authorization: Bearer ADMIN_TOKEN" \ --data-urlencode "db=sensors" \ --data-urlencode "q=SELECT * FROM home WHERE room='Living room'" \ --data-urlencode "format=jsonl" ``` Output: ```jsonl {"room":"Living room","temp":71.5,"time":"2025-02-25T20:19:34.984098"} ``` For more information about using InfluxDB 3 Enterprise, see the [Get started](/influxdb3/enterprise/get-started/) guide. x-traitTag: true - name: Server information description: Retrieve server metrics, status, and version information - name: Table description: Manage table schemas and data - name: Token description: Manage tokens for authentication and authorization - name: Write data description: | Write data to InfluxDB 3 using line protocol format. #### Timestamp precision across write APIs InfluxDB 3 provides multiple write endpoints for compatibility with different InfluxDB versions. The following table compares timestamp precision support across v1, v2, and v3 write APIs: | Precision | v1 (`/write`) | v2 (`/api/v2/write`) | v3 (`/api/v3/write_lp`) | |-----------|---------------|----------------------|-------------------------| | **Auto detection** | ❌ No | ❌ No | ✅ `auto` (default) | | **Seconds** | ✅ `s` | ✅ `s` | ✅ `second` | | **Milliseconds** | ✅ `ms` | ✅ `ms` | ✅ `millisecond` | | **Microseconds** | ✅ `u` or `µ` | ✅ `us` | ✅ `microsecond` | | **Nanoseconds** | ✅ `ns` | ✅ `ns` | ✅ `nanosecond` | | **Default** | Nanosecond | Nanosecond | **Auto** (detected) | All timestamps are stored internally as nanoseconds.
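To illustrate the table above, the same instant can be written at any v3 precision; only the scale of the integer timestamp changes. A minimal Python sketch (the `detect_precision` heuristic is an illustration of magnitude-based auto detection, not the server's documented algorithm):

```python
# Scale factors from nanoseconds to each precision accepted by /api/v3/write_lp
NS_PER = {"second": 10**9, "millisecond": 10**6, "microsecond": 10**3, "nanosecond": 1}

def to_precision(ns: int, precision: str) -> int:
    """Scale a nanosecond timestamp down to the requested precision."""
    return ns // NS_PER[precision]

def detect_precision(ts: int) -> str:
    """Illustrative heuristic only: infer the unit from the timestamp's magnitude."""
    digits = len(str(abs(ts)))
    if digits <= 11:
        return "second"
    if digits <= 14:
        return "millisecond"
    if digits <= 17:
        return "microsecond"
    return "nanosecond"

ts_ns = 1_740_514_774_984_098_000  # 2025-02-25T20:19:34.984098Z (UTC), in nanoseconds

# The same point expressed at each precision
lines = {p: f"home,room=Kitchen temp=72.0 {to_precision(ts_ns, p)}" for p in NS_PER}
```

With `precision=auto`, the server infers the unit from the timestamp itself, which is why the same point can be sent at any of these scales.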
paths: /api/v1/health: get: operationId: GetHealthV1 summary: Health check (v1) description: | Checks the status of the service. Returns `OK` if the service is running. This endpoint does not return version information. Use the [`/ping`](#operation/GetPing) endpoint to retrieve version details. > **Note**: This endpoint requires authentication by default in InfluxDB 3 Enterprise. responses: "200": description: Service is running. Returns `OK`. content: text/plain: schema: type: string example: OK "401": description: Unauthorized. Authentication is required. "500": description: Service is unavailable. tags: - Server information - Compatibility endpoints /api/v2/write: post: operationId: PostV2Write responses: "204": description: Success ("No Content"). All data in the batch is written and queryable. headers: cluster-uuid: $ref: "#/components/headers/ClusterUUID" "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "403": description: Access denied. "413": description: Request entity too large. summary: Write line protocol (v2-compatible) description: > Writes line protocol to the specified database. This endpoint provides backward compatibility for InfluxDB 2.x write workloads using tools such as InfluxDB 2.x client libraries, the Telegraf `outputs.influxdb_v2` output plugin, or third-party tools. Use this endpoint to send data in [line protocol](/influxdb3/enterprise/reference/syntax/line-protocol/) format to InfluxDB. Use query parameters to specify options for writing data. #### Related - [Use compatibility APIs to write data](/influxdb3/enterprise/write-data/http-api/compatibility-apis/) parameters: - name: Content-Type in: header description: | The content type of the request payload. schema: $ref: "#/components/schemas/LineProtocol" required: false - description: | The compression applied to the line protocol in the request payload. To send a gzip payload, pass `Content-Encoding: gzip` header. 
in: header name: Content-Encoding schema: default: identity description: | Content coding. Use `gzip` for compressed data or `identity` for unmodified, uncompressed data. enum: - gzip - identity type: string - description: | The size of the entity-body, in bytes, sent to InfluxDB. in: header name: Content-Length schema: description: The length in decimal number of octets. type: integer - description: | The content type that the client can understand. Writes only return a response body if they fail (partially or completely)--for example, due to a syntax problem or type mismatch. in: header name: Accept schema: default: application/json description: Error content type. enum: - application/json type: string - name: bucket in: query required: true schema: type: string description: |- A database name. InfluxDB creates the database if it doesn't already exist, and then writes all points in the batch to the database. This parameter is named `bucket` for compatibility with InfluxDB v2 client libraries. - name: accept_partial in: query required: false schema: $ref: "#/components/schemas/AcceptPartial" - $ref: "#/components/parameters/compatibilityPrecisionParam" requestBody: $ref: "#/components/requestBodies/lineProtocolRequestBody" tags: - Compatibility endpoints - Write data /api/v3/configure/database: delete: operationId: DeleteConfigureDatabase parameters: - $ref: "#/components/parameters/db" - name: data_only in: query required: false schema: type: boolean default: false description: | Delete only data while preserving the database schema and all associated resources (tokens, triggers, last value caches, distinct value caches, processing engine configurations). When `false` (default), the entire database is deleted. - name: remove_tables in: query required: false schema: type: boolean default: false description: | Used with `data_only=true` to remove table resources (caches) while preserving database-level resources (tokens, triggers, processing engine configurations). 
Has no effect when `data_only=false`. - name: hard_delete_at in: query required: false schema: type: string format: date-time description: |- Schedule the database for hard deletion at the specified time. If not provided, the database will be soft deleted. Use ISO 8601 date-time format (for example, "2025-12-31T23:59:59Z"). Also accepts special string values: - `now` — hard delete immediately - `never` — soft delete only (default behavior) - `default` — use the system default hard deletion time responses: "200": description: Success. Database deleted. "401": $ref: "#/components/responses/Unauthorized" "404": description: Database not found. summary: Delete a database description: | Soft deletes a database. The database is scheduled for deletion and unavailable for querying. Use the `hard_delete_at` parameter to schedule a hard deletion. Use the `data_only` parameter to delete data while preserving the database schema and resources. #### Deleting a database cannot be undone Deleting a database is a destructive action. Once a database is deleted, data stored in that database cannot be recovered. tags: - Database get: operationId: GetConfigureDatabase responses: "200": description: Success. The response body contains the list of databases. content: application/json: schema: $ref: "#/components/schemas/ShowDatabasesResponse" "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Database not found. summary: List databases description: Retrieves a list of databases. parameters: - $ref: "#/components/parameters/formatRequired" - name: show_deleted in: query required: false schema: type: boolean default: false description: | Include soft-deleted databases in the response. By default, only active databases are returned. tags: - Database post: operationId: PostConfigureDatabase responses: "200": description: Success. Database created. "400": description: Bad request.
"401": $ref: "#/components/responses/Unauthorized" "409": description: Database already exists. summary: Create a database description: Creates a new database in the system. requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/CreateDatabaseRequest" tags: - Database put: operationId: update_database responses: "200": description: Success. The database has been updated. "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Database not found. summary: Update a database description: | Updates database configuration, such as retention period. requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/UpdateDatabaseRequest" tags: - Database /api/v3/configure/database/retention_period: delete: operationId: DeleteDatabaseRetentionPeriod summary: Remove database retention period description: | Removes the retention period from a database, setting it to infinite retention. parameters: - $ref: "#/components/parameters/db" responses: "204": description: Success. The database retention period has been removed. "401": $ref: "#/components/responses/Unauthorized" "404": description: Database not found. tags: - Database /api/v3/configure/distinct_cache: delete: operationId: DeleteConfigureDistinctCache responses: "200": description: Success. The distinct cache has been deleted. "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Cache not found. summary: Delete distinct cache description: Deletes a distinct cache. parameters: - $ref: "#/components/parameters/db" - name: table in: query required: true schema: type: string description: The name of the table containing the distinct cache. - name: name in: query required: true schema: type: string description: The name of the distinct cache to delete. tags: - Cache data - Table post: operationId: PostConfigureDistinctCache responses: "201": description: Success. 
The distinct cache has been created. "400": description: > Bad request. The server responds with status `400` if the request would overwrite an existing cache with a different configuration. "409": description: Conflict. A distinct cache with this configuration already exists. summary: Create distinct cache description: Creates a distinct cache for a table. requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/DistinctCacheCreateRequest" tags: - Cache data - Table /api/v3/configure/last_cache: delete: operationId: DeleteConfigureLastCache responses: "200": description: Success. The last cache has been deleted. "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Cache not found. summary: Delete last cache description: Deletes a last cache. parameters: - $ref: "#/components/parameters/db" - name: table in: query required: true schema: type: string description: The name of the table containing the last cache. - name: name in: query required: true schema: type: string description: The name of the last cache to delete. tags: - Cache data - Table post: operationId: PostConfigureLastCache responses: "201": description: Success. Last cache created. "400": description: Bad request. A cache with this name already exists or the request is malformed. "401": $ref: "#/components/responses/Unauthorized" "404": description: Cache not found. summary: Create last cache description: Creates a last cache for a table. requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/LastCacheCreateRequest" tags: - Cache data - Table /api/v3/configure/plugin_environment/install_packages: post: operationId: PostInstallPluginPackages summary: Install plugin packages description: |- Installs the specified Python packages into the processing engine plugin environment. This endpoint is synchronous and blocks until the packages are installed. 
parameters: - $ref: "#/components/parameters/ContentType" requestBody: required: true content: application/json: schema: type: object properties: packages: type: array items: type: string description: | A list of Python package names to install. Can include version specifiers (e.g., "scipy==1.9.0"). example: - influxdb3-python - scipy - pandas==1.5.0 - requests required: - packages example: packages: - influxdb3-python - scipy - pandas==1.5.0 - requests responses: "200": description: Success. The packages are installed. "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" tags: - Processing engine /api/v3/configure/plugin_environment/install_requirements: post: operationId: PostInstallPluginRequirements summary: Install plugin requirements description: > Installs requirements from a requirements file (also known as a "pip requirements file") into the processing engine plugin environment. This endpoint is synchronous and blocks until the requirements are installed. ### Related - [Processing engine and Python plugins](/influxdb3/enterprise/plugins/) - [Python requirements file format](https://pip.pypa.io/en/stable/reference/requirements-file-format/) parameters: - $ref: "#/components/parameters/ContentType" requestBody: required: true content: application/json: schema: type: object properties: requirements_location: type: string description: | The path to the requirements file containing Python packages to install. Can be a relative path (relative to the plugin directory) or an absolute path. example: requirements.txt required: - requirements_location example: requirements_location: requirements.txt responses: "200": description: Success. The requirements have been installed. "400": description: Bad request. 
"401": $ref: "#/components/responses/Unauthorized" tags: - Processing engine /api/v3/configure/processing_engine_trigger: post: operationId: PostConfigureProcessingEngineTrigger summary: Create processing engine trigger description: Creates a processing engine trigger with the specified plugin file and trigger specification. requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/ProcessingEngineTriggerRequest" examples: schedule_cron: summary: Schedule trigger using cron description: > In `"cron:CRON_EXPRESSION"`, `CRON_EXPRESSION` uses extended 6-field cron format. The cron expression `0 0 6 * * 1-5` means the trigger will run at 6:00 AM every weekday (Monday to Friday). value: db: DATABASE_NAME plugin_filename: schedule.py trigger_name: schedule_cron_trigger trigger_specification: cron:0 0 6 * * 1-5 disabled: false trigger_settings: run_async: false error_behavior: Log schedule_every: summary: Schedule trigger using interval description: | In `"every:DURATION"`, `DURATION` specifies the interval between trigger executions. The duration `1h` means the trigger will run every hour. value: db: mydb plugin_filename: schedule.py trigger_name: schedule_every_trigger trigger_specification: every:1h disabled: false trigger_settings: run_async: false error_behavior: Log schedule_every_seconds: summary: Schedule trigger using seconds interval description: | Example of scheduling a trigger to run every 30 seconds. value: db: mydb plugin_filename: schedule.py trigger_name: schedule_every_30s_trigger trigger_specification: every:30s disabled: false trigger_settings: run_async: false error_behavior: Log schedule_every_minutes: summary: Schedule trigger using minutes interval description: | Example of scheduling a trigger to run every 5 minutes. 
value: db: mydb plugin_filename: schedule.py trigger_name: schedule_every_5m_trigger trigger_specification: every:5m disabled: false trigger_settings: run_async: false error_behavior: Log all_tables: summary: All tables trigger example description: | Trigger that fires on write events to any table in the database. value: db: mydb plugin_filename: all_tables.py trigger_name: all_tables_trigger trigger_specification: all_tables disabled: false trigger_settings: run_async: false error_behavior: Log table_specific: summary: Table-specific trigger example description: | Trigger that fires on write events to a specific table. value: db: mydb plugin_filename: table.py trigger_name: table_trigger trigger_specification: table:sensors disabled: false trigger_settings: run_async: false error_behavior: Log api_request: summary: On-demand request trigger example description: | Creates an HTTP endpoint `/api/v3/engine/hello-world` for manual invocation. value: db: mydb plugin_filename: request.py trigger_name: hello_world_trigger trigger_specification: request:hello-world disabled: false trigger_settings: run_async: false error_behavior: Log cron_friday_afternoon: summary: Cron trigger for Friday afternoons description: | Example of a cron trigger that runs every Friday at 2:30 PM. value: db: reports plugin_filename: weekly_report.py trigger_name: friday_report_trigger trigger_specification: cron:0 30 14 * * 5 disabled: false trigger_settings: run_async: false error_behavior: Log cron_monthly: summary: Cron trigger for monthly execution description: | Example of a cron trigger that runs on the first day of every month at midnight. value: db: monthly_data plugin_filename: monthly_cleanup.py trigger_name: monthly_cleanup_trigger trigger_specification: cron:0 0 0 1 * * disabled: false trigger_settings: run_async: false error_behavior: Log responses: "200": description: Success. Processing engine trigger created. "400": description: Bad request. 
"401": $ref: "#/components/responses/Unauthorized" "404": description: Trigger not found. tags: - Processing engine delete: operationId: DeleteConfigureProcessingEngineTrigger summary: Delete processing engine trigger description: Deletes a processing engine trigger. parameters: - $ref: "#/components/parameters/db" - name: trigger_name in: query required: true schema: type: string - name: force in: query required: false schema: type: boolean default: false description: | Force deletion of the trigger even if it has active executions. By default, deletion fails if the trigger is currently executing. responses: "200": description: Success. The processing engine trigger has been deleted. "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Trigger not found. tags: - Processing engine /api/v3/configure/processing_engine_trigger/disable: post: operationId: PostDisableProcessingEngineTrigger summary: Disable processing engine trigger description: Disables a processing engine trigger. parameters: - name: db in: query required: true schema: type: string description: The database name. - name: trigger_name in: query required: true schema: type: string description: The name of the trigger. responses: "200": description: Success. The processing engine trigger has been disabled. "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Trigger not found. tags: - Processing engine /api/v3/configure/processing_engine_trigger/enable: post: operationId: PostEnableProcessingEngineTrigger summary: Enable processing engine trigger description: Enables a processing engine trigger. parameters: - name: db in: query required: true schema: type: string description: The database name. - name: trigger_name in: query required: true schema: type: string description: The name of the trigger. responses: "200": description: Success. The processing engine trigger is enabled. 
"400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Trigger not found. tags: - Processing engine /api/v3/configure/table: delete: operationId: DeleteConfigureTable parameters: - $ref: "#/components/parameters/db" - name: table in: query required: true schema: type: string - name: data_only in: query required: false schema: type: boolean default: false description: | Delete only data while preserving the table schema and all associated resources (last value caches, distinct value caches). When `false` (default), the entire table is deleted. - name: hard_delete_at in: query required: false schema: type: string format: date-time description: |- Schedule the table for hard deletion at the specified time. If not provided, the table will be soft deleted. Use ISO 8601 format (for example, "2025-12-31T23:59:59Z"). Also accepts special string values: - `now` — hard delete immediately - `never` — soft delete only (default behavior) - `default` — use the system default hard deletion time responses: "200": description: Success (no content). The table has been deleted. "401": $ref: "#/components/responses/Unauthorized" "404": description: Table not found. summary: Delete a table description: | Soft deletes a table. The table is scheduled for deletion and unavailable for querying. Use the `hard_delete_at` parameter to schedule a hard deletion. Use the `data_only` parameter to delete data while preserving the table schema and resources. #### Deleting a table cannot be undone Deleting a table is a destructive action. Once a table is deleted, data stored in that table cannot be recovered. tags: - Table post: operationId: PostConfigureTable responses: "200": description: Success. The table has been created. "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Database not found. summary: Create a table description: Creates a new table within a database. 
requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/CreateTableRequest" tags: - Table put: operationId: PatchConfigureTable responses: "200": description: Success. The table has been updated. "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Table not found. summary: Update a table description: | Updates table configuration, such as retention period. requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/UpdateTableRequest" tags: - Table x-enterprise-only: true /api/v3/configure/token: delete: operationId: DeleteToken parameters: - name: token_name in: query required: true schema: type: string description: The name of the token to delete. responses: "200": description: Success. The token has been deleted. "401": $ref: "#/components/responses/Unauthorized" "404": description: Token not found. summary: Delete token description: | Deletes a token. tags: - Authentication - Token /api/v3/configure/token/admin: post: operationId: PostCreateAdminToken responses: "201": description: | Success. The admin token has been created. The response body contains the token string and metadata. content: application/json: schema: $ref: "#/components/schemas/AdminTokenObject" "401": $ref: "#/components/responses/Unauthorized" summary: Create admin token description: | Creates an admin token. An admin token is a special type of token that has full access to all resources in the system. tags: - Authentication - Token /api/v3/configure/token/admin/regenerate: post: operationId: PostRegenerateAdminToken summary: Regenerate admin token description: | Regenerates an admin token and revokes the previous token with the same name. parameters: [] responses: "201": description: Success. The admin token has been regenerated. 
content: application/json: schema: $ref: "#/components/schemas/AdminTokenObject" "401": $ref: "#/components/responses/Unauthorized" tags: - Authentication - Token /api/v3/configure/token/named_admin: post: operationId: PostCreateNamedAdminToken responses: "201": description: | Success. The named admin token has been created. The response body contains the token string and metadata. content: application/json: schema: $ref: "#/components/schemas/AdminTokenObject" "401": $ref: "#/components/responses/Unauthorized" "409": description: A token with this name already exists. summary: Create named admin token description: | Creates a named admin token. A named admin token is a special type of admin token with a custom name for identification and management. tags: - Authentication - Token requestBody: required: true content: application/json: schema: type: object properties: token_name: type: string description: The name for the admin token. expiry_secs: type: integer description: Optional expiration time in seconds. If not provided, the token does not expire. nullable: true required: - token_name /api/v3/engine/{request_path}: get: operationId: GetProcessingEnginePluginRequest responses: "200": description: Success. The plugin request has been executed. "400": description: Malformed request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Plugin not found. "500": description: Processing failure. summary: On Request processing engine plugin request description: > Executes the On Request processing engine plugin specified in the trigger's `plugin_filename`. The request can include request headers, query string parameters, and a request body, which InfluxDB passes to the plugin. An On Request plugin implements the following signature: ```python def process_request(influxdb3_local, query_parameters, request_headers, request_body, args=None) ``` The response depends on the plugin implementation. 
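For example, a minimal plugin matching the signature above might echo its inputs back. This sketch is illustrative: the shape of the returned value and how it is serialized into the HTTP response depend on the Processing engine plugin API, not on anything defined here:

```python
import json

def process_request(influxdb3_local, query_parameters, request_headers,
                    request_body, args=None):
    """Echo the query parameters and decoded JSON body back to the caller."""
    body = json.loads(request_body) if request_body else None
    # The return value becomes the basis of the HTTP response for
    # /api/v3/engine/<request_path>
    return {"params": dict(query_parameters), "body": body}
```

Invoking the trigger's endpoint with `?room=Kitchen` would pass `{"room": "Kitchen"}` as `query_parameters`.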
tags: - Processing engine post: operationId: PostProcessingEnginePluginRequest responses: "200": description: Success. The plugin request has been executed. "400": description: Malformed request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Plugin not found. "500": description: Processing failure. summary: On Request processing engine plugin request description: > Executes the On Request processing engine plugin specified in the trigger's `plugin_filename`. The request can include request headers, query string parameters, and a request body, which InfluxDB passes to the plugin. An On Request plugin implements the following signature: ```python def process_request(influxdb3_local, query_parameters, request_headers, request_body, args=None) ``` The response depends on the plugin implementation. parameters: - $ref: "#/components/parameters/ContentType" requestBody: required: false content: application/json: schema: type: object additionalProperties: true tags: - Processing engine parameters: - name: request_path description: | The path configured in the request trigger specification for the plugin. For example, if you define a trigger with the following: ```json trigger_specification: "request:hello-world" ``` then, the HTTP API exposes the following plugin endpoint: ``` /api/v3/engine/hello-world ``` in: path required: true schema: type: string /api/v3/enterprise/configure/file_index: post: operationId: configure_file_index_create summary: Create a file index description: >- Creates a file index for a database or table. A file index improves query performance by indexing data files based on specified columns, enabling the query engine to skip irrelevant files during query execution. This endpoint is only available in InfluxDB 3 Enterprise. x-enterprise-only: true requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/FileIndexCreateRequest" responses: "200": description: Success. 
The file index has been created. "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Database or table not found. tags: - Database - Table delete: operationId: configure_file_index_delete summary: Delete a file index description: |- Deletes a file index from a database or table. This endpoint is only available in InfluxDB 3 Enterprise. x-enterprise-only: true requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/FileIndexDeleteRequest" responses: "200": description: Success. The file index has been deleted. "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Database, table, or file index not found. tags: - Database - Table /api/v3/enterprise/configure/node/stop: post: operationId: stop_node summary: Mark a node as stopped description: >- Marks a node as stopped in the catalog, freeing up the licensed cores it was using for other nodes. Use this endpoint after you have already stopped the physical instance (for example, using `kill` or stopping the container). This endpoint does not shut down the running process — you must stop the instance first. When the node is marked as stopped: 1. Licensed cores from the stopped node are freed for reuse 2. Other nodes in the cluster see the update after their catalog sync interval This endpoint is only available in InfluxDB 3 Enterprise. #### Related - [influxdb3 stop node](/influxdb3/enterprise/reference/cli/influxdb3/stop/node/) x-enterprise-only: true requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/StopNodeRequest" responses: "200": description: Success. The node has been marked as stopped. "401": $ref: "#/components/responses/Unauthorized" "404": description: Node not found. 
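A stop-node request might be assembled as in the following sketch. The `StopNodeRequest` schema is referenced but not expanded here, so the `node_id` field below is a hypothetical assumption; verify the field names against the schema before relying on them.

```python
# Sketch of a stop-node request payload and URL. The "node_id" field is a
# hypothetical assumption (the StopNodeRequest schema is not expanded here).
import json


def build_stop_node_request(base_url, node_id):
    """Return the URL and JSON body for marking a node as stopped."""
    url = f"{base_url}/api/v3/enterprise/configure/node/stop"
    body = json.dumps({"node_id": node_id})
    return url, body
```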
tags: - Server information /api/v3/enterprise/configure/table/retention_period: post: operationId: create_or_update_retention_period_for_table summary: Set table retention period description: >- Sets or updates the retention period for a specific table. Use this endpoint to control how long data in a table is retained independently of the database-level retention period. This endpoint is only available in InfluxDB 3 Enterprise. #### Related - [influxdb3 update table](/influxdb3/enterprise/reference/cli/influxdb3/update/table/) x-enterprise-only: true parameters: - name: db in: query required: true schema: type: string description: The database name. - name: table in: query required: true schema: type: string description: The table name. - name: duration in: query required: true schema: type: string description: The retention period as a human-readable duration (for example, "30d", "24h", "1y"). responses: "204": description: Success. The table retention period has been set. "400": description: Bad request. Invalid duration format. "401": $ref: "#/components/responses/Unauthorized" "404": description: Database or table not found. tags: - Table delete: operationId: delete_retention_period_for_table summary: Clear table retention period description: >- Removes the retention period from a specific table, reverting to the database-level retention period (or infinite retention if no database-level retention is set). This endpoint is only available in InfluxDB 3 Enterprise. #### Related - [influxdb3 update table](/influxdb3/enterprise/reference/cli/influxdb3/update/table/) x-enterprise-only: true parameters: - name: db in: query required: true schema: type: string description: The database name. - name: table in: query required: true schema: type: string description: The table name. responses: "204": description: Success. The table retention period has been cleared. "401": $ref: "#/components/responses/Unauthorized" "404": description: Database or table not found. 
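The set-retention call above takes all of its arguments in the query string. As a sketch, assuming a local server on the default port, the request URL can be built like this:

```python
# Build the URL for setting a table retention period (sketch; assumes a
# local InfluxDB 3 Enterprise server on the default port). db, table, and
# duration are the documented query parameters.
from urllib.parse import urlencode


def retention_url(base_url, db, table, duration):
    """Return the POST URL that sets `duration` as the table's retention."""
    query = urlencode({"db": db, "table": table, "duration": duration})
    return f"{base_url}/api/v3/enterprise/configure/table/retention_period?{query}"
```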
tags: - Table /api/v3/enterprise/configure/token: post: operationId: PostCreateResourceToken summary: Create a resource token description: | Creates a resource (fine-grained permissions) token. A resource token is a token that has access to specific resources in the system. This endpoint is only available in InfluxDB 3 Enterprise. responses: "201": description: | Success. The resource token has been created. The response body contains the token string and metadata. content: application/json: schema: $ref: "#/components/schemas/ResourceTokenObject" "401": $ref: "#/components/responses/Unauthorized" tags: - Authentication - Token x-enterprise-only: true requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/CreateTokenWithPermissionsRequest" /api/v3/plugin_test/schedule: post: operationId: PostTestSchedulingPlugin responses: "200": description: Success. The plugin test has been executed. "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Plugin not enabled. summary: Test scheduling plugin description: Executes a test of a scheduling plugin. requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/SchedulePluginTestRequest" tags: - Processing engine /api/v3/plugin_test/wal: post: operationId: PostTestWALPlugin responses: "200": description: Success. The plugin test has been executed. "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "404": description: Plugin not enabled. summary: Test WAL plugin description: Executes a test of a write-ahead logging (WAL) plugin. requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/WALPluginTestRequest" tags: - Processing engine /api/v3/plugins/directory: put: operationId: PutPluginDirectory requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/PluginDirectoryRequest" responses: "200": description: Success. 
The plugin directory has been updated. "401": $ref: "#/components/responses/Unauthorized" "403": description: Forbidden. Admin token required. "500": description: Plugin not found. The `plugin_name` does not match any registered trigger. summary: Update a multi-file plugin directory description: | Replaces all files in a multi-file plugin directory. The `plugin_name` must match a registered trigger name. Each entry in the `files` array specifies a `relative_path` and `content`—the server writes them into the trigger's plugin directory. Use this endpoint to update multi-file plugins (directories with `__init__.py` and supporting modules). For single-file plugins, use `PUT /api/v3/plugins/files` instead. tags: - Processing engine x-security-note: Requires an admin token /api/v3/plugins/files: post: operationId: create_plugin_file summary: Create a plugin file description: | Creates a single plugin file in the plugin directory. Writes the `content` to a file named after `plugin_name`. Does not require an existing trigger—use this to upload plugin files before creating triggers that reference them. requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/PluginFileRequest" responses: "200": description: Success. The plugin file has been created. "401": $ref: "#/components/responses/Unauthorized" "403": description: Forbidden. Admin token required. tags: - Processing engine x-security-note: Requires an admin token put: operationId: PutPluginFile summary: Update a plugin file description: | Updates a single plugin file for an existing trigger. The `plugin_name` must match a registered trigger name—the server resolves the trigger's `plugin_filename` and overwrites that file with the provided `content`. To upload a new plugin file before creating a trigger, use `POST /api/v3/plugins/files` instead. To update a multi-file plugin directory, use `PUT /api/v3/plugins/directory`. 
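A minimal sketch of the single-file update body, assuming the `PluginFileRequest` schema carries the `plugin_name` and `content` fields mentioned in the descriptions above:

```python
# Sketch of a PUT /api/v3/plugins/files body. Field names follow the
# endpoint description (plugin_name must match a registered trigger;
# content replaces the trigger's plugin file). Verify against the
# PluginFileRequest schema.
import json


def plugin_file_body(plugin_name, source_code):
    """Return the JSON body for updating a single plugin file."""
    return json.dumps({"plugin_name": plugin_name, "content": source_code})
```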
requestBody: required: true content: application/json: schema: $ref: "#/components/schemas/PluginFileRequest" responses: "200": description: Success. The plugin file has been updated. "401": $ref: "#/components/responses/Unauthorized" "403": description: Forbidden. Admin token required. "500": description: Plugin not found. The `plugin_name` does not match any registered trigger. tags: - Processing engine x-security-note: Requires an admin token /api/v3/query_influxql: get: operationId: GetExecuteInfluxQLQuery responses: "200": description: Success. The response body contains query results. content: application/json: schema: $ref: "#/components/schemas/QueryResponse" text/csv: schema: type: string application/vnd.apache.parquet: schema: type: string application/jsonl: schema: type: string "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "403": description: Access denied. "404": description: Database not found. "405": description: Method not allowed. "422": description: Unprocessable entity. summary: Execute InfluxQL query description: Executes an InfluxQL query to retrieve data from the specified database. parameters: - $ref: "#/components/parameters/dbQueryParam" - name: q in: query required: true schema: type: string - name: format in: query required: false schema: type: string - $ref: "#/components/parameters/AcceptQueryHeader" - name: params in: query required: false schema: type: string description: JSON-encoded query parameters. Use this to pass bind parameters to parameterized queries. tags: - Query data post: operationId: PostExecuteQueryInfluxQL responses: "200": description: Success. The response body contains query results. 
content: application/json: schema: $ref: "#/components/schemas/QueryResponse" text/csv: schema: type: string application/vnd.apache.parquet: schema: type: string application/jsonl: schema: type: string "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "403": description: Access denied. "404": description: Database not found. "405": description: Method not allowed. "422": description: Unprocessable entity. summary: Execute InfluxQL query description: Executes an InfluxQL query to retrieve data from the specified database. parameters: - $ref: "#/components/parameters/AcceptQueryHeader" - $ref: "#/components/parameters/ContentType" requestBody: $ref: "#/components/requestBodies/queryRequestBody" tags: - Query data /api/v3/query_sql: get: operationId: GetExecuteQuerySQL responses: "200": description: Success. The response body contains query results. content: application/json: schema: $ref: "#/components/schemas/QueryResponse" example: results: - series: - name: mytable columns: - time - value values: - - "2024-02-02T12:00:00Z" - 42 text/csv: schema: type: string application/vnd.apache.parquet: schema: type: string application/jsonl: schema: type: string "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "403": description: Access denied. "404": description: Database not found. "405": description: Method not allowed. "422": description: Unprocessable entity. summary: Execute SQL query description: Executes an SQL query to retrieve data from the specified database. parameters: - $ref: "#/components/parameters/db" - $ref: "#/components/parameters/querySqlParam" - $ref: "#/components/parameters/format" - $ref: "#/components/parameters/AcceptQueryHeader" - $ref: "#/components/parameters/ContentType" - name: params in: query required: false schema: type: string description: JSON-encoded query parameters. Use this to pass bind parameters to parameterized queries. 
tags: - Query data post: operationId: PostExecuteQuerySQL responses: "200": description: Success. The response body contains query results. content: application/json: schema: $ref: "#/components/schemas/QueryResponse" text/csv: schema: type: string application/vnd.apache.parquet: schema: type: string application/jsonl: schema: type: string "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "403": description: Access denied. "404": description: Database not found. "405": description: Method not allowed. "422": description: Unprocessable entity. summary: Execute SQL query description: Executes an SQL query to retrieve data from the specified database. parameters: - $ref: "#/components/parameters/AcceptQueryHeader" - $ref: "#/components/parameters/ContentType" requestBody: $ref: "#/components/requestBodies/queryRequestBody" tags: - Query data /api/v3/write_lp: post: operationId: PostWriteLP parameters: - $ref: "#/components/parameters/dbWriteParam" - $ref: "#/components/parameters/accept_partial" - $ref: "#/components/parameters/precisionParam" - name: no_sync in: query schema: $ref: "#/components/schemas/NoSync" - name: Content-Type in: header description: | The content type of the request payload. schema: $ref: "#/components/schemas/LineProtocol" required: false - name: Accept in: header description: | The content type that the client can understand. Writes only return a response body if they fail (partially or completely)--for example, due to a syntax problem or type mismatch. schema: type: string default: application/json enum: - application/json required: false - $ref: "#/components/parameters/ContentEncoding" - $ref: "#/components/parameters/ContentLength" responses: "204": description: Success ("No Content"). All data in the batch is written and queryable. headers: cluster-uuid: $ref: "#/components/headers/ClusterUUID" "400": description: Bad request. 
"401": $ref: "#/components/responses/Unauthorized" "403": description: Access denied. "413": description: Request entity too large. "422": description: Unprocessable entity. summary: Write line protocol description: > Writes line protocol to the specified database. This is the native InfluxDB 3 Enterprise write endpoint that provides enhanced control over write behavior with advanced parameters for high-performance and fault-tolerant operations. Use this endpoint to send data in [line protocol](/influxdb3/enterprise/reference/syntax/line-protocol/) format to InfluxDB. Use query parameters to specify options for writing data. #### Features - **Partial writes**: Use `accept_partial=true` to allow partial success when some lines in a batch fail - **Asynchronous writes**: Use `no_sync=true` to skip waiting for WAL synchronization, allowing faster response times but sacrificing durability guarantees - **Flexible precision**: Automatic timestamp precision detection with `precision=auto` (default) #### Auto precision detection When you use `precision=auto` or omit the precision parameter, InfluxDB 3 automatically detects the timestamp precision based on the magnitude of the timestamp value: - Timestamps < 5e9 → Second precision (multiplied by 1,000,000,000 to convert to nanoseconds) - Timestamps < 5e12 → Millisecond precision (multiplied by 1,000,000) - Timestamps < 5e15 → Microsecond precision (multiplied by 1,000) - Larger timestamps → Nanosecond precision (no conversion needed) #### Related - [Use the InfluxDB v3 write_lp API to write data](/influxdb3/enterprise/write-data/http-api/v3-write-lp/) requestBody: $ref: "#/components/requestBodies/lineProtocolRequestBody" tags: - Write data x-codeSamples: - label: cURL - Basic write lang: Shell source: | curl --request POST "http://localhost:8181/api/v3/write_lp?db=sensors" \ --header "Authorization: Bearer DATABASE_TOKEN" \ --header "Content-Type: text/plain" \ --data-raw "cpu,host=server01 usage=85.2 1638360000000000000" - 
label: cURL - Write with millisecond precision lang: Shell source: | curl --request POST "http://localhost:8181/api/v3/write_lp?db=sensors&precision=ms" \ --header "Authorization: Bearer DATABASE_TOKEN" \ --header "Content-Type: text/plain" \ --data-raw "cpu,host=server01 usage=85.2 1638360000000" - label: cURL - Asynchronous write with partial acceptance lang: Shell source: > curl --request POST "http://localhost:8181/api/v3/write_lp?db=sensors&accept_partial=true&no_sync=true&precision=auto" \ --header "Authorization: Bearer DATABASE_TOKEN" \ --header "Content-Type: text/plain" \ --data-raw "cpu,host=server01 usage=85.2 memory,host=server01 used=4096" - label: cURL - Multiple measurements with tags lang: Shell source: | curl --request POST "http://localhost:8181/api/v3/write_lp?db=sensors&precision=ns" \ --header "Authorization: Bearer DATABASE_TOKEN" \ --header "Content-Type: text/plain" \ --data-raw "cpu,host=server01,region=us-west usage=85.2,load=0.75 1638360000000000000 memory,host=server01,region=us-west used=4096,free=12288 1638360000000000000 disk,host=server01,region=us-west,device=/dev/sda1 used=50.5,free=49.5 1638360000000000000" /health: get: operationId: GetHealth responses: "200": description: Service is running. Returns `OK`. content: text/plain: schema: type: string example: OK "401": description: Unauthorized. Authentication is required. "500": description: Service is unavailable. summary: Health check description: | Checks the status of the service. Returns `OK` if the service is running. This endpoint does not return version information. Use the [`/ping`](#operation/GetPing) endpoint to retrieve version details. > **Note**: This endpoint requires authentication by default in InfluxDB 3 Enterprise. tags: - Server information /metrics: get: operationId: GetMetrics responses: "200": description: Success summary: Metrics description: Retrieves Prometheus-compatible server metrics. 
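The `precision=auto` detection rules documented for `/api/v3/write_lp` above can be sketched as a small function. This is an illustration of the documented thresholds, not the server's actual implementation:

```python
# Sketch of the precision=auto rules documented for /api/v3/write_lp:
# timestamps are scaled to nanoseconds based on their magnitude.
def to_nanoseconds(timestamp):
    """Convert a line protocol timestamp to nanoseconds per the auto rules."""
    if timestamp < 5e9:        # second precision
        return timestamp * 1_000_000_000
    if timestamp < 5e12:       # millisecond precision
        return timestamp * 1_000_000
    if timestamp < 5e15:       # microsecond precision
        return timestamp * 1_000
    return timestamp           # already nanosecond precision
```

The same instant expressed in seconds, milliseconds, microseconds, or nanoseconds normalizes to the same nanosecond value.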
tags: - Server information /ping: get: operationId: GetPing responses: "200": description: Success. The response body contains server information. headers: x-influxdb-version: description: The InfluxDB version number (for example, `3.8.0`). schema: type: string example: 3.8.0 x-influxdb-build: description: The InfluxDB build type (`Core` or `Enterprise`). schema: type: string example: Enterprise content: application/json: schema: type: object properties: version: type: string description: The InfluxDB version number. example: 3.8.0 revision: type: string description: The git revision hash for the build. example: 83b589b883 process_id: type: string description: A unique identifier for the server process. example: b756d9e0-cecd-4f72-b6d0-19e2d4f8cbb7 "401": description: Unauthorized. Authentication is required. "404": description: | Not Found. Returned for HEAD requests. Use a GET request to retrieve version information. x-client-method: ping summary: Ping the server description: | Returns version information for the server. **Important**: Use a GET request. HEAD requests return `404 Not Found`. The response includes version information in both headers and the JSON body: - **Headers**: `x-influxdb-version` and `x-influxdb-build` - **Body**: JSON object with `version`, `revision`, and `process_id` > **Note**: This endpoint requires authentication by default in InfluxDB 3 Enterprise. tags: - Server information post: operationId: ping responses: "200": description: Success. The response body contains server information. headers: x-influxdb-version: description: The InfluxDB version number (for example, `3.8.0`). schema: type: string example: 3.8.0 x-influxdb-build: description: The InfluxDB build type (`Core` or `Enterprise`). schema: type: string example: Enterprise content: application/json: schema: type: object properties: version: type: string description: The InfluxDB version number. 
example: 3.8.0 revision: type: string description: The git revision hash for the build. example: 83b589b883 process_id: type: string description: A unique identifier for the server process. example: b756d9e0-cecd-4f72-b6d0-19e2d4f8cbb7 "401": description: Unauthorized. Authentication is required. "404": description: | Not Found. Returned for HEAD requests. Use a GET request to retrieve version information. summary: Ping the server description: Returns version information for the server. Accepts POST in addition to GET. tags: - Server information /query: get: operationId: GetV1ExecuteQuery responses: "200": description: | Success. The response body contains query results. content: application/json: schema: $ref: "#/components/schemas/QueryResponse" application/csv: schema: type: string headers: Content-Type: description: > The content type of the response. Default is `application/json`. If the `Accept` request header is `application/csv` or `text/csv`, the `Content-type` response header is `application/csv` and the response is formatted as CSV. schema: type: string default: application/json enum: - application/json - application/csv "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "403": description: Access denied. "404": description: Database not found. "405": description: Method not allowed. "422": description: Unprocessable entity. summary: Execute InfluxQL query (v1-compatible) description: > Executes an InfluxQL query to retrieve data from the specified database. This endpoint is compatible with InfluxDB 1.x client libraries and third-party integrations such as Grafana. Use query parameters to specify the database and the InfluxQL query. 
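Assuming a local server, a v1-compatible `/query` request URL can be assembled from the documented `db`, `q`, `epoch`, and `chunked` parameters. This sketch only builds the URL and does not send the request:

```python
# Sketch of a v1-compatible GET /query URL using the documented
# parameters. Assumes a local InfluxDB 3 Enterprise server.
from urllib.parse import urlencode


def v1_query_url(base_url, db, query, epoch=None, chunked=False):
    """Return the GET URL for a v1-compatible InfluxQL query."""
    params = {"db": db, "q": query}
    if epoch:
        params["epoch"] = epoch      # for example, "ms" or "s"
    if chunked:
        params["chunked"] = "true"
    return f"{base_url}/query?{urlencode(params)}"
```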
#### Related - [Use the InfluxDB v1 HTTP query API and InfluxQL to query data](/influxdb3/enterprise/query-data/execute-queries/influxdb-v1-api/) parameters: - name: Accept in: header schema: type: string default: application/json enum: - application/json - application/csv - text/csv required: false description: > The content type that the client can understand. If `text/csv` is specified, the `Content-type` response header is `application/csv` and the response is formatted as CSV. Returns an error if the format is invalid or non-UTF8. - in: query name: chunked description: | If true, the response is divided into chunks of size `chunk_size`. schema: type: boolean default: false - in: query name: chunk_size description: | The number of records that will go into a chunk. This parameter is only used if `chunked=true`. schema: type: integer default: 10000 - in: query name: db description: The database to query. If not provided, the InfluxQL query string must specify the database. schema: type: string format: InfluxQL - in: query name: pretty description: | If true, the JSON response is formatted in a human-readable format. schema: type: boolean default: false - in: query name: q description: The InfluxQL query string. required: true schema: type: string - name: epoch description: > Formats timestamps as [unix (epoch) timestamps](/influxdb3/enterprise/reference/glossary/#unix-timestamp) with the specified precision instead of [RFC3339 timestamps](/influxdb3/enterprise/reference/glossary/#rfc3339-timestamp) with nanosecond precision. in: query schema: $ref: "#/components/schemas/EpochCompatibility" - $ref: "#/components/parameters/v1UsernameParam" - $ref: "#/components/parameters/v1PasswordParam" - name: rp in: query required: false schema: type: string description: | Retention policy name. Honored but discouraged. InfluxDB 3 doesn't use retention policies. 
- name: Authorization in: header required: false schema: type: string description: | Authorization header for token-based authentication. Supported schemes: - `Bearer AUTH_TOKEN` - OAuth bearer token scheme - `Token AUTH_TOKEN` - InfluxDB v2 token scheme - `Basic ` - Basic authentication (username is ignored) tags: - Query data - Compatibility endpoints post: operationId: PostExecuteV1Query responses: "200": description: | Success. The response body contains query results. content: application/json: schema: $ref: "#/components/schemas/QueryResponse" application/csv: schema: type: string headers: Content-Type: description: > The content type of the response. Default is `application/json`. If the `Accept` request header is `application/csv` or `text/csv`, the `Content-type` response header is `application/csv` and the response is formatted as CSV. schema: type: string default: application/json enum: - application/json - application/csv "400": description: Bad request. "401": $ref: "#/components/responses/Unauthorized" "403": description: Access denied. "404": description: Database not found. "405": description: Method not allowed. "422": description: Unprocessable entity. summary: Execute InfluxQL query (v1-compatible) description: > Executes an InfluxQL query to retrieve data from the specified database. #### Related - [Use the InfluxDB v1 HTTP query API and InfluxQL to query data](/influxdb3/enterprise/query-data/execute-queries/influxdb-v1-api/) parameters: - name: Accept in: header schema: type: string default: application/json enum: - application/json - application/csv - text/csv required: false description: > The content type that the client can understand. If `text/csv` is specified, the `Content-type` response header is `application/csv` and the response is formatted as CSV. Returns an error if the format is invalid or non-UTF8. requestBody: content: application/json: schema: type: object properties: db: type: string description: The database to query. 
If not provided, the InfluxQL query string must specify the database. q: description: The InfluxQL query string. type: string chunked: description: | If true, the response is divided into chunks of size `chunk_size`. type: boolean chunk_size: description: | The number of records that will go into a chunk. This parameter is only used if `chunked=true`. type: integer default: 10000 epoch: description: > A unix timestamp precision. - `h` for hours - `m` for minutes - `s` for seconds - `ms` for milliseconds - `u` or `µ` for microseconds - `ns` for nanoseconds Formats timestamps as [unix (epoch) timestamps](/influxdb3/enterprise/reference/glossary/#unix-timestamp) with the specified precision instead of [RFC3339 timestamps](/influxdb3/enterprise/reference/glossary/#rfc3339-timestamp) with nanosecond precision. enum: - ns - u - µ - ms - s - m - h type: string pretty: description: | If true, the JSON response is formatted in a human-readable format. type: boolean required: - q application/x-www-form-urlencoded: schema: type: object properties: db: type: string description: The database to query. If not provided, the InfluxQL query string must specify the database. q: description: The InfluxQL query string. type: string chunked: description: | If true, the response is divided into chunks of size `chunk_size`. type: boolean chunk_size: description: | The number of records that will go into a chunk. This parameter is only used if `chunked=true`. type: integer default: 10000 epoch: description: > A unix timestamp precision. - `h` for hours - `m` for minutes - `s` for seconds - `ms` for milliseconds - `u` or `µ` for microseconds - `ns` for nanoseconds Formats timestamps as [unix (epoch) timestamps](/influxdb3/enterprise/reference/glossary/#unix-timestamp) with the specified precision instead of [RFC3339 timestamps](/influxdb3/enterprise/reference/glossary/#rfc3339-timestamp) with nanosecond precision. 
enum: - ns - u - µ - ms - s - m - h type: string pretty: description: | If true, the JSON response is formatted in a human-readable format. type: boolean required: - q application/vnd.influxql: schema: type: string description: InfluxQL query string sent as the request body. tags: - Query data - Compatibility endpoints /write: post: operationId: PostV1Write responses: "204": description: Success ("No Content"). All data in the batch is written and queryable. headers: cluster-uuid: $ref: "#/components/headers/ClusterUUID" "400": description: | Bad request. Some (a _partial write_) or all of the data from the batch was rejected and not written. If a partial write occurred, then some points from the batch are written and queryable. The response body: - indicates if a partial write occurred or all data was rejected. - contains details about the [rejected points](/influxdb3/enterprise/write-data/troubleshoot/#troubleshoot-rejected-points), up to 100 points. content: application/json: examples: rejectedAllPoints: summary: Rejected all points in the batch value: | { "error": "write of line protocol failed", "data": [ { "original_line": "home,room=Kitchen temp=hi", "line_number": 2, "error_message": "No fields were provided" } ] } partialWriteErrorWithRejectedPoints: summary: Partial write rejected some points in the batch value: | { "error": "partial write of line protocol occurred", "data": [ { "original_line": "home,room=Kitchen temp=hi", "line_number": 2, "error_message": "No fields were provided" } ] } "401": $ref: "#/components/responses/Unauthorized" "403": description: Access denied. "413": description: Request entity too large. summary: Write line protocol (v1-compatible) description: > Writes line protocol to the specified database. This endpoint provides backward compatibility for InfluxDB 1.x write workloads using tools such as InfluxDB 1.x client libraries, the Telegraf `outputs.influxdb` output plugin, or third-party tools. 
Use this endpoint to send data in [line protocol](https://docs.influxdata.com/influxdb3/enterprise/reference/syntax/line-protocol/) format to InfluxDB. Use query parameters to specify options for writing data. #### Related - [Use compatibility APIs to write data](/influxdb3/enterprise/write-data/http-api/compatibility-apis/) parameters: - $ref: "#/components/parameters/dbWriteParam" - $ref: "#/components/parameters/compatibilityPrecisionParam" - $ref: "#/components/parameters/v1UsernameParam" - $ref: "#/components/parameters/v1PasswordParam" - name: rp in: query required: false schema: type: string description: | Retention policy name. Honored but discouraged. InfluxDB 3 doesn't use retention policies. - name: consistency in: query required: false schema: type: string description: | Write consistency level. Ignored by InfluxDB 3. Provided for compatibility with InfluxDB 1.x clients. - name: Authorization in: header required: false schema: type: string description: | Authorization header for token-based authentication. Supported schemes: - `Bearer AUTH_TOKEN` - OAuth bearer token scheme - `Token AUTH_TOKEN` - InfluxDB v2 token scheme - `Basic ` - Basic authentication (username is ignored) - name: Content-Type in: header description: | The content type of the request payload. schema: $ref: "#/components/schemas/LineProtocol" required: false - name: Accept in: header description: | The content type that the client can understand. Writes only return a response body if they fail (partially or completely)--for example, due to a syntax problem or type mismatch. 
schema: type: string default: application/json enum: - application/json required: false - $ref: "#/components/parameters/ContentEncoding" - $ref: "#/components/parameters/ContentLength" requestBody: $ref: "#/components/requestBodies/lineProtocolRequestBody" tags: - Compatibility endpoints - Write data components: parameters: AcceptQueryHeader: name: Accept in: header schema: type: string default: application/json enum: - application/json - application/jsonl - application/vnd.apache.parquet - text/csv required: false description: | The content type that the client can understand. ContentEncoding: name: Content-Encoding in: header description: | The compression applied to the line protocol in the request payload. To send a gzip payload, pass the `Content-Encoding: gzip` header. schema: $ref: "#/components/schemas/ContentEncoding" required: false ContentLength: name: Content-Length in: header description: | The size of the entity-body, in bytes, sent to InfluxDB. schema: $ref: "#/components/schemas/ContentLength" ContentType: name: Content-Type description: | The format of the data in the request body. in: header schema: type: string enum: - application/json required: false db: name: db in: query required: true schema: type: string description: | The name of the database. dbWriteParam: name: db in: query required: true schema: type: string description: | The name of the database. InfluxDB creates the database if it doesn't already exist, and then writes all points in the batch to the database. dbQueryParam: name: db in: query required: false schema: type: string description: | The name of the database. If you provide a query that specifies the database, you can omit the `db` parameter from your request.
accept_partial: name: accept_partial in: query required: false schema: $ref: "#/components/schemas/AcceptPartial" compatibilityPrecisionParam: name: precision in: query required: false schema: $ref: "#/components/schemas/PrecisionWriteCompatibility" description: The precision for unix timestamps in the line protocol batch. precisionParam: name: precision in: query required: false schema: $ref: "#/components/schemas/PrecisionWrite" description: The precision for unix timestamps in the line protocol batch. querySqlParam: name: q in: query required: true schema: type: string format: SQL description: | The query to execute. format: name: format in: query required: false schema: $ref: "#/components/schemas/Format" formatRequired: name: format in: query required: true schema: $ref: "#/components/schemas/Format" v1UsernameParam: name: u in: query required: false schema: type: string description: > Username for v1 compatibility authentication. When using Basic authentication or query string authentication, InfluxDB 3 ignores this parameter but allows any arbitrary string for compatibility with InfluxDB 1.x clients. v1PasswordParam: name: p in: query required: false schema: type: string description: | Password for v1 compatibility authentication. For query string authentication, pass a database token with write permissions as this parameter. InfluxDB 3 checks that the `p` value is an authorized token. 
requestBodies: lineProtocolRequestBody: required: true content: text/plain: schema: type: string examples: line: summary: Example line protocol value: measurement,tag=value field=1 1234567890 multiline: summary: Example multi-line line protocol value: | measurement,tag=value field=1 1234567890 measurement,tag=value field=2 1234567900 measurement,tag=value field=3 1234568000 queryRequestBody: required: true content: application/json: schema: $ref: "#/components/schemas/QueryRequestObject" schemas: AdminTokenObject: type: object properties: id: type: integer name: type: string token: type: string hash: type: string created_at: type: string format: date-time expiry: format: date-time example: id: 0 name: _admin token: apiv3_00xx0Xx0xx00XX0x0 hash: 00xx0Xx0xx00XX0x0 created_at: "2025-04-18T14:02:45.331Z" expiry: null ResourceTokenObject: type: object properties: token_name: type: string permissions: type: array items: type: object properties: resource_type: type: string enum: - system - db actions: type: array items: type: string enum: - read - write resource_names: type: array items: type: string description: List of resource names. Use "*" for all resources. expiry_secs: type: integer description: The expiration time in seconds. example: token_name: All system information permissions: - resource_type: system actions: - read resource_names: - "*" expiry_secs: 300000 ContentEncoding: type: string enum: - gzip - identity description: > Content coding. Use `gzip` for compressed data or `identity` for unmodified, uncompressed data. #### Multi-member gzip support InfluxDB 3 supports multi-member gzip payloads (concatenated gzip files per [RFC 1952](https://www.rfc-editor.org/rfc/rfc1952)).
This allows you to: - Concatenate multiple gzip files and send them in a single request - Maintain compatibility with InfluxDB v1 and v2 write endpoints - Simplify batch operations using standard compression tools default: identity LineProtocol: type: string enum: - text/plain - text/plain; charset=utf-8 description: | `text/plain` is the content type for line protocol. `UTF-8` is the default character set. default: text/plain; charset=utf-8 ContentLength: type: integer description: The length in decimal number of octets. Database: type: string AcceptPartial: type: boolean default: true description: Accept partial writes. Format: type: string enum: - json - csv - parquet - json_lines - jsonl - pretty description: |- The format of data in the response body. `json_lines` is the canonical name; `jsonl` is accepted as an alias. NoSync: type: boolean default: false description: | Acknowledges a successful write without waiting for WAL persistence. #### Related - [Use the HTTP API and client libraries to write data](/influxdb3/enterprise/write-data/api-client-libraries/) - [Data durability](/influxdb3/enterprise/reference/internals/durability/) PrecisionWriteCompatibility: enum: - ms - s - us - u - ns - "n" type: string description: |- The precision for unix timestamps in the line protocol batch. Use `ms` for milliseconds, `s` for seconds, `us` or `u` for microseconds, or `ns` or `n` for nanoseconds. Optional — defaults to nanosecond precision if omitted. PrecisionWrite: enum: - auto - nanosecond - microsecond - millisecond - second type: string description: | The precision for unix timestamps in the line protocol batch. Supported values: - `auto` (default): Automatically detects precision based on timestamp magnitude - `nanosecond`: Nanoseconds - `microsecond`: Microseconds - `millisecond`: Milliseconds - `second`: Seconds QueryRequestObject: type: object properties: db: description: | The name of the database to query. 
Required if the query (`q`) doesn't specify the database. type: string q: description: The query to execute. type: string format: description: The format of the query results. type: string enum: - json - csv - parquet - json_lines - jsonl - pretty params: description: | Additional parameters for the query. Use this field to pass query parameters. type: object additionalProperties: true required: - db - q example: db: mydb q: SELECT * FROM mytable format: json params: {} CreateDatabaseRequest: type: object properties: db: type: string pattern: ^[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9]$|^[a-zA-Z0-9]$ description: |- The database name. Database names cannot contain underscores (_). Names must start and end with alphanumeric characters and can contain hyphens (-) in the middle. retention_period: type: string description: |- The retention period for the database. Specifies how long data should be retained. Use duration format (for example, "1d", "1h", "30m", "7d"). example: 7d required: - db CreateTableRequest: type: object properties: db: type: string table: type: string tags: type: array items: type: string fields: type: array items: type: object properties: name: type: string type: type: string enum: - utf8 - int64 - uint64 - float64 - bool required: - name - type retention_period: type: string description: |- The retention period for the table. Specifies how long data in this table should be retained. Use duration format (for example, "1d", "1h", "30m", "7d"). example: 30d required: - db - table - tags - fields DistinctCacheCreateRequest: type: object properties: db: type: string table: type: string node_spec: $ref: "#/components/schemas/ApiNodeSpec" name: type: string description: Optional cache name. columns: type: array items: type: string max_cardinality: type: integer description: Optional maximum cardinality. max_age: type: integer description: Optional maximum age in seconds. 
required: - db - table - columns example: db: mydb table: mytable columns: - tag1 - tag2 max_cardinality: 1000 max_age: 3600 LastCacheCreateRequest: type: object properties: db: type: string table: type: string node_spec: $ref: "#/components/schemas/ApiNodeSpec" name: type: string description: Optional cache name. key_columns: type: array items: type: string description: Optional list of key columns. value_columns: type: array items: type: string description: Optional list of value columns. count: type: integer description: Optional count. ttl: type: integer description: Optional time-to-live in seconds. required: - db - table example: db: mydb table: mytable key_columns: - tag1 value_columns: - field1 count: 100 ttl: 3600 ProcessingEngineTriggerRequest: type: object properties: db: type: string plugin_filename: type: string description: | The path and filename of the plugin to execute--for example, `schedule.py` or `endpoints/report.py`. The path can be absolute or relative to the `--plugins-dir` directory configured when starting InfluxDB 3. The plugin file must implement the trigger interface associated with the trigger's specification. node_spec: $ref: "#/components/schemas/ApiNodeSpec" trigger_name: type: string trigger_settings: description: | Configuration for trigger error handling and execution behavior. allOf: - $ref: "#/components/schemas/TriggerSettings" trigger_specification: description: > Specifies when and how the processing engine trigger should be invoked. 
## Supported trigger specifications ### Cron-based scheduling Format: `cron:CRON_EXPRESSION` Uses extended (6-field) cron format (second minute hour day_of_month month day_of_week): ``` ┌───────────── second (0-59) │ ┌───────────── minute (0-59) │ │ ┌───────────── hour (0-23) │ │ │ ┌───────────── day of month (1-31) │ │ │ │ ┌───────────── month (1-12) │ │ │ │ │ ┌───────────── day of week (0-6, Sunday=0) │ │ │ │ │ │ * * * * * * ``` Examples: - `cron:0 0 6 * * 1-5` - Every weekday at 6:00 AM - `cron:0 30 14 * * 5` - Every Friday at 2:30 PM - `cron:0 0 0 1 * *` - First day of every month at midnight ### Interval-based scheduling Format: `every:DURATION` Supported durations: `s` (seconds), `m` (minutes), `h` (hours), `d` (days), `w` (weeks), `M` (months), `y` (years): - `every:30s` - Every 30 seconds - `every:5m` - Every 5 minutes - `every:1h` - Every hour - `every:1d` - Every day - `every:1w` - Every week - `every:1M` - Every month - `every:1y` - Every year **Maximum interval**: 1 year ### Table-based triggers - `all_tables` - Triggers on write events to any table in the database - `table:TABLE_NAME` - Triggers on write events to a specific table ### On-demand triggers Format: `request:REQUEST_PATH` Creates an HTTP endpoint `/api/v3/engine/REQUEST_PATH` for manual invocation: - `request:hello-world` - Creates endpoint `/api/v3/engine/hello-world` - `request:data-export` - Creates endpoint `/api/v3/engine/data-export` pattern: ^(cron:[0-9 *,/-]+|every:[0-9]+[smhdwMy]|all_tables|table:[a-zA-Z_][a-zA-Z0-9_]*|request:[a-zA-Z0-9_-]+)$ example: cron:0 0 6 * * 1-5 trigger_arguments: type: object additionalProperties: true description: Optional arguments passed to the plugin. disabled: type: boolean default: false description: Whether the trigger is disabled.
required: - db - plugin_filename - trigger_name - trigger_settings - trigger_specification - disabled TriggerSettings: type: object description: | Configuration settings for processing engine trigger error handling and execution behavior. properties: run_async: type: boolean default: false description: | Whether to run the trigger asynchronously. When `true`, the trigger executes in the background without blocking. When `false`, the trigger executes synchronously. error_behavior: type: string enum: - Log - Retry - Disable description: | Specifies how to handle errors that occur during trigger execution: - `Log`: Log the error and continue (default) - `Retry`: Retry the trigger execution - `Disable`: Disable the trigger after an error default: Log required: - run_async - error_behavior ApiNodeSpec: x-enterprise-only: true type: object description: | Optional specification for targeting specific nodes in a multi-node InfluxDB 3 Enterprise cluster. Use this to control which node(s) should handle the cache or trigger. properties: node_id: type: string description: | The ID of a specific node in the cluster. If specified, the cache or trigger will only be created on this node. node_group: type: string description: | The name of a node group in the cluster. If specified, the cache or trigger will be created on all nodes in this group. WALPluginTestRequest: type: object description: | Request body for testing a write-ahead logging (WAL) plugin. properties: filename: type: string description: | The path and filename of the plugin to test. database: type: string description: | The database name to use for the test. input_lp: type: string description: | Line protocol data to use as input for the test. cache_name: type: string description: | Optional name of the cache to use in the test. input_arguments: type: object additionalProperties: type: string description: | Optional key-value pairs of arguments to pass to the plugin. 
required: - filename - database - input_lp SchedulePluginTestRequest: type: object description: | Request body for testing a scheduling plugin. properties: filename: type: string description: | The path and filename of the plugin to test. database: type: string description: | The database name to use for the test. schedule: type: string description: | Optional schedule specification in cron or interval format. cache_name: type: string description: | Optional name of the cache to use in the test. input_arguments: type: object additionalProperties: type: string description: | Optional key-value pairs of arguments to pass to the plugin. required: - filename - database PluginFileRequest: type: object description: | Request body for updating a plugin file. properties: plugin_name: type: string description: | The name of the plugin file to update. content: type: string description: | The content of the plugin file. required: - plugin_name - content PluginDirectoryRequest: type: object description: | Request body for updating plugin directory with multiple files. properties: plugin_name: type: string description: | The name of the plugin directory to update. files: type: array items: $ref: "#/components/schemas/PluginFileEntry" description: | List of plugin files to include in the directory. required: - plugin_name - files PluginFileEntry: type: object description: | Represents a single file in a plugin directory. properties: content: type: string description: | The content of the file. relative_path: type: string description: The relative path of the file within the plugin directory. 
required: - relative_path - content ShowDatabasesResponse: type: object properties: databases: type: array items: type: string QueryResponse: type: object properties: results: type: array items: type: object example: results: - series: - name: mytable columns: - time - value values: - - "2024-02-02T12:00:00Z" - 42 ErrorMessage: type: object properties: error: type: string data: type: object nullable: true LineProtocolError: properties: code: description: Code is the machine-readable error code. enum: - internal error - not found - conflict - invalid - empty value - unavailable readOnly: true type: string err: description: Stack of errors that occurred during processing of the request. Useful for debugging. readOnly: true type: string line: description: First line in the request body that contains malformed data. format: int32 readOnly: true type: integer message: description: Human-readable message. readOnly: true type: string op: description: Describes the logical code operation when the error occurred. Useful for debugging. readOnly: true type: string required: - code EpochCompatibility: description: | A unix timestamp precision. - `h` for hours - `m` for minutes - `s` for seconds - `ms` for milliseconds - `u` or `µ` for microseconds - `ns` for nanoseconds enum: - ns - u - µ - ms - s - m - h type: string UpdateDatabaseRequest: type: object properties: retention_period: type: string description: | The retention period for the database. Specifies how long data should be retained. Use duration format (for example, "1d", "1h", "30m", "7d"). example: 7d description: Request schema for updating database configuration. UpdateTableRequest: type: object properties: db: type: string description: The name of the database containing the table. table: type: string description: The name of the table to update. retention_period: type: string description: | The retention period for the table. Specifies how long data in this table should be retained. 
Use duration format (for example, "1d", "1h", "30m", "7d"). example: 30d required: - db - table description: Request schema for updating table configuration. LicenseResponse: type: object properties: license_type: type: string description: The type of license (for example, "enterprise", "trial"). example: enterprise expires_at: type: string format: date-time description: The expiration date of the license in ISO 8601 format. example: "2025-12-31T23:59:59Z" features: type: array items: type: string description: List of features enabled by the license. example: - clustering - processing_engine - advanced_auth status: type: string enum: - active - expired - invalid description: The current status of the license. example: active description: Response schema for license information. CreateTokenWithPermissionsRequest: type: object properties: token_name: type: string description: The name for the resource token. permissions: type: array items: $ref: "#/components/schemas/PermissionDetailsApi" description: List of permissions to grant to the token. expiry_secs: type: integer description: Optional expiration time in seconds. nullable: true required: - token_name - permissions PermissionDetailsApi: type: object properties: resource_type: type: string enum: - system - db description: The type of resource. resource_names: type: array items: type: string description: List of resource names. Use "*" for all resources. actions: type: array items: type: string enum: - read - write description: List of actions to grant. required: - resource_type - resource_names - actions FileIndexCreateRequest: type: object description: Request body for creating a file index. properties: db: type: string description: The database name. table: type: string description: The table name. If omitted, the file index applies to the database. nullable: true columns: type: array items: type: string description: The columns to use for the file index. 
required: - db - columns example: db: mydb table: mytable columns: - tag1 - tag2 FileIndexDeleteRequest: type: object description: Request body for deleting a file index. properties: db: type: string description: The database name. table: type: string description: The table name. If omitted, deletes the database-level file index. nullable: true required: - db example: db: mydb table: mytable StopNodeRequest: type: object description: Request body for marking a node as stopped in the catalog. properties: node_id: type: string description: The ID of the node to mark as stopped. required: - node_id example: node_id: node-1 responses: Unauthorized: description: Unauthorized access. content: application/json: schema: $ref: "#/components/schemas/ErrorMessage" BadRequest: description: | Request failed. Possible reasons: - Invalid database name - Malformed request body - Invalid timestamp precision content: application/json: schema: $ref: "#/components/schemas/ErrorMessage" Forbidden: description: Access denied. content: application/json: schema: $ref: "#/components/schemas/ErrorMessage" NotFound: description: Resource not found. content: application/json: schema: $ref: "#/components/schemas/ErrorMessage" headers: ClusterUUID: description: | The catalog UUID of the InfluxDB instance. This header is included in all HTTP API responses and enables you to: - Identify which cluster instance handled the request - Monitor deployments across multiple InfluxDB instances - Debug and troubleshoot distributed systems schema: type: string format: uuid example: 01234567-89ab-cdef-0123-456789abcdef securitySchemes: BasicAuthentication: type: http scheme: basic description: >- Use the `Authorization` header with the `Basic` scheme to authenticate v1 API requests. Works with v1 compatibility [`/write`](#operation/PostV1Write) and [`/query`](#operation/GetV1Query) endpoints in InfluxDB 3. 
When authenticating requests, InfluxDB 3 checks that the `password` part of the decoded credential is an authorized token and ignores the `username` part of the decoded credential. ### Syntax ```http Authorization: Basic ``` ### Example ```bash curl "http://localhost:8181/write?db=DATABASE_NAME&precision=s" \ --user "":"AUTH_TOKEN" \ --header "Content-type: text/plain; charset=utf-8" \ --data-binary 'home,room=kitchen temp=72 1641024000' ``` Replace the following: - **`DATABASE_NAME`**: your InfluxDB 3 Enterprise database - **`AUTH_TOKEN`**: an admin token or database token authorized for the database QuerystringAuthentication: type: apiKey in: query name: u=&p= description: >- Use InfluxDB 1.x API parameters to provide credentials through the query string for v1 API requests. Querystring authentication works with v1-compatible [`/write`](#operation/PostV1Write) and [`/query`](#operation/GetV1Query) endpoints. When authenticating requests, InfluxDB 3 checks that the `p` (_password_) query parameter is an authorized token and ignores the `u` (_username_) query parameter. 
### Syntax ```http https://localhost:8181/query?[u=any]&p=AUTH_TOKEN https://localhost:8181/write?[u=any]&p=AUTH_TOKEN ``` ### Examples ```bash curl "http://localhost:8181/write?db=DATABASE_NAME&precision=s&p=AUTH_TOKEN" \ --header "Content-type: text/plain; charset=utf-8" \ --data-binary 'home,room=kitchen temp=72 1641024000' ``` Replace the following: - **`DATABASE_NAME`**: your InfluxDB 3 Enterprise database - **`AUTH_TOKEN`**: an admin token or database token authorized for the database ```bash ####################################### # Use an InfluxDB 1.x compatible username and password # to query the InfluxDB v1 HTTP API ####################################### # Use authentication query parameters: # ?p=AUTH_TOKEN ####################################### curl --get "http://localhost:8181/query" \ --data-urlencode "p=AUTH_TOKEN" \ --data-urlencode "db=DATABASE_NAME" \ --data-urlencode "q=SELECT * FROM MEASUREMENT" ``` Replace the following: - **`DATABASE_NAME`**: the database to query - **`AUTH_TOKEN`**: a database token with sufficient permissions to the database BearerAuthentication: type: http scheme: bearer bearerFormat: JWT description: | Use the OAuth Bearer authentication scheme to provide an authorization token to InfluxDB 3. Bearer authentication works with all endpoints. In your API requests, send an `Authorization` header. For the header value, provide the word `Bearer` followed by a space and a database token. ### Syntax ```http Authorization: Bearer AUTH_TOKEN ``` ### Example ```bash curl http://localhost:8181/api/v3/query_influxql \ --header "Authorization: Bearer AUTH_TOKEN" ``` TokenAuthentication: description: |- Use InfluxDB v2 Token authentication to provide an authorization token to InfluxDB 3. The v2 Token scheme works with v1 and v2 compatibility endpoints in InfluxDB 3. In your API requests, send an `Authorization` header. For the header value, provide the word `Token` followed by a space and a database token.
The word `Token` is case-sensitive. ### Syntax ```http Authorization: Token AUTH_TOKEN ``` ### Example ```sh ######################################################## # Use the Token authentication scheme with /api/v2/write # to write data. ######################################################## curl --request POST "http://localhost:8181/api/v2/write?bucket=DATABASE_NAME&precision=s" \ --header "Authorization: Token AUTH_TOKEN" \ --data-binary 'home,room=kitchen temp=72 1463683075' ``` in: header name: Authorization type: apiKey x-tagGroups: - name: Using the InfluxDB HTTP API tags: - Quick start - Authentication - Cache data - Common parameters - Response codes - Compatibility endpoints - Database - Processing engine - Server information - Table - Token - Query data - Write data