The InfluxDB HTTP API for InfluxDB 3 Core provides a programmatic interface for interacting with InfluxDB 3 Core databases and resources. Use this API to write and query data, and to manage tables, caches, and Processing engine triggers and plugins.
The API includes endpoints under the following paths:
/api/v3
: InfluxDB 3 Core native endpoints
/
: Compatibility endpoints for InfluxDB v1 workloads and clients
/api/v2/write
: Compatibility endpoint for InfluxDB v2 workloads and clients
Check the status of the InfluxDB server.
curl "http://localhost:8181/health"
Write data to InfluxDB.
curl "http://localhost:8181/api/v3/write_lp?db=sensors&precision=auto" \
--data-raw "home,room=Kitchen temp=72.0
home,room=Living\ room temp=71.5"
If all data is written, the response is 204 No Content.
Query data from InfluxDB.
curl -G "http://localhost:8181/api/v3/query_sql" \
--data-urlencode "db=sensors" \
--data-urlencode "q=SELECT * FROM home WHERE room='Living room'" \
--data-urlencode "format=jsonl"
Output:
{"room":"Living room","temp":71.5,"time":"2025-02-25T20:19:34.984098"}
For more information about using InfluxDB 3 Core, see the Get started guide.
InfluxDB 3 provides compatibility endpoints for InfluxDB 1.x and InfluxDB 2.x workloads and clients.
Use the /api/v2/write endpoint for InfluxDB v2 clients and when you bring existing InfluxDB v2 write workloads to InfluxDB 3.
Use the /write endpoint for InfluxDB v1 clients and when you bring existing InfluxDB v1 write workloads to InfluxDB 3.
For new workloads, use the /api/v3/write_lp endpoint.
All endpoints accept the same line protocol format.
Use the HTTP /query endpoint for InfluxDB v1 clients and v1 query workloads using InfluxQL.
For new workloads, use one of the following:
The /api/v3/query_sql endpoint for new query workloads using SQL.
The /api/v3/query_influxql endpoint for new query workloads using InfluxQL.
Server information endpoints such as /health and /metrics are compatible with InfluxDB 1.x and InfluxDB 2.x clients.
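For example (a minimal sketch assuming the default local server used in the earlier examples), an existing v1 or v2 client can retrieve server metrics:
curl "http://localhost:8181/metrics"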
Write and query data
If the no_sync write option is enabled (no_sync=true), the server sends a response to acknowledge the write without waiting for WAL persistence. If no_sync=false (default), the server sends a response to acknowledge the write after the WAL persists the data.
Writes line protocol to the specified database.
This endpoint provides backward compatibility for InfluxDB 1.x write workloads using tools such as InfluxDB 1.x client libraries, the Telegraf outputs.influxdb
output plugin, or third-party tools.
Use this endpoint to send data in line protocol format to InfluxDB. Use query parameters to specify options for writing data.
db required | string The name of the database. InfluxDB creates the database if it doesn't already exist, and then writes all points in the batch to the database. |
precision required | string (PrecisionWriteCompatibility) Enum: "ms" "s" "us" "ns" The precision for unix timestamps in the line protocol batch. |
Accept | string Default: application/json Value: "application/json" The content type that the client can understand. Writes only return a response body if they fail (partially or completely)--for example, due to a syntax problem or type mismatch. |
Content-Encoding | string (ContentEncoding) Default: identity Enum: "gzip" "identity" The compression applied to the line protocol in the request payload.
To send a gzip payload, pass the Content-Encoding: gzip header. |
Content-Length | integer (ContentLength) The size of the entity-body, in bytes, sent to InfluxDB. |
Content-Type | string (LineProtocol) Default: text/plain; charset=utf-8 Enum: "text/plain" "text/plain; charset=utf-8" The content type of the request payload. |
measurement,tag=value field=1 1234567890
"{\n \"error\": \"write of line protocol failed\",\n \"data\": [\n {\n \"original_line\": \"dquote> home,room=Kitchen temp=hi\",\n \"line_number\": 2,\n \"error_message\": \"No fields were provided\"\n }\n ]\n}\n"
Writes line protocol to the specified database.
This endpoint provides backward compatibility for InfluxDB 2.x write workloads using tools such as InfluxDB 2.x client libraries, the Telegraf outputs.influxdb_v2
output plugin, or third-party tools.
Use this endpoint to send data in line protocol format to InfluxDB. Use query parameters to specify options for writing data.
accept_partial | boolean (AcceptPartial) Default: true Accept partial writes. |
db required | string A database name. InfluxDB creates the database if it doesn't already exist, and then writes all points in the batch to the database. |
precision required | string (PrecisionWriteCompatibility) Enum: "ms" "s" "us" "ns" The precision for unix timestamps in the line protocol batch. |
Accept | string Default: application/json Value: "application/json" The content type that the client can understand. Writes only return a response body if they fail (partially or completely)--for example, due to a syntax problem or type mismatch. |
Content-Encoding | string Default: identity Enum: "gzip" "identity" The compression applied to the line protocol in the request payload.
To send a gzip payload, pass the Content-Encoding: gzip header. |
Content-Length | integer The size of the entity-body, in bytes, sent to InfluxDB. |
Content-Type | string (LineProtocol) Default: text/plain; charset=utf-8 Enum: "text/plain" "text/plain; charset=utf-8" The content type of the request payload. |
measurement,tag=value field=1 1234567890
{
  "error": "string",
  "data": { }
}
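For example, a minimal sketch of a v2-compatible write, assuming the local server and the sensors database from the earlier examples:
curl "http://localhost:8181/api/v2/write?db=sensors&precision=ms" \
  --header "Content-Type: text/plain; charset=utf-8" \
  --data-raw "home,room=Living\ room temp=71.5"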
Writes line protocol to the specified database.
Use this endpoint to send data in line protocol format to InfluxDB. Use query parameters to specify options for writing data.
accept_partial | boolean (AcceptPartial) Default: true Accept partial writes. |
db required | string The name of the database. InfluxDB creates the database if it doesn't already exist, and then writes all points in the batch to the database. |
no_sync | boolean (NoSync) Default: false Acknowledges a successful write without waiting for WAL persistence. For more information, see Data flow in InfluxDB 3 Core. |
precision required | string (PrecisionWrite) Enum: "auto" "millisecond" "second" "microsecond" "nanosecond" Precision of timestamps. |
Accept | string Default: application/json Value: "application/json" The content type that the client can understand. Writes only return a response body if they fail (partially or completely)--for example, due to a syntax problem or type mismatch. |
Content-Encoding | string (ContentEncoding) Default: identity Enum: "gzip" "identity" The compression applied to the line protocol in the request payload.
To send a gzip payload, pass the Content-Encoding: gzip header. |
Content-Length | integer (ContentLength) The size of the entity-body, in bytes, sent to InfluxDB. |
Content-Type | string (LineProtocol) Default: text/plain; charset=utf-8 Enum: "text/plain" "text/plain; charset=utf-8" The content type of the request payload. |
measurement,tag=value field=1 1234567890
{
  "error": "string",
  "data": { }
}
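For example, a sketch (assuming the sensors database from the earlier examples) that rejects the whole batch if any line is invalid (accept_partial=false) and acknowledges the write without waiting for WAL persistence (no_sync=true):
curl "http://localhost:8181/api/v3/write_lp?db=sensors&precision=auto&accept_partial=false&no_sync=true" \
  --data-raw "home,room=Kitchen temp=72.0"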
Executes an SQL query to retrieve data from the specified database.
db required | string The name of the database. |
format | string The format of the query results. |
q required | string The SQL query to execute. |
Accept | string Default: application/json Enum: "application/json" "application/jsonl" "application/vnd.apache.parquet" "text/csv" The content type that the client can understand. |
Content-Type | string Value: "application/json" The format of the data in the request body. |
{
  "results": [
    {
      "series": [
        {
          "name": "mytable",
          "columns": ["time", "value"],
          "values": [["2024-02-02T12:00:00Z", 42]]
        }
      ]
    }
  ]
}
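For example, a sketch that requests CSV output through the Accept header, assuming the sensors database and home table from the earlier examples:
curl -G "http://localhost:8181/api/v3/query_sql" \
  --header "Accept: text/csv" \
  --data-urlencode "db=sensors" \
  --data-urlencode "q=SELECT room, temp FROM home"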
Executes an SQL query to retrieve data from the specified database.
Accept | string Default: application/json Enum: "application/json" "application/jsonl" "application/vnd.apache.parquet" "text/csv" The content type that the client can understand. |
Content-Type | string Value: "application/json" The format of the data in the request body. |
database required | string The name of the database to query.
Required if the query (query_str) doesn't specify the database. |
format | string Enum: "json" "csv" "parquet" "jsonl" "pretty" The format of the query results. |
params | object Additional parameters for the query. Use this field to pass query parameters. |
query_str required | string The query to execute. |
{
  "database": "mydb",
  "query_str": "SELECT * FROM mytable",
  "format": "json",
  "params": { }
}
{
  "results": [
    {
      "series": [
        {
          "name": "mytable",
          "columns": ["time", "value"],
          "values": [["2024-02-02T12:00:00Z", 42]]
        }
      ]
    }
  ]
}
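For example, a sketch of a parameterized query sent as a JSON request body, assuming the sensors database; the $room placeholder syntax for values passed in params is an assumption based on InfluxDB 3 parameterized queries and isn't shown in this excerpt:
curl "http://localhost:8181/api/v3/query_sql" \
  --header "Content-Type: application/json" \
  --data '{
    "database": "sensors",
    "query_str": "SELECT * FROM home WHERE room = $room",
    "format": "jsonl",
    "params": {"room": "Kitchen"}
  }'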
Executes an InfluxQL query to retrieve data from the specified database.
db | string The name of the database. If you provide a query that specifies the database, you can omit the 'db' parameter from your request. |
format | string The format of the query results. |
q required | string The InfluxQL query string. |
Accept | string Default: application/json Enum: "application/json" "application/jsonl" "application/vnd.apache.parquet" "text/csv" The content type that the client can understand. |
{
  "results": [
    {
      "series": [
        {
          "name": "mytable",
          "columns": ["time", "value"],
          "values": [["2024-02-02T12:00:00Z", 42]]
        }
      ]
    }
  ]
}
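For example, a minimal sketch assuming the sensors database and home table from the earlier examples:
curl -G "http://localhost:8181/api/v3/query_influxql" \
  --data-urlencode "db=sensors" \
  --data-urlencode "q=SELECT temp FROM home WHERE room = 'Kitchen'" \
  --data-urlencode "format=json"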
Executes an InfluxQL query to retrieve data from the specified database.
Accept | string Default: application/json Enum: "application/json" "application/jsonl" "application/vnd.apache.parquet" "text/csv" The content type that the client can understand. |
Content-Type | string Value: "application/json" The format of the data in the request body. |
database required | string The name of the database to query.
Required if the query (query_str) doesn't specify the database. |
format | string Enum: "json" "csv" "parquet" "jsonl" "pretty" The format of the query results. |
params | object Additional parameters for the query. Use this field to pass query parameters. |
query_str required | string The query to execute. |
{
  "database": "mydb",
  "query_str": "SELECT * FROM mytable",
  "format": "json",
  "params": { }
}
{
  "results": [
    {
      "series": [
        {
          "name": "mytable",
          "columns": ["time", "value"],
          "values": [["2024-02-02T12:00:00Z", 42]]
        }
      ]
    }
  ]
}
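For example, a sketch that sends the InfluxQL query in a JSON request body, assuming the sensors database:
curl "http://localhost:8181/api/v3/query_influxql" \
  --header "Content-Type: application/json" \
  --data '{"database": "sensors", "query_str": "SHOW MEASUREMENTS", "format": "json"}'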
Executes an InfluxQL query to retrieve data from the specified database.
This endpoint is compatible with InfluxDB 1.x client libraries and third-party integrations such as Grafana. Use query parameters to specify the database and the InfluxQL query.
chunk_size | integer Default: 10000 The number of records that will go into a chunk.
This parameter is only used if chunked=true. |
chunked | boolean Default: false If true, the response is divided into chunks of size chunk_size. |
db | string <InfluxQL> The database to query. If not provided, the InfluxQL query string must specify the database. |
epoch | string (EpochCompatibility) Enum: "ns" "u" "µ" "ms" "s" "m" "h" Formats timestamps as unix (epoch) timestamps with the specified precision instead of RFC3339 timestamps with nanosecond precision. |
pretty | boolean Default: false If true, the JSON response is formatted in a human-readable format. |
q required | string The InfluxQL query string. |
Accept | string Default: application/json Enum: "application/json" "application/csv" "text/csv" The content type that the client can understand. Returns an error if the requested format is invalid or non-UTF8. |
{
  "results": [
    {
      "series": [
        {
          "name": "mytable",
          "columns": ["time", "value"],
          "values": [["2024-02-02T12:00:00Z", 42]]
        }
      ]
    }
  ]
}
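For example, a sketch of a v1-style chunked query that returns millisecond epoch timestamps, assuming the sensors database and home table from the earlier examples:
curl -G "http://localhost:8181/query" \
  --data-urlencode "db=sensors" \
  --data-urlencode "q=SELECT temp FROM home" \
  --data-urlencode "epoch=ms" \
  --data-urlencode "chunked=true" \
  --data-urlencode "chunk_size=1000"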
Executes an InfluxQL query to retrieve data from the specified database.
Accept | string Default: application/json Enum: "application/json" "application/csv" "text/csv" The content type that the client can understand. Returns an error if the requested format is invalid or non-UTF8. |
chunk_size | integer Default: 10000 The number of records that will go into a chunk.
This parameter is only used if chunked=true. |
chunked | boolean If true, the response is divided into chunks of size chunk_size. |
db | string The database to query. If not provided, the InfluxQL query string must specify the database. |
epoch | string Enum: "ns" "u" "µ" "ms" "s" "m" "h" A unix timestamp precision.
Formats timestamps as unix (epoch) timestamps with the specified precision instead of RFC3339 timestamps with nanosecond precision. |
pretty | boolean If true, the JSON response is formatted in a human-readable format. |
q required | string The InfluxQL query string. |
{
  "db": "string",
  "q": "string",
  "chunked": true,
  "chunk_size": 10000,
  "epoch": "ns",
  "pretty": true
}
{
  "results": [
    {
      "series": [
        {
          "name": "mytable",
          "columns": ["time", "value"],
          "values": [["2024-02-02T12:00:00Z", 42]]
        }
      ]
    }
  ]
}
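For example, a sketch that sends the same options as a JSON request body, following the schema above and assuming the sensors database:
curl "http://localhost:8181/query" \
  --header "Content-Type: application/json" \
  --data '{"db": "sensors", "q": "SELECT temp FROM home", "epoch": "ms", "pretty": true}'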
Manage Processing engine triggers, test plugins, and send requests to trigger On Request plugins.
InfluxDB 3 Core provides the InfluxDB 3 Processing engine, an embedded Python VM that can dynamically load and trigger Python plugins in response to events in your database. Use Processing engine plugins and triggers to run code and perform tasks for different database events.
To get started with the Processing engine, see the Processing engine and Python plugins guide.
Creates a new processing engine trigger.
db required | string The name of the database. |
disabled | boolean Whether the trigger is disabled. |
plugin_filename required | string The filename of the plugin to run. |
trigger_arguments | object Optional arguments passed to the plugin. |
trigger_name required | string The name of the trigger. |
trigger_specification required | string The trigger specification that determines when the trigger runs. |
{
  "db": "string",
  "plugin_filename": "string",
  "trigger_name": "string",
  "trigger_specification": "string",
  "trigger_arguments": { },
  "disabled": true
}
{
  "error": "string",
  "data": { }
}
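For example, a sketch of creating a scheduled trigger; the /api/v3/configure/processing_engine_trigger path and the every:10s trigger specification are assumptions based on the broader InfluxDB 3 Core documentation and aren't shown in this excerpt:
curl -X POST "http://localhost:8181/api/v3/configure/processing_engine_trigger" \
  --header "Content-Type: application/json" \
  --data '{
    "db": "sensors",
    "plugin_filename": "my_plugin.py",
    "trigger_name": "my_trigger",
    "trigger_specification": "every:10s",
    "trigger_arguments": {},
    "disabled": false
  }'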
Disables a processing engine trigger.
Content-Type | string Value: "application/json" The format of the data in the request body. |
db required | string The name of the database. |
disabled | boolean Whether the trigger is disabled. |
plugin_filename required | string The filename of the plugin to run. |
trigger_arguments | object Optional arguments passed to the plugin. |
trigger_name required | string The name of the trigger. |
trigger_specification required | string The trigger specification that determines when the trigger runs. |
{
  "db": "string",
  "plugin_filename": "string",
  "trigger_name": "string",
  "trigger_specification": "string",
  "trigger_arguments": { },
  "disabled": true
}
{
  "error": "string",
  "data": { }
}
Enables a processing engine trigger.
Content-Type | string Value: "application/json" The format of the data in the request body. |
db required | string The name of the database. |
disabled | boolean Whether the trigger is disabled. |
plugin_filename required | string The filename of the plugin to run. |
trigger_arguments | object Optional arguments passed to the plugin. |
trigger_name required | string The name of the trigger. |
trigger_specification required | string The trigger specification that determines when the trigger runs. |
{
  "db": "string",
  "plugin_filename": "string",
  "trigger_name": "string",
  "trigger_specification": "string",
  "trigger_arguments": { },
  "disabled": true
}
{
  "error": "string",
  "data": { }
}
Installs packages for the plugin environment.
Content-Type | string Value: "application/json" The format of the data in the request body. |
property name* | any |
{ }
{
  "error": "string",
  "data": { }
}
Installs requirements for the plugin environment.
Content-Type | string Value: "application/json" The format of the data in the request body. |
property name* | any |
{ }
{
  "error": "string",
  "data": { }
}
{
  "error": "string",
  "data": { }
}
Sends a request to invoke an On Request processing engine plugin. The request can include request headers, query string parameters, and a request body, which InfluxDB passes to the plugin.
An On Request plugin implements the following signature:
def process_request(influxdb3_local, query_parameters, request_headers, request_body, args=None)
The response depends on the plugin implementation.
plugin_path required | string The path configured in the trigger specification for the plugin. When you define a request trigger, the HTTP API exposes a plugin endpoint at the configured path. |
{
  "error": "string",
  "data": { }
}
Sends a request to invoke an On Request processing engine plugin. The request can include request headers, query string parameters, and a request body, which InfluxDB passes to the plugin.
An On Request plugin implements the following signature:
def process_request(influxdb3_local, query_parameters, request_headers, request_body, args=None)
The response depends on the plugin implementation.
plugin_path required | string The path configured in the trigger specification for the plugin. When you define a request trigger, the HTTP API exposes a plugin endpoint at the configured path. |
Content-Type | string Value: "application/json" The format of the data in the request body. |
property name* | any |
{ }
{
  "error": "string",
  "data": { }
}
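For example, a sketch of invoking a plugin registered with a hypothetical request:my_plugin trigger specification; the /api/v3/engine/<plugin_path> base path is an assumption based on the broader InfluxDB 3 Core documentation and isn't shown in this excerpt. The query string parameters, headers, and JSON body are passed to the plugin's process_request function:
curl "http://localhost:8181/api/v3/engine/my_plugin?region=us-west" \
  --header "Content-Type: application/json" \
  --data '{"greeting": "hello"}'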
{
  "version": "0.1.0",
  "revision": "f3d3d3d"
}
Creates a new table within a database.
db required | string The name of the database. |
fields | Array of objects Optional field definitions for the table. Each field has a name and a type. |
table required | string The name of the table to create. |
tags required | Array of strings The names of the tag columns for the table. |
{
  "db": "string",
  "table": "string",
  "tags": ["string"],
  "fields": [
    {
      "name": "string",
      "type": "utf8"
    }
  ]
}
{
  "error": "string",
  "data": { }
}
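For example, a sketch of creating a table with two tag columns and one string field; the /api/v3/configure/table path is an assumption based on the broader InfluxDB 3 Core documentation and isn't shown in this excerpt:
curl -X POST "http://localhost:8181/api/v3/configure/table" \
  --header "Content-Type: application/json" \
  --data '{
    "db": "sensors",
    "table": "home_meta",
    "tags": ["room", "sensor_id"],
    "fields": [
      {"name": "notes", "type": "utf8"}
    ]
  }'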
Creates a distinct cache for a table.
columns required | Array of strings The columns to cache distinct values for. |
db required | string The name of the database. |
max_age | integer Optional maximum age in seconds. |
max_cardinality | integer Optional maximum cardinality. |
name | string Optional cache name. |
table required | string The name of the table. |
{
  "db": "mydb",
  "table": "mytable",
  "columns": ["tag1", "tag2"],
  "max_cardinality": 1000,
  "max_age": 3600
}
Creates a last cache for a table.
count | integer Optional count. |
db required | string The name of the database. |
key_columns | Array of strings Optional list of key columns. |
name | string Optional cache name. |
table required | string The name of the table. |
ttl | integer Optional time-to-live in seconds. |
value_columns | Array of strings Optional list of value columns. |
{
  "db": "mydb",
  "table": "mytable",
  "key_columns": ["tag1"],
  "value_columns": ["field1"],
  "count": 100,
  "ttl": 3600
}
{
  "error": "string",
  "data": { }
}