---
title: Use the v3 write_lp API to write data
description: Use the /api/v3/write_lp HTTP API endpoint to write data to InfluxDB 3 Core.
url: https://docs.influxdata.com/influxdb3/core/write-data/http-api/v3-write-lp/
estimated_tokens: 4629
product: InfluxDB 3 Core
version: core
---

# Use the v3 write\_lp API to write data

Use the `/api/v3/write_lp` endpoint to write data to InfluxDB 3 Core.

This endpoint accepts the same [line protocol](/influxdb3/core/reference/line-protocol/) syntax as previous versions of InfluxDB and supports the following options:

## Query parameters

-   `?accept_partial=<BOOLEAN>`: Accept or reject partial writes (default is `true`).
    
-   `?no_sync=<BOOLEAN>`: Control when writes are acknowledged:
    
    -   `no_sync=true`: Acknowledges writes before WAL persistence completes.
    -   `no_sync=false`: Acknowledges writes after WAL persistence completes (default).
-   `?precision=<PRECISION>`: Specify the precision of the timestamp. By default, InfluxDB 3 Core uses the timestamp magnitude to auto-detect the precision (`auto`). To avoid any ambiguity, you can specify the precision of timestamps in your data.
    
    The InfluxDB 3 Core `/api/v3/write_lp` API endpoint supports the following timestamp precisions:
    
    -   `auto` (automatic detection, default)
    -   `nanosecond` (nanoseconds)
    -   `microsecond` (microseconds)
    -   `millisecond` (milliseconds)
    -   `second` (seconds)

### Auto precision detection

When you use `precision=auto` (or omit the parameter), InfluxDB 3 Core automatically detects the timestamp precision based on the magnitude of the timestamp value:

-   Timestamps < 5e9 → Second precision (multiplied by 1,000,000,000 to convert to nanoseconds)
-   Timestamps < 5e12 → Millisecond precision (multiplied by 1,000,000)
-   Timestamps < 5e15 → Microsecond precision (multiplied by 1,000)
-   Larger timestamps → Nanosecond precision (no conversion needed)
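The detection rule above can be sketched as a small shell function (an illustrative sketch of the thresholds, not InfluxDB's actual implementation):

```bash
# Classify a timestamp by magnitude, mirroring the thresholds above
# (illustrative only -- not InfluxDB's code).
classify_precision() {
  ts=$1
  if   [ "$ts" -lt 5000000000 ];       then echo "second"
  elif [ "$ts" -lt 5000000000000 ];    then echo "millisecond"
  elif [ "$ts" -lt 5000000000000000 ]; then echo "microsecond"
  else                                      echo "nanosecond"
  fi
}

classify_precision 1708976567           # second
classify_precision 1708976567000000000  # nanosecond
```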

### Precision examples

The following examples show how to write data with different timestamp precisions:


**Auto (default):**

```bash
# Auto precision (default) - timestamp magnitude determines precision
curl "http://localhost:8181/api/v3/write_lp?db=sensors" \
  --header "Authorization: Bearer DATABASE_TOKEN" \
  --data-raw "cpu,host=server1 usage=50.0 1708976567"
```

The timestamp `1708976567` is automatically detected as seconds.

**Nanoseconds:**

```bash
# Explicit nanosecond precision
curl "http://localhost:8181/api/v3/write_lp?db=sensors&precision=nanosecond" \
  --header "Authorization: Bearer DATABASE_TOKEN" \
  --data-raw "cpu,host=server1 usage=50.0 1708976567000000000"
```
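
**Microseconds:**

```bash
# Explicit microsecond precision
curl "http://localhost:8181/api/v3/write_lp?db=sensors&precision=microsecond" \
  --header "Authorization: Bearer DATABASE_TOKEN" \
  --data-raw "cpu,host=server1 usage=50.0 1708976567000000"
```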

**Milliseconds:**

```bash
# Millisecond precision
curl "http://localhost:8181/api/v3/write_lp?db=sensors&precision=millisecond" \
  --header "Authorization: Bearer DATABASE_TOKEN" \
  --data-raw "cpu,host=server1 usage=50.0 1708976567000"
```

**Seconds:**

```bash
# Second precision
curl "http://localhost:8181/api/v3/write_lp?db=sensors&precision=second" \
  --header "Authorization: Bearer DATABASE_TOKEN" \
  --data-raw "cpu,host=server1 usage=50.0 1708976567"
```


## Configure gzip compression

The `/api/v3/write_lp` endpoint supports gzip-encoded request bodies for efficient data transfer.

When sending gzip-compressed data, include the `Content-Encoding: gzip` header in your request.

### Multi-member gzip support

InfluxDB 3 Core supports multi-member gzip payloads (concatenated gzip files per [RFC 1952](https://www.rfc-editor.org/rfc/rfc1952)). This allows you to:

-   Concatenate multiple gzip files and send them in a single request
-   Maintain compatibility with InfluxDB v1 and v2 write endpoints
-   Simplify batch operations using standard compression tools

#### Example: Write concatenated gzip files

```bash
# Create multiple gzip files
echo "cpu,host=server1 usage=50.0 1708976567" | gzip > batch1.gz
echo "cpu,host=server2 usage=60.0 1708976568" | gzip > batch2.gz

# Concatenate and send in a single request
cat batch1.gz batch2.gz | curl "http://localhost:8181/api/v3/write_lp?db=sensors" \
  --header "Authorization: Bearer DATABASE_TOKEN" \
  --header "Content-Encoding: gzip" \
  --data-binary @-
```
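Before sending, you can confirm locally that the concatenated payload decompresses as a single stream, per RFC 1952:

```bash
# Recreate the two compressed batches from the example above
echo "cpu,host=server1 usage=50.0 1708976567" | gzip > batch1.gz
echo "cpu,host=server2 usage=60.0 1708976568" | gzip > batch2.gz

# gzip -d decompresses concatenated members as one stream
cat batch1.gz batch2.gz | gzip -d
# cpu,host=server1 usage=50.0 1708976567
# cpu,host=server2 usage=60.0 1708976568
```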

## Request body

The request body contains data in [line protocol](/influxdb3/core/reference/line-protocol/) syntax. For example:

```
POST /api/v3/write_lp?db=mydb&precision=nanosecond&accept_partial=true&no_sync=false
```

*The following example uses [cURL](https://curl.se/) to send a write request using the [Home sensor sample data](/influxdb3/core/reference/sample-data/#home-sensor-data), but you can use any HTTP client.*

```bash
curl -v "http://localhost:8181/api/v3/write_lp?db=sensors&precision=second" \
  --header "Authorization: Bearer DATABASE_TOKEN" \
  --data-raw "home,room=Living\ Room temp=21.1,hum=35.9,co=0i 1735545600
home,room=Kitchen temp=21.0,hum=35.9,co=0i 1735545600
home,room=Living\ Room temp=21.4,hum=35.9,co=0i 1735549200
home,room=Kitchen temp=23.0,hum=36.2,co=0i 1735549200
home,room=Living\ Room temp=21.8,hum=36.0,co=0i 1735552800
home,room=Kitchen temp=22.7,hum=36.1,co=0i 1735552800
home,room=Living\ Room temp=22.2,hum=36.0,co=0i 1735556400
home,room=Kitchen temp=22.4,hum=36.0,co=0i 1735556400
home,room=Living\ Room temp=22.2,hum=35.9,co=0i 1735560000
home,room=Kitchen temp=22.5,hum=36.0,co=0i 1735560000
home,room=Living\ Room temp=22.4,hum=36.0,co=0i 1735563600
home,room=Kitchen temp=22.8,hum=36.5,co=1i 1735563600
home,room=Living\ Room temp=22.3,hum=36.1,co=0i 1735567200
home,room=Kitchen temp=22.8,hum=36.3,co=1i 1735567200
home,room=Living\ Room temp=22.3,hum=36.1,co=1i 1735570800
home,room=Kitchen temp=22.7,hum=36.2,co=3i 1735570800
home,room=Living\ Room temp=22.4,hum=36.0,co=4i 1735574400
home,room=Kitchen temp=22.4,hum=36.0,co=7i 1735574400
home,room=Living\ Room temp=22.6,hum=35.9,co=5i 1735578000
home,room=Kitchen temp=22.7,hum=36.0,co=9i 1735578000
home,room=Living\ Room temp=22.8,hum=36.2,co=9i 1735581600
home,room=Kitchen temp=23.3,hum=36.9,co=18i 1735581600
home,room=Living\ Room temp=22.5,hum=36.3,co=14i 1735585200
home,room=Kitchen temp=23.1,hum=36.6,co=22i 1735585200
home,room=Living\ Room temp=22.2,hum=36.4,co=17i 1735588800
home,room=Kitchen temp=22.7,hum=36.5,co=26i 1735588800"
```


#### InfluxDB client libraries

InfluxData provides supported InfluxDB 3 client libraries that you can integrate with your code to construct data as time series points, and then write them as line protocol to an InfluxDB 3 Core database. For more information, see how to [use InfluxDB client libraries to write data](/influxdb3/core/write-data/client-libraries/).

## Partial writes

The `/api/v3/write_lp` endpoint lets you accept or reject partial writes using the `accept_partial` parameter. This parameter changes the behavior of the API when the write request contains invalid line protocol or schema conflicts.

For example, the following line protocol contains two points, each using a different datatype for the `temp` field, which causes a schema conflict:

```
home,room=Sunroom temp=96 1735545600
home,room=Sunroom temp="hi" 1735549200
```

### Accept partial writes

With `accept_partial=true` (default), InfluxDB:

-   Accepts and writes line `1`
-   Rejects line `2`
-   Returns a `400 Bad Request` status code and the following response body:

```
< HTTP/1.1 400 Bad Request
...
{
  "error": "partial write of line protocol occurred",
  "data": [
    {
      "original_line": "home,room=Sunroom temp=hi 1735549200",
      "line_number": 2,
      "error_message": "invalid column type for column 'temp', expected iox::column_type::field::float, got iox::column_type::field::string"
    }
  ]
}
```

### Do not accept partial writes

With `accept_partial=false`, InfluxDB:

-   Rejects *all* points in the batch
-   Returns a `400 Bad Request` status code and the following response body:

```
< HTTP/1.1 400 Bad Request
...
{
  "error": "parsing failed for write_lp endpoint",
  "data": {
    "original_line": "home,room=Sunroom temp=hi 1735549200",
    "line_number": 2,
    "error_message": "invalid column type for column 'temp', expected iox::column_type::field::float, got iox::column_type::field::string"
  }
}
```

*For more information about the ingest path and data flow, see [Data durability](/influxdb3/core/reference/internals/durability/).*

## Write responses

By default, InfluxDB 3 Core acknowledges writes after flushing the WAL file to the Object store, which occurs every second. For high write throughput, you can send multiple concurrent write requests.
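For example, a shell loop can issue several writes in parallel (an illustrative sketch; it assumes the same local instance and `DATABASE_TOKEN` placeholder used throughout this page):

```bash
# Send four write requests concurrently, then wait for all to finish
for host in server1 server2 server3 server4; do
  curl -s "http://localhost:8181/api/v3/write_lp?db=sensors" \
    --header "Authorization: Bearer DATABASE_TOKEN" \
    --data-raw "cpu,host=$host usage=50.0" &
done
wait
```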

### Use no\_sync for immediate write responses

To reduce the latency of writes, use the `no_sync` write option, which acknowledges writes *before* WAL persistence completes. When `no_sync=true`, InfluxDB validates the data, writes the data to the WAL, and then immediately responds to the client, without waiting for persistence to the Object store.

Using `no_sync=true` is best when prioritizing high-throughput writes over absolute durability.

-   Default behavior (`no_sync=false`): Waits for data to be written to the Object store before acknowledging the write. Reduces the risk of data loss, but increases the latency of the response.
-   With `no_sync=true`: Reduces write latency, but increases the risk of data loss in case of a crash before WAL persistence.

The following example immediately returns a response without waiting for WAL persistence:

```bash
curl "http://localhost:8181/api/v3/write_lp?db=sensors&no_sync=true" \
  --data-raw "home,room=Sunroom temp=96"
```

## Response headers

All HTTP responses from InfluxDB 3 Core include the following header:

### cluster-uuid

The `cluster-uuid` response header contains the catalog UUID of your InfluxDB 3 Core instance. This header is included in all HTTP API responses, including:

-   Write requests (`/api/v3/write_lp`, `/api/v2/write`, `/write`)
-   Query requests
-   Administrative operations
-   Authentication failures
-   CORS preflight requests

#### Use cases

The `cluster-uuid` header enables you to:

-   **Identify cluster instances**: Programmatically determine which InfluxDB instance handled a request
-   **Monitor deployments**: Track requests across multiple InfluxDB instances in load-balanced or multi-cluster environments
-   **Debug and troubleshoot**: Correlate client requests with specific server instances in distributed systems

#### Example response

```bash
curl -v "http://localhost:8181/api/v3/write_lp?db=sensors" \
  --header "Authorization: Bearer DATABASE_TOKEN" \
  --data-raw "cpu,host=server1 usage=50.0"
```

The response headers contain the `cluster-uuid`:

```
< HTTP/1.1 204 No Content
< cluster-uuid: 01234567-89ab-cdef-0123-456789abcdef
< date: Tue, 19 Nov 2025 20:00:00 GMT
```
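To read the header programmatically, capture the response headers (for example, with curl's `-D -` option) and parse out the value. A minimal sketch using the sample headers above:

```bash
# Parse the cluster-uuid value from captured response headers
# (the values below are the sample response shown above).
headers='HTTP/1.1 204 No Content
cluster-uuid: 01234567-89ab-cdef-0123-456789abcdef
date: Tue, 19 Nov 2025 20:00:00 GMT'

uuid=$(printf '%s\n' "$headers" | awk -F': ' 'tolower($1) == "cluster-uuid" { print $2 }')
echo "$uuid"
# 01234567-89ab-cdef-0123-456789abcdef
```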

#### Related

-   [Write data to InfluxDB 3 Core](/influxdb3/core/get-started/write/)
-   [/api/v3/write\_lp endpoint](/influxdb3/core/api/write-data/#operation/PostWriteLP)
