Troubleshoot issues writing data

Learn how to avoid unexpected results and recover from errors when writing to InfluxDB Cloud Dedicated.

Handle write responses

InfluxDB Cloud Dedicated does the following when you send a write request:

  1. Validates the request.

  2. If successful, attempts to ingest data from the request body; otherwise, responds with an error status.

  3. Ingests or rejects data from the batch and returns one of the following HTTP status codes:

    • 204 No Content: All of the data is ingested and queryable.
    • 400 Bad Request: Some (when partial writes are configured for the cluster) or all of the data has been rejected. Data that has not been rejected is ingested and queryable.

    The response body contains error details about rejected points, up to 100 points.

Writes are synchronous: the response status indicates the final status of the write, and all ingested data is queryable.

To ensure that InfluxDB handles writes in the order you request them, wait for the response before you send the next request.
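One way to preserve ordering is to split points into batches and send each batch only after the previous response arrives. A minimal sketch, assuming a caller-supplied `send_batch` function that blocks until the HTTP response is received (`chunk`, `write_sequentially`, and `send_batch` are illustrative helpers, not part of any client library):

```python
from typing import Callable, Iterator


def chunk(lines: list[str], size: int) -> Iterator[list[str]]:
    """Split line protocol lines into batches of at most `size` lines."""
    for start in range(0, len(lines), size):
        yield lines[start:start + size]


def write_sequentially(
    lines: list[str],
    size: int,
    send_batch: Callable[[str], int],
) -> None:
    """Send each batch and wait for its response before sending the next."""
    for batch in chunk(lines, size):
        status = send_batch("\n".join(batch))  # blocks until the response arrives
        if status != 204:
            raise RuntimeError(f"write failed with HTTP {status}")
```

Because each call to `send_batch` completes before the next begins, InfluxDB receives the batches in the order you sent them.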

Review HTTP status codes

InfluxDB uses conventional HTTP status codes to indicate the success or failure of a request. The message property of the response body may contain additional details about the error. InfluxDB Cloud Dedicated returns one of the following HTTP status codes for a write request:

  • 204 "No Content": Empty response body. InfluxDB ingested all of the data in the batch.
  • 400 "Bad Request": Response body contains error details about rejected points (up to 100 points): line contains the first rejected line and message describes the rejections. Some or all of the request data isn't allowed (for example, it is malformed or falls outside of the database's retention period); the response body indicates whether a partial write occurred or all data was rejected.
  • 401 "Unauthorized": Empty response body. The Authorization request header is missing or malformed, or the token doesn't have permission to write to the database.
  • 404 "Not Found": Response body contains the requested resource type (for example, "database") and resource name. A requested resource wasn't found.
  • 422 "Unprocessable Entity": Response body message contains details about the error. The data isn't allowed (for example, it falls outside of the database's retention period).
  • 500 "Internal Server Error": Empty response body. Default status for an error.
  • 503 "Service Unavailable": Empty response body. The server is temporarily unavailable to accept writes. The Retry-After header contains the number of seconds to wait before trying the write again.

If your data did not write to the database, see how to troubleshoot rejected points.
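The status codes above suggest a simple decision procedure for client code: succeed on 204, inspect the response body on 400 or 422, fix credentials on 401, and wait for the Retry-After interval on 503. A sketch of that mapping (`classify_write_response` is an illustrative helper, not part of any client library):

```python
def classify_write_response(status: int, headers: dict) -> str:
    """Map a write response to a follow-up action.

    Returns one of: "ok", "inspect-rejections", "fix-auth",
    "retry-after-<N>s", or "error".
    """
    if status == 204:
        return "ok"
    if status in (400, 422):
        # The response body lists rejected points (up to 100)
        return "inspect-rejections"
    if status == 401:
        return "fix-auth"
    if status == 503:
        # Honor the Retry-After header before resending the batch
        wait = int(headers.get("Retry-After", "1"))
        return f"retry-after-{wait}s"
    return "error"
```

For example, a 503 response with `Retry-After: 30` maps to waiting 30 seconds before resending the same batch.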

Troubleshoot failures

If you notice data is missing in your database, check the HTTP status code and response body of your write requests, and then troubleshoot rejected points.

Troubleshoot rejected points

When writing points from a batch, InfluxDB rejects points that have syntax errors or schema conflicts. If InfluxDB processes the data in your batch and then rejects points, the HTTP response body contains the following properties that describe rejected points:

  • code: "invalid"
  • line: the line number of the first rejected point in the batch.
  • message: a string that contains line-separated error messages, one message for each rejected point in the batch, up to 100 rejected points. Line numbers are 1-based.
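Because each rejected point produces one line-separated entry in message, you can extract the per-point errors programmatically. A minimal sketch that parses the message format shown in the example below (`parse_rejections` is an illustrative helper, not part of any client library):

```python
import re


def parse_rejections(message: str) -> list[tuple[int, str]]:
    """Extract (line_number, reason) pairs from a 400 response `message`."""
    pattern = re.compile(r"error parsing line (\d+) \(1-based\): (.+)")
    return [
        (int(m.group(1)), m.group(2).strip())
        for m in pattern.finditer(message)
    ]
```

Each returned line number is 1-based and refers to a line in the batch you submitted, so you can look up and correct the offending line protocol.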

InfluxDB rejects points for the following reasons:

  • a line protocol parsing error
  • an invalid timestamp
  • a schema conflict

Schema conflicts occur when you try to write data that contains any of the following:

  • a wrong data type: the point falls within the same partition (default partitioning is measurement and day) as existing database data and contains a different data type for an existing field
  • a tag and a field that use the same key
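Both kinds of schema conflict can be seen in line protocol form. A hypothetical sample (the measurement, tag, and field names are illustrative):

```
# Existing data: temp is a float
weather,city=london temp=21.5 1640995200000000000

# Rejected: wrong data type; temp written as a string in the same partition
weather,city=london temp="warm" 1640995260000000000

# Rejected: tag and field share the key "city"
weather,city=london city="london" 1640995320000000000
```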

Example

The following example shows a response body for a write request that contains two rejected points:

{
  "code": "invalid",
  "line": 2,
  "message": "failed to parse line protocol:
              errors encountered on line(s):
              error parsing line 2 (1-based): Invalid measurement was provided
              error parsing line 4 (1-based): Unable to parse timestamp value '123461000000000000000000000000'"
}

Check for field data type differences between the rejected data point and points within the same database and partition (default partitioning is by measurement and day). For example, did you attempt to write string data to an int field?
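One way to run this check is to compare the incoming point's field types against the types already stored for that table (which you could fetch by querying the database's schema). A sketch of the comparison, assuming both sides are available as name-to-type maps (`field_type_conflicts` is a hypothetical helper, not part of any client library):

```python
def field_type_conflicts(
    existing: dict[str, str],
    incoming: dict[str, str],
) -> dict[str, tuple[str, str]]:
    """Return fields whose incoming type differs from the stored type.

    Maps field name -> (stored_type, incoming_type).
    """
    return {
        name: (existing[name], new_type)
        for name, new_type in incoming.items()
        if name in existing and existing[name] != new_type
    }
```

For example, if the database stores `field1` as a float and an incoming point writes `field1` as a string, the helper reports that single conflict while ignoring new fields, which are allowed.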

Report write issues

If you experience persistent write issues that you can’t resolve using the troubleshooting steps above, use these guidelines to gather the necessary information when reporting the issue to InfluxData support.

Before reporting an issue

Ensure you have followed all troubleshooting steps and reviewed the write optimization guidelines to rule out common configuration and data formatting issues.

Gather essential information

When reporting write issues, provide the following information to help InfluxData engineers diagnose the problem:

1. Error details and logs

Capture the complete error response:

# Example: Capture both successful and failed write attempts
curl --silent --show-error \
  --write-out "\nHTTP Status: %{http_code}\nResponse Time: %{time_total}s\n" \
  --request POST \
  "https://cluster-id.a.influxdb.io/write?db=DATABASE_NAME&precision=ns" \
  --header "Authorization: Bearer AUTH_TOKEN" \
  --header "Content-Type: text/plain; charset=utf-8" \
  --data-binary @problematic-data.lp \
  > write-error-response.txt 2>&1

Log client-side errors:

If using a client library, enable debug logging and capture the full exception details:

import logging
import traceback

from influxdb_client_3 import InfluxDBClient3

# Enable debug logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("influxdb_client_3")

try:
    client = InfluxDBClient3(
        token="AUTH_TOKEN",
        host="cluster-id.a.influxdb.io",
        database="DATABASE_NAME",
    )
    client.write(data)
except Exception as e:
    logger.error(f"Write failed: {e}")
    # Include the full stack trace in your report
    traceback.print_exc()
package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/InfluxCommunity/influxdb3-go/influxdb3"
)

func main() {
    // Enable debug logging
    client, err := influxdb3.New(influxdb3.ClientConfig{
        Host:     "https://cluster-id.a.influxdb.io",
        Token:    "AUTH_TOKEN",
        Database: "DATABASE_NAME",
        Debug:    true,
    })
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    err = client.Write(context.Background(), data)
    if err != nil {
        // Log the full error details
        fmt.Fprintf(os.Stderr, "Write error: %+v\n", err)
    }
}
import com.influxdb.v3.client.InfluxDBClient;
import java.util.logging.Logger;
import java.util.logging.Level;

public class WriteErrorExample {
    private static final Logger logger = Logger.getLogger(WriteErrorExample.class.getName());

    public static void main(String[] args) {
        try (InfluxDBClient client = InfluxDBClient.getInstance(
                "https://cluster-id.a.influxdb.io",
                "AUTH_TOKEN".toCharArray(),
                "DATABASE_NAME")) {
            client.writeRecord(data);
        } catch (Exception e) {
            logger.log(Level.SEVERE, "Write failed", e);
            // Include the full stack trace in your report
            e.printStackTrace();
        }
    }
}
import { InfluxDBClient } from '@influxdata/influxdb3-client'

const client = new InfluxDBClient({
  host: 'https://cluster-id.a.influxdb.io',
  token: 'AUTH_TOKEN',
  database: 'DATABASE_NAME',
})

try {
  await client.write(data)
} catch (error) {
  console.error('Write failed:', error)
  // Include the full error object in your report
  console.error('Full error details:', JSON.stringify(error, null, 2))
}

Replace the following in your code:

  • DATABASE_NAME: the name of the database to write to

  • AUTH_TOKEN: a database token with write access to the specified database.

2. Data samples and patterns

Provide representative data samples:

  • Include 10-20 lines of the problematic line protocol data (sanitized if necessary)
  • Show both successful and failing data formats
  • Include timestamp ranges and precision used
  • Specify if the issue occurs with specific measurements, tags, or field types

Example data documentation:

# Successful writes:
measurement1,tag1=value1,tag2=value2 field1=1.23,field2="text" 1640995200000000000

# Failing writes:
measurement1,tag1=value1,tag2=value2 field1="string",field2=456 1640995260000000000
# Error: field data type conflict - field1 changed from float to string

3. Write patterns and volume

Document your write patterns:

  • Frequency: How often do you write data? (for example, every 10 seconds, once per minute)
  • Batch size: How many points per write request?
  • Concurrency: How many concurrent write operations?
  • Data retention: How long is data retained?
  • Timing: When did the issue first occur? Is it intermittent or consistent?

4. Environment details

Client configuration:

  • Client library version and language
  • Connection settings (timeouts, retry logic)
  • Geographic location relative to cluster

5. Reproduction steps

Provide step-by-step instructions to reproduce the issue:

  1. Environment setup: How to configure a similar environment
  2. Data preparation: Sample data files or generation scripts
  3. Write commands: Exact commands or code used
  4. Expected vs actual results: What should happen vs what actually happens

Create a support package

Organize all gathered information into a comprehensive package:

Files to include:

  • write-error-response.txt - HTTP response details
  • client-logs.txt - Client library debug logs
  • sample-data.lp - Representative line protocol data (sanitized)
  • reproduction-steps.md - Detailed reproduction guide
  • environment-details.md - client configuration
  • write-patterns.md - Usage patterns and volume information

Package format:

# Create a timestamped support package
TIMESTAMP=$(date -Iseconds)
mkdir "write-issue-${TIMESTAMP}"
# Add all relevant files to the directory
tar -czf "write-issue-${TIMESTAMP}.tar.gz" "write-issue-${TIMESTAMP}/"

Submit the issue

Include the support package when contacting InfluxData support through your standard support channels, along with:

  • A clear description of the problem
  • Impact assessment (how critical is this issue?)
  • Any workarounds you’ve attempted
  • Business context if the issue affects production systems

This comprehensive information will help InfluxData engineers identify root causes and provide targeted solutions for your write issues.

