influx write dryrun

The influx write dryrun command prints write output to stdout instead of writing to InfluxDB. Use this command to test writing data.

The command supports line protocol, annotated CSV, and extended annotated CSV as input. Output is always line protocol.

Usage

influx write dryrun [flags]

Flags

| Flag | Description | Input type | Maps to ? |
|:-----|:------------|:-----------|:----------|
| -c, --active-config | CLI configuration to use for command | string | |
| -b, --bucket | Bucket name (mutually exclusive with --bucket-id) | string | INFLUX_BUCKET_NAME |
| --bucket-id | Bucket ID (mutually exclusive with --bucket) | string | INFLUX_BUCKET_ID |
| --configs-path | Path to influx CLI configurations (default ~/.influxdbv2/configs) | string | INFLUX_CONFIGS_PATH |
| --debug | Output errors to stderr | | |
| --encoding | Character encoding of input (default UTF-8) | string | |
| --errors-file | Path to a file used for recording rejected row errors | string | |
| -f, --file | File to import | stringArray | |
| --format | Input format (lp or csv, default lp) | string | |
| --header | Prepend header line to CSV input data | string | |
| -h, --help | Help for the dryrun command | | |
| --host | HTTP address of InfluxDB (default http://localhost:9999) | string | INFLUX_HOST |
| --max-line-length | Maximum number of bytes that can be read for a single line (default 16000000) | integer | |
| -o, --org | Organization name (mutually exclusive with --org-id) | string | INFLUX_ORG |
| --org-id | Organization ID (mutually exclusive with --org) | string | INFLUX_ORG_ID |
| -p, --precision | Precision of the timestamps (default ns) | string | INFLUX_PRECISION |
| --rate-limit | Throttle write rate (examples: 5MB/5min or 1MB/s) | string | |
| --skip-verify | Skip TLS certificate verification | | INFLUX_SKIP_VERIFY |
| --skipHeader | Skip first n rows of input data | integer | |
| --skipRowOnError | Output CSV errors to stderr, but continue processing | | |
| -t, --token | API token | string | INFLUX_TOKEN |
| -u, --url | URL to import data from | stringArray | |
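
Several of these flags can be combined in a single dry run. The following sketch (file paths are placeholders) checks a CSV file, continues past rows that fail to parse, and records the rejected rows in a separate file:

influx write dryrun \
  --bucket example-bucket \
  --format csv \
  --skipRowOnError \
  --errors-file path/to/rejected-rows.csv \
  --file path/to/data.csv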

Examples

Authentication credentials

The examples below assume your InfluxDB host, organization, and token are provided by either the active influx CLI configuration or by environment variables (INFLUX_HOST, INFLUX_ORG, and INFLUX_TOKEN). If you do not have a CLI configuration set up or the environment variables set, include these required credentials for each command with the following flags:

  • --host: InfluxDB host
  • -o, --org or --org-id: InfluxDB organization name or ID
  • -t, --token: InfluxDB API token
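
For example, a minimal sketch (values are placeholders) that exports the environment variables and then runs a dry run without repeating credentials on the command line:

export INFLUX_HOST=http://localhost:9999
export INFLUX_ORG=example-org
export INFLUX_TOKEN=mY5uP3rS3cR3tT0k3n

influx write dryrun \
  --bucket example-bucket \
  --file path/to/line-protocol.txt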

Line protocol

Dry run writing line protocol via stdin
influx write dryrun --bucket example-bucket "
m,host=host1 field1=1.2
m,host=host2 field1=2.4
m,host=host1 field2=5i
m,host=host2 field2=3i
"
Dry run writing line protocol from a file
influx write dryrun \
  --bucket example-bucket \
  --file path/to/line-protocol.txt
Dry run writing line protocol from multiple files
influx write dryrun \
  --bucket example-bucket \
  --file path/to/line-protocol-1.txt \
  --file path/to/line-protocol-2.txt
Dry run writing line protocol from a URL
influx write dryrun \
  --bucket example-bucket \
  --url https://example.com/line-protocol.txt
Dry run writing line protocol from multiple URLs
influx write dryrun \
  --bucket example-bucket \
  --url https://example.com/line-protocol-1.txt \
  --url https://example.com/line-protocol-2.txt
Dry run writing line protocol from multiple sources
influx write dryrun \
  --bucket example-bucket \
  --file path/to/line-protocol-1.txt \
  --url https://example.com/line-protocol-2.txt
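
If your line protocol timestamps use a precision other than nanoseconds, pass it with -p, --precision. A sketch assuming second-precision timestamps (file path is a placeholder):

Dry run writing line protocol with second-precision timestamps
influx write dryrun \
  --bucket example-bucket \
  --precision s \
  --file path/to/line-protocol.txt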

CSV

Dry run writing annotated CSV data via stdin
influx write dryrun \
  --bucket example-bucket \
  --format csv \
  "#datatype measurement,tag,tag,field,field,ignored,time
m,cpu,host,time_steal,usage_user,nothing,time
cpu,cpu1,host1,0,2.7,a,1482669077000000000
cpu,cpu1,host2,0,2.2,b,1482669087000000000
"
Dry run writing annotated CSV data from a file
influx write dryrun \
  --bucket example-bucket \
  --file path/to/data.csv
Dry run writing annotated CSV data from multiple files
influx write dryrun \
  --bucket example-bucket \
  --file path/to/data-1.csv \
  --file path/to/data-2.csv
Dry run writing annotated CSV data from a URL
influx write dryrun \
  --bucket example-bucket \
  --url https://example.com/data.csv
Dry run writing annotated CSV data from multiple URLs
influx write dryrun \
  --bucket example-bucket \
  --url https://example.com/data-1.csv \
  --url https://example.com/data-2.csv
Dry run writing annotated CSV data from multiple sources
influx write dryrun \
  --bucket example-bucket \
  --file path/to/data-1.csv \
  --url https://example.com/data-2.csv
Dry run prepending CSV data with annotation headers
influx write dryrun \
  --bucket example-bucket \
  --header "#constant measurement,birds" \
  --header "#datatype dataTime:2006-01-02,long,tag" \
  --file path/to/data.csv
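
Prepended annotation headers can also stand in for a header row already present in the file: combine --header with --skipHeader to skip the file's own header first. A sketch using placeholder values:

Dry run replacing an existing CSV header row
influx write dryrun \
  --bucket example-bucket \
  --skipHeader=1 \
  --header "#constant measurement,birds" \
  --header "#datatype dateTime:2006-01-02,long,tag" \
  --file path/to/data.csv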
