Identify write methods
Many tools are available for writing data to your InfluxDB cluster. Identify the tools and methods most appropriate for your use case. The following summary covers some of the most common options (the list is not exhaustive).
Telegraf
Telegraf is a data collection agent that collects data from various sources, parses the data into line protocol, and then writes the data to InfluxDB. Telegraf is plugin-based and provides hundreds of plugins that collect, aggregate, process, and write data.
If you need to collect data from well-established systems and technologies, Telegraf likely already provides a plugin for collecting that data. Some of the most common use cases are:
- Monitoring system metrics (memory, CPU, disk usage, etc.)
- Monitoring Docker containers
- Monitoring network devices via SNMP
- Collecting data from a Kafka queue
- Collecting data from an MQTT broker
- Collecting data from HTTP endpoints
- Scraping data from a Prometheus exporter
- Parsing logs
For more information about using Telegraf with InfluxDB Clustered, see Use Telegraf to write data to InfluxDB Clustered.
InfluxDB client libraries
InfluxDB client libraries are language-specific packages that integrate with InfluxDB APIs. They simplify integrating InfluxDB with your own custom application and standardize interactions between your application and your InfluxDB cluster. With client libraries, you can collect and write whatever time series data is useful for your application.
InfluxDB Clustered includes backwards-compatible write APIs, so if you currently use an InfluxDB v1 or v2 client library, you can continue to use it to write data to your cluster.
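For example, the following sketch uses the InfluxDB v2 Python client library (influxdb-client) to write a single point through the v2-compatible API. The cluster URL, database name, and token are placeholder values you would replace with your own; InfluxDB Clustered ignores the organization, but the client requires a value.

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details--replace with your cluster URL,
# database name, and database token.
client = InfluxDBClient(
    url="https://cluster-host.com",
    token="DATABASE_TOKEN",
    org="ORG_ID",  # ignored by InfluxDB Clustered, but required by the client
)

write_api = client.write_api(write_options=SYNCHRONOUS)

# Build a point and write it to the database (passed as the bucket parameter).
point = (
    Point("home")
    .tag("room", "kitchen")
    .field("temp", 22.5)
)
write_api.write(bucket="DATABASE_NAME", record=point)

client.close()
```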
InfluxDB HTTP write APIs
InfluxDB Clustered provides backwards-compatible HTTP write APIs for writing data to your cluster. The InfluxDB client libraries use these APIs, but if you choose not to use a client library, you can integrate directly with the API. Because these APIs are backwards compatible, you can use existing InfluxDB API integrations with your InfluxDB cluster.
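For illustration, this sketch posts raw line protocol to the v2-compatible /api/v2/write endpoint using the Python requests package. The host, database name, and token are placeholders you would replace with your own values.

```python
import requests

# Placeholder values--replace with your cluster URL, database, and token.
url = "https://cluster-host.com/api/v2/write"
params = {"bucket": "DATABASE_NAME", "precision": "s"}
headers = {
    "Authorization": "Token DATABASE_TOKEN",
    "Content-Type": "text/plain; charset=utf-8",
}

# Line protocol: measurement,tag_set field_set timestamp
data = "home,room=kitchen temp=22.5,hum=41.0 1704067200"

response = requests.post(url, params=params, headers=headers, data=data)
response.raise_for_status()  # a 204 No Content response indicates a successful write
```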
Write optimizations
As you decide on and integrate tooling to write data to your InfluxDB cluster, you can apply several optimizations to keep your write pipeline as performant as possible. The following list links to more detailed descriptions of each optimization in the Optimize writes documentation (a combined example follows at the end of this section):
- Batch writes
- Sort tags by key
- Use the coarsest time precision possible
- Use gzip compression
- Synchronize hosts with NTP
- Write multiple data points in one request
- Pre-process data before writing
Telegraf and InfluxDB client libraries leverage many of these optimizations by default.
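If you integrate with the HTTP API directly, the following sketch illustrates several of these optimizations applied together: multiple points batched into one request, tags sorted lexicographically by key, second-level timestamp precision, and a gzip-compressed request body. The endpoint, database name, and token are placeholder values.

```python
import gzip
import requests

# Placeholder values--replace with your cluster URL, database, and token.
url = "https://cluster-host.com/api/v2/write"
params = {"bucket": "DATABASE_NAME", "precision": "s"}  # coarsest precision that fits the data
headers = {
    "Authorization": "Token DATABASE_TOKEN",
    "Content-Type": "text/plain; charset=utf-8",
    "Content-Encoding": "gzip",  # tell InfluxDB the body is gzip-compressed
}

# Batch multiple points into one request; tag keys are sorted lexically (co2 before room).
lines = [
    "home,co2=low,room=kitchen temp=22.5 1704067200",
    "home,co2=low,room=kitchen temp=22.7 1704067260",
    "home,co2=ok,room=bedroom temp=21.1 1704067200",
]
body = gzip.compress("\n".join(lines).encode("utf-8"))

response = requests.post(url, params=params, headers=headers, data=body)
response.raise_for_status()
```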