Telegraf Output Plugins
Telegraf output plugins send metrics to various destinations.
Amon
This plugin writes metrics to the Amon monitoring platform. It requires a `serverkey` and an `amoninstance` URL, which can be obtained from your Amon account.
If a point value cannot be converted to a float64, the metric is skipped.
AMQP
This plugin writes to an Advanced Message Queuing Protocol v0.9.1 broker. A prominent implementation of this protocol is RabbitMQ.
This plugin does not bind the AMQP exchange to a queue.
For an introduction, see the AMQP concepts page and the RabbitMQ getting started guide.
Azure Application Insights
This plugin writes metrics to the Azure Application Insights service.
Azure Data Explorer
This plugin writes metrics to the Azure Data Explorer, Azure Synapse Data Explorer, and Real-Time Analytics in Fabric services.
Azure Data Explorer is a distributed, columnar store, purpose-built for any type of logs, metrics, and time-series data.
Azure Monitor
This plugin writes metrics to Azure Monitor, which has a metric resolution of one minute. To accommodate this, Telegraf automatically aggregates metrics into one-minute buckets and sends them to the service on every flush interval.
The Azure Monitor custom metrics service is currently in preview and might not be available in all Azure regions. Please also take the metric time limitations into account!
The metrics from each input plugin are written to a separate Azure Monitor namespace, prefixed with `Telegraf/` by default. The field name for each metric is written as the Azure Monitor metric name. All field values are written as a summarized set that includes min, max, sum, and count. Tags are written as dimensions on each Azure Monitor metric.
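A minimal configuration sketch for this output; the region and resource ID are placeholders to replace with your own, and the commented option shows the assumed default prefix:

```toml
[[outputs.azure_monitor]]
  ## Placeholders: replace with your Azure region and resource ID.
  region = "eastus"
  resource_id = "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Compute/virtualMachines/<vm_name>"

  ## Namespace prefix for the metrics (assumed default shown).
  # namespace_prefix = "Telegraf/"
```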
Google BigQuery
This plugin writes metrics to the Google Cloud BigQuery service and requires authentication with Google Cloud using either a service account or user credentials.
Be aware that this plugin accesses APIs that are chargeable and might incur costs.
Clarify
This plugin writes metrics to Clarify. To use this plugin you will need to obtain a set of credentials.
Google Cloud PubSub
This plugin publishes metrics to a Google Cloud PubSub topic in one of the supported data formats.
Amazon CloudWatch
This plugin writes metrics to the Amazon CloudWatch service.
Amazon CloudWatch Logs
This plugin writes log-metrics to the Amazon CloudWatch service.
CrateDB
This plugin writes metrics to CrateDB via its PostgreSQL protocol.
Datadog
This plugin writes metrics to the Datadog Metrics API and requires an `apikey`, which can be obtained from your Datadog account.

> [!NOTE]
> This plugin supports the v1 API.
Discard
This plugin discards all metrics written to it and is meant for testing purposes.
Dynatrace
This plugin writes metrics to Dynatrace via the Dynatrace Metrics API V2. It may be run alongside the Dynatrace OneAgent for automatic authentication or it may be run standalone on a host without OneAgent by specifying a URL and API Token.
More information on the plugin can be found in the Dynatrace documentation.
All metrics are reported as gauges, unless they are specified to be delta counters using the `additional_counters` or `additional_counters_patterns` config options (see the sketch below). See the Dynatrace Metrics ingestion protocol documentation for details on the types defined there.
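A sketch of the standalone mode; the URL and token are placeholders, and `my.metric` stands in for whichever metrics you want reported as delta counters:

```toml
[[outputs.dynatrace]]
  ## Placeholders: your environment's ingest URL and an API token.
  url = "https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest"
  api_token = "your-api-token"

  ## Metrics to report as delta counters instead of gauges (example name).
  additional_counters = ["my.metric"]
```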
Elasticsearch
This plugin writes metrics to Elasticsearch via HTTP using the Elastic client library. The plugin supports Elasticsearch releases from v5.x up to v7.x.
Azure Event Hubs
This plugin writes metrics to the Azure Event Hubs service in any of the supported data formats. Metrics are sent as batches with each message payload containing one metric object, preferably as JSON as this eases integration with downstream components.
Each batch is sent to a single Event Hub within a namespace. If no partition key is specified, the batches are automatically load-balanced (round-robin) across all the Event Hub partitions.
Executable
This plugin writes metrics to an external application via stdin. The command is executed on each write, creating a new process, and metrics are passed in one of the supported data formats. The executable and the individual parameters must be defined as a list. All output of the executable to stderr is logged in the Telegraf log.
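For illustration, a minimal sketch that pipes metrics to `tee` in the influx line-protocol format; the command and file path are arbitrary examples:

```toml
[[outputs.exec]]
  ## Executable and its arguments as a list (example command).
  command = ["tee", "-a", "/tmp/metrics.out"]

  ## Data format for the metrics written to stdin.
  data_format = "influx"
```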
For better performance, consider the execd plugin, which runs the external program continuously.
Executable Daemon
This plugin writes metrics to an external daemon program via stdin. The command is executed once, and metrics are passed to it on every write in one of the supported data formats. The executable and the individual parameters must be defined as a list. All output of the executable to stderr is logged in the Telegraf log.
Telegraf minimum version: 1.15.0
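A sketch analogous to the exec example above, assuming a hypothetical long-running program `my-output-daemon` that reads line protocol from stdin:

```toml
[[outputs.execd]]
  ## Long-running program and its arguments (hypothetical example).
  command = ["my-output-daemon", "--some-flag"]

  ## Data format for the metrics written to the daemon's stdin.
  data_format = "influx"
```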
File
This plugin writes metrics to one or more local files in one of the supported data formats.
Graphite
This plugin writes metrics to Graphite via TCP. For details on the translation between Telegraf Metrics and Graphite output see the Graphite data format.
Graylog
This plugin writes metrics to a Graylog instance using the GELF data format.
GroundWork
This plugin writes metrics to a GroundWork Monitor instance.
This plugin only supports GroundWork v8 or later.
Health
This plugin provides an HTTP health check endpoint that can be configured to return failure status codes based on the value of a metric.
When the plugin is healthy it will return a 200 response; when unhealthy it will return a 503 response. The default state is healthy; one or more checks must fail for the resource to enter the failed state.
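A sketch of a failing-threshold check, assuming the plugin's compares option; the listen address, field name, and threshold are example values:

```toml
[[outputs.health]]
  ## Address to listen on (example value).
  service_address = "http://:8080"

  ## Report unhealthy if the field exceeds the threshold (example check).
  [[outputs.health.compares]]
    field = "buffer_size"
    lt = 5000.0
```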
HTTP
This plugin writes metrics to an HTTP endpoint using one of the supported data formats. For data formats supporting batching, metrics are sent in batches by default.
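A minimal sketch, assuming an endpoint at an example URL that accepts influx line protocol:

```toml
[[outputs.http]]
  ## Target endpoint (example URL) and HTTP method.
  url = "http://127.0.0.1:8080/telegraf"
  method = "POST"

  ## Data format for the request body.
  data_format = "influx"
```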
InfluxDB v1.x
This plugin writes metrics to an InfluxDB v1.x instance via the HTTP or UDP protocol.
InfluxDB v2.x
This plugin writes metrics to an InfluxDB v2.x instance via HTTP.
Inlong
This plugin publishes metrics to an Apache InLong instance.
Instrumental
This plugin writes metrics to the Instrumental Collector API and requires a project-specific API token.
Instrumental accepts stats in a format very close to Graphite, the only difference being that the type of stat (gauge, increment) is the first token, separated from the metric itself by whitespace. The increment type is only used if the metric comes in as a counter via the statsd input plugin.
Apache IoTDB
This plugin writes metrics to an Apache IoTDB instance, a database for the Internet of Things, supporting session connection and data insertion.
Kafka
This plugin writes metrics to a Kafka broker, acting as a Kafka producer.
Amazon Kinesis
This plugin writes metrics to an Amazon Kinesis endpoint. It batches all points into one request to reduce the number of API requests.
Please consult Amazon’s official documentation for more details on the Kinesis architecture and concepts.
Librato
This plugin writes metrics to the Librato service. It requires an `api_user` and an `api_token`, which can be obtained from your Librato account.
The `source_tag` option in the configuration file is used to send contextual information from point tags to the API. Beyond this, the plugin currently does not send any additional associated point tags.
If the point value being sent cannot be converted to a float64, the metric is skipped.
Grafana Loki
This plugin writes logs to a Grafana Loki instance, using the metric name and tags as labels. The log line will contain all fields in key="value" format, which is easily parsable with the logfmt parser in Loki.
Logs within each stream are sorted by timestamp before being sent to Loki.
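A minimal sketch, assuming the plugin's `domain` and `endpoint` options and an example Loki URL:

```toml
[[outputs.loki]]
  ## Loki instance to write to (example URL).
  domain = "https://loki.example.com"

  ## Push API endpoint (assumed default path).
  # endpoint = "/loki/api/v1/push"
```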
Microsoft Fabric
This plugin writes metrics to Fabric Eventhouse and Fabric Eventstream artifacts of Real-Time Intelligence in Microsoft Fabric.
Real-Time Intelligence is a SaaS service in Microsoft Fabric that allows you to extract insights and visualize data in motion. It offers an end-to-end solution for event-driven scenarios, streaming data, and data logs.
MongoDB
This plugin writes metrics to MongoDB, automatically creating collections as time-series collections if they don't exist.
This plugin requires MongoDB v5 or later for time series collections.
MQTT Producer
This plugin writes metrics to an MQTT broker, acting as an MQTT producer. The plugin supports MQTT protocol versions 3.1.1 and 5.
In v2.0.12+ of the mosquitto MQTT server, there is a bug requiring the `keep_alive` value to be set non-zero in Telegraf; otherwise, the server will respond with identifier rejected. As a reference, eclipse/paho.golang sets the `keep_alive` to 30.
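A sketch illustrating the mosquitto workaround; the broker address and topic are example values:

```toml
[[outputs.mqtt]]
  ## Broker to connect to and target topic (example values).
  servers = ["localhost:1883"]
  topic = "telegraf"

  ## Non-zero keep-alive to work around the mosquitto v2.0.12+ bug noted above.
  keep_alive = 30
```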
NATS
This plugin writes metrics to subjects of a set of NATS instances in one of the supported data formats.
Nebius Cloud Monitoring
This plugin writes metrics to the Nebius Cloud Monitoring service.
New Relic
This plugin writes metrics to New Relic Insights using the Metrics API. To use this plugin you have to obtain an Insights API Key.
NSQ
This plugin writes metrics to the given topic of an NSQ instance as a producer in one of the supported data formats.
OpenSearch
This plugin writes metrics to an OpenSearch instance via HTTP. It supports OpenSearch releases v1 and v2, but future compatibility with 1.x is not guaranteed; development will instead focus on 2.x support.
Consider using the existing Elasticsearch plugin for 1.x.
OpenTelemetry
This plugin writes metrics to OpenTelemetry servers and agents via gRPC.
OpenTSDB
This plugin writes metrics to an OpenTSDB instance using either the telnet or HTTP mode. Using the HTTP API is recommended since OpenTSDB 2.0.
Parquet
This plugin writes metrics to parquet files. By default, metrics are grouped by metric name and all written to the same file.
If a metric schema does not match the schema in the file it will be dropped.
To learn more about the parquet format, check out the parquet docs as well as a blog post on querying parquet.
PostgreSQL
This plugin writes metrics to a PostgreSQL (or compatible) server managing the schema and automatically updating missing columns.
Prometheus
This plugin starts a Prometheus client and exposes the written metrics on a /metrics endpoint by default. This endpoint can then be polled by a Prometheus server.
Quix
This plugin writes metrics to a Quix endpoint.
Please consult Quix’s official documentation for more details on the Quix platform architecture and concepts.
Redis Time Series
This plugin writes metrics to a Redis time-series server.
Remote File
This plugin writes metrics to files in a remote location using the rclone library; see the plugin documentation for the currently supported backends.
Socket Writer
This plugin writes metrics to a network service, e.g. via UDP or TCP, in one of the supported data formats.
SQL
This plugin writes metrics to a supported SQL database using a simple, hard-coded database schema. There is a table for each metric type with the table name corresponding to the metric name. There is a column per field and a column per tag with an optional column for the metric timestamp.
A row is written for every metric. This means multiple metrics are never merged into a single row, even if they have the same metric name, tags, and timestamp.
The plugin uses Golang’s generic “database/sql” interface and third party drivers. See the driver-specific section for a list of supported drivers and details.
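A minimal sketch, assuming a PostgreSQL target reachable via the pgx driver; the connection string is a placeholder:

```toml
[[outputs.sql]]
  ## Database driver and connection string (placeholder DSN).
  driver = "pgx"
  data_source_name = "postgres://user:password@localhost:5432/telegraf"
```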
Google Cloud Monitoring
This plugin writes metrics to a project in Google Cloud Monitoring (formerly called Stackdriver). Authentication with Google Cloud is required using either a service account or user credentials.
This plugin accesses APIs which are chargeable and might incur costs.
By default, metrics are grouped by the `namespace` variable and metric key, e.g. custom.googleapis.com/telegraf/system/load5. However, this is not the best practice. Setting `metric_name_format = "official"` will produce a more easily queried format of metric_type_prefix/[namespace_]name_key/kind. If the global namespace is not set, it is omitted as well.
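A sketch enabling the official naming format; the project ID is a placeholder:

```toml
[[outputs.stackdriver]]
  ## GCP project to write metrics to (placeholder ID).
  project = "my-gcp-project"

  ## Namespace used in the metric name (example value).
  namespace = "telegraf"

  ## Use the more easily queried official metric name format described above.
  metric_name_format = "official"
```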
ActiveMQ STOMP
This plugin writes metrics to an ActiveMQ broker over STOMP, and also supports Amazon MQ brokers. Metrics can be written in one of the supported data formats.
Sumo Logic
This plugin writes metrics to a Sumo Logic HTTP Source using one of the following data formats (a config sketch follows the list):
- graphite for a Content-Type of application/vnd.sumologic.graphite
- carbon2 for a Content-Type of application/vnd.sumologic.carbon2
- prometheus for a Content-Type of application/vnd.sumologic.prometheus
Syslog
This plugin writes metrics as syslog messages via UDP in RFC5426 format, via TCP in RFC6587 format, or via TLS in RFC5425 format, with or without octet counting framing.
Syslog messages are formatted according to RFC5424, which limits field sizes per its syslog message format section. Messages beyond these sizes may be silently dropped by a strict receiver.
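A minimal sketch; the address is an example TCP target, and the scheme (udp://, tcp://, or tls://) selects the transport described above:

```toml
[[outputs.syslog]]
  ## Transport and target (example address).
  address = "tcp://127.0.0.1:8094"
```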
Amazon Timestream
This plugin writes metrics to the Amazon Timestream service.
Wavefront
This plugin writes metrics to a Wavefront instance or a Wavefront Proxy instance over HTTP or HTTPS.
Websocket
This plugin writes metrics to a WebSocket endpoint in one of the supported data formats.
Yandex Cloud Monitoring
This plugin writes metrics to the Yandex Cloud Monitoring service.
Zabbix
This plugin writes metrics to Zabbix via traps. It has been tested with versions v3.0, v4.0 and v6.0 but should work with newer versions of Zabbix as long as the protocol doesn’t change.