
Scrape Prometheus metrics

To use Flux to scrape Prometheus-formatted metrics from an HTTP-accessible endpoint:

  1. Import the experimental/prometheus package.
  2. Use prometheus.scrape() and specify the url to scrape metrics from.

import "experimental/prometheus"

prometheus.scrape(url: "http://localhost:8086/metrics")

Output structure

prometheus.scrape() returns a stream of tables with the following columns:

  • _time: Data timestamp
  • _measurement: prometheus
  • _field: Prometheus metric name (_bucket is trimmed from histogram metric names)
  • _value: Prometheus metric value
  • url: URL metrics were scraped from
  • Label columns: A column for each Prometheus label. The column label is the label name and the column value is the label value.

Tables are grouped by _measurement, _field, and Label columns.
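
Because of this grouping, you can filter on the _field column (and any label columns) to isolate one metric, or regroup the output. The following is a minimal sketch; the URL and metric name are illustrative and should be replaced with your own endpoint and metric:

import "experimental/prometheus"

// Keep a single metric, then regroup so all rows share one table keyed only by url.
prometheus.scrape(url: "http://localhost:8086/metrics")
    |> filter(fn: (r) => r._field == "task_executor_run_latency_seconds")
    |> group(columns: ["url"])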

Columns with the underscore prefix

Columns with the underscore (_) prefix are considered “system” columns. Some Flux functions require these columns to function properly.
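
For example, filter() commonly reads the _field column and map() reads and writes _value. The following is a minimal sketch (the byte-to-mebibyte conversion is illustrative) that depends on both system columns being present:

import "experimental/prometheus"

// _field and _value are system columns; this pipeline breaks if they are dropped.
prometheus.scrape(url: "http://localhost:8086/metrics")
    |> filter(fn: (r) => r._field == "go_memstats_buck_hash_sys_bytes")
    |> map(fn: (r) => ({r with _value: r._value / 1048576.0}))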

Example Prometheus query results

The following are example Prometheus metrics scraped from the InfluxDB OSS 2.x /metrics endpoint:

# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.42276424e+09
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 5.259247e+06
# HELP task_executor_run_latency_seconds Records the latency between the time the run was due to run and the time the task started execution, by task type
# TYPE task_executor_run_latency_seconds histogram
task_executor_run_latency_seconds_bucket{task_type="system",le="0.25"} 4413
task_executor_run_latency_seconds_bucket{task_type="system",le="0.5"} 11901
task_executor_run_latency_seconds_bucket{task_type="system",le="1"} 12565
task_executor_run_latency_seconds_bucket{task_type="system",le="2.5"} 12823
task_executor_run_latency_seconds_bucket{task_type="system",le="5"} 12844
task_executor_run_latency_seconds_bucket{task_type="system",le="10"} 12864
task_executor_run_latency_seconds_bucket{task_type="system",le="+Inf"} 74429
task_executor_run_latency_seconds_sum{task_type="system"} 4.256783538679698e+11
task_executor_run_latency_seconds_count{task_type="system"} 74429
# HELP task_executor_run_duration The duration in seconds between a run starting and finishing.
# TYPE task_executor_run_duration summary
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.5"} 5.178160855
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.9"} 5.178160855
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.99"} 5.178160855
task_executor_run_duration_sum{taskID="00xx0Xx0xx00XX0x0",task_type="threshold"} 2121.9758301650004
task_executor_run_duration_count{taskID="00xx0Xx0xx00XX0x0",task_type="threshold"} 570

When scraped by Flux, these metrics return the following stream of tables:

_time | _measurement | url | _field | _value
2021-01-01T00:00:00Z | prometheus | http://localhost:8086/metrics | go_memstats_alloc_bytes_total | 1422764240.0

_time | _measurement | url | _field | _value
2021-01-01T00:00:00Z | prometheus | http://localhost:8086/metrics | go_memstats_buck_hash_sys_bytes | 5259247.0

_time | _measurement | task_type | url | le | _field | _value
2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | 0.25 | task_executor_run_latency_seconds | 4413

_time | _measurement | task_type | url | le | _field | _value
2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | 0.5 | task_executor_run_latency_seconds | 11901

_time | _measurement | task_type | url | le | _field | _value
2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | 1 | task_executor_run_latency_seconds | 12565

_time | _measurement | task_type | url | le | _field | _value
2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | 2.5 | task_executor_run_latency_seconds | 12823

_time | _measurement | task_type | url | le | _field | _value
2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | 5 | task_executor_run_latency_seconds | 12844

_time | _measurement | task_type | url | le | _field | _value
2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | +Inf | task_executor_run_latency_seconds | 74429

_time | _measurement | task_type | url | _field | _value
2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | task_executor_run_latency_seconds_sum | 425678353867.9698

_time | _measurement | task_type | url | _field | _value
2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | task_executor_run_latency_seconds_count | 74429

_time | _measurement | task_type | taskID | url | quantile | _field | _value
2021-01-01T00:00:00Z | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | 0.5 | task_executor_run_duration | 5.178160855

_time | _measurement | task_type | taskID | url | quantile | _field | _value
2021-01-01T00:00:00Z | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | 0.9 | task_executor_run_duration | 5.178160855

_time | _measurement | task_type | taskID | url | quantile | _field | _value
2021-01-01T00:00:00Z | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | 0.99 | task_executor_run_duration | 5.178160855

_time | _measurement | task_type | taskID | url | _field | _value
2021-01-01T00:00:00Z | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | task_executor_run_duration_sum | 2121.9758301650004

_time | _measurement | task_type | taskID | url | _field | _value
2021-01-01T00:00:00Z | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | task_executor_run_duration_count | 570

Different data structures for scraped Prometheus metrics

Telegraf and InfluxDB provide tools that scrape Prometheus metrics and store them in InfluxDB. Depending on the tool and configuration you use to scrape metrics, the resulting data structure may differ from the structure returned by prometheus.scrape() described above.

For information about the different data structures of scraped Prometheus metrics stored in InfluxDB, see InfluxDB Prometheus metric parsing formats.

Write Prometheus metrics to InfluxDB

To write scraped Prometheus metrics to InfluxDB:

  1. Use prometheus.scrape() to scrape Prometheus metrics.
  2. Use to() to write the scraped metrics to InfluxDB.

import "experimental/prometheus"

prometheus.scrape(url: "http://example.com/metrics")
    |> to(bucket: "example-bucket", host: "http://localhost:8086", org: "example-org", token: "mYsuP3R5eCR37t0K3n")
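
If you prefer not to hard-code an API token in the script, you can read it from the InfluxDB secrets store with the influxdata/influxdb/secrets package. The following is a minimal sketch; the secret key name PROMETHEUS_DEMO_TOKEN is an assumed placeholder:

import "experimental/prometheus"
import "influxdata/influxdb/secrets"

// Retrieve the API token from the InfluxDB secrets store instead of embedding it.
token = secrets.get(key: "PROMETHEUS_DEMO_TOKEN")

prometheus.scrape(url: "http://example.com/metrics")
    |> to(bucket: "example-bucket", host: "http://localhost:8086", org: "example-org", token: token)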

Write Prometheus metrics to InfluxDB at regular intervals

To scrape Prometheus metrics and write them to InfluxDB at regular intervals, run prometheus.scrape() in an InfluxDB task.

import "experimental/prometheus"

option task = {name: "Scrape Prometheus metrics", every: 10s}

prometheus.scrape(url: "http://example.com/metrics")
    |> to(bucket: "example-bucket")
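
To scrape more than one endpoint in the same task, one option is to combine the results of multiple prometheus.scrape() calls with union(). The following is a minimal sketch; both endpoint URLs are placeholders:

import "experimental/prometheus"

option task = {name: "Scrape Prometheus metrics", every: 10s}

// Scrape two endpoints and write the combined results to the same bucket.
union(tables: [
    prometheus.scrape(url: "http://example.com/metrics"),
    prometheus.scrape(url: "http://example2.com/metrics"),
])
    |> to(bucket: "example-bucket")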
