Scrape Prometheus metrics
To use Flux to scrape Prometheus-formatted metrics from an HTTP-accessible endpoint:

- Import the experimental/prometheus package.
- Use prometheus.scrape and specify the url to scrape metrics from.
import "experimental/prometheus"
prometheus.scrape(url: "http://localhost:8086/metrics")
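
If you need to collect metrics from more than one endpoint in a single query, you can combine multiple scrapes with union(). The following is a minimal sketch with placeholder endpoint URLs; the url column in the output identifies which endpoint each row came from.

import "experimental/prometheus"

// Combine metrics scraped from two hypothetical endpoints into one stream of tables.
// The url column distinguishes rows from each endpoint.
union(
    tables: [
        prometheus.scrape(url: "http://host1:8086/metrics"),
        prometheus.scrape(url: "http://host2:8086/metrics"),
    ],
)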
Output structure
prometheus.scrape() returns a stream of tables with the following columns:

- _time: Data timestamp
- _measurement: prometheus
- _field: Prometheus metric name (_bucket is trimmed from histogram metric names)
- _value: Prometheus metric value
- url: URL metrics were scraped from
- Label columns: A column for each Prometheus label.
  The column label is the label name and the column value is the label value.

Tables are grouped by _measurement, _field, and Label columns.
Columns with the underscore prefix
Columns with the underscore (_) prefix are considered “system” columns.
Some Flux functions require these columns to function properly.
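
For example, because each scraped metric name is stored in the _field system column, you can isolate a single metric by filtering on _field. The following is a minimal sketch using the local InfluxDB /metrics endpoint shown above:

import "experimental/prometheus"

// Scrape the endpoint and keep only the go_memstats_alloc_bytes_total metric.
prometheus.scrape(url: "http://localhost:8086/metrics")
    |> filter(fn: (r) => r._field == "go_memstats_alloc_bytes_total")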
Example Prometheus query results
The following are example Prometheus metrics scraped from the InfluxDB OSS 2.x /metrics
endpoint:
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 1.42276424e+09
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 5.259247e+06
# HELP task_executor_run_latency_seconds Records the latency between the time the run was due to run and the time the task started execution, by task type
# TYPE task_executor_run_latency_seconds histogram
task_executor_run_latency_seconds_bucket{task_type="system",le="0.25"} 4413
task_executor_run_latency_seconds_bucket{task_type="system",le="0.5"} 11901
task_executor_run_latency_seconds_bucket{task_type="system",le="1"} 12565
task_executor_run_latency_seconds_bucket{task_type="system",le="2.5"} 12823
task_executor_run_latency_seconds_bucket{task_type="system",le="5"} 12844
task_executor_run_latency_seconds_bucket{task_type="system",le="10"} 12864
task_executor_run_latency_seconds_bucket{task_type="system",le="+Inf"} 74429
task_executor_run_latency_seconds_sum{task_type="system"} 4.256783538679698e+11
task_executor_run_latency_seconds_count{task_type="system"} 74429
# HELP task_executor_run_duration The duration in seconds between a run starting and finishing.
# TYPE task_executor_run_duration summary
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.5"} 5.178160855
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.9"} 5.178160855
task_executor_run_duration{taskID="00xx0Xx0xx00XX0x0",task_type="threshold",quantile="0.99"} 5.178160855
task_executor_run_duration_sum{taskID="00xx0Xx0xx00XX0x0",task_type="threshold"} 2121.9758301650004
task_executor_run_duration_count{taskID="00xx0Xx0xx00XX0x0",task_type="threshold"} 570
When scraped by Flux, these metrics return the following stream of tables:
| _time | _measurement | url | _field | _value |
| --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | http://localhost:8086/metrics | go_memstats_alloc_bytes_total | 1422764240.0 |

| _time | _measurement | url | _field | _value |
| --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | http://localhost:8086/metrics | go_memstats_buck_hash_sys_bytes | 5259247.0 |

| _time | _measurement | task_type | url | le | _field | _value |
| --- | --- | --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | 0.25 | task_executor_run_latency_seconds | 4413 |

| _time | _measurement | task_type | url | le | _field | _value |
| --- | --- | --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | 0.5 | task_executor_run_latency_seconds | 11901 |

| _time | _measurement | task_type | url | le | _field | _value |
| --- | --- | --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | 1 | task_executor_run_latency_seconds | 12565 |

| _time | _measurement | task_type | url | le | _field | _value |
| --- | --- | --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | 2.5 | task_executor_run_latency_seconds | 12823 |

| _time | _measurement | task_type | url | le | _field | _value |
| --- | --- | --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | 5 | task_executor_run_latency_seconds | 12844 |

| _time | _measurement | task_type | url | le | _field | _value |
| --- | --- | --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | +Inf | task_executor_run_latency_seconds | 74429 |

| _time | _measurement | task_type | url | _field | _value |
| --- | --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | task_executor_run_latency_seconds_sum | 425678353867.9698 |

| _time | _measurement | task_type | url | _field | _value |
| --- | --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | system | http://localhost:8086/metrics | task_executor_run_latency_seconds_count | 74429 |

| _time | _measurement | task_type | taskID | url | quantile | _field | _value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | 0.5 | task_executor_run_duration | 5.178160855 |

| _time | _measurement | task_type | taskID | url | quantile | _field | _value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | 0.9 | task_executor_run_duration | 5.178160855 |

| _time | _measurement | task_type | taskID | url | quantile | _field | _value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | 0.99 | task_executor_run_duration | 5.178160855 |

| _time | _measurement | task_type | taskID | url | _field | _value |
| --- | --- | --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | task_executor_run_duration_sum | 2121.9758301650004 |

| _time | _measurement | task_type | taskID | url | _field | _value |
| --- | --- | --- | --- | --- | --- | --- |
| 2021-01-01T00:00:00Z | prometheus | threshold | 00xx0Xx0xx00XX0x0 | http://localhost:8086/metrics | task_executor_run_duration_count | 570 |
Different data structures for scraped Prometheus metrics
Telegraf and InfluxDB
provide tools that scrape Prometheus metrics and store them in InfluxDB.
Depending on the tool and configuration you use to scrape metrics,
the resulting data structure may differ from the structure returned by prometheus.scrape()
described above.
For information about the different data structures of scraped Prometheus metrics
stored in InfluxDB, see InfluxDB Prometheus metric parsing formats.
Write Prometheus metrics to InfluxDB
To write scraped Prometheus metrics to InfluxDB:
- Use prometheus.scrape to scrape Prometheus metrics.
- Use to() to write the scraped metrics to InfluxDB.
import "experimental/prometheus"
prometheus.scrape(url: "http://example.com/metrics")
|> to(bucket: "example-bucket", host: "http://localhost:8086", org: "example-org", token: "mYsuP3R5eCR37t0K3n")
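
To avoid hard-coding the API token in your script, you can store it in the InfluxDB secret store and retrieve it with the influxdata/influxdb/secrets package. The sketch below assumes a secret with the key INFLUX_TOKEN already exists:

import "experimental/prometheus"
import "influxdata/influxdb/secrets"

// Retrieve the API token from the InfluxDB secret store
// (assumes a secret named INFLUX_TOKEN has been created).
token = secrets.get(key: "INFLUX_TOKEN")

prometheus.scrape(url: "http://example.com/metrics")
    |> to(bucket: "example-bucket", host: "http://localhost:8086", org: "example-org", token: token)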
Write Prometheus metrics to InfluxDB at regular intervals
To scrape Prometheus metrics and write them to InfluxDB at regular intervals,
scrape Prometheus metrics in an InfluxDB task.
import "experimental/prometheus"
option task = {name: "Scrape Prometheus metrics", every: 10s}
prometheus.scrape(url: "http://example.com/metrics")
|> to(bucket: "example-bucket")
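
If you only need a subset of the scraped metrics, you can filter before writing. The following sketch assumes you only want the task_executor_run_latency_seconds histogram shown earlier and combines the scrape, filter, and write steps in a task:

import "experimental/prometheus"

option task = {name: "Scrape task latency metrics", every: 10s}

// Scrape all metrics, keep only one metric family, and write it to InfluxDB.
prometheus.scrape(url: "http://example.com/metrics")
    |> filter(fn: (r) => r._field == "task_executor_run_latency_seconds")
    |> to(bucket: "example-bucket")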