The following guides walk through both common and complex queries and use cases for Flux.
Use `filter()` to query data based on fields, tags, or any other column value. `filter()` performs operations similar to the `SELECT` statement and the `WHERE` clause in InfluxQL and other SQL-like query languages.

```js
from(bucket: "example-bucket")
    |> range(start: -1h)
    |> filter(fn: (r) => r._measurement == "example-measurement" and r.tag == "example-tag")
    |> filter(fn: (r) => r._field == "example-field")
```
Read more
Use `group()` to group data with common values in specific columns.

```js
data
    |> group(columns: ["host"], mode: "by")
```

Input:

| _time | host | _value |
| --- | --- | --- |
| 2020-01-01T00:01:00Z | host1 | 1.0 |
| 2020-01-01T00:01:00Z | host2 | 2.0 |
| 2020-01-01T00:02:00Z | host1 | 1.0 |
| 2020-01-01T00:02:00Z | host2 | 3.0 |

Output:

| _time | host | _value |
| --- | --- | --- |
| 2020-01-01T00:01:00Z | host1 | 1.0 |
| 2020-01-01T00:02:00Z | host1 | 1.0 |

| _time | host | _value |
| --- | --- | --- |
| 2020-01-01T00:01:00Z | host2 | 2.0 |
| 2020-01-01T00:02:00Z | host2 | 3.0 |
Read more
Use `sort()` to order records within each table by specific columns and `limit()` to limit the number of records in output tables to a fixed number, `n`.

```js
data
    |> sort(columns: ["host", "_value"])
    |> limit(n: 4)
```

Input:

| _time | host | _value |
| --- | --- | --- |
| 2020-01-01T00:01:00Z | A | 1.0 |
| 2020-01-01T00:02:00Z | B | 1.2 |
| 2020-01-01T00:03:00Z | A | 1.8 |
| 2020-01-01T00:04:00Z | B | 0.9 |
| 2020-01-01T00:05:00Z | B | 1.4 |
| 2020-01-01T00:06:00Z | B | 2.0 |

Output:

| _time | host | _value |
| --- | --- | --- |
| 2020-01-01T00:03:00Z | A | 1.8 |
| 2020-01-01T00:01:00Z | A | 1.0 |
| 2020-01-01T00:06:00Z | B | 2.0 |
| 2020-01-01T00:05:00Z | B | 1.4 |
Read more
This guide walks through windowing and aggregating data with Flux and outlines how it shapes your data in the process.
```js
data
    |> aggregateWindow(every: 20m, fn: mean)
```

Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:00:00Z | 250 |
| 2020-01-01T00:04:00Z | 160 |
| 2020-01-01T00:12:00Z | 150 |
| 2020-01-01T00:19:00Z | 220 |
| 2020-01-01T00:32:00Z | 200 |
| 2020-01-01T00:51:00Z | 290 |
| 2020-01-01T01:00:00Z | 340 |

Output:

| _time | _value |
| --- | --- |
| 2020-01-01T00:20:00Z | 195 |
| 2020-01-01T00:40:00Z | 200 |
| 2020-01-01T01:00:00Z | 290 |
| 2020-01-01T01:20:00Z | 340 |
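`aggregateWindow()` is a helper for this common pattern. A roughly equivalent manual pipeline (a sketch, using the same 20-minute windows and `mean` as above) windows the data, aggregates each window, copies each window's stop time into `_time`, and then "unwindows" the results:

```js
data
    |> window(every: 20m)
    |> mean()
    |> duplicate(column: "_stop", as: "_time")
    |> window(every: inf)
```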
Read more
Flux provides functions that let you explore the structure and schema of your data stored in InfluxDB.
import "influxdata/influxdb/schema"
// List buckets
buckets()
// List measurements
schema.measurements(bucket: "example-bucket")
// List field keys
schema.fieldKeys(bucket: "example-bucket")
// List tag keys
schema.tagKeys(bucket: "example-bucket")
// List tag values
schema.tagValues(bucket: "example-bucket", tag: "example-tag")
Read more
Use `map()` to remap column values and apply mathematical operations.

```js
data
    |> map(fn: (r) => ({r with _value: r._value * r._value}))
```

Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | 2 |
| 2020-01-01T00:02:00Z | 4 |
| 2020-01-01T00:03:00Z | 3 |
| 2020-01-01T00:04:00Z | 5 |

Output:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | 4 |
| 2020-01-01T00:02:00Z | 16 |
| 2020-01-01T00:03:00Z | 9 |
| 2020-01-01T00:04:00Z | 25 |
Read more
Use `pivot()` or `join()` and the `map()` function to align operand values into rows and calculate a percentage.

```js
data
    |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
    |> map(
        fn: (r) => ({
            _time: r._time,
            _field: "used_percent",
            _value: float(v: r.used) / float(v: r.total) * 100.0,
        }),
    )
```

Input:

| _time | _field | _value |
| --- | --- | --- |
| 2020-01-01T00:00:00Z | used | 2.5 |
| 2020-01-01T00:00:10Z | used | 3.1 |
| 2020-01-01T00:00:20Z | used | 4.2 |

| _time | _field | _value |
| --- | --- | --- |
| 2020-01-01T00:00:00Z | total | 8.0 |
| 2020-01-01T00:00:10Z | total | 8.0 |
| 2020-01-01T00:00:20Z | total | 8.0 |

Output:

| _time | _field | _value |
| --- | --- | --- |
| 2020-01-01T00:00:00Z | used_percent | 31.25 |
| 2020-01-01T00:00:10Z | used_percent | 38.75 |
| 2020-01-01T00:00:20Z | used_percent | 52.50 |
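The `join()` approach produces the same result by joining the two field streams on `_time`; columns that share a name get suffixed with the table keys. A minimal sketch, assuming the `used` and `total` fields have already been queried into variables of the same names:

```js
join(tables: {used: used, total: total}, on: ["_time"])
    |> map(
        fn: (r) => ({
            _time: r._time,
            _field: "used_percent",
            _value: float(v: r._value_used) / float(v: r._value_total) * 100.0,
        }),
    )
```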
Read more
Use `increase()` to track increases across multiple columns in a table. This function is especially useful when tracking changes in counter values that wrap over time or periodically reset.
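The output below comes from piping the input through `increase()` with its default parameters:

```js
data
    |> increase()
```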
Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | 1 |
| 2020-01-01T00:02:00Z | 2 |
| 2020-01-01T00:03:00Z | 8 |
| 2020-01-01T00:04:00Z | 10 |
| 2020-01-01T00:05:00Z | 0 |
| 2020-01-01T00:06:00Z | 4 |

Output:

| _time | _value |
| --- | --- |
| 2020-01-01T00:02:00Z | 1 |
| 2020-01-01T00:03:00Z | 7 |
| 2020-01-01T00:04:00Z | 9 |
| 2020-01-01T00:05:00Z | 9 |
| 2020-01-01T00:06:00Z | 13 |
Read more
Use `movingAverage()` or `timedMovingAverage()` to return the moving average of data.

```js
data
    |> movingAverage(n: 3)
```

Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | 1.0 |
| 2020-01-01T00:02:00Z | 1.2 |
| 2020-01-01T00:03:00Z | 1.8 |
| 2020-01-01T00:04:00Z | 0.9 |
| 2020-01-01T00:05:00Z | 1.4 |
| 2020-01-01T00:06:00Z | 2.0 |

Output:

| _time | _value |
| --- | --- |
| 2020-01-01T00:03:00Z | 1.33 |
| 2020-01-01T00:04:00Z | 1.30 |
| 2020-01-01T00:05:00Z | 1.36 |
| 2020-01-01T00:06:00Z | 1.43 |
```js
data
    |> timedMovingAverage(every: 2m, period: 4m)
```

Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | 1.0 |
| 2020-01-01T00:02:00Z | 1.2 |
| 2020-01-01T00:03:00Z | 1.8 |
| 2020-01-01T00:04:00Z | 0.9 |
| 2020-01-01T00:05:00Z | 1.4 |
| 2020-01-01T00:06:00Z | 2.0 |

Output:

| _time | _value |
| --- | --- |
| 2020-01-01T00:02:00Z | 1.000 |
| 2020-01-01T00:04:00Z | 1.333 |
| 2020-01-01T00:06:00Z | 1.325 |
| 2020-01-01T00:06:00Z | 1.150 |
Read more
Use `derivative()` to calculate the rate of change between subsequent values, or `aggregate.rate()` to calculate the average rate of change per window of time. If time between points varies, these functions normalize points to a common time interval, making values easily comparable.

```js
data
    |> derivative(unit: 1m, nonNegative: true)
```

Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:00:00Z | 250 |
| 2020-01-01T00:04:00Z | 160 |
| 2020-01-01T00:12:00Z | 150 |
| 2020-01-01T00:19:00Z | 220 |
| 2020-01-01T00:32:00Z | 200 |
| 2020-01-01T00:51:00Z | 290 |
| 2020-01-01T01:00:00Z | 340 |

Output:

| _time | _value |
| --- | --- |
| 2020-01-01T00:04:00Z | |
| 2020-01-01T00:12:00Z | |
| 2020-01-01T00:19:00Z | 10.0 |
| 2020-01-01T00:32:00Z | |
| 2020-01-01T00:51:00Z | 4.74 |
| 2020-01-01T01:00:00Z | 5.56 |
import "experimental/aggregate"
data
|> aggregate.rate(every: 20m, unit: 1m)
Input:
_time | _value |
---|
2020-01-01T00:00:00Z | 250 |
2020-01-01T00:04:00Z | 160 |
2020-01-01T00:12:00Z | 150 |
2020-01-01T00:19:00Z | 220 |
2020-01-01T00:32:00Z | 200 |
2020-01-01T00:51:00Z | 290 |
2020-01-01T01:00:00Z | 340 |
Output:
_time | _value |
---|
2020-01-01T00:20:00Z | |
2020-01-01T00:40:00Z | 10.0 |
2020-01-01T01:00:00Z | 4.74 |
2020-01-01T01:20:00Z | 5.56 |
Read more
Use `histogram()` to create cumulative histograms with Flux.

```js
data
    |> histogram(
        column: "_value",
        upperBoundColumn: "le",
        countColumn: "_value",
        bins: [100.0, 200.0, 300.0, 400.0],
    )
```

Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:00:00Z | 250.0 |
| 2020-01-01T00:01:00Z | 160.0 |
| 2020-01-01T00:02:00Z | 150.0 |
| 2020-01-01T00:03:00Z | 220.0 |
| 2020-01-01T00:04:00Z | 200.0 |
| 2020-01-01T00:05:00Z | 290.0 |
| 2020-01-01T01:00:00Z | 340.0 |

Output:

| le | _value |
| --- | --- |
| 100.0 | 0.0 |
| 200.0 | 3.0 |
| 300.0 | 6.0 |
| 400.0 | 7.0 |
Read more
Use the `fill()` function to replace null values.

```js
data
    |> fill(usePrevious: true)
```

Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | null |
| 2020-01-01T00:02:00Z | 0.8 |
| 2020-01-01T00:03:00Z | null |
| 2020-01-01T00:04:00Z | null |
| 2020-01-01T00:05:00Z | 1.4 |

Output:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | null |
| 2020-01-01T00:02:00Z | 0.8 |
| 2020-01-01T00:03:00Z | 0.8 |
| 2020-01-01T00:04:00Z | 0.8 |
| 2020-01-01T00:05:00Z | 1.4 |
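To substitute a fixed value instead of the previous non-null value, pass `fill()` a `value` parameter. The tables below show the same input filled with `0.0`:

```js
data
    |> fill(value: 0.0)
```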
Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | null |
| 2020-01-01T00:02:00Z | 0.8 |
| 2020-01-01T00:03:00Z | null |
| 2020-01-01T00:04:00Z | null |
| 2020-01-01T00:05:00Z | 1.4 |

Output:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | 0.0 |
| 2020-01-01T00:02:00Z | 0.8 |
| 2020-01-01T00:03:00Z | 0.0 |
| 2020-01-01T00:04:00Z | 0.0 |
| 2020-01-01T00:05:00Z | 1.4 |
Read more
Use `median()` to return a value representing the `0.5` quantile (50th percentile), or median, of input data.
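A minimal call with default parameters:

```js
data
    |> median()
```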
Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | 1.0 |
| 2020-01-01T00:02:00Z | 1.0 |
| 2020-01-01T00:03:00Z | 2.0 |
| 2020-01-01T00:04:00Z | 3.0 |
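Output (the midpoint of the two middle values; as an aggregate, `median()` drops the `_time` column):

| _value |
| --- |
| 1.5 |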
Read more
Use the `quantile()` function to return all values within the `q` quantile or percentile of input data.

```js
data
    |> quantile(q: 0.99, method: "estimate_tdigest")
```

Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | 1.0 |
| 2020-01-01T00:02:00Z | 1.0 |
| 2020-01-01T00:03:00Z | 2.0 |
| 2020-01-01T00:04:00Z | 3.0 |
Read more
This guide walks through joining data with Flux and outlines how it shapes your data in the process.
import "join"
import "sql"
left =
from(bucket: "example-bucket-1")
|> range(start: "-1h")
|> filter(fn: (r) => r._measurement == "example-m")
|> filter(fn: (r) => r._field == "example-f")
|> drop(columns: ["_measurement", "_field"])
right =
sql.from(
driverName: "postgres",
dataSourceName: "postgresql://username:password@localhost:5432",
query: "SELECT * FROM example_table",
)
join.inner(
left: left |> group(),
right: right,
on: (l, r) => l.sensorID == r.ID,
as: (l, r) => ({l with expired: r.expired}),
)
|> group(columns: ["_time", "_value"], mode: "except")
Input:

left

| _time | sensorID | _value |
| --- | --- | --- |
| 2020-01-01T00:01:00Z | 1234 | 1 |
| 2020-01-01T00:02:00Z | 1234 | 2 |
| 2020-01-01T00:03:00Z | 1234 | 1 |
| 2020-01-01T00:04:00Z | 1234 | 3 |

| _time | sensorID | _value |
| --- | --- | --- |
| 2020-01-01T00:01:00Z | 5678 | 2 |
| 2020-01-01T00:02:00Z | 5678 | 5 |
| 2020-01-01T00:03:00Z | 5678 | 1 |
| 2020-01-01T00:04:00Z | 5678 | 8 |

right

| ID | expired | serviced |
| --- | --- | --- |
| 1234 | false | 2022-01-01 |
| 5678 | true | 2022-01-01 |

Output:

| _time | sensorID | _value | expired |
| --- | --- | --- | --- |
| 2020-01-01T00:01:00Z | 1234 | 1 | false |
| 2020-01-01T00:02:00Z | 1234 | 2 | false |
| 2020-01-01T00:03:00Z | 1234 | 1 | false |
| 2020-01-01T00:04:00Z | 1234 | 3 | false |

| _time | sensorID | _value | expired |
| --- | --- | --- | --- |
| 2020-01-01T00:01:00Z | 5678 | 2 | true |
| 2020-01-01T00:02:00Z | 5678 | 5 | true |
| 2020-01-01T00:03:00Z | 5678 | 1 | true |
| 2020-01-01T00:04:00Z | 5678 | 8 | true |
Read more
Use the `cumulativeSum()` function to calculate a running total of values.
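The running total below comes from the default call:

```js
data
    |> cumulativeSum()
```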
Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | 1 |
| 2020-01-01T00:02:00Z | 2 |
| 2020-01-01T00:03:00Z | 1 |
| 2020-01-01T00:04:00Z | 3 |

Output:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | 1 |
| 2020-01-01T00:02:00Z | 3 |
| 2020-01-01T00:03:00Z | 4 |
| 2020-01-01T00:04:00Z | 7 |
Read more
Use `first()` or `last()` to return the first or last point in an input table.
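The first pair of tables shows the default `first()` call:

```js
data
    |> first()
```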
Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | 1.0 |
| 2020-01-01T00:02:00Z | 1.0 |
| 2020-01-01T00:03:00Z | 2.0 |
| 2020-01-01T00:04:00Z | 3.0 |

Output:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | 1.0 |
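The second pair shows `last()`, which returns the final record instead:

```js
data
    |> last()
```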
Input:

| _time | _value |
| --- | --- |
| 2020-01-01T00:01:00Z | 1.0 |
| 2020-01-01T00:02:00Z | 1.0 |
| 2020-01-01T00:03:00Z | 2.0 |
| 2020-01-01T00:04:00Z | 3.0 |

Output:

| _time | _value |
| --- | --- |
| 2020-01-01T00:04:00Z | 3.0 |
Read more
Use the Flux `exists` operator to check if a row record contains a column or if that column's value is `null`.

Filter null values:

```js
data
    |> filter(fn: (r) => exists r._value)
```
Read more
Create your own custom Flux functions to transform and operate on data.
```js
multByX = (tables=<-, x) => tables
    |> map(fn: (r) => ({r with _value: r._value * x}))

data
    |> multByX(x: 2.0)
```
Read more
Use Flux dynamic query functions to extract scalar values from Flux query output. This lets you, for example, dynamically set variables using query results.
```js
scalarValue = (tables=<-) => {
    _record = tables
        |> findRecord(fn: (key) => true, idx: 0)

    return _record._value
}
```
Read more
Flux provides several functions to help monitor states and state changes in your data.
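For example, `stateCount()` counts consecutive records that match a predicate, and `stateDuration()` measures how long that state persists. A minimal sketch, assuming a numeric `_value` field where readings above `90.0` represent the state of interest:

```js
data
    |> stateCount(fn: (r) => r._value > 90.0, column: "stateCount")
    |> stateDuration(fn: (r) => r._value > 90.0, column: "stateDuration", unit: 1s)
```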
Use Flux to process and operate on timestamps.
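For example, the `date` package truncates and extracts timestamp components, and `uint()`/`time()` convert between time values and nanosecond epoch integers. A short sketch:

```js
import "date"

// Truncate a timestamp to the hour
date.truncate(t: 2020-01-01T12:34:56Z, unit: 1h)

// Extract the hour from a timestamp
date.hour(t: 2020-01-01T12:34:56Z)

// Convert a time to a nanosecond epoch integer and back
uint(v: 2020-01-01T00:00:00Z)
time(v: 1577836800000000000)
```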
The Flux `sql` package provides functions for working with SQL data sources. Use `sql.from()` to query SQL databases like PostgreSQL, MySQL, Snowflake, SQLite, Microsoft SQL Server, Amazon Athena, and Google BigQuery.

```js
import "sql"

sql.from(
    driverName: "postgres",
    dataSourceName: "postgresql://user:password@localhost",
    query: "SELECT * FROM example_table",
)
```
Read more
This guide describes how to use Flux conditional expressions, such as `if`, `else`, and `then`, to query and transform data. Flux evaluates statements from left to right and stops evaluating once a condition matches.

```js
if color == "green" then "008000" else "ffffff"
```
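Conditional expressions are most often used inside `map()` to set a column based on row values. A minimal sketch, assuming a numeric `_value` and illustrative threshold levels:

```js
data
    |> map(
        fn: (r) => ({r with
            level: if r._value >= 95.0 then
                "critical"
            else if r._value >= 85.0 then
                "warning"
            else
                "normal",
        }),
    )
```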
Read more
This guide walks through using regular expressions in evaluation logic in Flux functions.
```js
data
    |> filter(fn: (r) => r.tag =~ /^foo[1-3]/)
```

Input:

| _time | tag | _value |
| --- | --- | --- |
| 2020-01-01T00:01:00Z | foo1 | 1.0 |
| 2020-01-01T00:02:00Z | foo5 | 1.2 |
| 2020-01-01T00:03:00Z | bar3 | 1.8 |
| 2020-01-01T00:04:00Z | foo3 | 0.9 |
| 2020-01-01T00:05:00Z | foo2 | 1.4 |
| 2020-01-01T00:06:00Z | bar1 | 2.0 |

Output:

| _time | tag | _value |
| --- | --- | --- |
| 2020-01-01T00:01:00Z | foo1 | 1.0 |
| 2020-01-01T00:04:00Z | foo3 | 0.9 |
| 2020-01-01T00:05:00Z | foo2 | 1.4 |
Read more
Use the Flux Geo package to filter geo-temporal data and group by geographic location or track.
import "experimental/geo"
sampleGeoData
|> geo.filterRows(region: {lat: 30.04, lon: 31.23, radius: 200.0})
|> geo.groupByArea(newColumn: "geoArea", level: 5)
Read more
Use `runtime.version()` to return the version of Flux installed in InfluxDB.

```js
import "array"
import "runtime"

array.from(rows: [{version: runtime.version()}])
```
Read more