Flux vs InfluxQL
Flux is an alternative to InfluxQL and other SQL-like query languages for querying and analyzing data. Flux uses functional language patterns that make it powerful and flexible and let it overcome many of the limitations of InfluxQL. This article outlines many of the tasks possible with Flux but not with InfluxQL and provides information about Flux and InfluxQL parity.
Possible with Flux
- Joins
- Math across measurements
- Sort by tags
- Group by any column
- Window by calendar months and years
- Work with multiple data sources
- DatePart-like queries
- Pivot
- Histograms
- Covariance
- Cast booleans to integers
- String manipulation and data shaping
- Work with geo-temporal data
Joins
InfluxQL has never supported joins. They can be accomplished using TICKscript,
but even TICKscript’s join capabilities are limited.
Flux’s join() function allows you to join data from any bucket, any measurement, and on any columns,
as long as each data set includes the columns on which they are to be joined.
This opens the door for really powerful and useful operations.
dataStream1 = from(bucket: "bucket1")
    |> range(start: -1h)
    |> filter(fn: (r) =>
        r._measurement == "network" and
        r._field == "bytes-transferred"
    )

dataStream2 = from(bucket: "bucket1")
    |> range(start: -1h)
    |> filter(fn: (r) =>
        r._measurement == "httpd" and
        r._field == "requests-per-sec"
    )

join(
    tables: {d1: dataStream1, d2: dataStream2},
    on: ["_time", "_stop", "_start", "host"]
)
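In the joined output, columns that exist in both input streams (such as _value) are suffixed with the keys given in the tables parameter, so the example above produces _value_d1 and _value_d2 columns that can then be compared or combined.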
For an in-depth walkthrough of using the join()
function, see How to join data with Flux.
Math across measurements
Being able to perform cross-measurement joins also allows you to run calculations using
data from separate measurements – a highly requested feature from the InfluxData community.
The example below takes two data streams from separate measurements, mem and processes,
joins them, then calculates the average amount of memory used per running process:
// Memory used (in bytes)
memUsed = from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) =>
        r._measurement == "mem" and
        r._field == "used"
    )

// Total processes running
procTotal = from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) =>
        r._measurement == "processes" and
        r._field == "total"
    )

// Join memory used with total processes and calculate
// the average memory (in MB) used for running processes.
join(
    tables: {mem: memUsed, proc: procTotal},
    on: ["_time", "_stop", "_start", "host"]
)
    |> map(fn: (r) => ({
        _time: r._time,
        _value: (r._value_mem / r._value_proc) / 1000000
    }))
Sort by tags
InfluxQL’s sorting capabilities are very limited, allowing you only to control the
sort order of time using the ORDER BY time clause.
Flux’s sort() function sorts records based on a list of columns.
Depending on the column type, records are sorted lexicographically, numerically, or chronologically.
from(bucket: "telegraf/autogen")
    |> range(start: -12h)
    |> filter(fn: (r) =>
        r._measurement == "system" and
        r._field == "uptime"
    )
    |> sort(columns: ["region", "host", "_value"])
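sort() also accepts a desc parameter to reverse the sort order. For example, a minimal sketch that sorts records by value from highest to lowest:

from(bucket: "telegraf/autogen")
    |> range(start: -12h)
    |> filter(fn: (r) =>
        r._measurement == "system" and
        r._field == "uptime"
    )
    |> sort(columns: ["_value"], desc: true)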
Group by any column
InfluxQL lets you group by tags or by time intervals, but nothing else.
Flux lets you group by any column in the dataset, including _value.
Use the Flux group() function to define which columns to group data by.
from(bucket: "telegraf/autogen")
    |> range(start: -12h)
    |> filter(fn: (r) => r._measurement == "system" and r._field == "uptime")
    |> group(columns: ["host", "_value"])
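Calling group() with no columns is also useful: it merges all records into a single table, so a subsequent aggregate runs across every series at once. A minimal sketch, assuming you want one overall mean across all hosts:

from(bucket: "telegraf/autogen")
    |> range(start: -12h)
    |> filter(fn: (r) => r._measurement == "system" and r._field == "uptime")
    |> group()
    |> mean()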
Window by calendar months and years
InfluxQL does not support windowing data by calendar months and years due to their varied lengths.
Flux supports calendar month and year duration units (1mo, 1y) and lets you
window and aggregate data by calendar month and year.
from(bucket: "telegraf/autogen")
    |> range(start: -1y)
    |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
    |> aggregateWindow(every: 1mo, fn: mean)
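aggregateWindow() handles the windowing, aggregation, and re-windowing for you. A rough sketch of the same idea using window() directly (the results stay in one table per monthly window; aggregateWindow() additionally merges them back into a single table and assigns timestamps):

from(bucket: "telegraf/autogen")
    |> range(start: -1y)
    |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
    |> window(every: 1mo)
    |> mean()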
Work with multiple data sources
InfluxQL can only query data stored in InfluxDB. Flux can query data from other data sources such as CSV, PostgreSQL, MySQL, Google BigTable, and more. Join that data with data in InfluxDB to enrich query results.
import "csv"
import "sql"
csvData = csv.from(csv: rawCSV)
sqlData = sql.from(
driverName: "postgres",
dataSourceName: "postgresql://user:password@localhost",
query:"SELECT * FROM example_table"
)
data = from(bucket: "telegraf/autogen")
|> range(start: -24h)
|> filter(fn: (r) => r._measurement == "sensor")
auxData = join(tables: {csv: csvData, sql: sqlData}, on: ["sensor_id"])
enrichedData = join(tables: {data: data, aux: auxData}, on: ["sensor_id"])
enrichedData
|> yield(name: "enriched_data")
For an in-depth walkthrough of querying SQL data, see Query SQL data sources.
DatePart-like queries
InfluxQL doesn’t support DatePart-like queries that only return results during specified hours of the day.
The Flux hourSelection() function returns only data with time values in a specified hour range.
from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) =>
        r._measurement == "cpu" and
        r.cpu == "cpu-total"
    )
    |> hourSelection(start: 9, stop: 17)
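hourSelection() composes naturally with other functions. For example, a sketch that averages CPU idle time during business hours only, one point per day (this assumes the Telegraf cpu measurement with a usage_idle field):

from(bucket: "telegraf/autogen")
    |> range(start: -7d)
    |> filter(fn: (r) =>
        r._measurement == "cpu" and
        r._field == "usage_idle" and
        r.cpu == "cpu-total"
    )
    |> hourSelection(start: 9, stop: 17)
    |> aggregateWindow(every: 1d, fn: mean)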
Pivot
Pivoting data tables has never been supported in InfluxQL.
The Flux pivot() function provides the ability to pivot data tables
by specifying rowKey, columnKey, and valueColumn parameters.
from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) =>
        r._measurement == "cpu" and
        r.cpu == "cpu-total"
    )
    |> pivot(
        rowKey: ["_time"],
        columnKey: ["_field"],
        valueColumn: "_value"
    )
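For the common case of pivoting fields into columns keyed by time, the influxdata/influxdb/v1 package provides a fieldsAsCols() shorthand. A minimal sketch, assuming your Flux version includes that package:

import "influxdata/influxdb/v1"

from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) =>
        r._measurement == "cpu" and
        r.cpu == "cpu-total"
    )
    |> v1.fieldsAsCols()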
Histograms
The ability to generate histograms has been a highly requested feature for InfluxQL, but has never been supported.
Flux’s histogram() function uses input data to generate a cumulative histogram,
with support for other histogram types coming in the future.
from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) =>
        r._measurement == "mem" and
        r._field == "used_percent"
    )
    |> histogram(
        buckets: [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
    )
For an example of using Flux to create a cumulative histogram, see Create histograms.
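Rather than listing bucket boundaries by hand, they can be generated with linearBuckets() (or logarithmicBuckets()). A sketch assuming evenly spaced buckets from 10 to 100 and the same buckets parameter as above:

from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) =>
        r._measurement == "mem" and
        r._field == "used_percent"
    )
    |> histogram(buckets: linearBuckets(start: 10.0, width: 10.0, count: 10, infinity: false))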
Covariance
Flux provides functions for simple covariance calculation.
The covariance() function calculates the covariance between two columns,
and the cov() function calculates the covariance between two data streams.
Covariance between two columns
from(bucket: "telegraf/autogen")
    |> range(start: -5m)
    |> covariance(columns: ["x", "y"])
Covariance between two streams of data
table1 = from(bucket: "telegraf/autogen")
    |> range(start: -15m)
    |> filter(fn: (r) => r._measurement == "measurement_1")

table2 = from(bucket: "telegraf/autogen")
    |> range(start: -15m)
    |> filter(fn: (r) => r._measurement == "measurement_2")

cov(x: table1, y: table2, on: ["_time", "_field"])
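To normalize the result, Flux also provides pearsonr(), which computes the Pearson correlation coefficient (covariance divided by the product of the standard deviations) between two streams. Reusing table1 and table2 from above:

pearsonr(x: table1, y: table2, on: ["_time", "_field"])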
Cast booleans to integers
InfluxQL supports type casting, but only for numeric data types (floats to integers and vice versa). Flux type conversion functions provide much broader support for type conversions and let you perform long-requested operations like casting boolean values to integers.
Cast boolean field values to integers
from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) =>
        r._measurement == "m" and
        r._field == "bool_field"
    )
    |> toInt()
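toInt() converts boolean values to integers (true becomes 1, false becomes 0). The same conversion can be done explicitly inside map() with the int() function, which is useful when only specific columns should be converted; a minimal sketch:

from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) =>
        r._measurement == "m" and
        r._field == "bool_field"
    )
    |> map(fn: (r) => ({ r with _value: int(v: r._value) }))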
String manipulation and data shaping
InfluxQL doesn’t support string manipulation when querying data.
The Flux strings package is a collection of functions that operate on string data.
When combined with the map() function, functions in the strings package allow for
operations like string sanitization and normalization.
import "strings"
from(bucket: "telegraf/autogen")
|> range(start: -1h)
|> filter(fn: (r) =>
r._measurement == "weather" and
r._field == "temp"
)
|> map(fn: (r) => ({
r with
location: strings.toTitle(v: r.location),
sensor: strings.replaceAll(v: r.sensor, t: " ", u: "-"),
status: strings.substring(v: r.status, start: 0, end: 8)
}))
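String functions can also be used directly in filter() predicates. For example, a sketch that keeps only records whose value contains the word "error" (the syslog measurement and message field are hypothetical):

import "strings"

from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) => r._measurement == "syslog" and r._field == "message")
    |> filter(fn: (r) => strings.containsStr(v: r._value, substr: "error"))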
Work with geo-temporal data
InfluxQL doesn’t provide functionality for working with geo-temporal data. The Flux Geo package is a collection of functions that let you shape, filter, and group geo-temporal data.
import "experimental/geo"
from(bucket: "geo/autogen")
|> range(start: -1w)
|> filter(fn: (r) => r._measurement == "taxi")
|> geo.shapeData(latField: "latitude", lonField: "longitude", level: 20)
|> geo.filterRows(
region: {lat: 40.69335938, lon: -73.30078125, radius: 20.0},
strict: true
)
|> geo.asTracks(groupBy: ["fare-id"])
InfluxQL and Flux parity
Flux is working towards complete parity with InfluxQL and new functions are being added to that end. The table below shows InfluxQL statements, clauses, and functions along with their equivalent Flux functions.
For a complete list of Flux functions, view all Flux functions.
[Table omitted: InfluxQL statements, clauses, and functions with their equivalent Flux functions.]

* The to() function only writes to InfluxDB 2.0.
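As a quick illustration of how InfluxQL constructs map to Flux, the following two queries return roughly the same result (the measurement and field names are examples):

// InfluxQL
//   SELECT mean("used_percent") FROM "mem"
//   WHERE time > now() - 1h GROUP BY time(10m)

// Roughly equivalent Flux
from(bucket: "telegraf/autogen")
    |> range(start: -1h)
    |> filter(fn: (r) => r._measurement == "mem" and r._field == "used_percent")
    |> aggregateWindow(every: 10m, fn: mean)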