Get started with InfluxDB tasks

This page documents an earlier version of InfluxDB. InfluxDB v2.6 is the latest stable version.

An InfluxDB task is a scheduled Flux script that takes a stream of input data, modifies or analyzes it in some way, then writes the modified data back to InfluxDB or performs other actions.

This article walks through writing a basic InfluxDB task that downsamples data and stores it in a new bucket.

Components of a task

Every InfluxDB task needs the following components. Their form and order can vary, but they are all essential parts of a task.

Define task options

Task options define specific information about the task. The example below illustrates how task options are defined in your Flux script:

option task = {name: "downsample_5m_precision", every: 1h, offset: 0m}

See Task configuration options for detailed information about each option.

The InfluxDB UI provides a form for defining task options.
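
Task options also accept a cron property in place of every if you prefer cron scheduling syntax. For example, the option record below runs the task at the top of every hour:

option task = {name: "downsample_5m_precision", cron: "0 * * * *", offset: 0m}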

Define a data source

  1. Use from() to query data from InfluxDB. Use other Flux input functions to retrieve data from other sources.
  2. Use range() to define the time range to return data from.
  3. Use filter() to filter data based on column values.

data = from(bucket: "example-bucket")
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "mem" and r.host == "myHost")

Use task options in your Flux script

Task options are defined in a task option record and can be referenced in your Flux script. In the example above, the time range is defined as -task.every.

task.every is dot notation that references the every property of the task option record. every is defined as 1h, therefore -task.every equates to -1h.

Defining values with task options makes your task easier to reuse: update an option once and every reference to it changes as well.
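
For example, changing the every option in one place adjusts both the task schedule and the queried time range:

option task = {name: "downsample_5m_precision", every: 6h, offset: 0m}

from(bucket: "example-bucket")
    // -task.every now equates to -6h
    |> range(start: -task.every)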

Process or transform your data

Tasks automatically process or transform data at regular intervals. Data processing can include operations such as downsampling data, detecting anomalies, sending notifications, and more.
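
For example, a task might use map() to convert raw values before storing them. The sketch below assumes the mem field values queried earlier are reported in bytes and rewrites them as megabytes:

data
    |> map(fn: (r) => ({r with _value: r._value / 1048576.0}))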

Use offset to account for latent data

Use the offset task option to account for potentially latent data (like data from edge devices). A task that runs at one hour intervals (every: 1h) with an offset of five minutes (offset: 5m) executes 5 minutes after the hour, but queries data from the original one hour interval.

The task example below downsamples data by averaging it over set intervals. It uses aggregateWindow() to group points into 5 minute windows and calculate the average of each window with mean().

option task = {name: "downsample_5m_precision", every: 1h, offset: 5m}

from(bucket: "example-bucket")
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "mem" and r.host == "myHost")
    |> aggregateWindow(every: 5m, fn: mean)

See Common tasks for examples of tasks commonly used with InfluxDB.

Define a destination

In most cases, you’ll want to send and store data after the task has transformed it. The destination could be a separate InfluxDB measurement or bucket.

The example below uses to() to write the transformed data back to another InfluxDB bucket:

// ...
    |> to(bucket: "example-downsampled", org: "my-org")

To write data into InfluxDB, to() requires the following columns:

  • _time
  • _measurement
  • _field
  • _value

You can also write data to other destinations using Flux output functions.
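
For example, to store the transformed data under a different measurement name, use set() to overwrite the _measurement column before calling to(). The measurement name below is illustrative:

// ...
    |> set(key: "_measurement", value: "mem_downsampled")
    |> to(bucket: "example-downsampled", org: "my-org")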

Full example task script

Below is a task script that combines all of the components described above:

// Task options
option task = {name: "downsample_5m_precision", every: 1h, offset: 0m}

// Data source
from(bucket: "example-bucket")
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "mem" and r.host == "myHost")
    // Data processing
    |> aggregateWindow(every: 5m, fn: mean)
    // Data destination
    |> to(bucket: "example-downsampled")
