
experimental.integral() function

experimental.integral() is subject to change at any time.

experimental.integral() computes the area under the curve per unit of time of subsequent non-null records.

The curve is defined using _time as the domain and record values as the range.
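To illustrate the idea, here is a minimal sketch (not the Flux implementation) assuming trapezoidal integration between consecutive points, with the resulting area divided by the unit duration:

```python
def integral(times, values, unit):
    """Hypothetical illustration: area under the curve per `unit` of time.

    times  -- sample timestamps in seconds (sorted ascending)
    values -- corresponding record values
    unit   -- unit duration in seconds
    """
    area = 0.0
    # Trapezoidal rule over each pair of consecutive points.
    for t0, v0, t1, v1 in zip(times, values, times[1:], values[1:]):
        area += (v0 + v1) / 2 * (t1 - t0)
    return area / unit

# A constant value of 10 held for 60 seconds, measured per 20-second unit:
print(integral([0, 60], [10, 10], unit=20))  # 30.0
```

With `unit: 20s` in the Flux examples below, the result is the average area accumulated per 20-second interval in the same way.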

Input tables must have `_start`, `_stop`, `_time`, and `_value` columns. `_start` and `_stop` must be part of the group key.

Function type signature
(<-tables: stream[{A with _value: B, _time: time}], ?interpolate: string, ?unit: duration) => stream[{A with _value: B}]

For more information, see Function type signatures.

Parameters

unit

Time duration used to compute the integral.

interpolate

Type of interpolation to use. Default is "" (no interpolation).

Use one of the following interpolation options:

  • empty string ("") for no interpolation
  • "linear" for linear interpolation

tables

Input data. Default is piped-forward data (<-).

Examples

Calculate the integral

import "experimental"
import "sampledata"

data =
    sampledata.int()
        |> range(start: sampledata.start, stop: sampledata.stop)

data
    |> experimental.integral(unit: 20s)


Calculate the integral with linear interpolation

import "experimental"
import "sampledata"

data =
    sampledata.int()
        |> range(start: sampledata.start, stop: sampledata.stop)

data
    |> experimental.integral(unit: 20s, interpolate: "linear")

