
Downsample data with notebooks

Create a notebook to downsample data. Downsampling aggregates or summarizes data within specified time intervals, reducing overall disk usage as data accumulates over time.

The following example creates a notebook that queries Coinbase bitcoin price sample data from the last hour, downsamples the data into ten-minute summaries, and then writes the downsampled data to an InfluxDB bucket. A standalone Flux equivalent of the full pipeline is sketched after the steps.

  1. If you do not have an existing bucket to write the downsampled data to, create a new bucket.

  2. Create a new notebook.

  3. Select Past 1h from the time range drop-down list at the top of your notebook.

  4. In the Build a Query cell:

    1. In the FROM column under Sample, select Coinbase bitcoin price.
    2. In the next FILTER column, select _measurement from the drop-down list and select the coindesk measurement in the list of measurements.
    3. In the next FILTER column, select _field from the drop-down list, and select the price field from the list of fields.
    4. In the next FILTER column, select code from the drop-down list, and select a currency code.
  5. Click to add a new cell after your Build a Query cell, and then select Flux Script.

  6. In the Flux script cell:

    1. Use __PREVIOUS_RESULT__ to load the output of the previous notebook cell into the Flux script.

    2. Use aggregateWindow() to window data into ten minute intervals and return the average of each interval. Specify the following parameters:

      • every: Window interval (should be less than or equal to the duration of the queried time range). For this example, use 10m.
      • fn: Aggregate or selector function to apply to each window. For this example, use mean.
    3. Use to() to write the downsampled data back to an InfluxDB bucket.

    __PREVIOUS_RESULT__
        |> aggregateWindow(every: 10m, fn: mean)
        |> to(bucket: "example-bucket")
  7. (Optional) Click to add a new cell, select Note, and add a note that describes your notebook, for example, “Downsample Coinbase bitcoin prices into ten-minute averages.”

  8. Click Run to run the notebook and write the downsampled data to your bucket.
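
For reference, the notebook cells above build a pipeline that is roughly equivalent to the following standalone Flux query. This is a sketch, not the script the notebook generates: it assumes the Coinbase bitcoin price sample data comes from the influxdata/influxdb/sample package's bitcoin dataset, uses USD as an example currency code, and writes to the example-bucket destination from the steps above.

    import "influxdata/influxdb/sample"

    // Query Coinbase bitcoin price sample data from the last hour,
    // keep only the price field for one currency code, downsample into
    // ten-minute averages, and write the results to the destination bucket.
    sample.data(set: "bitcoin")
        |> range(start: -1h)
        |> filter(fn: (r) => r._measurement == "coindesk")
        |> filter(fn: (r) => r._field == "price")
        |> filter(fn: (r) => r.code == "USD")
        |> aggregateWindow(every: 10m, fn: mean)
        |> to(bucket: "example-bucket")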

Continuously run a notebook

To continuously run your notebook, export the notebook as a task (see the sketch after these steps):

  1. Click to add a new cell, and then select Task.

  2. Provide the following:

    • Every: Interval at which the task runs.
    • Offset: (Optional) Time to wait after the defined interval to execute the task. This allows the task to capture late-arriving data.
  3. Click Export as Task.
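
The exported task is a Flux script that wraps your notebook's pipeline with task options built from the values you provide. A minimal sketch, assuming a hypothetical task name and example values of 1h for Every and 5m for Offset:

    // Example task options only; the exported script uses the name you give
    // the task and the Every and Offset values entered above.
    option task = {name: "Downsample bitcoin prices", every: 1h, offset: 5m}

    // The rest of the script is the notebook pipeline shown earlier,
    // ending with the aggregateWindow() and to() calls.

Once exported, the task runs on the defined schedule and continues writing downsampled data to the destination bucket.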




