Documentation

Scraping and discovery

Kapacitor's discovery and scraping features pull data from a dynamic list of remote targets. Use those features with TICKscripts to monitor targets, process the data, and write the results to InfluxDB. Currently, Kapacitor supports only Prometheus-style targets.

Note: Scraping and discovery are currently in technical preview. The configuration and behavior may change in subsequent releases.

Overview

The diagram below outlines the infrastructure for discovering and scraping data with Kapacitor.

Image 1 – Scraping and discovery workflow

  1. First, Kapacitor runs the discovery process to identify the available targets in your infrastructure. At regular intervals, it requests the list of targets from an authority. In the diagram, the authority informs Kapacitor of three targets: A, B, and C.
  2. Next, Kapacitor runs the scraping process to pull metrics from the discovered targets, again at regular intervals. Here, Kapacitor requests metrics from targets A, B, and C. The application running on each target exposes a /metrics endpoint on its HTTP API that returns application-specific statistics.
  3. Finally, Kapacitor processes the data according to configured TICKscripts. Use TICKscripts to filter, transform, and perform other tasks on the metrics data. To store the data, configure a TICKscript to write it to InfluxDB, as in the sketch below.
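
For example, given a scraper configured with db = "mydb" and rp = "myrp" (as in Example 1 below), a stream task along the following lines could downsample the scraped metrics and write the result to InfluxDB. This is a minimal sketch: the field name "value" and the output measurement "scraped_mean" are illustrative assumptions, not names defined by the scraper.

// Minimal TICKscript sketch (assumes scraped data arrives under
// database "mydb" / retention policy "myrp" with a "value" field).
stream
    |from()
        .database('mydb')
        .retentionPolicy('myrp')
    |window()
        .period(1m)
        .every(1m)
    |mean('value')
        .as('mean_value')
    |influxDBOut()
        .database('mydb')
        .retentionPolicy('myrp')
        .measurement('scraped_mean')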

Pushing vs. Pulling Metrics

By combining discovery with scraping, Kapacitor lets a metrics-gathering infrastructure pull metrics from targets instead of requiring targets to push metrics to InfluxDB. Pulling metrics has several advantages in dynamic environments where a target may have a short lifecycle.

Configuring Scrapers and Discoverers

A single scraper scrapes the targets from a single discoverer. Configuring both comes down to configuring each individually and then pointing the scraper at the discoverer (see the pairing sketch after the discoverer table below).

Below are all the configuration options for a scraper.

Example 1 – Scraper configuration

[[scraper]]
  enabled = false
  name = "myscraper"
  # ID of the discoverer to use
  discoverer-id = ""
  # The kind of discoverer to use
  discoverer-service = ""
  # Database and retention policy assigned to the scraped data
  db = "mydb"
  rp = "myrp"
  type = "prometheus"
  # Scheme and path used to build each target's scrape URL
  scheme = "http"
  metrics-path = "/metrics"
  # How often to scrape targets and how long to wait for a response
  scrape-interval = "1m0s"
  scrape-timeout = "10s"
  # Optional authentication credentials
  username = ""
  password = ""
  bearer-token = ""
  # Optional TLS settings
  ssl-ca = ""
  ssl-cert = ""
  ssl-key = ""
  ssl-server-name = ""
  insecure-skip-verify = false

Available Discoverers

Kapacitor supports the following services for discovery:

Name               Description
azure              Discover targets hosted in Azure.
consul             Discover targets using Consul service discovery.
dns                Discover targets via DNS queries.
ec2                Discover targets hosted in AWS EC2.
file-discovery     Discover targets listed in files.
gce                Discover targets hosted in GCE.
kubernetes         Discover targets hosted in Kubernetes.
marathon           Discover targets using Marathon service discovery.
nerve              Discover targets using Nerve service discovery.
serverset          Discover targets using Serversets service discovery.
static-discovery   Statically list targets.
triton             Discover targets using Triton service discovery.

See the example configuration file for details on configuring each discoverer.
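
As a rough illustration, the snippet below pairs the scraper from Example 1 with the static-discovery service. The id value "my-static-hosts" and the target addresses are placeholders, and the full set of keys for each discoverer is documented in the example configuration file.

[[static-discovery]]
  enabled = true
  # ID the scraper refers to via discoverer-id
  id = "my-static-hosts"
  # Placeholder addresses of the hosts to scrape
  targets = [ "host-a:9100", "host-b:9100", "host-c:9100" ]

[[scraper]]
  enabled = true
  name = "myscraper"
  # Point the scraper at the discoverer defined above
  discoverer-id = "my-static-hosts"
  discoverer-service = "static-discovery"
  db = "mydb"
  rp = "myrp"
  type = "prometheus"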

