
Create an InfluxDB scraper

InfluxDB scrapers collect data from specified targets at regular intervals, then write the scraped data to an InfluxDB bucket. Scrapers can collect data from any HTTP(S)-accessible endpoint that provides data in the Prometheus data format.
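A scrape target only needs to return plain-text metrics in the Prometheus exposition format from an HTTP endpoint. As a minimal sketch of such a target (useful for testing a scraper), the following Python script serves a couple of example metrics; the metric names and port 9100 are hypothetical examples, not anything InfluxDB requires.

```python
# Minimal sketch of an HTTP endpoint a scraper could target.
# Metric names and port 9100 are hypothetical examples.
from http.server import BaseHTTPRequestHandler, HTTPServer

METRICS = """\
# HELP app_requests_total Total requests handled by the app.
# TYPE app_requests_total counter
app_requests_total 42
# HELP app_temperature_celsius Current temperature reading.
# TYPE app_temperature_celsius gauge
app_temperature_celsius 21.5
"""

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = METRICS.encode("utf-8")
            self.send_response(200)
            # Content type used by the Prometheus text exposition format
            self.send_header("Content-Type", "text/plain; version=0.0.4; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # A scraper could then target http://localhost:9100/metrics
    HTTPServer(("", 9100), MetricsHandler).serve_forever()
```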

Create a scraper in the InfluxDB UI

  1. In the navigation menu on the left, select Data (Load Data) > Scrapers.

  2. Click Create Scraper.

  3. Enter a Name for the scraper.

  4. Select a Bucket to store the scraped data.

  5. Enter the Target URL to scrape. The default URL value is http://localhost:8086/metrics, which provides InfluxDB-specific metrics in the Prometheus data format.

  6. Click Create.

The new scraper begins collecting data approximately 10 seconds after it is created, then continues scraping at 10-second intervals.
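If you prefer to script scraper creation rather than use the UI, the InfluxDB v2 HTTP API also exposes a scrapers endpoint. The following Python sketch assumes a local InfluxDB instance at http://localhost:8086; the token, organization ID, and bucket ID values are placeholders you would replace with your own.

```python
# Sketch: create a scraper through the InfluxDB v2 HTTP API instead of the UI.
# TOKEN, ORG_ID, and BUCKET_ID below are placeholders.
import requests

INFLUX_URL = "http://localhost:8086"
TOKEN = "my-api-token"          # API token with permission to manage scrapers
ORG_ID = "0000000000000000"     # placeholder organization ID
BUCKET_ID = "1111111111111111"  # placeholder ID of the bucket that stores scraped data

resp = requests.post(
    f"{INFLUX_URL}/api/v2/scrapers",
    headers={"Authorization": f"Token {TOKEN}"},
    json={
        "name": "influxdb-metrics",
        "type": "prometheus",            # scrapers read Prometheus-format data
        "url": f"{INFLUX_URL}/metrics",  # the target URL to scrape
        "orgID": ORG_ID,
        "bucketID": BUCKET_ID,
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```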


