
Create an InfluxDB scraper

InfluxDB scrapers collect data from specified targets at regular intervals, then write the scraped data to an InfluxDB bucket. Scrapers can collect data from any HTTP(S)-accessible endpoint that provides data in the Prometheus data format.
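For reference, the Prometheus exposition format is plain text: each metric consists of optional HELP and TYPE comment lines followed by one or more sample lines. The sample below is illustrative only; the metric name and labels are placeholders rather than output from any particular endpoint.

    # HELP http_requests_total Total number of HTTP requests received
    # TYPE http_requests_total counter
    http_requests_total{method="get",code="200"} 1027
    http_requests_total{method="post",code="200"} 3

Any endpoint that returns data in this format over HTTP or HTTPS can be used as a scraper target.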

Create a scraper in the InfluxDB UI

  1. In the navigation menu on the left, select Data (Load Data) > Scrapers.

  2. Click Create Scraper.

  3. Enter a Name for the scraper.

  4. Select a Bucket to store the scraped data.

  5. Enter the Target URL to scrape. The default URL value is http://localhost:8086/metrics, which provides InfluxDB-specific metrics in the Prometheus data format.

  6. Click Create.
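If you prefer the command line, the influx v2 CLI also provides a scraper create subcommand. The sketch below is an assumption-laden equivalent of the UI steps above: BUCKET_ID is a placeholder for the ID of the bucket you want to write to, and exact flag names may vary between CLI releases.

    influx scraper create \
      --name "InfluxDB metrics" \
      --bucket-id BUCKET_ID \
      --type prometheus \
      --url http://localhost:8086/metrics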

The new scraper begins scraping data after approximately 10 seconds, then continues scraping at 10-second intervals.
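To confirm that scraped data is arriving, you can run a quick Flux query against the target bucket in the Data Explorer or with the influx CLI. The bucket name below is a placeholder; substitute the bucket you selected in step 4.

    from(bucket: "example-bucket")
        |> range(start: -5m)
        |> limit(n: 10)

If the query returns rows, the scraper is writing data as expected.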

