---
title: Create an InfluxDB scraper
description: Create an InfluxDB scraper that collects data from InfluxDB or a remote endpoint.
url: https://docs.influxdata.com/influxdb/v2/write-data/no-code/scrape-data/manage-scrapers/create-a-scraper/
estimated_tokens: 653
product: InfluxDB OSS v2
version: v2
---

# Create an InfluxDB scraper

This page documents an earlier version of InfluxDB OSS. [InfluxDB 3 Core](/influxdb3/core/) is the latest stable version.

#### API token hashing is enabled by default in InfluxDB OSS 2.9.0

Stronger token security: tokens are stored as hashes on disk, so a copy of the database file doesn’t expose usable tokens. Existing tokens are hashed on first startup and the original strings can’t be recovered afterward — **capture any plaintext tokens you still need before you upgrade**.

For more information, see [Token hashing](/influxdb/v2/admin/tokens/#token-hashing).

InfluxDB scrapers collect data from specified targets at regular intervals, then write the scraped data to an InfluxDB bucket. Scrapers can collect data from any HTTP(S)-accessible endpoint that provides data in the [Prometheus data format](https://prometheus.io/docs/instrumenting/exposition_formats/).
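
A target endpoint serves metrics as plain text in the Prometheus exposition format. The sample below is illustrative only; the metric name, labels, and value are hypothetical, and the actual metrics a target exposes will differ:

```
# HELP example_requests_total Total number of requests handled
# TYPE example_requests_total counter
example_requests_total{method="GET"} 1027
```

Each metric is preceded by optional `# HELP` and `# TYPE` comment lines, followed by the metric name, optional labels in braces, and the current value.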

## Create a scraper in the InfluxDB UI

1. In the navigation menu on the left, select **Load Data** > **Scrapers**.
    
2. Click **Create Scraper**.
    
3. Enter a **Name** for the scraper.
    
4. Select a **Bucket** to store the scraped data.
    
5. Enter the **Target URL** to scrape. The default URL value is `http://localhost:8086/metrics`, which provides InfluxDB-specific metrics in the [Prometheus data format](https://prometheus.io/docs/instrumenting/exposition_formats/).
    
6. Click **Create**.
    

The new scraper will begin scraping data after approximately 10 seconds, then continue scraping at 10-second intervals.
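
Scrapers can also be created programmatically through the InfluxDB v2 API's `/api/v2/scrapers` endpoint. The sketch below builds the request body and, optionally, sends it; the organization ID, bucket ID, and token values are placeholders you must replace, and it assumes a local InfluxDB instance at the default port:

```python
import json
import os
import urllib.request

# Assumed local InfluxDB OSS v2 instance; adjust as needed.
INFLUX_URL = "http://localhost:8086"

def build_scraper_payload(name, org_id, bucket_id, target_url):
    """Build the JSON body for POST /api/v2/scrapers."""
    return {
        "name": name,
        "type": "prometheus",  # Prometheus is the scraper type InfluxDB v2 supports
        "url": target_url,
        "orgID": org_id,
        "bucketID": bucket_id,
    }

def create_scraper(payload, token):
    """Send the create request; requires a running instance and a valid API token."""
    req = urllib.request.Request(
        f"{INFLUX_URL}/api/v2/scrapers",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Token {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Placeholder IDs -- replace with your own org and bucket IDs.
payload = build_scraper_payload(
    "example-scraper",
    "your-org-id",
    "your-bucket-id",
    "http://localhost:8086/metrics",
)

token = os.environ.get("INFLUX_TOKEN")
if token:  # only send the request when a token is configured
    print(create_scraper(payload, token))
```

Like the UI flow above, this associates one target URL with one destination bucket; the scrape interval itself is fixed and is not part of the request body.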

#### Related

-   [Scrape data using InfluxDB scrapers](/influxdb/v2/write-data/no-code/scrape-data/)
