Netfilter Conntrack Input Plugin

This plugin collects metrics from Netfilter's conntrack-tools. There are two collection mechanisms for this plugin:

  1. Extracting aggregated and per-CPU statistics from the /proc/net/stat/nf_conntrack file, as selected by the collect option.
  2. Reading specific files and directories given by the dirs and files options. At runtime, conntrack exposes many of these connection statistics within /proc/sys/net. Depending on your kernel version, the files can be found in either /proc/sys/net/ipv4/netfilter or /proc/sys/net/netfilter and are prefixed with either ip_ or nf_.

To simplify configuration in a heterogeneous environment, a superset of directories and filenames can be specified. Any locations that do not exist are ignored.
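
To illustrate this lookup behavior, here is a minimal Go sketch (a hypothetical illustration, not Telegraf's actual implementation) that probes every configured directory/file combination and silently skips paths that are absent on the running kernel:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// readConntrackFiles probes every dir/file combination and collects the
// values it finds. Missing paths are skipped rather than treated as
// errors, mirroring the plugin's tolerance for different kernels.
func readConntrackFiles(dirs, files []string) map[string]int64 {
	values := make(map[string]int64)
	for _, dir := range dirs {
		for _, file := range files {
			raw, err := os.ReadFile(filepath.Join(dir, file))
			if err != nil {
				continue // location does not exist on this kernel; ignore it
			}
			n, err := strconv.ParseInt(strings.TrimSpace(string(raw)), 10, 64)
			if err != nil {
				continue // unparsable content; skip
			}
			values[file] = n
		}
	}
	return values
}

func main() {
	dirs := []string{"/proc/sys/net/ipv4/netfilter", "/proc/sys/net/netfilter"}
	files := []string{"ip_conntrack_count", "ip_conntrack_max",
		"nf_conntrack_count", "nf_conntrack_max"}
	fmt.Println(readConntrackFiles(dirs, files)) // e.g. map[nf_conntrack_count:464 nf_conntrack_max:262144]
}

Because missing locations are skipped, the same superset configuration works on kernels that expose either the ip_ or the nf_ variants of these files.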

Introduced in: Telegraf v1.0.0
Tags: system
OS support: linux

Global configuration options

Plugins support additional global and plugin configuration settings for tasks such as modifying metrics, tags, and fields, creating aliases, and configuring plugin ordering. See CONFIGURATION.md for more details.

Configuration

# Collects conntrack stats from the configured directories and files.
# This plugin ONLY supports Linux
[[inputs.conntrack]]
  ## The following defaults would work with multiple versions of conntrack.
  ## Note the nf_ and ip_ filename prefixes are mutually exclusive across
  ## kernel versions, as are the directory locations.

  ## Look through /proc/net/stat/nf_conntrack for these metrics
  ## all - aggregated statistics
  ## percpu - include detailed statistics with cpu tag
  collect = ["all", "percpu"]

  ## User-specified directories and files to look through
  ## Directories to search within for the conntrack files above.
  ## Missing directories will be ignored.
  dirs = ["/proc/sys/net/ipv4/netfilter","/proc/sys/net/netfilter"]

  ## Superset of filenames to look for within the conntrack dirs.
  ## Missing files will be ignored.
  files = ["ip_conntrack_count","ip_conntrack_max",
          "nf_conntrack_count","nf_conntrack_max"]

Metrics

A detailed explanation of each field can be found in the kernel documentation.

  • conntrack
    • ip_conntrack_count (int, count): The number of entries in the conntrack table
    • ip_conntrack_max (int, size): The maximum capacity of the conntrack table
    • ip_conntrack_buckets (int, size): The size of the hash table

With collect = ["all"]:

  • entries: The number of entries in the conntrack table
  • searched: The number of conntrack table lookups performed
  • found: The number of searched entries which were successful
  • new: The number of entries added which were not expected before
  • invalid: The number of packets seen which cannot be tracked
  • ignore: The number of packets seen which are already connected to an entry
  • delete: The number of entries which were removed
  • delete_list: The number of entries which were put on the dying list
  • insert: The number of entries inserted into the list
  • insert_failed: The number of insertion attempts that failed (duplicate entry)
  • drop: The number of packets dropped due to conntrack failure
  • early_drop: The number of entries dropped to make room for new ones when the maximum table size is reached
  • icmp_error: Subset of invalid. Packets that cannot be tracked due to an error
  • expect_new: Entries added after an expectation was already present
  • expect_create: Expectations added
  • expect_delete: Expectations deleted
  • search_restart: Conntrack table lookups restarted due to hashtable resizes

Tags

With collect = ["percpu"], the plugin reports detailed statistics per CPU thread, distinguished by the cpu tag.

Without "percpu", the cpu tag has the value all.
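
For reference, /proc/net/stat/nf_conntrack holds a header row of field names (entries, searched, found, and so on, matching the field list above) followed by one row of whitespace-separated hexadecimal counters per CPU. The following Go sketch (a hypothetical illustration, not Telegraf's actual code) parses that layout into one map of fields per CPU; the aggregated cpu=all series corresponds to combining these rows:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parseStatFile reads a conntrack stat file: a header of field names
// followed by one row of hexadecimal counters per CPU. It returns one
// map of field values per CPU row.
func parseStatFile(path string) ([]map[string]uint64, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	if !scanner.Scan() {
		return nil, fmt.Errorf("empty stat file %s", path)
	}
	header := strings.Fields(scanner.Text()) // e.g. entries searched found ...

	var perCPU []map[string]uint64
	for scanner.Scan() {
		cols := strings.Fields(scanner.Text())
		row := make(map[string]uint64, len(cols))
		for i, col := range cols {
			if i >= len(header) {
				break
			}
			v, err := strconv.ParseUint(col, 16, 64) // counters are hexadecimal
			if err != nil {
				return nil, err
			}
			row[header[i]] = v
		}
		perCPU = append(perCPU, row)
	}
	return perCPU, scanner.Err()
}

func main() {
	rows, err := parseStatFile("/proc/net/stat/nf_conntrack")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for cpu, row := range rows {
		fmt.Printf("cpu%d: entries=%d found=%d\n", cpu, row["entries"], row["found"])
	}
}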

Example Output

conntrack,host=myhost ip_conntrack_count=2,ip_conntrack_max=262144 1461620427667995735

With collect = ["all"]:

conntrack,cpu=all,host=localhost delete=0i,delete_list=0i,drop=2i,early_drop=0i,entries=5568i,expect_create=0i,expect_delete=0i,expect_new=0i,found=7i,icmp_error=1962i,ignore=2586413402i,insert=0i,insert_failed=2i,invalid=46853i,new=0i,search_restart=453336i,searched=0i 1615233542000000000
conntrack,host=localhost ip_conntrack_count=464,ip_conntrack_max=262144 1615233542000000000
