EXPLAIN command
The EXPLAIN command returns the logical plan and the physical plan for the specified SQL statement.

EXPLAIN [ANALYZE] [VERBOSE] statement

EXPLAIN

Returns the logical plan and physical (execution) plan of a statement.
To output more details, use EXPLAIN VERBOSE.
EXPLAIN doesn't execute the statement.
To execute the statement and view runtime metrics, use EXPLAIN ANALYZE.
EXPLAIN
SELECT
room,
avg(temp) AS temp
FROM home
GROUP BY room
View EXPLAIN example output:
| | plan_type | plan |
|---|---|---|
| 0 | logical_plan | Projection: home.room, AVG(home.temp) AS temp |
| | | Aggregate: groupBy=[[home.room]], aggr=[[AVG(home.temp)]] |
| | | TableScan: home projection=[room, temp] |
| 1 | physical_plan | ProjectionExec: expr=[room@0 as room, AVG(home.temp)@1 as temp] |
| | | AggregateExec: mode=FinalPartitioned, gby=[room@0 as room], aggr=[AVG(home.temp)] |
| | | CoalesceBatchesExec: target_batch_size=8192 |
| | | RepartitionExec: partitioning=Hash([room@0], 8), input_partitions=8 |
| | | AggregateExec: mode=Partial, gby=[room@0 as room], aggr=[AVG(home.temp)] |
| | | ParquetExec: file_groups={8 groups: [[70434/116281/404d73cea0236530ea94f5470701eb814a8f0565c0e4bef5a2d2e33dfbfc3567/1be334e8-0af8-00da-2615-f67cd4be90f7.parquet, 70434/116281/b7a9e7c57fbfc3bba9427e4b3e35c89e001e2e618b0c7eb9feb4d50a3932f4db/d29370d4-262f-0d32-2459-fe7b099f682f.parquet], [70434/116281/c14418ba28a22a3abb693a1cb326a63b62dc611aec58c9bed438fdafd3bc5882/8b29ae98-761f-0550-2fe4-ee77503658e9.parquet], [70434/116281/fa677477eed622ae8123da1251aa7c351f801e2ee2f0bc28c0fe3002a30b3563/65bb4dc3-04e1-0e02-107a-90cee83c51b0.parquet], [70434/116281/db162bdd30261019960dd70da182e6ebd270284569ecfb5deffea7e65baa0df9/2505e079-67c5-06d9-3ede-89aca542dd18.parquet], [70434/116281/0c025dcccae8691f5fd70b0f131eea4ca6fafb95a02f90a3dc7bb015efd3ab4f/3f3e44c3-b71e-0ca4-3dc7-8b2f75b9ff86.parquet], …]}, projection=[room, temp] |
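EXPLAIN VERBOSE uses the same syntax and, like EXPLAIN, doesn't execute the statement. The following sketch applies it to the same query as above; the amount of extra detail in the report (for example, plans after individual planning passes) can vary by server version, so treat the output as illustrative:

EXPLAIN VERBOSE
SELECT
  room,
  avg(temp) AS temp
FROM home
GROUP BY room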
EXPLAIN ANALYZE

Executes a statement and returns the execution plan and runtime metrics of the statement.
The report includes the logical plan and the physical plan annotated with execution counters, the number of rows produced, and runtime metrics sampled during query execution.
If the plan requires reading many data files, EXPLAIN and EXPLAIN ANALYZE may truncate the list of files in the report.
To output more information, including intermediate plans and paths for all scanned Parquet files, use EXPLAIN ANALYZE VERBOSE.
EXPLAIN ANALYZE
SELECT
room,
avg(temp) AS temp
FROM home
WHERE time >= '2023-01-01' AND time <= '2023-12-31'
GROUP BY room
View EXPLAIN ANALYZE example output:
| | plan_type | plan |
|---|---|---|
| 0 | Plan with Metrics | ProjectionExec: expr=[room@0 as room, AVG(home.temp)@1 as temp], metrics=[output_rows=2, elapsed_compute=4.768µs] |
| | | AggregateExec: mode=FinalPartitioned, gby=[room@0 as room], aggr=[AVG(home.temp)], ordering_mode=Sorted, metrics=[output_rows=2, elapsed_compute=140.405µs] |
| | | CoalesceBatchesExec: target_batch_size=8192, metrics=[output_rows=2, elapsed_compute=6.821µs] |
| | | RepartitionExec: partitioning=Hash([room@0], 8), input_partitions=8, preserve_order=true, sort_exprs=room@0 ASC, metrics=[output_rows=2, elapsed_compute=18.408µs, repart_time=59.698µs, fetch_time=1.057882762s, send_time=5.83µs] |
| | | AggregateExec: mode=Partial, gby=[room@0 as room], aggr=[AVG(home.temp)], ordering_mode=Sorted, metrics=[output_rows=2, elapsed_compute=137.577µs] |
| | | RepartitionExec: partitioning=RoundRobinBatch(8), input_partitions=6, preserve_order=true, sort_exprs=room@0 ASC, metrics=[output_rows=46, elapsed_compute=26.637µs, repart_time=6ns, fetch_time=399.971411ms, send_time=6.658µs] |
| | | ProjectionExec: expr=[room@0 as room, temp@2 as temp], metrics=[output_rows=46, elapsed_compute=3.102µs] |
| | | CoalesceBatchesExec: target_batch_size=8192, metrics=[output_rows=46, elapsed_compute=25.585µs] |
| | | FilterExec: time@1 >= 1672531200000000000 AND time@1 <= 1703980800000000000, metrics=[output_rows=46, elapsed_compute=26.51µs] |
| | | ParquetExec: file_groups={6 groups: [[70434/116281/404d73cea0236530ea94f5470701eb814a8f0565c0e4bef5a2d2e33dfbfc3567/1be334e8-0af8-00da-2615-f67cd4be90f7.parquet], [70434/116281/c14418ba28a22a3abb693a1cb326a63b62dc611aec58c9bed438fdafd3bc5882/8b29ae98-761f-0550-2fe4-ee77503658e9.parquet], [70434/116281/fa677477eed622ae8123da1251aa7c351f801e2ee2f0bc28c0fe3002a30b3563/65bb4dc3-04e1-0e02-107a-90cee83c51b0.parquet], [70434/116281/db162bdd30261019960dd70da182e6ebd270284569ecfb5deffea7e65baa0df9/2505e079-67c5-06d9-3ede-89aca542dd18.parquet], [70434/116281/0c025dcccae8691f5fd70b0f131eea4ca6fafb95a02f90a3dc7bb015efd3ab4f/3f3e44c3-b71e-0ca4-3dc7-8b2f75b9ff86.parquet], …]}, projection=[room, time, temp], output_ordering=[room@0 ASC, time@1 ASC], predicate=time@6 >= 1672531200000000000 AND time@6 <= 1703980800000000000, pruning_predicate=time_max@0 >= 1672531200000000000 AND time_min@1 <= 1703980800000000000, required_guarantees=[], metrics=[output_rows=46, elapsed_compute=6ns, predicate_evaluation_errors=0, bytes_scanned=3279, row_groups_pruned_statistics=0, file_open_errors=0, file_scan_errors=0, pushdown_rows_filtered=0, num_predicate_creation_errors=0, row_groups_pruned_bloom_filter=0, page_index_rows_filtered=0, time_elapsed_opening=398.462968ms, time_elapsed_processing=1.626106ms, time_elapsed_scanning_total=1.36822ms, page_index_eval_time=33.474µs, pushdown_eval_time=14.267µs, time_elapsed_scanning_until_data=1.27694ms] |
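The ParquetExec metrics in the report, such as bytes_scanned, row_groups_pruned_statistics, and pushdown_rows_filtered, are useful for checking how effectively a time predicate prunes data. As a sketch, assuming the same home table as above, you can rerun EXPLAIN ANALYZE with a narrower time range and compare those metrics and the number of file groups between the two reports; the exact pruning results depend on how your data is partitioned:

EXPLAIN ANALYZE
SELECT
  room,
  avg(temp) AS temp
FROM home
-- Narrower range than the example above
WHERE time >= '2023-06-01' AND time <= '2023-06-30'
GROUP BY room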
EXPLAIN ANALYZE VERBOSE

Executes a statement and returns the execution plan, runtime metrics, and additional details helpful for debugging the statement.
The report includes the following:

- The logical plan
- The physical plan annotated with execution counters, the number of rows produced, and runtime metrics sampled during query execution
- Information truncated in the EXPLAIN ANALYZE report, such as the paths for all Parquet files retrieved for the query
- All intermediate physical plans that DataFusion and the Querier generate before producing the final physical plan, which helps you see when an ExecutionPlan node is added or removed and how InfluxDB optimizes the query
EXPLAIN ANALYZE VERBOSE SELECT temp FROM home
WHERE time >= now() - INTERVAL '7 days' AND room = 'Kitchen'
ORDER BY time