Documentation

Kapacitor HTTP API reference documentation

Use the Kapacitor HTTP API to manage Kapacitor tasks, templates, and recordings, collect troubleshooting data, and more.

General information

  • Kapacitor provides an HTTP API on port 9092 by default.
  • Each section below defines the available API endpoints and their inputs and outputs.
  • All requests are versioned and namespaced using the base path /kapacitor/v1/.

HTTP response codes

All requests can return these response codes:

Response Code | Meaning
2xx | The request was a success; content depends on the request.
4xx | Invalid request; refer to the error for what is wrong with the request. Repeating the request will continue to return the same error.
5xx | The server was unable to process the request; refer to the error for a reason. Repeating the request may succeed if the server issue has been resolved.

Errors

All requests can return JSON in the following format to provide more information about a failed request.

{ "error" : "error message" }

Query parameters vs JSON body

Query parameters are used only for GET requests. All other requests expect parameters in the JSON body.

The /kapacitor/v1/write endpoint is the one exception to this rule since Kapacitor is compatible with the InfluxDB /write endpoint.

When creating resources, the Kapacitor API server returns a link object with an href of the resource. Clients should be able to use the links provided from previous calls without any path manipulation.
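For instance, a client can join the returned href onto its configured base URL instead of rebuilding paths by hand. The host and port below are placeholder assumptions (9092 is the default port mentioned above):

```python
from urllib.parse import urljoin

# Sketch: reuse the link.href returned by the server rather than rebuilding
# resource paths by hand. The base URL here is a placeholder assumption.
created = {
    "link": {"rel": "self", "href": "/kapacitor/v1/tasks/TASK_ID"},
    "id": "TASK_ID",
}

base = "http://localhost:9092"
resource_url = urljoin(base, created["link"]["href"])
print(resource_url)  # http://localhost:9092/kapacitor/v1/tasks/TASK_ID
```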

IDs

To give you control over IDs, the Kapacitor API lets the client specify IDs for various resources. If you do not specify an ID, Kapacitor generates a random UUID for the resource.

All IDs must match the following regular expression (essentially numbers, unicode letters, -, ., and _):

^[-\._\p{L}0-9]+$
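As a client-side sanity check, the rule can be approximated before submitting a resource. This is a sketch only: Python's built-in re module has no \p{L}, so unicode-aware \w stands in for it, which is slightly looser than the server-side rule.

```python
import re

# Approximate client-side check of Kapacitor's ID rule ^[-\._\p{L}0-9]+$.
# \w with unicode matching covers letters, digits, and '_'; '-' and '.' are
# added explicitly. This is a convenience check, not the authoritative rule.
ID_PATTERN = re.compile(r"^[-.\w]+$", re.UNICODE)

def is_valid_id(candidate: str) -> bool:
    return bool(ID_PATTERN.fullmatch(candidate))

print(is_valid_id("cpu_alert-1.2"))  # True
print(is_valid_id("bad id"))         # False: spaces are not allowed
```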

Backwards compatibility

Kapacitor is currently on a 1.x release with a guarantee that all new releases will be backwards compatible with previous releases. This applies directly to the API. Updates may be made to the API, but existing endpoints will not be changed in backwards-incompatible ways in 1.x releases.

Technical preview

When a new feature is added to Kapacitor, it may be added in a “technical preview” release for a few minor releases, and then later promoted to a fully supported v1 feature. Technical preview features may be changed in backwards-incompatible ways until they are promoted to v1 features. Technical previews allow new features to mature while maintaining regularly scheduled releases.

To make it clear which features of the API are in technical preview, the base path /kapacitor/v1preview is used. If you wish to preview some of these new features, use the path /kapacitor/v1preview instead of /kapacitor/v1 for your requests. All v1 endpoints are available under the v1preview path so that your client does not need to be configured with multiple paths. Technical preview endpoints are only available under the v1preview path.

Using a technical preview means that you may have to update your client for breaking changes to the previewed endpoints.

Write data

Kapacitor accepts data written over HTTP using InfluxData's Line Protocol format. The /kapacitor/v1/write endpoint is identical in nature to the InfluxDB /write endpoint.

Query Parameter | Purpose
db | Database name for the writes.
rp | Retention policy name for the writes.

Kapacitor scopes all points by their database and retention policy. As a result, you MUST specify the rp for writes so that Kapacitor uses the correct retention policy.

Example

Write data to Kapacitor.

POST /kapacitor/v1/write?db=DB_NAME&rp=RP_NAME
cpu,host=example.com value=87.6

To maintain compatibility with the equivalent InfluxDB /write endpoint, the /write endpoint is an alias for the /kapacitor/v1/write endpoint.

POST /write?db=DB_NAME&rp=RP_NAME
cpu,host=example.com value=87.6
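A small helper can assemble the write URL and a line protocol body. This is a sketch under assumptions: the host is a placeholder, and only numeric field values are handled (string fields would need quoting and escaping):

```python
from urllib.parse import urlencode

# Sketch: build a Kapacitor write request. Handles numeric fields only;
# the host, database, and retention policy names are placeholders.
def build_write_request(host, db, rp, measurement, tags, fields):
    query = urlencode({"db": db, "rp": rp})
    url = f"http://{host}:9092/kapacitor/v1/write?{query}"
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    body = f"{measurement},{tag_str} {field_str}"
    return url, body

url, body = build_write_request(
    "localhost", "DB_NAME", "RP_NAME", "cpu",
    tags={"host": "example.com"}, fields={"value": 87.6},
)
print(url)   # http://localhost:9092/kapacitor/v1/write?db=DB_NAME&rp=RP_NAME
print(body)  # cpu,host=example.com value=87.6
```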

Manage tasks

A task represents work for Kapacitor to perform. A task is defined by its id, type, TICKscript, and list of database retention policy pairs it is allowed to access.

Define or update a task

To define a task, POST to the /kapacitor/v1/tasks endpoint. If a task already exists, then use the PATCH method to modify any property of the task.

Define a task using a JSON object with the following options:

Property | Purpose
id | Unique identifier for the task. If empty, a random ID is chosen.
template-id | (Optional) Template ID to use instead of specifying a TICKscript and type.
type | The task type: stream or batch.
dbrps | List of database retention policy pairs the task is allowed to access.
script | The content of the script.
status | One of enabled or disabled.
vars | A set of vars for overwriting any defined vars in the TICKscript.

When using PATCH, any property not specified in the request is left unmodified.

Note: When patching a task, no changes are made to the running task. The task must be disabled and re-enabled for changes to take effect.

Vars

The vars object has the form:

{
    "field_name" : {
        "value": <VALUE>,
        "type": <TYPE>
    },
    "another_field" : {
        "value": <VALUE>,
        "type": <TYPE>
    }
}

The following is a table of valid types and example values.

Type | Example Value | Description
bool | true | "true" or "false"
int | 42 | Any integer value
float | 2.5 or 67 | Any numeric value
duration | "1s" or 1000000000 | Any integer value interpreted as nanoseconds, or an InfluxQL duration string (i.e. 10000000000 is 10s)
string | "a string" | Any string value
regex | "^abc.*xyz" | Any string value that represents a valid Go regular expression (https://golang.org/pkg/regexp/)
lambda | "\"value\" > 5" | Any string that is a valid TICKscript lambda expression
star | "" | No value is required; a star type var represents the literal * in TICKscript (i.e. .groupBy(*))
list | [{"type": TYPE, "value": VALUE}] | A list of var objects. Currently lists may only contain string or star vars.
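The vars object is plain JSON, so a client can assemble it from native values. A minimal sketch (the var names and values here are invented for illustration):

```python
import json

# Sketch: build a vars object following the table above. The names
# "threshold", "window", and "crit" are hypothetical examples.
task_vars = {
    "threshold": {"value": 42.0, "type": "float"},
    "window":    {"value": "10s", "type": "duration"},
    "crit":      {"value": '"value" > 5', "type": "lambda"},
}

payload = json.dumps({"vars": task_vars})
print(json.loads(payload)["vars"]["threshold"])  # {'value': 42.0, 'type': 'float'}
```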

Example

Create a new task with the id value of TASK_ID. To create a task from a template, add the template-id.

POST /kapacitor/v1/tasks
{
    "id" : "TASK_ID",
    "template-id" : "TEMPLATE_ID", 
    "type" : "stream",
    "dbrps": [{"db": "DATABASE_NAME", "rp" : "RP_NAME"}],
    "script": "stream\n    |from()\n        .measurement('cpu')\n",
    "vars" : {
        "var1": {
            "value": 42,
            "type": "float"
        }
    }
}

Response with task details.

{
    "link" : {"rel": "self", "href": "/kapacitor/v1/tasks/TASK_ID"},
    "id" : "TASK_ID",
    "template-id" : "TEMPLATE_ID",
    "type" : "stream",
    "dbrps" : [{"db": "DATABASE_NAME", "rp" : "RP_NAME"}],
    "script" : "stream\n    |from()\n        .measurement('cpu')\n",
    "dot" : "digraph TASK_ID { ... }",
    "vars" : {
        "var1": {
            "value": 42,
            "type": "float"
        }
    },
    "status" : "enabled",
    "executing" : true,
    "error" : "",
    "created": "2006-01-02T15:04:05Z07:00",
    "modified": "2006-01-02T15:04:05Z07:00",
    "stats" : {},
    "template-id": "TASK_ID"
}

Modify only the dbrps of the task.

PATCH /kapacitor/v1/tasks/TASK_ID
{
    "dbrps": [{"db": "NEW_DATABASE_NAME", "rp" : "NEW_RP_NAME"}]
}

Note: Setting any DBRP will overwrite all stored DBRPs. Setting any Vars will overwrite all stored Vars.

Enable an existing task.

PATCH /kapacitor/v1/tasks/TASK_ID
{
    "status" : "enabled",
}

Disable an existing task.

PATCH /kapacitor/v1/tasks/TASK_ID
{
    "status" : "disabled",
}

Define a new task that is enabled on creation.

POST /kapacitor/v1/tasks
{
    "id" : "TASK_ID",
    "template-id" : "TEMPLATE_ID",
    "type" : "stream",
    "dbrps" : [{"db": "DATABASE_NAME", "rp" : "RP_NAME"}],
    "script" : "stream\n    |from()\n        .measurement('cpu')\n",
    "status" : "enabled"
}

The response contains the created task.

{
    "id" : "TASK_ID",
    "template-id" : "TEMPLATE_ID",
    "link" : {"rel": "self", "href": "/kapacitor/v1/tasks/TASK_ID"}
}

Response

Code | Meaning
200 | Task created; response contains task information.
404 | Task does not exist.

Get a task

To get information about a task, make a GET request to the /kapacitor/v1/tasks/TASK_ID endpoint.

Query Parameter | Default | Purpose
dot-view | attributes | One of labels or attributes. Labels is less readable but will correctly render with all the information contained in labels.
script-format | formatted | One of formatted or raw. Raw returns the script exactly as it was defined. Formatted first formats the script.
replay-id | | Optional ID of a running replay. The returned task information will be in the context of the task for the running replay.
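The query parameters combine in the usual way; for example, building the URL for the non-default options (TASK_ID is a placeholder):

```python
from urllib.parse import urlencode

# Sketch: assemble the GET URL for a task with non-default view options.
params = {"dot-view": "labels", "script-format": "raw"}
url = "/kapacitor/v1/tasks/TASK_ID?" + urlencode(params)
print(url)  # /kapacitor/v1/tasks/TASK_ID?dot-view=labels&script-format=raw
```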

A task has these read-only properties in addition to the properties listed above.

Property | Description
dot | GraphViz DOT syntax formatted representation of the task DAG.
executing | Whether the task is currently executing.
error | Any error encountered when executing the task.
stats | Map of statistics about a task.
created | Date the task was first created.
modified | Date the task was last modified.
last-enabled | Date the task was last set to status enabled.

Example

Get information about a task using defaults. If a task is associated with a template, the template ID is included in the response.

GET /kapacitor/v1/tasks/TASK_ID
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/tasks/TASK_ID"},
    "id" : "TASK_ID",
    "template-id" : "TEMPLATE_ID",
    "type" : "stream",
    "dbrps" : [{"db": "DATABASE_NAME", "rp" : "RP_NAME"}],
    "script" : "stream\n    |from()\n        .measurement('cpu')\n",
    "dot" : "digraph TASK_ID { ... }",
    "status" : "enabled",
    "executing" : true,
    "error" : "",
    "created": "2006-01-02T15:04:05Z07:00",
    "modified": "2006-01-02T15:04:05Z07:00",
    "last-enabled": "2006-01-03T15:04:05Z07:00",
    "stats" : {},
    "template-id": "TASK_ID"
}

Get information about a task using only labels in the DOT content and skip the format step.

GET /kapacitor/v1/tasks/TASK_ID?dot-view=labels&script-format=raw
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/tasks/TASK_ID"},
    "id" : "TASK_ID",
    "template-id" : "TEMPLATE_ID",
    "type" : "stream",
    "dbrps" : [{"db": "DATABASE_NAME", "rp" : "RP_NAME"}],
    "script" : "stream|from().measurement('cpu')",
    "dot" : "digraph TASK_ID { ... }",
    "status" : "enabled",
    "executing" : true,
    "error" : "",
    "created": "2006-01-02T15:04:05Z07:00",
    "modified": "2006-01-02T15:04:05Z07:00",
    "last-enabled": "2006-01-03T15:04:05Z07:00",
    "stats" : {}
}

Response

Code | Meaning
200 | Success
404 | Task does not exist.

Delete a task

To delete a task, make a DELETE request to the /kapacitor/v1/tasks/TASK_ID endpoint.

DELETE /kapacitor/v1/tasks/TASK_ID

Response

Code | Meaning
204 | Success

Note: Deleting a non-existent task is not an error and will return a 204 success.

List tasks

To get information about several tasks, make a GET request to the /kapacitor/v1/tasks endpoint.

Query Parameter | Default | Purpose
pattern | | Filter results based on the pattern. Uses standard shell glob matching.
fields | | List of fields to return. If empty, all fields are returned. The id and link fields are always returned.
dot-view | attributes | One of labels or attributes. Labels is less readable but will correctly render with all the information contained in labels.
script-format | formatted | One of formatted or raw. Raw returns the script exactly as it was defined. Formatted first formats the script.
offset | 0 | Offset count for paginating through tasks.
limit | 100 | Maximum number of tasks to return.
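The offset and limit parameters support the usual pagination loop: request a page, and stop when a page comes back shorter than the limit. A sketch with the HTTP call stubbed out:

```python
# Sketch: page through /kapacitor/v1/tasks using offset/limit. `fetch_page`
# stands in for an HTTP GET returning the parsed JSON body {"tasks": [...]}.
def list_all_tasks(fetch_page, limit=100):
    tasks, offset = [], 0
    while True:
        page = fetch_page(offset=offset, limit=limit)["tasks"]
        tasks.extend(page)
        if len(page) < limit:  # a short page means no more results
            return tasks
        offset += limit

# Fake server holding 250 tasks, to show the loop shape:
fake_db = [{"id": f"task-{i}"} for i in range(250)]
fake_fetch = lambda offset, limit: {"tasks": fake_db[offset:offset + limit]}
print(len(list_all_tasks(fake_fetch)))  # 250
```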

Example

Get details for all tasks.

GET /kapacitor/v1/tasks
{
    "tasks" : [
        {
            "link" : {"rel":"self", "href":"/kapacitor/v1/tasks/TASK_ID"},
            "id" : "TASK_ID",
            "type" : "stream",
            "dbrps" : [{"db": "DATABASE_NAME", "rp" : "RP_NAME"}],
            "script" : "stream|from().measurement('cpu')",
            "dot" : "digraph TASK_ID { ... }",
            "status" : "enabled",
            "executing" : true,
            "error" : "",
            "stats" : {},
            "template-id" : "TEMPLATE_ID"
        },
        {
            "link" : {"rel":"self", "href":"/kapacitor/v1/tasks/ANOTHER_TASK_ID"},
            "id" : "ANOTHER_TASK_ID",
            "type" : "stream",
            "dbrps" : [{"db": "DATABASE_NAME", "rp" : "RP_NAME"}],
            "script" : "stream|from().measurement('cpu')",
            "dot" : "digraph ANOTHER_TASK_ID{ ... }",
            "status" : "disabled",
            "executing" : true,
            "error" : "",
            "stats" : {},
            "template-id" : "TEMPLATE_ID"
        }
    ]
}

Optionally, you can specify a glob pattern to list only matching tasks.

GET /kapacitor/v1/tasks?pattern=TASK*
{
    "tasks" : [
        {
            "link" : {"rel":"self", "href":"/kapacitor/v1/tasks/TASK_ID"},
            "id" : "TASK_ID",
            "type" : "stream",
            "dbrps" : [{"db": "DATABASE_NAME", "rp" : "RP_NAME"}],
            "script" : "stream|from().measurement('cpu')",
            "dot" : "digraph TASK_ID { ... }",
            "status" : "enabled",
            "executing" : true,
            "error" : "",
            "stats" : {},
            "template-id" : "TEMPLATE_ID"
        }
    ]
}

Get all tasks, but only the status, executing and error fields.

GET /kapacitor/v1/tasks?fields=status&fields=executing&fields=error
{
    "tasks" : [
        {
            "link" : {"rel":"self", "href":"/kapacitor/v1/tasks/TASK_ID"},
            "id" : "TASK_ID",
            "status" : "enabled",
            "executing" : true,
            "error" : "",
        },
        {
            "link" : {"rel":"self", "href":"/kapacitor/v1/tasks/ANOTHER_TASK_ID"},
            "id" : "ANOTHER_TASK_ID",
            "status" : "disabled",
            "executing" : true,
            "error" : "",
        }
    ]
}

Response

Code | Meaning
200 | Success

Note: If the pattern does not match any tasks an empty list will be returned, with a 200 success.

Custom task HTTP endpoints

In TICKscript, it is possible to expose a cache of recent data via the HTTPOut node. The data is available at the path /kapacitor/v1/tasks/TASK_ID/ENDPOINT_NAME.

Example

For the TICKscript:

stream
    |from()
        .measurement('cpu')
    |window()
        .period(60s)
        .every(60s)
    |httpOut('mycustom_endpoint')
GET /kapacitor/v1/tasks/TASK_ID/mycustom_endpoint
{
    "series": [
        {
            "name": "cpu",
            "columns": [
                "time",
                "value"
            ],
            "values": [
                [
                    "2015-01-29T21:55:43.702900257Z",
                    55
                ],
                [
                    "2015-01-29T21:56:43.702900257Z",
                    42
                ]
            ]
        }
    ]
}

The output matches the format of an InfluxDB query response.

Manage templates

You can also define task templates. A task template is defined by a template TICKscript and a task type.

Define a template

To define a template, POST to the /kapacitor/v1/templates endpoint. If a template already exists, use the PATCH method to modify any property of the template.

Define a template using a JSON object with the following options:

Property | Purpose
id | Unique identifier for the template. If empty, a random ID is chosen.
type | The template type: stream or batch.
script | The content of the script.

When using PATCH, any option not specified in the request is left unmodified.

Update a template

When updating an existing template, all associated tasks are reloaded with the new template definition. The first error, if any, is returned when reloading associated tasks. If an error occurs, any task that was updated to the new definition is reverted to the old definition. This ensures that all associated tasks for a template either succeed or fail together.

As a result, you will not be able to update a template if the update introduces a breaking change in the TICKscript. To update a template in a breaking way, you have two options:

  1. Create a new template and reassign each task to the new template, updating the task vars as needed.
  2. If the breaking change is forward compatible (i.e. adds a new required var), first update each task with the needed vars, then update the template once all tasks are ready.

Example

Create a new template with ID TEMPLATE_ID.

POST /kapacitor/v1/templates
{
    "id" : "TEMPLATE_ID",
    "type" : "stream",
    "script": "stream\n    |from()\n        .measurement('cpu')\n"
}

The response contains the template details.

{
    "link" : {"rel": "self", "href": "/kapacitor/v1/templates/TEMPLATE_ID"},
    "id" : "TEMPLATE_ID",
    "type" : "stream",
    "script" : "stream\n    |from()\n        .measurement('cpu')\n",
    "dot" : "digraph TEMPLATE_ID { ... }",
    "error" : "",
    "created": "2006-01-02T15:04:05Z07:00",
    "modified": "2006-01-02T15:04:05Z07:00"
}

Modify only the script of the template.

PATCH /kapacitor/v1/templates/TEMPLATE_ID
{
    "script": "stream|from().measurement('mem')"
}

Response

Code | Meaning
200 | Template created; response contains template information.
404 | Template does not exist.

Get a template

To get information about a template, make a GET request to the /kapacitor/v1/templates/TEMPLATE_ID endpoint.

Query Parameter | Default | Purpose
script-format | formatted | One of formatted or raw. Raw returns the script exactly as it was defined. Formatted first formats the script.

A template has these read-only properties in addition to the properties listed above.

Property | Description
vars | Set of named vars from the TICKscript with their type, default values, and description.
dot | GraphViz DOT syntax formatted representation of the template DAG. NOTE: labels vs attributes does not matter since a template is never executing.
error | Any error encountered when reading the template.
created | Date the template was first created.
modified | Date the template was last modified.

Example

Get information about a template using defaults.

GET /kapacitor/v1/templates/TEMPLATE_ID
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/templates/TEMPLATE_ID"},
    "id" : "TASK_ID",
    "type" : "stream",
    "script" : "var x = 5\nstream\n    |from()\n        .measurement('cpu')\n",
    "vars": {"x":{"value": 5, "type":"int", "description": "threshold value"}},
    "dot" : "digraph TASK_ID { ... }",
    "error" : "",
    "created": "2006-01-02T15:04:05Z07:00",
    "modified": "2006-01-02T15:04:05Z07:00",
}

Response

Code | Meaning
200 | Success
404 | Template does not exist.

Delete a template

To delete a template, make a DELETE request to the /kapacitor/v1/templates/TEMPLATE_ID endpoint.

Note: Deleting a template orphans all associated tasks. The current state of the orphaned tasks is left unmodified, but orphaned tasks cannot be enabled.

DELETE /kapacitor/v1/templates/TEMPLATE_ID

Response

Code | Meaning
204 | Success

Note: Deleting a non-existent template is not an error and will return a 204 success.

List templates

To get information about several templates, make a GET request to the /kapacitor/v1/templates endpoint.

Query Parameter | Default | Purpose
pattern | | Filter results based on the pattern. Uses standard shell glob matching.
fields | | List of fields to return. If empty, all fields are returned. The id and link fields are always returned.
script-format | formatted | One of formatted or raw. Raw returns the script exactly as it was defined. Formatted first formats the script.
offset | 0 | Offset count for paginating through templates.
limit | 100 | Maximum number of templates to return.

Example

Get all templates.

GET /kapacitor/v1/templates
{
    "templates" : [
        {
            "link" : {"rel":"self", "href":"/kapacitor/v1/templates/TEMPLATE_ID"},
            "id" : "TEMPLATE_ID",
            "type" : "stream",
            "script" : "stream|from().measurement('cpu')",
            "dot" : "digraph TEMPLATE_ID { ... }",
            "error" : ""
        },
        {
            "link" : {"rel":"self", "href":"/kapacitor/v1/templates/ANOTHER_TEMPLATE_ID"},
            "id" : "ANOTHER_TEMPLATE_ID",
            "type" : "stream",
            "script" : "stream|from().measurement('cpu')",
            "dot" : "digraph ANOTHER_TEMPLATE_ID{ ... }",
            "error" : ""
        }
    ]
}

Optionally, specify a glob pattern to list only matching templates.

GET /kapacitor/v1/templates?pattern=TEMPLATE*
{
    "templates" : [
        {
            "link" : {"rel":"self", "href":"/kapacitor/v1/templates/TEMPLATE_ID"},
            "id" : "TEMPLATE_ID",
            "type" : "stream",
            "script" : "stream|from().measurement('cpu')",
            "dot" : "digraph TEMPLATE_ID { ... }",
            "error" : ""
        }
    ]
}

Get all templates, but only the script and error fields.

GET /kapacitor/v1/templates?fields=script&fields=error
{
    "templates" : [
        {
            "link" : {"rel":"self", "href":"/kapacitor/v1/templates/TEMPLATE_ID"},
            "id" : "TEMPLATE_ID",
            "script" : "stream|from().measurement('cpu')",
            "error" : ""
        },
        {
            "link" : {"rel":"self", "href":"/kapacitor/v1/templates/ANOTHER_TEMPLATE_ID"},
            "id" : "ANOTHER_TEMPLATE_ID",
            "script" : "stream|from().measurement('cpu')",
            "error" : ""
        }
    ]
}

Response

Code | Meaning
200 | Success

Note: If the pattern does not match any templates an empty list will be returned, with a 200 success.

Manage recordings

Kapacitor can save recordings of data and replay them against a specified task.

Create a recording

There are three methods for recording data with Kapacitor. To create a recording, make a POST request to the /kapacitor/v1/recordings/METHOD endpoint, where METHOD is one of the following:

Method | Description
stream | Record the incoming stream of data.
batch | Record the results of the queries in a batch task.
query | Record the result of an explicit query.

The request returns once the recording has started; it does not wait for the recording to finish. A recording ID is returned so the recording can be identified later.

Stream

Parameter | Purpose
id | Unique identifier for the recording. If empty, a random one is chosen.
task | ID of a task; used to only record data for the DBRPs of the task.
stop | Record stream data until the stop date.

Batch

Parameter | Purpose
id | Unique identifier for the recording. If empty, a random one is chosen.
task | ID of a task; records the results of the queries defined in the task.
start | Earliest date for which data will be recorded. RFC3339Nano formatted.
stop | Latest date for which data will be recorded. If not specified, the current time is used. RFC3339Nano formatted.

Query

Parameter | Purpose
id | Unique identifier for the recording. If empty, a random one is chosen.
type | Type of recording: stream or batch.
query | Query to execute.
cluster | Name of a configured InfluxDB cluster. If empty, the default cluster is used.

Note: A recording itself is typed as either a stream or batch recording and can only be replayed to a task of the corresponding type. Therefore, when you record the result of a raw query, you must specify the type of recording you wish to create.

Example

Create a recording using the stream method

POST /kapacitor/v1/recordings/stream
{
    "task" : "TASK_ID",
    "stop" : "2006-01-02T15:04:05Z07:00"
}

Create a recording using the batch method specifying a start time.

POST /kapacitor/v1/recordings/batch
{
    "task" : "TASK_ID",
    "start" : "2006-01-02T15:04:05Z07:00"
}

Create a recording using the query method specifying a stream type.

POST /kapacitor/v1/recordings/query
{
    "query" : "SELECT mean(usage_idle) FROM cpu WHERE time > now() - 1h GROUP BY time(10m)",
    "type" : "stream"
}

Create a recording using the query method specifying a batch type.

POST /kapacitor/v1/recordings/query
{
    "query" : "SELECT mean(usage_idle) FROM cpu WHERE time > now() - 1h GROUP BY time(10m)",
    "type" : "batch"
}

Create a recording with a custom ID.

POST /kapacitor/v1/recordings/query
{
    "id" : "MY_RECORDING_ID",
    "query" : "SELECT mean(usage_idle) FROM cpu WHERE time > now() - 1h GROUP BY time(10m)",
    "type" : "batch"
}

Response

All recordings are assigned an ID which is returned in this format with a link.

{
    "link" : {"rel": "self", "href": "/kapacitor/v1/recordings/e24db07d-1646-4bb3-a445-828f5049bea0"},
    "id" : "e24db07d-1646-4bb3-a445-828f5049bea0",
    "type" : "stream",
    "size" : 0,
    "date" : "2006-01-02T15:04:05Z07:00",
    "error" : "",
    "status" : "running",
    "progress" : 0
}

Code | Meaning
201 | Success; the recording has started.

Wait for a recording

To determine when a recording has finished, make a GET request to the returned link, typically something like /kapacitor/v1/recordings/RECORDING_ID.

A recording has these read-only properties.

Property | Description
size | Size of the recording on disk, in bytes.
date | Date the recording finished.
error | Any error encountered when creating the recording.
status | One of recording or finished.
progress | Number between 0 and 1 indicating the approximate progress of the recording.

Example

GET /kapacitor/v1/recordings/e24db07d-1646-4bb3-a445-828f5049bea0
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/recordings/e24db07d-1646-4bb3-a445-828f5049bea0"},
    "id" : "e24db07d-1646-4bb3-a445-828f5049bea0",
    "type" : "stream",
    "size" : 1980353,
    "date" : "2006-01-02T15:04:05Z07:00",
    "error" : "",
    "status" : "running",
    "progress" : 0.75
}

Once the recording is complete.

GET /kapacitor/v1/recordings/e24db07d-1646-4bb3-a445-828f5049bea0
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/recordings/e24db07d-1646-4bb3-a445-828f5049bea0"},
    "id" : "e24db07d-1646-4bb3-a445-828f5049bea0",
    "type" : "stream",
    "size" : 1980353,
    "date" : "2006-01-02T15:04:05Z07:00",
    "error" : "",
    "status" : "finished",
    "progress" : 1
}

Or if the recording fails.

GET /kapacitor/v1/recordings/e24db07d-1646-4bb3-a445-828f5049bea0
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/recordings/e24db07d-1646-4bb3-a445-828f5049bea0"},
    "id" : "e24db07d-1646-4bb3-a445-828f5049bea0",
    "type" : "stream",
    "size" : 1980353,
    "date" : "2006-01-02T15:04:05Z07:00",
    "error" : "error message explaining failure",
    "status" : "failed",
    "progress" : 1
}

Response

Code | Meaning
200 | Success; the recording is no longer running.
202 | Success; the recording exists but is not finished.
404 | No such recording exists.
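These status codes suggest a simple polling loop: repeat the GET until the server stops answering 202. A sketch with the HTTP call stubbed out (the `get_status` callable is a stand-in, and the failure handling assumes the status/error fields shown above):

```python
import time

# Sketch: poll a recording until it finishes. `get_status` stands in for a
# GET to /kapacitor/v1/recordings/RECORDING_ID returning (http_code, body).
def wait_for_recording(get_status, interval=1.0, timeout=60.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        code, body = get_status()
        if code == 200:  # recording is no longer running
            if body.get("status") == "failed":
                raise RuntimeError(body.get("error", "recording failed"))
            return body
        time.sleep(interval)  # 202: still running, try again
    raise TimeoutError("recording did not finish within the timeout")

# Fake server: still running once, then finished.
responses = iter([(202, {"status": "running", "progress": 0.75}),
                  (200, {"status": "finished", "progress": 1})])
print(wait_for_recording(lambda: next(responses), interval=0)["status"])
# finished
```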

Delete a recording

To delete a recording make a DELETE request to the /kapacitor/v1/recordings/RECORDING_ID endpoint.

DELETE /kapacitor/v1/recordings/RECORDING_ID

Response

Code | Meaning
204 | Success

NOTE: Deleting a non-existent recording is not an error and will return a 204 success.

List recordings

To list all recordings, make a GET request to the /kapacitor/v1/recordings endpoint. Recordings are sorted by date.

Query Parameter | Default | Purpose
pattern | | Filter results based on the pattern. Uses standard shell glob matching.
fields | | List of fields to return. If empty, all fields are returned. The id and link fields are always returned.
offset | 0 | Offset count for paginating through recordings.
limit | 100 | Maximum number of recordings to return.

Example

GET /kapacitor/v1/recordings
{
    "recordings" : [
        {
            "link" : {"rel": "self", "href": "/kapacitor/v1/recordings/e24db07d-1646-4bb3-a445-828f5049bea0"},
            "id" : "e24db07d-1646-4bb3-a445-828f5049bea0",
            "type" : "stream",
            "size" : 1980353,
            "date" : "2006-01-02T15:04:05Z07:00",
            "error" : "",
            "status" : "finished",
            "progress" : 1
        },
        {
            "link" : {"rel": "self", "href": "/kapacitor/v1/recordings/8a4c06c6-30fb-42f4-ac4a-808aa31278f6"},
            "id" : "8a4c06c6-30fb-42f4-ac4a-808aa31278f6",
            "type" : "batch",
            "size" : 216819562,
            "date" : "2006-01-02T15:04:05Z07:00",
            "error" : "",
            "status" : "finished",
            "progress" : 1
        }
    ]
}

Response

Code | Meaning
200 | Success

Manage replays

Replay a recording

To replay a recording, make a POST request to the /kapacitor/v1/replays/ endpoint.

Parameter | Default | Purpose
id | random | Unique identifier for the replay. If empty, a random ID is chosen.
task | | ID of the task.
recording | | ID of the recording.
recording-time | false | If true, use the times in the recording; otherwise adjust times relative to the current time.
clock | fast | One of fast or real. If real, wait for real time to pass corresponding with the time in the recording. If fast, replay data without delay. For example, if clock is real then a stream recording of duration 5m takes 5m to replay.

Example

Replay a recording using default parameters.

POST /kapacitor/v1/replays/
{
    "task" : "TASK_ID",
    "recording" : "RECORDING_ID"
}

Replay a recording in real-time mode and preserve recording times.

POST /kapacitor/v1/replays/
{
    "task" : "TASK_ID",
    "recording" : "RECORDING_ID",
    "clock" : "real",
    "recording-time" : true,
}

Replay a recording using a custom ID.

POST /kapacitor/v1/replays/
{
    "id" : "MY_REPLAY_ID",
    "task" : "TASK_ID",
    "recording" : "RECORDING_ID"
}

Response

The request returns once the replay is started and provides a replay ID and link.

{
    "link" : {"rel": "self", "href": "/kapacitor/v1/replays/ad95677b-096b-40c8-82a8-912706f41d4c"},
    "id" : "ad95677b-096b-40c8-82a8-912706f41d4c",
    "task" : "TASK_ID",
    "recording" : "RECORDING_ID",
    "clock" : "fast",
    "recording-time" : false,
    "status" : "running",
    "progress" : 0,
    "error" : "",
    "stats": {},
}

Code | Meaning
201 | Success; the replay has started.

Replay data without recording

It is also possible to replay data directly against a task without recording it first. Issue a request similar to either a batch or query recording, but instead of storing the data, it is immediately replayed against the task. Using a stream recording to replay immediately against a task is equivalent to enabling the task and so is not supported.

Method | Description
batch | Replay the results of the queries in a batch task.
query | Replay the results of an explicit query.

Batch

Parameter | Default | Purpose
id | random | Unique identifier for the replay. If empty, a random one is chosen.
task | | ID of a task; replays the results of the queries defined in the task against the task.
start | | Earliest date for which data will be replayed. RFC3339Nano formatted.
stop | now | Latest date for which data will be replayed. If not specified, the current time is used. RFC3339Nano formatted.
recording-time | false | If true, use the times in the recording; otherwise adjust times relative to the current time.
clock | fast | One of fast or real. If real, wait for real time to pass corresponding with the time in the recordings. If fast, replay data without delay. For example, if clock is real then a stream recording of duration 5m takes 5m to replay.

Query

Parameter | Default | Purpose
id | random | Unique identifier for the replay. If empty, a random one is chosen.
task | | ID of a task; replays the results of the query against the task.
query | | Query to execute.
cluster | | Name of a configured InfluxDB cluster. If empty, the default cluster is used.
recording-time | false | If true, use the times in the recording; otherwise adjust times relative to the current time.
clock | fast | One of fast or real. If real, wait for real time to pass corresponding with the time in the recordings. If fast, replay data without delay. For example, if clock is real then a stream recording of duration 5m takes 5m to replay.

Example

Perform a replay using the batch method specifying a start time.

POST /kapacitor/v1/replays/batch
{
    "task" : "TASK_ID",
    "start" : "2006-01-02T15:04:05Z07:00"
}

Replay the results of the query against the task.

POST /kapacitor/v1/replays/query
{
    "task" : "TASK_ID",
    "query" : "SELECT mean(usage_idle) FROM cpu WHERE time > now() - 1h GROUP BY time(10m)",
}

Create a replay with a custom ID.

POST /kapacitor/v1/replays/query
{
    "id" : "MY_REPLAY_ID",
    "task" : "TASK_ID",
    "query" : "SELECT mean(usage_idle) FROM cpu WHERE time > now() - 1h GROUP BY time(10m)"
}

Response

All replays are assigned an ID, which is returned in the following format along with a link.

{
    "link" : {"rel": "self", "href": "/kapacitor/v1/replays/e24db07d-1646-4bb3-a445-828f5049bea0"},
    "id" : "e24db07d-1646-4bb3-a445-828f5049bea0",
    "task" : "TASK_ID",
    "recording" : "",
    "clock" : "fast",
    "recording-time" : false,
    "status" : "running",
    "progress" : 0.57,
    "error" : "",
    "stats": {}
}

Note: For a replay created in this manner the recording ID will be empty since no recording was used or created.

Code | Meaning
---- | -------
201  | Success, the replay has started.

Wait for replays

As with recordings, make a GET request to the /kapacitor/v1/replays/REPLAY_ID endpoint to get the status of the replay.

A replay has these read-only properties in addition to the properties listed above.

Property | Description
-------- | -----------
status   | One of running, finished, or failed.
progress | Number between 0 and 1 indicating the approximate progress of the replay.
error    | Any error that occurred while performing the replay.
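Because the request returns 202 while a replay is still in progress, clients typically poll this endpoint until the status is terminal. A minimal polling sketch, where `fetch_status` stands in for any hypothetical helper that GETs /kapacitor/v1/replays/<id> and returns the decoded JSON object:

```python
import time

def wait_for_replay(fetch_status, replay_id, poll_interval=0.5, timeout=60.0):
    """Poll a replay until it is no longer running, then return its final state.

    `fetch_status` is any callable that GETs /kapacitor/v1/replays/<id>
    and returns the decoded JSON object (a hypothetical helper).
    """
    deadline = time.monotonic() + timeout
    while True:
        status = fetch_status(replay_id)
        # "running" is the only in-progress state; "finished" and
        # "failed" are both terminal.
        if status["status"] != "running":
            return status
        if time.monotonic() > deadline:
            raise TimeoutError(f"replay {replay_id} still running after {timeout}s")
        time.sleep(poll_interval)
```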

Example

Get the status of a replay.

GET /kapacitor/v1/replays/ad95677b-096b-40c8-82a8-912706f41d4c
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/replays/ad95677b-096b-40c8-82a8-912706f41d4c"},
    "id" : "ad95677b-096b-40c8-82a8-912706f41d4c",
    "task" : "TASK_ID",
    "recording" : "RECORDING_ID",
    "clock" : "fast",
    "recording-time" : false,
    "status" : "running",
    "progress" : 0.57,
    "error" : "",
    "stats": {}
}

Once the replay is complete.

GET /kapacitor/v1/replays/ad95677b-096b-40c8-82a8-912706f41d4c
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/replays/ad95677b-096b-40c8-82a8-912706f41d4c"},
    "id" : "ad95677b-096b-40c8-82a8-912706f41d4c",
    "task" : "TASK_ID",
    "recording" : "RECORDING_ID",
    "clock" : "fast",
    "recording-time" : false,
    "status" : "finished",
    "progress" : 1,
    "error" : "",
    "stats": {
        "task-stats": {
            "throughput": 0
        },
        "node-stats": {
            "alert2": {
                "alerts_triggered": 5,
                "avg_exec_time_ns": 1267486,
                "collected": 8,
                "crits_triggered": 2,
                "emitted": 0,
                "errors": 0,
                "infos_triggered": 0,
                "oks_triggered": 1,
                "warns_triggered": 2,
                "working_cardinality": 1
            },
            "from1": {
                "avg_exec_time_ns": 0,
                "collected": 8,
                "emitted": 8,
                "errors": 0,
                "working_cardinality": 0
            },
            "stream0": {
                "avg_exec_time_ns": 0,
                "collected": 8,
                "emitted": 8,
                "errors": 0,
                "working_cardinality": 0
            }
        }
    }
}

If the replay has finished, the stats field contains the statistics about the replay.

Or if the replay fails.

GET /kapacitor/v1/replays/ad95677b-096b-40c8-82a8-912706f41d4c
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/replays/ad95677b-096b-40c8-82a8-912706f41d4c"},
    "id" : "ad95677b-096b-40c8-82a8-912706f41d4c",
    "task" : "TASK_ID",
    "recording" : "RECORDING_ID",
    "clock" : "fast",
    "recording-time" : false,
    "status" : "failed",
    "progress" : 1,
    "error" : "error message explaining failure",
    "stats": {}
}

Response

Code | Meaning
---- | -------
200  | Success, replay is no longer running.
202  | Success, the replay exists but is not finished.
404  | No such replay exists.

Delete a replay

To delete a replay make a DELETE request to the /kapacitor/v1/replays/REPLAY_ID endpoint.

DELETE /kapacitor/v1/replays/REPLAY_ID

Response

Code | Meaning
---- | -------
204  | Success

Note: Deleting a non-existent replay is not an error and will return a 204 success.

List replays

You can list replays for a given recording by making a GET request to /kapacitor/v1/replays.

Query Parameter | Default | Purpose
--------------- | ------- | -------
pattern         |         | Filter results based on the pattern. Uses standard shell glob matching.
fields          |         | List of fields to return. If empty, returns all fields. Fields id and link are always returned.
offset          | 0       | Offset count for paginating through replays.
limit           | 100     | Maximum number of replays to return.
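The offset and limit parameters page through results in the usual way. A sketch of client-side pagination, where `fetch_page` is a hypothetical helper that performs the GET request with the given offset and limit and returns the decoded "replays" array:

```python
def iter_replays(fetch_page, limit=100):
    """Yield every replay by paging with offset/limit.

    `fetch_page(offset, limit)` is a hypothetical helper that performs
    GET /kapacitor/v1/replays?offset=<offset>&limit=<limit> and returns
    the decoded "replays" array.
    """
    offset = 0
    while True:
        page = fetch_page(offset, limit)
        yield from page
        if len(page) < limit:
            return  # a short page means there are no more results
        offset += limit
```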

Example

GET /kapacitor/v1/replays
{
    "replays": [
        {
            "link" : {"rel": "self", "href": "/kapacitor/v1/replays/ad95677b-096b-40c8-82a8-912706f41d4c"},
            "id" : "ad95677b-096b-40c8-82a8-912706f41d4c",
            "task" : "TASK_ID",
            "recording" : "RECORDING_ID",
            "clock" : "fast",
            "recording-time" : false,
            "status" : "finished",
            "progress" : 1,
            "error" : ""
        },
        {
            "link" : {"rel": "self", "href": "/kapacitor/v1/replays/be33f0a1-0272-4019-8662-c730706dac7d"},
            "id" : "be33f0a1-0272-4019-8662-c730706dac7d",
            "task" : "TASK_ID",
            "recording" : "RECORDING_ID",
            "clock" : "fast",
            "recording-time" : false,
            "status" : "finished",
            "progress" : 1,
            "error" : ""
        }
    ]
}

Manage alerts

Kapacitor can generate and handle alerts. The API allows you to see the current state of any alert and to configure various handlers for the alerts.

Topics

Alerts are grouped into topics. An alert handler “listens” on a topic for any new events. You can either specify the alert topic in the TICKscript or one will be generated for you.

Create and remove topics

Topics are created dynamically when they are referenced in TICKscripts or in handlers. To delete a topic, make a DELETE request to /kapacitor/v1/alerts/topics/<topic id>. This deletes all known events and state for the topic.

Note: Since topics are created dynamically, a deleted topic may reappear if a new event is created for it.

Example

DELETE /kapacitor/v1/alerts/topics/system

List topics

To query the list of available topics make a GET request to /kapacitor/v1/alerts/topics.

Query Parameter | Default | Purpose
--------------- | ------- | -------
min-level       | OK      | Only return topics that are greater than or equal to the min-level. Valid values include OK, INFO, WARNING, CRITICAL.
pattern         | *       | Filter results based on the pattern. Uses standard shell glob matching on the topic ID.
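The pattern parameter uses the same shell-style glob rules as Python's fnmatch module, which can be used to preview what a pattern will match:

```python
from fnmatch import fnmatchcase

# The pattern query parameter uses shell-style globbing: * matches any
# run of characters, ? matches a single character, [...] matches a set.
topics = ["system", "app", "main:alert_cpu:alert5"]

matching = [t for t in topics if fnmatchcase(t, "sys*")]
```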

Example

Get all topics.

GET /kapacitor/v1/alerts/topics
{
    "link": {"rel":"self","href":"/kapacitor/v1/alerts/topics"},
    "topics": [
        {
            "link": {"rel":"self","href":"/kapacitor/v1/alerts/topics/system"},
            "events-link" : {"rel":"events","href":"/kapacitor/v1/alerts/topics/system/events"},
            "handlers-link": {"rel":"handlers","href":"/kapacitor/v1/alerts/topics/system/handlers"},
            "id": "system",
            "level":"CRITICAL"
        },
        {
            "link": {"rel":"self","href":"/kapacitor/v1/alerts/topics/app"},
            "events-link" : {"rel":"events","href":"/kapacitor/v1/alerts/topics/app/events"},
            "handlers-link": {"rel":"handlers","href":"/kapacitor/v1/alerts/topics/app/handlers"},
            "id": "app",
            "level":"OK"
        }
    ]
}

Get all topics in a WARNING or CRITICAL state.

GET /kapacitor/v1/alerts/topics?min-level=WARNING
{
    "link": {"rel":"self","href":"/kapacitor/v1/alerts/topics"},
    "topics": [
        {
            "link": {"rel":"self","href":"/kapacitor/v1/alerts/topics/system"},
            "events-link" : {"rel":"events","href":"/kapacitor/v1/alerts/topics/system/events"},
            "handlers-link": {"rel":"handlers","href":"/kapacitor/v1/alerts/topics/system/handlers"},
            "id": "system",
            "level":"CRITICAL"
        }
    ]
}

Topic state

To query the state of a topic make a GET request to /kapacitor/v1/alerts/topics/<topic id>.

Example

GET /kapacitor/v1/alerts/topics/system
{
    "link": {"rel":"self","href":"/kapacitor/v1/alerts/topics/system"},
    "id": "system",
    "level": "CRITICAL",
    "events-link" : {"rel":"events","href":"/kapacitor/v1/alerts/topics/system/events"},
    "handlers-link": {"rel":"handlers","href":"/kapacitor/v1/alerts/topics/system/handlers"}
}

List topic events

To query all the events within a topic make a GET request to /kapacitor/v1/alerts/topics/<topic id>/events.

Query Parameter | Default | Purpose
--------------- | ------- | -------
min-level       | OK      | Only return events that are greater than or equal to the min-level. Valid values include OK, INFO, WARNING, CRITICAL.

Example

GET /kapacitor/v1/alerts/topics/system/events
{
    "link": {"rel":"self","href":"/kapacitor/v1/alerts/topics/system/events"},
    "topic": "system",
    "events": [
        {
            "link":{"rel":"self","href":"/kapacitor/v1/alerts/topics/system/events/cpu"},
            "id": "cpu",
            "state": {
                "level": "WARNING",
                "message": "cpu is WARNING",
                "time": "2016-12-01T00:00:00Z",
                "duration": "5m"
            }
        },
        {
            "link":{"rel":"self","href":"/kapacitor/v1/alerts/topics/system/events/mem"},
            "id": "mem",
            "state": {
                "level": "CRITICAL",
                "message": "mem is CRITICAL",
                "time": "2016-12-01T00:10:00Z",
                "duration": "1m"
            }
        }
    ]
}

Topic events

You can query a specific event within a topic by making a GET request to /kapacitor/v1/alerts/topics/<topic id>/events/<event id>.

Example

GET /kapacitor/v1/alerts/topics/system/events/cpu
{
    "link":{"rel":"self","href":"/kapacitor/v1/alerts/topics/system/events/cpu"},
    "id": "cpu",
    "state": {
        "level": "WARNING",
        "message": "cpu is WARNING",
        "time": "2016-12-01T00:00:00Z",
        "duration": "5m"
    }
}

List topic handlers

Handlers are created within a topic. You can get a list of handlers configured for a topic by making a GET request to /kapacitor/v1/alerts/topics/<topic id>/handlers.

Query Parameter | Default | Purpose
--------------- | ------- | -------
pattern         | *       | Filter results based on the pattern. Uses standard shell glob matching on the service name.

Note: Anonymous handlers (created automatically from TICKscripts) will not be listed under their associated anonymous topic as they are not configured via the API.

Example

Get the handlers for the system topic.

GET /kapacitor/v1/alerts/topics/system/handlers
{
    "link":{"rel":"self","href":"/kapacitor/v1/alerts/topics/system/handlers"},
    "topic": "system",
    "handlers": [
        {
            "link":{"rel":"self","href":"/kapacitor/v1/alerts/topics/system/handlers/slack"},
            "id":"slack",
            "kind":"slack",
            "options":{
              "channel":"#alerts"
            }
        },
        {
            "link":{"rel":"self","href":"/kapacitor/v1/alerts/topics/system/handlers/smtp"},
            "id":"smtp",
            "kind":"smtp"
        }
    ]
}

The main:alert_cpu:alert5 topic below is an auto-generated topic from a task that defines handlers explicitly in its TICKscript. Anonymous handlers cannot be listed or modified via the API.

GET /kapacitor/v1/alerts/topics/main:alert_cpu:alert5/handlers
{
    "link":{"rel":"self","href":"/kapacitor/v1/alerts/topics/main:alert_cpu:alert5/handlers"},
    "topic": "main:alert_cpu:alert5",
    "handlers": null
}

Get a topic handler

To query information about a specific handler make a GET request to /kapacitor/v1/alerts/topics/<topic id>/handlers/<handler id>.

Example

GET /kapacitor/v1/alerts/topics/system/handlers/slack
{
  "link": {
    "rel": "self",
      "href": "/kapacitor/v1/alerts/topics/system/handlers/slack"
  },
  "id": "slack",
  "kind": "slack",
  "options": {
    "channel": "#alerts"
  },
  "match": ""
}

Create a topic handler

To create a new handler make a POST request to /kapacitor/v1/alerts/topics/system/handlers.

POST /kapacitor/v1/alerts/topics/system/handlers
{
  "id":"slack",
  "kind":"slack",
  "options": {
    "channel":"#alerts"
  }
}
{
  "link": {
    "rel": "self",
      "href": "/kapacitor/v1/alerts/topics/system/handlers/slack"
  },
  "id": "slack",
  "kind": "slack",
  "options": {
    "channel": "#alerts"
  },
  "match": ""
}

Update a topic handler

To update an existing handler, make either a PUT or PATCH request to /kapacitor/v1/alerts/topics/system/handlers/<handler id>.

Using PUT replaces the entire handler, while PATCH modifies only the specified parts of the handler.

PATCH applies a JSON Patch object to the existing handler; see RFC 6902 for more details.
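To make the PATCH semantics concrete, here is a minimal, illustrative implementation of the RFC 6902 "replace" operation (the server supports the full patch vocabulary; this sketch only shows how a path like /options/channel is resolved):

```python
import copy

def apply_replace_ops(doc, patch):
    """Apply the RFC 6902 "replace" operation to a handler object.

    Illustrative subset only: each path segment after the leading "/" is
    treated as an object key, so "/options/channel" resolves to
    doc["options"]["channel"].
    """
    doc = copy.deepcopy(doc)
    for op in patch:
        if op["op"] != "replace":
            raise NotImplementedError(op["op"])
        *parents, leaf = op["path"].lstrip("/").split("/")
        target = doc
        for key in parents:
            target = target[key]
        target[leaf] = op["value"]
    return doc
```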

Example

Update the topics and options for a handler using the PATCH method.

PATCH /kapacitor/v1/alerts/topics/system/handlers/slack
[
    {"op":"replace", "path":"/topics", "value":["system", "test"]},
    {"op":"replace", "path":"/options/channel", "value":"#testing_alerts"}
]
{
  "link": {
    "rel": "self",
      "href": "/kapacitor/v1/alerts/topics/system/handlers/slack"
  },
  "id": "slack",
  "kind": "slack",
  "options": {
    "channel": "#testing_alerts"
  },
  "match": ""
}

Replace an entire handler using the PUT method.

PUT /kapacitor/v1/alerts/topics/system/handlers/slack
{
  "id": "slack",
  "kind":"slack",
  "options": {
    "channel":"#testing_alerts"
  }
}
{
  "link": {
    "rel": "self",
      "href": "/kapacitor/v1/alerts/topics/system/handlers/slack"
  },
  "id": "slack",
  "kind": "slack",
  "options": {
    "channel": "#testing_alerts"
  },
  "match": ""
}

Remove a topic handler

To remove an existing handler make a DELETE request to /kapacitor/v1/alerts/topics/system/handlers/<handler id>.

DELETE /kapacitor/v1/alerts/topics/system/handlers/<handler id>

Override Kapacitor configurations

You can set configuration overrides via the API for certain sections of the config. Overrides set via the API always take precedence over values in the configuration file. The sections available for overriding include the InfluxDB clusters and the alert handler sections.

The intent of the API is to allow for dynamic configuration of sensitive credentials without requiring that the Kapacitor process be restarted. As such, it is recommended to use either the configuration file or the API to manage these configuration sections, but not both. This will help to eliminate any confusion that may arise as to the source of a given configuration option.

Enable and disable configuration overrides

By default, the ability to override the configuration is enabled. To disable this feature, use the config-override configuration section:

[config-override]
  enabled = false

If the config-override service is disabled, the relevant API endpoints return 403 Forbidden errors.

Recover from bad configurations

If somehow you have created a configuration that causes Kapacitor to crash or otherwise not function, you can disable applying overrides during startup with the skip-config-overrides top level configuration option.

# This configuration option is only a safeguard and should not be needed in practice.
skip-config-overrides = true

This allows you to still access the API to fix any unwanted configuration without applying that configuration during startup.

Note: It is easiest and safest to set this option as the environment variable KAPACITOR_SKIP_CONFIG_OVERRIDES=true, since it is meant to be temporary. That way you do not have to modify your on-disk configuration file or accidentally leave the option in place, causing issues later on.

Overview

The paths for the configuration API endpoints are as follows:

/kapacitor/v1/config/<section name>/[<element name>]

Example:

/kapacitor/v1/config/smtp/
/kapacitor/v1/config/influxdb/localhost
/kapacitor/v1/config/influxdb/remote

The optional <element name> path component corresponds to a specific item from a list of entries.

For example, the above paths correspond to the following configuration sections:

[smtp]
    # SMTP configuration here

[[influxdb]]
    name = "localhost"
    # InfluxDB configuration here for the "localhost" cluster

[[influxdb]]
    name = "remote"
    # InfluxDB configuration here for the "remote" cluster

Retrieve the current configuration

To retrieve the current configuration, perform a GET request to the desired path. The returned configuration is the merged result of the configuration file and any stored overrides. The returned content is a JSON-encoded version of the configuration objects.

Sensitive information is not returned in the request body. Instead, a Boolean value appears in its place, indicating whether the value is empty. A list of which options are redacted is returned for each element.
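For example, a client can combine the redacted list with those Boolean placeholders to check which sensitive options currently hold a value. A small sketch, assuming the element JSON shape shown in the examples below:

```python
def populated_redacted_options(element):
    """Return the redacted option names that currently hold a value.

    Redacted options come back as Booleans in the "options" object:
    true means a non-empty value is set, false means the value is empty.
    """
    return [name for name in element["redacted"] if element["options"].get(name)]
```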

Example

Retrieve all the configuration sections which can be overridden.

GET /kapacitor/v1/config
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/config"},
    "sections": {
        "influxdb": {
            "link" : {"rel": "self", "href": "/kapacitor/v1/config/influxdb"},
            "elements": [
                {
                    "link" : {"rel": "self", "href": "/kapacitor/v1/config/influxdb/localhost"},
                    "options": {
                        "name": "localhost",
                        "urls": ["http://localhost:8086"],
                        "default": true,
                        "username": "",
                        "password": false
                    },
                    "redacted" : [
                        "password"
                    ]
                },
                {
                    "link" : {"rel": "self", "href": "/kapacitor/v1/config/influxdb/remote"},
                    "options": {
                        "name": "remote",
                        "urls": ["http://influxdb.example.com:8086"],
                        "default": false,
                        "username": "jim",
                        "password": true
                    },
                    "redacted" : [
                        "password"
                    ]
                }
            ]
        },
        "smtp": {
            "link" : {"rel": "self", "href": "/kapacitor/v1/config/smtp"},
            "elements": [{
                "link" : {"rel": "self", "href": "/kapacitor/v1/config/smtp/"},
                "options": {
                    "enabled": true,
                    "host": "smtp.example.com",
                    "port": 587,
                    "username": "bob",
                    "password": true,
                    "no-verify": false,
                    "global": false,
                    "to": [ "oncall@example.com"],
                    "from": "kapacitor@example.com",
                    "idle-timeout": "30s"
                },
                "redacted" : [
                    "password"
                ]
            }]
        }
    }
}

Retrieve only the SMTP section.

GET /kapacitor/v1/config/smtp
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/config/smtp"},
    "elements": [{
        "link" : {"rel": "self", "href": "/kapacitor/v1/config/smtp/"},
        "options": {
            "enabled": true,
            "host": "smtp.example.com",
            "port": 587,
            "username": "bob",
            "password": true,
            "no-verify": false,
            "global": false,
            "to": ["oncall@example.com"],
            "from": "kapacitor@example.com",
            "idle-timeout": "30s"
        },
        "redacted" : [
            "password"
        ]
    }]
}

Retrieve the single element from the SMTP section.

GET /kapacitor/v1/config/smtp/
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/config/smtp/"},
    "options": {
        "enabled": true,
        "host": "smtp.example.com",
        "port": 587,
        "username": "bob",
        "password": true,
        "no-verify": false,
        "global": false,
        "to": ["oncall@example.com"],
        "from": "kapacitor@example.com",
        "idle-timeout": "30s"
    },
    "redacted" : [
        "password"
    ]
}

Note: Sections that are not lists can be treated as having an empty string for their element name.

Retrieve only the InfluxDB section.

GET /kapacitor/v1/config/influxdb
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/config/influxdb"},
    "elements" : [
        {
            "link" : {"rel": "self", "href": "/kapacitor/v1/config/influxdb/localhost"},
            "options": {
               "name": "localhost",
               "urls": ["http://localhost:8086"],
               "default": true,
               "username": "",
               "password": false
            },
            "redacted" : [
                "password"
            ]
        },
        {
            "link" : {"rel": "self", "href": "/kapacitor/v1/config/influxdb/remote"},
            "options": {
                "name": "remote",
                "urls": ["http://influxdb.example.com:8086"],
                "default": false,
                "username": "jim",
                "password": true
            },
            "redacted" : [
                "password"
            ]
        }
    ]
}

Retrieve only the remote element of the InfluxDB section.

GET /kapacitor/v1/config/influxdb/remote
{
    "link" : {"rel": "self", "href": "/kapacitor/v1/config/influxdb/remote"},
    "options": {
        "name": "remote",
        "urls": ["http://influxdb.example.com:8086"],
        "default": false,
        "username": "jim",
        "password": true
    },
    "redacted" : [
        "password"
    ]
}

Note: The password value is not returned, but the true value indicates that a non-empty password has been set.

Response

Code | Meaning
---- | -------
200  | Success
403  | Config override service not enabled

Override configuration values

To override a value in the configuration make a POST request to the desired path. The request should contain a JSON object describing what should be modified.

Use the following top level actions:

Key    | Description
------ | -----------
set    | Set the value in the configuration overrides.
delete | Delete the value from the configuration overrides.
add    | Add a new element to a list configuration section.
remove | Remove a previously added element from a list configuration section.

Configuration options not specified in the request will be left unmodified.
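The interaction of these actions with an element's stored overrides can be sketched as follows (illustrative only; the server applies the actions itself):

```python
def apply_override_actions(overrides, set=None, delete=None):
    """Model the set/delete actions against a section's stored overrides.

    Keys named in `set` are added or replaced, keys named in `delete`
    are removed (so the file-configured value applies again), and any
    option not named in the request is left unmodified.
    """
    overrides = dict(overrides)
    overrides.update(set or {})
    for key in delete or ():
        overrides.pop(key, None)
    return overrides
```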

Example

To disable the SMTP alert handler:

POST /kapacitor/v1/config/smtp/
{
    "set":{
        "enabled": false
    }
}

To delete the override for the SMTP alert handler:

POST /kapacitor/v1/config/smtp/
{
    "delete":[
        "enabled"
    ]
}

Actions can be combined in a single request. Enable the SMTP handler, set its host, and remove the port override:

POST /kapacitor/v1/config/smtp/
{
    "set":{
        "enabled": true,
        "host": "smtp.example.com"
    },
    "delete":[
        "port"
    ]
}

Add a new InfluxDB cluster:

POST /kapacitor/v1/config/influxdb
{
    "add":{
        "name": "example",
        "urls": ["https://influxdb.example.com:8086"],
        "default": true,
        "disable-subscriptions": true
    }
}

Remove an existing InfluxDB cluster override:

POST /kapacitor/v1/config/influxdb
{
    "remove":[
        "example"
    ]
}

Note: Only overrides can be removed. InfluxDB clusters that exist in the configuration file cannot be removed.

Modify an existing InfluxDB cluster:

POST /kapacitor/v1/config/influxdb/remote
{
    "set":{
        "disable-subscriptions": false
    },
    "delete": [
        "default"
    ]
}

Response

Code | Meaning
---- | -------
200  | Success
403  | Config override service not enabled
404  | The specified configuration section/option does not exist

Manage storage

Kapacitor exposes some operations that can be performed on the underlying storage.

Warning: Every storage operation directly manipulates the underlying storage database. Always make a backup of the database before performing any of these operations.

Back up storage

Making a GET request to /kapacitor/v1/storage/backup returns a dump of the Kapacitor database. To restore from a backup, replace the kapacitor.db file with the contents of the backup request.

# Create a backup.
curl http://localhost:9092/kapacitor/v1/storage/backup > kapacitor.db
# Restore a backup.
# The destination path is dependent on your configuration.
cp kapacitor.db ~/.kapacitor/kapacitor.db

Stores

Kapacitor’s underlying storage system is organized into different stores. Various actions can be performed on each individual store.

Warning: Every storage operation directly manipulates the underlying storage database. Always make a backup of the database before performing any of these operations.

Available actions:

Action  | Description
------- | -----------
rebuild | Rebuild all indexes in a store. This operation can be very expensive.

To perform an action, make a POST request to /kapacitor/v1/storage/stores/<name of store>.

Example

POST /kapacitor/v1/storage/stores/tasks
{
    "action" : "rebuild"
}

Response

Code | Meaning
---- | -------
204  | Success
400  | Unknown action
404  | The specified store does not exist

Users

Kapacitor exposes operations to manage users.

List all users

To list users, use the GET request method with the /kapacitor/v1/users endpoint.

Example

curl --request GET "http://localhost:9092/kapacitor/v1/users"

Create a user

To create a user, use the POST request method with the /kapacitor/v1/users endpoint.

Define a user as a JSON object with the following properties:

Property    | Description
----------- | -----------
name        | Username
password    | Password
type        | User type (normal or admin, see User)
permissions | List of valid user permission strings (none, api, config_api, write_points, all)

See Kapacitor user types and permissions.
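Before POSTing, a client may want to sanity-check the user object against the valid types and permission strings above. An illustrative sketch (the server performs its own validation):

```python
VALID_TYPES = {"normal", "admin"}
VALID_PERMISSIONS = {"none", "api", "config_api", "write_points", "all"}

def validate_user(user):
    """Reject obviously malformed user objects before they are POSTed."""
    if user.get("type") not in VALID_TYPES:
        raise ValueError(f"invalid user type: {user.get('type')!r}")
    bad = set(user.get("permissions", [])) - VALID_PERMISSIONS
    if bad:
        raise ValueError(f"invalid permissions: {sorted(bad)}")
    return user
```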

Example

curl --request POST 'http://localhost:9092/kapacitor/v1/users' \
  --data '{
    "name": "stan",
    "password": "pass",
    "type":"normal",
    "permissions": ["config_api"]
}'

Update a user

To update a user, use the PATCH request method with the /kapacitor/v1/users/<name> endpoint.

Define a modified user as a JSON object with the following properties:

Property    | Purpose
----------- | -------
password    | Password
type        | User type (normal or admin, see User)
permissions | List of valid user permission strings (none, api, config_api, write_points, all)

Example

curl -XPATCH "http://localhost:9092/kapacitor/v1/users/bob" -d '{ "password": "pass", "type":"admin", "permissions":["all"] }'

Delete a user

To delete a user, use the DELETE request method with the /kapacitor/v1/users/<name> endpoint.

Example

curl -XDELETE "http://localhost:9092/kapacitor/v1/users/steve"

Manage Flux tasks

Kapacitor exposes operations to manage Flux tasks. For more information, see Use Flux tasks.

Create a Flux task

Use the following request method and endpoint to create a new Kapacitor Flux task.

POST /kapacitor/v1/api/v2/tasks

Provide the following with your request (* Required):

Headers

  • * Content-type: application/json

Request body

JSON object with the following schema:

  • * flux: Flux task code
  • status: Flux task status (active or inactive, default is active)
  • description: Flux task description
curl --request POST 'http://localhost:9092/kapacitor/v1/api/v2/tasks' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "flux": "option task = {name: \"CPU Total 1 Hour New\", every: 1h}\n\nhost = \"http://localhost:8086\"\ntoken = \"\"\n\nfrom(bucket: \"db/rp\", host:host, token:token)\n\t|> range(start: -1h)\n\t|> filter(fn: (r) =>\n\t\t(r._measurement == \"cpu\"))\n\t|> filter(fn: (r) =>\n\t\t(r._field == \"usage_system\"))\n\t|> filter(fn: (r) =>\n\t\t(r.cpu == \"cpu-total\"))\n\t|> aggregateWindow(every: 1h, fn: max)\n\t|> to(bucket: \"cpu_usage_user_total_1h\", host:host, token:token)",
    "status": "active",
    "description": "Downsample CPU data every hour"
}'

List Flux tasks

Use the following request method and endpoint to list Kapacitor Flux tasks.

GET /kapacitor/v1/api/v2/tasks

Provide the following with your request (* Required):

Headers

  • * Content-type: application/json

Query parameters

  • after: List tasks after a specific task ID
  • limit: Limit the number of tasks returned (default is 500)
  • name: Filter tasks by name
  • status: Filter tasks by status (active or inactive)

Examples

List all Flux tasks
curl --get 'http://localhost:9092/kapacitor/v1/api/v2/tasks' \
  --header 'Content-Type: application/json'
List a limited number of Flux tasks
curl --get 'http://localhost:9092/kapacitor/v1/api/v2/tasks' \
  --header 'Content-Type: application/json' \
  --data-urlencode "limit=1"
List a specific Flux task by name
curl --get 'http://localhost:9092/kapacitor/v1/api/v2/tasks' \
  --header 'Content-Type: application/json' \
  --data-urlencode "name=example-flux-task-name"
List Flux tasks after a specific task ID
curl --get 'http://localhost:9092/kapacitor/v1/api/v2/tasks' \
  --header 'Content-Type: application/json' \
  --data-urlencode "after=000x00xX0xXXx00"
List only active Flux tasks
curl --get 'http://localhost:9092/kapacitor/v1/api/v2/tasks' \
  --header 'Content-Type: application/json' \
  --data-urlencode "status=active"

Update a Flux task

Use the following request method and endpoint to update an existing Kapacitor Flux task.

PATCH /kapacitor/v1/api/v2/tasks/{taskID}

Provide the following with your request (* Required):

Headers

  • * Content-type: application/json

Path parameters

  • * taskID: Task ID to update

Request body

JSON object with the following schema:

  • cron: Override the cron Flux task option
  • description: New task description
  • every: Override the every Flux task option
  • flux: New Flux task code
  • name: Override the name Flux task option
  • offset: Override the offset Flux task option
  • status: New Flux task status (active or inactive)
Examples

The following examples use the task ID 000x00xX0xXXx00.

Update Flux task code
curl --request PATCH 'http://localhost:9092/kapacitor/v1/api/v2/tasks/000x00xX0xXXx00' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "flux": "option task = {name: \"Updated task name\", every: 1h}\n\nhost = \"http://localhost:8086\"\ntoken = \"\"\n\nfrom(bucket: \"db/rp\", host:host, token:token)\n\t|> range(start: -1h)\n\t|> filter(fn: (r) =>\n\t\t(r._measurement == \"cpu\"))\n\t|> filter(fn: (r) =>\n\t\t(r._field == \"usage_system\"))\n\t|> filter(fn: (r) =>\n\t\t(r.cpu == \"cpu-total\"))\n\t|> aggregateWindow(every: 1h, fn: max)\n\t|> to(bucket: \"cpu_usage_user_total_1h\", host:host, token:token)"
}'
Enable or disable a Flux task
curl --request PATCH 'http://localhost:9092/kapacitor/v1/api/v2/tasks/000x00xX0xXXx00' \
  --header 'Content-Type: application/json' \
  --data-raw '{"status": "inactive"}'
Override Flux task options
curl --request PATCH 'http://localhost:9092/kapacitor/v1/api/v2/tasks/000x00xX0xXXx00' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "every": "1d",
    "name": "New task name",
    "offset": "15m"
}'

Delete a Flux task

Use the following request method and endpoint to delete a Kapacitor Flux task.

DELETE /kapacitor/v1/api/v2/tasks/{taskID}

Provide the following with your request (* Required):

Path parameters

  • * taskID: Task ID to delete
# Delete task ID 000x00xX0xXXx00
curl --request DELETE 'http://localhost:9092/kapacitor/v1/api/v2/tasks/000x00xX0xXXx00'

List Kapacitor Flux task runs

Use the following request method and endpoint to list Kapacitor Flux task runs.

GET /kapacitor/v1/api/v2/tasks/{taskID}/runs

Provide the following with your request (* Required):

Headers

  • * Content-type: application/json

Path parameters

  • * taskID: Task ID

Query parameters

  • after: List task runs after a specific run ID
  • afterTime: Return task runs that occurred after this time (RFC3339 timestamp)
  • beforeTime: Return task runs that occurred before this time (RFC3339 timestamp)
  • limit: Limit the number of task runs returned (default is 100)
Examples

The following examples use the task ID 000x00xX0xXXx00.

List all runs for a Flux task
curl --get 'http://localhost:9092/kapacitor/v1/api/v2/tasks/000x00xX0xXXx00/runs' \
  --header 'Content-Type: application/json'
List a limited number of runs for a Flux task
curl --get 'http://localhost:9092/kapacitor/v1/api/v2/tasks/000x00xX0xXXx00/runs' \
  --header 'Content-Type: application/json' \
  --data-urlencode "limit=10"
List Flux task runs after a specific run ID
curl --get 'http://localhost:9092/kapacitor/v1/api/v2/tasks/000x00xX0xXXx00/runs' \
  --header 'Content-Type: application/json' \
  --data-urlencode "after=XXX0xx0xX00Xx0X"
List Flux task runs that occurred in a time range
curl --get 'http://localhost:9092/kapacitor/v1/api/v2/tasks/000x00xX0xXXx00/runs' \
  --header 'Content-Type: application/json' \
  --data-urlencode 'afterTime=2021-01-01T00:00:00Z' \
  --data-urlencode 'beforeTime=2021-01-31T00:00:00Z'

Retry a Kapacitor Flux task run

Use the following request method and endpoint to retry a Kapacitor Flux task run.

POST /kapacitor/v1/api/v2/tasks/{taskID}/runs/{runID}/retry

Provide the following with your request (* Required):

Path parameters

  • * taskID: Task ID
  • * runID: Run ID to retry
# Retry run ID XXX0xx0xX00Xx0X for task ID 000x00xX0xXXx00
curl --request POST \
  'http://localhost:9092/kapacitor/v1/api/v2/tasks/000x00xX0xXXx00/runs/XXX0xx0xX00Xx0X/retry'

Show all run logs for a task

Use the following request method and endpoint to show Kapacitor Flux task logs.

GET /kapacitor/v1/api/v2/tasks/{taskID}/logs

Provide the following with your request (* Required):

Path parameters

  • * taskID: Task ID
# Get logs for task ID 000x00xX0xXXx00
curl --request GET \
  'http://localhost:9092/kapacitor/v1/api/v2/tasks/000x00xX0xXXx00/logs'

Show logs for a specific Flux task run

Use the following request method and endpoint to show logs for a specific Kapacitor Flux task run.

GET /kapacitor/v1/api/v2/tasks/{taskID}/runs/{runID}/logs

Provide the following with your request (* Required):

Path parameters

  • * taskID: Task ID
  • * runID: Run ID

# Get logs for task ID 000x00xX0xXXx00, run ID XXX0xx0xX00Xx0X
curl --request GET \
  'http://localhost:9092/kapacitor/v1/api/v2/tasks/000x00xX0xXXx00/runs/XXX0xx0xX00Xx0X/logs'

Logging

The logging API is being released under Technical Preview. Kapacitor allows users to retrieve the Kapacitor logs remotely using HTTP chunked transfer encoding. The logs may be queried using key-value pairs corresponding to the log entry fields. These key-value pairs are specified as query parameters.

The logging API returns logs in one of two formats: JSON or logfmt.

To receive logs in JSON format, specify the Content-Type: application/json header. If Kapacitor receives any content type other than application/json, logs are returned in logfmt format.

Each chunk returned to the client will contain a single complete log, followed by a \n.
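Because each chunk carries one complete log entry terminated by `\n`, a client can split the stream on newlines and decode entries independently. A minimal Python sketch for the JSON format (the simulated stream below stands in for the chunked HTTP response body):

```python
import json

# Simulated chunked response body: one complete JSON log per line,
# mirroring the JSON example output in this section.
stream = (
    '{"ts":"2017-11-08T17:40:47.183-05:00","lvl":"info",'
    '"msg":"created log session","service":"sessions"}\n'
)

# Split on newlines and decode each complete entry.
entries = [json.loads(line) for line in stream.splitlines() if line]
for entry in entries:
    print(entry["lvl"], entry["msg"])
```

The same split-on-newline approach works for the logfmt format, substituting a logfmt parser for `json.loads`.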

Example

Logs as JSON

GET /kapacitor/v1preview/logs?task=mytask
Content-Type: application/json

returns the following

{"ts":"2017-11-08T17:40:47.183-05:00","lvl":"info","msg":"created log session","service":"sessions","id":"7021fb9d-467e-482f-870c-d811aa9e74b7","content-type":"application/json","tags":"nil"}
{"ts":"2017-11-08T17:40:47.183-05:00","lvl":"info","msg":"created log session","service":"sessions","id":"7021fb9d-467e-482f-870c-d811aa9e74b7","content-type":"application/json","tags":"nil"}
{"ts":"2017-11-08T17:40:47.183-05:00","lvl":"info","msg":"created log session","service":"sessions","id":"7021fb9d-467e-482f-870c-d811aa9e74b7","content-type":"application/json","tags":"nil"}

Logs as logfmt

GET /kapacitor/v1preview/logs?task=mytask

returns the following

ts=2017-11-08T17:42:47.014-05:00 lvl=info msg="created log session" service=sessions id=ce4d7819-1e38-4bf4-ba54-78b0a8769b7e content-type=
ts=2017-11-08T17:42:47.014-05:00 lvl=info msg="created log session" service=sessions id=ce4d7819-1e38-4bf4-ba54-78b0a8769b7e content-type=
ts=2017-11-08T17:42:47.014-05:00 lvl=info msg="created log session" service=sessions id=ce4d7819-1e38-4bf4-ba54-78b0a8769b7e content-type=
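Lines in this format can be parsed with very little code. A rough Python sketch that handles bare and double-quoted values (not a complete logfmt parser, and the helper name is illustrative):

```python
import shlex

def parse_logfmt(line):
    """Split a logfmt line into a dict; shlex handles double-quoted values."""
    pairs = {}
    for token in shlex.split(line):
        key, _, value = token.partition("=")
        pairs[key] = value
    return pairs

record = parse_logfmt(
    'ts=2017-11-08T17:42:47.014-05:00 lvl=info msg="created log session" '
    'service=sessions content-type='
)
print(record["msg"])
```

Empty values such as `content-type=` parse to empty strings rather than raising an error.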

Test services

Kapacitor makes use of various service integrations. The following API endpoints provide a way for users to run simple tests to ensure that a service is configured correctly.

List testable services

A list of services that can be tested is available at the /kapacitor/v1/service-tests endpoint.

| Query Parameter | Default | Purpose |
| --------------- | ------- | ------- |
| pattern         | *       | Filter results based on the pattern. Uses standard shell glob matching on the service name. |

Example

GET /kapacitor/v1/service-tests
{
    "link": {"rel":"self", "href": "/kapacitor/v1/service-tests"},
    "services" : [
        {
            "link": {"rel":"self", "href": "/kapacitor/v1/service-tests/influxdb"},
            "name": "influxdb",
            "options": {
                "cluster": ""
            }
        },
        {
            "link": {"rel":"self", "href": "/kapacitor/v1/service-tests/slack"},
            "name": "slack",
            "options": {
                "message": "test slack message",
                "channel": "#alerts",
                "level": "CRITICAL"
            }
        },
        {
            "link": {"rel":"self", "href": "/kapacitor/v1/service-tests/smtp"},
            "name": "smtp",
            "options": {
                "to": ["user@example.com"],
                "subject": "test subject",
                "body": "test body"
            }
        }
    ]
}
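The `pattern` parameter uses shell glob matching on the service name, so the filtering can be reproduced client-side with Python's standard `fnmatch` module. A sketch using the service names from the response above:

```python
from fnmatch import fnmatch

# Service names from the example /kapacitor/v1/service-tests response.
services = ["influxdb", "slack", "smtp"]

# Equivalent of GET /kapacitor/v1/service-tests?pattern=s*
matched = [name for name in services if fnmatch(name, "s*")]
print(matched)  # ['slack', 'smtp']
```

`fnmatch` implements the same `*`, `?`, and `[...]` glob syntax, so a pattern tested locally should behave the same way against the endpoint.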

Response

| Code | Meaning |
| ---- | ------- |
| 200  | Success |

Test a service

To test a service, make a POST request to the /kapacitor/v1/service-tests/<service name> endpoint. The contents of the POST body depend on the service being tested. To determine the available options, use a GET request to the same endpoint. The returned options are also the defaults.

Example

To see the available and default options for the Slack service, run the following request.

GET /kapacitor/v1/service-tests/slack
{
    "link": {"rel":"self", "href": "/kapacitor/v1/service-tests/slack"},
    "name": "slack",
    "options": {
        "message": "test slack message",
        "channel": "#alerts",
        "level": "CRITICAL"
    }
}

Test the Slack service integration using custom options:

POST /kapacitor/v1/service-tests/slack
{
    "message": "my custom test message",
    "channel": "@user",
    "level": "OK"
}

A successful response looks like:

{
    "success": true,
    "message": ""
}

A failed response looks like:

{
    "success": false,
    "message": "could not connect to slack"
}
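Since the endpoint returns a 200 even when the tested service fails, client code should key off the `success` field in the response body rather than the HTTP status. A minimal Python sketch using the response shapes shown above (the helper name is illustrative):

```python
import json

def check_test_result(body):
    """Parse a service-test response and raise if the test reported failure."""
    result = json.loads(body)
    if not result["success"]:
        raise RuntimeError(f"service test failed: {result['message']}")
    return result

ok = check_test_result('{"success": true, "message": ""}')
print(ok["success"])  # True
```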

Response

| Code | Meaning |
| ---- | ------- |
| 200  | Success. A 200 is returned even if the service under test fails, as long as the test itself completed correctly. |

Miscellaneous

Ping

You can ‘ping’ the Kapacitor server to validate you have a successful connection. A ping request does nothing but respond with a 204.

Note: The Kapacitor server version is returned in the X-Kapacitor-Version HTTP header on all requests. Ping is a useful request if you simply need to verify the version of the server you are talking to.

Example

GET /kapacitor/v1/ping

Response:

| Code | Meaning |
| ---- | ------- |
| 204  | Success |

Reload sideload

You can trigger a reload of all sideload sources by making an HTTP POST request to /kapacitor/v1/sideload/reload, with an empty body.

Example

POST /kapacitor/v1/sideload/reload

Response:

| Code | Meaning |
| ---- | ------- |
| 204  | Success |

/debug/vars HTTP endpoint

Kapacitor exposes statistics and information about its runtime through the /debug/vars endpoint, which can be accessed using the following cURL command:

curl http://localhost:9092/kapacitor/v1/debug/vars

Server statistics and information are displayed in JSON format.

Note: You can use the Telegraf Kapacitor input plugin to collect metrics (using the /debug/vars endpoint) from specified Kapacitor instances. For a list of the measurements and fields, see the plugin README.

/debug/pprof HTTP endpoints

Kapacitor supports the Go net/http/pprof endpoints, which can be useful for troubleshooting. The pprof package serves runtime profiling data in the format expected by the pprof visualization tool.

curl http://localhost:9092/kapacitor/v1/debug/pprof/

The /debug/pprof/ endpoint generates an HTML page with a list of built-in Go profiles and hyperlinks for each.

| Profile      | Description |
| ------------ | ----------- |
| block        | Stack traces that led to blocking on synchronization primitives. |
| goroutine    | Stack traces of all current goroutines. |
| heap         | Sampling of stack traces for heap allocations. |
| mutex        | Stack traces of holders of contended mutexes. |
| threadcreate | Stack traces that led to the creation of new OS threads. |

To access one of the /debug/pprof/ profiles listed above, use the following cURL request, substituting <profile> with the name of the profile. The resulting profile is output to a file, specified as <path/to/output-file>.

curl -o <path/to/output-file> http://localhost:9092/kapacitor/v1/debug/pprof/<profile>
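For scripted collection, every profile follows the same URL template. A small Python sketch that builds the download URL for each of the built-in profiles listed above:

```python
# Built-in Go pprof profiles exposed by Kapacitor.
profiles = ["block", "goroutine", "heap", "mutex", "threadcreate"]

base = "http://localhost:9092/kapacitor/v1/debug/pprof"
urls = {name: f"{base}/{name}" for name in profiles}

print(urls["heap"])  # http://localhost:9092/kapacitor/v1/debug/pprof/heap
```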

In the following example, the cURL command outputs the resulting heap profile to the file specified in <path/to/output-file>:

curl -o <path/to/output-file> http://localhost:9092/kapacitor/v1/debug/pprof/heap

You can also use the Go pprof interactive tool to access the Kapacitor /debug/pprof/ profiles. For example, to look at the heap profile of a Kapacitor instance using this tool, you would use a command like this:

go tool pprof http://localhost:9092/kapacitor/v1/debug/pprof/heap

For more information, see the Go net/http/pprof package documentation and the documentation for the pprof analysis and visualization tool.

Routes

Displays the available routes for the API.

GET /kapacitor/v1/:routes
