Template tasks

Use templates in the CLI and the API to define and reuse tasks.

To create a task template, do the following:

  1. Create a task template script
  2. Run the define-template command
  3. Define a new task

When creating a task template, consider the following:

  • Chronograf does not display template details, including variable values.
  • Some third-party services reject standard JSON terminated by a newline character. To remove the newline, replace {{ json . }} with {{ jsonCompact . }} in your templates.

Create a task template script

The following task template script computes the mean of a field and triggers an alert.

Example: generic_alert_template.tick

// Which measurement to consume
var measurement string
// Optional where filter
var where_filter = lambda: TRUE
// Optional list of group by dimensions
var groups = [*]
// Which field to process
var field string
// Warning criteria, has access to 'mean' field
var warn lambda
// Critical criteria, has access to 'mean' field
var crit lambda
// How much data to window
var window = 5m
// The slack channel for alerts
var slack_channel = '#alerts'

stream
    |from()
        .measurement(measurement)
        .where(where_filter)
        .groupBy(groups)
    |window()
        .period(window)
        .every(window)
    |mean(field)
    |alert()
         .warn(warn)
         .crit(crit)
         .slack()
         .channel(slack_channel)

Notice that all of the fields in the script are declared as variables, which lets you customize their values when using the template to define a task.

Define variables

In a task template, use the following pattern to define variables:

// Required variable pattern
var varName dataType

// Optional variable patterns
var varName = dataType: defaultValue
var varName = [*]

For information about available data types, see literal value types.
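
For illustration, the following sketch applies both patterns using hypothetical variable names (db_name, period, threshold); most literal defaults are written directly, as in the template script above.

// Required variable: a value must be supplied when defining a task
var field string

// Optional variables with literal defaults
var db_name = 'telegraf'
var period = 10m
var threshold = 90.0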

Optional variables

In some cases, a task template may be used for tasks that do not require values for every template variable. To make a variable optional, provide a default value. For lambda variables such as filters and alert criteria, the default can often simply be TRUE:

// Pattern
var varName = dataType: defaultValue

// Examples
var where_filter = lambda: TRUE
var warn = lambda: TRUE
var groups = [*]

Run the define-template command

To define a new template, run the define-template command:

kapacitor define-template generic_mean_alert -tick path/to/template_script.tick

Use show-template to see more information about the newly created template.

kapacitor show-template generic_mean_alert

A list of the variables declared for the template is returned in the Vars section of the console output, as shown in the following example:

Example: The Vars section of kapacitor show-template output

...

Vars:
Name           Type      Default Value  Description
crit           lambda    <required>     Critical criteria, has access to 'mean' field
field          string    <required>     Which field to process
groups         list      [*]            Optional list of group by dimensions
measurement    string    <required>     Which measurement to consume
slack_channel  string    #alerts        The slack channel for alerts
warn           lambda    <required>     Warning criteria, has access to 'mean' field
where_filter   lambda    TRUE           Optional where filter
window         duration  5m0s           How much data to window

...

Each task will acquire its type and TICKscript structure from the template. Variable descriptions are derived from comments above each variable in the template. The specific values of variables and of the database/retention policy are unique for each task created using the template.
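
To check which templates are currently defined, the CLI also provides a list command; for example:

kapacitor list templates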

Define a new task

Define a new task using the template to trigger an alert on CPU usage.

  1. Pass variable values into the template using a simple JSON file.

    Example: A JSON variable file

    {
       "measurement": {"type" : "string", "value" : "cpu" },
       "where_filter": {"type": "lambda", "value": "\"cpu\" == 'cpu-total'"},
       "groups": {"type": "list", "value": [{"type":"string", "value":"host"},{"type":"string", "value":"dc"}]},
       "field": {"type" : "string", "value" : "usage_idle" },
       "warn": {"type" : "lambda", "value" : "\"mean\" < 30.0" },
       "crit": {"type" : "lambda", "value" : "\"mean\" < 10.0" },
       "window": {"type" : "duration", "value" : "1m" },
       "slack_channel": {"type" : "string", "value" : "#alerts_testing" }
    }
    
  2. Pass in the template ID and the JSON variable file by running the define command with both the -template and -vars arguments:

    kapacitor define cpu_alert -template generic_mean_alert -vars cpu_vars.json -dbrp telegraf.autogen
    
  3. Use the show command to display the variable values associated with the newly created task.

    For Kapacitor instances with authentication enabled, use the following form:

    ./kapacitor -url http://username:password@MYSERVER:9092 show TASKNAME
    
Example
kapacitor show cpu_alert
...

Vars:
Name           Type      Value
crit           lambda    "mean" < 10.0
field          string    usage_idle
groups         list      [host,dc]
measurement    string    cpu
slack_channel  string    #alerts_testing
warn           lambda    "mean" < 30.0
where_filter   lambda    "cpu" == 'cpu-total'
window         duration  1m0s

...
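
Note that defining a task does not start it; the task must also be enabled before it processes data (enabling is otherwise outside the scope of this page). For example:

kapacitor enable cpu_alert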

A similar task for a memory-based alert can be created using the same template. Create a mem_vars.json file using the following snippet.

Example: A JSON variables file for memory alerts

{
    "measurement": {"type" : "string", "value" : "mem" },
    "groups": {"type": "list", "value": [{"type":"star", "value":"*"}]},
    "field": {"type" : "string", "value" : "used_percent" },
    "warn": {"type" : "lambda", "value" : "\"mean\" > 80.0" },
    "crit": {"type" : "lambda", "value" : "\"mean\" > 90.0" },
    "window": {"type" : "duration", "value" : "10m" },
    "slack_channel": {"type" : "string", "value" : "#alerts_testing" }
}

The task can now be defined as before, but this time with the new variables file and a different task identifier.

kapacitor define mem_alert -template generic_mean_alert -vars mem_vars.json -dbrp telegraf.autogen

Running show displays the variable values associated with this task, which are unique to mem_alert.

kapacitor show mem_alert

Again, the Vars section of the output shows the task-specific values:

Example: The Vars section of the mem_alert task

...
Vars:
Name           Type      Value
crit           lambda    "mean" > 90.0
field          string    used_percent
groups         list      [*]
measurement    string    mem
slack_channel  string    #alerts_testing
warn           lambda    "mean" > 80.0
window         duration  10m0s
...

Any number of tasks can be defined using the same template.

Note: Updates to the template will update all associated tasks and reload them if necessary.
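
For example, after editing the template script, re-running define-template with the same template ID updates the template in place, and the change propagates to every task defined from it:

kapacitor define-template generic_mean_alert -tick path/to/template_script.tick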

Using variables

Variables also work with normal tasks and can be used to override any defaults in the script. Because any TICKscript may later prove useful as a template, the recommended best practice is to always use var declarations in TICKscripts. Normal tasks still work as expected, and if you later decide that a similar task is needed, you can easily create a template from the existing TICKscript and define additional tasks with variable files.
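
As a sketch of that workflow (the script and file names existing_alert.tick and more_vars.json are hypothetical), an existing task's TICKscript can be promoted to a template and then reused:

kapacitor define-template my_alert_template -tick existing_alert.tick
kapacitor define another_alert -template my_alert_template -vars more_vars.json -dbrp telegraf.autogen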

Using the -file flag

Starting with Kapacitor 1.4, tasks may be generated from templates using a task definition file, which extends the variables file of previous releases with three new fields:

  • The template-id field is used to select the template.
  • The dbrps field is used to define one or more database/retention policy sets that the task will use.
  • The vars field groups together the variables, which were the core of the file in previous releases.

This file may be in either JSON or YAML.

A task for a memory-based alert can be created using the same template that was defined above. Create a mem_template_task.json file using the following snippet.

Example: A task definition file in JSON

{
  "template-id": "generic_mean_alert",
  "dbrps": [{"db": "telegraf", "rp": "autogen"}],
  "vars": {
    "measurement": {"type" : "string", "value" : "mem" },
    "groups": {"type": "list", "value": [{"type":"star", "value":"*"}]},
    "field": {"type" : "string", "value" : "used_percent" },
    "warn": {"type" : "lambda", "value" : "\"mean\" > 80.0" },
    "crit": {"type" : "lambda", "value" : "\"mean\" > 90.0" },
    "window": {"type" : "duration", "value" : "10m" },
    "slack_channel": {"type" : "string", "value" : "#alerts_testing" }
  }
}

The task can then be defined with the -file argument, which, together with the contents of the task definition file, replaces the -template, -dbrp, and -vars command-line arguments.

kapacitor define mem_alert -file mem_template_task.json

Using YAML, the task definition file mem_template_task.yaml appears as follows:

Example: A task definition file in YAML

template-id: generic_mean_alert
dbrps:
- db: telegraf
  rp: autogen
vars:
  measurement:
    type: string
    value: mem
  groups:
    type: list
    value:
    - type: star
      value: "*"
  field:
    type: string
    value: used_percent
  warn:
    type: lambda
    value: '"mean" > 80.0'
  crit:
    type: lambda
    value: '"mean" > 90.0'
  window:
    type: duration
    value: 10m
  slack_channel:
    type: string
    value: "#alerts_testing"

The task can then be defined with the -file argument as previously shown.

kapacitor define mem_alert -file mem_template_task.yaml

Specifying dbrp implicitly

The following is a simple example that defines a template that computes the mean of a field and triggers an alert, where the dbrp is specified in the template.

Example: Defining the database and retention policy in the template

dbrp "telegraf"."autogen"

// Which measurement to consume
var measurement string
// Optional where filter
var where_filter = lambda: TRUE
// Optional list of group by dimensions
var groups = [*]
// Which field to process
var field string
// Warning criteria, has access to 'mean' field
var warn lambda
// Critical criteria, has access to 'mean' field
var crit lambda
// How much data to window
var window = 5m
// The slack channel for alerts
var slack_channel = '#alerts'

stream
    |from()
        .measurement(measurement)
        .where(where_filter)
        .groupBy(groups)
    |window()
        .period(window)
        .every(window)
    |mean(field)
    |alert()
         .warn(warn)
         .crit(crit)
         .slack()
         .channel(slack_channel)

Define a new template from this template script:

kapacitor define-template implicit_generic_mean_alert -tick path/to/script.tick

Then define a task in a YAML file, implicit_mem_template_task.yaml:

Example: A YAML vars file leveraging a template with a predefined database and retention policy

template-id: implicit_generic_mean_alert
vars:
  measurement:
    type: string
    value: mem
  groups:
    type: list
    value:
    - type: star
      value: "*"
  field:
    type: string
    value: used_percent
  warn:
    type: lambda
    value: '"mean" > 80.0'
  crit:
    type: lambda
    value: '"mean" > 90.0'
  window:
    type: duration
    value: 10m
  slack_channel:
    type: string
    value: "#alerts_testing"

Create the task:

kapacitor define mem_alert -file implicit_mem_template_task.yaml
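
As before, the task's variable values and its database and retention policy, inherited here from the template, can be inspected with the show command:

kapacitor show mem_alert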

Note: When the dbrp value has already been declared in the template, the dbrps field must not appear in the task definition file (for example, in implicit_mem_template_task.yaml). Doing so will cause an error.

