
Time Series Database: Query single-value data points

Last Updated: Nov 02, 2022

Request path and method

Request path: /api/query

Request method: POST

Description: You can call this API operation to query data.

Parameters for requests in the JSON format

  • start (Long, required; default: none): The beginning of the time range to query. Unit: seconds or milliseconds. For more information about how TSDB determines the unit of a timestamp, see the "Timestamp units" section of this topic. Example: 1499158925.

  • end (Long, optional; default: the current time of the TSDB server): The end of the time range to query. Unit: seconds or milliseconds. For more information about how TSDB determines the unit of a timestamp, see the "Timestamp units" section of this topic. Example: 1499162916.

  • queries (Array, required; default: none): The subquery array. For more information, see the "Parameters for subqueries in the JSON format" section.

  • msResolution (boolean, optional; default: false): Specifies whether to convert the unit of timestamps to milliseconds. This parameter takes effect only if the timestamps of the queried data points are measured in seconds: if you set the value to true, the timestamps in the query result are returned in milliseconds; otherwise, the original unit is retained. If the queried data points are measured in milliseconds, the timestamps in the query result are in milliseconds regardless of this setting.

  • hint (Map, optional; default: none): The query hint. For more information, see the "Parameter: hint" section.

Timestamp units

Timestamps can be measured in seconds or milliseconds. TSDB uses the following rules to determine the unit of a timestamp based on the numeric value of the timestamp:

  • If the value falls within the range of [4294968,4294967295], TSDB determines that the timestamp is measured in seconds. In this case, the corresponding date and time range is [1970-02-20 01:02:48, 2106-02-07 14:28:15].

  • If the value falls within the range of [4294967296,9999999999999], the unit is milliseconds. In this case, the date and time range is [1970-02-20 01:02:47.296, 2286-11-21 01:46:39.999].

  • If the value falls within the range of (-∞,4294968) or (9999999999999,+∞), the timestamp is invalid.

Note

These rules apply to the following API operations: /api/put, which is used to write data, and /api/query, which is used to query data.
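The unit-detection rules above can be sketched as a small helper. This is an illustration of the documented rule, not part of the TSDB API, and the function name is hypothetical:

```python
def timestamp_unit(ts: int) -> str:
    """Classify a timestamp per the TSDB rules: seconds, milliseconds, or invalid."""
    if 4294968 <= ts <= 4294967295:
        return "seconds"
    if 4294967296 <= ts <= 9999999999999:
        return "milliseconds"
    return "invalid"
```

For example, 1499158925 is classified as seconds, while 1499158925000 is classified as milliseconds.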

Query data points at a single point in time

TSDB allows you to query data points at a single point in time. To do this, set the start time and the end time to the same value. For example, set both the start and end parameters to 1356998400.

Parameters for subqueries in the JSON format

  • aggregator (String, required; default: none): The aggregate function. For more information, see the "Parameter: aggregator" section. Example: sum.

  • metric (String, required; default: none): The metric name. Example: sys.cpu.0.

  • rate (Boolean, optional; default: false): Specifies whether to calculate the rate of change between consecutive values of the metric. The rate is calculated based on the following formula: Rate = (Vt - Vt-1)/(t - t-1). Example: true.

  • delta (Boolean, optional; default: false): Specifies whether to calculate the delta between consecutive values of the metric. The delta is calculated based on the following formula: Delta = Vt - Vt-1. Example: true.

  • limit (Integer, optional; default: 0): The maximum number of data points to return in each time series on each page for a subquery. Example: 1000.

  • offset (Integer, optional; default: 0): The number of data points to skip in each time series on each page for a subquery. Example: 500.

  • dpValue (String, optional; default: none): The filtering conditions used to filter the returned data points. The following operators are supported: >, <, =, <=, >=, and !=. Example: >=1000.

  • preDpValue (String, optional; default: none): The filtering conditions used to filter the raw data points during scanning. The same operators as those of dpValue are supported. Example: >=1000.

    Note

    preDpValue differs from dpValue. preDpValue filters the stored data points during scanning, whereas dpValue filters the results that are calculated after a query. If you use preDpValue, the data points that do not meet the filtering conditions are excluded from queries and calculations.

  • downsample (String, optional; default: none): The downsampling configuration. Example: 60m-avg.

  • tags (Map, optional; default: none): The tag conditions based on which data is filtered. This parameter is mutually exclusive with the filters parameter.

  • filters (List, optional; default: none): The filtering conditions. This parameter is mutually exclusive with the tags parameter.

  • hint (Map, optional; default: none): The query hint. For more information, see the "Parameter: hint" section.

  • forecasting (String, optional; default: none): Forecasts data in a time series. For more information, see the "Parameter: forecasting" section.

  • abnormaldetect (String, optional; default: none): Performs anomaly detection on a time series. For more information, see the "Parameter: abnormaldetect" section.

Note

  • A query contains a maximum of 200 subqueries.

  • If you specify both the tags and filters parameters, the parameter that you specify at the latter position in the JSON-formatted data takes effect.

  • For more information about the limit, dpValue, downsample, tags, and filters parameters, see the following sections.

Sample requests

Request line: POST /api/query

{
  "start": 1356998400,
  "end": 1356998460,
  "queries": [
    {
      "aggregator": "sum",
      "metric": "sys.cpu.0"
    },
    {
      "aggregator": "sum",
      "metric": "sys.cpu.1"
    }
  ]
}
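A request body like the one above can also be assembled programmatically. The following Python sketch builds the same structure; the helper name build_query is illustrative and not part of any official SDK:

```python
def build_query(start, end, metrics, aggregator="sum"):
    """Build a /api/query request body with one subquery per metric."""
    return {
        "start": start,
        "end": end,
        "queries": [{"aggregator": aggregator, "metric": m} for m in metrics],
    }

# Reproduces the sample request body shown above.
body = build_query(1356998400, 1356998460, ["sys.cpu.0", "sys.cpu.1"])
```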

Parameters: limit and offset

The limit parameter specifies the maximum number of data points to return in each time series for a subquery. The default value is 0, which indicates that no limit is placed on the number of returned data points.

The offset parameter specifies the number of data points to skip in each time series for a subquery. The default value is 0, which indicates that no data points are skipped.
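The paging semantics can be illustrated with ordinary list slicing. This is a sketch of the documented behavior, not server code:

```python
def page(points, limit=0, offset=0):
    """Apply offset/limit paging to the data points of one time series.

    limit == 0 means no limit; offset == 0 means no points are skipped.
    """
    if limit < 0 or offset < 0:
        raise ValueError("limit and offset cannot be negative")
    end = None if limit == 0 else offset + limit
    return points[offset:end]
```

For example, page(points, limit=500, offset=1000) returns the data points ranked 1001 to 1500.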

Important

You cannot set the limit or offset parameter to a negative number.

Example

If you want to obtain the data points whose rankings are 1001 to 1500, set the limit parameter to 500 and the offset parameter to 1000.

    {
       "start":1346046400,
       "end":1347056500,
       "queries":[
          {
             "aggregator":"avg",
             "downsample":"2s-sum",
             "metric":"sys.cpu.0",
             "limit":"500",
             "offset":"1000",
             "tags":{
                "host":"localhost",
                "appName":"hitsdb"
             }
          }
       ]
    }

Parameter: dpValue

The dpValue parameter specifies a value-based condition that is used to filter the data points to be returned. The following operators are supported: >, <, =, <=, >=, and !=.

Important

If the compared value is a string, only the = and != operators can be used.

Example

    {
       "start":1346046400,
       "end":1347056500,
       "queries":[
          {
             "aggregator":"avg",
             "downsample":"2s-sum",
             "metric":"sys.cpu.0",
             "dpValue":">=500",
             "tags":{
                "host":"localhost",
                "appName":"hitsdb"
             }
          }
       ]
    }
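The dpValue filtering described above can be sketched as a small client-side helper. This is an illustration of the semantics, not TSDB code, and the function name is hypothetical:

```python
import operator

# Two-character operators must be checked before their one-character prefixes.
_OPS = {">=": operator.ge, "<=": operator.le, "!=": operator.ne,
        ">": operator.gt, "<": operator.lt, "=": operator.eq}

def filter_dps(dps, dp_value):
    """Keep only data points whose value satisfies a dpValue condition like '>=500'."""
    for sym, op in _OPS.items():
        if dp_value.startswith(sym):
            threshold = float(dp_value[len(sym):])
            return {ts: v for ts, v in dps.items() if op(v, threshold)}
    raise ValueError("unsupported operator in " + dp_value)
```

For example, filter_dps(dps, ">=500") keeps only the data points whose value is at least 500.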

Parameter: delta

If you specify the delta operator in a subquery, each value in the dps map returned by TSDB is a calculated delta. If the dps map contains n key-value pairs when the delta parameter is not specified, the dps map contains only n-1 key-value pairs after the deltas are calculated, because no delta can be calculated for the first data point. The delta operator also applies to values that are produced by downsampling.

After you specify the delta operator, you can configure deltaOptions in the subquery to further control how deltas are calculated. The following list describes the parameters of deltaOptions:

  • counter (Boolean, optional; default: false): Specifies that the metric values used to calculate deltas are treated as cumulative values that monotonically increase or decrease, similar to the values of a counter. The server does not check the metric values. Example: true.

  • counterMax (Integer, optional; default: none): If the counter parameter is set to true, the counterMax parameter specifies the threshold for deltas. If the absolute value of a delta exceeds the threshold, the delta is considered abnormal. If you do not specify counterMax, deltas have no threshold. Example: 100.

  • dropReset (Boolean, optional; default: false): Must be used together with counterMax. If counterMax is specified and a calculated delta is abnormal, dropReset specifies whether to discard the abnormal delta. If dropReset is set to true, the abnormal delta is discarded. If dropReset is set to false, the abnormal delta is reset to 0. Example: true.
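The delta calculation and the counterMax/dropReset handling can be sketched as follows. This is an illustration of the documented semantics under the stated assumptions, not the server implementation:

```python
def deltas(values, counter=False, counter_max=None, drop_reset=False):
    """Compute consecutive deltas, applying counterMax/dropReset handling."""
    out = []
    for prev, cur in zip(values, values[1:]):
        d = cur - prev
        if counter and counter_max is not None and abs(d) > counter_max:
            if drop_reset:
                continue  # discard the abnormal delta
            d = 0         # reset the abnormal delta to 0
        out.append(d)
    return out
```

Note that n input values always yield at most n-1 deltas, matching the behavior described above.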

Example

    {
       "start":1346046400,
       "end":1347056500,
       "queries":[
          {
             "aggregator":"none",
             "downsample":"5s-avg",
             "delta":true,
             "deltaOptions":{
                 "counter":true,
                 "counterMax":100
             },
             "metric":"sys.cpu.0",
             "dpValue":">=50",
             "tags":{
                "host":"localhost",
                "appName":"hitsdb"
             }
          }
       ]
    }

Parameter: downsample

Use this parameter if you need to query data generated in a long time range and aggregate the data based on a specified time interval. A timeline is divided into multiple time ranges based on the specified time interval for downsampling. Each timestamp returned indicates the start of each time range. The following sample code provides an example on the format of the query:

<interval><units>-<aggregator>[-fill policy]

This format is referred to as the downsampling expression.

Important

After the downsample parameter is specified, a time window that has the same length as the specified interval for data aggregation is automatically added to the start and the end of the specified time range. For example, if the specified timestamp range is [1346846401,1346846499] and the specified interval is 5 minutes, the actual timestamp range for the query is [1346846101,1346846799].

The following list describes the fields in the downsampling expression:

  • interval: a numeric value, such as 5 or 60. The special value 0all indicates that all data points in the time range are aggregated into a single value.

  • units: the unit of the interval. s represents seconds, m represents minutes, h represents hours, d represents days, n represents months, and y represents years.

    Note

    • By default, the modulo and truncation operation is performed to align timestamps. Timestamps are aligned by using the following formula: Aligned timestamp = Data timestamp - (Data timestamp % Time interval).

    • You can downsample data based on a calendar time interval. To use the calendar time interval, add c to the end of the value of the units parameter. For example, 1dc indicates the 24-hour period from 00:00 of the current day to 00:00 of the next day.

  • aggregator: the aggregation settings. The following operators are supported for downsampling:

    • avg: Returns the average value.

    • count: Returns the number of data points.

    • first: Returns the first value.

    • last: Returns the last value.

    • min: Returns the minimum value.

    • max: Returns the maximum value.

    • median: Returns the median.

    • sum: Returns the sum of the values.

    • zimsum: Returns the sum of the values.

    • rfirst: Returns the same data point as the first operator, but with the original timestamp instead of the timestamp that is aligned during downsampling.

    • rlast: Returns the same data point as the last operator, but with the original timestamp instead of the aligned timestamp.

    • rmin: Returns the same data point as the min operator, but with the original timestamp instead of the aligned timestamp.

    • rmax: Returns the same data point as the max operator, but with the original timestamp instead of the aligned timestamp.

    Note

    If you set the aggregator to the rfirst, rlast, rmin, or rmax operator in a downsampling expression, you cannot configure the fill policy parameter in the downsampling expression.

    Fill policy

    You can specify fill policies to fill missing values with pre-defined values. During downsampling, each timeline is split based on the specified time interval, and the data points in each time range are aggregated. If a time range in the downsampling result contains no value, you can specify a fill policy to fill the missing value with a pre-defined value. For example, assume that the timestamps of a timeline after downsampling are t+0, t+20, and t+30. If you do not specify a fill policy, only three values are returned. If you set the fill policy to null, four values are returned, and the missing value at t+10 is filled with null.

    The following list describes the fill policies and the values used to fill missing values:

    • none: No values are filled. This is the default policy.

    • nan: Fills missing values with null.

    • null: Fills missing values with null.

    • zero: Fills missing values with 0.

    • linear: Fills missing values with values calculated by linear interpolation.

    • previous: Fills missing values with the previous value.

    • near: Fills missing values with the nearest adjacent value.

    • after: Fills missing values with the next value.

    • fixed: Fills missing values with a user-specified fixed value. For more information, see the "Fixed Fill Policy" section of this topic.

    Fixed Fill Policy

    To fill missing values with a fixed value, append the fixed value after a number sign (#). The fixed value can be a positive or negative number. The following sample code provides an example of the valid format:

    <interval><units>-<aggregator>-fixed#<number>

    Two examples are 1h-sum-fixed#6 and 1h-avg-fixed#-8.

    Downsampling examples

    Three downsampling examples are 1m-avg, 1h-sum-zero, and 1h-sum-near.

    Important

    The downsample parameter is optional for queries. You can set this parameter to null or leave this parameter empty: {"downsample": null} or {"downsample": ""}. In this case, data is not downsampled.
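The timestamp-alignment rule and the query-window expansion described above can be sketched in Python. These helpers illustrate the documented behavior; the names are hypothetical:

```python
def align(ts: int, interval: int) -> int:
    """Align a timestamp to its downsampling bucket:
    aligned = ts - (ts % interval)."""
    return ts - (ts % interval)

def query_window(start: int, end: int, interval: int):
    """Expand a query range by one downsampling interval on each side,
    as TSDB does when the downsample parameter is specified."""
    return (start - interval, end + interval)
```

For example, with a 5-minute (300-second) interval, align(1346846401, 300) returns 1346846400, and query_window(1346846401, 1346846499, 300) returns the expanded range (1346846101, 1346846799) shown above.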

Parameter: aggregator

After data is downsampled, values along multiple timelines are obtained and timestamps of these timelines are aligned. You can perform aggregation to merge these timelines into one by aggregating the values at each aligned timestamp. Aggregation is not performed if only one timeline exists. During aggregation, each timeline must have a value at each aligned timestamp. If no value can be found at an aligned timestamp, interpolation is performed. For more information, see the following "Interpolation" section.

Interpolation

If a timeline has no value at a timestamp, a value is interpolated into the timeline at that timestamp. This occurs only if you do not specify a fill policy and at least one of the other timelines to be aggregated has a value at the timestamp. An example is used to explain interpolation. In this example, you want to merge two timelines by using the sum operator. The downsampling and aggregation settings are {"downsample": "10s-avg", "aggregator": "sum"}. After the data is downsampled, values can be found at the following timestamps along the two timelines:

In timeline 1, values can be found at the timestamps t+0, t+10, t+20, and t+30. In timeline 2, values can be found at the timestamps t+0, t+20, and t+30.

Along timeline 2, the value at the t+10 timestamp is missing. Before the data is aggregated, a value is interpolated for timeline 2 at this timestamp. The interpolation method varies based on the aggregation operator. The following list describes the operators and interpolation methods:

  • avg (returns the average value): Linear interpolation, which is performed based on linear slopes.

  • count (returns the number of data points): The value 0 is interpolated.

  • mimmin (returns the minimum value): The maximum value is interpolated.

  • mimmax (returns the maximum value): The minimum value is interpolated.

  • min (returns the minimum value): Linear interpolation.

  • max (returns the maximum value): Linear interpolation.

  • none (skips data aggregation): The value 0 is interpolated.

  • sum (returns the sum of values): Linear interpolation.

  • zimsum (returns the sum of values): The value 0 is interpolated.
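The linear interpolation used for avg, min, max, and sum can be sketched as follows. This is a minimal illustration of the technique with assumed example values, not TSDB code:

```python
def linear_interpolate(t, t0, v0, t1, v1):
    """Linearly interpolate a missing value at time t between (t0, v0) and (t1, v1)."""
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
```

For example, if timeline 2 has the value 4.0 at t+0 and 8.0 at t+20, the interpolated value at t+10 is 6.0, and that value is then used in the sum aggregation.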

Parameter: filters

You can use the following methods to configure the filters parameter:

  • Specify the tag key.

    • tagk = *: performs a GROUP BY operation on all values of the tag key, so that data points with the same tag value are aggregated together.

    • tagk = tagv1|tagv2: aggregates the data points whose tag value is tagv1, and separately aggregates the data points whose tag value is tagv2.

  • Specify filters in the JSON format. The following parameters are supported:

    • type (String, required; default: none): The filter type. For more information, see the "Filter types" section. Example: literal_or.

    • tagk (String, required; default: none): The tag key. Example: host.

    • filter (String, required; default: none): The filter expression. Example: web01|web02.

    • groupBy (Boolean, optional; default: false): Specifies whether to perform a GROUP BY operation on the tag values. Example: false.

    Filter types

    • literal_or (example: web01|web02): Aggregates the values of each specified tagv. This filter is case-sensitive.

    • wildcard (example: *.example.com): Aggregates the tag values that match the specified wildcard pattern. This filter is case-sensitive.
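The matching behavior of the two filter types can be sketched with a small helper. This illustrates the documented case-sensitive semantics; the function name is hypothetical:

```python
from fnmatch import fnmatchcase  # case-sensitive glob matching

def matches(filter_type, pattern, tagv):
    """Check whether a tag value matches a literal_or or wildcard filter."""
    if filter_type == "literal_or":
        return tagv in pattern.split("|")
    if filter_type == "wildcard":
        return fnmatchcase(tagv, pattern)
    raise ValueError("unknown filter type: " + filter_type)
```

For example, the wildcard pattern *.example.com matches web01.example.com, while literal_or matching of web01|web02 rejects WEB01 because the comparison is case-sensitive.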

    Sample requests

    Sample requests without filters

    Request body:

        {
            "start": 1356998400,
            "end": 1356998460,
            "queries": [
                {
                    "aggregator": "sum",
                    "metric": "sys.cpu.0",
                    "rate": "true",
                    "tags": {
                        "host": "*",
                        "dc": "lga"
                    }
                }
            ]
        }

    Sample request with filters specified

    Request body:

    {
      "start": 1356998400,
      "end": 1356998460,
      "queries": [
        {
          "aggregator": "sum",
          "metric": "sys.cpu.0",
          "rate": "true",
          "filters": [
            {
              "type": "wildcard",
              "tagk": "host",
              "filter": "*",
              "groupBy": true
            },
            {
              "type": "literal_or",
              "tagk": "dc",
              "filter": "lga|lga1|lga2",
              "groupBy": false
            }
          ]
        }
      ]
    }

    Query result

    If a query is successful, the HTTP status code is 200 and the response is returned in the JSON format. The following list describes the response parameters:

    • metric: The metric name.

    • tags: The tags whose values were not aggregated.

    • aggregateTags: The tags whose values were aggregated.

    • dps: The data points, returned as timestamp-value pairs.

    Sample responses:

    [
        {
            "metric": "tsd.hbase.puts",
            "tags": {"appName": "hitsdb"},
            "aggregateTags": [
                "host"
            ],
            "dps": {
                "1365966001": 25595461080,
                "1365966061": 25595542522,
                "1365966062": 25595543979,
                "1365973801": 25717417859
            }
        }
    ]

Parameter: hint

Scenarios

In most cases, a query hint is used to reduce the response time of queries. For example, assume that a query specifies both tag A and tag B, and the set of time series hit by tag B is clearly a subset of the set of time series hit by tag A. Because the intersection of the two sets equals the set of time series hit by tag B, TSDB does not need to read data from the index of tag A.

Format description

  • The current TSDB version allows you to use only the tagk parameter in a hint to specify which indexes are used in a query.

  • All tag keys specified in the tagk map must have the same value. Valid values: 0 and 1. A value of 0 indicates that the indexes that correspond to the specified tag keys are not used. A value of 1 indicates that only the indexes that correspond to the specified tag keys are used.
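The two constraints on tagk values can be sketched as a client-side check that mirrors the error cases documented below. This is an illustrative helper, not part of the API:

```python
def validate_hint(tagk: dict) -> None:
    """Validate a hint's tagk map: every value must be 0 or 1,
    and the map must not contain both 0 and 1."""
    values = set(tagk.values())
    if not values <= {0, 1}:
        raise ValueError("The value of hint can only be 0 or 1")
    if values == {0, 1}:
        raise ValueError("hint values must not contain both 0 and 1")
```

For example, {"dc": 1} is valid, while {"dc": 1, "host": 0} and {"dc": 100} are rejected, matching the two error responses shown in the "Exceptions" section.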

Version description

The query hint feature is supported by TSDB V2.6.1 and later.

Sample requests

Hint that applies to a subquery

{
  "start": 1346846400,
  "end": 1346846400,
  "queries": [
    {
      "aggregator": "none",
      "metric": "sys.cpu.nice",
      "tags": {
        "dc": "lga",
        "host": "web01"
      },
      "hint": {
        "tagk": {
          "dc": 1
        }
      }
    }
  ]
}

Hint that applies to the entire query

{
  "start": 1346846400,
  "end": 1346846400,
  "queries": [
    {
      "aggregator": "none",
      "metric": "sys.cpu.nice",
      "tags": {
        "dc": "lga",
        "host": "web01"
      }
    }
  ],
  "hint": {
    "tagk": {
      "dc": 1
    }
  }
}

Exceptions

An error is returned when the tag values in the key-value pairs specified by the tagk parameter contain both 0 and 1.

{
  "start": 1346846400,
  "end": 1346846400,
  "queries": [
    {
      "aggregator": "none",
      "metric": "sys.cpu.nice",
      "tags": {
        "dc": "lga",
        "host": "web01"
      }
    }
  ],
  "hint": {
    "tagk": {
      "dc": 1,
      "host": 0
    }
  }
}

The following error message is returned:

{
    "error": {
        "code": 400,
        "message": "The value of hint should only be 0 or 1, and there should not be both 0 and 1",
        "details": "TSQuery(start_time=1346846400, end_time=1346846400, subQueries[TSSubQuery(metric=sys.cpu.nice, filters=[filter_name=literal_or, tagk=dc, literals=[lga], group_by=true, filter_name=literal_or, tagk=host, literals=[web01], group_by=true], tsuids=[], agg=none, downsample=null, ds_interval=0, rate=false, rate_options=null, delta=false, delta_options=null, top=0, granularity=null, granularityDownsample=null, explicit_tags=explicit_tags, index=0, realTimeSeconds=-1, useData=auto, limit=0, offset=0, dpValue=null, preDpValue=null, startTime=1346846400000, endTime=1346846400000, Query_ID=null)] padding=false, no_annotations=false, with_global_annotations=false, show_tsuids=false, ms_resolution=false, options=[])"
    }
}                

An error is returned when a tag value in the key-value pairs specified by the tagk parameter is not 0 or 1.

{
  "start": 1346846400,
  "end": 1346846400,
  "queries": [
    {
      "aggregator": "none",
      "metric": "sys.cpu.nice",
      "tags": {
        "dc": "lga",
        "host": "web01"
      }
    }
  ],
  "hint": {
    "tagk": {
      "dc": 100
    }
  }
}

The following error message is returned:

{
    "error": {
        "code": 400,
        "message": "The value of hint can only be 0 or 1, and it is detected that '100' is passed in",
        "details": "TSQuery(start_time=1346846400, end_time=1346846400, subQueries[TSSubQuery(metric=sys.cpu.nice, filters=[filter_name=literal_or, tagk=dc, literals=[lga], group_by=true, filter_name=literal_or, tagk=host, literals=[web01], group_by=true], tsuids=[], agg=none, downsample=null, ds_interval=0, rate=false, rate_options=null, delta=false, delta_options=null, top=0, granularity=null, granularityDownsample=null, explicit_tags=explicit_tags, index=0, realTimeSeconds=-1, useData=auto, limit=0, offset=0, dpValue=null, preDpValue=null, startTime=1346846400000, endTime=1346846400000, Query_ID=null)] padding=false, no_annotations=false, with_global_annotations=false, show_tsuids=false, ms_resolution=false, options=[])"
    }
}

Parameter: forecasting

You can perform AI training to forecast the data points of a time series in a future period. During AI training, the existing data of the time series is used as the training set to identify data trends and cycles. The following sample code provides an example of the query format:

<AlgorithmName>-<ForecastPointCount>[-<ForecastPolicy>]                         

The following list describes the fields in the query:

  • AlgorithmName: The name of the algorithm. The arima and holtwinters algorithms are supported.

  • ForecastPointCount: the number of data points to be forecasted. Specify an integer. For example, the value 2 specifies that two data points are forecasted.

  • ForecastPolicy: the forecasting policy. The forecasting policy varies based on the specified algorithm.

    • If the AlgorithmName parameter is set to arima, the ForecastPolicy parameter supports the following options:

      Note
      • The delta parameter specifies the difference between two values. The default value is 1. You can increase the delta to reduce data fluctuations.

      • The seasonality parameter specifies the cycle of fluctuations. The default value is 1. If the data fluctuates on a regular basis, you can specify the seasonality parameter to adjust the forecasting cycle. For example, if the data fluctuates once every 10 data points, set the seasonality parameter to 10.

    • If the AlgorithmName parameter is set to holtwinters, the ForecastPolicy parameter supports the following option:

      Note

      The seasonality parameter specifies the cycle of fluctuations. The default value is 1. If the data fluctuates on a regular basis, you can specify the seasonality parameter to adjust the forecasting cycle. For example, if the data fluctuates once every 10 data points, set the seasonality parameter to 10.

Forecasting examples

Examples: arima-1, arima-48-1-48, and holtwinters-1-1. The following code provides an example of the existing data in a series:

[
  {
      "metric": "sys.cpu.nice",
      "tags": {
          "dc": "lga",
          "host": "web00"
      },
      "aggregateTags": [],
      "dps": {
          "1346837400": 1,
          "1346837401": 2,
          "1346837402": 3,
          "1346837403": 4,
          "1346837404": 5,
          "1346837405": 6,
          "1346837406": 7,
          "1346837407": 8,
          "1346837408": 9,
          "1346837409": 10,
          "1346837410": 11,
          "1346837411": 12
      }
  }
]

The following code shows the query criteria:

{
     "start":1346837400,
     "end": 1346847400,
     "queries":[
        {
           "aggregator":"none",
           "metric":"sys.cpu.nice",
           "forecasting" : "arima-1"
        }
     ]
}

The following code shows the forecast result. The forecasted data point ("1346837412": 13) is appended to the dps map:

[
    {
        "metric": "sys.cpu.nice",
        "tags": {
            "dc": "lga",
            "host": "web00"
        },
        "aggregateTags": [],
        "dps": {
            "1346837400": 1,
            "1346837401": 2,
            "1346837402": 3,
            "1346837403": 4,
            "1346837404": 5,
            "1346837405": 6,
            "1346837406": 7,
            "1346837407": 8,
            "1346837408": 9,
            "1346837409": 10,
            "1346837410": 11,
            "1346837411": 12,
            "1346837412": 13
        }
    }
]

Parameter: abnormaldetect

You can perform AI training to detect abnormal data points in a time series. During AI training, the existing data of the time series is used as the training set to identify data trends and cycles. The following sample code provides an example of the query format:

<AlgorithmName>[-<Sigma>-<NP>-<NS>-<NT>-<NL>]

Only the Seasonal and Trend decomposition using Loess (STL) algorithm is supported for anomaly detection. If you are not familiar with parameter tuning, we recommend that you use the default parameter values. If you are familiar with STL, you can tune the parameters for more accurate detection. The six parameters, AlgorithmName-Sigma-NP-NS-NT-NL, are separated with hyphens (-). For example, you can set the parameters to stl-5-5-7-0-0. The following list describes the parameters:

  • AlgorithmName: the name of the algorithm. Only stl is supported.

  • Sigma: the anomaly threshold, expressed as a multiple of the standard deviation. If the absolute value of the difference between the value of a data point and the average value of the time series exceeds Sigma times the standard deviation, the data point is considered abnormal. In most cases, this parameter is set to 3.0.

  • NP: the number of data points in each cycle.

  • NS: the seasonal smoothing parameter.

  • NT: the trend smoothing parameter.

  • NL: the low-pass filter smoothing parameter.

Anomaly detection examples

Example 1

"abnormaldetect": "stl",

Example 2

"abnormaldetect": "stl-5-5-7-0-0",

Query examples

{
       "start":1346836400,
       "end":1346946400,
       "queries":[
        {
             "aggregator": "none",
             "metric":     "sys.cpu.nice",
             "abnormaldetect":  "stl-5-5-7-0-0",
             "filters": [
             {
                "type":   "literal_or",
                "tagk":   "dc",
                "filter": "lga",
                "groupBy": false
             },
             {
                "type":   "literal_or",
                "tagk":   "host",
                "filter": "web00",
                "groupBy": false
             }
             ]
          }
       ]
}

Output examples

The result of anomaly detection in TSDB is a list in the following format:

[srcValue, upperValue, lowerValue, predictValue, isAbnormal]

  • srcValue: the value of the original data point.

  • upperValue: the maximum value of the data point.

  • lowerValue: the minimum value of the data point.

  • predictValue: the data point value that is forecasted by an STL algorithm.

  • isAbnormal: specifies whether the original value is abnormal. A value of 0 indicates that the original value is normal. A value of 1 indicates that the original value is abnormal.

    [
        {
            "metric": "sys.cpu.nice",
            "tags": {
                "dc": "lga",
                "host": "web00"
            },
            "aggregateTags": [],
            "dps": {
                "1346837400": [
                    1,
                    1.0000000000000049,
                    0.9999999999999973,
                    1.0000000000000013,
                    0
                ],
                "1346837401": [
                    2,
                    2.0000000000000036,
                    1.9999999999999958,
                    1.9999999999999998,
                    0
                ],
                "1346837402": [
                    3,
                    3.0000000000000036,
                    2.9999999999999956,
                    3,
                    0
                ],
                "1346837403": [
                    4,
                    4.0000000000000036,
                    3.9999999999999956,
                    4,
                    1
                ],
                "1346837404": [
                    5,
                    5.0000000000000036,
                    4.9999999999999964,
                    5,
                    0
                ],
                "1346837405": [
                    6,
                    6.000000000000002,
                    5.999999999999995,
                    5.999999999999998,
                    0
                ],
                "1346837406": [
                    7,
                    7.0000000000000036,
                    6.9999999999999964,
                    7,
                    1
                ],
                "1346837407": [
                    8,
                    8.000000000000004,
                    7.9999999999999964,
                    8,
                    0
                ],
                "1346837408": [
                    9,
                    9.000000000000004,
                    8.999999999999996,
                    9,
                    0
                ],
                "1346837409": [
                    10,
                    10.000000000000004,
                    9.999999999999996,
                    10,
                    0
                ],
                "1346837410": [
                    11,
                    11.000000000000005,
                    10.999999999999998,
                    11.000000000000002,
                    0
                ],
                "1346837411": [
                    12,
                    12.000000000000004,
                    11.999999999999996,
                    12,
                    0
                ]
            }
        }
    ]
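Given a response in the format above, the flagged data points can be extracted with a short sketch. The helper name is illustrative and not part of the API; it only reads the fifth element (isAbnormal) of each data point:

```python
def abnormal_timestamps(dps):
    """Collect timestamps flagged abnormal in an abnormaldetect result, where
    each data point is [srcValue, upperValue, lowerValue, predictValue, isAbnormal]."""
    return sorted(ts for ts, point in dps.items() if point[4] == 1)
```

Applied to the sample response above, this returns the two timestamps whose isAbnormal flag is 1: 1346837403 and 1346837406.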