Request path and method
Path | Method | Description |
---|---|---|
/api/query | POST | Queries the data |
Request content
The request content is in JSON format, with the following fields:
Name | Type | Required | Description | Default value | Example |
---|---|---|---|---|---|
start | Long | Yes | Start time, in seconds or milliseconds. For details about how the unit is determined, see Timestamp unit determination below. | None | 1499158925 |
end | Long | No | End time, in seconds or milliseconds. For details about how the unit is determined, see Timestamp unit determination below. Defaults to the current time of the TSDB server. | Current time | 1499162916 |
queries | Array | Yes | Subquery array | None | See the subquery description. |
msResolution | Boolean | No | Specifies whether the timestamps of returned data points are in milliseconds. Setting msResolution to true is recommended; otherwise, data points within the same second are downsampled according to the requested downsampling function, and if no downsampling operator is specified, the original values are returned with second-level timestamps. | false | true |
Timestamp unit determination
A timestamp may be in seconds or milliseconds. TSDB determines the unit from the numeric value according to the following rules:
- When the timestamp is in the range [4284768, 9999999999]: the unit is second, corresponding to the time range [1970-02-20 00:59:28, 2286-11-21 01:46:39].
- When the timestamp is in the range [10000000000, 9999999999999]: the unit is millisecond, corresponding to the time range [1970-04-27 01:46:40.000, 2286-11-21 01:46:39.999].
- When the timestamp is in the range (-∞, 4284768) or (9999999999999, +∞): the timestamp is considered invalid.
Note: This description applies only to the data write (/api/put) and data query (/api/query) APIs.
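The unit rules above can be sketched as a small classifier; the function name and return labels are illustrative, not part of the API.

```python
def timestamp_unit(ts: int) -> str:
    """Classify a timestamp per the range rules above (boundary values from the doc)."""
    if 4284768 <= ts <= 9999999999:
        return "second"
    if 10000000000 <= ts <= 9999999999999:
        return "millisecond"
    return "invalid"
```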
Single timestamp data query
TSDB supports querying a single timestamp by setting the start time equal to the end time, for example, "start":1356998400, "end":1356998400.
Supports millisecond accuracy
TSDB timestamps support millisecond precision. When data is written with millisecond timestamps, pass msResolution:true when querying.
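For instance, a millisecond-precision query body might look like the following; the metric name and timestamps are illustrative.

```json
{
    "start": 1499158925000,
    "end": 1499158955000,
    "msResolution": true,
    "queries": [
        {
            "aggregator": "sum",
            "metric": "sys.cpu.0"
        }
    ]
}
```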
Subquery in JSON format
Name | Type | Required | Description | Default value | Example |
---|---|---|---|---|---|
aggregator | String | Yes | Aggregate function. For details, see Aggregation description below. | None | sum |
metric | String | Yes | Metric name | None | sys.cpu0 |
rate | Boolean | No | Specifies whether to calculate the growth rate of the metric value, computed as (Vt - Vt-1) / (t - t-1) | false | true |
limit | Integer | No | Data pagination: the maximum number of data points returned by the subquery | 0 | 1000 |
offset | Integer | No | Data pagination: the offset of the data points returned by the subquery | 0 | 500 |
dpValue | String | No | Filters returned data points by value; supported conditions: ">", "<", "=", "<=", ">=", "!=" | None | >=1000 |
downsample | String | No | Downsamples time series | None | 60m-avg |
tags | Map | No | The specified tag | None | - |
filters | List | No | The filters | None | - |
Note:
- A query can contain no more than 200 subqueries.
- In the scenario where both tags and filters are specified, the filter conditions specified later take effect.
- For details about “downsample”, “tags”, and “filters”, see the corresponding descriptions below.
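The rate calculation in the table above can be sketched as follows; the function name and the (timestamp, value) pair representation are assumptions for illustration.

```python
def rate(points):
    """Growth rate between consecutive points: (Vt - Vt-1) / (t - t-1).
    `points` is a list of (timestamp, value) pairs sorted by timestamp."""
    out = []
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        out.append((t1, (v1 - v0) / (t1 - t0)))
    return out
```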
Query example
Request: POST /api/query
Body:
{
    "start": 1356998400,
    "end": 1356998460,
    "queries": [
        {
            "aggregator": "sum",
            "metric": "sys.cpu.0"
        },
        {
            "aggregator": "sum",
            "metric": "sys.cpu.1"
        }
    ]
}
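Constructing the request above in client code might look like the following sketch using the Python standard library; the base URL, host, and port are placeholder assumptions, not values from this document. The request is only built here, not sent.

```python
import json
import urllib.request

def build_query_request(base_url, start, end, queries):
    """Build (but do not send) a POST /api/query request."""
    body = json.dumps({"start": start, "end": end, "queries": queries}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/api/query",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder endpoint; replace with your TSDB instance address.
req = build_query_request(
    "http://localhost:8242",
    1356998400, 1356998460,
    [{"aggregator": "sum", "metric": "sys.cpu.0"}],
)
```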
Data subpage query (Limit and Offset) description
Limit: The maximum number of returned subquery data points. The default value is 0, which indicates no limit on the number of returned data points.
Offset: The offset of returned subquery data points. The default value is 0, which indicates no offset of returned data points.
Note: Negative limit and offset are not allowed.
Example
If you want to return data points 1001 through 1500, set limit to 500 and offset to 1000.
{
    "start": 1346046400,
    "end": 1347056500,
    "queries": [
        {
            "aggregator": "avg",
            "downsample": "2s-sum",
            "index": 0,
            "metric": "sys.cpu.0",
            "limit": 500,
            "offset": 1000,
            "tags": {
                "host": "localhost",
                "appName": "hitsdb"
            }
        }
    ]
}
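Deriving limit and offset for a given page follows directly from the example above; this helper and its 1-based page convention are illustrative assumptions.

```python
def page_params(page, page_size):
    """limit/offset for 1-based page numbers: page 3 of size 500 covers points 1001-1500."""
    return {"limit": page_size, "offset": (page - 1) * page_size}
```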
dpValue description
Filter the final returned data points according to the value limit conditions set by users. Filter conditions of “>”, “<”, “=”, “<=”, “>=”, “!=” are available.
Example
{
    "start": 1346046400,
    "end": 1347056500,
    "queries": [
        {
            "aggregator": "avg",
            "downsample": "2s-sum",
            "metric": "sys.cpu.0",
            "dpValue": ">=500",
            "tags": {
                "host": "localhost",
                "appName": "hitsdb"
            }
        }
    ]
}
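The semantics of a dpValue condition can be mirrored client-side as a sketch; the server applies this filter itself, and the function below is only an illustration of the comparison behavior.

```python
import operator

# Two-character operators listed first so ">=500" is not parsed as ">" with "=500".
_OPS = {">=": operator.ge, "<=": operator.le, "!=": operator.ne,
        ">": operator.gt, "<": operator.lt, "=": operator.eq}

def dp_filter(dp_value, dps):
    """Apply a dpValue condition such as ">=500" to a dps dict of timestamp -> value."""
    for op in _OPS:
        if dp_value.startswith(op):
            threshold = float(dp_value[len(op):])
            return {t: v for t, v in dps.items() if _OPS[op](v, threshold)}
    raise ValueError("unsupported condition: " + dp_value)
```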
Downsample description
When the query time range is large, only data at a certain precision needs to be returned. The downsample expression format is as follows:
<interval><units>-<aggregator>[-fill policy]
Wherein:
- interval: a numeric value, such as 5 or 60. "0all" indicates that the entire time dimension is aggregated to a single point.
- units: s indicates second, m indicates minute, h indicates hour, d indicates day, n indicates month, and y indicates year.
  Note: Calendar-based downsampling is supported. To use calendar boundaries, append a "c" to the interval units. For example, "1dc" indicates a calendar day from 00:00 to 24:00.
- aggregator: the supported downsampling operators are as follows.
Operator | Description |
---|---|
avg | Average value. |
count | Number of data points. |
first | The first value. |
last | The last value. |
mimmin | Minimum value. |
mimmax | Maximum value. |
median | Find the median. |
sum | Summation. |
zimsum | Summation. |
rfirst | Same as first, except that each returned data point keeps the timestamp of the original data point rather than the downsample-aligned timestamp. |
rlast | Same as last, except that each returned data point keeps the timestamp of the original data point rather than the downsample-aligned timestamp. |
rmin | Same as mimmin, except that each returned data point keeps the timestamp of the original data point rather than the downsample-aligned timestamp. |
rmax | Same as mimmax, except that each returned data point keeps the timestamp of the original data point rather than the downsample-aligned timestamp. |
Note: When the downsampling aggregation operator is rfirst, rlast, rmin, or rmax, a fill policy cannot be specified in the downsampling expression.
Fill policy
Through downsampling, each time series is split into ranges at the specified precision, and the data in each range is aggregated into one value. If a range has no value, the fill policy determines the value filled in at that time point. For example, if the timestamps of a time series after downsampling are t+0, t+20, and t+30, then without a fill policy the series has only three values; with the fill policy "null", it has four values, and the value at t+10 is null.
The following table lists the relationship between the fill policy and specific filling value.
Fill Policy | Filling value |
---|---|
none | No value is filled by default. |
nan | NaN |
null | null |
zero | 0 |
linear | linear interpolation |
previous | the previous value |
near | the nearest value |
after | the next value |
fixed | designate a fixed value (please see the following example) |
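The t+0/t+10/t+20/t+30 example above can be sketched as follows; only the null and zero policies are shown, and the function name is an illustrative assumption.

```python
def fill_missing(dps, start, end, step, fill="null"):
    """Fill empty downsample buckets: missing timestamps get the policy's fill value."""
    fill_value = {"null": None, "zero": 0}[fill]
    return {t: dps.get(t, fill_value) for t in range(start, end + 1, step)}
```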
Fixed Fill Policy
Method: Append the fixed value after "#". Both positive and negative values are allowed. The format is as follows:
<interval><units>-<aggregator>-fixed#<value>
Examples: 1h-sum-fixed#6, 1h-avg-fixed#-8
Downsampling example
Examples: 1m-avg, 1h-sum-zero, 1h-sum-near
Note: Downsampling is optional when querying. You can set the value to null or an empty string (""), for example, {"downsample": null} or {"downsample": ""}, so that downsampling is not triggered.
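Splitting a downsample expression into its parts can be sketched as below; this parser is illustrative and not the server's implementation.

```python
def parse_downsample(expr):
    """Split <interval><units>-<aggregator>[-<fill policy>] into its components."""
    parts = expr.split("-", 2)
    interval_units, aggregator = parts[0], parts[1]
    fill = parts[2] if len(parts) == 3 else None
    # Separate the leading digits (interval) from the unit suffix.
    i = 0
    while i < len(interval_units) and interval_units[i].isdigit():
        i += 1
    return {"interval": int(interval_units[:i]), "units": interval_units[i:],
            "aggregator": aggregator, "fill": fill}
```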
Aggregation description
After downsampling, multiple time series are generated and their timestamps are aligned. Aggregation is the action of combining these multiple time series into one by aggregating the values at each timestamp. During aggregation, each time series must have a value at each timestamp. If a time series has no value at a timestamp, interpolation is performed. The details about interpolation are as follows.
Interpolation
If a time series has no value at a timestamp (and no fill policy fills one in), while another time series being aggregated does have a value at that timestamp, a value is interpolated for the missing series at that timestamp.
For example, when the downsampling and aggregation conditions are {“downsample”: “10s-avg”, “aggregator”: “sum”}, and there are two time series that need to use “sum” for aggregation, after downsampling by “10s-avg”, the timestamps of the two time series that have values are:
line 1: t+0, t+10, t+20, t+30
line 2: t+0, t+20, t+30
Line 2 lacks the value at time point t+10. Therefore, before aggregation, a value is interpolated for line 2 at t+10. The interpolation method is determined by the aggregation operator; for details, see the following table.
Operator | Description | Interpolation method |
---|---|---|
avg | Average value | Linear interpolation (slope fitting) |
count | Number of data points | 0 is interpolated |
mimmin | Minimum value | The maximum value is interpolated. |
mimmax | Maximum value | The minimum value is interpolated. |
min | Minimum value | Linear interpolation |
max | Maximum value | Linear interpolation |
none | No operation | 0 is interpolated. |
sum | Summation | Linear interpolation |
zimsum | Summation | 0 is interpolated. |
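The two-line example above, with sum aggregation and linear interpolation, can be sketched as follows; series are represented as timestamp-to-value dicts, an assumption for illustration, and boundary timestamps outside a series' range are not handled.

```python
def linear_interpolate(series, t):
    """Linearly interpolate a missing value at time t from its neighbors in `series`."""
    times = sorted(series)
    lo = max(x for x in times if x < t)
    hi = min(x for x in times if x > t)
    frac = (t - lo) / (hi - lo)
    return series[lo] + frac * (series[hi] - series[lo])

def aggregate_sum(series_list):
    """Sum-aggregate multiple series, linearly interpolating missing timestamps."""
    all_ts = sorted(set().union(*series_list))
    return {t: sum(s[t] if t in s else linear_interpolate(s, t) for s in series_list)
            for t in all_ts}
```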
Filters description
You can specify a filter in any of the following ways:
- Specify a filter when specifying the TagKey:
  - tagk = *: performs groupBy on the TagValues of the TagKey, aggregating the values under each TagValue separately.
  - tagk = tagv1|tagv2: aggregates the values under tagv1 and tagv2 of the TagKey respectively.
- Specify a filter in JSON format:
Name | Type | Required | Description | Default value | Example |
---|---|---|---|---|---|
type | String | Yes | Filter type. For details, see the description below. | None | literal_or |
tagk | String | Yes | The TagKey name | None | host |
filter | String | Yes | Filter expression | None | web01|web02 |
groupBy | Boolean | No | Specify whether to perform “groupBy” for TagValue | false | false |
Filter type description
Name | Example | Description |
---|---|---|
literal_or | web01|web02 | Perform aggregation for multiple TagValues respectively. It is case sensitive. |
wildcard | *mysite.com | Perform aggregation for TagValues that match the specified wildcard respectively. It is case sensitive. |
Query example
Example of queries without filter
Body:
{
    "start": 1356998400,
    "end": 1356998460,
    "queries": [
        {
            "aggregator": "sum",
            "metric": "sys.cpu.0"
        },
        {
            "aggregator": "sum",
            "metric": "sys.cpu.1"
        }
    ]
}
Example of queries with filter
Body:
{
    "start": 1356998400,
    "end": 1356998460,
    "queries": [
        {
            "aggregator": "sum",
            "metric": "sys.cpu.0",
            "rate": true,
            "filters": [
                {
                    "type": "wildcard",
                    "tagk": "host",
                    "filter": "*",
                    "groupBy": true
                },
                {
                    "type": "literal_or",
                    "tagk": "dc",
                    "filter": "lga|lga1|lga2",
                    "groupBy": false
                }
            ]
        }
    ]
}
Query result description
For a successful query, the HTTP response code is “200” and the response is returned in JSON format, as shown in the following table:
Name | Description |
---|---|
metric | Metric name |
tags | Tags whose TagValues are not aggregated |
aggregateTags | Tags whose TagValues are aggregated |
dps | Data points, as timestamp-value pairs |
Response example:
[
    {
        "metric": "tsd.hbase.puts",
        "tags": {"appName": "hitsdb"},
        "aggregateTags": [
            "host"
        ],
        "dps": {
            "1365966001": 25595461080,
            "1365966061": 25595542522,
            "1365966062": 25595543979,
            "1365973801": 25717417859
        }
    }
]
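Consuming a response of this shape might look like the following sketch; the response text is abbreviated from the example above, and the variable names are illustrative.

```python
import json

response_text = """[{"metric": "tsd.hbase.puts",
                     "tags": {"appName": "hitsdb"},
                     "aggregateTags": ["host"],
                     "dps": {"1365966001": 25595461080, "1365966061": 25595542522}}]"""

# Each result item carries the metric, remaining tags, aggregated tag keys, and dps.
results = json.loads(response_text)
for item in results:
    # dps keys are string timestamps; convert and sort to iterate chronologically.
    points = sorted((int(ts), value) for ts, value in item["dps"].items())
    first_ts, first_value = points[0]
```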