Queries data points from one or more time series over a specified time range. Submit a POST request to /api/query with a JSON body that defines the time range and one or more subqueries.
Request syntax
POST /api/query
Request parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| start | Long | Yes | — | The start of the time range. Unit: seconds or milliseconds. TSDB determines the unit based on the numeric value. See Timestamp units. |
| end | Long | No | Current server time | The end of the time range. Unit: seconds or milliseconds. Defaults to the current time on the TSDB server. See Timestamp units. |
| queries | Array | Yes | — | An array of subqueries. See Subquery parameters. |
| msResolution | Boolean | No | false | Specifies whether to return timestamps in milliseconds. Applies only when the queried data points use second-precision timestamps. If true, all timestamps in the response are converted to milliseconds. If false, the original unit is preserved. Data points stored with millisecond timestamps are always returned in milliseconds, regardless of this setting. |
| hint | Map | No | — | A query hint to reduce response time. See Parameter: hint. |
Timestamp units
TSDB determines the timestamp unit from the numeric value:
| Range | Unit | Date range |
|---|---|---|
| [4294968, 4294967295] | Seconds | 1970-02-20 01:02:48 – 2106-02-07 14:28:15 |
| [4294967296, 9999999999999] | Milliseconds | 1970-02-20 01:02:47.296 – 2286-11-21 01:46:39.999 |
| (-∞, 4294968) or (9999999999999, +∞) | Invalid | — |
These rules apply to both /api/put (write) and /api/query (query).
To query data points at a single point in time, set start and end to the same value—for example, 1356998400.
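The unit-detection rule in the table above can be expressed as a small client-side check. This is an illustrative re-implementation of the documented value ranges, not code from TSDB itself:

```python
def timestamp_unit(ts: int) -> str:
    """Classify a timestamp according to the value ranges above."""
    if 4294968 <= ts <= 4294967295:
        return "seconds"
    if 4294967296 <= ts <= 9999999999999:
        return "milliseconds"
    return "invalid"
```

For example, `timestamp_unit(1356998400)` yields `"seconds"`, while the same instant in milliseconds, `1356998400000`, falls in the milliseconds range.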
Subquery parameters
Each object in the queries array supports the following parameters:
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| aggregator | String | Yes | — | The aggregation function. See Parameter: aggregator. Example: sum. |
| metric | String | Yes | — | The metric name. Example: sys.cpu0. |
| rate | Boolean | No | false | Specifies whether to calculate the growth rate between consecutive values. Formula: (Vt − Vt-1) / (t − t-1). |
| delta | Boolean | No | false | Specifies whether to calculate the delta between consecutive values. Formula: Vt − Vt-1. See Parameter: delta. |
| limit | Integer | No | 0 | The maximum number of data points to return per time series per page. 0 means no limit. |
| offset | Integer | No | 0 | The number of data points to skip per time series per page. 0 means no data points are skipped. |
| dpValue | String | No | — | Filters returned data points by value. Supported operators: >, <, =, <=, >=, !=. If the value is a string, only = and != are supported. Applied after aggregation. |
| preDpValue | String | No | — | Filters raw data points during scanning, before aggregation. Same operators as dpValue. Data points that fail this filter are excluded from all calculations. |
| downsample | String | No | — | The downsampling configuration. See Parameter: downsample. Example: 60m-avg. |
| tags | Map | No | — | Tag-based filter conditions. Mutually exclusive with filters. If both are specified, the one that appears later in the JSON takes effect. |
| filters | List | No | — | Filter conditions in JSON format. Mutually exclusive with tags. If both are specified, the one that appears later in the JSON takes effect. See Parameter: filters. |
| hint | Map | No | — | A subquery-level query hint. See Parameter: hint. |
| forecasting | String | No | — | Forecasts future data points using AI training. See Parameter: forecasting. |
| abnormaldetect | String | No | — | Detects anomalies in a time series using AI training. See Parameter: abnormaldetect. |
A single request supports a maximum of 200 subqueries.
If both tags and filters are specified, the parameter that appears later in the JSON takes effect.
Example request
POST /api/query

{
"start": 1356998400,
"end": 1356998460,
"queries": [
{
"aggregator": "sum",
"metric": "sys.cpu.0"
},
{
"aggregator": "sum",
"metric": "sys.cpu.1"
}
]
}
Response elements
A successful request returns HTTP 200 with a JSON array. Each element corresponds to one subquery result.
| Parameter | Description |
|---|---|
| metric | The metric name. |
| tags | The tags whose values were not aggregated. |
| aggregateTags | The tags whose values were aggregated. |
| dps | The data points as timestamp-value pairs. |
Example response
[
{
"metric": "tsd.hbase.puts",
"tags": {"appName": "hitsdb"},
"aggregateTags": ["host"],
"dps": {
"1365966001": 25595461080,
"1365966061": 25595542522,
"1365966062": 25595543979,
"1365973801": 25717417859
}
}
]
Parameter: limit and offset
Use limit and offset together to paginate results within a subquery.
- limit: the maximum number of data points to return per time series. Default 0 means no limit.
- offset: the number of data points to skip per time series. Default 0 means no data points are skipped.
Neither limit nor offset can be set to a negative number.
To retrieve data points ranked 1001 to 1500, set limit to 500 and offset to 1000:
{
"start": 1346046400,
"end": 1347056500,
"queries": [
{
"aggregator": "avg",
"downsample": "2s-sum",
"metric": "sys.cpu.0",
"limit": 500,
"offset": 1000,
"tags": {
"host": "localhost",
"appName": "hitsdb"
}
}
]
}
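The arithmetic behind this example can be captured in a small helper. This is a hypothetical convenience function for illustration, not part of any TSDB client library: for a 1-based rank range, offset skips the preceding points and limit caps the page size.

```python
def page_params(first_rank: int, last_rank: int) -> dict:
    """Map a 1-based rank range to limit/offset subquery values.
    Hypothetical helper for illustration."""
    return {
        "limit": last_rank - first_rank + 1,  # number of points in the page
        "offset": first_rank - 1,             # number of points to skip first
    }
```

`page_params(1001, 1500)` returns `{"limit": 500, "offset": 1000}`, matching the example above.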
Parameter: dpValue
Filters data points by value after aggregation. Supported operators: >, <, =, <=, >=, !=.
When the filter value is a string, only = and != are supported.
Example
{
"start": 1346046400,
"end": 1347056500,
"queries": [
{
"aggregator": "avg",
"downsample": "2s-sum",
"metric": "sys.cpu.0",
"dpValue": ">=500",
"tags": {
"host": "localhost",
"appName": "hitsdb"
}
}
]
}
Parameter: delta
When delta is set to true, each value in the dps response is the calculated delta between consecutive data points. If the original result contains n key-value pairs, the delta result contains n−1 pairs—the first pair is dropped because no previous value exists to calculate against. The delta operator also applies to values after downsampling.
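The n−1 behavior can be sketched as follows. TSDB performs this calculation server-side; this snippet only illustrates the semantics described above:

```python
def apply_delta(dps: dict) -> dict:
    """Compute consecutive differences over a dps map, mimicking
    delta=true. Returns n-1 pairs: the first timestamp is dropped
    because it has no predecessor."""
    items = sorted(dps.items(), key=lambda kv: int(kv[0]))
    return {
        ts: value - items[i - 1][1]   # Vt - Vt-1
        for i, (ts, value) in enumerate(items)
        if i > 0
    }
```

For instance, `apply_delta({"10": 5, "20": 8, "30": 6})` produces `{"20": 3, "30": -2}`: three input pairs yield two deltas.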
deltaOptions
Configure deltaOptions in the subquery to control how deltas are calculated.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| counter | Boolean | No | false | Treats metric values as monotonically increasing or decreasing cumulative counts (similar to a counter). The server does not validate the metric values. |
| counterMax | Integer | No | — | The threshold for delta values when counter is true. If the absolute delta exceeds this threshold, the delta is considered abnormal. When not set, no threshold is applied. |
| dropReset | Boolean | No | false | Controls the behavior for abnormal deltas when counterMax is set. If true, abnormal deltas are discarded. If false, abnormal deltas are reset to 0. |
Example
{
"start": 1346046400,
"end": 1347056500,
"queries": [
{
"aggregator": "none",
"downsample": "5s-avg",
"delta": true,
"deltaOptions": {
"counter": true,
"counterMax": 100
},
"metric": "sys.cpu.0",
"dpValue": ">=50",
"tags": {
"host": "localhost",
"appName": "hitsdb"
}
}
]
}
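The counterMax and dropReset behavior can be sketched as below. The semantics are assumed from the deltaOptions table above; this is not TSDB source code:

```python
def filter_counter_deltas(deltas: dict, counter_max=None, drop_reset=False) -> dict:
    """Apply the counterMax threshold to already-computed deltas.
    Assumed semantics: deltas whose absolute value exceeds counter_max
    are abnormal; they are discarded if drop_reset is true, else reset to 0."""
    if counter_max is None:
        return dict(deltas)  # no threshold configured: keep everything
    out = {}
    for ts, d in deltas.items():
        if abs(d) > counter_max:
            if not drop_reset:
                out[ts] = 0    # reset abnormal delta to 0
            # drop_reset=True: discard the point entirely
        else:
            out[ts] = d
    return out
```

With `counter_max=100`, a delta of 200 becomes 0 by default, or disappears from the result when `drop_reset=True`.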
Parameter: downsample
Downsampling aggregates data over a specified time interval, which is useful when querying data over long time ranges. The time range is divided into windows of the specified interval, and each returned timestamp marks the start of a window.
Format
<interval><units>-<aggregator>[-<fill policy>]
When downsample is specified, TSDB automatically extends the query range by one interval on both sides. For example, a range of [1346846401, 1346846499] with a 5-minute interval becomes [1346846101, 1346846799].
Fields
`interval`
A numeric value such as 5 or 60. Use 0all to aggregate all data points in the range into a single value.
`units`
| Value | Unit |
|---|---|
| s | Seconds |
| m | Minutes |
| h | Hours |
| d | Days |
| n | Months |
| y | Years |
By default, timestamps are aligned using modulo truncation: Aligned timestamp = Data timestamp − (Data timestamp % Time interval).
To align by calendar interval (for example, from 00:00 to 00:00 of the next day), append c to the unit: 1dc.
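The default modulo-truncation alignment is a one-line calculation. A minimal sketch of the formula above, with the interval expressed in the same unit as the timestamp:

```python
def align_timestamp(ts: int, interval: int) -> int:
    """Modulo-truncation alignment: the start of the window that
    contains ts. Illustrative sketch, not TSDB code."""
    return ts - (ts % interval)
```

For a 5-minute interval (300 seconds), `align_timestamp(1346846437, 300)` returns `1346846400`, the start of the containing window.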
`aggregator`
| Operator | Description |
|---|---|
| avg | Average value |
| count | Number of data points |
| first | First value |
| last | Last value |
| min | Minimum value |
| max | Maximum value |
| median | Median value |
| sum | Sum of values |
| zimsum | Sum of values (zero-interpolated) |
| rfirst | Same as first, but returns the original timestamp instead of the aligned timestamp |
| rlast | Same as last, but returns the original timestamp instead of the aligned timestamp |
| rmin | Same as min, but returns the original timestamp instead of the aligned timestamp |
| rmax | Same as max, but returns the original timestamp instead of the aligned timestamp |
The rfirst, rlast, rmin, and rmax operators cannot be used with a fill policy in the same downsampling expression.
Fill policies
Fill policies define how TSDB fills missing values in downsampled results. When no data exists in a time window, TSDB applies the fill policy to produce a value.
| Fill policy | Fills missing values with |
|---|---|
| none | No fill (default) |
| nan | NaN |
| null | null |
| zero | 0 |
| linear | A value calculated by linear interpolation |
| previous | The previous value |
| near | The nearest adjacent value |
| after | The next value |
| fixed | A user-specified fixed value |
Fixed fill policy
To fill missing values with a fixed number, use the format:
<interval><units>-<aggregator>-fixed#<number>
The fixed value can be positive or negative. Examples: 1h-sum-fixed#6, 1h-avg-fixed#-8.
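A downsample expression can be split into its parts as sketched below. This is an illustrative parser for the format above; it does not validate every unit or aggregator name:

```python
def parse_downsample(expr: str):
    """Split <interval><units>-<aggregator>[-<fill policy>] into
    (interval, units, aggregator, fill, fixed_value)."""
    parts = expr.split("-", 2)            # maxsplit keeps "fixed#-8" intact
    spec, aggregator = parts[0], parts[1]
    fill = parts[2] if len(parts) > 2 else "none"
    fixed_value = None
    if fill.startswith("fixed#"):
        fixed_value = float(fill[len("fixed#"):])
        fill = "fixed"
    i = 0
    while i < len(spec) and spec[i].isdigit():
        i += 1                            # separate numeric interval from unit
    return int(spec[:i]), spec[i:], aggregator, fill, fixed_value
```

Note the `maxsplit=2` in the first split: it keeps a negative fixed value such as `fixed#-8` in one piece instead of splitting on its minus sign.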
Downsampling examples
1m-avg, 1h-sum-zero, 1h-sum-near
downsample is optional. To disable downsampling explicitly, set it to null or an empty string: {"downsample": null} or {"downsample": ""}.
Parameter: aggregator
After downsampling, TSDB merges multiple timelines into one by aggregating values at each aligned timestamp. Aggregation is skipped when only one timeline matches the query.
Interpolation
When aggregating multiple timelines, a timeline missing a value at a given timestamp receives an interpolated value—provided no fill policy is specified and at least one other timeline has a value at that timestamp.
For example, merging two timelines with {"downsample": "10s-avg", "aggregator": "sum"}:
- Timeline 1 has values at t+0, t+10, t+20, t+30.
- Timeline 2 has values at t+0, t+20, t+30.
Before aggregation, TSDB interpolates a value for Timeline 2 at t+10. The interpolation method depends on the aggregator:
| Operator | Description | Interpolation method |
|---|---|---|
| avg | Average value | Linear interpolation |
| count | Number of data points | 0 interpolated |
| mimmin | Minimum value | Maximum value interpolated |
| mimmax | Maximum value | Minimum value interpolated |
| min | Minimum value | Linear interpolation |
| max | Maximum value | Linear interpolation |
| none | Skips aggregation | 0 interpolated |
| sum | Sum of values | Linear interpolation |
| zimsum | Sum of values | 0 interpolated |
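Linear interpolation between the two neighboring points of a gap works as in this sketch (an illustration of the method, not TSDB code):

```python
def linear_interpolate(p0: tuple, p1: tuple, ts: int) -> float:
    """Interpolate a missing value at ts between two known
    (timestamp, value) points, as avg/min/max/sum do when a
    timeline is missing a value at an aligned timestamp."""
    (t0, v0), (t1, v1) = p0, p1
    return v0 + (v1 - v0) * (ts - t0) / (t1 - t0)
```

In the two-timeline example above, Timeline 2's value at t+10 would be interpolated from its values at t+0 and t+20 before the sum aggregator runs.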
Parameter: filters
The filters parameter accepts a list of JSON filter objects for tag-based filtering. It is mutually exclusive with the tags parameter.
Filter object parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| type | String | Yes | — | The filter type. See Filter types. Example: literal_or. |
| tagk | String | Yes | — | The tag key to filter on. Example: host. |
| filter | String | Yes | — | The filter expression. Example: web01\|web02. |
| groupBy | Boolean | No | false | Specifies whether to perform a GROUP BY operation on the matching tag values. |
You can also use shorthand tag expressions directly in the subquery:
- tagk = *: performs a GROUP BY on all values of the tag key.
- tagk = tagv1|tagv2: aggregates tagv1 values and tagv2 values separately.
Filter types
| Filter type | Example | Description |
|---|---|---|
| literal_or | web01\|web02 | Matches exact tag values separated by \|. Case-sensitive. |
| wildcard | *.example.com | Matches tag values using a wildcard pattern. Case-sensitive. |
Example without filters
{
"start": 1356998400,
"end": 1356998460,
"queries": [
{
"aggregator": "sum",
"metric": "sys.cpu.0",
"rate": true,
"tags": {
"host": "*",
"dc": "lga"
}
}
]
}
Example with filters
{
"start": 1356998400,
"end": 1356998460,
"queries": [
{
"aggregator": "sum",
"metric": "sys.cpu.0",
"rate": true,
"filters": [
{
"type": "wildcard",
"tagk": "host",
"filter": "*",
"groupBy": true
},
{
"type": "literal_or",
"tagk": "dc",
"filter": "lga|lga1|lga2",
"groupBy": false
}
]
}
]
}
Parameter: hint
A query hint reduces response time by controlling which tag-key indexes TSDB uses during query execution. For example, if the time series matched by tag set B are a subset of those matched by tag set A, you can hint TSDB to use only the index for B—avoiding the overhead of reading from A.
The hint parameter can be set at the top level (applies to the entire query) or inside a subquery (applies to that subquery only).
The current TSDB version supports only the tagk parameter in a hint.
Format
In the tagk map, set each tag key to 1 to use its index, or 0 to skip it. All values must be either 0 or 1—mixing both in the same hint causes an error.
Version requirement: TSDB V2.6.1 and later.
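The all-0-or-all-1 rule can be checked on the client before the request is sent. This is a hypothetical helper mirroring the rule above, not part of any TSDB client library:

```python
def validate_hint_tagk(tagk: dict) -> None:
    """Raise ValueError if the hint's tagk map violates the rule:
    every value must be 0 or 1, and 0 and 1 must not be mixed."""
    values = set(tagk.values())
    if not values <= {0, 1}:
        raise ValueError("hint values must be 0 or 1")
    if values == {0, 1}:
        raise ValueError("hint must not mix 0 and 1")
```

`{"dc": 1}` passes, while `{"dc": 1, "host": 0}` and `{"dc": 100}` both raise, matching the server-side error cases shown below.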
Hint applied to a subquery
{
"start": 1346846400,
"end": 1346846400,
"queries": [
{
"aggregator": "none",
"metric": "sys.cpu.nice",
"tags": {
"dc": "lga",
"host": "web01"
},
"hint": {
"tagk": {
"dc": 1
}
}
}
]
}
Hint applied to the entire query
{
"start": 1346846400,
"end": 1346846400,
"queries": [
{
"aggregator": "none",
"metric": "sys.cpu.nice",
"tags": {
"dc": "lga",
"host": "web01"
}
}
],
"hint": {
"tagk": {
"dc": 1
}
}
}
Error cases
Mixed `0` and `1` values
The following request causes an error because dc is 1 and host is 0:
{
"start": 1346846400,
"end": 1346846400,
"queries": [
{
"aggregator": "none",
"metric": "sys.cpu.nice",
"tags": {
"dc": "lga",
"host": "web01"
}
}
],
"hint": {
"tagk": {
"dc": 1,
"host": 0
}
}
}
Error response:
{
"error": {
"code": 400,
"message": "The value of hint should only be 0 or 1, and there should not be both 0 and 1",
"details": "TSQuery(start_time=1346846400, end_time=1346846400, subQueries[TSSubQuery(metric=sys.cpu.nice, filters=[filter_name=literal_or, tagk=dc, literals=[lga], group_by=true, filter_name=literal_or, tagk=host, literals=[web01], group_by=true], tsuids=[], agg=none, downsample=null, ds_interval=0, rate=false, rate_options=null, delta=false, delta_options=null, top=0, granularity=null, granularityDownsample=null, explicit_tags=explicit_tags, index=0, realTimeSeconds=-1, useData=auto, limit=0, offset=0, dpValue=null, preDpValue=null, startTime=1346846400000, endTime=1346846400000, Query_ID=null)] padding=false, no_annotations=false, with_global_annotations=false, show_tsuids=false, ms_resolution=false, options=[])"
}
}
Value other than `0` or `1`
{
"start": 1346846400,
"end": 1346846400,
"queries": [
{
"aggregator": "none",
"metric": "sys.cpu.nice",
"tags": {
"dc": "lga",
"host": "web01"
}
}
],
"hint": {
"tagk": {
"dc": 100
}
}
}
Error response:
{
"error": {
"code": 400,
"message": "The value of hint can only be 0 or 1, and it is detected that '100' is passed in",
"details": "TSQuery(start_time=1346846400, end_time=1346846400, subQueries[TSSubQuery(metric=sys.cpu.nice, filters=[filter_name=literal_or, tagk=dc, literals=[lga], group_by=true, filter_name=literal_or, tagk=host, literals=[web01], group_by=true], tsuids=[], agg=none, downsample=null, ds_interval=0, rate=false, rate_options=null, delta=false, delta_options=null, top=0, granularity=null, granularityDownsample=null, explicit_tags=explicit_tags, index=0, realTimeSeconds=-1, useData=auto, limit=0, offset=0, dpValue=null, preDpValue=null, startTime=1346846400000, endTime=1346846400000, Query_ID=null)] padding=false, no_annotations=false, with_global_annotations=false, show_tsuids=false, ms_resolution=false, options=[])"
}
}
Parameter: forecasting
Uses AI training on historical time series data to forecast future data points. TSDB trains on the existing data to identify trends and cycles, then projects values forward.
Format
<AlgorithmName>-<ForecastPointCount>[-<ForecastPolicy>]
Supported algorithms
- `arima`: AutoRegressive Integrated Moving Average (ARIMA). The ForecastPolicy for ARIMA has two fields:
  - delta: the difference order. Default: 1. Increase to reduce data fluctuations.
  - seasonality: the cycle length in number of data points. Default: 1. Set to match the data's natural cycle (for example, 10 if the data fluctuates every 10 data points).
- `holtwinters`: Holt-Winters exponential smoothing. The ForecastPolicy for Holt-Winters has one field:
  - seasonality: same as ARIMA. Default: 1.

Examples: arima-1, arima-48-1-48, holtwinters-1-1
Example
Given existing data in the series:
[
{
"metric": "sys.cpu.nice",
"tags": {"dc": "lga", "host": "web00"},
"aggregateTags": [],
"dps": {
"1346837400": 1,
"1346837401": 2,
"1346837402": 3,
"1346837403": 4,
"1346837404": 5,
"1346837405": 6,
"1346837406": 7,
"1346837407": 8,
"1346837408": 9,
"1346837409": 10,
"1346837410": 11,
"1346837411": 12
}
}
]
Query:
{
"start": 1346837400,
"end": 1346847400,
"queries": [
{
"aggregator": "none",
"metric": "sys.cpu.nice",
"forecasting": "arima-1"
}
]
}
Forecast result (one additional data point is appended):
[
{
"metric": "sys.cpu.nice",
"tags": {"dc": "lga", "host": "web00"},
"aggregateTags": [],
"dps": {
"1346837400": 1,
"1346837401": 2,
"1346837402": 3,
"1346837403": 4,
"1346837404": 5,
"1346837405": 6,
"1346837406": 7,
"1346837407": 8,
"1346837408": 9,
"1346837409": 10,
"1346837410": 11,
"1346837411": 12,
"1346837412": 13
}
}
]
Parameter: abnormaldetect
Uses the STL (Seasonal-Trend decomposition using Loess) algorithm to detect anomalies in a time series. TSDB trains on existing data to identify patterns, then flags data points that deviate significantly from the expected range.
Format
<AlgorithmName>[-<Sigma>-<NP>-<NS>-<NT>-<NL>]
Only the STL algorithm is supported. If you are not familiar with STL parameter tuning, use the default values by omitting the parameter fields.
Parameters
| Parameter | Description |
|---|---|
| AlgorithmName | The algorithm name. Use stl. |
| Sigma | The anomaly threshold. A data point is flagged as abnormal if the absolute difference between its value and the series average exceeds Sigma × standard deviation. Typical value: 3.0. |
| NP | The number of data points per cycle. |
| NS | The seasonal smoothing parameter. |
| NT | The trend smoothing parameter. |
| NL | The low-pass filter smoothing parameter. |
Examples
"abnormaldetect": "stl"
"abnormaldetect": "stl-5-5-7-0-0"
Query example
{
"start": 1346836400,
"end": 1346946400,
"queries": [
{
"aggregator": "none",
"metric": "sys.cpu.nice",
"abnormaldetect": "stl-5-5-7-0-0",
"filters": [
{
"type": "literal_or",
"tagk": "dc",
"filter": "lga",
"groupBy": false
},
{
"type": "literal_or",
"tagk": "host",
"filter": "web00",
"groupBy": false
}
]
}
]
}
Anomaly detection output
Each value in dps is an array in the format:
[srcValue, upperValue, lowerValue, predictValue, isAbnormal]
| Field | Description |
|---|---|
| srcValue | The original data point value. |
| upperValue | The upper bound of the expected range. |
| lowerValue | The lower bound of the expected range. |
| predictValue | The value predicted by the STL algorithm. |
| isAbnormal | 0 means normal; 1 means abnormal. |
Example output
[
{
"metric": "sys.cpu.nice",
"tags": {"dc": "lga", "host": "web00"},
"aggregateTags": [],
"dps": {
"1346837400": [1, 1.0000000000000049, 0.9999999999999973, 1.0000000000000013, 0],
"1346837401": [2, 2.0000000000000036, 1.9999999999999958, 1.9999999999999998, 0],
"1346837402": [3, 3.0000000000000036, 2.9999999999999956, 3, 0],
"1346837403": [4, 4.0000000000000036, 3.9999999999999956, 4, 1],
"1346837404": [5, 5.0000000000000036, 4.9999999999999964, 5, 0],
"1346837405": [6, 6.000000000000002, 5.999999999999995, 5.999999999999998, 0],
"1346837406": [7, 7.0000000000000036, 6.9999999999999964, 7, 1],
"1346837407": [8, 8.000000000000004, 7.9999999999999964, 8, 0],
"1346837408": [9, 9.000000000000004, 8.999999999999996, 9, 0],
"1346837409": [10, 10.000000000000004, 9.999999999999996, 10, 0],
"1346837410": [11, 11.000000000000005, 10.999999999999998, 11.000000000000002, 0],
"1346837411": [12, 12.000000000000004, 11.999999999999996, 12, 0]
}
}
]
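A client can pick the flagged points out of such a response with a small amount of parsing. This sketch assumes the five-element array layout documented above:

```python
def abnormal_points(dps: dict) -> dict:
    """Extract the data points flagged abnormal (isAbnormal == 1)
    from an abnormaldetect dps map, as timestamp -> srcValue.
    Illustrative parsing, not part of any TSDB client library."""
    return {ts: arr[0] for ts, arr in dps.items() if arr[4] == 1}
```

Applied to the example output above, it would return the two flagged points at timestamps 1346837403 and 1346837406.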