Elasticsearch: Overview of APIs supported by aliyun-timestream

Last Updated: Mar 26, 2026

The aliyun-timestream plug-in extends Alibaba Cloud Elasticsearch with APIs for managing the full lifecycle of time series indexes — create, update, delete, and query indexes, write time series data, and query stored metrics.

For background on the plug-in and its capabilities, see Overview of aliyun-timestream.

Prerequisites

Before you begin, ensure that you have:

  • An Alibaba Cloud Elasticsearch cluster that meets one of the following version requirements:

    • Cluster version V7.10, kernel version V1.8.0 or later

    • Cluster version V7.16 or later, kernel version V1.7.0 or later

For instructions on creating a cluster, see Create an Alibaba Cloud Elasticsearch cluster.

Create a time series index

Request syntax

PUT _time_stream/{name}

To apply a custom index template, include it in the request body:

PUT _time_stream/{name}
{
  --- index template ---
}

Usage notes

  • You do not need to configure an index pattern. Specify an exact name for the index. Wildcards are not supported in the name.

  • Leave the request body blank to use the default template, or include a custom index template. For the request body format, see Index templates in the Elasticsearch documentation.
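For instance, to create a time series index with the default template, send the request with an empty body (the index name test_stream2 is illustrative):

PUT _time_stream/test_stream2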

Example

Request:

PUT _time_stream/test_stream
{
  "template": {
    "settings": {
      "index.number_of_shards": "10"   // Override index settings (optional)
    }
  }
}

Response:

{
  "acknowledged" : true
}

To verify the index was created, query it:

GET _time_stream/test_stream

Update the configurations of a time series index

Request syntax

POST _time_stream/{name}/_update

To apply a custom index template, include it in the request body:

POST _time_stream/{name}/_update
{
  --- index template ---
}

Usage notes

  • The request body format is the same as for Create a time series index.

  • Updated configurations do not take effect immediately. Roll over the time series index for changes to take effect.
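For example, assuming the underlying data stream supports the standard Elasticsearch rollover API, you can trigger a rollover manually after the update so the new configuration takes effect (the index name is illustrative):

POST test_stream/_rollover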

Example

Request:

POST _time_stream/test_stream/_update
{
  "template": {
    "settings": {
      "index.number_of_shards": "10"
    }
  }
}

Response:

{
  "acknowledged" : true
}

Delete a time series index

Warning

Deleting a time series index permanently deletes all data stored in it. Confirm that the deletion will not affect your business before proceeding.

Request syntax

DELETE _time_stream/{name}

Usage notes

  • Use a wildcard to match and delete multiple indexes at once.

  • Specify multiple index names separated by commas (,) to delete them in a single request.
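For example, the following requests (with illustrative index names) delete two indexes in a single call and all indexes whose names start with test, respectively:

DELETE _time_stream/test_stream,prom_index

DELETE _time_stream/test*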

Example

Request:

DELETE _time_stream/test_stream

Response:

{
  "acknowledged" : true
}

Query time series indexes

Request syntax

Query all time series indexes:

GET _time_stream

Query specific time series indexes:

GET _time_stream/{name}

Usage notes

  • Use a wildcard to match multiple indexes by name pattern.

  • Specify multiple index names separated by commas (,) to query them in a single request.
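For example, the following requests (with illustrative index names) query two indexes in a single call and all indexes whose names start with test, respectively:

GET _time_stream/test_stream,prom_index

GET _time_stream/test*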

Example

Request:

GET _time_stream

Response:

{
  "time_streams" : {
    "test_stream" : {
      "name" : "test_stream",
      "datastream_name" : "test_stream",
      "template_name" : ".timestream_test_stream",
      "template" : {
        "index_patterns" : [
          "test_stream"
        ],
        "template" : {
          "settings" : {
            "index" : {
              "number_of_shards" : "10"
            }
          }
        },
        "composed_of" : [
          ".system.timestream.template"
        ],
        "data_stream" : {
          "hidden" : true
        }
      }
    }
  }
}

Query metrics of time series indexes

Request syntax

Query metrics of all time series indexes:

GET _time_stream/_stats

Query metrics of a specific time series index:

GET _time_stream/{name}/_stats

Usage notes

The _stats endpoint returns metrics including time_stream_count, which indicates the total number of time series in the queried time series indexes.

How time_stream_count is calculated:

  1. The metric counts the number of time series on each primary shard. Each primary shard has its own distinct set of time series, so the total for an index is the sum across all primary shards.

  2. The response identifies the index with the largest number of time series.

Performance and caching: Counting time series reads doc values from the _tsid field, which generates high query costs. To reduce overhead:

  • For read-only indexes, the count is cached after the first query.

  • For other indexes, the cache refreshes every 5 minutes by default. Change this interval with the index.time_series.stats.refresh_interval parameter. The minimum value is 1 minute.
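For example, following the create request format shown earlier, the refresh interval can be set in the index template when you create the index (a sketch; test_stream and the 1m value, the documented minimum, are illustrative):

PUT _time_stream/test_stream
{
  "template": {
    "settings": {
      "index.time_series.stats.refresh_interval": "1m"
    }
  }
}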

Example

Request:

GET _time_stream/_stats

Response:

{
  "_shards" : {
    "total" : 4,
    "successful" : 4,
    "failed" : 0
  },
  "time_stream_count" : 2,
  "indices_count" : 2,
  "total_store_size_bytes" : 1278822,
  "time_streams" : [
    {
      "time_stream" : "test_stream",
      "indices_count" : 1,
      "store_size_bytes" : 31235,
      "tsidCount" : 1
    },
    {
      "time_stream" : "prom_index",
      "indices_count" : 1,
      "store_size_bytes" : 1247587,
      "tsidCount" : 317
    }
  ]
}

Write time series data to a time series index

aliyun-timestream uses standard Elasticsearch APIs — the bulk API and index API — to write data to a time series index.

Important

Data writes are append-only. You cannot update or delete data that has already been written to a time series index.

Data model

Each document written to a time series index must follow the time series data model:

  • labels: Dimension fields that uniquely identify a time series. Elasticsearch generates a time series ID (_tsid) from these fields.

  • metrics: Metric values. Must be of type LONG or DOUBLE.

  • @timestamp: The time when the metric data was collected. Defaults to a millisecond-precision Unix timestamp.

Example document:

{
  "labels": {
    "namespce": "cn-hanzhou",      // Dimension fields — uniquely identify the time series and generate _tsid
    "clusterId": "cn-xxx-xxxxxx",
    "nodeId": "node-xxx",
    "label": "test-cluster",
    "disk_type": "cloud_ssd",
    "cluster_type": "normal"
  },
  "metrics": {
    "cpu.idle": 10.0,              // Metric values — must be LONG or DOUBLE
    "mem.free": 100.1,
    "disk_ioutil": 5.2
  },
  "@timestamp": 1624873606000      // Millisecond-precision Unix timestamp
}

Configure dimension and metric fields

When creating a time series index, you can define custom dimension fields (labels_fields) and metric fields (metrics_fields). The plug-in automatically creates dynamic mappings and sets time_series_dimension: true on dimension fields. Metric fields store only doc values by default.

Wildcards (*) are supported in field name patterns.

Upload a single custom field pattern:

PUT _time_stream/{name}
{
  --- index template ---
  "time_stream": {
    "labels_fields": "@label.*",    // Dimension field pattern — sets time_series_dimension: true
    "metrics_fields": "@metrics.*"  // Metric field pattern — stores doc values only
  }
}

Upload multiple field patterns:

PUT _time_stream/{name}
{
  --- index template ---
  "time_stream": {
    "labels_fields": ["label.*", "dim*"],
    "metrics_fields": ["@metrics.*", "metrics.*"]
  }
}

Parameters:

  • labels_fields: Optional. Default: label.*. Field name patterns for dimension fields.

  • metrics_fields: Optional. Default: metrics.*. Field name patterns for metric fields.

Example

Request:

POST test_stream/_doc
{
  "labels": {
    "namespce": "cn-hanzhou",
    "clusterId": "cn-xxx-xxxxxx",
    "nodeId": "node-xxx",
    "label": "test-cluster",
    "disk_type": "cloud_ssd",
    "cluster_type": "normal"
  },
  "metrics": {
    "cpu.idle": 10,
    "mem.free": 100.1,
    "disk_ioutil": 5.2
  },
  "@timestamp": 1624873606000
}

Response:

{
  "_index" : ".ds-test_stream-2021.09.03-000001",
  "_id" : "suF_qnsBGKH6s8C_OuFS",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 0,
  "_primary_term" : 1
}

Query data in a time series index

aliyun-timestream uses standard Elasticsearch APIs — the search API and get API — to query data. The plug-in uses Prometheus Query Language (PromQL) statements instead of domain-specific language (DSL) statements to query stored metric data, which simplifies query operations and improves query efficiency.

Example

Request:

GET test_stream/_search

Response:

{
  "took" : 172,
  "timed_out" : false,
  "_shards" : {
    "total" : 10,
    "successful" : 10,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : ".ds-test_stream-2021.09.03-000001",
        "_id" : "suF_qnsBGKH6s8C_OuFS",
        "_score" : 1.0
      }
    ]
  }
}

Configure downsampling

Downsampling reduces storage costs for time series data by aggregating it at a lower resolution.

Configure a downsampling rule when creating a time series index. After configuration, reads and writes to the index work normally — downsampling runs automatically in the background at rollover time, acting on indexes that are no longer receiving writes.

After downsampling, metric field values are converted to the aggregate_metric_double type and split into four sub-fields: max, min, sum, and count. When you query the index, the plug-in automatically selects the appropriate downsampled data based on the interval parameter of the aggregation.
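For example, a standard date_histogram aggregation such as the following would be served from the downsampling tier that matches its fixed_interval, per the behavior described above (a sketch; the index name and the metrics.cpu.idle field path are illustrative, taken from the earlier write example):

GET test_stream/_search
{
  "size": 0,
  "aggs": {
    "per_10m": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "10m"
      },
      "aggs": {
        "avg_cpu": {
          "avg": { "field": "metrics.cpu.idle" }
        }
      }
    }
  }
}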

Each downsampled index inherits the settings of its source index by default. Override settings (such as the number of primary shards or an index lifecycle management (ILM) policy) in the downsampling rule itself.

Example configuration

PUT _time_stream/{name}
{
  "time_stream": {
    "downsample": [
      {
        "interval": "1m",           // Required — data is rolled up at this granularity
        "settings": {               // Optional — override settings for the downsampling index
          "index.lifecycle.name": "my-rollup-ilm-policy_60m",
          "index.number_of_shards": "1"
        }
      },
      {
        "interval": "10m"           // A second downsampling tier at 10-minute granularity
      }
    ]
  }
}

Downsampling parameters

  • interval: Required. The granularity at which data is rolled up. You can specify up to five intervals. If you specify more than one, the intervals must be multiples of one another, for example, 1m, 10m, and 60m.

  • settings: Optional. Settings for the downsampling index, such as the ILM policy and the number of primary shards.