
Time Series Database: From collection to analysis: Generate value from your data with TSDB for InfluxDB®

Last Updated: Mar 30, 2026

TSDB for InfluxDB® is a managed time series database service on Alibaba Cloud, compatible with the InfluxDB API. It handles large volumes of write and query requests, and stores DevOps monitoring metrics, application metrics, and Internet of Things (IoT) sensor data in real time.

Key capabilities:

  • High-performance time series storage using the Time-Structured Merge Tree (TSM) engine, with fast read/write and efficient compression

  • HTTP-based API for writing and querying data

  • SQL-like query language for aggregating time series data

  • Tag-based indexes for faster queries

  • Retention policies to automatically expire old data

Data model

Data in TSDB for InfluxDB® is organized around three core concepts:

  • Measurement: A logical container for time series data, analogous to a table in a relational database.

  • Tag: Indexed metadata attached to a data point (for example, host=server01). Use tags for fields you filter or group by frequently.

  • Field: The actual measured value (for example, value=0.64). Fields are not indexed.

A single data record is called a point, and consists of a measurement, zero or more tags, one or more fields, and a timestamp.

Data is written using the Line Protocol format:

<measurement>[,<tag-key>=<tag-value>...] <field-key>=<field-value>[,...] [timestamp]

Examples:

cpu,host=serverA,region=us_west value=0.64
cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000
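The assembly rules above can be sketched as a small helper. `buildLine` is an illustrative name, not part of any SDK, and for simplicity it handles only float fields:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// buildLine assembles a Line Protocol record from a measurement, tags,
// float fields, and an optional nanosecond timestamp (pass 0 to omit it).
func buildLine(measurement string, tags map[string]string, fields map[string]float64, ts int64) string {
	var b strings.Builder
	b.WriteString(measurement)

	// Tags follow the measurement, comma-separated and sorted by key.
	tagKeys := make([]string, 0, len(tags))
	for k := range tags {
		tagKeys = append(tagKeys, k)
	}
	sort.Strings(tagKeys)
	for _, k := range tagKeys {
		fmt.Fprintf(&b, ",%s=%s", k, tags[k])
	}

	// Fields are separated from the tag set by a space.
	b.WriteString(" ")
	fieldKeys := make([]string, 0, len(fields))
	for k := range fields {
		fieldKeys = append(fieldKeys, k)
	}
	sort.Strings(fieldKeys)
	parts := make([]string, 0, len(fieldKeys))
	for _, k := range fieldKeys {
		parts = append(parts, fmt.Sprintf("%s=%g", k, fields[k]))
	}
	b.WriteString(strings.Join(parts, ","))

	// The optional timestamp comes last, also space-separated.
	if ts != 0 {
		fmt.Fprintf(&b, " %d", ts)
	}
	return b.String()
}

func main() {
	fmt.Println(buildLine("cpu",
		map[string]string{"host": "serverA", "region": "us_west"},
		map[string]float64{"value": 0.64}, 0))
	// → cpu,host=serverA,region=us_west value=0.64
}
```

If the timestamp is omitted, the server assigns its own receive time to the point.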

Prerequisites

Before you begin, make sure you have:

  • A TSDB for InfluxDB® instance. After purchase, note the Virtual Private Cloud (VPC) endpoint and public endpoint on the instance details page.

  • A database account with read and write permissions. Go to Instance Management > Account Management to create accounts and manage permissions.

  • A database and retention policy configured. If you skip the retention policy, the autogen default is used.
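For example, a database and a custom retention policy can be created from the Influx CLI with standard InfluxQL statements (the database and policy names below are placeholders):

```sql
CREATE DATABASE "test"
CREATE RETENTION POLICY "one_month" ON "test" DURATION 30d REPLICATION 1 DEFAULT
```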

Important

All connections must use HTTPS. The service listens on port 3242. The endpoint format is xxx.influxdata.rds.aliyuncs.com.

Note The public endpoint must be enabled at purchase time if you need to connect over the Internet. A VPC endpoint is available for connections within the same VPC.
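You can verify connectivity before writing any data by calling the standard InfluxDB /ping health-check endpoint, which returns 204 No Content when the instance is reachable. A minimal sketch, assuming the placeholder endpoint from above:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// healthURL builds the /ping health-check URL for an instance endpoint.
// The service only accepts HTTPS on port 3242.
func healthURL(host string) string {
	return fmt.Sprintf("https://%s:3242/ping", host)
}

func main() {
	// Replace the placeholder with your VPC or public endpoint.
	url := healthURL("xxx.influxdata.rds.aliyuncs.com")
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url) // /ping returns 204 when the instance is healthy
	if err != nil {
		fmt.Println("connection failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode)
}
```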

Write data

TSDB for InfluxDB® supports multiple write methods. Choose the one that fits your workflow:

Method            Best for
Telegraf          Collecting system and application metrics automatically
Go SDK            Programmatic writes from Go applications
Java SDK          Programmatic writes from Java applications
Influx CLI        Interactive testing and ad-hoc writes
HTTP API (curl)   Scripting and quick integrations

Collect metrics with Telegraf

Telegraf is an agent written in Go that collects system and service metrics and writes them to TSDB for InfluxDB® over HTTP. The Telegraf configuration file requires at least one input plugin (where metrics come from) and one output plugin (where metrics go).

Step 1: Install Telegraf

Follow the Telegraf installation guide for your operating system.

Step 2: Configure the output plugin

In your Telegraf configuration file, set up the [[outputs.influxdb]] section to write to TSDB for InfluxDB®:

[[outputs.influxdb]]
  urls = ["https://xxx.influxdata.rds.aliyuncs.com:3242"]
  database = "test"
  retention_policy = ""   # Leave blank to use the default retention policy
  username = "test"
  password = "test"

Replace xxx.influxdata.rds.aliyuncs.com with your VPC or public endpoint from the instance details page. Set username and password to an account that has write permissions on the database.

Step 3: Start Telegraf

Run the command for your operating system:

  • Ubuntu, Debian, RedHat, or CentOS:

    sudo service telegraf start
  • Ubuntu 15.04+, Debian 8+, CentOS 7+, or RHEL 7+:

    sudo systemctl start telegraf

Write data with the Go SDK

Download the Go SDK from GitHub. For Go installation instructions, see the Go documentation.

Connect over HTTPS using the instance endpoint and credentials, then write data in batches. Write 100–1,000 points per batch based on your data size to maximize throughput.

package main

import (
    "fmt"
    "log"
    "math/rand"
    "net/url"
    "time"

    client "github.com/influxdata/influxdb1-client"
)

func main() {
    // Connect to the instance endpoint over HTTPS
    host, err := url.Parse(fmt.Sprintf("https://%s:%d", "xxx.influxdata.rds.aliyuncs.com", 3242))
    if err != nil {
        log.Fatal(err)
    }
    config := client.Config{
        URL:      *host,
        Username: "test",
        Password: "test",
    }
    con, err := client.NewClient(config)
    if err != nil {
        log.Fatal(err)
    }

    _, _, err = con.Ping()
    if err != nil {
        log.Fatal(err)
    }

    // Build a batch of 1,000 points
    var (
        shapes     = []string{"circle", "rectangle", "square", "triangle"}
        colors     = []string{"red", "blue", "green"}
        sampleSize = 1000
        pts        = make([]client.Point, sampleSize)
    )

    rand.Seed(42)
    for i := 0; i < sampleSize; i++ {
        pts[i] = client.Point{
            Measurement: "shapes",
            Tags: map[string]string{
                "color": colors[rand.Intn(len(colors))],
                "shape": shapes[rand.Intn(len(shapes))],
            },
            Fields: map[string]interface{}{
                "value": rand.Intn(sampleSize),
            },
            Time: time.Now(),
        }
    }

    // Write the batch to the target database
    bps := client.BatchPoints{
        Points:          pts,
        Database:        "test",
        RetentionPolicy: "autogen",
    }
    _, err = con.Write(bps)
    if err != nil {
        log.Fatal(err)
    }
}

To compile and run:

go run main.go

Write data with the Java SDK

Download the Java SDK from GitHub.

package main;

import java.util.concurrent.TimeUnit;
import org.influxdb.InfluxDB;
import org.influxdb.BatchOptions;
import org.influxdb.InfluxDBFactory;
import org.influxdb.dto.Point;

public class StartMain {

    public static void main(String[] args) throws Exception {
        StartMain startMain = new StartMain();
        try {
            startMain.run();
        } catch (Exception e) {
            System.out.println(e);
        }
    }

    public void run() throws Exception {
        // Connect to the instance endpoint over HTTPS
        InfluxDB influxDB = InfluxDBFactory.connect(
            "https://xxx.influxdata.rds.aliyuncs.com:3242", "test", "test");

        String dbName = "test";
        influxDB.setDatabase(dbName);
        String rpName = "autogen";
        influxDB.setRetentionPolicy(rpName);
        influxDB.enableBatch(BatchOptions.DEFAULTS);

        // Write CPU metrics
        influxDB.write(Point.measurement("cpu")
            .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
            .addField("idle", 90L)
            .addField("user", 9L)
            .addField("system", 1L)
            .build());

        // Write disk metrics
        influxDB.write(Point.measurement("disk")
            .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
            .addField("used", 80L)
            .addField("free", 1L)
            .build());

        influxDB.close();
    }
}

To handle write errors, pass a custom exception handler to enableBatch():

influxDB.enableBatch(BatchOptions.DEFAULTS.exceptionHandler(
    (failedPoints, throwable) -> { /* custom error handling here */ })
);

Write data with the Influx CLI

Connect using the -ssl flag along with your host, port, username, and password:

influx -ssl -host xxx.influxdata.rds.aliyuncs.com -port 3242 -username admin -password admin

After connecting, you can write data to TSDB for InfluxDB® as described in the Line Protocol tutorial. For example, to write a single point:

> INSERT cpu,host=serverA,region=us_west value=0.64

Write data with the HTTP API

Use curl to post data in Line Protocol format directly to the write endpoint:

curl -i -XPOST -u test:test \
  'https://xxx.influxdata.rds.aliyuncs.com:3242/write?db=test' \
  --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'

Migrate data from self-managed InfluxDB

If you have InfluxDB instances running in self-managed environments or on other platforms, use the migration tool provided by TSDB for InfluxDB® to move your data.

The migration tool automatically creates the destination database. Do not create the destination database manually before running the migration — doing so can cause write conflicts.

After migration, TSDB for InfluxDB® manages the data so you can write, query, and analyze it using the same interfaces described above.

Query and analyze data

TSDB for InfluxDB® supports SQL-like queries. Run queries through the Influx CLI or the HTTP API.

The following example groups data by the color tag and calculates the mean value for each group:

SELECT MEAN("value") FROM "shapes" GROUP BY "color"

Output:

name: shapes
tags: color=blue
time  mean
----  ----
0     216.25

name: shapes
tags: color=green
time  mean
----  ----
0     434.25

name: shapes
tags: color=red
time  mean
----  ----
0     540.25
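The same query can also be issued over the HTTP API by sending an authenticated GET request to the /query endpoint with the database and InfluxQL statement URL-encoded. A minimal sketch, using the placeholder endpoint from earlier (`queryURL` is an illustrative helper, not an SDK function):

```go
package main

import (
	"fmt"
	"net/url"
)

// queryURL builds the /query endpoint URL with the database name and
// InfluxQL statement URL-encoded as the db and q parameters.
func queryURL(host, db, q string) string {
	v := url.Values{}
	v.Set("db", db)
	v.Set("q", q)
	return fmt.Sprintf("https://%s:3242/query?%s", host, v.Encode())
}

func main() {
	u := queryURL("xxx.influxdata.rds.aliyuncs.com", "test",
		`SELECT MEAN("value") FROM "shapes" GROUP BY "color"`)
	fmt.Println(u)
	// An authenticated GET to this URL (curl -u user:password, or
	// req.SetBasicAuth in Go) returns the query results as JSON.
}
```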

InfluxDB® is a trademark registered by InfluxData, which is not affiliated with, and does not endorse, TSDB for InfluxDB®.