LindormTSDB organizes data into databases and time series tables. This topic explains how to create both, and how to make the key decisions that affect query performance and storage costs.
Create a Lindorm time series database
Every Lindorm instance includes a database named default. Data written without specifying a database goes into default, which cannot be deleted.
Create additional databases to physically isolate datasets, or to apply different retention and storage settings per dataset. See CREATE DATABASE for the full syntax reference.
Each database supports three independent settings:
| Setting | Parameter | Description |
|---|---|---|
| Data TTL (time to live) | ttl | Number of days before data expires and is deleted. Omit to keep data indefinitely. |
| Partition interval | partition_interval | Time span covered by each partition. Default is 30 days. |
| Hot and cold data boundary | cold_boundary | Number of days after which data moves to cold storage. Must be less than ttl. |
Examples
Create a database with no expiry:
```sql
CREATE DATABASE DB1;
```
Create a database with a 60-day TTL:
```sql
CREATE DATABASE DB2 WITH (ttl=60);
```
Create a database with a 60-day TTL and a 30-day hot and cold data boundary:
```sql
CREATE DATABASE DB3 WITH (cold_boundary=30, ttl=60);
```
Both ttl and cold_boundary are measured in days, and cold_boundary must be smaller than ttl.
Partition a database by time
LindormTSDB stores each partition's time series data and indexes together. When you query a time range, only the partitions that overlap that range are scanned—the rest are skipped automatically.
If your workload continuously generates a large number of time series, we recommend partitioning your database by time so that no single partition accumulates an oversized index.
The default partition interval is 30 days. We recommend that you set the partition interval to 7 or 30 days:
| Workload | Recommended interval |
|---|---|
| High throughput or finer time-range pruning needed | 7 days |
| Typical workloads | 30 days |
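To make the pruning behavior concrete, here is a minimal Python sketch, not LindormTSDB's actual implementation, of how a time-range query maps to the partitions that must be scanned, assuming partitions are contiguous fixed-length intervals counted from a database epoch (the epoch date and function name are illustrative):

```python
from datetime import datetime, timedelta

def partitions_to_scan(query_start, query_end, epoch, interval_days):
    """Return the indices of partitions overlapping [query_start, query_end).

    Toy model of time-based partition pruning: partitions are contiguous
    fixed-length intervals counted from `epoch`; any partition entirely
    outside the query range is skipped.
    """
    interval = timedelta(days=interval_days)
    first = (query_start - epoch) // interval
    last = (query_end - timedelta(microseconds=1) - epoch) // interval
    return list(range(first, last + 1))

# A 10-day query over a database whose first write was on 2024-01-01:
epoch = datetime(2024, 1, 1)
start, end = datetime(2024, 2, 10), datetime(2024, 2, 20)
print(partitions_to_scan(start, end, epoch, 30))  # one 30-day partition
print(partitions_to_scan(start, end, epoch, 7))   # three 7-day partitions
```

The same 10-day query touches one 30-day partition but three 7-day partitions; the trade-off is that shorter intervals prune more precisely per query while producing more partitions (and per-partition indexes) overall.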
Create a database with a 30-day partition interval:
```sql
CREATE DATABASE DB4 WITH (partition_interval=30);
```
Change the partition interval of an existing database:
```sql
ALTER DATABASE DB4 WITH (partition_interval=60);
```
Changing the partition interval does not restructure existing partitions. LindormTSDB closes the current partition on the last day of active writes, then applies the new interval to all subsequent partitions. Existing partitions keep their original interval.

Example: DB4 starts with a 30-day interval. Partition1 holds days 1–30 and Partition2 holds days 31–50. After the interval changes to 60 days, data from day 51 onward goes into a new 60-day partition; Partition1 and Partition2 are unaffected.
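The DB4 example can be modeled in a few lines of Python. The day numbering and partition spans here are hypothetical, purely to illustrate the rule: partitions created before the change keep their 30-day span, the partition active at change time closes at day 50, and later data lands in 60-day partitions.

```python
def partition_for_day(day, old_interval=30, change_day=50, new_interval=60):
    """Map a data day to its partition's (start, end) span for the DB4 example.

    Days 1..change_day fall into the original 30-day partitions, with the
    active partition closed early at change_day; later days fall into
    60-day partitions starting at change_day + 1.
    """
    if day <= change_day:
        idx = (day - 1) // old_interval
        start = idx * old_interval + 1
        end = min(start + old_interval - 1, change_day)
        return (start, end)
    offset = day - change_day - 1
    idx = offset // new_interval
    start = change_day + idx * new_interval + 1
    return (start, start + new_interval - 1)

print(partition_for_day(10))   # Partition1: days 1-30
print(partition_for_day(45))   # Partition2: days 31-50, closed early
print(partition_for_day(51))   # first 60-day partition: days 51-110
```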
Configure hot and cold storage
LindormTSDB can automatically move data older than a threshold to cost-effective cold storage. Set the threshold with the cold_boundary parameter (in days).
Before configuring a boundary, enable cold storage for the database.
Create a database with a 30-day boundary:
```sql
CREATE DATABASE DB5 WITH (cold_boundary=30);
```
Update the boundary on an existing database:
```sql
ALTER DATABASE DB5 WITH (cold_boundary=30);
```
Data moved to cold storage cannot be moved back to hot storage.
The transfer happens asynchronously in the background. Storage size metrics may take time to reflect the change after you configure or update the boundary.
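As a rough mental model (not the actual background transfer mechanism), a data point's storage tier follows from its age relative to cold_boundary and ttl. A Python sketch, using DB3-style settings (ttl=60, cold_boundary=30):

```python
from datetime import datetime, timedelta

def storage_tier(point_time, now, cold_boundary_days=30, ttl_days=60):
    """Classify a data point by age: newer than cold_boundary -> hot,
    older than ttl -> expired (deleted), anything in between -> cold.

    Illustrative only; the real transfer runs asynchronously server-side.
    """
    age = now - point_time
    if age >= timedelta(days=ttl_days):
        return "expired"
    if age >= timedelta(days=cold_boundary_days):
        return "cold"
    return "hot"

now = datetime(2024, 3, 1)
print(storage_tier(datetime(2024, 2, 25), now))  # 5 days old  -> hot
print(storage_tier(datetime(2024, 1, 20), now))  # 41 days old -> cold
print(storage_tier(datetime(2023, 12, 1), now))  # 91 days old -> expired
```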
Create a time series table
A time series table has three types of columns:
| Column type | Purpose | Indexing |
|---|---|---|
| Tag (VARCHAR TAG) | Identifies a time series; used in WHERE filters | LindormTSDB creates an inverted index for each tag column |
| Timestamp (TIMESTAMP) | Records when a data point was observed | Required; used for time-range queries and partition pruning |
| Field (DOUBLE, etc.) | Stores the measured value | Not indexed |
Before creating a table, determine which columns should be tags and which should be fields.
Choose tags vs. fields
Use a tag when the value:
- Identifies or groups a time series (device ID, region, sensor type)
- Appears in filter conditions (WHERE tag = 'value')
- Has a bounded, predictable set of distinct values

Use a field when the value:
- Is a numeric measurement that changes continuously (temperature, CPU usage)
- Is unique or highly variable per data point
Avoid putting high-cardinality data (unique IDs, random strings, hashes) into tag columns. LindormTSDB creates an inverted index for each tag column, so an unbounded number of distinct tag values leads to index bloat and increased memory usage.
- Recommended: use a bounded identifier such as region or sensor_type as a tag.
- Not recommended: use a UUID or random hash as a tag value.
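A toy inverted index makes the cardinality concern visible: a bounded tag such as region produces a handful of index entries no matter how many rows arrive, while a per-request unique value produces one entry per row. This is an illustrative in-memory model only, and the column names are hypothetical:

```python
from collections import defaultdict

def build_inverted_index(rows, tag):
    """Toy inverted index: tag value -> set of matching row ids."""
    index = defaultdict(set)
    for row_id, row in enumerate(rows):
        index[row[tag]].add(row_id)
    return index

rows = [
    {"region": "us-east", "request_id": "a9f3"},
    {"region": "us-east", "request_id": "07bc"},
    {"region": "eu-west", "request_id": "551d"},
]
# Bounded tag: entry count stays small as rows grow.
print(len(build_inverted_index(rows, "region")))      # 2 entries
# Unbounded value: one entry per row, so the index grows with the data.
print(len(build_inverted_index(rows, "request_id")))  # 3 entries
```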
Choose a primary key
The primary key doubles as the partition key. LindormTSDB hashes the primary key to route data to a specific storage node. Queries that filter on the primary key go directly to that node instead of scanning all nodes.
Use a column with unique values per device or entity as the primary key—device IDs are a common choice in Internet of Things (IoT) scenarios.
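A sketch of hash-based routing helps explain why primary-key filters skip the cluster-wide scan: the same key always hashes to the same node, so a query such as WHERE device_id = 'device-001' can be sent to exactly one node. MD5 is an arbitrary stand-in here; LindormTSDB's internal hash function is not documented in this topic.

```python
import hashlib

def route(primary_key, num_nodes=4):
    """Map a primary key to a storage node by hashing it (illustrative).

    Deterministic: the same key always routes to the same node, which is
    what lets primary-key queries target a single node.
    """
    digest = hashlib.md5(primary_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_nodes

node = route("device-001")
print(f"device-001 -> node {node}")
```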
Examples
Create a sensor table without a primary key:
```sql
CREATE TABLE sensor (
    device_id VARCHAR TAG,
    region VARCHAR TAG,
    time TIMESTAMP,
    temperature DOUBLE,
    humidity DOUBLE
);
```
Create a sensor table with device_id as the primary key:
```sql
CREATE TABLE sensor (
    device_id VARCHAR TAG,
    region VARCHAR TAG,
    time TIMESTAMP,
    temperature DOUBLE,
    humidity DOUBLE,
    PRIMARY KEY(device_id)
);
```
For more background on data modeling, see Data model and Design a time series table.
Automatic table creation
When you write data using the Lindorm native SDK for Java, the InfluxDB line protocol, or the multi-value data model compatible with TSDB APIs, specify one of three schema constraint policies:
| Policy | Behavior |
|---|---|
| Strong constraint | Rejects writes to tables or columns that do not exist |
| Weak constraint | Creates missing tables or columns automatically; validates column types when columns exist |
| No constraint | No schema enforcement |
With the weak constraint policy, LindormTSDB creates the table or column on the first write. See Constraint policies for schemas for details.
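The behavior of the three policies can be summarized as a small Python sketch. The schema is a toy in-memory dict, and the weak policy's column type validation is omitted for brevity:

```python
def apply_write(schema, table, columns, policy):
    """Sketch of the three schema constraint policies on a write.

    schema: dict mapping table name -> set of existing column names
    (a toy model of the server-side schema, not the real catalog).
    """
    if policy == "none":
        return "accepted"                  # no schema enforcement at all
    if table not in schema:
        if policy == "strong":
            return "rejected: table missing"
        schema[table] = set(columns)       # weak: auto-create the table
        return "accepted"
    missing = set(columns) - schema[table]
    if missing and policy == "strong":
        return "rejected: columns missing"
    schema[table] |= missing               # weak: auto-add new columns
    return "accepted"

schema = {}
print(apply_write(schema, "sensor", ["device_id", "time"], "strong"))
print(apply_write(schema, "sensor", ["device_id", "time"], "weak"))
print(apply_write(schema, "sensor", ["device_id", "humidity"], "weak"))
```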
Single-value and multi-value data models
Data written via the OpenTSDB protocol or TSDB APIs uses the single-value model. LindormTSDB does not create a table for single-value data.
Use the multi-value model instead. LindormTSDB optimizes its underlying storage for multi-value data and automatically creates tables under the weak constraint policy, giving you better read performance and simpler schema management.