Lindorm: Results of write tests

Last Updated: Mar 28, 2026

These benchmark results show the write throughput LindormTSDB achieves across four cluster sizes. Use them to estimate whether a given cluster specification can handle your expected write volume, and to tune batch size and concurrency before going to production.

Important

Results were measured in a dedicated write-only environment with no concurrent queries. Throughput drops when queries run alongside writes or when the volume of time series data grows large. Treat these numbers as a reference ceiling, not a production guarantee. Test with your own workload before sizing.

Cluster sizing reference

Use this table to select the cluster specification that meets your target TPS. Numbers are peak TPS at batch=500 and optimal concurrency in a write-only environment.

| Cluster specification | Peak TPS (approx.) | Recommended for |
| --- | --- | --- |
| 3 nodes x 4 cores, 16 GB | ~544,000 | Up to ~500K field values/s |
| 3 nodes x 8 cores, 32 GB | ~3,440,000 | Up to ~3M field values/s |
| 3 nodes x 16 cores, 64 GB | ~7,290,000 | Up to ~7M field values/s |
| 3 nodes x 32 cores, 128 GB | ~12,400,000 | Up to ~12M field values/s |
Note

These figures assume no concurrent query load. Size up by at least one tier if your workload mixes reads and writes.
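
As a worked example using the data model from the test below: 10,000 devices each reporting 101 field values every 10 seconds generate 10,000 × 101 ÷ 10 = 101,000 field values per second, which the smallest tier covers with room to spare in a write-only workload.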

How to read these results

Three variables determine write throughput: batch size, worker count, and cluster size. Understanding how each affects TPS helps you tune your client before running your own benchmark.

Batch size has the largest impact. Going from batch=1 to batch=100 typically increases TPS by 10–50x, depending on the cluster. Returns diminish above batch=100–200; raising the batch from 200 to 500 adds only 10–30% more throughput. A batch size of 100–200 rows is a practical starting point for most workloads.
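
As a concrete illustration of the client-side loop, here is a minimal sketch of batched writes through a JDBC PreparedStatement, the write path the benchmark uses. The endpoint URL, credentials, and the choice to write only three cpu columns are placeholder assumptions, not values from the test; substitute your own instance's connection details.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchWriter {
    // Practical starting point per the guidance above; tune between 100 and 200.
    static final int BATCH_SIZE = 100;

    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and credentials -- take these from your own instance.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:lindorm:tsdb:url=http://ld-example:8242", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO cpu(hostname, time, usage_user) VALUES (?, ?, ?)")) {
            long start = System.currentTimeMillis();
            for (int row = 1; row <= 100_000; row++) {
                ps.setString(1, "host_" + (row % 10_000));       // 10,000 devices
                ps.setLong(2, start + (row / 10_000) * 10_000L); // 10 s reporting interval
                ps.setDouble(3, Math.random() * 100);
                ps.addBatch();
                if (row % BATCH_SIZE == 0) {
                    ps.executeBatch(); // one write request per BATCH_SIZE rows
                }
            }
            ps.executeBatch(); // flush the final partial batch
        }
    }
}

Each executeBatch call corresponds to one write request of BATCH_SIZE rows, which is the batch variable in the result tables below.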

Worker count saturates quickly on smaller clusters. On the 4-core cluster, TPS peaks at 16–50 workers and stays flat (or slightly drops) beyond that. On the 8-core cluster, throughput plateaus around 50 workers at every batch size of 50 or more; only at batch=1 does raising the worker count to 100 still add throughput. On the 16-core and 32-core clusters, throughput keeps growing up to 100–200 workers because the larger CPU pool can absorb more parallelism. Start with 16–50 workers, for example with a thread pool like the sketch below, and increase only after confirming CPU headroom is available.
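
Here is a sketch of the concurrency dimension, under the same placeholder assumptions as the batching example above: each worker owns its own Connection and PreparedStatement, because JDBC objects are not safe to share across threads.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentWriters {
    static final int WORKERS = 16;     // start at 16-50; raise only with CPU headroom
    static final int BATCH_SIZE = 100;

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
        for (int w = 0; w < WORKERS; w++) {
            final int workerId = w;
            pool.submit(() -> {
                // One connection per worker; JDBC objects are not thread-safe.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:lindorm:tsdb:url=http://ld-example:8242", "user", "password");
                     PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO cpu(hostname, time, usage_user) VALUES (?, ?, ?)")) {
                    long start = System.currentTimeMillis();
                    for (int row = 1; row <= 10_000; row++) {
                        // Give each worker a disjoint slice of hostnames to avoid
                        // concurrent writes to the same time series.
                        ps.setString(1, "host_" + (workerId * 10_000 + row));
                        ps.setLong(2, start);
                        ps.setDouble(3, Math.random() * 100);
                        ps.addBatch();
                        if (row % BATCH_SIZE == 0) ps.executeBatch();
                    }
                    ps.executeBatch();
                } catch (Exception e) {
                    e.printStackTrace(); // real code should retry or surface this
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}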

Test setup

Data model

The test simulates a server monitoring scenario: 10,000 devices, each reporting 101 field values across nine subsystems every 10 seconds, covering 15 days of data.

One table is created per subsystem. Each table uses hostname as the primary tag (the unique device identifier) plus additional tags for location and service metadata. For details on tags, fields, and data points, see Data model.

The nine subsystems and their tables:

| Subsystem | Table | Field count |
| --- | --- | --- |
| CPU | cpu | 10 (double) |
| Memory | mem | 9 (BIGINT + double) |
| Disk | disk | 7 (BIGINT) |
| Disk I/O | diskio | 7 (BIGINT) |
| Kernel | kernel | 6 (BIGINT) |
| Network | net | 8 (BIGINT) |
| Redis | redis | 31 (BIGINT) |
| PostgreSQL | postgresl | 16 (BIGINT) |
| NGINX | nginx | 7 (BIGINT) |
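
The field counts sum to 10 + 9 + 7 + 7 + 6 + 8 + 31 + 16 + 7 = 101, matching the 101 field values each device reports per interval.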
CREATE TABLE statements:

CREATE TABLE cpu(hostname VARCHAR PRIMARY TAG, region VARCHAR TAG, datacenter VARCHAR TAG, rack VARCHAR TAG, os VARCHAR TAG, arch VARCHAR TAG, team VARCHAR TAG, service VARCHAR TAG, service_version VARCHAR TAG, service_environment VARCHAR TAG, time BIGINT, usage_user double, usage_system double, usage_idle double, usage_nice double, usage_iowait double, usage_irq double, usage_softirq double, usage_steal double, usage_guest double, usage_guest_nice double);

CREATE TABLE mem(hostname VARCHAR PRIMARY TAG, region VARCHAR TAG, datacenter VARCHAR TAG, rack VARCHAR TAG, os VARCHAR TAG, arch VARCHAR TAG, team VARCHAR TAG, service VARCHAR TAG, service_version VARCHAR TAG, service_environment VARCHAR TAG, time BIGINT, total BIGINT, available BIGINT, used BIGINT, `free` BIGINT, cached BIGINT, buffered BIGINT, used_percent double, available_percent double, buffered_percent double);

CREATE TABLE disk(hostname VARCHAR PRIMARY TAG, region VARCHAR TAG, datacenter VARCHAR TAG, rack VARCHAR TAG, os VARCHAR TAG, arch VARCHAR TAG, team VARCHAR TAG, service VARCHAR TAG, service_version VARCHAR TAG, service_environment VARCHAR TAG, path VARCHAR TAG, fstype VARCHAR TAG, time BIGINT, total BIGINT, `free` BIGINT, used BIGINT, used_percent BIGINT, inodes_total BIGINT, inodes_free BIGINT, inodes_used BIGINT);

CREATE TABLE diskio(hostname VARCHAR PRIMARY TAG, region VARCHAR TAG, datacenter VARCHAR TAG, rack VARCHAR TAG, os VARCHAR TAG, arch VARCHAR TAG, team VARCHAR TAG, service VARCHAR TAG, service_version VARCHAR TAG, service_environment VARCHAR TAG, serial VARCHAR TAG, time BIGINT, `reads` BIGINT, writes BIGINT, read_bytes BIGINT, write_bytes BIGINT, read_time BIGINT, write_time BIGINT, io_time BIGINT);

CREATE TABLE kernel(hostname VARCHAR PRIMARY TAG, region VARCHAR TAG, datacenter VARCHAR TAG, rack VARCHAR TAG, os VARCHAR TAG, arch VARCHAR TAG, team VARCHAR TAG, service VARCHAR TAG, service_version VARCHAR TAG, service_environment VARCHAR TAG, time BIGINT, boot_time BIGINT, interrupts BIGINT, context_switches BIGINT, processes_forked BIGINT, disk_pages_in BIGINT, disk_pages_out BIGINT);

CREATE TABLE net(hostname VARCHAR PRIMARY TAG, region VARCHAR TAG, datacenter VARCHAR TAG, rack VARCHAR TAG, os VARCHAR TAG, arch VARCHAR TAG, team VARCHAR TAG, service VARCHAR TAG, service_version VARCHAR TAG, service_environment VARCHAR TAG, interface VARCHAR TAG, time BIGINT, bytes_sent BIGINT, bytes_recv BIGINT, packets_sent BIGINT, packets_recv BIGINT, err_in BIGINT, err_out BIGINT, drop_in BIGINT, drop_out BIGINT);

CREATE TABLE redis(hostname VARCHAR PRIMARY TAG, region VARCHAR TAG, datacenter VARCHAR TAG, rack VARCHAR TAG, os VARCHAR TAG, arch VARCHAR TAG, team VARCHAR TAG, service VARCHAR TAG, service_version VARCHAR TAG, service_environment VARCHAR TAG, port VARCHAR TAG, server VARCHAR TAG, time BIGINT, uptime_in_seconds BIGINT, total_connections_received BIGINT, expired_keys BIGINT, evicted_keys BIGINT, keyspace_hits BIGINT, keyspace_misses BIGINT, instantaneous_ops_per_sec BIGINT, instantaneous_input_kbps BIGINT, instantaneous_output_kbps BIGINT, connected_clients BIGINT, used_memory BIGINT, used_memory_rss BIGINT, used_memory_peak BIGINT, used_memory_lua BIGINT, rdb_changes_since_last_save BIGINT, sync_full BIGINT, sync_partial_ok BIGINT, sync_partial_err BIGINT, pubsub_channels BIGINT, pubsub_patterns BIGINT, latest_fork_usec BIGINT, connected_slaves BIGINT, master_repl_offset BIGINT, repl_backlog_active BIGINT, repl_backlog_size BIGINT, repl_backlog_histlen BIGINT, mem_fragmentation_ratio BIGINT, used_cpu_sys BIGINT, used_cpu_user BIGINT, used_cpu_sys_children BIGINT, used_cpu_user_children BIGINT);

CREATE TABLE postgresl(hostname VARCHAR PRIMARY TAG, region VARCHAR TAG, datacenter VARCHAR TAG, rack VARCHAR TAG, os VARCHAR TAG, arch VARCHAR TAG, team VARCHAR TAG, service VARCHAR TAG, service_version VARCHAR TAG, service_environment VARCHAR TAG, time BIGINT, numbackends BIGINT, xact_commit BIGINT, xact_rollback BIGINT, blks_read BIGINT, blks_hit BIGINT, tup_returned BIGINT, tup_fetched BIGINT, tup_inserted BIGINT, tup_updated BIGINT, tup_deleted BIGINT, conflicts BIGINT, temp_files BIGINT, temp_bytes BIGINT, deadlocks BIGINT, blk_read_time BIGINT, blk_write_time BIGINT);

CREATE TABLE nginx(hostname VARCHAR PRIMARY TAG, region VARCHAR TAG, datacenter VARCHAR TAG, rack VARCHAR TAG, os VARCHAR TAG, arch VARCHAR TAG, team VARCHAR TAG, service VARCHAR TAG, service_version VARCHAR TAG, service_environment VARCHAR TAG, port VARCHAR TAG, server VARCHAR TAG, time BIGINT, accepts BIGINT, active BIGINT, handled BIGINT, reading BIGINT, requests BIGINT, waiting BIGINT, writing BIGINT);
Sample INSERT statements:

INSERT INTO cpu(hostname,region,datacenter,rack,os,arch,team,service,service_version,service_environment, time ,usage_user,usage_system,usage_idle,usage_nice,usage_iowait,usage_irq,usage_softirq,usage_steal,usage_guest,usage_guest_nice) VALUES ('host_0','ap-northeast-1','ap-northeast-1a','72','Ubuntu16.10','x86','CHI','10','0','test',1514764800000,60.4660287979619540,94.0509088045012476,66.4560053218490481,43.7714187186980155,42.4637497071265670,68.6823072867109374,6.5637019217476222,15.6519254732791246,9.6969518914484567,30.0911860585287059);
INSERT INTO diskio(hostname,region,datacenter,rack,os,arch,team,service,service_version,service_environment,serial, time ,`reads`,writes,read_bytes,write_bytes,read_time,write_time,io_time) VALUES ('host_0','ap-northeast-1','ap-northeast-1a','72','Ubuntu16.10','x86','CHI','10','0','test','694-511-162',1514764800000,0,0,3,0,0,7,0);
INSERT INTO disk(hostname,region,datacenter,rack,os,arch,team,service,service_version,service_environment,path,fstype, time ,total,`free`,used,used_percent,inodes_total,inodes_free,inodes_used) VALUES ('host_0','ap-northeast-1','ap-northeast-1a','72','Ubuntu16.10','x86','CHI','10','0','test','/dev/sda9','ext4',1514764800000,1099511627776,549755813888,549755813888,50,268435456,134217728,134217728);
INSERT INTO kernel(hostname,region,datacenter,rack,os,arch,team,service,service_version,service_environment, time ,boot_time,interrupts,context_switches,processes_forked,disk_pages_in,disk_pages_out) VALUES ('host_0','ap-northeast-1','ap-northeast-1a','72','Ubuntu16.10','x86','CHI','10','0','test',1514764800000,233,0,1,0,0,0);
INSERT INTO mem(hostname,region,datacenter,rack,os,arch,team,service,service_version,service_environment, time ,total,available,used,`free`,cached,buffered,used_percent,available_percent,buffered_percent) VALUES ('host_0','ap-northeast-1','ap-northeast-1a','72','Ubuntu16.10','x86','CHI','10','0','test',1514764800000,8589934592,6072208808,2517725783,5833292948,1877356426,2517725783,29.3101857336815748,70.6898142663184217,78.1446947407235797);
INSERT INTO net(hostname,region,datacenter,rack,os,arch,team,service,service_version,service_environment,interface, time ,bytes_sent,bytes_recv,packets_sent,packets_recv,err_in,err_out,drop_in,drop_out) VALUES ('host_0','ap-northeast-1','ap-northeast-1a','72','Ubuntu16.10','x86','CHI','10','0','test','eth3',1514764800000,0,0,0,2,0,0,0,0);
INSERT INTO nginx(hostname,region,datacenter,rack,os,arch,team,service,service_version,service_environment,port,server, time ,accepts,active,handled,reading,requests,waiting,writing) VALUES ('host_0','ap-northeast-1','ap-northeast-1a','72','Ubuntu16.10','x86','CHI','10','0','test','12552','nginx_65466',1514764800000,0,0,11,0,0,0,0);
INSERT INTO postgresl(hostname,region,datacenter,rack,os,arch,team,service,service_version,service_environment, time ,numbackends,xact_commit,xact_rollback,blks_read,blks_hit,tup_returned,tup_fetched,tup_inserted,tup_updated,tup_deleted,conflicts,temp_files,temp_bytes,deadlocks,blk_read_time,blk_write_time) VALUES ('host_0','ap-northeast-1','ap-northeast-1a','72','Ubuntu16.10','x86','CHI','10','0','test',1514764800000,0,0,0,0,3,0,0,0,0,0,0,0,12,0,0,0);
INSERT INTO redis(hostname,region,datacenter,rack,os,arch,team,service,service_version,service_environment,port,server, time ,uptime_in_seconds,total_connections_received,expired_keys,evicted_keys,keyspace_hits,keyspace_misses,instantaneous_ops_per_sec,instantaneous_input_kbps,instantaneous_output_kbps,connected_clients,used_memory,used_memory_rss,used_memory_peak,used_memory_lua,rdb_changes_since_last_save,sync_full,sync_partial_ok,sync_partial_err,pubsub_channels,pubsub_patterns,latest_fork_usec,connected_slaves,master_repl_offset,repl_backlog_active,repl_backlog_size,repl_backlog_histlen,mem_fragmentation_ratio,used_cpu_sys,used_cpu_user,used_cpu_sys_children,used_cpu_user_children) VALUES ('host_0','ap-northeast-1','ap-northeast-1a','72','Ubuntu16.10','x86','CHI','10','0','test','19071','redis_86258',1514764800000,0,0,0,5,0,0,0,0,0,8589934592,8589934592,8589934592,8589934592,0,0,0,0,36,0,0,0,0,0,0,0,0,0,16,0,0);

All writes use batch INSERT with PreparedStatement to maximize throughput. For guidance on efficient write patterns, see Write data in an efficient manner.

Metrics

| Metric | Definition |
| --- | --- |
| tps | Average field values written per second. In LindormTSDB, TPS counts individual field values (not rows or transactions). |
| worker | Number of concurrent write threads. |
| batch | Number of rows included in each write request. |
| max_cpu | Peak CPU utilization across all nodes during the test, as a percentage. |
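
For example, a single row inserted into the cpu table carries 10 field values and therefore counts as 10 toward TPS; a batch of 200 cpu rows counts as 2,000 field values.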

Test results

All four clusters use 3 nodes. Results are from a write-only environment with no concurrent queries.

Figure 1: Write TPS at batch=500 across cluster sizes and concurrency levels


Table 1: Cluster 1 — 3 nodes x 4 cores, 16 GB memory

| batch | worker | tps | max_cpu (%) |
| --- | --- | --- | --- |
| 1 | 1 | 4,846.48 | 36.68 |
| 1 | 16 | 36,862.15 | 78.50 |
| 1 | 50 | 31,653.44 | 73.46 |
| 1 | 100 | 31,521.71 | 74.32 |
| 1 | 200 | 31,651.03 | 73.03 |
| 50 | 1 | 126,462.85 | 65.43 |
| 50 | 16 | 460,032.75 | 79.89 |
| 50 | 50 | 457,791.78 | 81.50 |
| 50 | 100 | 457,956.53 | 82.69 |
| 50 | 200 | 434,573.47 | 81.18 |
| 100 | 1 | 168,643.80 | 74.14 |
| 100 | 16 | 468,008.25 | 84.09 |
| 100 | 50 | 470,608.31 | 84.34 |
| 100 | 100 | 451,384.44 | 83.32 |
| 100 | 200 | 457,740.22 | 84.61 |
| 200 | 1 | 205,046.31 | 74.77 |
| 200 | 16 | 480,309.56 | 84.74 |
| 200 | 50 | 489,903.34 | 86.73 |
| 200 | 100 | 484,745.44 | 86.77 |
| 200 | 200 | 475,824.97 | 86.55 |
| 500 | 1 | 239,847.34 | 74.76 |
| 500 | 16 | 511,989.50 | 87.86 |
| 500 | 50 | 544,544.75 | 88.23 |
| 500 | 100 | 543,131.56 | 88.12 |
| 500 | 200 | 528,027.12 | 88.57 |

Table 2: Cluster 2 — 3 nodes x 8 cores, 32 GB memory

| batch | worker | tps | max_cpu (%) |
| --- | --- | --- | --- |
| 1 | 1 | 3,601.72 | 19.88 |
| 1 | 16 | 46,701.05 | 46.97 |
| 1 | 50 | 69,892.66 | 59.77 |
| 1 | 100 | 70,219.33 | 60.32 |
| 1 | 200 | 70,187.81 | 60.54 |
| 50 | 1 | 114,062.01 | 22.88 |
| 50 | 16 | 1,123,739.88 | 64.66 |
| 50 | 50 | 1,416,314.00 | 70.99 |
| 50 | 100 | 1,421,701.75 | 70.94 |
| 50 | 200 | 1,422,040.12 | 71.56 |
| 100 | 1 | 183,456.98 | 22.38 |
| 100 | 16 | 1,651,046.25 | 65.09 |
| 100 | 50 | 2,029,514.75 | 74.16 |
| 100 | 100 | 2,040,670.38 | 73.39 |
| 100 | 200 | 2,025,066.12 | 73.98 |
| 200 | 1 | 254,914.23 | 24.27 |
| 200 | 16 | 2,172,662.25 | 71.47 |
| 200 | 50 | 2,670,999.25 | 76.47 |
| 200 | 100 | 2,674,582.25 | 76.95 |
| 200 | 200 | 2,693,531.50 | 76.41 |
| 500 | 1 | 332,250.78 | 23.86 |
| 500 | 16 | 2,820,651.50 | 72.56 |
| 500 | 50 | 3,429,375.00 | 80.98 |
| 500 | 100 | 3,442,593.75 | 80.62 |
| 500 | 200 | 3,440,201.50 | 81.12 |

Table 3: Cluster 3 — 3 nodes x 16 cores, 64 GB memory

| batch | worker | tps | max_cpu (%) |
| --- | --- | --- | --- |
| 1 | 1 | 3,897.79 | 8.97 |
| 1 | 16 | 58,217.44 | 27.30 |
| 1 | 50 | 127,110.89 | 50.49 |
| 1 | 100 | 165,754.09 | 62.31 |
| 1 | 200 | 202,844.20 | 70.72 |
| 50 | 1 | 136,378.39 | 11.77 |
| 50 | 16 | 1,634,203.88 | 41.92 |
| 50 | 50 | 2,773,785.75 | 58.96 |
| 50 | 100 | 3,363,458.25 | 67.87 |
| 50 | 200 | 3,703,033.00 | 74.14 |
| 100 | 1 | 198,375.67 | 10.35 |
| 100 | 16 | 2,494,268.00 | 45.86 |
| 100 | 50 | 4,007,320.25 | 60.87 |
| 100 | 100 | 4,753,680.50 | 69.29 |
| 100 | 200 | 5,095,771.00 | 75.17 |
| 200 | 1 | 278,253.53 | 10.57 |
| 200 | 16 | 3,368,596.50 | 45.48 |
| 200 | 50 | 5,214,060.50 | 64.57 |
| 200 | 100 | 6,040,166.50 | 72.35 |
| 200 | 200 | 6,283,312.00 | 77.07 |
| 500 | 1 | 352,744.78 | 10.80 |
| 500 | 16 | 4,281,761.50 | 47.17 |
| 500 | 50 | 6,544,214.00 | 71.73 |
| 500 | 100 | 7,267,295.50 | 77.15 |
| 500 | 200 | 7,290,116.00 | 83.91 |

Table 4: Cluster 4 — 3 nodes x 32 cores, 128 GB memory

| batch | worker | tps | max_cpu (%) |
| --- | --- | --- | --- |
| 1 | 1 | 3,405.32 | 4.52 |
| 1 | 16 | 51,460.87 | 11.04 |
| 1 | 50 | 134,289.62 | 27.32 |
| 1 | 100 | 201,014.75 | 40.45 |
| 1 | 200 | 255,692.84 | 51.99 |
| 50 | 1 | 113,644.64 | 5.33 |
| 50 | 16 | 1,596,669.88 | 19.13 |
| 50 | 50 | 3,676,491.50 | 38.75 |
| 50 | 100 | 5,217,282.50 | 50.84 |
| 50 | 200 | 6,345,112.00 | 62.49 |
| 100 | 1 | 188,352.08 | 5.05 |
| 100 | 16 | 2,624,622.50 | 21.15 |
| 100 | 50 | 5,740,561.50 | 40.49 |
| 100 | 100 | 7,521,672.00 | 55.85 |
| 100 | 200 | 8,507,855.00 | 61.68 |
| 200 | 1 | 249,571.77 | 5.05 |
| 200 | 16 | 3,637,803.50 | 21.23 |
| 200 | 50 | 8,141,380.50 | 45.39 |
| 200 | 100 | 10,289,145.00 | 57.85 |
| 200 | 200 | 10,462,525.00 | 60.48 |
| 500 | 1 | 334,678.31 | 5.47 |
| 500 | 16 | 4,657,772.50 | 24.23 |
| 500 | 50 | 10,098,200.00 | 46.90 |
| 500 | 100 | 12,405,648.00 | 64.57 |
| 500 | 200 | 12,136,903.00 | 66.01 |

What's next