
Tablestore: What is Tablestore?

Last Updated: Mar 31, 2026

Tablestore is a serverless table storage service for large volumes of structured data. It supports four data models for AI, metadata, messaging, and spatio-temporal workloads: Wide Column, AI (Agent Memory), TimeSeries, and Timeline. Built-in features include search index, vector search, and SQL query. Tablestore integrates with compute ecosystems such as MaxCompute, Flink, Spark, and Presto, and is compatible with AI frameworks including Dify, LangChain, and LlamaIndex. It also provides the MCP intelligent Agent architecture for AI chat applications and the IoTstore solution for IoT use cases.

Key concepts

| Term | Description |
| --- | --- |
| Region | A physical data center. Tablestore is deployed across multiple Alibaba Cloud regions. See Regions and zones. |
| Read/write throughput | Measured in read capacity units (RCUs) and write capacity units (WCUs). A capacity unit (CU) is the smallest billing unit for read and write operations. See Read/write throughput. |
| Instance | An entity for managing and using the Tablestore service. Each instance is equivalent to a database. Access control and resource metering are performed at the instance level. See Instances. |
| Endpoint | Each instance has an endpoint. Applications must specify the endpoint when performing operations on tables and data. See Endpoints. |
| Data lifecycle | Time to live (TTL) is a property of a data table that specifies the data retention period in seconds. Tablestore clears expired data in the background, reducing storage space and costs. See Data versions and TTL. |
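To make the TTL behavior concrete, the sketch below models how a data version written at a given timestamp is judged expired once it outlives the table's TTL in seconds. This is a simplified local illustration with hypothetical helper names, not the Tablestore API; actual expiry is performed by the service in the background.

```python
import time

def is_expired(version_timestamp_ms, ttl_seconds, now_ms=None):
    """Return True if a data version written at version_timestamp_ms
    (milliseconds) has exceeded the table's TTL (seconds).
    Illustrative only: Tablestore clears expired data server-side."""
    if ttl_seconds == -1:  # a TTL of -1 conventionally means "never expire"
        return False
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return now_ms - version_timestamp_ms > ttl_seconds * 1000

# A version written 10 days ago with a 7-day TTL is expired:
now = int(time.time() * 1000)
ten_days_ago = now - 10 * 86400 * 1000
print(is_expired(ten_days_ago, 7 * 86400, now))  # True
```

Note that because the retention period is a table property, lowering the TTL can make previously valid versions expire on the next background sweep.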

Data storage models

In addition to the AI (Agent Memory) model, Tablestore provides three data storage models. Select a model based on your use case. For a feature comparison across models, see Features.

| Model | Description |
| --- | --- |
| Wide table model | Similar to the Bigtable and HBase models. Suitable for metadata and big data scenarios. Supports data versions, TTL, auto-increment primary key columns, conditional updates, local transactions, atomic counters, and filters. See Wide Column model. |
| TimeSeries model | Designed for time series data. Suitable for IoT device monitoring, device data collection, and machine monitoring. Supports automatic creation of time series metadata indexes and a range of time series query capabilities. See TimeSeries model. |
| Message model | Designed for messaging scenarios such as instant messaging (IM) and feed streams. Meets message ordering, massive storage, and real-time synchronization requirements. Also supports full-text search and multi-dimensional composite queries. See Timeline model. |
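The TimeSeries model's core idea can be sketched in a few lines: a series is identified by a measurement name plus a set of tags, and points within a series are retrieved by time range. The class and method names below are illustrative inventions for this toy in-memory model, not the Tablestore SDK.

```python
from collections import defaultdict

class TinyTimeSeriesStore:
    """Toy in-memory model of the TimeSeries idea: a series is keyed by
    (measurement, sorted tags), and points are queried by time range."""
    def __init__(self):
        self._series = defaultdict(list)  # series key -> [(timestamp, fields)]

    @staticmethod
    def _key(measurement, tags):
        return (measurement, tuple(sorted(tags.items())))

    def put(self, measurement, tags, timestamp, fields):
        self._series[self._key(measurement, tags)].append((timestamp, fields))

    def query(self, measurement, tags, start, end):
        """Return points with start <= timestamp < end, ordered by time."""
        points = self._series.get(self._key(measurement, tags), [])
        return sorted((p for p in points if start <= p[0] < end),
                      key=lambda p: p[0])

store = TinyTimeSeriesStore()
store.put("cpu", {"host": "dev-1"}, 1000, {"load": 0.4})
store.put("cpu", {"host": "dev-1"}, 2000, {"load": 0.9})
store.put("cpu", {"host": "dev-2"}, 1500, {"load": 0.1})
print(store.query("cpu", {"host": "dev-1"}, 0, 1500))  # [(1000, {'load': 0.4})]
```

Keying by measurement plus tags is what lets the service index time series metadata separately from the points themselves, which is the basis for the metadata indexes mentioned above.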

Use cases

| Use case | Description |
| --- | --- |
| AI applications | Store and retrieve AI agent memory and contextual data using the AI (Agent Memory) model and vector search. The MCP intelligent Agent architecture supports AI chat applications built with Dify, LangChain, or LlamaIndex. |
| IoT data management | Collect and analyze time series data from IoT devices at scale using the TimeSeries model and the IoTstore solution. |
| Metadata storage | Store and query large-scale metadata, such as file indexes, user profiles, and product catalogs, using the Wide Column model with search index and SQL query. |
| Messaging and social feeds | Build instant messaging systems and feed streams using the Timeline model, with support for message ordering and real-time synchronization across massive data volumes. |
| Spatio-temporal data | Query and analyze location and event data with multi-dimensional composite queries and the search index. |
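The message-ordering guarantee that the messaging use case relies on can be sketched as a timeline that assigns strictly increasing sequence IDs, so a client syncs by asking for everything after the last sequence number it has seen. This is a hypothetical toy, not the Tablestore Timeline API.

```python
import itertools

class TinyTimeline:
    """Toy sketch of the Timeline idea behind IM and feed streams:
    strictly increasing sequence IDs enable pull-based incremental sync."""
    def __init__(self):
        self._seq = itertools.count(1)
        self._messages = []  # [(seq, message)], append-only

    def append(self, message):
        seq = next(self._seq)
        self._messages.append((seq, message))
        return seq

    def read_after(self, last_seq, limit=100):
        """Return messages with seq > last_seq, in order."""
        return [m for m in self._messages if m[0] > last_seq][:limit]

chat = TinyTimeline()
chat.append("hi")
chat.append("how are you?")
chat.append("see you at 9")
print(chat.read_after(1))  # [(2, 'how are you?'), (3, 'see you at 9')]
```

Because each client only needs to remember one integer per timeline, this pattern scales to real-time synchronization across many devices.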

Access methods

| Method | Description |
| --- | --- |
| Console | Manage Tablestore through the Tablestore console. |
| SDK | Develop applications using SDKs for Java, Go, Python, Node.js, .NET, and PHP. See SDK reference. |
| Command line interface | Run Tablestore operations using simple commands. See Command line interface. |

Quick start

Get started with the Wide Column model or the TimeSeries model by using the console or the CLI.

Computing and analytics

Tablestore integrates with multiple compute and analytics engines. Select an engine based on your scenario.

| Analytical tool | Applicable model | Description |
| --- | --- | --- |
| MaxCompute | Wide table model | Create a foreign table via the MaxCompute client to access Tablestore data. See Use MaxCompute. |
| Spark | Wide table model | Access Tablestore through E-MapReduce SQL or DataFrame programming. See Use the Spark compute engine. |
| Hive or HadoopMR | Wide table model | Access Tablestore data with Hive or Hadoop MapReduce. See Use Hive or HadoopMR. |
| Function Compute | Wide table model | Run real-time computing on Tablestore incremental data. See Use Function Compute. |
| Flink | Wide table model, TimeSeries model | Use Real-time Compute Flink to access source tables, dimension tables, or sink tables in Tablestore for real-time big data analysis. See Use Flink. |
| PrestoDB | Wide table model | Query, write, and import Tablestore data using SQL through PrestoDB. See Use Tablestore with PrestoDB. |
| Tablestore search index | Wide table model | Use inverted indexes and columnar storage for complex multidimensional queries and statistical analysis on big data. Supports non-primary key column queries, multi-column combined queries, fuzzy queries, and aggregations such as max/min, count, and grouping. See Search index. |
| Tablestore SQL query | Wide table model, TimeSeries model | Access Tablestore data through a unified SQL interface for complex queries and efficient analysis. See Use SQL query. |
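The search index described above is built on inverted indexes; the toy sketch below shows the core idea of how per-column posting lists are intersected to answer a multi-column combined query on non-primary-key columns. Function names and data are illustrative, not part of any Tablestore API.

```python
from collections import defaultdict

def build_inverted_index(rows):
    """Map (column, value) -> set of row IDs: the posting lists behind
    queries on non-primary-key columns."""
    index = defaultdict(set)
    for row_id, row in rows.items():
        for column, value in row.items():
            index[(column, value)].add(row_id)
    return index

def query_and(index, **conditions):
    """Return row IDs matching ALL column=value conditions
    (intersection of posting lists)."""
    postings = [index.get(item, set()) for item in conditions.items()]
    return set.intersection(*postings) if postings else set()

rows = {
    "r1": {"city": "Hangzhou", "type": "image"},
    "r2": {"city": "Hangzhou", "type": "video"},
    "r3": {"city": "Beijing",  "type": "image"},
}
index = build_inverted_index(rows)
print(query_and(index, city="Hangzhou", type="image"))  # {'r1'}
```

A production search index additionally keeps columnar storage for aggregations (max/min, count, grouping), which this sketch omits.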

Migration and synchronization

Migrate and synchronize heterogeneous data to Tablestore, or export Tablestore data to other services.

| Category | Operation | Description |
| --- | --- | --- |
| Data import | Synchronize data from Kafka | Use the Tablestore Sink Connector to batch import data from Apache Kafka into a Tablestore data table or time series table. |
| Data import | Tablestore data synchronization | Use Tunnel Service, DataWorks, or DataX to synchronize data from one Tablestore data table to another. |
| Data export | Export full data to MaxCompute | Use DataWorks to export full data from Tablestore to MaxCompute. |
| Data export | Sync to OSS | Use DataWorks to export full or incremental data from Tablestore to Object Storage Service (OSS). |
| Data export | Synchronize to a local file | Use the CLI or DataX to download data directly to a local file, or use DataWorks to export the data to OSS and then download it from OSS. |

More features

| Feature | Description |
| --- | --- |
| Access control | Use Resource Access Management (RAM) to configure custom permissions. See Grant permissions to a RAM user using a RAM policy. Further restrict access with control policies in a resource directory, Tablestore network ACLs, and instance policies. See Authorization management. |
| Security | Protect data at rest and in transit using data table encryption and VPC network access. See Data encryption and Network security management. |
| Backup | Back up important data periodically to prevent accidental deletion. See Data backup. |
| Monitoring and alerts | Configure alert notifications for monitoring metrics using Cloud Monitor. See Data monitoring and alerts. |
| Visualization | Visualize Tablestore data as charts using DataV or Grafana. See Data visualization. |

FAQ

What are the differences between Tablestore and traditional relational databases such as MySQL and SQL Server?