Tablestore: Get started with the Wide Column model

Last Updated: Feb 27, 2026

Use the Tablestore CLI to create a data table, write and read data, update and delete rows, and export data. The Wide Column model is similar to the Bigtable and HBase data models. A single data table supports petabyte-level data and tens of millions of queries per second (QPS), making it suitable for metadata storage and big data scenarios.

Prerequisites

Before you begin, ensure that you have:

  • A Tablestore instance. For more information, see Create an instance.

  • The Tablestore CLI installed. For more information, see Download the Tablestore CLI.

  • Your instance endpoint, instance name, AccessKey ID, and AccessKey secret.

Step 1: Configure instance access

Run the config command to set the endpoint, instance name, and credentials for the instance you want to access.

Replace the endpoint, instance name, AccessKey ID, and AccessKey secret with your actual values.
config --endpoint https://myinstance.cn-hangzhou.ots.aliyuncs.com --instance myinstance --id NTSVL************************** --key 7NR2****************************************
Parameter   Description
--endpoint  The endpoint of the Tablestore instance
--instance  The instance name
--id        Your AccessKey ID
--key       Your AccessKey secret

Step 2: Create and select a data table

Each row in a data table has a primary key that uniquely identifies it, plus any number of attribute columns.

  1. Create a data table named order with a single primary key column id of type string. In the --pk parameter, "c" specifies the column name and "t" specifies the data type:

       create -t order --pk '[{"c":"id","t":"string"}]'
  2. Select the order table for subsequent operations. The --wc flag indicates the Wide Column model:

       use --wc -t order

For more information, see Operations on data tables.
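The --pk schema passed to the create command is plain JSON, so it can be generated programmatically. The following is a minimal sketch, assuming only the "c" (column name) and "t" (data type) keys shown in the create example above; pk_schema is a hypothetical helper name, not part of the Tablestore CLI.

```python
import json

# Sketch: build the --pk schema JSON used by the "create" command.
# "c" is the column name, "t" is the data type (e.g. "string", as in
# the create example above).
def pk_schema(columns):
    """columns: list of (name, type) tuples, in primary-key order."""
    return json.dumps([{"c": name, "t": dtype} for name, dtype in columns])

print(pk_schema([("id", "string")]))
# [{"c": "id", "t": "string"}]
```

The resulting string can be wrapped in single quotes and passed directly as the --pk argument.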

Step 3: Write data

Insert a single row

Use the put command to insert a row. Specify the primary key value with --pk and the attribute columns with --attr.

For integer values, add "isint":true to the column definition. Without this flag, numeric values default to the double type.
put --pk '["000000114d884ca1dbd6b9a58e8d0d94"]' --attr '[{"c":"pBrand","v":"brand1"},{"c":"pPrice","v":1599.0},{"c":"payTime","v":1509615334404,"isint":true},{"c":"totalPrice","v":2498.99},{"c":"sName","v":"Peter"},{"c":"pId","v":"p0003004"},{"c":"oId","v":"o0039248410"},{"c":"hasPaid","v":true},{"c":"sId","v":"s0015"},{"c":"orderTime","v":1509614885965,"isint":true},{"c":"pName","v":"brand1 type"},{"c":"cName","v":"Mary"},{"c":"pType","v":"Mobile phone"},{"c":"pCount","v":1,"isint":true},{"c":"cId","v":"c0018"}]'

This inserts an order row with the following 15 attribute columns:

Category             Columns
Product information  pBrand, pPrice, pName, pType, pId, pCount
Order details        oId, orderTime, payTime, totalPrice, hasPaid
Customer and seller  cId, cName, sId, sName
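The --attr payload follows the same pattern: a JSON array of {"c": ..., "v": ...} objects, with "isint": true added for integer values as described above. The following sketch (attr_json is a hypothetical helper, not a CLI feature) builds that payload from plain Python values, inferring the isint flag from the value's type:

```python
import json

# Sketch: build the --attr JSON for "put"/"update" from plain Python values.
# Integers get "isint": true, since the CLI otherwise treats numbers as doubles.
def attr_json(attrs):
    cols = []
    for name, value in attrs.items():
        col = {"c": name, "v": value}
        # bool is a subclass of int in Python, so check it first;
        # booleans (e.g. hasPaid) are passed through without "isint".
        if isinstance(value, bool):
            pass
        elif isinstance(value, int):
            col["isint"] = True
        cols.append(col)
    return json.dumps(cols)

print(attr_json({"pBrand": "brand1", "pPrice": 1599.0,
                 "pCount": 1, "hasPaid": True}))
```

Note that pPrice stays a double (no isint) while pCount is marked as an integer, matching the put command above.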

Import data in batch

For bulk loading, download the sample data package, decompress it, and use the import command.

The sample data file contains 1,000,000 rows of order data. Control the number of imported rows with the -l parameter.

The following command imports 50,000 rows and uses the current time as the timestamp. Replace yourFilePath with the path where you decompressed the sample data (for example, D:\order_demo_data_1000000\order_demo_data_1000000):

import -i yourFilePath --ignore_version -l 50000
Parameter         Description
-i                Input file path
--ignore_version  Use the current time as the timestamp
-l                Maximum number of rows to import

Expected output:

Current speed is: 15800 rows/s. Total succeed count 15800, failed count 0.
Current speed is: 27400 rows/s. Total succeed count 43200, failed count 0.
Import finished, total count is 50000, failed 0 rows.

Step 4: Read data

You can also use SQL statements to query and analyze table data. For more information, see SQL query.

Read a single row

Use the get command with the primary key value to read a specific row:

get --pk '["000000114d884ca1dbd6b9a58e8d0d94"]'

Expected output:

+----------------------------------+-------+-------+---------+-------------+---------------+--------+--------+----------+-------------+--------+--------------+---------------+-------+-------+------------+
| id                               | cId   | cName | hasPaid | oId         | orderTime     | pBrand | pCount | pId      | pName       | pPrice | pType        | payTime       | sId   | sName | totalPrice |
+----------------------------------+-------+-------+---------+-------------+---------------+--------+--------+----------+-------------+--------+--------------+---------------+-------+-------+------------+
| 000000114d884ca1dbd6b9a58e8d0d94 | c0018 | Mary  | true    | o0039248410 | 1509614885965 | brand1 | 1      | p0003004 | brand1 type | 1599   | Mobile phone | 1509615334404 | s0015 | Peter | 2498.99    |
+----------------------------------+-------+-------+---------+-------------+---------------+--------+--------+----------+-------------+--------+--------------+---------------+-------+-------+------------+

Scan multiple rows

Use the scan command to read multiple rows. The --limit parameter controls the maximum number of rows returned:

scan --limit 10

Step 5: Update data

Use the update command to modify a row. The --condition ignore flag applies the update regardless of whether the row already exists; if the row exists, the values in the request overwrite the existing values of those columns.

update --pk '["000000114d884ca1dbd6b9a58e8d0d94"]' --attr '[{"c":"pBrand","v":"brand2"},{"c":"pPrice","v":1599.0},{"c":"payTime","v":1509615334404,"isint":true},{"c":"totalPrice","v":2498.99},{"c":"sName","v":"Peter"},{"c":"pId","v":"p0003004"},{"c":"oId","v":"o0039248410"},{"c":"hasPaid","v":true},{"c":"sId","v":"s0015"},{"c":"orderTime","v":1509614885965,"isint":true},{"c":"pName","v":"brand2 type"},{"c":"cName","v":"Mary"},{"c":"pType","v":"Mobile phone"},{"c":"pCount","v":1,"isint":true},{"c":"cId","v":"c0018"}]' --condition ignore

In this example, pBrand changes from brand1 to brand2 and pName changes from brand1 type to brand2 type.

To verify the update, read the row again:

get --pk '["000000114d884ca1dbd6b9a58e8d0d94"]'
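The overwrite semantics of update with --condition ignore can be pictured with a small in-memory model. This is a conceptual sketch only, not the CLI or server implementation: apply_update is a hypothetical function, and the table is modeled as a plain dict keyed by primary key.

```python
# Conceptual sketch (not the CLI itself): with "--condition ignore", an
# update writes the given attribute columns whether or not the row exists;
# columns named in the request replace existing values, others are untouched.
def apply_update(table, pk, attrs):
    row = table.setdefault(pk, {})  # create the row if it does not exist
    row.update(attrs)               # overwrite only the supplied columns
    return row

table = {"000000114d884ca1dbd6b9a58e8d0d94":
         {"pBrand": "brand1", "pName": "brand1 type", "pCount": 1}}
apply_update(table, "000000114d884ca1dbd6b9a58e8d0d94",
             {"pBrand": "brand2", "pName": "brand2 type"})
print(table["000000114d884ca1dbd6b9a58e8d0d94"]["pBrand"])  # brand2
```

After the update, pBrand and pName hold the new values while pCount keeps its original value, mirroring the verification step above.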

Step 6: Delete data

Use the delete command with the primary key value to remove a row:

delete --pk '["000000114d884ca1dbd6b9a58e8d0d94"]'

Step 7: Export data

Use the scan command with the -o and -c parameters to export data from the data table to a local JSON file.

The following command exports the pId, oId, and cName columns to /tmp/mydata.json:

scan -o /tmp/mydata.json -c pId,oId,cName
Parameter  Description
-o         Output file path (JSON format)
-c         Columns to export (comma-separated)
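For downstream processing, the exported file can be read back in a few lines of Python. This sketch assumes the export contains one JSON object per line with column names as keys; verify that assumption against your actual file before relying on it, and note that load_export is a hypothetical helper, not part of the CLI.

```python
import json

# Sketch for post-processing the exported file. Assumes one JSON object per
# line keyed by column name -- check your actual export format first.
def load_export(path, columns):
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            # keep only the requested columns; missing columns become None
            rows.append({c: record.get(c) for c in columns})
    return rows
```

For example, load_export("/tmp/mydata.json", ["pId", "oId", "cName"]) would return a list of dicts restricted to the three exported columns.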

For more information, see Operations on data.

What to do next

To accelerate data queries, create secondary indexes or search indexes.