Use the Tablestore CLI to create a data table, write and read data, update and delete rows, and export data. The Wide Column model is similar to the Bigtable or HBase data model. A single data table supports petabyte-level data and tens of millions of queries per second (QPS), making it suitable for metadata storage and big data scenarios.
Prerequisites
Before you begin, ensure that you have:
A Tablestore instance. For more information, see Create an instance.
The Tablestore CLI installed. For more information, see Download the Tablestore CLI.
Your instance endpoint, instance name, AccessKey ID, and AccessKey secret.
Step 1: Configure instance access
Run the config command to set the endpoint, instance name, and credentials for the instance you want to access.
Replace the endpoint, instance name, AccessKey ID, and AccessKey secret with your actual values.
```
config --endpoint https://myinstance.cn-hangzhou.ots.aliyuncs.com --instance myinstance --id NTSVL************************** --key 7NR2****************************************
```

| Parameter | Description |
|---|---|
| --endpoint | The endpoint of the Tablestore instance |
| --instance | The instance name |
| --id | Your AccessKey ID |
| --key | Your AccessKey secret |
Step 2: Create and select a data table
Each row in a data table has a primary key that uniquely identifies it, plus any number of attribute columns.
1. Create a data table named order with a single primary key column id of type string. In the --pk parameter, "c" specifies the column name and "t" specifies the data type.

   ```
   create -t order --pk '[{"c":"id","t":"string"}]'
   ```

2. Select the order table for subsequent operations. The --wc flag indicates the Wide Column model:

   ```
   use --wc -t order
   ```
For more information, see Operations on data tables.
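The row shape described above can be pictured with a small sketch. The following plain-Python model (not the Tablestore SDK; `table`, `put_row`, and `get_row` are illustrative names) shows a wide-column table as primary keys mapped to free-form attribute columns:

```python
# A conceptual, in-memory model of the Wide Column layout: each row is
# addressed by its primary key and carries its own set of attribute columns.
table = {}  # primary key value -> {column name: value}

def put_row(pk, attrs):
    """Insert or overwrite the row identified by the primary key pk."""
    table[pk] = dict(attrs)

def get_row(pk):
    """Read a single row by primary key; returns None if the row is absent."""
    return table.get(pk)

# Rows in the same table may carry different attribute columns.
put_row("000000114d884ca1dbd6b9a58e8d0d94", {"pBrand": "brand1", "pCount": 1})
put_row("o0039248411", {"cName": "Mary", "hasPaid": False})
```

In Tablestore itself, the table schema fixes only the primary key columns; attribute columns are free-form per row, which is what makes the model similar to HBase.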
Step 3: Write data
Insert a single row
Use the put command to insert a row. Specify the primary key value with --pk and the attribute columns with --attr.
For integer values, add "isint":true to the column definition. Without this flag, numeric values default to the double type.

```
put --pk '["000000114d884ca1dbd6b9a58e8d0d94"]' --attr '[{"c":"pBrand","v":"brand1"},{"c":"pPrice","v":1599.0},{"c":"payTime","v":1509615334404,"isint":true},{"c":"totalPrice","v":2498.99},{"c":"sName","v":"Peter"},{"c":"pId","v":"p0003004"},{"c":"oId","v":"o0039248410"},{"c":"hasPaid","v":true},{"c":"sId","v":"s0015"},{"c":"orderTime","v":1509614885965,"isint":true},{"c":"pName","v":"brand1 type"},{"c":"cName","v":"Mary"},{"c":"pType","v":"Mobile phone"},{"c":"pCount","v":1,"isint":true},{"c":"cId","v":"c0018"}]'
```

This inserts an order row with the following 15 attribute columns:
| Category | Columns |
|---|---|
| Product information | pBrand, pPrice, pName, pType, pId, pCount |
| Order details | oId, orderTime, payTime, totalPrice, hasPaid |
| Customer and seller | cId, cName, sId, sName |
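Hand-writing the --attr payload is error-prone for wide rows. A hedged sketch (`to_attr_json` is an illustrative helper, not part of the CLI) that emits the same JSON shape shown in this step — "c" for the column name, "v" for the value, and "isint":true for integer columns:

```python
import json

def to_attr_json(columns):
    """Build an --attr payload in the shape shown above.

    Each column becomes {"c": name, "v": value}; integer values additionally
    carry "isint": true so they are not stored as doubles.
    """
    attrs = []
    for name, value in columns.items():
        col = {"c": name, "v": value}
        # bool is a subclass of int in Python, so check it first:
        # booleans need no type hint, plain ints get "isint": true.
        if isinstance(value, int) and not isinstance(value, bool):
            col["isint"] = True
        attrs.append(col)
    return json.dumps(attrs)

payload = to_attr_json({"pBrand": "brand1", "pCount": 1,
                        "hasPaid": True, "pPrice": 1599.0})
```

The resulting string can be pasted as the --attr argument of the put command.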
Import data in batches
For bulk loading, download the sample data package, decompress it, and use the import command.
The sample data file contains 1,000,000 rows of order data. Control the number of imported rows with the -l parameter. The following command imports 50,000 rows and uses the current time as the timestamp. Replace yourFilePath with the path where you decompressed the sample data (for example, D:\order_demo_data_1000000\order_demo_data_1000000):

```
import -i yourFilePath --ignore_version -l 50000
```

| Parameter | Description |
|---|---|
| -i | Input file path |
| --ignore_version | Use the current time as the timestamp |
| -l | Maximum number of rows to import |
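Before starting a large import, it can help to check how many rows the input file actually holds so the -l cap matches expectations. A minimal sketch, assuming the data file stores one row per line (verify this against your decompressed sample data); `count_rows` and the demo file are illustrative:

```python
import os
import tempfile

def count_rows(path):
    """Count the rows in a local data file, assuming one row per line."""
    with open(path, "r", encoding="utf-8") as f:
        return sum(1 for _ in f)

# Demo on a small temporary file standing in for the sample data package.
demo_path = os.path.join(tempfile.mkdtemp(), "order_demo.txt")
with open(demo_path, "w", encoding="utf-8") as f:
    f.write("row1\nrow2\nrow3\n")

n = count_rows(demo_path)
```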
Expected output:

```
Current speed is: 15800 rows/s. Total succeed count 15800, failed count 0.
Current speed is: 27400 rows/s. Total succeed count 43200, failed count 0.
Import finished, total count is 50000, failed 0 rows.
```

Step 4: Read data
You can also use SQL statements to query and analyze table data. For more information, see SQL query.
Read a single row
Use the get command with the primary key value to read a specific row:
```
get --pk '["000000114d884ca1dbd6b9a58e8d0d94"]'
```

Expected output:

```
+----------------------------------+-------+-------+---------+-------------+---------------+--------+--------+----------+-------------+--------+--------------+---------------+-------+-------+------------+
| id                               | cId   | cName | hasPaid | oId         | orderTime     | pBrand | pCount | pId      | pName       | pPrice | pType        | payTime       | sId   | sName | totalPrice |
+----------------------------------+-------+-------+---------+-------------+---------------+--------+--------+----------+-------------+--------+--------------+---------------+-------+-------+------------+
| 000000114d884ca1dbd6b9a58e8d0d94 | c0018 | Mary  | true    | o0039248410 | 1509614885965 | brand1 | 1      | p0003004 | brand1 type | 1599   | Mobile phone | 1509615334404 | s0015 | Peter | 2498.99    |
+----------------------------------+-------+-------+---------+-------------+---------------+--------+--------+----------+-------------+--------+--------------+---------------+-------+-------+------------+
```

Scan multiple rows
Use the scan command to read multiple rows. The --limit parameter controls the maximum number of rows returned:
```
scan --limit 10
```

Step 5: Update data
Use the update command to modify a row. The --condition ignore flag writes the row regardless of whether it already exists; if the row exists, the specified columns overwrite the existing values.
```
update --pk '["000000114d884ca1dbd6b9a58e8d0d94"]' --attr '[{"c":"pBrand","v":"brand2"},{"c":"pPrice","v":1599.0},{"c":"payTime","v":1509615334404,"isint":true},{"c":"totalPrice","v":2498.99},{"c":"sName","v":"Peter"},{"c":"pId","v":"p0003004"},{"c":"oId","v":"o0039248410"},{"c":"hasPaid","v":true},{"c":"sId","v":"s0015"},{"c":"orderTime","v":1509614885965,"isint":true},{"c":"pName","v":"brand2 type"},{"c":"cName","v":"Mary"},{"c":"pType","v":"Mobile phone"},{"c":"pCount","v":1,"isint":true},{"c":"cId","v":"c0018"}]' --condition ignore
```

In this example, pBrand changes from brand1 to brand2, and pName changes from brand1 type to brand2 type.

To verify the update, read the row again:

```
get --pk '["000000114d884ca1dbd6b9a58e8d0d94"]'
```

Step 6: Delete data
Use the delete command with the primary key value to remove a row:
```
delete --pk '["000000114d884ca1dbd6b9a58e8d0d94"]'
```

Step 7: Export data
Use the scan command with the -o and -c parameters to export data from the data table to a local JSON file.
The following command exports the pId, oId, and cName columns to /tmp/mydata.json:
```
scan -o /tmp/mydata.json -c pId,oId,cName
```

| Parameter | Description |
|---|---|
| -o | Output file path (JSON format) |
| -c | Columns to export (comma-separated) |
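The exported file can then be post-processed locally. A hedged sketch of parsing it, assuming the export stores one JSON object per line — verify this against your actual /tmp/mydata.json, since the sketch uses an inline stand-in sample rather than real CLI output:

```python
import json

# Hypothetical stand-in for the contents of /tmp/mydata.json; the actual
# layout of the CLI's export file should be checked against your own output.
sample = '{"pId": "p0003004", "oId": "o0039248410", "cName": "Mary"}\n'

def load_exported_rows(text):
    """Parse exported rows, assuming one JSON object per line."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

rows = load_exported_rows(sample)
```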
For more information, see Operations on data.
What to do next
To accelerate data queries, create secondary indexes or search indexes: