The Tablestore command-line interface (CLI) provides simple, clear commands that you can run on Windows, Linux, and macOS. This topic describes how to use the Tablestore CLI to manage data stored in Tablestore from the command line.

Download Tablestore CLI

The following table provides the download URLs of Tablestore CLI for different operating systems.
Operating system Download link
Windows Windows10
Linux
macOS macOS

Prerequisites

Quick start

You can run the following commands in Tablestore CLI to perform operations on data stored in Tablestore.

  1. Run the ./ts command to start the Tablestore CLI.

    If you do not have execute permissions on the tool in Linux or macOS, run the chmod 755 ts command to grant them, and then start the Tablestore CLI.

  2. Run the following command to configure the connection information of a Tablestore instance:
    config --endpoint https://myinstance.cn-hangzhou.ots.aliyuncs.com --instance myinstance --id NTSVLeBHzgX2iZfcaXXPJ**** --key 7NR2DiotscDbauohSq9kSHX8BDp99bjs7eNpCR7o****
  3. (Optional) Run the lt command to view all tables in the current instance.
  4. Run the following command to create a data table.

    Create a data table named mytable. The data table contains the uid and pid primary key columns. The uid column is of the STRING type. The pid column is of the INTEGER type. The time to live (TTL) is set to 864000 seconds (10 days). The maximum number of versions is set to 1.

    create -t mytable --pk '[{"c":"uid","t":"string"}, {"c":"pid","t":"integer"}]' --ttl 864000 --version 1
  5. (Optional) Run the desc -t mytable command to view the information about the data table.
  6. Run the use -t mytable command to use the data table.
  7. Run the following commands to write and read data.
    • Write data
      put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen1"}, {"c":"country", "v":"china"}]'
      put --pk '["86", 6772]' --attr '[{"c":"name", "v":"redchen2"}, {"c":"country", "v":"china"}]'
      put --pk '["86", 6773]' --attr '[{"c":"name", "v":"redchen3"}, {"c":"country", "v":"china"}]'
      put --pk '["86", 6774]' --attr '[{"c":"name", "v":"redchen4"}, {"c":"country", "v":"china"}]'
    • Read data
      get --pk '["86",6771]'
      get --pk '["86",6772]'
      get --pk '["86",6773]'
      get --pk '["86",6774]'
  8. (Optional) Run the drop -t mytable -y command to delete the data table.
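The TTL in step 4 is specified in seconds. A quick shell sketch for converting a retention period in days to the value the create command expects:

```shell
# Convert a retention period in days to a TTL in seconds
# (10 days, as used in the quick-start create command above).
days=10
ttl=$(( days * 24 * 60 * 60 ))
echo "$ttl"   # prints 864000
```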

View supported options

To view the options that Tablestore CLI supports, you can use the help option.

  • Command syntax
    ./ts help
  • Output
    Commands:
      alter                  Alter table
      clear                  Clear the screen
      config                 Config the Tablestore access information
      create                 Create a new table
      del                    Delete the specify row from Tablestore
      desc                   Show table meta
      drop                   Drop the table
      exit                   Exit the program 
      export                 Export the data of table to disk from Tablestore, not support multi version
      get                    Get specify row from Tablestore
      help                   Display help
      import                 Load the data to TableStore, not support multi version
      list                   List all tables
      points                 Logically divide the data of the full table into several shards close to the specified size
      press_check            Check data for press
      press_input            Input data for press
      put                    Insert a row to Tablestore
      quit                   Quit the program
      update                 Insert a row to Tablestore
      use                    Select table

Initialize Tablestore CLI

Configure the connection information and replace the field values with your own.

  • Command syntax
    config --endpoint endpoint --instance instanceName --id accessKeyId --key accessKeySecret
    The following table describes the parameters you can configure for the connection information.
    Parameter Example Description
    --endpoint https://myinstance.cn-hangzhou.ots.aliyuncs.com The endpoint of the instance. For more information, see Endpoint.
    --instance myinstance The name of the instance.
    --id NTSVLeBHzgX2iZfcaXXPJ**** The AccessKey ID of the RAM user.
    --key 7NR2DiotscDbauohSq9kSHX8BDp99bjs7eNpCR7o**** The AccessKey secret of the RAM user.
  • Examples
    config --endpoint https://myinstance.cn-hangzhou.ots.aliyuncs.com --instance myinstance --id NTSVLeBHzgX2iZfcaXXPJ**** --key 7NR2DiotscDbauohSq9kSHX8BDp99bjs7eNpCR7o****

Table operations

You can use Tablestore CLI to perform the following operations on a table:

  • Create a data table
    • Command syntax
      create -t tableName --pk '[{"c":"primaryKeyName", "t":"primaryKeyType"}, {"c":"primaryKeyName", "t":"primaryKeyType", "opt":"options"}]' --ttl timeToLive --version maxVersion

      The following table describes the parameters you can configure to create a data table.

      Parameter Example Description
      -t, --table mytable The name of the data table.
      -p, --pk [{"c":"uid","t":"string"}, {"c":"pid","t":"integer"}] The primary key columns of the data table. The configuration information of the primary key columns is included in JSON arrays. The following fields are supported:
      • c: required. The name of the primary key column.
      • t: required. The type of the primary key column. Valid values: string, integer, and binary.
      • opt: optional. The optional configuration items. Valid values: none and auto. Default value: none. If this parameter is set to auto, the primary key column is the auto-increment primary key column.

        For more information about auto-increment primary key columns, see Auto-increment of primary key columns.

      You can add one to four primary key columns. The first primary key column is called the partition key. After a data table is created, the configurations and the order of primary key columns cannot be modified.

      --ttl 864000 The period for which data in the data table is retained. When the retention period exceeds the TTL value, Tablestore automatically deletes the expired data. Unit: seconds.

      If you do not specify this parameter, the default value -1 is used. The value -1 specifies that data never expires.

      The minimum TTL value is 86400 seconds (one day).

      --version 1 The maximum number of versions that can be retained for data in attribute columns of the data table. When the number of versions of data in attribute columns exceeds the max versions value, the system deletes data of earlier versions.

      If you do not specify this parameter, the default value 1 is used. The value of 1 specifies that only data of the latest version is retained.

      The max versions value of an attribute column is an integer other than 0.

      --read_cu 0 The reserved read throughput and reserved write throughput of the data table. The default value 0 specifies that all throughput is billed on a pay-as-you-go basis. Unit: capacity unit (CU).

      For data tables in capacity instances, the reserved read throughput and reserved write throughput can only be set to 0, because reserved throughput does not apply to these instances.

      • If reserved read throughput or reserved write throughput is set to a value greater than 0 for a data table, Tablestore allocates and reserves related resources for the data table. After the data table is created, Tablestore charges reserved throughput resources. Additional throughput is billed on a pay-as-you-go basis.
      • If reserved read throughput or reserved write throughput is set to 0, Tablestore does not allocate or reserve related resources for the data table.
      --write_cu 0
      -i, --input /tmp/create_table_meta.json The path of the configuration file that is used to create a data table. The configuration file must be in the JSON format.
      You can also run the following commands to create a configuration file:
      • Windows
        create -i D:\\localpath\\filename.json
      • Linux and macOS
        create -i /localpath/filename.json
      The following example shows the content of a configuration file:
      {
          "Name": "mytable",
          "Meta": {
              "Pk": [
                  {
                      "C": "uid",
                      "T": "string",
                      "Opt": "none"
                  },
                  {
                      "C": "pid",
                      "T": "integer",
                      "Opt": "none"
                  }
              ]
          },
          "Option": {
              "TTL": 864000,
              "Version": 3
          },
          "CU": {
              "Read": 0,
              "Write": 0
          },
          "Stream": {
              "Enable": true,
              "Expire": 24
          }
      }
    • Examples

      Create a data table named mytable. The data table contains the uid and pid primary key columns. Data in the data table never expires.

      create -t mytable --pk '[{"c":"uid", "t":"string"}, {"c":"pid", "t":"integer"}]'

      Create a data table named mytable. The data table contains the uid and pid primary key columns. The uid column is of the STRING type. The pid column is of the INTEGER type. The pid column is set to the auto-increment primary key column. Data in the data table never expires.

      create -t mytable --pk '[{"c":"uid", "t":"string"}, {"c":"pid", "t":"integer", "opt":"auto"}]'

      Create a data table named mytable. The data table contains the uid and pid primary key columns. The uid column is of the STRING type. The pid column is of the INTEGER type. The time to live (TTL) is set to 864000 seconds (10 days). Only the data of the latest version is retained.

      create -t mytable --pk '[{"c":"uid","t":"string"}, {"c":"pid","t":"integer"}]' --ttl 864000 --version 1
  • Select a data table for data operations
    • Command syntax
      use -t tableName
    • Examples

      Use the mytable data table.

      use -t mytable
  • List the names of tables
    • Command syntax
      ./ts list
    • Examples

      List the names of all tables in the instance that is initialized in this topic.

      ./ts list
  • Update the configurations of a table
    • Command syntax
      alter -t tableName --ttl timeToLive --version maxVersion --read_cu readCU --write_cu writeCU
    • Examples

      Change the TTL of the mytable data table to 86400 seconds (one day). Set the max versions to 1, reserved read CU to 0, and reserved write CU to 0.

      alter -t mytable --ttl 86400 --version 1 --read_cu 0 --write_cu 0
  • Query the description of a table
    • Command syntax
      desc -t tableName

      The following table describes the parameters you can configure to query the description of a table.

      Parameter Example Description
      -t, --table mytable The name of the data table.
      -o, --output /tmp/describe_table_meta.json The local path of the JSON file to which the description of the data table is exported.
    • Examples

      Query the description of the mytable data table.

      desc -t mytable
      Query the description of the mytable data table and save the description to the describe_table_meta.json local file.
      desc -t mytable -o  /tmp/describe_table_meta.json
  • Delete a table
    • Command syntax
      drop -t tableName -y

      The following table describes the parameters you can configure to delete a table.

      Parameter Example Description
      -t, --table mytable The name of the data table.
      -y, --yes N/A Required. Confirms the delete operation.
    • Examples

      Delete the mytable data table.

      drop -t mytable -y
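The configuration file for `create -i` shown above can be generated and sanity-checked from the shell before it is passed to the CLI. This is a sketch that assumes python3 is available for JSON validation:

```shell
# Write the create-table configuration from the example above to a file,
# then confirm that it is valid JSON before using it with `create -i`.
cat > /tmp/create_table_meta.json <<'EOF'
{
    "Name": "mytable",
    "Meta": {
        "Pk": [
            {"C": "uid", "T": "string", "Opt": "none"},
            {"C": "pid", "T": "integer", "Opt": "none"}
        ]
    },
    "Option": {"TTL": 864000, "Version": 3},
    "CU": {"Read": 0, "Write": 0},
    "Stream": {"Enable": true, "Expire": 24}
}
EOF
python3 -m json.tool /tmp/create_table_meta.json > /dev/null && echo "valid"
```

A malformed file fails the json.tool check here rather than at table-creation time.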

Single-row operations

You can use Tablestore CLI to perform the following single-row operations:

  • Insert a row of data into a table
    • Command syntax
      put --pk '[primaryKeyValue, primaryKeyValue]' --attr '[{"c":"attributeColumnName", "v":"attributeColumnValue"}, {"c":"attributeColumnName", "v":"attributeColumnValue", "ts":timestamp}]' --condition condition

      The following table describes the parameters you can configure to insert a row of data into a table.

      Parameter Example Description
      -p, --pk ["86", 6771] The values of the primary key columns of the data table. The values are included in arrays.
      Notice
      • The specified number and types of the primary key columns must be consistent with the actual number and types of primary key columns in the data table.
      • If a primary key column is an auto-increment primary key column, set the value of that column to the placeholder null.
      --attr [{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "ts":1626860469000}] The attribute columns of the data table. The configuration information of the attribute columns is included in JSON arrays. Each attribute column is configured with the following fields:
      • c: required. The name of the attribute column.
      • v: required. The value of the attribute column.
      • t: optional. The type of the attribute column. Valid values: integer, string, binary, boolean, and double. If you set this field to string, the value of the attribute column is a string encoded in UTF-8. If you want to set the type of the attribute column to binary, this field is required.
      • ts: optional. The timestamp is the version number of the data. The timestamp can be automatically generated or customized. If you do not specify this parameter, Tablestore automatically generates a timestamp. For more information, see Max versions and TTL.
      --condition ignore The existence condition used to update the row. Default value: ignore. Valid values:
      • ignore: Data is inserted regardless of whether the row exists. If the row exists, existing data is overwritten when data is written.
      • exist: Data is inserted only when the row exists. Existing data is overwritten when data is written.
      • not_exist: Data is inserted only when the row does not exist.

      For more information about conditional update, see Conditional update.

      -i, --input /tmp/inputdata.json The path of the configuration file that is used to insert data. The configuration file must be in the JSON format.
      You can also use the configuration file to insert data. Command syntax varies with operating systems.
      • Windows
        put -i D:\\localpath\\filename.json
      • Linux and macOS
        put -i /localpath/filename.json
      The following example shows the content of a configuration file:
      {
          "PK":{
              "Values":[
                  "86",
                  6771
              ]
          },
          "Attr":{
              "Values":[
                  {
                      "C":"age",
                      "V":32,
                      "TS":1626860801604,
                      "IsInt":true
                  }
              ]
          }
      }
    • Examples
      Insert a row of data into a data table. The value of the first primary key column in the row is "86". The value of the second primary key column in the row is 6771. The row contains two attribute columns, which are name and country. The name and country columns are of the STRING type.
      put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'
      Insert a row of data into the data table. The value of the first primary key column in the row is "86". The value of the second primary key column in the row is 6771. The row contains two attribute columns, which are name and country. The name and country columns are of the STRING type. Data is inserted regardless of whether the row exists. If the row exists, written data overwrites the existing data.
      put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'  --condition ignore
      Insert a row of data into the data table. The value of the first primary key column in the row is "86". The value of the second primary key column in the row is 6771. The row contains two attribute columns, which are name and country. The name and country columns are of the STRING type. The timestamp of the country column is 15327798534.
      put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "ts":15327798534}]'
      If the second primary key column of the data table is an auto-increment primary key column, a row of data is inserted into the data table. The value of the first primary key column in the row is "86". The value of the second primary key column in the row is null. The row contains two attribute columns, which are name and country. The name and country columns are of the STRING type.
      put --pk '["86", null]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'
  • Read a row of data
    • Command syntax
      get --pk '[primaryKeyValue,primaryKeyValue]'

      The following table describes the parameters you can configure to read a row of data.

      Parameter Example Description
      -p, --pk ["86",6771] The values of the primary key columns in the data table. The values are included in arrays.
      Notice The specified number and types of the primary key columns must be consistent with the actual number and types of primary key columns in the data table.
      --columns name,uid The set of columns to read. The name of a column can be the name of a primary key column or an attribute column. If you do not specify this parameter, all data in the row is returned.
      --max_version 1 The maximum number of versions that can be read.
      --time_range_start 1626860469000 The range of versions within which to read data. time_range_start specifies the start timestamp. time_range_end specifies the end timestamp. The specified range includes the start value and excludes the end value.
      --time_range_end 1626865270000
      --time_range_specific 1626862870000 Data of the specific version to read.
      -o, --output /tmp/querydata.json The local path of the JSON file to which the query results are exported.
    • Examples

      Read a row of data where the value of the first primary key column is "86" and the value of the second primary key column is 6771.

      get --pk '["86",6771]'
  • Update a row of data
    • Command syntax
      update --pk '[primaryKeyValue, primaryKeyValue]' --attr '[{"c":"attributeColumnName", "v":"attributeColumnValue"}, {"c":"attributeColumnName", "v":"attributeColumnValue", "ts":timestamp}]' --condition condition
      The following table describes the parameters you can configure to update a row of data.
      Parameter Example Description
      -p, --pk ["86", 6771] The values of the primary key columns of the data table. The values are included in arrays.
      Notice The specified number and types of the primary key columns must be consistent with the actual number and types of primary key columns in the data table.
      --attr [{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "ts":15327798534}] The attribute columns of the data table. The configuration information of the attribute columns is included in JSON arrays. Each attribute column is configured with the following fields:
      • c: required. The name of the attribute column.
      • v: required. The value of the attribute column.
      • t: optional. The type of the attribute column. Valid values: integer, string, binary, boolean, and double. If you set this field to string, the value of the attribute column is a string encoded in UTF-8. If you want to set the type of the attribute column to binary, this field is required.
      • ts: optional. The timestamp is the version number of the data. The timestamp can be automatically generated or customized. If you do not specify this parameter, Tablestore automatically generates a timestamp.
      --condition ignore The existence condition used to update the row. Default value: ignore. Valid values:
      • ignore: Data is inserted regardless of whether the row exists. If the row exists, existing data is overwritten when data is written.
      • exist: Data is inserted only when the row exists. Existing data is overwritten when data is written.
      • not_exist: Data is inserted only when the row does not exist.

      For more information about conditional update, see Conditional update.

      -i, --input /tmp/inputdata.json The path of the configuration file that is used to update data. The configuration file must be in the JSON format.
      You can also use the configuration file to update data. Command syntax varies with operating systems.
      • Windows
        update -i D:\\localpath\\filename.json
      • Linux and macOS
        update -i /localpath/filename.json
      The following example shows the content of a configuration file:
      {
          "PK":{
              "Values":[
                  "86",
                  6771
              ]
          },
          "Attr":{
              "Values":[
                  {
                      "C":"age",
                      "V":32,
                      "TS":1626860801604,
                      "IsInt":true
                  }
              ]
          }
      }
    • Examples

      Update a row of data where the value of the first primary key column is "86" and the value of the second primary key column is 6771. Data is inserted regardless of whether the row exists. If the row exists, written data overwrites the existing data.

      update --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'  --condition ignore
  • Delete a row of data
    • Command syntax
      del --pk '[primaryKeyValue,primaryKeyValue]'
    • Examples

      Delete a row of data where the value of the first primary key column is "86" and the value of the second primary key column is 6771.

      del --pk '["86", 6771]'
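Because the --pk and --attr values in the commands above are JSON, wrap them in single quotes so that the embedded double quotes reach the CLI intact. A quick way to verify the quoting locally, assuming python3 is available:

```shell
# The JSON arguments used in the single-row examples above, quoted for the shell.
pk='["86", 6771]'
attr='[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'
# Each value must parse as standalone JSON; a quoting mistake fails here.
printf '%s' "$pk"   | python3 -m json.tool > /dev/null || exit 1
printf '%s' "$attr" | python3 -m json.tool > /dev/null || exit 1
echo "quoting ok"
```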

Simple stress testing

You can use Tablestore CLI to perform the following operations for simple stress testing:

  • Enable stress testing
    • Command syntax
      press_input --part partitionKeyValue --count rowCount
      The following table describes the parameters you can configure to enable stress testing.
      Parameter Example Description
      --begin 0 The serial number (SN) of the row from which data is inserted. Default value: 0.
      --part redchen The value of the partition key. Default value: redchen.
      --count 10 The number of rows where data is inserted. The size of each row is 1 KB.
    • Examples

      Insert 10 rows of data to the partition whose partition key value is redchen. The size of each row is 1 KB.

      press_input --part redchen --count 10
  • Check the stress testing status
    • Command syntax
      press_check --part partitionKeyValue --begin begin --count rowCount
      The following table describes the parameters you can configure to check the stress testing status.
      Parameter Example Description
      --begin 0 The SN of the row from which data is inserted. Default value: 0.
      --part redchen The value of the partition key. Default value: redchen.
      --count 1000 The number of rows used to check the stress testing status. The size of each row is 1 KB.
      -y, --yes N/A Specifies whether to display the time taken to check the stress testing status.
    • Examples

      Check the stress testing status when you insert 1,000 rows of data from the first row.

      press_check --part redchen --begin 0 --count 1000
To implement simple stress testing, perform the following operations:
  1. Create a data table.

    Create a data table named mytable. The first primary key column of the data table is the partition key. The second primary key column contains row SNs.

    create -t mytable --pk '[{"c":"uid", "t":"string"}, {"c":"pid", "t":"integer"}]'
  2. Run the use -t mytable command to use the data table.
  3. Run the press_input --part redchen --count 10 command to enable stress testing.
  4. Run the press_check --part redchen --begin 0 --count 1000 command to check the stress testing status.
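Each row written by press_input is 1 KB, so the total payload of a run is count times 1 KB. A small sketch of the arithmetic, taking 1 KB as 1,024 bytes:

```shell
# Estimate the payload written by a press_input run: each row is 1 KB
# (taken here as 1,024 bytes), so 1,000 rows is roughly 1 MB.
count=1000
row_bytes=1024
total=$(( count * row_bytes ))
echo "$total"   # prints 1024000
```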

Data backup

You can use Tablestore CLI to perform the following operations to back up data:

  • Export data from a data table to a local file
    • Command syntax
      export -o /localpath/filename.json -c attributeColumnName,attributeColumnName,attributeColumnName

      The following table describes the parameters you can configure to export data from a data table to a local file.

      Parameter Example Description
      -c, --columns uid,name The set of columns to export. The name of a column can be the name of a primary key column or an attribute column. If you do not specify column names, all data in the row is exported.
      --max_version 1 The maximum number of versions of data that can be exported.
      --time_range_start 1626865596000 The range of versions within which to export data. time_range_start specifies the start timestamp. time_range_end specifies the end timestamp. The specified range includes the start value and excludes the end value.
      --time_range_end 1626869196000
      --time_range_specific 1626867396000 Data of a specific version to export.
      --backward true Specifies whether to sort exported data in descending order of primary keys. Default value: false. Valid values:
      • false: Data is sorted in ascending order of primary keys.
      • true: Data is sorted in descending order of primary keys.
      -o, --output /tmp/mydata.json The local path of the JSON file to which the query results are exported.
    • Examples

      Export all data from the current table to the mydata.json local file.

      export -o /tmp/mydata.json

      Export data from the uid and name columns of the current table to the mydata.json local file.

      export -o /tmp/mydata.json -c uid,name
  • Import data from a local file to the current table
    • Command syntax
      import -i /localpath/filename.json --ignore_ts

      The following table describes the parameters you can configure to import data from a local file to the current data table.

      Parameter Example Description
      -i, --input /tmp/inputdata.json The path of the local file from which data is imported to the current table.
      --ignore_ts N/A Specifies that the Tablestore CLI ignores the timestamps in the file and uses the current time as the timestamp.
      The following example shows the configurations in a local file:
      {
       "PK":{
              "Values":[
                  "redchen",
                  0
              ]
          },
          "Attr":{
              "Values":[
                  {
                      "C":"country",
                      "V":"china0"
                  },
                  {
                      "C":"name",
                      "V":"redchen0"
                  }
              ]
          }
      }
      {
          "PK":{
              "Values":[
                  "redchen",
                  1
              ]
          },
          "Attr":{
              "Values":[
                  {
                      "C":"country",
                      "V":"china1"
                  },
                  {
                      "C":"name",
                      "V":"redchen1"
                  }
              ]
          }
      }                              
    • Examples

      Import data from the mydata.json file to the current table.

      import -i /tmp/mydata.json

      Import data from the mydata.json file to the current table. The current time is used as the timestamp.

      import -i /tmp/mydata.json --ignore_ts
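As the example file above suggests, the import file is a stream of concatenated JSON objects rather than a single JSON array, so a whole-file JSON parse rejects it. A sketch for building and validating such a file, assuming python3 is available and using /tmp/mydata.json as in the examples:

```shell
# Build a two-row import file of concatenated JSON objects, then count
# the objects with a streaming decode (a plain json.load would fail).
cat > /tmp/mydata.json <<'EOF'
{
    "PK": {"Values": ["redchen", 0]},
    "Attr": {"Values": [{"C": "country", "V": "china0"}, {"C": "name", "V": "redchen0"}]}
}
{
    "PK": {"Values": ["redchen", 1]},
    "Attr": {"Values": [{"C": "country", "V": "china1"}, {"C": "name", "V": "redchen1"}]}
}
EOF
python3 - <<'EOF'
import json

data = open('/tmp/mydata.json').read()
decoder, idx, rows = json.JSONDecoder(), 0, 0
while idx < len(data):
    # Skip the whitespace between consecutive objects.
    while idx < len(data) and data[idx].isspace():
        idx += 1
    if idx >= len(data):
        break
    _, idx = decoder.raw_decode(data, idx)
    rows += 1
print("rows:", rows)
EOF
```

The streaming decoder reports two rows for this file; the same loop can be used to pre-check an export before running `import -i`.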