You can use the Tablestore CLI to insert, update, read, and delete rows in a data table, and to scan, import, and export data.

Insert a row of data

You can run the following command to insert a row of data into a data table. Alternatively, you can import a JSON configuration file to insert a row of data into a data table.

  • Command syntax
    put --pk '[primaryKeyValue, primaryKeyValue]' --attr '[{"c":"attributeColumnName", "v":"attributeColumnValue"}, {"c":"attributeColumnName", "v":"attributeColumnValue", "ts":timestamp}]' --condition condition

    The following parameters can be configured to insert a row of data into a data table.

    • -p, --pk (required). Example: ["86", 6771]
      The values of the primary key columns of the data table, specified as an array.
      Notice
      • The number and types of the primary key columns that you specify must match the actual number and types of the primary key columns in the data table.
      • If a primary key column is an auto-increment primary key column, set its value to the placeholder null.
    • --attr (required). Example: [{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "ts":1626860469000}]
      The settings of the attribute columns of the data table, specified as a JSON array. Each attribute column is configured with the following fields:
      • c: required. The name of the attribute column.
      • v: required. The value of the attribute column.
      • t: optional. The type of the attribute column. Valid values: integer, string, binary, boolean, and double. If you set this field to string, the value of the attribute column is a string encoded in UTF-8. If the attribute column is of the binary type, this field is required.
      • ts: optional. The timestamp of the data, which serves as its version number. The timestamp can be automatically generated or customized. If you do not configure this field, Tablestore automatically generates a timestamp. For more information, see Max versions and TTL.
    • --condition (optional). Default value: ignore
      The row existence condition of conditional update that determines whether the row is inserted. Valid values:
      • ignore: The data is inserted regardless of whether the row exists. If the row exists, the existing data is overwritten.
      • exist: The data is inserted only if the row exists, and the existing data is overwritten.
      • not_exist: The data is inserted only if the row does not exist.
      For more information about conditional update, see Conditional update.
    • -i, --input (optional). Example: /temp/inputdata.json
      The path of the JSON configuration file that is used to insert data. The command syntax varies based on the operating system.
      • Windows
        put -i D:\\localpath\\filename.json
      • Linux and macOS
        put -i /localpath/filename.json
    The following example shows the content of a configuration file:
    {
        "PK":{
            "Values":[
                "86",
                6771
            ]
        },
        "Attr":{
            "Values":[
                {
                    "C":"age",
                    "V":32,
                    "TS":1626860801604,
                    "IsInt":true
                }
            ]
        }
    }
  • Examples
    Insert a row of data into a data table. The value of the first primary key column is "86" and the value of the second primary key column is 6771. The row contains two attribute columns of the string type: name and country.
    put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'
    Insert the same row, with the data inserted regardless of whether the row exists. If the row exists, the inserted data overwrites the existing data.
    put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]' --condition ignore
    Insert the same row, with the timestamp of the data in the country column set to 15327798534.
    put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "ts":15327798534}]'
    Insert a row into a data table whose second primary key column is an auto-increment primary key column. The value of the first primary key column is "86" and the value of the second primary key column is the placeholder null. The row contains two attribute columns of the string type: name and country.
    put --pk '["86", null]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'
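Because the --attr value is a JSON array, quoting it by hand in a shell is error-prone. The following sketch is purely illustrative and not part of the CLI: it uses Python's json and shlex modules to assemble a correctly quoted put command from the sample data used in the examples above.

```python
import json
import shlex

# Build the JSON array that the put command expects for --attr.
# The column names and values below are the sample data from the examples.
attrs = [
    {"c": "name", "v": "redchen"},
    {"c": "country", "v": "china", "ts": 1626860469000},
]
attr_arg = json.dumps(attrs)

# shlex.quote guards the argument against shell interpretation.
command = "put --pk '[\"86\", 6771]' --attr " + shlex.quote(attr_arg)
print(command)
```

Generating the argument with json.dumps guarantees that the array stays valid JSON no matter what characters the attribute values contain.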

Read data

You can read data from a data table and export the data to a local JSON file.

Note If the row that you want to read does not exist, an empty result is returned.
  • Command syntax
    get --pk '[primaryKeyValue,primaryKeyValue]'

    The following parameters can be configured to read a row of data.

    • -p, --pk (required). Example: ["86",6771]
      The values of the primary key columns of the data table, specified as an array.
      Notice The number and types of the primary key columns that you specify must match the actual number and types of the primary key columns in the data table.
    • --columns (optional). Example: name,uid
      The set of columns that you want to read. You can specify the names of primary key columns or attribute columns. If you do not specify any columns, all data in the row is returned.
    • --max_version (optional). Example: 1
      The maximum number of data versions that you want to read.
    • --time_range_start and --time_range_end (optional). Examples: 1626860469000 and 1626865270000
      The version range of data that you want to read. time_range_start specifies the start timestamp and time_range_end specifies the end timestamp. The range includes the start value and excludes the end value.
    • --time_range_specific (optional). Example: 1626862870000
      The specific version of data that you want to read.
    • -o, --output (optional). Example: /tmp/querydata.json
      The local path of the JSON file to which the query results are exported.
  • Example
    Read a row of data in which the value of the first primary key column is "86" and the value of the second primary key column is 6771.
    get --pk '["86",6771]'
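The optional read parameters can be freely combined. As an illustrative sketch (the helper function below is hypothetical, not part of the CLI), a get command can be assembled from the flags described above:

```python
import json

def build_get_command(pk, columns=None, max_version=None,
                      time_range_start=None, time_range_end=None,
                      output=None):
    """Assemble a Tablestore CLI get command from optional flags."""
    parts = ["get", "--pk", "'" + json.dumps(pk) + "'"]
    if columns:
        parts += ["--columns", ",".join(columns)]
    if max_version is not None:
        parts += ["--max_version", str(max_version)]
    if time_range_start is not None and time_range_end is not None:
        parts += ["--time_range_start", str(time_range_start),
                  "--time_range_end", str(time_range_end)]
    if output:
        parts += ["-o", output]
    return " ".join(parts)

# Read the name and uid columns of one row, keep one version,
# and export the result to a local JSON file.
cmd = build_get_command(["86", 6771], columns=["name", "uid"],
                        max_version=1, output="/tmp/querydata.json")
print(cmd)
```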

Update a row of data

You can run the following command to update a row of data in a data table. Alternatively, you can import a JSON configuration file to update a row of data in a data table.

  • Command syntax
    update --pk '[primaryKeyValue, primaryKeyValue]' --attr '[{"c":"attributeColumnName", "v":"attributeColumnValue"}, {"c":"attributeColumnName", "v":"attributeColumnValue", "ts":timestamp}]' --condition condition
    The following parameters can be configured to update a row of data.

    • -p, --pk (required). Example: ["86", 6771]
      The values of the primary key columns of the data table, specified as an array.
      Notice The number and types of the primary key columns that you specify must match the actual number and types of the primary key columns in the data table.
    • --attr (required). Example: [{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "ts":15327798534}]
      The settings of the attribute columns of the data table, specified as a JSON array. Each attribute column is configured with the following fields:
      • c: required. The name of the attribute column.
      • v: required. The value of the attribute column.
      • t: optional. The type of the attribute column. Valid values: integer, string, binary, boolean, and double. If you set this field to string, the value of the attribute column is a string encoded in UTF-8. If the attribute column is of the binary type, this field is required.
      • ts: optional. The timestamp of the data, which serves as its version number. The timestamp can be automatically generated or customized. If you do not configure this field, Tablestore automatically generates a timestamp.
    • --condition (optional). Default value: ignore
      The row existence condition of conditional update that determines whether the row is updated. Valid values:
      • ignore: The data is written regardless of whether the row exists. If the row exists, the existing data is overwritten.
      • exist: The data is written only if the row exists, and the existing data is overwritten.
      • not_exist: The data is written only if the row does not exist.
      For more information about conditional update, see Conditional update.
    • -i, --input (optional). Example: /tmp/inputdata.json
      The path of the JSON configuration file that is used to update data. The command syntax varies based on the operating system.
      • Windows
        update -i D:\\localpath\\filename.json
      • Linux and macOS
        update -i /localpath/filename.json
    The following example shows the content of a configuration file:
    {
        "PK":{
            "Values":[
                "86",
                6771
            ]
        },
        "Attr":{
            "Values":[
                {
                    "C":"age",
                    "V":32,
                    "TS":1626860801604,
                    "IsInt":true
                }
            ]
        }
    }
  • Example
    Update a row of data in which the value of the first primary key column is "86" and the value of the second primary key column is 6771. The data is written regardless of whether the row exists. If the row exists, the new data overwrites the existing data.
    update --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]' --condition ignore
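The configuration file for update uses the same layout as the one shown for put. As an illustrative sketch (the file path is hypothetical), the file can be generated and sanity-checked with Python's json module before being passed to update -i:

```python
import json
import os
import tempfile

# The configuration structure shown above: primary key values plus
# attribute columns. IsInt marks the value 32 as an integer.
config = {
    "PK": {"Values": ["86", 6771]},
    "Attr": {"Values": [
        {"C": "age", "V": 32, "TS": 1626860801604, "IsInt": True},
    ]},
}

# Write the file that would be passed as: update -i <path>
path = os.path.join(tempfile.mkdtemp(), "inputdata.json")
with open(path, "w") as f:
    json.dump(config, f, indent=4)

# Round-trip to confirm the file parses back into the same structure.
with open(path) as f:
    assert json.load(f) == config
```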

Delete a row of data

You can delete a row of data with a specified primary key.

  • Command syntax
    delete --pk '[primaryKeyValue,primaryKeyValue]'
  • Example
    Delete a row of data in which the value of the first primary key column is "86" and the value of the second primary key column is 6771.
    delete --pk '["86", 6771]'

Scan data

You can scan a data table to obtain all data or a specified number of rows of data in the data table.

  • Command syntax
    scan --limit limit
    The following parameter can be configured to scan data in a data table.

    • --limit (optional). Example: 10
      The maximum number of rows that you want to return. If you do not configure this parameter, all data in the data table is scanned.
  • Example
    The following sample code shows how to scan data in a data table to return up to 10 rows of data:
    scan --limit 10

Export data

You can export data from a data table to a local JSON file.

  • Command syntax
    scan -o /localpath/filename.json -c attributeColumnName,attributeColumnName,attributeColumnName

    The following parameters can be configured to export data from a data table to a local file.

    • -c, --columns (required). Example: uid,name
      The set of columns that you want to export. You can specify the names of primary key columns or attribute columns. If you do not specify any columns, all data in each row is exported.
    • --max_version (optional). Example: 1
      The maximum number of data versions that can be exported.
    • --time_range_start and --time_range_end (optional). Examples: 1626865596000 and 1626869196000
      The version range of data that you want to export. time_range_start specifies the start timestamp and time_range_end specifies the end timestamp. The range includes the start value and excludes the end value.
    • --time_range_specific (optional). Example: 1626867396000
      The specific version of data that you want to export.
    • --backward (optional)
      Sorts the exported data in descending order of primary keys.
    • -o, --output (required). Example: /tmp/mydata.json
      The local path of the JSON file to which the query results are exported.
    • -l, --limit (optional). Example: 10
      The maximum number of rows that you want to return in the query.
    • -b, --begin and -e, --end (optional). Examples: 1000 and 2000
      The value range of data that you want to export.
  • Examples
    The following sample code shows how to export all data from the current table to the mydata.json local file:
    scan -o /tmp/mydata.json
    The following sample code shows how to export data from the uid and name columns of the current table to the mydata.json local file:
    scan -o /tmp/mydata.json -c uid,name
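Assuming the exported file uses the same line-delimited JSON layout as the import format shown in the Import data section (one JSON object per line; this is an assumption, not stated above), the exported rows can be post-processed with a sketch like the following. The sample content is hypothetical.

```python
import json

# Hypothetical sample of an exported file: one JSON object per line,
# mirroring the import format shown in the Import data section.
exported = "\n".join([
    '{"PK":{"Values":["redchen",0]},"Attr":{"Values":[{"C":"name","V":"redchen0"}]}}',
    '{"PK":{"Values":["redchen",1]},"Attr":{"Values":[{"C":"name","V":"redchen1"}]}}',
])

# Parse each non-empty line into a row dict.
rows = [json.loads(line) for line in exported.splitlines() if line.strip()]
for row in rows:
    pk = row["PK"]["Values"]
    attrs = {a["C"]: a["V"] for a in row["Attr"]["Values"]}
    print(pk, attrs)
```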

Import data

You can import data from a local JSON file to a data table.

  • Command syntax
    import -i /localpath/filename.json --ignore_version

    The following parameters can be configured to import data from a local file to a data table.

    • -a, --action (optional). Default value: put
      The mode in which data is imported. Valid values:
      • put: If the row exists, all versions of data in all columns of the existing row are deleted, and the new row of data is written to the data table.
      • update: If the row exists, attribute columns can be added to or deleted from the row, a specified version of data in an attribute column can be deleted, or the existing data in an attribute column can be updated. If the row does not exist, a new row of data is added.
    • -i, --input (required). Example: /tmp/inputdata.json
      The path of the local JSON file from which data is imported to the current data table.
    • --ignore_version (optional)
      Skips timestamp checks and uses the current time as the timestamp.
    The following example shows the content of a local configuration file:
    {"PK":{"Values":["redchen",0]},"Attr":{"Values":[{"C":"country","V":"china0"},{"C":"name","V":"redchen0"}]}}
    {"PK":{"Values":["redchen",1]},"Attr":{"Values":[{"C":"country","V":"china1"},{"C":"name","V":"redchen1"}]}}                              
  • Examples
    The following sample code shows how to import data from the mydata.json file to the current data table:
    import -i /tmp/mydata.json
    The following sample code shows how to import data from the mydata.json file to the current data table with the current time used as the timestamp:
    import -i /tmp/mydata.json --ignore_version
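The input file holds one standalone JSON object per line, as in the configuration file example above. As an illustrative sketch (the row values are hypothetical sample data), such a file's content can be generated with Python's json module:

```python
import json

# Hypothetical rows to import; each becomes one line of the input file
# in the format shown in the configuration file example above.
rows = [
    {"PK": {"Values": ["redchen", i]},
     "Attr": {"Values": [{"C": "country", "V": "china%d" % i},
                         {"C": "name", "V": "redchen%d" % i}]}}
    for i in range(2)
]

# One compact JSON object per line, ready to save and pass to: import -i <path>
lines = "\n".join(json.dumps(r, separators=(",", ":")) for r in rows)
print(lines)
```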