
Tablestore: Operations on data

Last Updated: Feb 02, 2024

You can use the Tablestore CLI to insert, read, update, delete, scan, export, and import data.

Insert data

You can insert a row of data into a data table. Alternatively, you can import a JSON configuration file to insert a row of data into a data table.

Command syntax

put --pk '[primaryKeyValue, primaryKeyValue]' --attr '[{"c":"attributeColumnName", "v":"attributeColumnValue"}, {"c":"attributeColumnName", "v":"attributeColumnValue", "ts":timestamp}]' --condition condition

The following list describes the parameters of the command.

-p, --pk
Required: Yes
Example: ["86", 6771]
Description: The values of the primary key columns of the data table. The values are included in an array.

Important
  • The number and types of the primary key columns that you specify must match the actual number and types of the primary key columns in the data table.

  • If a primary key column is an auto-increment primary key column, you only need to set the value of the auto-increment primary key column to the placeholder null.

--attr
Required: Yes
Example: [{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "t":"string", "ts":15327798534}]
Description: The settings of the attribute columns of the data table. The value of this parameter is a JSON array. Each attribute column is configured by using the following fields:

  • c: required. The name of the attribute column.

  • v: required. The value of the attribute column.

  • t: optional. The type of the attribute column. Valid values: integer, string, binary, boolean, and double. If you set this field to string, the value of the attribute column is a string encoded in UTF-8. This field is required for an attribute column of the binary type.

  • ts: optional. The timestamp that is used as the version number of the data. The timestamp can be automatically generated or customized. If you do not configure this field, Tablestore automatically generates a timestamp. For more information, see Data versions and TTL.

--condition
Required: No
Example: ignore
Description: The row existence condition of the conditional update, which determines whether to insert the row of data. Default value: ignore. Valid values:

  • ignore: The data is inserted regardless of whether the row exists. If the row exists, the inserted data overwrites the existing data.

  • exist: The data is inserted only if the row exists. The inserted data overwrites the existing data.

  • not_exist: The data is inserted only if the row does not exist.

For more information, see Conditional updates.

-i, --input
Required: No
Example: /temp/inputdata.json
Description: The path of the JSON configuration file that is used to insert data.

You can also use a configuration file to insert data. The command syntax varies based on the operating system.

  • Windows

    put -i D:\\localpath\\filename.json
  • Linux and macOS

    put -i /localpath/filename.json

The following example shows the content of a configuration file:

{
    "PK":{
        "Values":[
            "86",
            6771
        ]
    },
    "Attr":{
        "Values":[
            {
                "C":"age",
                "V":32,
                "TS":1626860801604,
                "IsInt":true
            }
        ]
    }
}
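
For reference, a configuration file that writes string attribute columns might look like the following sketch. This is an assumption based on the example above and on the import file format shown later in this topic, where string values need only the C and V fields:

{
    "PK":{
        "Values":[
            "86",
            6771
        ]
    },
    "Attr":{
        "Values":[
            {
                "C":"name",
                "V":"redchen"
            },
            {
                "C":"country",
                "V":"china"
            }
        ]
    }
}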

Examples

  • Example 1

    The following sample command provides an example on how to insert a row of data into a data table. The value of the first primary key column in the row is "86". The value of the second primary key column in the row is 6771. The row contains the following two attribute columns: name and country. The name and country columns are of the STRING type.

    put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'
  • Example 2

    The following sample command provides an example on how to insert a row of data into a data table. The value of the first primary key column in the row is "86". The value of the second primary key column in the row is 6771. The row contains the following two attribute columns: name and country. The name and country columns are of the STRING type. Data is inserted regardless of whether the row exists. If the row exists, the inserted data overwrites the existing data.

    put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'  --condition ignore
  • Example 3

    The following sample command provides an example on how to insert a row of data into a data table. The value of the first primary key column in the row is "86". The value of the second primary key column in the row is 6771. The row contains the following two attribute columns: name and country. The name and country columns are of the STRING type. The timestamp of data in the country column is 15327798534.

    put --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "t":"string", "ts":15327798534}]'
  • Example 4

    The following sample command provides an example on how to insert a row of data into a data table in which the second primary key column is an auto-increment primary key column. The value of the first primary key column in the row is "86". The value of the second primary key column in the row is null. The row contains two attribute columns: name and country. The name and country columns are of the STRING type.

    put --pk '["86", null]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'

Read data

You can read data from a data table and export the data to a local JSON file.

Note

If the row that you want to read does not exist, an empty result is returned.

Command syntax

get --pk '[primaryKeyValue,primaryKeyValue]'

The following list describes the parameters of the command.

-p, --pk
Required: Yes
Example: ["86",6771]
Description: The values of the primary key columns of the data table. The values are included in an array.

Important
The number and types of the primary key columns that you specify must match the actual number and types of the primary key columns in the data table.

--columns
Required: No
Example: name,uid
Description: The columns that you want to read. You can specify the names of primary key columns or attribute columns. If you do not specify a column, all data in the row is returned.

--max_version
Required: No
Example: 1
Description: The maximum number of data versions that can be read.

--time_range_start
Required: No
Example: 1626860469000
Description: The start timestamp of the version range of data that you want to read. Use this parameter together with --time_range_end. The range includes the start value and excludes the end value.

--time_range_end
Required: No
Example: 1626865270000
Description: The end timestamp of the version range of data that you want to read. Use this parameter together with --time_range_start. The range includes the start value and excludes the end value.

--time_range_specific
Required: No
Example: 1626862870000
Description: The specific version of data that you want to read.

-o, --output
Required: No
Example: /tmp/querydata.json
Description: The local path of the JSON file to which the query results are exported.

Example

The following sample command provides an example on how to read a row of data in which the value of the first primary key column is "86" and the value of the second primary key column is 6771.

get --pk '["86",6771]'
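
Based on the parameters described above, the following sketches show how specific columns of the row could be read and exported to a local file, and how data within a version range could be read. The flag combinations are assumptions assembled from the parameter descriptions:

get --pk '["86",6771]' --columns name,uid --max_version 1 -o /tmp/querydata.json

get --pk '["86",6771]' --time_range_start 1626860469000 --time_range_end 1626865270000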

Update data

You can update a row of data in a data table. Alternatively, you can import a JSON configuration file to update a row of data in a data table.

Command syntax

update --pk '[primaryKeyValue, primaryKeyValue]' --attr '[{"c":"attributeColumnName", "v":"attributeColumnValue"}, {"c":"attributeColumnName", "v":"attributeColumnValue", "ts":timestamp}]' --condition condition

The following list describes the parameters of the command.

-p, --pk
Required: Yes
Example: ["86", 6771]
Description: The values of the primary key columns of the data table. The values are included in an array.

Important
The number and types of the primary key columns that you specify must match the actual number and types of the primary key columns in the data table.

--attr
Required: Yes
Example: [{"c":"name", "v":"redchen"}, {"c":"country", "v":"china", "ts":15327798534}]
Description: The settings of the attribute columns of the data table. The value of this parameter is a JSON array. Each attribute column is configured by using the following fields:

  • c: required. The name of the attribute column.

  • v: required. The value of the attribute column.

  • t: optional. The type of the attribute column. Valid values: integer, string, binary, boolean, and double. If you set this field to string, the value of the attribute column is a string encoded in UTF-8. This field is required for an attribute column of the binary type.

  • ts: optional. The timestamp that is used as the version number of the data. The timestamp can be automatically generated or customized. If you do not configure this field, Tablestore automatically generates a timestamp.

--condition
Required: No
Example: ignore
Description: The row existence condition of the conditional update, which determines whether to update the row of data. Default value: ignore. Valid values:

  • ignore: The data is updated regardless of whether the row exists. If the row exists, the existing data is overwritten.

  • exist: The data is updated only if the row exists. The existing data is overwritten.

  • not_exist: The data is updated only if the row does not exist.

For more information, see Conditional updates.

-i, --input
Required: No
Example: /tmp/inputdata.json
Description: The path of the JSON configuration file that is used to update data.

You can also use a configuration file to update data. The command syntax varies based on the operating system.

  • Windows

    update -i D:\\localpath\\filename.json
  • Linux and macOS

    update -i /localpath/filename.json

The following example shows the content of a configuration file:

{
    "PK":{
        "Values":[
            "86",
            6771
        ]
    },
    "Attr":{
        "Values":[
            {
                "C":"age",
                "V":32,
                "TS":1626860801604,
                "IsInt":true
            }
        ]
    }
}

Example

The following sample command provides an example on how to update a row of data in which the value of the first primary key column is "86" and the value of the second primary key column is 6771. The data is updated regardless of whether the row exists. If the row exists, the new data overwrites the existing data.

update --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}, {"c":"country", "v":"china"}]'  --condition ignore
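
Based on the --condition values described above, the following sketch shows how the row could be updated only if it already exists:

update --pk '["86", 6771]' --attr '[{"c":"name", "v":"redchen"}]' --condition exist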

Delete data

You can delete a row of data with the specified primary key.

Command syntax

delete --pk '[primaryKeyValue,primaryKeyValue]'

Example

The following sample command provides an example on how to delete a row of data in which the value of the first primary key column is "86" and the value of the second primary key column is 6771.

delete --pk '["86", 6771]'

Scan data

You can scan a data table to obtain all data or a specified number of rows of data in the data table.

Command syntax

scan --limit limit

The following list describes the parameter of the command.

--limit
Required: No
Example: 10
Description: The maximum number of rows that you want to return. If you do not configure this parameter, all data in the data table is scanned.

Example

The following sample command provides an example on how to scan up to 10 rows of data in a data table.

scan --limit 10
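
As described above, if you omit the --limit parameter, all data in the data table is scanned:

scan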

Export data

You can export data from a data table to a local JSON file.

Command syntax

scan -o /localpath/filename.json -c attributeColumnName,attributeColumnName,attributeColumnName

The following list describes the parameters of the command.

-c, --columns
Required: Yes
Example: uid,name
Description: The set of columns that you want to export. You can specify the names of primary key columns or attribute columns. If you do not specify a column name, all data in the row is exported.

--max_version
Required: No
Example: 1
Description: The maximum number of data versions that can be exported.

--time_range_start
Required: No
Example: 1626865596000
Description: The start timestamp of the version range of data that you want to export. Use this parameter together with --time_range_end. The range includes the start value and excludes the end value.

--time_range_end
Required: No
Example: 1626869196000
Description: The end timestamp of the version range of data that you want to export. Use this parameter together with --time_range_start. The range includes the start value and excludes the end value.

--time_range_specific
Required: No
Example: 1626867396000
Description: The specific version of data that you want to export.

--backward
Required: No
Example: N/A
Description: Specifies that the system sorts the exported data in descending order of primary keys.

-o, --output
Required: Yes
Example: /tmp/mydata.json
Description: The local path of the JSON file to which the query results are exported.

-l, --limit
Required: No
Example: 10
Description: The maximum number of rows that you want to return in the query.

-b, --begin
Required: No
Example: '["86", 6771]'
Description: The primary key at which the export starts. Use this parameter together with -e, --end. The primary key range is a left-closed, right-open interval.

-e, --end
Required: No
Example: '["86", 6775]'
Description: The primary key at which the export ends. Use this parameter together with -b, --begin. The primary key range is a left-closed, right-open interval.

Examples

  • Example 1

    The following sample command provides an example on how to export all data from the current table to the mydata.json local file.

    scan -o /tmp/mydata.json
  • Example 2

    The following sample command provides an example on how to export the data in the uid and name columns of the current table to the mydata.json local file.

    scan -o /tmp/mydata.json -c uid,name
  • Example 3

    The following sample command provides an example on how to export the data within a specified primary key range of the current table to the mydata.json local file. The export starts at the row whose first primary key column is "86" and whose second primary key column is 6771, and ends before the row whose first primary key column is "86" and whose second primary key column is 6775.

    scan -o /tmp/mydata.json -b '["86", 6771]' -e '["86", 6775]'
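  • Example 4 (sketch)

    The following sketch combines other parameters described above to export only the latest version of each value and sort the exported data in descending order of primary keys. The flag combination is an assumption based on the parameter descriptions:

    scan -o /tmp/mydata.json --max_version 1 --backward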

Import data

You can import data from a local JSON file to a data table.

Important

If the path of the local JSON file contains Chinese characters, an error occurs when you import data.

Command syntax

import -i /localpath/filename.json --ignore_version

The following list describes the parameters of the command.

-a, --action
Required: No
Example: put
Description: The mode in which data is imported. Default value: put. Valid values:

  • put: If the row exists, all versions of data in all columns of the existing row are deleted, and a new row of data is written to the data table.

  • update: If the row exists, attribute columns can be added to or deleted from the row, the specified version of data in an attribute column can be deleted, or the existing data in an attribute column can be updated. If the row does not exist, a new row of data is added.

-i, --input
Required: Yes
Example: /tmp/inputdata.json
Description: The path of the local JSON file from which data is imported to the current data table.

--ignore_version
Required: No
Example: N/A
Description: Specifies that the Tablestore CLI ignores timestamp checks and uses the current time as the timestamp.

The following example shows the content of a local JSON file:

{"PK":{"Values":["redchen",0]},"Attr":{"Values":[{"C":"country","V":"china0"},{"C":"name","V":"redchen0"}]}}
{"PK":{"Values":["redchen",1]},"Attr":{"Values":[{"C":"country","V":"china1"},{"C":"name","V":"redchen1"}]}}                              

Examples

  • Example 1

    The following sample command provides an example on how to import data from the mydata.json file to the current data table.

    import -i /tmp/mydata.json
  • Example 2

    The following sample command provides an example on how to import data from the mydata.json file to the current data table with the current time used as the timestamp.

    import -i /tmp/mydata.json --ignore_version
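  • Example 3 (sketch)

    Based on the -a, --action parameter described above, the following sketch shows how data could be imported in update mode instead of the default put mode:

    import -i /tmp/mydata.json -a update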