Tablestore:Read data

Last Updated: Sep 30, 2024

Tablestore provides API operations that allow you to read a single row of data, multiple rows of data at the same time, and data whose primary key value is in a specific range from a data table. When you read a single row of data or multiple rows of data at the same time from a table, you must specify the values of all primary key columns. When you read data whose primary key value is in a specific range from a table, you must specify a range for the values of all primary key columns or the prefix of the primary key. When you read data, you can specify the attribute columns and the number of data versions that you want to return, the time range used to query data, and the filter conditions.

Query methods

Tablestore provides the GetRow, BatchGetRow, and GetRange operations that you can call to read data. Before you read data, select a suitable query method based on the actual query scenario.

Important

If you want to read data from a table that contains an auto-increment primary key column, make sure that you have obtained the values of all primary key columns, including the value of the auto-increment primary key column. For more information, see Configure an auto-increment primary key column. If you did not record the value of the auto-increment primary key column, you can call the GetRange operation and specify a range based on the values of the first primary key column to read the data.
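
For reference, the following minimal sketch shows this approach. It assumes an initialized OTSClient named client, a table named 'myTable' whose first primary key column is uid, and a second, auto-increment primary key column named id; these names are placeholders for illustration.

from tablestore import *

# Assumed schema for illustration: first primary key column uid,
# second (auto-increment) primary key column id. client is an initialized OTSClient.
# Fix uid and span the auto-increment column to find the rows that belong to uid 1.
inclusive_start_primary_key = [('uid', 1), ('id', INF_MIN)]
exclusive_end_primary_key = [('uid', 1), ('id', INF_MAX)]

consumed, next_start_primary_key, row_list, next_token = client.get_range(
    'myTable', Direction.FORWARD,
    inclusive_start_primary_key, exclusive_end_primary_key,
    columns_to_get=[], max_version=1)

for row in row_list:
    print(row.primary_key, row.attribute_columns)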

Query method: Read a single row of data

Description: You can call the GetRow operation to read a single row of data.

Scenario: This method is applicable to scenarios in which the values of all primary key columns of the row that you want to query can be determined and the number of rows that you want to query is small.

Query method: Read multiple rows of data at the same time

Description: You can call the BatchGetRow operation to read multiple rows of data from one or more tables at the same time. The BatchGetRow operation consists of multiple GetRow operations. When you call the BatchGetRow operation, each GetRow operation is constructed in the same way as a standalone GetRow call.

Scenario: This method is applicable to scenarios in which the values of all primary key columns of the rows that you want to query can be determined and the number of rows that you want to query is large, or you want to read data from multiple tables.

Query method: Read data whose primary key value is in a specific range

Description: You can call the GetRange operation to read data whose primary key value is in a specific range. The GetRange operation allows you to read data whose primary key values are in the specified range in a forward or backward direction. You can also specify the number of rows to read. If the range is large and the number of scanned rows or the volume of scanned data exceeds the upper limit, the scan stops, and the rows that were read and the primary key of the next row are returned. You can initiate a request that starts from where the last operation stopped and reads the remaining rows based on the primary key of the next row returned by the previous operation.

Scenario: This method is applicable to scenarios in which the range of the values of all primary key columns or the prefix of the primary key of the rows that you want to query can be determined.

Important

If you cannot determine the prefix of the primary key, you can specify the start primary key column whose data is of the INF_MIN type and the end primary key column whose data is of the INF_MAX type to scan all data in the table. This consumes a large amount of computing resources. Proceed with caution.
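
For reference, the following minimal sketch shows how such a full-table scan can be written with INF_MIN and INF_MAX virtual points. The table name 'myTable', the primary key columns gid and uid, and an initialized OTSClient named client are assumptions for illustration. Use this pattern with caution because it scans every row in the table.

from tablestore import *

# client is an initialized OTSClient (see Prerequisites). The INF_MIN/INF_MAX virtual
# points cover the entire primary key space, so this request scans the whole table.
inclusive_start_primary_key = [('gid', INF_MIN), ('uid', INF_MIN)]
exclusive_end_primary_key = [('gid', INF_MAX), ('uid', INF_MAX)]

all_rows = []
next_pk = inclusive_start_primary_key
# Keep paging until next_start_primary_key is None; each call consumes read CUs.
while next_pk is not None:
    consumed, next_pk, row_list, next_token = client.get_range(
        'myTable', Direction.FORWARD,
        next_pk, exclusive_end_primary_key,
        columns_to_get=[], limit=100, max_version=1)
    all_rows.extend(row_list)
print('Total rows:', len(all_rows))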

Prerequisites

  • An OTSClient instance is initialized. For more information, see Initialize an OTSClient instance.

  • A data table is created and data is written to the data table.

Read a single row of data

You can call the GetRow operation to read a single row of data. After you call the GetRow operation, one of the following results may be returned:

  • If the row exists, the primary key columns and attribute columns of the row are returned.

  • If the row does not exist, no row is returned and no error is reported.
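
Because a missing row is not an error, check the returned row before you use it. The following is a minimal sketch; the table name 'myTable', the primary key columns gid and uid, and an initialized OTSClient named client are assumptions, and the sketch assumes that return_row is empty (None) when the row does not exist.

from tablestore import *

# client is an initialized OTSClient; 'myTable' and the primary key columns are placeholders.
primary_key = [('gid', 1), ('uid', 101)]

consumed, return_row, next_token = client.get_row('myTable', primary_key, max_version=1)

if return_row is None:
    # The row does not exist. No error is reported in this case.
    print('Row not found')
else:
    print('Primary key: %s' % return_row.primary_key)
    print('Attributes: %s' % return_row.attribute_columns)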

API operation

"""
Description: This operation reads a single row of data. 
table_name: the name of the table. 
primary_key: the primary key. Type: dict. 
columns_to_get: optional. The names of the columns that you want to return. Type: list. If you do not specify this parameter, all columns are returned. 
column_filter: optional. The filter conditions for columns. Only rows that meet the conditions are returned. 
max_version: optional. The maximum number of data versions that can be returned.
time_range: optional. The range of versions or a specific version that you want to return. You must configure at least one of the max_version and time_range parameters.
Response: the number of capacity units (CUs) that are consumed by the operation, and the primary key columns and attribute columns that are returned. 
consumed: the number of CUs that are consumed by the operation. The consumed parameter is an instance of the tablestore.metadata.CapacityUnit class. 
return_row: the row data that is returned, including the primary key and attribute columns. Type: list. Example: [('PK0',value0), ('PK1',value1)]. 
next_token: the column from which the next wide-column read operation starts. Type: binary. 
Example:
    primary_key = [('gid',1), ('uid',101)]
    columns_to_get = ['name', 'address', 'age']
    consumed, return_row, next_token = client.get_row('myTable', primary_key, columns_to_get)
"""
def get_row(self, table_name, primary_key, columns_to_get=None,
            column_filter=None, max_version=1, time_range=None,
            start_column=None, end_column=None, token=None,
            transaction_id=None):

Parameters

Parameter

Description

table_name

The name of the data table.

primary_key

The primary key of the row. The value of this parameter consists of the name, type, and value of each primary key column.

Important

The number and types of primary key columns that you specify must be the same as the actual number and types of primary key columns in the table.

columns_to_get

The columns that you want to return. You can specify the names of primary key columns or attribute columns.

  • If you do not specify a column, all data in the row is returned.

  • If you specify columns but the row does not contain the specified columns, the return value is null. If the row contains some of the specified columns, the data in those columns of the row is returned.

Note
  • By default, Tablestore returns the data from all columns of a row when you query the row. You can specify the columns_to_get parameter to return specific columns. For example, if you specify col0 and col1 as the columns that you want to return, only the values of col0 and col1 are returned.

  • If you configure the columns_to_get and column_filter parameters, Tablestore queries the columns that are specified by columns_to_get, and then returns the rows that meet the filter conditions.

column_filter

The filter that you want to use to filter the query results on the server side. Only rows that meet the filter conditions are returned. For more information, see Filter.

Note

If you configure the columns_to_get and column_filter parameters, Tablestore queries the columns that are specified by columns_to_get, and then returns the rows that meet the filter conditions.

max_version

The maximum number of data versions that can be returned.

Important

You must configure at least one of max_version and time_range.

  • If you specify only the max_version parameter, data of the specified number of versions is returned from the most recent data entry to the earliest data entry.

  • If you specify only the time_range parameter, all data whose versions are in the specified range or data of the specified version is returned.

  • If you configure the max_version and time_range parameters, data of the specified number of versions in the specified version range is returned from the most recent data entry to the earliest data entry.

time_range

The range of versions or a specific version that you want to return. For more information, see TimeRange.

Important

You must configure at least one of max_version and time_range.

  • If you specify only the max_version parameter, data of the specified number of versions is returned from the most recent data entry to the earliest data entry.

  • If you specify only the time_range parameter, all data whose versions are in the specified range or data of the specified version is returned.

  • If you configure the max_version and time_range parameters, data of the specified number of versions in the specified version range is returned from the most recent data entry to the earliest data entry.

  • To query data whose versions are in a specific range, you must specify the start_time and end_time parameters. The start_time parameter specifies the start timestamp. The end_time parameter specifies the end timestamp. The specified range is a left-closed, right-open interval that is in the [start_time, end_time) format.

  • To query data of a specific version, you must specify the specific_time parameter. The specific_time parameter specifies a specific timestamp.

Only one of specific_time and [start_time, end_time) is required.

Valid values of the time_range parameter: 0 to INT64.MAX. Unit: millisecond.

transaction_id

The ID of the local transaction. If you want to use the local transaction feature to read data, you must configure this parameter.
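
The max_version and time_range parameters described above can be combined in a get_row call. The following is a minimal sketch; the table name, primary key, and timestamps are placeholders, and the single-timestamp form for a specific version is an assumption based on the specific_time description above.

# client is an initialized OTSClient; 'myTable' and the primary key columns are placeholders.
primary_key = [('gid', 1), ('uid', 101)]

# Version range: a left-closed, right-open interval [start_time, end_time) in milliseconds.
consumed, return_row, next_token = client.get_row(
    'myTable', primary_key, max_version=2,
    time_range=(1557125059000, 1557129059000))

# Specific version (assumed form based on the specific_time description above):
# pass a single timestamp instead of a (start_time, end_time) tuple.
consumed, return_row, next_token = client.get_row(
    'myTable', primary_key, time_range=1557125059000)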

Examples

The following sample code provides an example on how to read a row of data from a table:

# The first primary key column is gid and the value is the integer 1. The second primary key column is uid and the value is the integer 101. 
primary_key = [('gid', 1), ('uid', 101)]

# Specify the columns that you want to read. In this example, the columns are set to attribute columns name, growth, and type. If you leave the columns_to_get parameter empty, all attribute columns are returned. 
columns_to_get = ['name', 'growth', 'type']

# Specify a filter for the columns. In this example, the row in which the value of the growth column is not 0.9 and the value of the name column is Hangzhou is returned. 
cond = CompositeColumnCondition(LogicalOperator.AND)
cond.add_sub_condition(SingleColumnCondition("growth", 0.9, ComparatorType.NOT_EQUAL))
cond.add_sub_condition(SingleColumnCondition("name", 'Hangzhou', ComparatorType.EQUAL))

try:
    # Call the get_row operation to query data. 
    # Specify the table name. The last value 1 specifies that only one version of data is returned. 
    consumed, return_row, next_token = client.get_row('<table_name>', primary_key, columns_to_get, cond, 1)
    print('Read succeed, consume %s read cu.' % consumed.read)
    print('Value of primary key: %s' % return_row.primary_key)
    print('Value of attribute: %s' % return_row.attribute_columns)
    for att in return_row.attribute_columns:
        # Display the key, value, and version of each column. 
        print('name:%s\tvalue:%s\ttimestamp:%d' % (att[0], att[1], att[2]))
# In most cases, client exceptions are caused by parameter errors or network exceptions. 
except OTSClientError as e:
    print('get row failed, http_status:%d, error_message:%s' % (e.get_http_status(), e.get_error_message()))
# In most cases, server exceptions are caused by parameter or throttling errors. 
except OTSServiceError as e:
    print('get row failed, http_status:%d, error_code:%s, error_message:%s, request_id:%s' % (e.get_http_status(), e.get_error_code(), e.get_error_message(), e.get_request_id()))

For the detailed sample code, visit GetRow@GitHub.

Read multiple rows of data at the same time

You can call the BatchGetRow operation to read multiple rows of data from one or more tables at a time. The BatchGetRow operation consists of multiple GetRow operations. When you call the BatchGetRow operation, each GetRow operation is constructed in the same way as a standalone GetRow call.

If you call the BatchGetRow operation, each GetRow operation is separately performed, and Tablestore separately returns the response to each GetRow operation.

Usage notes

  • When you call the BatchGetRow operation to read multiple rows at a time, some rows may fail to be read. If this happens, Tablestore does not return an exception, but returns a BatchGetRowResponse in which the information about the failed rows is included. Therefore, when you call the BatchGetRow operation, you must check the return values to determine whether data is successfully read from each row, as shown in the retry sketch after this list.

  • The BatchGetRow operation uses the same parameter settings for all rows. For example, if the ColumnsToGet parameter is set to [colA], only the value of the colA column is read from all rows.

  • You can call the BatchGetRow operation to read a maximum of 100 rows at a time.
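
The following is a minimal retry sketch for failed rows. The table name 'myTable', the primary keys, and the column names are placeholders, and the sketch assumes that the items returned by get_result_by_table follow the order of the primary keys in the request.

from tablestore import *

# client is an initialized OTSClient; 'myTable', the primary keys, and the columns are placeholders.
rows_to_get = [[('gid', i), ('uid', i + 1)] for i in range(3)]
columns_to_get = ['name', 'address']

request = BatchGetRowRequest()
request.add(TableInBatchGetRowItem('myTable', rows_to_get, columns_to_get, None, 1))
result = client.batch_get_row(request)

# Assumption: results are returned in the same order as the requested primary keys,
# so failed rows can be paired with the primary keys that were requested.
failed_primary_keys = []
for primary_key, item in zip(rows_to_get, result.get_result_by_table('myTable')):
    if item.is_ok:
        if item.row is not None:
            print('Read succeed, PrimaryKey: %s' % item.row.primary_key)
    else:
        print('Read failed, error code: %s, error message: %s' % (item.error_code, item.error_message))
        failed_primary_keys.append(primary_key)

# Retry only the rows that failed.
if failed_primary_keys:
    retry_request = BatchGetRowRequest()
    retry_request.add(TableInBatchGetRowItem('myTable', failed_primary_keys, columns_to_get, None, 1))
    retry_result = client.batch_get_row(retry_request)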

API operation

"""
Description: This operation reads multiple rows of data at the same time. 
request = BatchGetRowRequest()
request.add(TableInBatchGetRowItem('myTable0', primary_keys, columns_to_get=None, column_filter=None))
request.add(TableInBatchGetRowItem('myTable1', primary_keys, columns_to_get=None, column_filter=None))
request.add(TableInBatchGetRowItem('myTable2', primary_keys, columns_to_get=None, column_filter=None))
request.add(TableInBatchGetRowItem('myTable3', primary_keys, columns_to_get=None, column_filter=None))
response = client.batch_get_row(request)
response: the results returned. The response parameter is an instance of the tablestore.metadata.BatchGetRowResponse class. 
"""
def batch_get_row(self, request):

Parameters

For more information about parameters, see the Parameters table of the "Read a single row of data" section.

Examples

The following sample code provides an example on how to read three rows of data at the same time from multiple tables:

# Specify the columns that you want to return. 
columns_to_get = ['name', 'mobile', 'address', 'age']
# Read three rows of data. 
rows_to_get = []
for i in range(0, 3):
    primary_key = [('gid', i), ('uid', i + 1)]
    rows_to_get.append(primary_key)

# Specify a filter for the columns. In this example, the filter conditions are met if the value of the name column is John and the value of the address column is China. 
cond = CompositeColumnCondition(LogicalOperator.AND)
cond.add_sub_condition(SingleColumnCondition("name", "John", ComparatorType.EQUAL))
cond.add_sub_condition(SingleColumnCondition("address", 'China', ComparatorType.EQUAL))

# Construct a request to read multiple rows of data. 
request = BatchGetRowRequest()

# Specify the rows that you want to read from a table. The last value 1 specifies that the latest version of data is read. 
request.add(TableInBatchGetRowItem('<table_name1>', rows_to_get, columns_to_get, cond, 1))

# Specify the rows that you want to read from another table. 
request.add(TableInBatchGetRowItem('<table_name2>', rows_to_get, columns_to_get, cond, 1))

try:
    result = client.batch_get_row(request)
    print('Result status: %s' % (result.is_all_succeed()))

    table_result_0 = result.get_result_by_table('<table_name1>')
    table_result_1 = result.get_result_by_table('<table_name2>')
    print('Check first table\'s result:')
    for item in table_result_0:
        if item.is_ok:
            print('Read succeed, PrimaryKey: %s, Attributes: %s' % (item.row.primary_key, item.row.attribute_columns))
        else:
            print('Read failed, error code: %s, error message: %s' % (item.error_code, item.error_message))
    print('Check second table\'s result:')
    for item in table_result_1:
        if item.is_ok:
            print('Read succeed, PrimaryKey: %s, Attributes: %s' % (item.row.primary_key, item.row.attribute_columns))
        else:
            print('Read failed, error code: %s, error message: %s' % (item.error_code, item.error_message))
# In most cases, client exceptions are caused by parameter errors or network exceptions. 
except OTSClientError as e:
    print('get row failed, http_status:%d, error_message:%s' % (e.get_http_status(), e.get_error_message()))
# In most cases, server exceptions are caused by parameter or throttling errors. 
except OTSServiceError as e:
    print('get row failed, http_status:%d, error_code:%s, error_message:%s, request_id:%s' % (e.get_http_status(), e.get_error_code(), e.get_error_message(), e.get_request_id()))

For the detailed sample code, visit BatchGetRow@GitHub.

Read data whose primary key value is in a specific range

You can call the GetRange operation to read data whose primary key values are in the specified range.

The GetRange operation allows you to read data whose primary key values are in the specified range in a forward or backward direction. You can also specify the number of rows to read. If the range is large and the number of scanned rows or the volume of scanned data exceeds the upper limit, the scan stops, and the rows that are read and information about the primary key of the next row are returned. You can initiate a request to start from where the last operation left off and read the remaining rows based on the information about the primary key of the next row returned by the previous operation.

Note

In Tablestore tables, all rows are sorted by the primary key. The primary key of a table sequentially consists of all primary key columns. Therefore, the rows are not sorted based on a specific primary key column.

Usage notes

The GetRange operation follows the leftmost matching principle. Tablestore compares values in sequence from the first primary key column to the last primary key column to read data whose primary key values are in the specified range. For example, the primary key of a data table consists of the following primary key columns: PK1, PK2, and PK3. When data is read, Tablestore first determines whether the PK1 value of a row is in the range that is specified for the first primary key column. If the PK1 value of a row is in the range, Tablestore returns the row without checking whether the values of the other primary key columns are in the ranges that are specified for those columns. If the PK1 value of a row is beyond the range, Tablestore does not return the row regardless of whether the values of the other primary key columns are in the specified ranges.
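
As an illustration of the leftmost matching principle, the following hedged sketch assumes a table named 'myTable' whose primary key consists of PK1 (STRING) and PK2 (INTEGER) and an initialized OTSClient named client; the comments describe which rows the specified range returns.

from tablestore import *

# The composite primary key is compared column by column from the left:
#   ('A', 100) -> returned: PK1 equals the start boundary 'A', and 100 is greater than INF_MIN.
#   ('B', 1)   -> returned: 'B' lies strictly between 'A' and 'C', so PK2 is not examined.
#   ('C', 0)   -> not returned: ('C', 0) is not smaller than the end boundary ('C', INF_MIN),
#                 and the range is a left-closed, right-open interval.
inclusive_start_primary_key = [('PK1', 'A'), ('PK2', INF_MIN)]
exclusive_end_primary_key = [('PK1', 'C'), ('PK2', INF_MIN)]

consumed, next_start_primary_key, row_list, next_token = client.get_range(
    'myTable', Direction.FORWARD,
    inclusive_start_primary_key, exclusive_end_primary_key,
    columns_to_get=[], max_version=1)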

If one of the following conditions is met, the GetRange operation may stop and return data:

  • The amount of scanned data reaches 4 MB.

  • The number of scanned rows reaches 5,000.

  • The number of returned rows reaches the upper limit.

  • The read throughput is insufficient to read the next row of data because all reserved read throughput is consumed.

Each GetRange call scans data once. If the size of data that you want to scan by calling the GetRange operation is large, the scanning stops when the number of scanned rows reaches 5,000 or the size of scanned data reaches 4 MB. Tablestore does not return the remaining data that meets the query conditions. You can use the paging method to obtain the remaining data that meets the query conditions.
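
One way to apply this paging method is to wrap the next_start_primary_key loop in a small helper, as in the following sketch. The helper name iter_range and its parameters are illustrative and are not part of the SDK.

from tablestore import *

# Illustrative helper, not part of the SDK: page through a range until
# next_start_primary_key is None and yield rows one by one.
def iter_range(client, table_name, start_pk, end_pk, columns_to_get=None, limit=None):
    next_pk = start_pk
    while next_pk is not None:
        consumed, next_pk, row_list, next_token = client.get_range(
            table_name, Direction.FORWARD, next_pk, end_pk,
            columns_to_get=columns_to_get, limit=limit, max_version=1)
        for row in row_list:
            yield row

# Usage, assuming a table 'myTable' with primary key columns gid and uid:
# for row in iter_range(client, 'myTable',
#                       [('gid', 1), ('uid', INF_MIN)],
#                       [('gid', 5), ('uid', INF_MAX)],
#                       columns_to_get=[], limit=100):
#     print(row.primary_key, row.attribute_columns)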

API operation

"""
Description: This operation reads rows whose primary key value is in a specific range. 
table_name: the name of the data table. 
direction: the order in which you want to sort the rows in the response. Type: string. Valid values: FORWARD and BACKWARD. 
inclusive_start_primary_key: the primary key from which the read operation starts. If the specified row exists, the row is included in the response. 
exclusive_end_primary_key: the primary key at which the read operation ends. Regardless of whether a row that contains the end primary key exists, the row is excluded from the response. 
columns_to_get: optional. The names of the columns that you want to return. Type: list. If you do not specify this parameter, all columns are returned. 
limit: optional. The maximum number of rows that can be returned. If you do not specify this parameter, all rows are returned. 
column_filter: optional. The filter conditions for columns. Only rows that meet the conditions are returned. 
max_version: optional. The maximum number of data versions that can be returned. You must specify at least one of the max_version and time_range parameters. 
time_range: optional. The range of versions or a specific version that you want to return. You must specify at least one of the max_version and time_range parameters. 
start_column: optional. The column from which the wide-column read operation starts. 
end_column: optional. The column at which the wide-column read operation ends. 
token: optional. The start column for the current wide-column read operation. The value of this parameter is returned by the previous wide-column read operation, and encoded as binary data. 

Response: the results that meet the conditions. 
consumed: the number of CUs that are consumed by the operation. The consumed parameter is an instance of the tablestore.metadata.CapacityUnit class. 
next_start_primary_key: the primary key from which you want to start the next GetRange operation. Type: dict. 
row_list: the returned rows of data. Format: [Row, ...].   
"""
def get_range(self, table_name, direction,
             inclusive_start_primary_key,
             exclusive_end_primary_key,
             columns_to_get=None,
             limit=None,
             column_filter=None,
             max_version=None,
             time_range=None,
             start_column=None,
             end_column=None,
             token=None):

Parameters

Parameter

Description

table_name

The name of the data table.

direction

The order in which you want to sort the rows in the response.

  • If you set this parameter to FORWARD, the value of the inclusive_start_primary_key parameter must be smaller than the value of the exclusive_end_primary_key parameter, and the rows in the response are sorted in ascending order of primary key values.

  • If you set this parameter to BACKWARD, the value of the inclusive_start_primary_key parameter must be greater than the value of the exclusive_end_primary_key parameter, and the rows in the response are sorted in descending order of primary key values.

For example, a table has two primary key values A and B, and Value A is smaller than Value B. If you set the direction parameter to FORWARD and specify a [A, B) range for the table, Tablestore returns the rows whose primary key value is greater than or equal to Value A but smaller than Value B in ascending order from Value A to Value B. If you set the direction parameter to BACKWARD and specify a [B, A) range for the table, Tablestore returns the rows whose primary key value is smaller than or equal to Value B and greater than Value A in descending order from Value B to Value A.

inclusive_start_primary_key and exclusive_end_primary_key

The start primary key and the end primary key of the range to read. The start and end primary keys must be valid primary keys or virtual points that consist of data of the INF_MIN and INF_MAX types. The number of columns in each virtual point must be the same as the number of primary key columns.

INF_MIN specifies an infinitely small value. All values of other types are greater than INF_MIN. INF_MAX specifies an infinitely great value. All values of other types are smaller than INF_MAX.

  • The inclusive_start_primary_key parameter specifies the start primary key. If a row that contains the start primary key exists, the row of data is returned.

  • The exclusive_end_primary_key parameter specifies the end primary key. Regardless of whether a row that contains the end primary key exists, the row of data is not returned.

The rows in a table are sorted in ascending order based on the primary key values. The range that is used to read data is a left-closed, right-open interval. If data is read in the forward direction, the rows whose primary key value is greater than or equal to the start primary key value but smaller than the end primary key value are returned.

limit

The maximum number of rows that you want to return. The value of this parameter must be greater than 0.

Tablestore stops an operation when the maximum number of rows that can be returned in the forward or backward direction is reached, even if some rows in the specified range are not returned. You can use the value of the next_start_primary_key parameter returned in the response to read data in the next request.

columns_to_get

The columns that you want to return. You can specify the names of primary key columns or attribute columns.

  • If you do not specify a column, all data in the row is returned.

  • If you specify columns but the row does not contain the specified columns, the return value is null. If the row contains some of the specified columns, the data in those columns of the row is returned.

Note
  • By default, Tablestore returns the data from all columns of a row when you query the row. You can specify the columns_to_get parameter to return specific columns. For example, if you specify col0 and col1 as the columns that you want to return, only the values of col0 and col1 are returned.

  • If a row is in the specified range that you want to read based on the primary key values but does not contain the specified columns that you want to return, the response excludes the row.

  • If you configure the columns_to_get and column_filter parameters, Tablestore queries the columns that are specified by columns_to_get, and then returns the rows that meet the filter conditions.

max_version

The maximum number of data versions that can be returned.

Important

You must configure at least one of max_version and time_range.

  • If you specify only the max_version parameter, data of the specified number of versions is returned from the most recent data entry to the earliest data entry.

  • If you specify only the time_range parameter, all data whose versions are in the specified range or data of the specified version is returned.

  • If you configure the max_version and time_range parameters, data of the specified number of versions in the specified version range is returned from the most recent data entry to the earliest data entry.

time_range

The range of versions or a specific version that you want to return. For more information, see TimeRange.

Important

You must configure at least one of max_version and time_range.

  • If you specify only the max_version parameter, data of the specified number of versions is returned from the most recent data entry to the earliest data entry.

  • If you specify only the time_range parameter, all data whose versions are in the specified range or data of the specified version is returned.

  • If you configure the max_version and time_range parameters, data of the specified number of versions in the specified version range is returned from the most recent data entry to the earliest data entry.

  • To query data whose versions are in a specific range, you must specify the start_time and end_time parameters. The start_time parameter specifies the start timestamp. The end_time parameter specifies the end timestamp. The specified range is a left-closed, right-open interval that is in the [start_time, end_time) format.

  • To query data of a specific version, you must specify the specific_time parameter. The specific_time parameter specifies a specific timestamp.

Only one of specific_time and [start_time, end_time) is required.

Valid values of the time_range parameter: 0 to INT64.MAX. Unit: millisecond.

column_filter

The filter that you want to use to filter the query results on the server side. Only rows that meet the filter conditions are returned. For more information, see Filter.

Note

If you configure the columns_to_get and column_filter parameters, Tablestore queries the columns that are specified by columns_to_get, and then returns the rows that meet the filter conditions.

next_start_primary_key

The start primary key of the next read request. The value of the next_start_primary_key parameter can be used to determine whether all data is read.

  • If the value of the next_start_primary_key parameter is not empty in the response, the value can be used as the value of the start primary key for the next GetRange operation.

  • If the value of the next_start_primary_key parameter is empty in the response, all data within the range is returned.

Examples

In the following example, rows whose first primary key column (gid) value is in the specified range are read in the forward direction, with the second primary key column (uid) spanning from INF_MIN to INF_MAX. The system then checks whether the next_start_primary_key parameter in the response is empty. If it is not, the system calls the GetRange operation again until next_start_primary_key is empty.

# Specify the start primary key information. 
inclusive_start_primary_key = [('gid', 1), ('uid', INF_MIN)]

# Specify the end primary key information. 
exclusive_end_primary_key = [('gid', 5), ('uid', INF_MAX)]

# Query all columns. 
columns_to_get = []

# Set the limit parameter to 90 to return a maximum of 90 rows of data. If a total of 100 rows meet the query conditions, the number of rows that are returned in the first read operation ranges from 0 to 90. The value of the next_start_primary_key parameter is not None. 
limit = 90

# Specify a filter for the columns. In this example, the rows in which the value of the address column is China and the value of the age column is smaller than 50 meet the filter conditions. 
cond = CompositeColumnCondition(LogicalOperator.AND)
# Specify the pass_if_missing parameter to determine whether a row meets the filter conditions if the row does not contain a specific column. 
# If you do not specify the pass_if_missing parameter or set the parameter to True, a row meets the filter conditions if the row does not contain a specific column. 
# If you set the pass_if_missing parameter to False, a row does not meet the filter conditions if the row does not contain a specific column. 
cond.add_sub_condition(SingleColumnCondition("address", 'China', ComparatorType.EQUAL, pass_if_missing = False))
cond.add_sub_condition(SingleColumnCondition("age", 50, ComparatorType.LESS_THAN, pass_if_missing = False))

try:
    # Call the GetRange operation. 
    consumed, next_start_primary_key, row_list, next_token = client.get_range(
        '<table_name>', Direction.FORWARD,
        inclusive_start_primary_key, exclusive_end_primary_key,
        columns_to_get,
        limit,
        column_filter=cond,
        max_version=1,
        time_range = (1557125059000, 1557129059000) # Return data whose version timestamps are greater than or equal to 1557125059000 and smaller than 1557129059000. 
    )

    all_rows = []
    all_rows.extend(row_list)

    # If the next_start_primary_key parameter is not empty, continue to read data. 
    while next_start_primary_key is not None:
        inclusive_start_primary_key = next_start_primary_key
        consumed, next_start_primary_key, row_list, next_token = client.get_range(
            '<table_name>', Direction.FORWARD,
            inclusive_start_primary_key, exclusive_end_primary_key,
            columns_to_get, limit,
            column_filter=cond,
            max_version=1
        )
        all_rows.extend(row_list)

    # Display the primary key columns and attribute columns. 
    for row in all_rows:
        print(row.primary_key, row.attribute_columns)
    print('Total rows: ', len(all_rows))
# In most cases, client exceptions are caused by parameter errors or network exceptions. 
except OTSClientError as e:
    print('get row failed, http_status:%d, error_message:%s' % (e.get_http_status(), e.get_error_message()))
# In most cases, server exceptions are caused by parameter or throttling errors. 
except OTSServiceError as e:
    print('get row failed, http_status:%d, error_code:%s, error_message:%s, request_id:%s' % (e.get_http_status(), e.get_error_code(), e.get_error_message(), e.get_request_id()))

For the detailed sample code, visit GetRange@GitHub.
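
The preceding example reads data in the FORWARD direction. For reference, the following minimal sketch reads data in the BACKWARD direction; the table name 'myTable', the primary key columns gid and uid, and an initialized OTSClient named client are assumptions for illustration.

from tablestore import *

# For a BACKWARD read, the start primary key must be greater than the end primary key,
# and rows are returned in descending order of primary key values.
inclusive_start_primary_key = [('gid', 5), ('uid', INF_MAX)]
exclusive_end_primary_key = [('gid', 1), ('uid', INF_MIN)]

consumed, next_start_primary_key, row_list, next_token = client.get_range(
    'myTable', Direction.BACKWARD,
    inclusive_start_primary_key, exclusive_end_primary_key,
    columns_to_get=[], limit=100, max_version=1)

for row in row_list:
    print(row.primary_key, row.attribute_columns)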

References

  • If you want to use indexes to accelerate data queries, you can use the secondary index or search index feature. For more information, see Secondary index or Search index.

  • If you want to visualize data in a table, you can connect the table to DataV or Grafana. For more information, see Data visualization tools.

  • If you want to download data from a table to a local file, you can use DataX or the Tablestore CLI. For more information, see Download data in Tablestore to a local file.

  • If you want to compute and analyze data in a table, you can use the SQL query feature of Tablestore. For more information, see Overview.

    Note

    You can also use compute engines such as MaxCompute, Spark, Hive, HadoopMR, Function Compute, and Flink to compute and analyze data in tables. For more information, see Overview.